Degenerate energy levels

This article is about different quantum states having the same energy. For other uses, see Degeneracy. "Quantum degeneracy" redirects here; it sometimes refers to degenerate matter.

In quantum mechanics, an energy level is said to be degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same eigenvalue. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy.

Degenerate states in a quantum system

Effect of degeneracy on the measurement of energy

In the absence of degeneracy, if a measured value of energy of a quantum system is determined, the corresponding state of the system is assumed to be known, since only one eigenstate corresponds to each energy eigenvalue. However, if the Hamiltonian \hat{H} has a degenerate eigenvalue E_n of degree g_n, the eigenstates associated with it form a vector subspace of dimension g_n. In such a case, several final states can be associated with the same result E_n, all of which are linear combinations of the g_n orthonormal eigenvectors |E_{n,i}\rangle. In this case, the probability that the energy measured for a system in the state |\psi\rangle will yield the value E_n is given by the sum of the probabilities of finding the system in each of the states of this basis, i.e.
P(E_n) = \sum_{i=1}^{g_n}|\langle E_{n,i}|\psi\rangle|^2

Degeneracy in different dimensions

Degeneracy in one dimension

In several cases, analytic results can be obtained more easily in the study of one-dimensional systems. For a quantum particle with a wave function |\psi\rangle moving in a one-dimensional potential V(x), the time-independent Schrödinger equation can be written as

-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\psi = E\psi

Since this is an ordinary differential equation, there are at most two independent eigenfunctions for a given energy E, so that the degree of degeneracy never exceeds two. It can be proved that in one dimension, there are no degenerate bound states for normalizable wave functions. A sufficient condition on a piecewise continuous potential V and the energy E is the existence of two real numbers M, x_0 with M \neq 0 such that \forall x > x_0 we have V(x) - E \geq M^2.[1] In particular, V is bounded from below in this criterion.

Degeneracy in two-dimensional quantum systems

Particle in a rectangular plane

Consider a free particle in a plane of dimensions L_x and L_y bounded by impenetrable walls. The time-independent Schrödinger equation for this system with wave function |\psi\rangle can be written as

-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2}\right) = E\psi

The permitted energy values are

E_{n_x,n_y} = \frac{\pi^2\hbar^2}{2m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2}\right)

The normalized wave function is

\psi_{n_x,n_y}(x,y) = \frac{2}{\sqrt{L_x L_y}}\sin\left(\frac{n_x\pi x}{L_x}\right)\sin\left(\frac{n_y\pi y}{L_y}\right)

where n_x, n_y = 1, 2, 3, \ldots. So, two quantum numbers n_x and n_y are required to describe the energy eigenvalues, and the lowest energy of the system is given by

E_{1,1} = \pi^2\frac{\hbar^2}{2m}\left(\frac{1}{L_x^2} + \frac{1}{L_y^2}\right)

For some commensurate ratios of the two lengths L_x and L_y, certain pairs of states are degenerate.
If L_x/L_y = p/q, where p and q are integers, the states (n_x, n_y) and (pn_y/q, qn_x/p) (whenever these are integers) have the same energy and so are degenerate to each other.

Particle in a square box

[Figure: Degrees of degeneracy of different levels for a particle in a square box.]

In this case, the dimensions of the box are L_x = L_y = L and the energy eigenvalues are given by

E_{n_x,n_y} = \frac{\pi^2\hbar^2}{2mL^2}\left(n_x^2 + n_y^2\right)

Since n_x and n_y can be interchanged without changing the energy, each energy level is at least doubly degenerate when n_x and n_y are different. Degenerate states are also obtained when the sums of squares of quantum numbers corresponding to different states are the same. For example, the three states (n_x = 7, n_y = 1), (n_x = 1, n_y = 7) and (n_x = n_y = 5) all have E = 50\frac{\pi^2\hbar^2}{2mL^2} and constitute a degenerate set.

Finding a unique eigenbasis in case of degeneracy

If two operators \hat{A} and \hat{B} commute, i.e. [\hat{A},\hat{B}] = 0, then for every eigenvector |\psi\rangle of \hat{A}, \hat{B}|\psi\rangle is also an eigenvector of \hat{A} with the same eigenvalue. However, if this eigenvalue, say \lambda, is degenerate, it can only be said that \hat{B}|\psi\rangle belongs to the eigenspace E_\lambda of \hat{A}, which is said to be globally invariant under the action of \hat{B}. For two commuting observables A and B, one can construct an orthonormal basis of the state space with eigenvectors common to the two operators. However, if \lambda is a degenerate eigenvalue of \hat{A}, then E_\lambda is an eigensubspace of \hat{A} that is invariant under the action of \hat{B}, so the representation of \hat{B} in the eigenbasis of \hat{A} is not diagonal but block diagonal, i.e. the degenerate eigenvectors of \hat{A} are not, in general, eigenvectors of \hat{B}. However, it is always possible to choose, in every degenerate eigensubspace of \hat{A}, a basis of eigenvectors common to \hat{A} and \hat{B}.
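The square-box degeneracies above are easy to check by brute force. A minimal sketch (energies in units of \pi^2\hbar^2/2mL^2; the function name and quantum-number cutoff are illustrative choices):

```python
from collections import defaultdict

def square_box_levels(n_max):
    """Group square-box states (n_x, n_y) by n_x**2 + n_y**2, which is the
    energy in units of pi^2 hbar^2 / (2 m L^2)."""
    levels = defaultdict(list)
    for nx in range(1, n_max + 1):
        for ny in range(1, n_max + 1):
            levels[nx**2 + ny**2].append((nx, ny))
    return dict(levels)

levels = square_box_levels(10)
# The level at E = 50 units collects (1,7), (5,5) and (7,1): a threefold
# degenerate set, not fully explained by the n_x <-> n_y exchange symmetry.
print(sorted(levels[50]))
```

Scanning the resulting dictionary also shows the generic pattern: levels with n_x = n_y are non-degenerate, while all others come at least in exchange-related pairs.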
Choosing a complete set of commuting observables

If a given observable \hat{A} is non-degenerate, there exists a unique basis formed by its eigenvectors. On the other hand, if one or several eigenvalues of \hat{A} are degenerate, specifying an eigenvalue is not sufficient to characterize a basis vector. If, by choosing an observable \hat{B} which commutes with \hat{A}, it is possible to construct an orthonormal basis of eigenvectors common to \hat{A} and \hat{B} that is unique for each of the possible pairs of eigenvalues {a,b}, then \hat{A} and \hat{B} are said to form a complete set of commuting observables. However, if a unique set of eigenvectors can still not be specified for at least one of the pairs of eigenvalues, a third observable \hat{C} which commutes with both \hat{A} and \hat{B} can be found such that the three form a complete set of commuting observables.

Degenerate energy eigenstates and the parity operator

The parity operator is defined by its action in the |r\rangle representation of changing r to -r, i.e.

\langle r|P|\psi\rangle = \psi(-r)

The eigenvalues of P can be shown to be limited to \pm 1, which are both degenerate eigenvalues in an infinite-dimensional state space. An eigenvector of P with eigenvalue +1 is said to be even, while one with eigenvalue -1 is said to be odd. Now, an even operator \hat{A} is one that satisfies

\hat{A} = P\hat{A}P

while an odd operator \hat{B} is one that satisfies

P\hat{B} + \hat{B}P = 0

Since the square of the momentum operator \hat{p}^2 is even, if the potential V(r) is even, the Hamiltonian \hat{H} is said to be an even operator. In that case, if each of its eigenvalues is non-degenerate, each eigenvector is necessarily an eigenstate of P, and therefore it is possible to look for the eigenstates of \hat{H} among even and odd states.
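That last point, that the eigenstates of an even Hamiltonian with a non-degenerate spectrum have definite parity, can be illustrated with a discretized one-dimensional Hamiltonian. A sketch assuming a harmonic potential V(x) = x^2 on a symmetric grid and units in which \hbar^2/2m = 1 (the grid size and potential are arbitrary illustrative choices):

```python
import numpy as np

# Discretize H = -d^2/dx^2 + x^2 on a grid symmetric about x = 0, then
# check that each (non-degenerate) low-lying eigenvector is even or odd.
N = 201
x = np.linspace(-6, 6, N)
dx = x[1] - x[0]

# Kinetic term from the standard second-difference stencil.
T = (-np.eye(N, k=1) + 2 * np.eye(N) - np.eye(N, k=-1)) / dx**2
H = T + np.diag(x**2)

vals, vecs = np.linalg.eigh(H)

parities = []
for i in range(10):              # lowest ten bound states
    v = vecs[:, i]
    flipped = v[::-1]            # action of the parity operator on the grid
    # <v|P|v> is +1 for an even state and -1 for an odd state.
    parities.append(round(float(v @ flipped)))

print(parities)
```

Because the potential is even, the discrete Hamiltonian commutes exactly with the grid-reversal operator, so each non-degenerate eigenvector comes out with parity exactly +1 or -1; for this oscillator-like spectrum the parities alternate starting from an even ground state.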
However, if one of the energy eigenstates has no definite parity, it can be asserted that the corresponding eigenvalue is degenerate, and P|\psi\rangle is then an eigenvector of \hat{H} with the same eigenvalue as |\psi\rangle.

Degeneracy and symmetry

Mathematically, the relation of degeneracy with symmetry can be clarified as follows. Consider a symmetry operation associated with a unitary operator S. Under such an operation, the new Hamiltonian is related to the original Hamiltonian by a similarity transformation generated by the operator S, such that H' = SHS^{-1} = SHS^\dagger, since S is unitary. If the Hamiltonian remains unchanged under the transformation operation S, we have

SHS^\dagger = H, i.e. [S, H] = 0

Now, if |\alpha\rangle is an energy eigenstate,

H|\alpha\rangle = E|\alpha\rangle

where E is the corresponding energy eigenvalue, then

HS|\alpha\rangle = SH|\alpha\rangle = ES|\alpha\rangle

which means that S|\alpha\rangle is also an energy eigenstate with the same eigenvalue E. If the two states |\alpha\rangle and S|\alpha\rangle are linearly independent (i.e. physically distinct), they are therefore degenerate. In cases where S is characterized by a continuous parameter \epsilon, all states of the form S(\epsilon)|\alpha\rangle have the same energy eigenvalue.

Symmetry group of the Hamiltonian

Types of degeneracy

Systematic or essential degeneracy

Accidental degeneracy

Examples of systems with accidental degeneracies

The Coulomb and Harmonic Oscillator potentials

Particle in a constant magnetic field

The hydrogen atom

Main article: Hydrogen Atom

In atomic physics, the bound states of an electron in a hydrogen atom show useful examples of degeneracy. In this case, the Hamiltonian commutes with the total orbital angular momentum \hat{L}^2, its component along the z-direction \hat{L}_z, the total spin angular momentum \hat{S}^2 and its z-component \hat{S}_z. The quantum numbers corresponding to these operators are l, m_l, s (always 1/2 for an electron) and m_s respectively. The energy levels in the hydrogen atom depend only on the principal quantum number n.
For a given n, all the states corresponding to l = 0, 1, \ldots, n-1 have the same energy and are degenerate. Similarly, for given values of n and l, the (2l+1) states with m_l = -l, \ldots, l are degenerate. The degree of degeneracy of the energy level E_n is therefore

\sum_{l=0}^{n-1}(2l+1) = n^2

which is doubled if the spin degeneracy is included.[2] The degeneracy with respect to m_l is an essential degeneracy which is present for any central potential, and arises from the absence of a preferred spatial direction. The degeneracy with respect to l is often described as an accidental degeneracy, but it can be explained in terms of special symmetries of the Schrödinger equation which are only valid for the hydrogen atom, in which the potential energy is given by Coulomb's law.[2]

Isotropic three-dimensional harmonic oscillator

This oscillator is said to be isotropic since the potential V(r) acting on it is rotationally invariant, i.e.

V(r) = \frac{1}{2}m\omega^2 r^2

where \omega is the angular frequency given by \sqrt{k/m}. The Schrödinger equation reads

-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2}\right) + \frac{1}{2}m\omega^2(x^2 + y^2 + z^2)\psi = E\psi

So the energy eigenvalues are

E_{n_x,n_y,n_z} = (n_x + n_y + n_z + 3/2)\hbar\omega

or

E_n = (n + 3/2)\hbar\omega

where n is a non-negative integer. So the energy levels are degenerate, and the degree of degeneracy is equal to the number of different sets {n_x, n_y, n_z} satisfying

n_x + n_y + n_z = n

which is equal to

\sum_{n_x=0}^{n}(n - n_x + 1) = \frac{(n+1)(n+2)}{2}

Only the ground state is non-degenerate.

Removing degeneracy

Physical examples of removal of degeneracy by a perturbation

Symmetry breaking in two-level systems

If E_1 and E_2 are the energy levels of the system such that E_1 = E_2 = E, and the perturbation W is represented in the two-dimensional subspace as the following 2\times 2 matrix

W = \begin{pmatrix} 0 & W_{12} \\ W_{12}^* & 0 \end{pmatrix}

then the perturbed energies are

E_\pm = E \pm |W_{12}|

Examples of such two-level systems include:

• the H2+ molecule, in which the electron may be localized around either of the two nuclei.
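Both counting results above, n^2 states for the hydrogen level E_n (ignoring spin) and (n+1)(n+2)/2 states for the oscillator level E_n, are easy to verify by brute force. A minimal sketch (function names are illustrative):

```python
def hydrogen_degeneracy(n):
    """Number of (l, m_l) pairs for principal quantum number n,
    i.e. the sum over l = 0..n-1 of (2l + 1), ignoring spin."""
    return sum(2 * l + 1 for l in range(n))

def oscillator_degeneracy(n):
    """Number of non-negative integer triples (n_x, n_y, n_z) with
    n_x + n_y + n_z = n, counted directly."""
    return sum(1 for nx in range(n + 1)
                 for ny in range(n + 1 - nx))   # n_z = n - nx - ny is fixed

for n in range(1, 9):
    assert hydrogen_degeneracy(n) == n ** 2
for n in range(9):
    assert oscillator_degeneracy(n) == (n + 1) * (n + 2) // 2
print("degeneracy formulas verified")
```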
Fine-structure splitting

The perturbation Hamiltonian due to the relativistic correction is given by

\hat{H}_r = -\frac{p^4}{8m^3c^2}

where p is the momentum operator and m is the mass of the electron. The first-order relativistic energy correction in the |nlm\rangle basis is given by

E_r = -\frac{1}{8m^3c^2}\langle nlm|p^4|nlm\rangle

Now, using p^4 = 4m^2(H^0 + e^2/r)^2,

E_r = -\frac{1}{2mc^2}\left[E_n^2 + 2E_n e^2\langle 1/r\rangle + e^4\langle 1/r^2\rangle\right]

which may be written as

E_r = -\frac{1}{2}mc^2\alpha^4\frac{1}{n^4}\left(\frac{n}{l+1/2} - \frac{3}{4}\right)

where \alpha is the fine structure constant. The first-order spin-orbit energy correction in the |j,m,l,1/2\rangle basis, where the perturbation Hamiltonian is diagonal, is given by

E_{so} = \frac{e^2\hbar^2}{4m^2c^2 a_0^3}\,\frac{j(j+1) - l(l+1) - 3/4}{n^3\, l(l+1/2)(l+1)}

where a_0 is the Bohr radius. The total fine-structure energy shift is given by

\Delta E_{fs} = -\frac{1}{2}mc^2\alpha^4\frac{1}{n^3}\left(\frac{1}{j+1/2} - \frac{3}{4n}\right)

for j = l \pm 1/2.

Zeeman effect

The splitting of the energy levels of an atom when placed in an external magnetic field, because of the interaction of the magnetic moment \vec{m} of the atom with the applied field, is known as the Zeeman effect. Taking into consideration the orbital and spin angular momenta, \vec{L} and \vec{S}, of a single electron in the hydrogen atom, the perturbation Hamiltonian is given by

\hat{V} = -(\vec{m}_l + \vec{m}_s)\cdot\vec{B}

where \vec{m}_l = -e\vec{L}/2m and \vec{m}_s = -e\vec{S}/m. Thus,

\hat{V} = \frac{e}{2m}(\vec{L} + 2\vec{S})\cdot\vec{B}

Now, in the case of the weak-field Zeeman effect, when the applied field is weak compared to the internal field, the spin-orbit coupling dominates and \vec{L} and \vec{S} are not separately conserved. The good quantum numbers are n, l, j and m_j, and in this basis the first-order energy correction can be shown to be given by

E_z = -\mu_B g_j B m_j

where \mu_B = e\hbar/2m is called the Bohr magneton. Thus, depending on the value of m_j, each degenerate energy level splits into several levels.

Lifting of degeneracy by an external magnetic field

For each value of m_l, there are two possible values of m_s, \pm 1/2.

Stark effect

For the hydrogen atom, the perturbation Hamiltonian is

\hat{H}_s = -|e|Ez

if the electric field E is chosen along the z-direction.
The energy corrections due to the applied field are given by the expectation value of \hat{H}_s in the |nlm\rangle basis. It can be shown by the selection rules that \langle nlm_l|z|n_1 l_1 m_{l_1}\rangle \neq 0 only when l_1 = l \pm 1 and m_l = m_{l_1}. The degeneracy is therefore lifted, in first order, only for certain states obeying the selection rules. The first-order splitting in the energy levels for the degenerate states |2,0,0\rangle and |2,1,0\rangle, both corresponding to n = 2, is given by

\Delta E_{2,1,m_l} = \pm 3|e|\frac{\hbar^2}{m_e e^2}E

References

1. Messiah, Albert. Quantum Mechanics. Amsterdam: North-Holland Publishing Company, 1967, pp. 98-106.
2. Merzbacher, Eugen. Quantum Mechanics (3rd ed.). John Wiley, 1998, pp. 267-268. ISBN 0-471-88702-1.

Further reading

• Quantum Mechanics (Volume 1), Claude Cohen-Tannoudji, Bernard Diu, Franck Laloë
• Principles of Quantum Mechanics, R. Shankar
Stuart Kauffman

Stuart Kauffman is a medical doctor, a theoretical biologist, a MacArthur fellow, and a strong defender of the idea that the self-organization of complex adaptive systems must be added to Darwinian evolutionary theory to account for the emergence of life. Kauffman was an early faculty member in residence at the Santa Fe Institute, which is well known for its contributions to complexity theory, chaos theory, and information theory. His strong views on the need to augment natural selection have been controversial among systems biologists, but Kauffman said as much at the outset of his work, which he called a "heretical possibility."
The origin of life, rather than having been vastly improbable, is instead an expected collective property of complex systems of catalytic polymers and the molecules on which they act. Life, in a deep sense, crystallized as a collective self-reproducing metabolism in a space of possible organic reactions. If this is true, then the routes to life are many and its origin is profound yet simple. This view is indeed heretical. Most students of the origin of life hold that life must be based on the self-templating character of RNA or RNA-like molecules. Because of such self-templating, any RNA molecule would specify its base pair complement; hence a "nude gene" might reproduce itself. After that, according to most thinkers, these simplest replicating molecules built up around themselves the complex set of RNA, DNA, and protein molecules which constituted a self-reproducing system coordinating a metabolic flow and capable of evolving. Chapter 7 unfolds this new view, which is based on the discovery of an expected phase transition from a collection of polymers which do not reproduce themselves to a slightly more complex collection of polymers which do jointly catalyze their own reproduction. In this theory of the origin of life, it is not necessary that any molecule reproduce itself. Rather, a collection of molecules has the property that the last step in the formation of each molecule is catalyzed by some molecule in the system. The phase transition occurs when some critical complexity level of molecular diversity is surpassed. At that critical level, the ratio of reactions among the polymers to the number of polymers in the system passes a critical value, and a connected web of catalyzed reactions linking the polymers arises and spans the molecular species in the system. This web constitutes the crystallization of catalytic closure such that the system of polymers becomes collectively self-reproducing. 
While heretical, this new body of theory is robust in the sense that the conclusions hold for a wide variety of assumptions about prebiotic chemistry, about the kinds of polymers involved, and about the capacities of those polymers to catalyze reactions transforming either themselves or other, very similar polymers. It is also robust in leading to a fundamental new conclusion: Molecular systems, in principle, can both reproduce and evolve without having a genome in the familiar sense of a template-replicating molecular species. It is no small conclusion that heritable variation, and hence adaptive evolution, can occur in a self-reproducing molecular system lacking a genome. Since Darwin's theory of evolution, Mendel's discovery of the "atoms" of heredity, and Weismann's theory of the germ plasm, biologists have argued that evolution requires a genome. False, I claim. Also, this new body of theory is fully testable. If correct, sufficiently complex systems of RNA or protein polymers should be collectively autocatalytic. In Chapter 8 these new concepts are extended to the crystallization of a connected metabolism. I strongly suspect that, rather than having formed piecemeal, a connected metabolism, like a self-reproducing set of catalytic polymers, emerged spontaneously as a phase transition when a sufficient number of potentially catalytic polymers were mixed with a sufficiently complex set of organic molecules. In this condition, a critical ratio of number of catalyzed reactions to number of molecular species present is surpassed, and a connected web of catalyzed transformations arises. Life began whole and integrated, not disconnected and disorganized.

Kauffman's ideas about autocatalytic systems are shared by Terrence Deacon. Kauffman thought that he might even discover "laws" of self-organization.
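Kauffman's catalytic-closure transition, in which a connected web of reactions appears once the ratio of catalyzed reactions to molecular species passes a critical value, behaves like the giant-component transition in a random graph. The sketch below is a toy illustration of that graph-theoretic threshold, not Kauffman's actual chemistry model: nodes stand in for polymers and random edges for catalyzed reactions.

```python
import random

def largest_component(n_nodes, n_edges, seed=0):
    """Add n_edges random edges among n_nodes and return the size of the
    largest connected cluster, using a simple union-find structure."""
    rng = random.Random(seed)
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for _ in range(n_edges):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        parent[find(a)] = find(b)

    sizes = {}
    for i in range(n_nodes):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 2000
# Below the threshold (edges/nodes well under 1/2) only small clusters
# exist; above it, a "web" spanning most of the system emerges.
small = largest_component(n, n // 4)       # subcritical
large = largest_component(n, 3 * n // 2)   # supercritical
print(small, large)
```

In this Erdős-Rényi-style model the spanning cluster appears once the edge-to-node ratio exceeds about 1/2, which is the qualitative behavior Kauffman invokes for catalytic closure.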
In his 1995 book, At Home in the Universe, he suggested that the discovery of such laws would show that human life followed directly from pre-existing laws, which would replace the arbitrary and purposeless system of Darwinian natural selection. Humans (and even paradise?) would be implicit in those laws.

Random variation, selection sifting. Here is the core, the root. Here lies the brooding sense of accident, of historical contingency, of design by elimination. At least physics, cold in its calculus, implied a deep order, an inevitability. Biology has come to seem a science of the accidental, the ad hoc, and we just one of the fruits of this ad hocery. Were the tape played over, we like to say, the forms of organisms would surely differ dramatically. We humans, a trumped-up, tricked-out, horn-blowing, self-important presence on the globe, need never have occurred. So much for our pretensions; we are lucky to have our hour. So much, too, for paradise. Where, then, does this order come from, this teeming life I see from my window: urgent spider making her living with her pre-nylon web, coyote crafty across the ridgetop, muddy Rio Grande aswarm with no-see-ems (an invisible insect peculiar to early evenings)? Since Darwin, we turn to a single, singular force, Natural Selection, which we might as well capitalize as though it were the new deity. Random variation, selection-sifting. Without it, we reason, there would be nothing but incoherent disorder. I shall argue in this book that this idea is wrong. For, as we shall see, the emerging sciences of complexity begin to suggest that the order is not all accidental, that vast veins of spontaneous order lie at hand. Laws of complexity spontaneously generate much of the order of the natural world. It is only then that selection comes into play, further molding and refining.
Such veins of spontaneous order have not been entirely unknown, yet they are just beginning to emerge as powerful new clues to the origins and evolution of life. We have all known that simple physical systems exhibit spontaneous order: an oil droplet in water forms a sphere; snowflakes exhibit their evanescent sixfold symmetry. What is new is that the range of spontaneous order is enormously greater than we have supposed. Profound order is being discovered in large, complex, and apparently random systems. I believe that this emergent order underlies not only the origin of life itself, but much of the order seen in organisms today. So, too, do many of my colleagues, who are starting to find overlapping evidence of such emergent order in all different kinds of complex systems. The existence of spontaneous order is a stunning challenge to our settled ideas in biology since Darwin. Most biologists have believed for over a century that selection is the sole source of order in biology, that selection alone is the "tinkerer" that crafts the forms. But if the forms selection chooses among were generated by laws of complexity, then selection has always had a handmaiden. It is not, after all, the sole source of order, and organisms are not just tinkered-together contraptions, but expressions of deeper natural laws. If all this is true, what a revision of the Darwinian worldview will lie before us! Not we the accidental, but we the expected. Information philosophy completely agrees with the idea of replacing the idea of God with the cosmic creative process. Kauffman's plaintive "so much for paradise" lament above is extended in his latest book, Reinventing the Sacred, to the thesis that his "new view of God" would replace a "creator" with the "ceaseless creativity of the universe, biosphere, and human culture and history." Creativity is emergent and unexplainable by reduction to the "causally closed" world of natural law. 
The augmentation of Darwinian natural selection with the "partially lawless" self-organization of Kauffman's first two books is extended to finding a new "place for our spirituality." Since humans invented God, he says, we can reinvent "God as the natural creativity of the universe."

We need a place for our spirituality, and a Creator God is one such place. I hold that it is we who have invented God, to serve as our most powerful symbol. It is our choice how wisely to use our own symbol to orient our lives and our civilizations. I believe we can reinvent the sacred. We can invent a global ethic, in a shared space, safe to all of us, with one view of God as the natural creativity in the universe.

This is an eminently worthwhile project. Its success will depend on the appeal of the arguments and explanations to secularists wanting substitute reasons beyond the humanism that Kauffman calls "too thin" (p. xii). For him, the new sacred will provide explanations for mind, consciousness, agency, meaning, purpose, values, and life itself. Can Kauffman's explanations sway those with a belief in a God that promises an escape from death, an afterlife in paradise? Probably not, but short of that, deeper explanations for great problems in biology, psychology, and philosophy make Kauffman's project worth examining very closely. Like many emergentists, Kauffman attacks the reductionist views epitomized by the great physicist Steven Weinberg's two famous dicta: "The explanatory arrows always point downward" to physics, and "The more we comprehend the universe, the more pointless it seems." In brief, reductionism is the view that society is to be explained in terms of people, people in terms of organs, organs by cells, cells by biochemistry, biochemistry by chemistry, and chemistry by physics.
To put it even more crudely, it is the view that in the end, all of reality is nothing but whatever is "down there" at the current base of physics: quarks or the famous strings of string theory, plus the interactions among these entities. Physics is held to be the basic science in terms of which all other sciences will ultimately be understood. As Weinberg puts it, all explanations of higher-level entities point down to physics. And in physics there are only happenings, only facts.

What does Kauffman put in place of reductionism, which he calls the "Galilean ideal"? Beyond the complexity and self-organization of the complex adaptive autocatalytic systems of his earlier work, he adds a "partially lawless" element that he sees in a non-standard interpretation of quantum mechanics called decoherence. His explanation of mind and consciousness depends on his idea of a "poised state": must a conscious mind be classical, rather than quantum or a mixture of quantum and classical?

Could consciousness be a very special poised state between quantum coherence and decoherence to classicity by which "immaterial, nonobjective" mind "acts" on matter? Most physicists say this is impossible. As I will show in the next chapter, recent theories and experiments suggest otherwise. I am hardly the first person to assert that consciousness may be related to quantum phenomena. In 1989, the physicist Roger Penrose, in The Emperor's New Mind, proposed that consciousness is related to quantum gravity, the still missing union of general relativity and quantum mechanics. Here I will take a different tack and suggest that consciousness is associated with a poised state between quantum "coherent" behavior and what is called "decoherence" of quantum possibilities to "classical" actual events. I will propose that this is how the immaterial—not objectively real—mind has consequences for the actual classical physical world.
I warn you that this hypothesis is highly controversial—the most scientifically improbable thing I say in this book. Yet as we will see, there appear to be grounds to investigate it seriously... I will make use of decoherence to classical behavior as the means by which a quantum coherent conscious mind of pure possibilities can have actual classical consequences in the physical world. This will be how mind has consequences for matter. Note that I do not say "how mind acts on matter," because I am proposing that the consequences in the classical world of the quantum mind are due to decoherence, which is not itself causal in any normal classical sense. Thus I will circumvent the worry about how the immaterial mind has causal effects on matter by asserting that the quantum coherent mind decoheres to have consequences for the classical world, but does not act causally on the material world. As we will see, this appears to circumvent the very old problem of mental causation and provide a possible, if still scientifically unlikely, solution to how the mind "acts" on matter... The cornerstone of my theory is that the conscious mind is a persistently poised quantum coherent-decoherent system, forever propagating quantum coherent behavior, yet forever also decohering to classical behavior. I describe the requirements for this theory in more detail below. Here, mind—consciousness, res cogitans—is identical with quantum coherent immaterial possibilities, or with partially coherent quantum behavior, yet via decoherence, the quantum coherent mind has consequences that approach classical behavior so very closely that mind can have consequences that create actual physical events by the emergence of classicity. Thus, res cogitans has consequences for res extensa! Immaterial mind has consequences for matter. 
More, in the poised quantum coherent-decoherent biomolecular system I will posit, quantum coherent, or partially coherent, possibilities themselves continue to propagate, but remain acausal. This will become mental processes begetting mental processes. Information philosophy agrees that mind is immaterial, but it is also "objectively real." Some physicists will object to my use of the word immaterial with respect to quantum phenomena. They would want to say instead "not objectively real," as a rock is. I am happy to accept this language, and will use immaterial to mean "not objectively real." In a recent paper on arXiv, Kauffman speculated about solutions to the problem of free will and the mind-body problem, and suggested a new interpretation of quantum mechanics. He calls the "causal closure" of classical physics (basically reductionism) the source of the idea that we are machines and our minds are epiphenomenal. He proposes a new dualism of ontologically real actuals (he uses Descartes' term res extensa) and ontologically real possibles (which he calls res potentia, a term that could come from Aristotle or Heisenberg). The Schrödinger equation describes the evolution of these possibilities. He puzzles over nonlocality. And he presents his "poised realm," which hovers reversibly between quantum coherent and "classical" worlds. He then proposes a "quantum mind" in which decoherence produces an "acausal loss of phase information from the open quantum system to the universe." This has "acausal consequences for the classical 'meat' of the brain," he says. Kauffman suggests that he can "decircularize" the "Strong Free Will Theorem" of John Conway and Simon Kochen. He first states the two-part standard argument against free will. If the mind is determined, "classical physics holds we have no free will at all."
If we try to use quantum indeterminism to achieve an ontologically free will, it is merely random... So a random quantum event occurs in my brain... but I am not "responsible"; the quantum event was random. So even if measurement is real, and ontologically indeterminate, and so underlies a "free will," that will cannot be responsible. Kauffman proposes "a broad new formulation of quantum mechanics" in terms of a new triad: Actuals, Possibles, and Mind, with conscious observation acausally mediating measurement and doing. Here new actuals create new possibles, which are available via mind to be measured to create new actuals, which create new "adjacent possibles" in a persistent becoming of the universe. "I want us to consider a totally new view of quantum mechanics and reality, consisting of ontologically real actuals that obey the law of the excluded middle, ontologically real possibles that do not obey the law of the excluded middle in quantum behavior before measurement, and mind measuring, responsible, and free willed." In this view, measurement creates new actuals that, acausally and outside of space and inside time consistent with Special Relativity, can instantaneously and acausally alter what is now possible, and hence account for instantaneous changes in wave functions upon measurement and for non-locality. If new "actuals" are created acausally by quantum mind "measurements," the outcomes are statistical, that is, random. We need the randomness to create the possibles from which the outcome/choice/selection is adequately determined. In turn, new possibles are available for mind to measure and do, creating new actuals, in a persistent cycle of quantum-enigma, free-willed, and conscious becoming in a radically participatory universe, with a non-epiphenomenal "cosmic" consciousness and doings wherever measurements happen.
NASA Gravity Probe Confirms Two Einstein Predictions

sanzibar writes "After 52 years of conceiving, testing and waiting, marked by scientific advances and disappointments, one of Stanford's and NASA's longest-running projects comes to a close with a greater understanding of the universe. Stanford and NASA researchers have confirmed two predictions of Albert Einstein's general theory of relativity, concluding one of the space agency's longest-running projects. Known as Gravity Probe B, the experiment used four ultra-precise gyroscopes housed in a satellite to measure two aspects of Einstein's theory about gravity. The first is the geodetic effect, or the warping of space and time around a gravitational body. The second is frame-dragging, which is the amount a spinning object pulls space and time with it as it rotates." • by Anonymous Coward on Thursday May 05, 2011 @04:53AM (#36033044) Please, can somebody restore the fortune database? Thanks. Uh, and First Post. • Re: (Score:1, Offtopic) by hcpxvi ( 773888 ) Uh, what he said. I'd mod him up if I had any mod points. Not that I have had any for months, despite excellent karma. The new Slashdot: too buggy to be fit for purpose. • by rhook ( 943951 ) on Thursday May 05, 2011 @06:16AM (#36033308) The new Slashdot: too buggy to be fit for purpose. I have to agree with this, several bugs. The most annoying one is having the comments scroll to the top of the page when I click anything. • I know this is off topic: because I need glasses, I use the + and - keys in Opera to zoom the screen a bit. But now ./ does something to ignore those keystrokes. I have to go to Options and toggle filter controls. It doesn't seem to matter if it's on or off, I have to just toggle it to another state. Then it works. A day or so later, I have to do it again.
• by Shippu ( 1888522 ) I don't have this problem. It's probably Opera's fault though. For some months I've been wanting to try Firefox/IE9/Chromium because Opera has many unfixed bugs that go back even to version 9. For example, I can't select any text in this text box without doing a right click>select all first. I reported this to them 4 years ago. • by amaupin ( 721551 ) on Thursday May 05, 2011 @09:48AM (#36034508) Homepage Links are now unclickable, at least on the first 4 or 5 tries. Each time you click a link in someone's post, the page jumps and/or another post expands/collapses. The sheer level of ignorance and/or lack of interest in their own site on the part of the Slashdot owners is mind-boggling. (Click on links? I must be new here.) Seriously, Slashdot, fix your goddam site. • by Ogive17 ( 691899 ) I'm curious why /. looks like shit while using IE8 or Firefox but looks pretty good on my Droid X's native browser. I was browsing from my phone during a phone conference yesterday and couldn't believe how functional the page looked. • Um, maybe the developer uses a Droid X for development work. That would explain quite a lot actually... • by Xacid ( 560407 ) And here I thought it was just my fault for not using IE... • by Hatta ( 162192 ) Mark as untrusted. Switch to classic discussion mode in your preferences. • by JWW ( 79176 ) Couldn't agree more. EVERYTIME /. upgrades the first thing I do is go back and turn classic discussion mode back on. • dont click anything. CmdrTaco --sent from my iPhone • by sjwt ( 161428 ) no, but I can link to the related saturday morning breakfast cereal comic. This is why experimental scientists hate theoretical scientists [] • by dotancohen ( 1015143 ) on Thursday May 05, 2011 @08:47AM (#36033992) Homepage Please, can somebody restore the fortune database? Thanks. Uh, and First Post. Restore it? It works fine for me, here: In fact, I've been seeing that for a few days! • Honey?
(Score:4, Funny) by mangu ( 126918 ) on Thursday May 05, 2011 @05:09AM (#36033112) "Imagine the Earth as if it were immersed in honey," Francis Everitt, GP-B principal investigator at Stanford University in Palo Alto, Calif., said in a statement Doh, this is Slashdot, we want a car analogy, please. And have the numerical results expressed in libraries of congress per football field. Thanks. • OK, geodetic effect, check. Frame-dragging, check. Commence dev. project warp drives • by roger_pasky ( 1429241 ) on Thursday May 05, 2011 @06:21AM (#36033332) Agreed, make it so. Geordi, estimate development period from current stardate. Data, start doing some calculations. Wesley, contact Dr. Sheldon Cooper and piss him off. • NASA and the USA (Score:5, Insightful) by mustPushCart ( 1871520 ) on Thursday May 05, 2011 @05:22AM (#36033158) I am not an American, but I have seen both the blue pearl image and the pale blue dot image. I have read about how long these projects have run and the astounding quality of the instruments that must be on satellites like these along with the massive foresight it must have taken at launch time to make them relevant decades later. You can criticize the USA all you want for their wars, and I have heard some harsh criticism of NASA too, but the most astounding images and discoveries have always come from there, because they are at the pinnacle of space exploration. The world would be a lot less interesting if it wasn't for them. • by Anonymous Coward Have you seen the comments in TFA by this David de Hilster guy? What a fruitloop. Check out his picture []. Want some love particles, baby? • by a_hanso ( 1891616 ) on Thursday May 05, 2011 @05:53AM (#36033256) Journal [] has a simple animation explaining the gravity probe B experiment. • That's great... but given quantum physics and that little bugger of a concept known as the observer effect (basically ALL experience is subjective to the observer - even scientific ones...)
how do we know the results we are recording are actual vs what we believe we should be experiencing and therefore are willing to see? Sure I could be wrong in what I am saying, but let me know and I'll entertain it in my field of awareness as possibility and perhaps I'll experience it differently...or maybe not. ;) Y • by sandytaru ( 1158959 ) on Thursday May 05, 2011 @08:40AM (#36033938) Journal The effects of gravity are at macro scales, not quantum scales. From what I understand, the observer effect doesn't really kick in until you start talking about stuff smaller than atoms. The universe is a bit more well-behaved at scale sizes larger than an atom, where chemistry and classical physics kick in. Our other end of non-understanding doesn't start until you get to the very macro, all the dark matter and dark energy floating around out there that no one really knows anything about. • Exactly. Quantum mechanics only starts to be noticeable about ~50nm or so. In contrast, gravity is normally only noticeable with objects best measured in yottagrams (that's "quintillions of tons", for those of us a bit fuzzy on the extreme SI prefixes). Now, there's been a huge amount of speculation as to how the two combine, especially from theoretical physicists like Dr. Hawking. However, there have been absolutely no experiments in quantum gravity, for one simple reason: the only time you get that much • In contrast, gravity is normally only noticeable with objects best measured in yottagrams 1.61lb is considerably less than a yottagram. Cavendish Experiment [] • Yes, and that experiment required some of the greatest precision technologically possible at that time. I'm talking objects big enough that the force of gravity they exert is clearly and immediately obvious, just as I was talking about quantum effects only being clearly and immediately obvious below 50nm. 
You can certainly detect both phenomena at lower masses or greater distances, but that is hardly relevant to the discussion of practical effects. • The effects of gravity are at macro scales, not quantum scales. The effects are on all scales. Just because nobody can currently describe how a single photon warps space as it travels does not mean it does not occur. We know it does. • by blueg3 ( 192743 ) on Thursday May 05, 2011 @08:55AM (#36034052) That's not part of quantum mechanics at all. That's a gross generalization made philosophical that arose out of an actual quantum mechanical principle. Measurement-related QM principles, like wavefunction collapse and Heisenberg, are only meaningful when what you're observing is the size and scale of a quantum state, which is very, very small. Gravitational effects are for the most part (and in this case) for large objects, where QM principles are unimportant. • by qc_dk ( 734452 ) And it could also be related to a gross misgeneralization of the theory of relativity. Which basically states the exact opposite: that any careful observer in any frame of reference will agree on the value of the speed of light and the laws of physics. A better name would have been the theory of constancy. • It depends on your perspective. It's "relativity" because most measurements you make *are* relative to your reference frame, only the speed of light (and various invariant quantities) are absolute. The relativity that SR and GR deal with is different in kind than the "peculiarities" of quantum mechanics. And, the previous post was correct: the observation-related uncertainties of QM are (mostly) only important when systems get to microscopic scales. Yes, the same microscopic laws apply to macroscopic phys • by blueg3 ( 192743 ) Only observers in inertial reference frames agree on the laws of physics, no?
• by Anonymous Coward on Thursday May 05, 2011 @08:58AM (#36034090) You need to actually study quantum physics if you want to talk about these things like an adult. It's obvious to everyone that HAS studied quantum physics that you're spouting nonsense and claiming that Science supports you. Quit watching "What the bleep do we know?". It's full of people lying to you to sell you an idea (and one scientist who was duped and every single quote taken out of context). • by xehonk ( 930376 ) The observer effect is not something specific to self-aware observers. It can simply be interaction with other matter - which has then "observed" the item in question. Now with that out of the way, what you want to happen has no influence on what does happen. That's simply not what the observer effect is about. • by tm2b ( 42473 ) Sorry, you're making a comment on Quantum Mechanics. I am going to have to ask you to explain any version of a Schrödinger equation, or to stop. That should really be a law. • I usually bow out of stories like this, but must make one comment: Anybody who thinks time is important as a metric is seriously missing the point. • ... but the Chinese are actively doing it - as seen here in 2007 []. Sometimes we need to just shut up and do it, else we'll have deja vu like solar energy [] or nuclear power [] • I'm sorry, I posted this comment to the wrong article... sigh. • But your first post got Score:1 and your second got Score:2. I think the day is about here when the long running two-million monkey experiment that is will be shut down. Oh, and thank you, Dr. Einstein, for thinking about this stuff and putting it in a form that could be challenged experimentally. • Finally I can put an end to all of those naysayers of gravitation theory! • Look - it's just a THEORY - you admitted it yourself right in your post. Go find some facts and get back with me. I've got a Bible full of them right here at my desk, and there isn't a single mention of gravity.
I can't believe you're still blathering on about this... ;-) • Now if I recall correctly, they were also looking for the existence of gravitational waves.. which they.. didn't find.. correct? • by Greyfox ( 87712 ) on Thursday May 05, 2011 @09:21AM (#36034254) Homepage Journal Relativity and black holes look like bugs in a not-very-well thought-out physics simulation. This sort of thing makes me wonder if the universe isn't just some extra-dimensional college kid's thesis project on how to find the best way to turn hydrogen into plutonium. • In the beginning, Bob created the heavens and the earth. But his emulation of Newtonian physics was but partially implemented, and so he only got a B-. • by qc_dk ( 734452 ) Dear Mr. 94343, I would like to thank you for considering our illustrious institution. I regret to inform you, however, that you have not been accepted to our "Universe creation and its applications" Ph.D. programme. While your admission project did indeed show a lot of practical skill and hard effort, we believe your theoretical understanding is somewhat deficient. We asked for the best way to turn hydrogen into plutonium, not iron. We encourage you to take another year of theoretical physics, and reapplying for t • When I read something like "confirms Einstein's theory" AGAIN I just get annoyed. In my opinion, the mission would only be a success if it found a flaw in Einstein's theories. Those theories are many decades old and I'm hungry for some totally new physics. I get so disappointed when I hear that the Pioneer mystery (or whichever one was curving unexpectedly) is solved using perfectly well known physics. Where are the new unknown rules that we can use to create new breakthrough technologies? • by notpaul ( 181662 ) • From an extra-dimensional point of view, Hydrogen may as well already be Plutonium.
• However the Stanford satellite supposedly is ten times more accurate • Why it took 52 years (Score:5, Interesting) by rotenberry ( 3487 ) on Thursday May 05, 2011 @10:39AM (#36035148) From what I have heard, the reason it took 52 years to get this spacecraft into space was political, not technical. There is no doubt that the technology developed to measure these parameters is very impressive. The real question is whether or not it was worth the effort. When I was at JPL in the 1980s a person who had published numerous papers in both experimental and theoretical relativity explained why scientists within the space program were not supporting this project. Since this conversation took place thirty years ago I must paraphrase: "No modern theory of gravity predicts anything else, and if the measurements showed anything but the predicted results it would be assumed to be an experimental error. Unlike the technology used to search for gravitational radiation (which is also used to study the atmospheres of planets), the hardware in this spacecraft cannot be used for any other scientific experiment." So for 52 years the money has been used for other science. For a much more worthy project read about the recently canceled LISA project. If you wish to read about the politics of how a science project is chosen by NASA I can think of no better description than Steven W. Squyres' "Roving Mars" where he describes how the Mars Rovers were nearly canceled. • by radtea ( 464814 ) on Thursday May 05, 2011 @11:48AM (#36035984) No modern theory of gravity predicts anything else Except Moffat's, of course. And while every experimental anomaly is first dismissed as error, the fact (you remember those things, facts?) is that scientists have an excellent record of poking away at anomalies until a robust, consistent explanation is found. Sometimes the explanation is mundane--the Pioneer Anomaly, for example.
Sometimes it is profound--the anomalous precession of the orbit of Mercury comes to mind, which was measured quite precisely in the 1850's, if I recall correctly, some sixty years before the underlying cause was found. People who say things like this are simply ignorant of the history and timescales on which science actually operates. It is entirely implausible that a group of people who have collectively worked over hundreds of years to account for dozens of tiny numerical anomalies in extremely difficult precision measurements would suddenly throw up their hands and say, "OK, I guess we can ignore the data now!" • by Anonymous Coward Like everything else, science does not have access to infinite resources. However, posts such as yours remind us there is an infinite amount of testing to do. For example, we could pose the question of whether or not a ball and a feather fall at the same rate as each other on Pluto, if dropped simultaneously. In the case where our need for resources outpaces our access to them, we must prioritize what is important. One way of doing this is time and potential for payoff. Consider how many years the hypothetic Very likely, but nobody would have been absolutely sure. Physicists would have looked at possible theories that were in accordance with the experimental results, and come up with other tests. The Michelson-Morley experiment was similar in effect. People thought it very odd that it didn't show ether drift, but the theories were firmly established, and so physicists kept worrying at it. More expe They cancelled LISA?! D= If it's because there's no room in the budget for LISA and a shuttle-derived heavy-lift vehicle, I'm personally going to go kick a bunch of congresscritters in the jewels. • Sometimes I wonder if these great minds that pop up from time to time (Newton, Copernicus, Einstein etc) are really one of us.
It's funny how they appear, completely revolutionize a field or offer a world changing new perspective and then disappear, just to have us mere mortals work for years and decades to understand, confirm and accept it. Applause again for Einstein, you are a bit creepy to be completely honest. • My understanding was that (satellite-based) GPS would give you a drastically inaccurate position reading without an algorithmic correction for frame-dragging. If so, it would seem that part of Einstein's predictions were validated quite a few years ago. • by Strider- ( 39683 ) on Thursday May 05, 2011 @01:40PM (#36037490) No, GPS does take General Relativity and Special Relativity into account, and confirms both nicely. Due to the motion of the spacecraft in orbit with respect to us on the ground, one would expect the GPS satellites to lose about 7 microseconds a day. However, because the satellites are further out of our gravity well, General Relativity predicts the satellites will gain about 45 microseconds a day. Basically, this means that if GR and SR were not taken into account, the GPS system would be useless after about 2 minutes. Source: [] However, the effect of Frame Dragging is many orders of magnitude smaller, to the point where it will not have a measurable effect on GPS. To even have a hope of measuring it, Gravity Probe B had gyroscopes made from a set of the most perfect spheres ever manufactured. If you were to scale these spheres up to the size of the earth, the tallest mountain would be less than 1 meter tall. • by Required Snark ( 1702878 ) on Thursday May 05, 2011 @02:40PM (#36038456) According to this paper [] the Gravity Probe B experiment results were not very useful. The goal was to get numerical results to 1% accuracy, and the actual measurements only achieved 19% accuracy. This was due to a design error. Mechanically, the spheres were the roundest objects ever manufactured, Everitt explained.
Were one blown up to the size of Earth, the biggest hill on it would be 3 meters tall. However, trapped charges in the niobium made the gyroscopes far less round electrically; an Earth-sized map of a sphere's voltage landscape would sport peaks as high as Mount Everest. Interactions between those imperfections and ones in the gyroscopes' housing created tiny tugs, and to reach the final precisions, researchers spent 5 years figuring out how to correct for them. On top of that, other researchers made better measurements using other much cheaper satellites. So they got scooped and their final results were not what they had planned. Not a complete failure, but not a real success either. • This is cool news! When I first got deep into physics, I often considered the idea of "a hot air balloon floating (not) around an earth without an atmosphere", and "would the balloon be dragged around the planet as it rotates (by gravity)?", and now I feel satisfied that I know the answer! Which leads to the next question: If you took our solar system and placed it at the most significant Lagrange point between two galaxies, would our understanding of physical constants change? ;) And also the intermediary
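The GPS relativity figures quoted in the thread above can be sanity-checked in a few lines. This is a back-of-the-envelope sketch using the round numbers from the comment (-7 us/day special-relativistic, +45 us/day general-relativistic), not an orbital-mechanics calculation; the "useless after about 2 minutes" claim corresponds to the ranging error reaching the ~15 m scale:

```python
c = 299_792_458.0          # speed of light, m/s

sr_us_per_day = -7.0       # clock slows from orbital speed (special relativity)
gr_us_per_day = +45.0      # clock speeds up at altitude (general relativity)
net_us_per_day = sr_us_per_day + gr_us_per_day   # about +38 us/day uncorrected

# ranging error if uncorrected: accumulated clock error times the speed of light
error_m_per_day = net_us_per_day * 1e-6 * c           # roughly 11 km per day
error_after_2_min = error_m_per_day * (120 / 86400)   # roughly 16 m
```

An uncorrected 38 us/day clock drift thus corrupts positions by more than typical GPS accuracy within a couple of minutes, consistent with the comment.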
Instabilities in low-dimensional magnetism

In the previous Section, measurements and simulations were discussed of ultrafast magnetization phenomena in three dimensions; here possibilities are considered for using the SwissFEL to investigate the quantum-fluctuating behavior of low-dimensional magnetic systems [13]. In many magnetic insulators, magnetic moments interact through an exchange of electrons between neighboring sites. Such exchange interactions are short-ranged. If these interactions are isotropic, such systems can be described by the well-known Heisenberg Hamiltonian, which is given by:

H_{Heisenberg} = -J \sum_{(ij)} \bar{S}_i \cdot \bar{S}_j

where J is the exchange energy, and the summation is over nearest-neighbor spins; if J is positive, the spins S_i and S_j tend to align ferromagnetically. For an ordered magnetic phase, the temperature-dependent change in the saturation magnetization can be calculated [14] in this model as M_s(0) - M_s(T) \propto N_{sw}(T), where

N_{sw}(T) \propto \int \frac{k^{d-1}}{\exp{[\epsilon(k)/k_B T]} -1} dk

is the density of spin-waves excited at the temperature T. For d=3 dimensions, this integral is proportional to T^{3/2}, giving the well-known Bloch 3/2-law. For dimensions lower than d=3, the expression for N_{sw}(T) diverges, implying that fluctuations will prevent the occurrence of long-range magnetic order. This is a fundamental result, which has been rigorously proven by Mermin and Wagner [15], and which means that many types of magnetic systems do not order at any finite temperature. Some systems are disordered even at zero temperature, where thermal fluctuations are absent, due to the presence of quantum fluctuations in the ground state. This can happen if a static arrangement of magnetic moments is not an eigenstate of the Hamiltonian, causing quantum fluctuations to generate a new type of ground state.
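The dimension-dependence of the spin-wave integral N_sw(T) can be checked numerically. The following sketch assumes a quadratic magnon dispersion ε(k) = Dk² and works in units with k_B = 1 and an arbitrary stiffness D (assumptions for illustration, not taken from the text); it shows that in d=3 the integral converges and follows the Bloch T^{3/2} scaling, while in d=2 it grows without bound as the infrared cutoff is removed:

```python
import numpy as np
from scipy.integrate import quad

def n_sw(T, d, D=1.0, k_cut=1e-8):
    """Spin-wave density: integral of k^(d-1) / (exp(D*k^2/T) - 1) from k_cut."""
    f = lambda k: k ** (d - 1) / np.expm1(D * k ** 2 / T)
    upper = np.sqrt(700.0 * T / D)   # integrand is ~e^(-700), negligible beyond
    val, _ = quad(f, k_cut, upper, limit=200)
    return val

# d = 3: converges; Bloch scaling gives N(2T)/N(T) = 2^(3/2)
ratio = n_sw(2.0, 3) / n_sw(1.0, 3)

# d = 2: diverges logarithmically as the infrared cutoff shrinks
d2 = [n_sw(1.0, 2, k_cut=c) for c in (1e-2, 1e-4, 1e-6)]
```

Each decade of cutoff in d=2 adds a constant increment to the integral, the numerical signature of the logarithmic divergence behind the Mermin-Wagner result.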
These disordered systems form ferromagnetically or antiferromagnetically-coupled spin liquids, and their quantum fluctuations, as described by the intermediate scattering function S(Q,t) (see Chapter V), represent a particularly rich field of investigation for the SwissFEL.

Two-dimensional case (d=2)

As an example of the dynamics of a 2d-magnetic structure, consider the case of an infinite in-plane anisotropy (S_z=0): the so-called xy-model:

H_{xy} = -J \sum_{(ij)} (S_i^x S_j^x + S_i^y S_j^y)

As for the 2d-Heisenberg model, there is no magnetic order at finite temperature in the xy-model. However, it is found that spin correlations cause the formation at low temperature of a disordered array of magnetic vortices, with radii of order R. The cost in exchange energy incurred by the formation of such a vortex is πJ ln(R/a), where a is the lattice constant, and the gain in entropy represented by the varying position of the vortex center is 2k_B ln(R/a). Hence the free energy F = (πJ - 2k_B T) ln(R/a) becomes negative above the Kosterlitz-Thouless transition temperature T_KT = πJ/2k_B, which separates a non-vortex from a vortex phase. At low temperatures, the S=1/2 layered perovskite ferromagnet K2CuF4 is approximately described by the xy-model, going through a Kosterlitz-Thouless transition at 5.5 K. A further example of (quasi) 2d-magnetic dynamics is that of vortex core reversal in thin magnetic nanostructures (see Infobox).

One-dimensional case (d=1)

Magnetism in one dimension in the zero-temperature limit is particularly interesting, because it arises from quantum fluctuations. Consider first the isotropic J>0 Heisenberg model for a one-dimensional chain of N spins S=1/2, with periodic boundary conditions. The (ferromagnetic) ground state can be represented as |Ψ0〉 = |↑↑↑...↑〉. In the Bethe Ansatz, the excited states of the system are built up as a superposition of states with discrete numbers of flipped spins.
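The vortex free-energy argument above can be captured in a few lines. This sketch works in units with k_B = 1, so J and T carry units of kelvin; the value J ≈ 3.5 K is back-inferred from the quoted T_KT = 5.5 K of K2CuF4 and is illustrative only:

```python
import numpy as np

k_B = 1.0  # units where k_B = 1; J and T both in kelvin

def vortex_free_energy(T, J, R_over_a):
    # F = (pi*J - 2*k_B*T) * ln(R/a): exchange cost minus positional entropy
    return (np.pi * J - 2 * k_B * T) * np.log(R_over_a)

def T_KT(J):
    # sign change of F: the Kosterlitz-Thouless temperature pi*J / (2*k_B)
    return np.pi * J / (2 * k_B)

# illustrative J chosen so that T_KT reproduces the 5.5 K quoted for K2CuF4
J = 2 * 5.5 / np.pi
```

Below T_KT the free energy of an isolated vortex is positive (vortex formation is suppressed); above it, F turns negative and vortices proliferate.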
If we confine ourselves to single-spin (r=1) excitations: |n〉 = |↑↑↑...↑↓↑...↑〉 (here the nth spin has been reversed), we can write the excited state as

|\Psi_1 \rangle = \sum_{n=1}^{N} a(n) |n \rangle

It is then a straightforward exercise to compute from the Schrödinger equation (for convenience, written in terms of the raising and lowering operators S±) the excited-state energy E1, and one finds, for large N, that excitations exist with arbitrarily small excitation energies E1 - E0; i.e., the excitation spectrum is gapless. Higher-level excitations, involving multiple spin flips r = 2, 3, 4, ..., become increasingly cumbersome to handle, but the gapless spectrum is retained (Figure I.8a shows the analogous result for the 1d-antiferromagnetic spin-½ chain [16]).

Magnetic vortex core switching

The magnetic vortex is a very stable, naturally-forming magnetic configuration occurring in thin soft-magnetic nanostructures. Due to shape anisotropy, the magnetic moments in such thin-film elements lie in the film plane. The vortex configuration is characterized by the circulation of the in-plane magnetic structure around a very stable core of only a few tens of nanometers in diameter, of the order of the exchange length. A particular feature of this structure is the core of the vortex, which is perpendicularly magnetized relative to the sample plane. This results in two states: "up" or "down". Their small size and perfect stability make vortex cores promising candidates for magnetic data storage. A study by Hertel et al. [27] based on micromagnetic simulations (LLG equation) has shown that, strikingly, the core can dynamically be switched between "up" and "down" within only a few tens of picoseconds by means of an external field. Figure I.i6 below simulates the vortex core switching in a 20 nm thick Permalloy disk of 200 nm diameter after the application of a 60 ps field pulse, with a peak value of 80 mT.
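Returning to the spin-chain excitations discussed before the Infobox: the gapless single-flip (r=1) spectrum can be verified by diagonalizing the one-magnon block of the S=1/2 ferromagnetic Heisenberg chain directly. This is a minimal sketch in units with J = 1, with energies measured relative to the ferromagnetic ground-state energy -JN/4 (the matrix elements below follow from the standard one-flip sector, not from the text):

```python
import numpy as np

def one_magnon_spectrum(N, J=1.0):
    # Single-flip (r=1) block of H = -J sum_i S_i.S_{i+1} for an S=1/2
    # periodic chain, relative to the ferromagnetic ground state.
    H = np.zeros((N, N))
    for n in range(N):
        H[n, n] = J                  # the two "broken" bonds cost J in total
        H[n, (n + 1) % N] = -J / 2   # flip hopping from (S+_n S-_{n+1})/2 terms
        H[(n + 1) % N, n] = -J / 2
    return np.linalg.eigvalsh(H)     # ascending eigenvalues

# The eigenvalues reproduce the magnon dispersion E(k) = J(1 - cos k),
# k = 2*pi*m/N; the smallest nonzero excitation shrinks as N grows: gapless.
gaps = {N: one_magnon_spectrum(N)[1] for N in (8, 32, 128)}
```

The lowest nonzero eigenvalue J(1 - cos(2π/N)) vanishes as N → ∞, which is the numerical counterpart of the gapless-spectrum statement above.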
Using field pulses as short as 5 ps, the authors show that the core reversal unfolds first through the production of a new vortex with an oppositely oriented core, followed by the annihilation of the original vortex with a transient antivortex structure. To date, no experimental method can achieve the required temporal (a few tens of ps) and spatial (a few tens of nm) resolution to investigate this switching process. The combination of the high-energy THz pump source and circularly-polarized SwissFEL probe pulses will allow such studies.

One of the simplest ways for a material to avoid magnetic order and develop macroscopic quantum correlations is through the creation of an energy gap Eg in the excitation spectrum. Since Eg is of the order of the exchange interaction, the gap introduces a time-scale for fluctuations which is typically on the order of femtoseconds. One such phenomenon is the spin-Peierls effect. This is related to the better-known charge Peierls metal-insulator transition (see Chapter V). In the spin-Peierls effect, a uniform 1d, S=1/2 spin chain undergoes a spontaneous distortion, causing dimerization, and hence the appearance of two different exchange couplings J±δJ (see Fig. I.9). For δJ sufficiently large, S=0 singlet pairs are formed on the stronger links, implying a non-magnetic state and a finite energy gap to the excited states (see Fig. I.8b). The Peierls state is stable if the resulting lowering of magnetic energy more than compensates for the elastic energy of the lattice distortion. Note that the distortion is a distinctive feature which is visible with hard X-ray diffraction. The spin-chain compound CuGeO3 is an inorganic solid which undergoes a spin-Peierls transition at 14 K. A more subtle quantum effect also leads to an energy gap in the excitation spectrum of an antiferromagnetic Heisenberg chain of integral spins [17].
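The opening of a spin gap with dimerization can be seen directly in a small exact-diagonalization sketch. The 8-site antiferromagnetic chain, the alternating bonds J(1 ± δ), and the parameter values below are all illustrative choices, not taken from the text:

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def op(o, site, N):
    # embed a single-site operator at `site` in an N-site chain
    out = np.eye(1, dtype=complex)
    for j in range(N):
        out = np.kron(out, o if j == site else np.eye(2))
    return out

def dimerized_gap(N=8, J=1.0, delta=0.0):
    # antiferromagnetic H = sum_i J(1 + delta*(-1)^i) S_i . S_{i+1}, periodic
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N):
        Ji = J * (1 + delta * (-1) ** i)
        for s in (sx, sy, sz):
            H += Ji * op(s, i, N) @ op(s, (i + 1) % N, N)
    ev = np.linalg.eigvalsh(H)
    return ev[1] - ev[0]   # energy gap above the singlet ground state

# The gap grows with the dimerization delta; at delta = 1 the chain breaks
# into decoupled dimers of strength 2J, whose singlet-triplet gap is exactly 2J.
```

The fully dimerized limit provides a clean check: with δ = 1 every other bond vanishes and the gap is the singlet-triplet splitting of an isolated two-spin dimer.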
As conjectured by Haldane, neighboring S=1 spins can be resolved into two S=1/2 degrees of freedom, effectively forming singlet bonds (see Fig. I.10). This valence bond state is responsible for the existence of a Haldane energy gap, since long wavelength spin excitations cannot be generated without breaking the valence bonds. A consequence of the Haldane mechanism is a spatial correlation function for magnetic excitations which decays exponentially with distance, compared with the power-law dependence in the case of gapless excitations. An inorganic material which demonstrates the Haldane phenomenon is Y2BaNiO5. The Haldane mechanism is also used to describe the dynamic behavior of finite 1d S=1 antiferromagnetic chains, as investigated in Mg-doped Y2BaNiO5 by inelastic neutron scattering [18]. The finite chains are generated by the non-magnetic Mg impurities, and the ends of the chains represent S=1/2 impurities with a strong nano-scale correlation, with the result that the Haldane gap becomes a function of chain length. In an applied magnetic field, the triplet spin excitations undergo Zeeman splitting, eventually becoming a new ground state (see Fig. I.11). Thus the Zeeman transitions are hybrid excitations with both local and cooperative properties. They therefore serve as probes of the quantum correlation functions, which are otherwise difficult to access. The temperature and field ranges for such studies vary with the material, but effects can be observed in many systems at T ~1 K and B ~1 T. Some quantum magnets show magneto-electric interactions [19], which may allow perturbation of the quantum states by electric or optical pulses. In this case, it will be possible to probe the temporal evolution of macroscopic quantum correlations in a pump-probe experiment at the SwissFEL.
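The Zeeman closing of a spin gap quoted above can be estimated with laboratory constants. In this sketch the gap value, chosen as k_B × 1 K to match the quoted T ~ 1 K scale, and the g-factor g = 2 are assumptions for illustration:

```python
from scipy.constants import physical_constants, k as k_B

mu_B = physical_constants["Bohr magneton"][0]   # J/T
g = 2.0                                         # assumed g-factor

def zeeman_levels(gap, B):
    # S=1 triplet levels above the singlet ground state: E_m = gap - g*mu_B*B*m
    return [gap - g * mu_B * B * m for m in (+1, 0, -1)]

def critical_field(gap):
    # field at which the m=+1 triplet level reaches zero (new ground state)
    return gap / (g * mu_B)

# hypothetical gap comparable to the thermal energy at 1 K
gap = k_B * 1.0
B_c = critical_field(gap)   # comes out below 1 tesla
```

A gap of k_B × 1 K closes at a field of order 1 T, which is why the text quotes T ~ 1 K and B ~ 1 T as the typical ranges for such experiments.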
Zero-dimensional case (d=0) The extreme brightness of the SwissFEL allows measurements of magnetic phenomena on dilute samples consisting of isolated nanoparticles, with effectively zero dimension. A recent realization of such nanoparticles is the single-molecule magnet manganese acetate (Mn12Ac) shown in Fig. I.1, in which 12 magnetic Mn ions, with a total spin S=10, are held in close proximity by an organic molecular framework. Another example is the creation of magnetic nanodots by sub-monolayer deposition onto a high-index surface of a metal (see Fig. I.12). If they have a magnetic anisotropy above the superparamagnetic limit, such nanoparticles may exhibit room-temperature ferro- or antiferromagnetic order, and undergo sub-nanosecond quantum tunnelling between different magnetization directions [21]. Details of this tunnelling, including field-enhancement of the rate, are an attractive topic in ultrafast magnetization dynamics, suitable for study with the SwissFEL.
Research Article (OPTICS): Deconvolution of optical multidimensional coherent spectra. Science Advances, 01 Jun 2018: Vol. 4, no. 6, eaar7697. DOI: 10.1126/sciadv.aar7697. Optical coherent multidimensional spectroscopy is a powerful technique for unraveling complex and congested spectra by spreading them across multiple dimensions, removing the effects of inhomogeneity, and revealing underlying correlations. As the technique matures, the focus is shifting from understanding the technique itself to using it to probe the underlying dynamics in the system being studied. However, these dynamics can be difficult to discern because they are convolved with the nonlinear optical response of the system. Inspired by methods used to deblur images, we present a method for deconvolving the underlying dynamics from the optical response. To demonstrate the method, we extract the many-particle diffusion Green’s functions for excitons in a semiconductor quantum well from two-dimensional coherent spectra. Optical coherent multidimensional spectroscopy (CMDS) has become an established tool for investigating material properties (1–12). It has been applied to a wide range of materials including photosynthetic complexes (1, 4), colloidal and epitaxial semiconductor quantum dots (6, 13–17), atomic vapors (18), semiconductor quantum wells (7, 11, 12, 19, 20), metal surfaces (21), and two-dimensional (2D) materials (22–24). Physical processes that are accessible through the additional spectral dimensions include energy transfer, coherent coupling, relaxation, and dipole-dipole interaction (4, 14, 18, 24, 25). Furthermore, the homogeneous and inhomogeneous linewidths can be determined separately (7, 23, 26, 27), giving access to microscopic dephasing and distributions. Although CMDS has been successful, it remains a niche technique because of the difficulty in understanding the rich spectra and obtaining insight into underlying material properties.
Any insight is typically realized by comparison to theoretical results, which must incorporate both the elaborate theoretical tools needed to calculate the spectra (7, 28–30) and the particular materials and processes being studied. Furthermore, the information of interest is often obscured by spectral broadening, such that interpretation of the spectra is sometimes described as “blobology.” This situation is further exacerbated by the lack of understanding of CMDS outside the spectroscopy community. To address this challenge, we propose a new paradigm for analyzing coherent multidimensional spectra. Our approach is inspired by methods developed in imaging (31–34) to deconvolve the effects of the imaging instrument from the acquired image. In a similar fashion, the effects due to the spectroscopic method can be deconvolved from the acquired spectra to reveal underlying material properties and dynamics. The deconvolution requires a theoretical description of CMDS that accounts for the presence/absence and strength of coherences, incoherent processes, the nature of the eigenstates, optical selection rules, and functions that parametrize the material model. Details inaccessible to the spectroscopy are averaged out. On the basis of this theoretical description, algorithms can be developed that implement deconvolution techniques to extract the functions parametrizing the model for the material. A few general descriptions, and hence algorithms, should cover a wide range of CMDS methods and materials. In the future, when more algorithms are developed and the approach matures, it may be possible to produce standardized programs that can be used by experimentalists as part of routine data processing. Extracting a material’s properties in this way is easier and more intuitive for a non-expert in CMDS to interpret and allows for direct comparison with theoretical models of the underlying physical phenomena.
The method that we present here as a proof of principle does not assume a specific form for the Green’s function, which describes energy flow in the material, and thus is suitable for a continuum, for noninvertible cases, and for ill-posed, ambiguous cases including noise. A different approach has been proposed for the inversion of 2D spectra for a few coupled pigments (35). In that case, the population transfer matrix is uniquely determined, which assumes a specific form of the Green’s function formulated in terms of relaxation rates and line shape. This assumption restricts the applicability to a few discrete states. Our method is also extendable to the discrete case but is particularly well suited for congested states and even continua, or when dark states are important. However, the presented formulas and algorithms will require modifications and extensions for applications/materials beyond the presented example material type, where the assumptions used in the derivation do not hold. To demonstrate this paradigm for analyzing CMDS, we select the example of spectral diffusion of the exciton distribution in a disordered semiconductor quantum well. For this example, a theoretical model and experimental data already exist (25). Since incoherent exciton relaxation dominates the evolution of the spectra, only the spectral line shapes and relaxation Green’s functions enter the simplified model (see the Supplementary Materials), making it a good candidate for demonstrating this concept. The extracted line-shape function contains the inhomogeneous distribution and the energy-dependent homogeneous line shape. Prior efforts to analyze CMDS peak shapes only extracted constants characterizing inhomogeneous and homogeneous broadening, which assumed certain functional forms (26, 27). An exciton is an electron-hole pair bound by the Coulomb attraction but free to move as a unit.
Confining them in a quantum well increases their oscillator strength, and hence their optical nonlinearity, which provides a strong signal in a CMDS experiment (5, 12, 19, 20). Real quantum wells always have some degree of disorder, primarily due to fluctuations in the well thickness, which results in localization of some states and a mobility edge marking the gradual transition from localized to delocalized states (25, 36). The corresponding variation in energy produces inhomogeneous broadening in the optical spectrum of the excitonic resonance. Spectral diffusion results from spatial migration of excitons among the states, often mediated by acoustic phonons, including across the mobility edge. The specific 2D coherent spectra presented here are produced by exciting a sample with a sequence of three cocircularly polarized pulses with wave vectors k1, k2, and k3, as illustrated in Fig. 1A. Their interaction gives rise to a signal in the direction kI = −k1 + k2 + k3, which corresponds to a photon echo if the pulse with wave vector k1 arrives first. 2D spectra can be generated by measuring the amplitude and phase of the signal as a function of both τ, the time between pulses k1 and k2, and τ’, the time over which the signal is emitted, and by performing a 2D Fourier transform. During the delay T between pulses k2 and k3, exciton relaxation processes can occur. Spectral diffusion from the initial absorption energy to the final emission energy of the excitons can be tracked through the evolution of the 2D spectra as a function of T. Cocircularly polarized pulses are used to avoid the participation of bound biexcitons. Furthermore, coherences during T and exciton-exciton interactions do not significantly influence the spectra. Since the exciton resonance is spectrally narrow, we neglect effects of finite pulse bandwidth. Furthermore, the phonon system is assumed to be a bath with constant temperature. Fig.
1 Pulse sequence of a 2D photon echo and visualization of the optical transformation. (A) The pulse sequence applied in 2D photon echo spectroscopy. (B) Visualization of the transformation from an object O(x′, y′) to an image I(x, y) using the convolution with the PSF, cf. Eq. 2. Considering these assumptions and approximations, the 2D spectrum is given by Eq. 1, based on the sum-over-states approach (28), where the frequencies Ω1 and Ω2 result from Fourier transformation with respect to τ and τ’ (see the Supplementary Materials for the derivation of Eq. 1 and eq. S33 for a detailed discussion of the validity range of Eq. 1). The line-shape function, L(Δω, ω), depends on ω, the exciton frequency, and Δω, the detuning from ω. The line-shape function describes the two-time correlation function of the absorption and emission processes (37). The relaxation Green’s function, G(ω1, ω2; T), is the probability that an excitation absorbed at ω1 is emitted at ω2 after time T. Extracting G(ω1, ω2; T) is the main goal because it captures the exciton redistribution dynamics, which give rise to the spectral diffusion. Equation 1 shows that G(ω1, ω2; T) is convolved in two dimensions with L(Δω, ω), so we must find a way to deconvolve them. The problem of 2D deconvolution has been addressed in image processing. Specifically, the image of an object O(x′, y′) can be represented as I(x, y) = ∫∫ PSF(x − x′, y − y′) O(x′, y′) dx′ dy′ (Eq. 2), where the point spread function (PSF) describes the effect of the optical apparatus (see Fig. 1B). The PSF is often extracted by using the image of a point source to enable reconstruction of the original O(x′, y′) from the image. The structural similarity between Eqs. 1 and 2 suggests that methods used to reconstruct images might be applicable to 2D spectra. However, there is no equivalent to a point source for a 2D spectrum. Thus, we need a different strategy for determining L(Δω, ω) from a spectrum.
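The image-deblurring analogy can be made concrete with a toy numerical example. The sketch below is not the authors' optimization algorithm (which is described in their Supplementary Materials); it only illustrates the idea behind Eq. 2 with a simple regularized (Wiener-style) inverse filter in one dimension, with all signals and parameters invented for the demonstration:

```python
import numpy as np

# Hypothetical 1-D analogue of Eq. 2: measured = PSF convolved with object.
n = 256
x = np.arange(n)
obj = np.zeros(n)
obj[[60, 150]] = [1.0, 0.5]                     # two sharp "point sources"
psf = np.exp(-0.5 * ((x - n // 2) / 6.0) ** 2)  # Gaussian blur kernel
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))           # transfer function of the blur
measured = np.fft.ifft(np.fft.fft(obj) * H).real

# Wiener-style deconvolution: a regularized inverse filter. eps limits the
# amplification of frequencies where the blur has destroyed information.
eps = 1e-4
recovered = np.fft.ifft(
    np.fft.fft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)
).real

print(int(np.argmax(recovered)))  # the sharpest feature returns to index 60
```

The regularization constant plays the role of the constraints in the paper's cost function: without it, dividing by a near-zero transfer function would amplify noise without bound.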
For zero waiting time, T ≈ 0, G(ω1, ω2; T ≈ 0) = δ(ω1 − ω2)/D(ω1), where δ(x) is the Dirac delta distribution and D(ω) is the density of states; the spectrum then reduces to a single integral over a combined line-shape quantity (Eq. 3). The spectrum corresponding to Eq. 3 has a diagonal form, as shown in Fig. 2A, with the inhomogeneous width along the diagonal and the homogeneous width in the cross-diagonal direction. Since Eq. 3 depends only on this combined quantity, we can extract it using an optimization algorithm, as described in the Supplementary Materials. If we can also extract D(ω), then L(Δω, ω) can be determined. Fig. 2 Photon echo spectra and data extracted from T = 0 ps. (A and B) Normalized experimental photon echo spectra (absolute value) for T = 0 ps and T = 20 ps at 5 K. (C) Reconstructed relaxation Green’s function for T = 0 ps at 5 K. (D) Absolute value of the line-shape function L(Δω, ω) for 20 K. (E and F) Rescaled line-shape function L(Δω, ω)/|L(0, ω)| for 5 and 20 K, respectively, with the corresponding oscillator strength L(0, ω) and D(ω) given as insets. In (D), (E), and (F), the gray lines mark the area with low reconstruction error. To extract D(ω), we need to use a different spectroscopic measurement that also depends on D(ω) in conjunction with the line-shape function. One possibility is the linear absorption (Eq. 4), from which we extract the needed combination using an optimization algorithm. Examples of the input spectra and extracted functions used in the deconvolution are given in Fig. 2. Figure 2D shows a reconstructed line-shape function. The absolute error between the calculated spectrum and the experimental data is minimized. As a result, the quality of the extracted line-shape function is only good in areas with large signals. In areas with lower signal strength, the reconstructed line-shape function may have random phase jumps and oscillations, resulting later in artifacts in the reconstructed Green’s function.
L(Δω, ω) includes the line shape along Δω and the oscillator strength distribution multiplied by the density of states along ω. In Fig. 2 (E and F), the line shape L(Δω, ω)/|L(0, ω)|, the oscillator strength L(0, ω)/D(ω), and the density of states D(ω) are plotted separately for temperatures of 5 and 20 K. We focus on high-oscillator-strength areas with low reconstruction error, ranging from 1543.8 to 1545.2 meV, marked by the gray lines. For 5 K, the linewidth increases with increasing energy, since an increased number of scattering states is reachable at higher energies due to the increasing D(ω). For 20 K, the linewidth is broader than for 5 K, as expected, and the width stays almost constant inside the trusted area. This broadening results from the higher bath temperature, which opens most scattering channels for lower exciton states that do not contribute at 5 K. All states inside the distribution have a similar lifetime. The oscillator strength distributions are very similar for both temperatures, as expected. After extracting the line-shape function L(Δω, ω) and the density of states D(ω), we are now ready to extract the relaxation Green’s function, G(ω1, ω2; T), from Eq. 1 using an optimization algorithm. The parts of the Green’s function connected to bright states are successfully extracted, whereas those connected to dark states do not contribute to the signal. Thus, only part of the full Green’s function is successfully extracted, and the overall probability is not conserved, since relaxation involving the dark states and exciton recombination occur. In the following discussion, we focus on the area with sufficient oscillator strength for valid reconstruction (the area between 1543.8 and 1545.2 meV). For energies lower than 1543.8 meV, no excitons and therefore no oscillator strength exist in the quantum well, and thus many spurious features appear in the Green’s functions in this spectral region.
For energies higher than 1545.2 meV, a continuum of excitons with smoothly decreasing oscillator strength exists; the distortion above 1545.2 meV is therefore expected to be smaller, but it can still lead to false results. For T = 0 ps (see Fig. 2C), a perfect reconstruction would lead to a strictly diagonal shape. The deviations from the expected diagonal shape will be used for T > 0 ps as an indicator of regions that are not reliable due to problems such as spurious features or noise. In Fig. 3 (A and B), the Green’s function for the 1s exciton relaxation is shown for T = 10 and 20 ps at a temperature of 20 K. The reconstructed G(ω1, ω2; T) is compared to the simulated result using the theory from the study of Singh et al. (25). The details of the reconstruction of the Green’s function are ambiguous, so it is possible that multiple Green’s functions reproduce the experimental spectrum equally well. The ambiguity represents the resolution limit and is influenced by the width of the line shapes, as well as the discretization and resolution of the experimental data. This ambiguity causes visible (oscillatory) noise in the reconstructed Green’s functions (examples of the ambiguous reconstruction can be found in the Supplementary Materials). Starting at T = 10 ps, off-diagonal contributions (around the horizontal line) in the Green’s function show exciton redistribution (spectral diffusion), almost covering a weak diagonal contribution. After 10 ps, the excitons are broadly distributed over more localized states with larger oscillator strength and lower energy, closer to the initially excited energy and nonequilibrium temperature. After longer delay times, the maxima of the distributions move toward higher energy until the maxima converge at the same final energy for different initial energies (visible as a horizontal feature parallel to the abscissa), which reflects the quasi-equilibrium distribution at the lattice temperature. Fig.
3 Reconstructed and simulated relaxation Green’s function at 20 K. (A and B) Reconstructed relaxation Green’s functions G(ω1, ω2; T) for the initial energy ℏω1 and the final energy ℏω2 for T = 10 ps and T = 20 ps at 20 K. (C and D) Corresponding simulated relaxation Green’s functions. (The exciton–acoustic-phonon scattering in the second-order Born-Markov approximation overestimates the relaxation times by a factor of 2 at 20 K (25); therefore, we use T = 5 and 10 ps from theory for the comparison.) In (A) and (B), the green contour line shows the off-diagonal contribution at T = 0 ps; its presence indicates areas that may contain large artifacts and spurious features. Diagonal and horizontal lines provide visual guidance. The simulated Green’s function shown in Fig. 3 (C and D) shows qualitatively the same behavior and prominent features, such as the disappearing diagonal and the horizontal off-diagonal contribution moving toward higher final energies. It includes scattering of excitons with acoustic phonons and radiative recombination in a disorder potential (25, 36). Since we know that the model with exciton–acoustic-phonon scattering in the second-order Born-Markov approximation overestimates the relaxation times by a factor of 2 at 20 K (25), we use T = 5 and 10 ps from theory for the comparison. Overall, the Green’s function from the simulation is much smoother than the reconstructed Green’s function. At 5 K, in Figs. 2C and 4 (A and B), the diagonal is more dominant than for 20 K. It is clearly visible in the Green’s function for T = 0 and T = 10 ps, since only a few excitons have recombined and scattered. It is much sharper in the Green’s function than in the spectra in Fig. 2 (A and B), highlighting the success of the deconvolution. The decay of the diagonal contribution is slower than in the high-temperature case, with no redistribution of the off-diagonal contribution to higher energies for longer delay times.
Instead, the distribution moves toward lower energies for longer delay times compared to the higher initial excitation, reflecting the lower bath temperature. Limitations in reconstructing the off-diagonal distribution with lower amplitude near the high-amplitude diagonal appear in the plot. The high-amplitude diagonal masks part of the low-amplitude contribution and generates echoes of the diagonal along the off-diagonal, visible above and below the diagonal. Again, the simulated relaxation Green’s function in Fig. 4 (C and D) shows qualitatively the same behavior as the reconstructed one for 5 K; here the quantitative agreement is also better than at 20 K, since the second-order Born-Markov approximation is more suitable for lower temperatures. However, we observe a stronger contribution above the diagonal (relaxation toward higher energies) in the extracted data than in the simulated data. We believe that this is caused partially by the larger reconstruction error from the low oscillator strength in this area, which is seen as a false signal at T = 0 ps as well. A higher (hot-)phonon temperature in the experiment caused by the excitation may be another reason, but we believe that it is mainly caused by reconstruction errors. Fig. 4 Reconstructed and simulated relaxation Green’s function at 5 K. (A and B) Reconstructed relaxation Green’s functions G(ω1, ω2; T) for the initial energy ℏω1 and the final energy ℏω2 for T = 10 ps and T = 20 ps at 5 K. (C and D) Corresponding simulated relaxation Green’s functions. In (A) and (B), the green contour line shows the off-diagonal contribution at T = 0 ps, and its presence indicates areas that may contain large artifacts and spurious features. Diagonal and horizontal lines provide visual guidance. In conclusion, we have extracted Green’s functions from CMDS using deconvolution methods inspired by those developed for image processing.
This approach allows a direct comparison between extracted and simulated Green’s functions, which can be calculated by specialists in materials physics theory without requiring that they also become experts in the spectroscopic method. We illustrated this procedure for spectral diffusion inside the exciton manifold of a semiconductor quantum well, producing good agreement and enhanced insight into the processes occurring in the system. We were able to extract the energy-dependent homogeneous line shape and the oscillator strength. The photon echo signal was generated by a sequence of three actively phase-stabilized cocircularly polarized excitation pulses with wave vectors k1, k2, and k3 (cf. Fig. 1A). The photon echo signal was collected in the direction kI = −k1 + k2 + k3. The signal was heterodyned with a reference pulse and detected through spectral interferometry to measure both amplitude and phase. The signal was recorded as the delay τ was scanned while the delay T was kept constant, and it was then Fourier-transformed with respect to τ. The sample was a four-period 10-nm-wide GaAs quantum well with 10-nm-wide Al0.3Ga0.7As barriers. The excitation was restricted to the heavy-hole exciton resonance with ~150-fs-long pulses. The experiment was carried out for different sample temperatures from 5 K to 20 K using a sample-in-vapor helium flow cryostat [cf. the study of Singh et al. (25) for more details]. The simulation used a sum-over-states treatment analogous to the study of Abramavicius et al. (28) for calculating the spectra. The exciton wave functions were obtained from a numerical solution of the 2D Schrödinger equations in relative and center-of-mass coordinates. The calculation of the wave functions included the Coulomb interaction and a random disorder potential caused by quantum well width fluctuations (36). The exciton wave functions were used for calculating radiative and exciton-phonon scattering rates in the second-order Born-Markov approximation (36).
The density matrix equations of motion were then solved numerically using the Portable, Extensible Toolkit for Scientific Computation (PETSc) library (38, 39) to obtain the relaxation Green’s functions. In the end, the resulting quantities were averaged over several random realizations [cf. the study of Singh et al. (25) for more details]. Extraction algorithms: For extracting a quantity f, such as the line shape or the Green’s function, a cost function C(f) was first defined. The major contribution to the cost function C(f) is the error between the measured quantity (for example, a spectrum) calculated from f and the experimental data. Other contributions to the cost function C(f) ensured specific features of f, such as a specific functional form, smoothness, etc. The cost function was then minimized using the TAO (Toolkit for Advanced Optimization) package from PETSc (38–40), applying suitable constraints. More details about the cost functions used and the extraction procedure are provided in the Supplementary Materials. Supplementary material for this article is available online: Supplementary Text: Reconstruction algorithms; Supplementary Text: Derivation of material model; fig. S1: Different reconstructed Green’s functions G(ω1, ω2; T) at 5 K and T = 10 ps; fig. S2: Contributing pathways to the photon echo signal. Acknowledgments: This work was inspired, in part, by the suggestions of R. Merlin (U. Michigan). Funding: The work at the University of Michigan and JILA was primarily supported by the Chemical Sciences, Geosciences, and Energy Biosciences Division, Office of Basic Energy Science, Office of Science, U.S. Department of Energy under award nos. DE-FG02-02ER15346 and DE-SC0015782. The work at Technische Universität Berlin was supported by the Deutsche Forschungsgemeinschaft through SFB 951 B12 and GRK 1558 A4. Author contributions: S.T.C. conceived the experimental concept. R.S. and M.S. ran the experiments. M.R.
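As a toy illustration of this cost-function approach (using plain NumPy instead of the PETSc/TAO stack, and with a synthetic one-parameter decay standing in for the real spectra; all quantities here are invented), one can define a misfit-plus-penalty cost and minimize it by a simple parameter scan:

```python
import numpy as np

# Synthetic "measurement": a single exponential decay standing in for the data.
t = np.linspace(0.0, 20.0, 200)
gamma_true = 0.3
data = np.exp(-gamma_true * t)

def cost(gamma):
    """Misfit to the data plus a small regularization term, mimicking C(f)."""
    model = np.exp(-gamma * t)
    misfit = np.sum((model - data) ** 2)
    penalty = 1e-6 * gamma ** 2  # e.g. a smoothness/size constraint on f
    return misfit + penalty

# Minimize by brute-force scan (the paper uses the TAO optimizers instead).
gammas = np.linspace(0.05, 1.0, 1901)
best = gammas[np.argmin([cost(g) for g in gammas])]
print(best)  # recovers a rate close to gamma_true = 0.3
```

In the real problem, f is a full function (a line shape or a Green's function) rather than a single scalar, which is why a gradient-based constrained optimizer is needed instead of a scan.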
designed and calculated the simulation and the extraction algorithms. S.T.C. and M.R. wrote the manuscript. All authors discussed the results and commented on the manuscript. The discussions of all authors led to the idea of the extraction algorithms. Competing interests: S.T.C. is an inventor on a patent application related to this work filed by the University of Michigan (no. 20180073856, 15 September 2017). All the other authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems.[1][2] The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English geometer William Kingdon Clifford. The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras.[3]

Introduction and basic properties

A Clifford algebra is a unital associative algebra that contains and is generated by a vector space V over a field K, where V is equipped with a quadratic form Q : V → K. The Clifford algebra Cℓ(V, Q) is the "freest" algebra generated by V subject to the condition[4]

v² = Q(v)1 for all v ∈ V,

where the product on the left is that of the algebra, and the 1 is its multiplicative identity. The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below. The free algebra generated by V may be written as the tensor algebra ⊕n≥0 V ⊗ ⋯ ⊗ V, that is, the direct sum of the tensor product of n copies of V over all n, and so a Clifford algebra would be the quotient of this tensor algebra by the two-sided ideal generated by elements of the form v ⊗ v − Q(v)1 for all elements v ∈ V. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. uv). Its associativity follows from the associativity of the tensor product.
The Clifford algebra has a distinguished subspace V.[5] Such a subspace cannot in general be uniquely determined given only a K-algebra isomorphic to the Clifford algebra. If the characteristic of the ground field K is not 2, then one can rewrite this fundamental identity in the form

uv + vu = 2⟨u, v⟩ for all u, v ∈ V,

where ⟨u, v⟩ = (Q(u + v) − Q(u) − Q(v))/2 is the symmetric bilinear form associated with Q, via the polarization identity. Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case. In particular, if char(K) = 2 it is not true that a quadratic form uniquely determines a symmetric bilinear form satisfying Q(v) = ⟨v, v⟩, nor that every quadratic form admits an orthogonal basis. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed.

As a quantization of the exterior algebra

Clifford algebras are closely related to exterior algebras. Indeed, if Q = 0 then the Clifford algebra Cℓ(V, Q) is just the exterior algebra ⋀(V). For nonzero Q there exists a canonical linear isomorphism between ⋀(V) and Cℓ(V, Q) whenever the ground field K does not have characteristic two. That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication together with the distinguished subspace is strictly richer than the exterior product since it makes use of the extra information provided by Q. The Clifford algebra is a filtered algebra; the associated graded algebra is the exterior algebra. More precisely, Clifford algebras may be thought of as quantizations (cf. quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of the symmetric algebra. Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras.
Universal property and construction

Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either the field of real numbers R, the field of complex numbers C, or a finite field. A Clifford algebra Cℓ(V, Q) is a unital associative algebra over K together with a linear map i : V → Cℓ(V, Q)[6] satisfying i(v)² = Q(v)1 for all v ∈ V, defined by the following universal property: given any unital associative algebra A over K and any linear map j : V → A such that j(v)² = Q(v)1A for all v ∈ V (where 1A denotes the multiplicative identity of A), there is a unique algebra homomorphism f : Cℓ(V, Q) → A such that the following diagram commutes (i.e. such that f ∘ i = j). In characteristic not 2, the quadratic form Q may be replaced by a symmetric bilinear form ⟨·, ·⟩, in which case the requirement on j is

j(v)j(w) + j(w)j(v) = 2⟨v, w⟩1A for all v, w ∈ V.

A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal IQ in T(V) generated by all elements of the form

v ⊗ v − Q(v)1 for all v ∈ V,

and define Cℓ(V, Q) as the quotient algebra Cℓ(V, Q) = T(V)/IQ. The ring product inherited by this quotient is sometimes referred to as the Clifford product[7] to distinguish it from the exterior product and the scalar product. It is then straightforward to show that Cℓ(V, Q) contains V and satisfies the above universal property, so that Cℓ is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cℓ(V, Q). It also follows from this construction that i is injective. One usually drops the i and considers V as a linear subspace of Cℓ(V, Q). The universal characterization of the Clifford algebra shows that the construction of Cℓ(V, Q) is functorial in nature.
Namely, Cℓ can be considered as a functor from the category of vector spaces with quadratic forms (whose morphisms are linear maps preserving the quadratic form) to the category of associative algebras. The universal property guarantees that linear maps between vector spaces (preserving the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras.

Basis and dimension

If the dimension of V over K is n and {e1, ..., en} is an orthogonal basis of (V, Q), then Cℓ(V, Q) is free over K with a basis

{e_{i1} e_{i2} ⋯ e_{ik} | 1 ≤ i1 < i2 < ⋯ < ik ≤ n and 0 ≤ k ≤ n}.

The empty product (k = 0) is defined as the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford algebra is

dim Cℓ(V, Q) = Σ_{k=0}^{n} (n choose k) = 2^n.

Since V comes equipped with a quadratic form, there is a set of privileged bases for V: the orthogonal ones. An orthogonal basis is one such that ⟨e_i, e_j⟩ = 0 for i ≠ j, where ⟨·, ·⟩ is the symmetric bilinear form associated to Q. The fundamental Clifford identity implies that for an orthogonal basis e_i e_j = −e_j e_i for i ≠ j, and e_i² = Q(e_i). This makes manipulation of orthogonal basis vectors quite simple. Given a product of distinct orthogonal basis vectors of V, one can put them into a standard order while including an overall sign determined by the number of pairwise swaps needed to do so (i.e. the signature of the ordering permutation).

Examples: real and complex Clifford algebras

The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms. Each of the algebras Cℓp,q(R) and Cℓn(C) is isomorphic to A or A ⊕ A, where A is a full matrix ring with entries from R, C, or H. For a complete classification of these algebras see the classification of Clifford algebras.

Real numbers

Clifford algebras find application in geometric algebra. Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form

Q(v) = v_1² + ⋯ + v_p² − v_{p+1}² − ⋯ − v_{p+q}²,

where n = p + q is the dimension of the vector space.
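The sign bookkeeping described above (anticommuting distinct generators, contracting e_i² to the scalar Q(e_i)) is easy to mechanize. The sketch below is a hypothetical minimal implementation, not taken from any particular library: blades are tuples of strictly increasing generator indices, and multiplication bubble-sorts the concatenation, flipping the sign on each swap and contracting equal neighbors.

```python
def blade_mul(a, b, sig):
    """Multiply basis blades a and b (tuples of strictly increasing generator
    indices) in a Clifford algebra whose generator squares are sig[i] = Q(e_i).
    Returns (sign, blade)."""
    seq = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            # swapping two distinct anticommuting generators flips the sign
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            # e_i e_i contracts to the scalar Q(e_i)
            sign *= sig[seq[i]]
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

# Cl(0,2): both generators square to -1, reproducing the quaternion relations.
sig = {1: -1, 2: -1}
assert blade_mul((1,), (1,), sig) == (-1, ())      # e1^2 = -1
assert blade_mul((2,), (1,), sig) == (-1, (1, 2))  # e2 e1 = -e1 e2
assert blade_mul((1, 2), (1, 2), sig) == (-1, ())  # (e1 e2)^2 = -1
```

Since every blade is a subset of {1, ..., n}, there are 2^n basis blades, matching the dimension count above.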
The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Cℓp,q(R). The symbol Cℓn(R) means either Cℓn,0(R) or Cℓ0,n(R) depending on whether the author prefers positive-definite or negative-definite spaces.

A standard basis {e1, ..., en} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓp,q(R) will therefore have p basis vectors that square to +1 and q basis vectors that square to −1.

A few low-dimensional cases are:

Cℓ0,0(R) is naturally isomorphic to R since there are no nonzero vectors.
Cℓ0,1(R) is a two-dimensional algebra generated by e1, which squares to −1, and is algebra-isomorphic to C, the field of complex numbers.
Cℓ0,2(R) is a four-dimensional algebra spanned by {1, e1, e2, e1e2}. The latter three elements all square to −1 and anticommute, and so the algebra is isomorphic to the quaternions H.
Cℓ0,3(R) is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, the split-biquaternions.

Complex numbers

One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension n is equivalent to the standard diagonal form

Q(z) = z1² + z2² + ... + zn².

Thus, for each dimension n, up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form. We will denote the Clifford algebra on Cn with the standard quadratic form by Cℓn(C). For the first few cases one finds that

Cℓ0(C) ≅ C, the complex numbers
Cℓ1(C) ≅ C ⊕ C, the bicomplex numbers
Cℓ2(C) ≅ M2(C), the biquaternions

where Mn(C) denotes the algebra of n × n matrices over C.

Examples: constructing quaternions and dual quaternions

In this section, Hamilton's quaternions are constructed as the even subalgebra of the Clifford algebra Cℓ0,3(R).
Let the vector space V be real three-dimensional space R3, and the quadratic form Q be derived from the usual Euclidean metric. Then, for v, w in R3 we have the bilinear form (or scalar product)

v · w = v1w1 + v2w2 + v3w3.

Now introduce the Clifford product of vectors v and w given by

v w + w v = −2(v · w).

This formulation uses the negative sign so the correspondence with quaternions is easily shown.

Denote a set of orthogonal unit vectors of R3 as e1, e2, and e3; then the Clifford product yields the relations

e2 e3 = −e3 e2, e3 e1 = −e1 e3, e1 e2 = −e2 e1, and e1² = e2² = e3² = −1.

The general element of the Clifford algebra Cℓ0,3(R) is given by

A = a0 + a1 e1 + a2 e2 + a3 e3 + a4 e2 e3 + a5 e3 e1 + a6 e1 e2 + a7 e1 e2 e3.

The linear combination of the even degree elements of Cℓ0,3(R) defines the even subalgebra Cℓ[0]0,3(R) with the general element

q = q0 + q1 e2 e3 + q2 e3 e1 + q3 e1 e2.

Identifying i = e2 e3, j = e3 e1 and k = e1 e2 shows that the even subalgebra Cℓ[0]0,3(R) is Hamilton's real quaternion algebra. To see this, compute

i² = (e2 e3)² = e2 e3 e2 e3 = −e2 e2 e3 e3 = −1, and i j = e2 e3 e3 e1 = −e2 e1 = e1 e2 = k.

Dual quaternions

In this section, dual quaternions are constructed as the even Clifford algebra of real four-dimensional space with a degenerate quadratic form.[8][9]

Let the vector space V be real four-dimensional space R4, and let the quadratic form Q be a degenerate form derived from the Euclidean metric on R3. For v, w in R4 introduce the degenerate bilinear form

d(v, w) = v1w1 + v2w2 + v3w3.

This degenerate scalar product projects distance measurements in R4 onto the R3 hyperplane. The Clifford product of vectors v and w is given by

v w + w v = −2 d(v, w).

Note the negative sign is introduced to simplify the correspondence with quaternions.

Denote a set of mutually orthogonal unit vectors of R4 as e1, e2, e3 and e4; then the Clifford product yields the relations

em en = −en em for m ≠ n, e1² = e2² = e3² = −1, and e4² = 0.

The general element of the Clifford algebra Cℓ(R4, d) has 16 components. The linear combination of the even degree elements defines the even subalgebra Cℓ[0](R4, d). The basis elements can be identified with the quaternion basis elements i, j, k and the dual unit ε as

i = e2 e3, j = e3 e1, k = e1 e2, ε = e1 e2 e3 e4.

This provides the correspondence of Cℓ[0](R4, d) with the dual quaternion algebra.
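The quaternion relations claimed for the even subalgebra of Cℓ0,3(R) can be checked mechanically. Below is a small, self-contained Python sketch (our own; `blade_mul` simply encodes the anticommutation and squaring rules for an orthogonal basis, and the identifications i = e2e3, j = e3e1, k = e1e2 are the ones used above).

```python
def blade_mul(a, b, sq):
    """Multiply basis blades (increasing index tuples); sq[i] = square of e_i."""
    coef, result = 1, list(a)
    for i in b:
        k = len(result)
        while k > 0 and result[k - 1] > i:  # anticommute e_i leftward
            k -= 1
            coef = -coef
        if k > 0 and result[k - 1] == i:    # contract e_i e_i = sq[i]
            coef *= sq[i]
            result.pop(k - 1)
        else:
            result.insert(k, i)
    return coef, tuple(result)

sq = {1: -1, 2: -1, 3: -1}                  # Cl(0,3): every generator squares to -1

def mul(x, y):
    c, blade = blade_mul(x[1], y[1], sq)
    return (x[0] * y[0] * c, blade)

i = (1, (2, 3))    # i = e2 e3
j = (-1, (1, 3))   # j = e3 e1 = -e1 e3
k = (1, (1, 2))    # k = e1 e2
minus_one = (-1, ())

assert mul(i, i) == minus_one and mul(j, j) == minus_one and mul(k, k) == minus_one
assert mul(i, j) == k and mul(j, k) == i and mul(k, i) == j   # Hamilton's relations
```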
To see this, compute products such as ε i and i ε: the exchanges of e1 and e4 alternate signs an even number of times, which shows that the dual unit ε commutes with the quaternion basis elements i, j, and k.

Examples: in small dimension

Let K be any field of characteristic not 2.

Rank 1

If Q has diagonalization ⟨a⟩, that is, there is a non-zero vector x such that Q(x) = a, then Cℓ(V, Q) is a K-algebra generated by an element x satisfying x² = a, so it is the étale quadratic algebra K[X]/(X² − a). In particular, if a = 0 (that is, Q is the zero quadratic form) then Cℓ(V, Q) is the dual numbers algebra over K. If a is a non-zero square in K, then Cℓ(V, Q) ≅ K ⊕ K. Otherwise, Cℓ(V, Q) is the quadratic field extension K(√a) of K.

Rank 2

If Q has diagonalization ⟨a, b⟩ with non-zero a and b (which always exists if Q is non-degenerate), then Cℓ(V, Q) is a K-algebra generated by elements x and y satisfying x² = a, y² = b and xy = −yx. Thus Cℓ(V, Q) is the (generalized) quaternion algebra (a, b)K. We retrieve Hamilton's quaternions when a = b = −1, since H = (−1, −1)R. As a special case, if some x in V satisfies Q(x) = 1, then Cℓ(V, Q) ≅ M2(K).

Relation to the exterior algebra

Given a vector space V one can construct the exterior algebra ⋀(V), whose definition is independent of any quadratic form on V. It turns out that if K does not have characteristic 2 then there is a natural isomorphism between ⋀(V) and Cℓ(V, Q) considered as vector spaces (and there exists an isomorphism in characteristic 2, which may not be natural). This is an algebra isomorphism if and only if Q = 0. One can thus consider the Clifford algebra Cℓ(V, Q) as an enrichment (or more precisely, a quantization, cf. the Introduction) of the exterior algebra on V with a multiplication that depends on Q (one can still define the exterior product independently of Q).

The easiest way to establish the isomorphism is to choose an orthogonal basis {e1, ..., en} for V and extend it to a basis for Cℓ(V, Q) as described above.
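As a concrete aside to the rank-1 case above: when a = 0 the Clifford algebra is the dual numbers K[X]/(X²), whose nilpotent generator supports forward-mode differentiation. A toy Python model follows (the class name `Dual` and the example function are ours, not the article's).

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 = 0, i.e. R[X]/(X^2)."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        return Dual(self.re + o.re, self.eps + o.eps)
    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)

x = Dual(2.0, 1.0)          # 2 + eps
y = x * x + x * Dual(3.0)   # f(x) = x^2 + 3x evaluated at 2 + eps
print(y.re, y.eps)          # f(2) = 10.0, and f'(2) = 7.0 read off the eps part
```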
The map Cℓ(V, Q) → ⋀(V) is determined by

e_{i1} e_{i2} ... e_{ik} ↦ e_{i1} ∧ e_{i2} ∧ ... ∧ e_{ik}.

Note that this only works if the basis {e1, ..., en} is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism.

If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions fk : V × ... × V → Cℓ(V, Q) by

fk(v1, ..., vk) = (1/k!) Σ_{σ} sgn(σ) v_{σ(1)} v_{σ(2)} ... v_{σ(k)},

where the sum is taken over the symmetric group on k elements. Since fk is alternating it induces a unique linear map ⋀k(V) → Cℓ(V, Q). The direct sum of these maps gives a linear map between ⋀(V) and Cℓ(V, Q). This map can be shown to be a linear isomorphism, and it is natural.

A more sophisticated way to view the relationship is to construct a filtration on Cℓ(V, Q). Recall that the tensor algebra T(V) has a natural filtration: F0 ⊂ F1 ⊂ F2 ⊂ ..., where Fk contains sums of tensors with order ≤ k. Projecting this down to the Clifford algebra gives a filtration on Cℓ(V, Q). The associated graded algebra is naturally isomorphic to the exterior algebra ⋀(V). Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing complements of Fk in Fk+1 for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two.

Grading

In the following, assume that the characteristic is not 2.[10]

Clifford algebras are Z2-graded algebras (also known as superalgebras). Indeed, the linear map on V defined by v ↦ −v (reflection through the origin) preserves the quadratic form Q and so by the universal property of Clifford algebras extends to an algebra automorphism

α : Cℓ(V, Q) → Cℓ(V, Q).

Since α is an involution (i.e. it squares to the identity) one can decompose Cℓ(V, Q) into positive and negative eigenspaces of α:

Cℓ(V, Q) = Cℓ[0](V, Q) ⊕ Cℓ[1](V, Q), where Cℓ[i](V, Q) = {x ∈ Cℓ(V, Q) | α(x) = (−1)^i x}.

Since α is an automorphism it follows that

Cℓ[i](V, Q) Cℓ[j](V, Q) = Cℓ[i+j](V, Q),

where the bracketed superscripts are read modulo 2. This gives Cℓ(V, Q) the structure of a Z2-graded algebra. The subspace Cℓ[0](V, Q) forms a subalgebra of Cℓ(V, Q), called the even subalgebra.
The subspace Cℓ[1](V, Q) is called the odd part of Cℓ(V, Q) (it is not a subalgebra). This Z2-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main involution or grade involution. Elements that are pure in this Z2-grading are simply said to be even or odd.

Remark. In characteristic not 2 the underlying vector space of Cℓ(V, Q) inherits an N-grading and a Z-grading from the canonical isomorphism with the underlying vector space of the exterior algebra ⋀(V).[11] It is important to note, however, that this is a vector space grading only. That is, Clifford multiplication does not respect the N-grading or Z-grading, only the Z2-grading: for instance if Q(v) ≠ 0, then v ∈ Cℓ1(V, Q), but v² ∈ Cℓ0(V, Q), not in Cℓ2(V, Q). Happily, the gradings are related in the natural way: Z2 ≅ N/2N ≅ Z/2Z. Further, the Clifford algebra is Z-filtered. The degree of a Clifford number usually refers to the degree in the N-grading.

The even subalgebra Cℓ[0](V, Q) of a Clifford algebra is itself isomorphic to a Clifford algebra.[12][13] If V is the orthogonal direct sum of a vector a of nonzero norm Q(a) and a subspace U, then Cℓ[0](V, Q) is isomorphic to Cℓ(U, −Q(a)Q), where −Q(a)Q is the form Q restricted to U and multiplied by −Q(a). In particular over the reals this implies that

Cℓ[0]p,q(R) ≅ Cℓp,q−1(R) for q > 0, and Cℓ[0]p,q(R) ≅ Cℓq,p−1(R) for p > 0.

In the negative-definite case this gives an inclusion Cℓ0,n−1(R) ⊂ Cℓ0,n(R), which extends the sequence

R ⊂ C ⊂ H ⊂ H ⊕ H ⊂ ...

Likewise, in the complex case, one can show that the even subalgebra of Cℓn(C) is isomorphic to Cℓn−1(C).

Antiautomorphisms

In addition to the automorphism α, there are two antiautomorphisms that play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an antiautomorphism that reverses the order in all products:

v1 ⊗ v2 ⊗ ... ⊗ vk ↦ vk ⊗ ... ⊗ v2 ⊗ v1.

Since the ideal IQ is invariant under this reversal, this operation descends to an antiautomorphism of Cℓ(V, Q) called the transpose or reversal operation, denoted by xt.
The transpose is an antiautomorphism: (xy)t = yt xt. The transpose operation makes no use of the Z2-grading, so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford conjugation, denoted x̄ = α(xt). Of the two antiautomorphisms, the transpose is the more fundamental.[14]

Note that all of these operations are involutions. One can show that they act as ±1 on elements which are pure in the Z-grading. In fact, all three operations depend only on the degree modulo 4. That is, if x is pure with degree k, then the signs are given by the following table:

k mod 4                        0   1   2   3
α(x)   (sign (−1)^k)           +   −   +   −
xt     (sign (−1)^(k(k−1)/2))  +   +   −   −
x̄      (sign (−1)^(k(k+1)/2))  +   −   −   +

Clifford scalar product

When the characteristic is not 2, the quadratic form Q on V can be extended to a quadratic form on all of Cℓ(V, Q) (which we also denote by Q). A basis-independent definition of one such extension is

Q(x) = ⟨xt x⟩,

where ⟨a⟩ denotes the scalar part of a (the degree 0 part in the Z-grading). One can show that

Q(v1 v2 ... vk) = Q(v1) Q(v2) ... Q(vk),

where the vi are elements of V – this identity is not true for arbitrary elements of Cℓ(V, Q).

The associated symmetric bilinear form on Cℓ(V, Q) is given by

⟨x, y⟩ = ⟨xt y⟩.

One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cℓ(V, Q) is nondegenerate if and only if it is nondegenerate on V.

It is not hard to verify that the operator of left/right Clifford multiplication by the transpose xt of an element x is the adjoint of left/right Clifford multiplication by x itself with respect to this inner product. That is,

⟨x y, z⟩ = ⟨y, xt z⟩, and ⟨y x, z⟩ = ⟨y, z xt⟩.

Structure of Clifford algebras

In this section we assume that the characteristic is not 2, the vector space V is finite-dimensional and that the associated symmetric bilinear form of Q is non-singular. A central simple algebra over K is a matrix algebra over a (finite-dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions.
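The mod-4 sign table for the grade involution, transpose, and Clifford conjugation is easy to verify numerically; the check below is our own, not part of the article.

```python
# Signs of alpha, transpose, and conjugation on a degree-k blade.
alpha = [(-1) ** k for k in range(8)]                       # (-1)^k
transpose = [(-1) ** (k * (k - 1) // 2) for k in range(8)]  # (-1)^{k(k-1)/2}
conj = [(-1) ** (k * (k + 1) // 2) for k in range(8)]       # (-1)^{k(k+1)/2}

print(alpha[:4])      # [1, -1, 1, -1]
print(transpose[:4])  # [1, 1, -1, -1]
print(conj[:4])       # [1, -1, -1, 1]

# each pattern repeats with period 4, i.e. depends only on k mod 4
assert all(s[k] == s[k % 4] for s in (alpha, transpose, conj) for k in range(8))
```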
• If V has even dimension then Cℓ(V, Q) is a central simple algebra over K.
• If V has even dimension then Cℓ[0](V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cℓ(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cℓ[0](V, Q) is a central simple algebra over K.

The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and (−1)^(dim(U)/2) d V, which is the space V with its quadratic form multiplied by (−1)^(dim(U)/2) d. Over the reals, this implies in particular that

Cℓp+2,q(R) ≅ M2(R) ⊗ Cℓq,p(R)
Cℓp+1,q+1(R) ≅ M2(R) ⊗ Cℓp,q(R)
Cℓp,q+2(R) ≅ H ⊗ Cℓq,p(R).

These formulas can be used to find the structure of all real Clifford algebras and all complex Clifford algebras; see the classification of Clifford algebras.

Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends only on the signature (p − q) mod 8. This is an algebraic form of Bott periodicity.

Clifford group

The class of Clifford groups (a.k.a. Clifford–Lipschitz groups[15]) was discovered by Rudolf Lipschitz.[16]

In this section we assume that V is finite-dimensional and the quadratic form Q is nondegenerate.

An action on the elements of a Clifford algebra by its group of units may be defined in terms of a twisted conjugation: twisted conjugation by x maps y ↦ x y α(x)^(−1), where α is the main involution defined above.
The Clifford group Γ is defined to be the set of invertible elements x that stabilize the set of vectors under this action,[17] meaning that for all v in V we have

x v α(x)^(−1) ∈ V.

This formula also defines an action of the Clifford group on the vector space V that preserves the quadratic form Q, and so gives a homomorphism from the Clifford group to the orthogonal group. The Clifford group contains all elements r of V for which Q(r) is invertible in K, and these act on V by the corresponding reflections that take v to

v − 2⟨v, r⟩ r / Q(r),

where ⟨·, ·⟩ is the bilinear form associated to Q. (In characteristic 2 these are called orthogonal transvections rather than reflections.)

If V is a finite-dimensional real vector space with a non-degenerate quadratic form then the Clifford group maps onto the orthogonal group of V with respect to the form (by the Cartan–Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to the exact sequences

1 → K× → Γ → OV(K) → 1,
1 → K× → Γ0 → SOV(K) → 1.

Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm.

Spinor norm

In arbitrary characteristic, the spinor norm Q is defined on the Clifford group by

Q(x) = xt x.

It is a homomorphism from the Clifford group to the group K× of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra. Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ1. The difference is not very important in characteristic other than 2.

The nonzero elements of K have spinor norm in the group (K×)² of squares of nonzero elements of the field K. So when V is finite-dimensional and non-singular we get an induced map from the orthogonal group of V to the group K×/(K×)², also called the spinor norm. The spinor norm of the reflection about r, for any vector r, has image Q(r) in K×/(K×)², and this property uniquely defines it on the orthogonal group.
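The reflection formula above can be checked numerically. The sketch below is our own (an arbitrary signature-(2, 1) form over the reals, with hypothetical names `Q` and `reflect`); it confirms that v ↦ v − 2⟨v, r⟩r/Q(r) preserves the quadratic form and sends r itself to −r.

```python
import numpy as np

eta = np.diag([1.0, 1.0, -1.0])               # bilinear form of signature (2, 1)
Q = lambda v: v @ eta @ v

def reflect(v, r):
    """Reflection of v in the hyperplane orthogonal to a non-null vector r."""
    return v - 2 * (v @ eta @ r) / Q(r) * r

rng = np.random.default_rng(0)
r = rng.normal(size=3)                        # a (generically non-null) vector
for _ in range(5):
    v = rng.normal(size=3)
    assert np.isclose(Q(reflect(v, r)), Q(v)) # Q is preserved
assert np.allclose(reflect(r, r), -r)         # r is sent to -r
```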
This gives the exact sequences

1 → {±1} → PinV(K) → OV(K) → K×/(K×)²,
1 → {±1} → SpinV(K) → SOV(K) → K×/(K×)².

Note that in characteristic 2 the group {±1} has just one element.

From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ2 for the algebraic group of square roots of 1 (over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence

1 → μ2 → PinV → OV → 1

yields a long exact sequence on cohomology, which begins

1 → H0(μ2; K) → H0(PinV; K) → H0(OV; K) → H1(μ2; K).

The 0th Galois cohomology group of an algebraic group with coefficients in K is just the group of K-valued points: H0(G; K) = G(K), and H1(μ2; K) ≅ K×/(K×)², which recovers the previous sequence, where the spinor norm is the connecting homomorphism H0(OV; K) → H1(μ2; K).

Spin and Pin groups

In this section we assume that V is finite-dimensional and its bilinear form is non-singular. (If K has characteristic 2 this implies that the dimension of V is even.)

The Pin group PinV(K) is the subgroup of the Clifford group Γ of elements of spinor norm 1, and similarly the Spin group SpinV(K) is the subgroup of elements of Dickson invariant 0 in PinV(K). When the characteristic is not 2, these are the elements of determinant 1. The Spin group usually has index 2 in the Pin group.

Recall from the previous section that there is a homomorphism from the Clifford group onto the orthogonal group. We define the special orthogonal group to be the image of Γ0. If K does not have characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the special orthogonal group is the set of elements of Dickson invariant 0.

There is a homomorphism from the Pin group to the orthogonal group. The image consists of the elements of spinor norm 1 ∈ K×/(K×)². The kernel consists of the elements +1 and −1, and has order 2 unless K has characteristic 2.
Similarly there is a homomorphism from the Spin group to the special orthogonal group of V. In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3. Further, the kernel of this homomorphism consists of 1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Note, however, that simple connectedness of the spin group fails in general: if V is Rp,q for p and q both at least 2, then the spin group is not simply connected. In this case the algebraic group Spinp,q is simply connected as an algebraic group, even though its group of real valued points Spinp,q(R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups.

Spinors

Clifford algebras Cℓp,q(C), with p + q = 2n even, are matrix algebras which have a complex representation of dimension 2^n. By restricting to the group Pinp,q(R) we get a complex representation of the Pin group of the same dimension, called the spin representation. If we restrict this to the spin group Spinp,q(R), then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2^(n−1).

If p + q = 2n + 1 is odd, then the Clifford algebra Cℓp,q(C) is a sum of two matrix algebras, each of which has a representation of dimension 2^n, and these are also both representations of the Pin group Pinp,q(R). On restriction to the spin group Spinp,q(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2^n.
More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors.

Real spinors

To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The Pin group, Pinp,q, is the set of invertible elements in Cℓp,q that can be written as a product of unit vectors. Comparing with the above concrete realizations of the Clifford algebras, the Pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p, q). The Spin group consists of those elements of Pinp,q which are products of an even number of unit vectors. Thus by the Cartan–Dieudonné theorem, Spin is a cover of the group of proper rotations SO(p, q).

Let α : Cℓ → Cℓ be the automorphism which is given by the mapping v ↦ −v acting on pure vectors. Then in particular, Spinp,q is the subgroup of Pinp,q whose elements are fixed by α. Let

Cℓ[0]p,q = {x ∈ Cℓp,q | α(x) = x}.

(These are precisely the elements of even degree in Cℓp,q.) Then the spin group lies within Cℓ[0]p,q.

The irreducible representations of Cℓp,q restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representations are induced in this manner. Thus the two sets of representations coincide. For the same reasons, the irreducible representations of the spin group coincide with the irreducible representations of Cℓ[0]p,q.

To classify the pin representations, one need only appeal to the classification of Clifford algebras.
To find the spin representations (which are representations of the even subalgebra), one can first make use of either of the isomorphisms (see above)

Cℓ[0]p,q ≈ Cℓp,q−1, for q > 0,
Cℓ[0]p,q ≈ Cℓq,p−1, for p > 0,

and realize a spin representation in signature (p, q) as a pin representation in either signature (p, q − 1) or (q, p − 1).

Differential geometry

One of the principal applications of the exterior algebra is in differential geometry, where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric. Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spinc manifolds.

Physics

Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra with a basis generated by the matrices γ0, ..., γ3, called Dirac matrices, which have the property that

γi γj + γj γi = 2ηij,

where η is the matrix of a quadratic form of signature (1, 3) (or (3, 1), corresponding to the two equivalent choices of metric signature). These are exactly the defining relations for the Clifford algebra Cℓ1,3(R), whose complexification is Cℓ1,3(R)C, which, by the classification of Clifford algebras, is isomorphic to the algebra of 4 × 4 complex matrices Cℓ4(C) ≈ M4(C). However, it is best to retain the notation Cℓ1,3(R)C, since any transformation that takes the bilinear form to the canonical form is not a Lorentz transformation of the underlying spacetime.

The Clifford algebra of spacetime used in physics thus has more structure than Cℓ4(C). It has in addition a set of preferred transformations – Lorentz transformations.
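The defining anticommutation relation of the Dirac matrices can be verified directly. The construction below uses the Dirac basis, one common convention (the representation choice is ours, not prescribed by the article), and checks {γμ, γν} = 2ημν I for signature (1, 3).

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1, -1, -1, -1])                        # metric of signature (1, 3)

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```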
Whether complexification is necessary to begin with depends in part on conventions used and in part on how much one wants to incorporate straightforwardly, but complexification is most often necessary in quantum mechanics, where the spin representation of the Lie algebra so(1, 3) sitting inside the Clifford algebra conventionally requires a complex Clifford algebra. For reference, the spin Lie algebra is given by

σμν = −(i/4)[γμ, γν].

This is in the (3, 1) convention, hence fits in Cℓ3,1(R)C.[18]

The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the Clifford algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears.

The use of Clifford algebras to describe quantum theory has been advanced among others by Mario Schönberg,[19] by David Hestenes in terms of geometric calculus, by David Bohm and Basil Hiley and co-workers in form of a hierarchy of Clifford algebras, and by Elio Conte et al.[20][21]

Computer vision

Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al.[22] propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume) and to vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier transform. Based on these vectors, action filters are synthesized in the Clifford Fourier domain, and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television.
While this article focuses on a Clifford algebra of a vector space over a field, and more specifically real and complex numbers, the definition extends without change to a module over any unital, associative, commutative ring.[3]

See also

Notes

1. ^ W. K. Clifford, "Preliminary sketch of bi-quaternions", Proc. London Math. Soc. Vol. 4 (1873) pp. 381–395
2. ^ W. K. Clifford, Mathematical Papers, (ed. R. Tucker), London: Macmillan, 1882.
3. ^ a b See for example Z. Oziewicz, Sz. Sitarczyk: Parallel treatment of Riemannian and symplectic Clifford algebras. In: Artibano Micali, Roger Boudet, Jacques Helmstetter (eds.): Clifford Algebras and their Applications in Mathematical Physics, Kluwer Academic Publishers, ISBN 0-7923-1623-1, 1992, p. 83
4. ^ Mathematicians who work with real Clifford algebras and prefer positive definite quadratic forms (especially those working in index theory) sometimes use a different choice of sign in the fundamental Clifford identity. That is, they take v² = −Q(v). One must replace Q with −Q in going from one convention to the other.
5. ^ P. Lounesto (1996), "Counterexamples in Clifford algebras with CLICAL", Clifford Algebras with Numeric and Symbolic Computations: 3–30, or abridged version
6. ^ (Vaz & da Rocha 2016) make it clear that the map i (γ in this quote) is included in the structure of a Clifford algebra by defining it as "The pair (A, γ) is a Clifford algebra for the quadratic space (V, g) when A is generated as an algebra by {γ(v) | v ∈ V} and {a1A | a ∈ R}, and γ satisfies γ(v)γ(u) + γ(u)γ(v) = 2g(v, u) for all v, u ∈ V."
7. ^ Lounesto 2001, §1.8.
8. ^ J. M. McCarthy, An Introduction to Theoretical Kinematics, pp. 62–5, MIT Press 1990.
9. ^ O. Bottema and B. Roth, Theoretical Kinematics, North Holland Publ. Co., 1979
10. ^ Thus the group algebra K[Z/2] is semisimple and the Clifford algebra splits into eigenspaces of the main involution.
11.
^ The Z-grading is obtained from the N-grading by appending copies of the zero subspace indexed with the negative integers.
12. ^ Technically, it does not have the full structure of a Clifford algebra without a designated vector subspace, and so is isomorphic as an algebra, but not as a Clifford algebra.
13. ^ We are still assuming that the characteristic is not 2.
14. ^ The opposite is true when using the alternate (−) sign convention for Clifford algebras: it is the conjugate which is more important. In general, the meanings of conjugation and transpose are interchanged when passing from one sign convention to the other. For example, in the convention used here the inverse of a vector is given by v^(−1) = vt / Q(v), while in the (−) convention it is given by v^(−1) = v̄ / Q(v).
15. ^ Vaz & da Rocha 2016, p. 126.
16. ^ Lounesto 2001, §17.2.
17. ^ Perwass, Christian (2009), Geometric Algebra with Applications in Engineering, Springer Science & Business Media, ISBN 978-3-540-89068-3, §3.3.1
18. ^ Weinberg 2002
19. ^ See the references to Schönberg's papers of 1956 and 1957 as described in the section "The Grassmann–Schönberg algebra" of: A. O. Bolivar, Classical limit of fermions in phase space, J. Math. Phys. 42, 4020 (2001) doi:10.1063/1.1386411
20. ^ Conte, Elio (14 Nov 2007). "A Quantum-Like Interpretation and Solution of Einstein, Podolsky, and Rosen Paradox in Quantum Mechanics". arXiv:0711.2260 [quant-ph].
21. ^ Elio Conte: On some considerations of mathematical physics: May we identify Clifford algebra as a common algebraic structure for classical diffusion and Schrödinger equations? Adv. Studies Theor. Phys., vol. 6, no. 26 (2012), pp. 1289–1307
22. ^ Rodriguez, Mikel; Shah, M (2008). "Action MACH: A Spatio-Temporal Maximum Average Correlation Height Filter for Action Classification". Computer Vision and Pattern Recognition (CVPR).

Further reading

External links
Weinberg on the measurement problem

One common answer is that, in a measurement, the spin (or whatever else is measured) is put in an interaction with a macroscopic environment that jitters in an unpredictable way…This is called decoherence…But this begs the question. If the deterministic Schrödinger equation governs the changes through time not only of the spin but also of the measuring apparatus and the physicist using it, then the results of measurement should not in principle be unpredictable. So we still have to ask, how do probabilities get into quantum mechanics?

One response to this puzzle was given in the 1920s by Niels Bohr, in what came to be called the Copenhagen interpretation of quantum mechanics…This answer is now widely felt to be unacceptable. There seems no way to locate the boundary between the realms in which, according to Bohr, quantum mechanics does or does not apply. As it happens, I was a graduate student at Bohr’s institute in Copenhagen, but he was very great and I was very young, and I never had a chance to ask him about this.

The instrumentalist approach is a descendant of the Copenhagen interpretation, but instead of imagining a boundary beyond which reality is not described by quantum mechanics, it rejects quantum mechanics altogether as a description of reality…. It seems to me that the trouble with this approach is not only that it gives up on an ancient aim of science: to say what is really going on out there. It is a surrender of a particularly unfortunate kind. In the instrumentalist approach, we have to assume, as fundamental laws of nature, the rules (such as the Born rule I mentioned earlier) for using the wave function to calculate the probabilities of various results when humans make measurements. Thus humans are brought into the laws of nature at the most fundamental level.
According to Eugene Wigner, a pioneer of quantum mechanics, “it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.”… These problems are partly avoided in the realist—as opposed to the instrumentalist—approach to quantum mechanics. Here one takes the wave function and its deterministic evolution seriously as a description of reality. But this raises other problems… In the realist approach the history of the world is endlessly splitting; it does so every time a macroscopic body becomes tied in with a choice of quantum states. This inconceivably huge variety of histories has provided material for science fiction, and it offers a rationale for a multiverse, in which the particular cosmic history in which we find ourselves is constrained by the requirement that it must be one of the histories in which conditions are sufficiently benign to allow conscious beings to exist. But the vista of all these parallel histories is deeply unsettling, and like many other physicists I would prefer a single history. There is another thing that is unsatisfactory about the realist approach, beyond our parochial preferences….We can still talk of probabilities as the fractions of the time that various possible results are found when measurements are performed many times in any one history; but the rules that govern what probabilities are observed would have to follow from the deterministic evolution of the whole multiverse…Several attempts following the realist approach have come close to deducing rules like the Born rule that we know work well experimentally, but I think without final success. 
Lately I have been thinking about a possible experimental search for signs of departure from ordinary quantum mechanics in atomic clocks… Since Weinberg is fine with probabilities but rejects instrumentalist accounts, I wonder if he would accept the well-defined and non-anthropocentric reality — evolving stochastically — that would automatically be defined by a uniquely preferred set-selection principle (if one could be found). On the other hand, fellow titan Sheldon Glashow is not so impressed with the measurement problem. Bookmark the permalink. One Comment 1. Sheldon Glashow’s take on the measurement problem is really sad and really frustrating. It’s really just an argument from authority: he cites other famous physicists calling the Everett interpretation rubbish. He doesn’t provide a single reasoned argument against it. I can understand a layman appealing to authority, but Glashow really should know better. I find Weinberg much more palatable. At least he understands the big structure of the arguments at stake. By the way, Weinberg also devoted an entire chapter to the measurement problem in his textbook, Lectures on Quantum Mechanics. Leave a Reply
Time Control Technologies by Dr. David Lewis Anderson from AndersonInstitute Website The ability to control time in both a forward and backwards direction is possible within the laws of our mathematics and physics. The chart below compares ten different technologies and methods. Key characteristics are identified for each and described below. Under each key characteristic is a column with either a solid or empty circle. A solid circle indicates that a key characteristic is supported by the indicated technology or method; an empty circle indicates that it is not. • “Time Control” indicates whether travel to future, past, or both are possible. • “Matter Transport” is solid if both matter and information can be transported, empty if only information can be transported. • “Tech Viability” is solid if the technology or method is viable with present state-of-the-art technology or within two generations. • “Possible Without Exotic Materials” is solid if materials required are available today or within two generations. • “Relatively Low Input Power” is solid if time control is achievable within power generation capabilities available today or within two generations. The time control technologies and methods above include the following: 1. Quantum Tunneling 2. Near-Lightspeed 3. Alcubierre Warp Drive 4. Faster-than-Light 5. Time-warped Field 6. Circulating Light Beams 7. Wormholes 8. Cosmic Strings 9. Tipler Cylinder 10. Casimir Effect Quantum Tunneling is an evanescent wave coupling effect that occurs in quantum mechanics. The correct wavelength combined with the proper tunneling barrier makes it possible to pass signals faster than light, backwards in time. In the diagram above light pulses consisting of waves of various frequencies are shot toward a 10 centimeter chamber containing cesium vapor. All information about the incoming pulse is contained in the leading edge of its waves.
This information is all the cesium atoms need to replicate the pulse and send it out the other side. At the same time it is believed an opposite wave rebounds inside the chamber cancelling out the main part of the incoming pulse as it enters the chamber. By this time the new pulse, moving faster than the speed of light, has traveled about 60 feet beyond the chamber. Essentially the pulse has left the chamber before it finished entering, traveling backwards in time. The key characteristics of the application of quantum tunneling for time control and time travel are presented in the picture below. This is followed by more detail describing the phenomenon below. Wave-mechanical tunneling (also called quantum-mechanical tunneling, quantum tunneling, and the tunnel effect) is an evanescent wave coupling effect that occurs in the context of quantum mechanics because the behavior of particles is governed by Schrödinger’s wave-equation. All wave equations exhibit evanescent wave coupling effects if the conditions are right. Wave coupling effects mathematically equivalent to those called “tunneling” in quantum mechanics can occur with Maxwell’s wave-equation (both with light and with microwaves), and with the common non-dispersive wave-equation often applied (for example) to waves on strings and to acoustics. For these effects to occur there must be a situation where a thin region of “medium type 2” is sandwiched between two regions of “medium type 1”, and the properties of these media have to be such that the wave equation has “traveling-wave” solutions in medium type 1, but “real exponential solutions” (rising and falling) in medium type 2. In optics, medium type 1 might be glass, medium type 2 might be vacuum. 
In quantum mechanics, in connection with motion of a particle, medium type 1 is a region of space where the particle total energy is greater than its potential energy; medium type 2 is a region of space (known as the “barrier”) where the particle total energy is less than its potential energy. If conditions are right, amplitude from a traveling wave, incident on medium type 2 from medium type 1, can “leak through” medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Quantum Tunneling Introduction The scale on which these “tunneling-like phenomena” occur depends on the wavelength of the traveling wave. For electrons the thickness of “medium type 2” (called in this context “the tunneling barrier”) is typically a few nanometers; for alpha-particles tunneling out of a nucleus the thickness is very much less; for the analogous phenomenon involving light the thickness is very much greater. With Schrödinger’s wave-equation, the characteristic that defines the two media discussed above is the kinetic energy of the particle if it is considered as an object that could be located at a point. In medium type 1 the kinetic energy would be positive; in medium type 2 the kinetic energy would be negative. There is no inconsistency in this, because particles cannot physically be located at a point: they are always spread out (“delocalized”) to some extent, and the kinetic energy of the delocalized object is always positive. What is true is that it is sometimes mathematically convenient to treat particles as behaving like points, particularly in the context of Newton’s Second Law and classical mechanics generally.
In the past, people thought that the success of classical mechanics meant that particles could always and in all circumstances be treated as if they were located at points. But there never was any convincing experimental evidence that this was true when very small objects and very small distances are involved, and we now know that this viewpoint was mistaken. However, because it is still traditional to teach students early in their careers that particles behave like points, it sometimes comes as a big surprise for people to discover that it is well established that traveling physical particles always physically obey a wave-equation (even when it is convenient to use the mathematics of moving points). Clearly, a hypothetical classical point particle analyzed according to Newton’s Laws could not enter a region where its kinetic energy would be negative. But a real delocalized object, which obeys a wave-equation and always has positive kinetic energy, can leak through such a region if conditions are right. An approach to tunneling that avoids mention of the concept of “negative kinetic energy” is set out below in the section on “Schrödinger equation tunneling basics”. [Animation: Reflection and tunneling of an electron wave packet directed at a potential barrier. The bright spot moving to the left is the reflected part of the wave packet. A very dim spot can be seen moving to the right of the barrier; this is the small fraction of the wave packet that tunnels through the classically forbidden barrier. Note also the interference fringes between the incoming and reflected waves.] An electron approaching a barrier has to be represented as a wave-train. This wave-train can sometimes be quite long – electrons in some materials can be 10 to 20 nm long. This makes animations difficult. If it were legitimate to represent the electron by a short wave-train, then tunneling could be represented as in the animation alongside.
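The nanometer-scale barrier thicknesses quoted earlier follow from the evanescent decay constant κ = √(2m(V−E))/ħ inside the barrier, where the wave amplitude falls off as e^(−κx). The following sketch estimates the penetration depth 1/κ for an electron 1 eV below a barrier top (standard physical constants; the 1 eV figure is an illustrative choice, not from the source):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # one electron-volt in joules

def decay_constant(V_minus_E_eV):
    """Evanescent decay constant kappa (1/m) for an electron whose
    energy lies V_minus_E_eV electron-volts below the barrier top."""
    return math.sqrt(2.0 * M_E * V_minus_E_eV * EV) / HBAR

# Penetration depth 1/kappa for a 1 eV energy deficit, in nanometers.
kappa = decay_constant(1.0)
depth_nm = 1e9 / kappa
```

For a 1 eV deficit the depth comes out to a fraction of a nanometer, consistent with appreciable tunneling only through barriers of at most a few nanometers.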
It is sometimes said that tunneling occurs only in quantum mechanics. Unfortunately, this statement is a bit of a linguistic conjuring trick. As indicated above, “tunneling-type” evanescent-wave phenomena occur in other contexts too. But, until recently, it has only been in quantum mechanics that evanescent wave coupling has been called “tunneling”. (However, there is an increasing tendency to use the label “tunneling” in other contexts too, and the names “photon tunneling” and “acoustic tunneling” are now used in the research literature.) With regard to the mathematics of tunneling, a special problem arises. For simple tunneling-barrier models, such as the rectangular barrier, the Schrödinger equation can be solved exactly to give the value of the tunneling probability (sometimes called the “transmission coefficient”). Calculations of this kind make the general physical nature of tunneling clear. One would also like to be able to calculate exact tunneling probabilities for barrier models that are physically more realistic. However, when appropriate mathematical descriptions of barriers are put into the Schrödinger equation, then the result is an awkward non-linear differential equation. Usually, the equation is of a type where it is known to be mathematically impossible in principle to solve the equation exactly in terms of the usual functions of mathematical physics, or in any other simple way. Mathematicians and mathematical physicists have been working on this problem since at least 1813, and have been able to develop special methods for solving equations of this kind approximately. In physics these are known as “semi-classical” or “quasi-classical” methods. A common semi-classical method is the so-called WKB approximation (also known as the “JWKB approximation”). The first known attempt to use such methods to solve a tunneling problem in physics was made in 1928, in the context of field electron emission.
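For the rectangular barrier mentioned above, the exact transmission coefficient has a well-known closed form, T = [1 + V0² sinh²(κa) / (4E(V0 − E))]⁻¹ with κ = √(2m(V0 − E))/ħ. This sketch evaluates it in natural units (ħ = m = 1); the parameter values are illustrative:

```python
import math

def transmission(E, V0, a, m=1.0, hbar=1.0):
    """Exact transmission probability through a rectangular barrier of
    height V0 and width a, for a particle of energy E < V0.
    Natural units (hbar = m = 1) unless overridden."""
    kappa = math.sqrt(2.0 * m * (V0 - E)) / hbar
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (V0**2 * s**2) / (4.0 * E * (V0 - E)))

# A particle at half the barrier height still gets through a few
# percent of the time; doubling the width suppresses this sharply.
T_narrow = transmission(E=0.5, V0=1.0, a=2.0)
T_wide = transmission(E=0.5, V0=1.0, a=4.0)
```

The exponential sensitivity of T to the barrier width is exactly the behavior that makes tunneling currents useful as a probe in scanning tunneling microscopy, discussed later in the article.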
It is sometimes considered that the first people to get the mathematics of applying this kind of approximation to tunneling fully correct (and to give reasonable mathematical proof that they had done so) were N. Fröman and P.O. Fröman, in 1965. Their complex ideas have not yet made it into theoretical-physics textbooks, which tend to give simpler (but slightly more approximate) versions of the theory. An outline of one particular semi-classical method is given below. Three notes may be helpful. In general, students taking physics courses in quantum mechanics are presented with problems (such as the quantum mechanics of the hydrogen atom) for which exact mathematical solutions to the Schrödinger equation exist. Tunneling through a realistic barrier is a reasonably basic physical phenomenon. So it is sometimes the first problem that students encounter where it is mathematically impossible in principle to solve the Schrödinger equation exactly in any simple way. Thus, it may also be the first occasion on which they encounter the “semi-classical-method” mathematics needed to solve the Schrödinger equation approximately for such problems. Not surprisingly, this mathematics is likely to be unfamiliar, and may feel “odd”. Unfortunately, it also comes in several different variants, which doesn’t help. Also, some accounts of tunneling seem to be written from a philosophical viewpoint that a particle is “really” point-like, and just has wave-like behavior. There is very little experimental evidence to support this viewpoint. A preferable philosophical viewpoint is that the particle is “really” delocalized and wave-like, and always exhibits wave-like behavior, but that in some circumstances it is convenient to use the mathematics of moving points to describe its motion. This second viewpoint is used in this section. The precise nature of this wave-like behavior is, however, a much deeper matter, beyond the scope of this article on tunneling. 
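To see the semi-classical machinery at work on a barrier with no simple exact solution, one can numerically evaluate the WKB barrier integral, whose exponential controls the tunneling probability T ≈ e^(−G). The sketch below uses a toy Coulomb-tail barrier V(r) = B/r in scaled units (ħ = 2m = 1); all parameter values and names are illustrative assumptions, not taken from the source:

```python
import math

def wkb_exponent(E, B=1.0, r0=0.1, n=10000):
    """WKB tunneling exponent G = 2 * integral of sqrt(V(r) - E) dr
    for a toy Coulomb barrier V(r) = B/r (scaled units, hbar = 2m = 1),
    integrated from the inner radius r0 to the turning point B/E."""
    rt = B / E
    h = (rt - r0) / n
    total = 0.0
    for i in range(n):
        r = r0 + (i + 0.5) * h  # midpoint rule avoids the endpoints
        total += math.sqrt(B / r - E) * h
    return 2.0 * total

# A modest increase in emission energy shrinks the exponent, which
# changes e^(-G) (and hence the decay rate) by orders of magnitude.
g_low = wkb_exponent(E=0.2)
g_high = wkb_exponent(E=0.4)
```

This strong energy dependence of the exponent is the qualitative content of the half-life/energy relationship for alpha decay discussed next.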
Although the phenomenon under discussion here is usually called “quantum tunneling” or “quantum-mechanical tunneling”, it is the wave-like aspects of particle behavior that are important in tunneling theory, rather than effects relating to the quantization of the particle’s energy states. For this reason, some writers prefer to call the phenomenon “wave-mechanical tunneling”. George Gamow By 1928, George Gamow had solved the theory of the alpha decay of a nucleus via tunneling. Classically, the particle is confined to the nucleus because of the high energy requirement to escape the very strong potential. Under this system, it takes an enormous amount of energy to pull apart the nucleus. In quantum mechanics, however, there is a probability the particle can tunnel through the potential and escape. Gamow solved a model potential for the nucleus and derived a relationship between the half-life of the particle and the energy of the emission. Alpha decay via tunneling was also solved concurrently by Ronald Gurney and Edward Condon. Shortly thereafter, both groups considered whether particles could also tunnel into the nucleus. After attending a seminar by Gamow, Max Born recognized the generality of quantum-mechanical tunneling. He realized that the tunneling phenomenon was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems. Today the theory of tunneling is even applied to the early cosmology of the universe. Quantum tunneling was later applied to other situations, such as the cold emission of electrons, and perhaps most importantly semiconductor and superconductor physics. Phenomena such as field emission, important to flash memory, are explained by quantum tunneling. Tunneling is a source of major current leakage in Very-large-scale integration (VLSI) electronics, and results in the substantial power drain and heating effects that plague high-speed and mobile technology.
Another major application is in electron-tunneling microscopes which can resolve objects that are too small to see using conventional microscopes. Electron tunneling microscopes overcome the limiting effects of conventional microscopes (optical aberrations, wavelength limitations) by scanning the surface of an object with tunneling electrons. Quantum tunneling has been shown to be a mechanism used by enzymes to enhance reaction rates. It has been demonstrated that enzymes use tunneling to transfer both electrons and nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose oxidase, that oxygen nuclei can tunnel under physiological conditions. Back to Contents Near-Lightspeed Travel has the ability to significantly dilate time, sending an accelerating traveler rapidly forward in time relative to those left behind. The closer the travel is to the speed of light, the further into the future the traveler goes. The key characteristics of the application of near-lightspeed travel for time control and time travel are presented in the picture below. This is followed by more detail describing the effect below. Alcubierre Warp Drive The Alcubierre drive, also known as the Alcubierre metric or Warp Drive, is a mathematical model of a spacetime exhibiting features reminiscent of the fictional “warp drive” from Star Trek, which can travel “faster than light” (although not in a local sense – see below). The key characteristics of the application of Alcubierre warp drives for time control and time travel are presented in the picture below. This is followed by more detail describing the effect below. Alcubierre Warp Drive Description In 1994, the Mexican physicist Miguel Alcubierre proposed a method of stretching space in a wave which would in theory cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand. The ship would ride this wave inside a region known as a warp bubble of flat space.
Since the ship is not moving within this bubble, but carried along as the region itself moves, conventional relativistic effects such as time dilation do not apply in the way they would in the case of a ship moving at high velocity through flat spacetime. Also, this method of travel does not actually involve moving faster than light in a local sense, since a light beam within the bubble would still always move faster than the ship; it is only “faster than light” in the sense that, thanks to the contraction of the space in front of it, the ship could reach its destination faster than a light beam restricted to travelling outside the warp bubble. Thus, the Alcubierre drive does not contradict the conventional claim that relativity forbids a slower-than-light object to accelerate to faster-than-light speeds. Alcubierre Metric The Alcubierre Metric defines the so-called warp drive spacetime. This is a Lorentzian manifold which, if interpreted in the context of general relativity, exhibits features reminiscent of the warp drive from Star Trek: a warp bubble appears in previously flat spacetime and moves off at effectively superluminal speed. Inhabitants of the bubble feel no inertial effects. The object(s) within the bubble are not moving (locally) faster than light; instead, the space around them shifts so that the object(s) arrive at the destination faster than light would in normal space. Alcubierre chose a specific form for the function f, but other choices give a simpler spacetime exhibiting the desired “warp drive” effects more clearly and simply. Mathematics of the Alcubierre drive Using the 3+1 formalism of general relativity, the spacetime is described by a foliation of space-like hypersurfaces of constant coordinate time t.
The general form of the Alcubierre metric is ds² = −(α² − β_i β^i) dt² + 2 β_i dx^i dt + γ_ij dx^i dx^j, where α is the lapse function that gives the interval of proper time between nearby hypersurfaces, β^i is the shift vector that relates the spatial coordinate systems on different hypersurfaces and γ_ij is a positive definite metric on each of the hypersurfaces. The particular form that Alcubierre studied is defined by α = 1, γ_ij = δ_ij, β^x = −v_s(t) f(r_s), β^y = β^z = 0, with the shape function f(r_s) = [tanh(σ(r_s + R)) − tanh(σ(r_s − R))] / [2 tanh(σR)], with R > 0 and σ > 0 arbitrary parameters. Alcubierre’s specific form of the metric can thus be written ds² = −dt² + [dx − v_s(t) f(r_s) dt]² + dy² + dz². With this particular form of the metric, it can be shown that the energy density measured by observers whose 4-velocity is normal to the hypersurfaces is given by −(1/8π) · v_s²(t) ρ² / (4 g² r_s²(t)) · (df/dr_s)², where ρ² = y² + z² and g is the determinant of the metric tensor. Thus, as the energy density is negative, one needs exotic matter to travel faster than the speed of light. However, generating enough exotic matter and sustaining it to perform feats such as faster-than-light travel (and also to keep open the ‘throat’ of a wormhole) is thought to be impractical. Low has argued that within the context of general relativity, it is impossible to construct a warp drive in the absence of exotic matter. It is generally believed that a consistent theory of quantum gravity will resolve such issues once and for all. Physics of the Alcubierre drive For those familiar with the effects of special relativity, such as Lorentz contraction and time dilation, the Alcubierre metric has some apparently peculiar aspects. In particular, Alcubierre has shown that even when the ship is accelerating, it travels on a free-fall geodesic. In other words, a ship using the warp drive to accelerate and decelerate is always in free fall, and the crew would experience no accelerational g-forces. Enormous tidal forces would be present near the edges of the flat-space volume because of the large space curvature there, but by suitable specification of the metric, these would be made very small within the volume occupied by the ship.
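The shape function Alcubierre chose is commonly quoted as f(r_s) = [tanh(σ(r_s + R)) − tanh(σ(r_s − R))] / [2 tanh(σR)]: a smoothed top-hat that is 1 at the bubble center and falls to 0 outside. A minimal sketch, assuming that standard form (the parameter values are illustrative):

```python
import math

def f(rs, R=1.0, sigma=8.0):
    """Alcubierre's top-hat-like shape function: close to 1 inside the
    bubble (rs < R), close to 0 well outside it; sigma sets how thin
    and steep the bubble wall is."""
    return (math.tanh(sigma * (rs + R)) - math.tanh(sigma * (rs - R))) / (
        2.0 * math.tanh(sigma * R)
    )

inside = f(0.0)   # at the bubble center
outside = f(3.0)  # well outside the bubble wall
```

Since the quoted energy density is proportional to (df/dr_s)², the exotic-matter requirement is concentrated entirely in the thin wall region where f changes rapidly.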
The original warp drive metric, and simple variants of it, happen to have the ADM form which is often used in discussing the initial value formulation of general relativity. This may explain the widespread misconception that this spacetime is a solution of the field equation of general relativity. Metrics in ADM form are adapted to a certain family of inertial observers, but these observers are not really physically distinguished from other such families. Alcubierre interpreted his “warp bubble” in terms of a contraction of “space” ahead of the bubble and an expansion behind. But this interpretation might be misleading, since the contraction and expansion actually refers to the relative motion of nearby members of the family of ADM observers. In general relativity, one often first specifies a plausible distribution of matter and energy, and then finds the geometry of the spacetime associated with it; but it is also possible to run the Einstein field equations in the other direction, first specifying a metric and then finding the energy-momentum tensor associated with it, and this is what Alcubierre did in building his metric. This practice means that the solution can violate various energy conditions and require exotic matter. The need for exotic matter leads to questions about whether it is actually possible to find a way to distribute the matter in an initial spacetime which lacks a “warp bubble” in such a way that the bubble will be created at a later time. Yet another problem is that, according to Serguei Krasnikov, it would be impossible to generate the bubble without being able to force the exotic matter to move at locally FTL speeds, which would require the existence of tachyons. Some methods have been suggested which would avoid the problem of tachyonic motion, but would probably generate a naked singularity at the front of the bubble. 
Significant problems with the metric of this form stem from the fact that all known warp drive spacetimes violate various energy conditions. It is true that certain experimentally verified quantum phenomena, such as the Casimir effect, when described in the context of the quantum field theories, lead to stress-energy tensors which also violate the energy conditions and so one might hope that Alcubierre type warp drives could perhaps be physically realized by clever engineering taking advantage of such quantum effects. However, if certain quantum inequalities conjectured by Ford and Roman hold, then the energy requirements for some warp drives may be absurdly gigantic, e.g. the energy equivalent of −10^67 grams might be required to transport a small spaceship across the Milky Way galaxy. This is orders of magnitude greater than the mass of the universe. Counterarguments to these apparent problems have been offered, but not everyone is convinced they can be overcome. Chris Van Den Broeck, in 1999, tried to address the potential issues. By contracting the 3+1 dimensional surface area of the ‘bubble’ being transported by the drive, while at the same time expanding the 3 dimensional volume contained inside, Van Den Broeck was able to reduce the total energy needed to transport small atoms to less than 3 solar masses. Later, by slightly modifying the Van Den Broeck metric, Krasnikov reduced the necessary total amount of negative energy to a few milligrams. Krasnikov proposed that, if tachyonic matter could not be found or used, then a solution might be to arrange for masses along the path of the vessel to be set in motion in such a way that the required field was produced. But in this case the Alcubierre Drive vessel is not able to go dashing around the galaxy at will. It is only able to travel routes which, like a railroad, have first been equipped with the necessary infrastructure.
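For a sense of scale, the mass figures quoted in this passage can be converted to energies with E = mc². A rough sketch, taking the often-quoted magnitudes (10^67 grams for the Ford–Roman estimate, roughly 3 solar masses for Van Den Broeck, a few milligrams for Krasnikov) as illustrative inputs:

```python
# Mass-energy equivalence E = m * c^2 for the magnitudes quoted above.
C = 2.99792458e8        # speed of light, m/s
SOLAR_MASS = 1.989e30   # kg

def energy_joules(mass_kg):
    """Rest-mass energy in joules."""
    return mass_kg * C**2

ford_roman = energy_joules(1e67 * 1e-3)      # 10^67 grams -> kg
van_den_broeck = energy_joules(3 * SOLAR_MASS)
krasnikov = energy_joules(1e-6)              # about one milligram
```

Each successive refinement cuts the (negative) energy requirement by dozens of orders of magnitude, which is why these papers attracted attention despite the scheme remaining speculative.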
Miguel Alcubierre The pilot inside the bubble is causally disconnected from its walls and cannot carry out any action outside the bubble. However, it is necessary to place devices along the route in advance, and since the pilot cannot do this while “in transit”, the bubble cannot be used for the first trip to a distant star. In other words, to travel to Vega (which is 26 light-years from the Earth) one first has to arrange everything so that the bubble moving toward Vega with a superluminal velocity would appear and these arrangements will always take more than 26 years. Coule has argued that schemes such as the one proposed by Alcubierre are not feasible because the matter to be placed on the road beforehand has to be placed at superluminal speed. Thus, according to Coule, an Alcubierre Drive is required in order to build an Alcubierre Drive. Since none has been shown to exist, the drive would be impossible to construct, even if the metric is physically meaningful. Coule argues that an analogous objection will apply to any proposed method of constructing an Alcubierre Drive. Faster-than-Light Travel is an interesting and controversial subject. According to special relativity anything that could travel faster-than-light would move backward in time. At the same time, special relativity states that this would require infinite energy. On the other hand, what some physicists refer to as “apparent” or “effective” FTL is the hypothesis that unusually distorted regions of spacetime might permit matter to reach distant locations faster than light could by the “normal” route (though still moving subluminally through the distorted region). The key characteristics of the application of faster-than-light travel for time control and time travel are presented in the picture below. This is followed by more detail describing the effect below. • Some processes propagate faster than c, but cannot carry information.
• Light travels at speed c/n when not in a vacuum but traveling through a medium with refractive index n (causing refraction), and in some materials other particles can travel faster than c/n (but still slower than c), leading to Cherenkov radiation. Faster-than-light communication is, by Einstein’s theory of relativity, equivalent to time travel. The Lorentz transformations have important implications here: any theory which permits “true” FTL also has to cope with time travel and all its associated paradoxes, or else assume Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale). • While special and general relativity do not allow superluminal speeds locally, non-local means may be possible, which means moving with space rather than moving through space. Radically Curve Spacetime Using Slip String Drive One proposed way that claims not to violate relativity is Andrew L. Bender’s Slip String Drive. Bender proposes traveling by completely isolating a region of spacetime from the rest of our universe using Einstein’s gravity waves. These compression waves of spacetime are generated by a ship, which emits them from its hull in all directions until it is completely isolated from the rest of our universe. Then, by emitting more gravity waves behind the ship, it stretches out its isolated bubble into an egg-shape, causing external spacetime to squeeze in on the bubble unevenly, propelling the craft forward at speeds no longer limited by relativity. Time passes normally within the isolated region, eliminating the possibility of paradox or time travel. Ignore special relativity This option is popular particularly in science fiction.
However, empirical and theoretical evidence strongly supports Einstein’s theory of special relativity as the correct description of high-speed motion; it generalizes the more familiar Galilean relativity, which is a good approximation only at conventional (much less than c) speeds. Faster light (Casimir vacuum and quantum tunneling) Casimir Vacuum Force The speed of light has been experimentally determined in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called the vacuum energy. This vacuum energy can perhaps be changed in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value ‘c’. This is known as the Scharnhorst effect. However, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates’ rest frame would define a “preferred frame” for FTL signaling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could “guarantee the total absence of causality violations”, and invoked Hawking’s speculative chronology protection conjecture, which suggests that feedback loops of virtual particles would create “uncontrollable singularities in the renormalized quantum stress-energy” on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated relativity experimentally by transmitting photons faster than the speed of light.
They say they have conducted an experiment in which microwave photons – relatively low energy packets of light – travelled “instantaneously” between a pair of prisms that had been moved up to 3 ft apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” Give up causality While this gets around the infinite acceleration problem, it still would lead to closed timelike curves (i.e., time travel) and causality violations. Causality is not required by special or general relativity, but is nonetheless generally considered a basic property of the universe that cannot be sensibly dispensed with. Because of this, most physicists expect (or perhaps hope) that quantum gravity effects will preclude this option. Distant galaxies can recede from us at speeds greater than c; this is understood to be due to the expansion of the space between the objects, and general relativity still reduces to special relativity in a “local” sense, meaning that two objects passing each other in a small local region of spacetime cannot have a relative velocity greater than c, and will move more slowly than a light beam passing through the region. Give up (absolute) relativity There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach’s principle), which implies that the rest frame of the universe might be preferred by conventional measurements of natural law. Non-physical realms A very popular option in space opera is to assume the existence of some other realm (typically called hyperspace, subspace, or slipspace) which is accessible from this universe, in which the laws of relativity are usually distorted, bent, or nonexistent, facilitating rapid transport between distant points in this universe, sometimes with acceleration differences – that is, not requiring as much energy or thrust to go faster.
Space-time distortion Heim theory In 1977, a controversial paper on Heim theory theorized that it may be possible to travel faster than light by using magnetic fields to enter a higher-dimensional space, and the paper received some media attention in January 2006. However, due to the many unproven assumptions in the paper, there have been few serious attempts to conduct further experiments. Quantized space and time As given by the Planck length, there is a minimum amount of ‘space’ that can exist in this universe (1.616×10^−35 meters). This limit can be used to determine a minimum time quantization of 5.391×10^−44 seconds, which corresponds to a beam of light with a wavelength approaching the Planck length. This means that there is a physical limit to how much blue shift a beam of light can endure. According to general relativity there is no limit to this shift, and an infinitesimally small space can exist, but according to well accepted quantum theory these limits do exist. This is precisely what happens towards the center of a black hole; the incoming light becomes blue shifted past the Planck length as it approaches the region of discontinuity within our universe. The argument is: if a black hole with finite mass can create such a discontinuity in the fabric of space and time, why would people be unable to do the same thing using a finite amount of energy and acceleration? The hypothetical elementary particles that always travel faster than light are called tachyons. Their existence has neither been proven nor disproven, but even so, attempts to quantize them show that they may not be used for faster-than-light communication. General relativity Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time and their gravity fields would be immense.
To counteract the unstable nature, and prevent the distortions from collapsing under their own ‘weight’, one would need to introduce hypothetical exotic matter or negative energy.

FTL phenomena

Daily motion of the Heavens

For an earthbound observer, objects in the sky complete one revolution around the Earth in one day. Alpha Centauri, the nearest star outside the Solar System, is about 4 light years away. In a geostatic frame Alpha Centauri has a speed many times greater than c, since the rim speed of an object moving in a circle is the product of the radius and the angular speed. It is also possible in a geostatic frame for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because their distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU, and the circumference of a circle of radius 1000 AU is greater than one light day. In other words, a comet at such a distance is superluminal in a geostatic frame.

Light spots and shadows

If a laser is swept across a distant object, the spot of light can easily be made to move at a speed greater than c. Similarly, a shadow projected onto a distant object can be made to move faster than c. In neither case does any matter or information travel faster than light.

Closing speeds

If two particles approach each other from opposite directions, each moving at close to c in the observer’s frame, the distance between them can decrease at a rate approaching 2c. This correctly reflects the rate at which the distance between the two particles is decreasing from the observer’s point of view, and is called the closing speed. However, it is not the same as the velocity of one of the particles as would be measured by a hypothetical fast-moving observer traveling alongside the other particle. To obtain this, the calculation must be done according to special relativity. If the two particles are moving at velocities v and −v, or, expressed in units of c, β and −β, the velocity of one particle as measured by an observer traveling with the other is 2β/(1 + β²), which is less than the speed of light.
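The distinction between a closing speed and a true relative velocity can be made concrete with the relativistic velocity-addition formula; a minimal sketch (the 0.8c figure is chosen purely for illustration):

```python
def relative_speed(beta1, beta2):
    """Relativistic addition of two speeds (in units of c) for particles
    approaching head-on: beta_rel = (b1 + b2) / (1 + b1*b2)."""
    return (beta1 + beta2) / (1 + beta1 * beta2)

beta = 0.8  # each particle moves at 0.8c toward the other

# Closing speed in the observer's frame: the gap shrinks at 1.6c.
# This is allowed, because it is not the speed of any single object.
closing = 2 * beta

# Speed of one particle as measured by an observer riding the other:
rel = relative_speed(beta, beta)

print(closing)  # 1.6
print(rel)      # ~0.9756, always below 1
```

No matter how close each β gets to 1, the relative speed 2β/(1 + β²) stays below the speed of light, exactly as the text states.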
Proper speeds

If a spaceship travels to a planet one light year (as measured in the Earth’s rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveler’s clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth’s frame, by the time taken, measured by the traveler’s clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed, as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveler would always get to the destination before the traveler.

Phase velocities above c

Group velocities above c

Universal expansion

The expansion of the universe causes distant galaxies to recede from us faster than the speed of light, if comoving distance and cosmological time are used to calculate the speeds of these galaxies. However, in general relativity velocity is a local notion, so velocity calculated using comoving coordinates does not have any simple relation to velocity calculated locally. Rules that apply to relative velocities in special relativity, such as the rule that relative velocities cannot increase past the speed of light, do not apply to relative velocities in comoving coordinates, which are often described in terms of the “expansion of space” between galaxies.

Astronomical observations

Quantum mechanics

Since the underlying behavior doesn’t violate local causality or allow FTL signaling, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent. To quote Richard Feynman: …there is also an amplitude for light to go faster (or slower) than the conventional speed of light. You found out in the last lecture that light doesn’t go only in straight lines; now, you find out that it doesn’t go only at the speed of light!
It may surprise you that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, c. – Richard Feynman

There has sometimes been confusion concerning the latter point, as in the following argument: Say you have 4 pairs of entangled matter such that (x0,y0) are distinct from and won’t affect (x1,y1), (x2,y2), etc. If y0 changes you know that x0 changed, the same being true for the other pairs. Right there you have a nibble’s worth of information transfer any time x0, x1, x2, etc. are changed, immediately altering y0, y1, and y2 respectively. Monitoring the y bits will immediately tell you when the entangled x bits are updated. – SkewsMe.com

Hartman effect

The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. However, this tunneling time “should not be linked to a velocity since evanescent waves do not propagate”, so the effect cannot be used for faster-than-light signaling.

Casimir effect

EPR Paradox

A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues in Geneva, Switzerland has determined that the “speed” of the quantum non-local connection (what Einstein called spooky action at a distance) has a lower bound of 10,000 times the speed of light.

Delayed choice quantum eraser

The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon, which may give the impression that the measurement of the later photons “retroactively” determines whether the earlier photons show interference or not. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured, ensuring that an experimenter watching only the photons going through the slit does not obtain information about the other photons in an FTL or backwards-in-time manner.
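The saturation described by the Hartman effect above can be checked numerically for a rectangular barrier: the phase (group-delay) time extracted from the transmission amplitude stops growing once the barrier is thick. This is a sketch, not from the original text, in natural units with ħ = 1 and m = 1/2 (so E = k² and κ = √(V − E)); the barrier height and energy are chosen arbitrarily for illustration:

```python
import math

V = 10.0  # barrier height (natural units)

def barrier_phase(E, L):
    """Phase the rectangular barrier of width L adds beyond free
    propagation: phi = -atan( (kappa^2 - k^2)/(2*k*kappa) * tanh(kappa*L) )."""
    k = math.sqrt(E)
    kappa = math.sqrt(V - E)
    return -math.atan((kappa**2 - k**2) / (2 * k * kappa) * math.tanh(kappa * L))

def phase_time(E, L, dE=1e-6):
    """Group delay tau = d(phi)/dE via central difference (hbar = 1)."""
    return (barrier_phase(E + dE, L) - barrier_phase(E - dE, L)) / (2 * dE)

E = 3.0
for L in (1.0, 3.0, 6.0, 12.0):
    print(L, phase_time(E, L))
# The delay converges to a constant as L grows (the Hartman effect):
# doubling a thick barrier leaves the tunneling time essentially unchanged.
```

Because tanh(κL) → 1 for large L, the barrier-induced phase, and hence the delay, becomes independent of the barrier width, which is exactly the counterintuitive saturation the text describes.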
Variable speed of light

The units involved are defined to be independent and so cannot be described in terms of each other. As an alternative to using a particular system of units, one can reduce all measurements to dimensionless quantities expressed in terms of ratios between the quantities being measured and various fundamental constants such as Newton’s constant, the speed of light and Planck’s constant; physicists can define at least 26 dimensionless constants which can be expressed in terms of these sorts of ratios and which are currently thought to be independent of one another.

Time-warped fields

Time-warped fields use energy within curvatures of spacetime surrounding a rotating mass or energy field to generate containable and controllable fields of closed timelike curves that can move matter and information forward or backward in time. – David Lewis Anderson, USAF officer and scientist, founder of time-warped field theory

As general relativity predicts, rotating bodies drag spacetime around themselves in a phenomenon referred to as frame-dragging. This rotational frame-dragging effect is also known as the Lense-Thirring effect. The rotation of an object alters space and time, dragging a nearby object out of position compared to the predictions of Newtonian physics. The predicted effect is small – about one part in a few trillion. However, as Dr. David Lewis Anderson proposed in 1987 with his announcement of time-warped field theory, the difference in potential energy between two different areas of twisted spacetime due to frame-dragging is significantly large. Even the smallest twist in spacetime contains enormous energy potential and can be used to create containable and controllable fields of closed timelike curves without the need for significant input power. This makes both forward and reverse time control possible within the limits of technology today.
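The earlier remark about reducing measurements to dimensionless ratios is easiest to see with the best-known example, the fine-structure constant α = e²/(4πε₀ħc) ≈ 1/137, a pure number with no units in any system (the CODATA values below are standard, not taken from the original text):

```python
import math

# CODATA values (SI units)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# Fine-structure constant: a dimensionless ratio of fundamental constants,
# the same in SI, Gaussian, or any other unit system
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   = {alpha:.9f}")
print(f"1/alpha = {1/alpha:.3f}")  # ~137.036
```

A “variable speed of light” would only be physically meaningful if it showed up as a change in a dimensionless combination like this one, which is why such constants are the natural language for the claim.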
The key characteristics of the application of time-warped fields for time control and time travel are presented in the picture below. This is followed by more detail describing the science below.

Frame Dragging Effect Basics

The Anderson Time Reactor operates by accessing the high energy potential and effects existing across two regions of twisted spacetime to create containable and controllable fields of closed timelike curves. Rotational frame-dragging appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under this effect, the frame of reference in which a clock ticks the fastest is one which is rotating around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move around the object faster than light moving against the rotation, as seen by a distant observer. It is now the best-known frame-dragging effect, partly thanks to the Gravity Probe B experiment.

Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Although it arguably has equal theoretical legitimacy to the “rotational” effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging.

Static mass increase is another effect: an increase in inertia of a body when other masses are placed nearby. While not strictly a frame-dragging effect, it is also derived from the same equation of general relativity. It is a tiny effect that is difficult to confirm experimentally.
Mathematical Derivation of Frame Dragging

Frame-dragging may be illustrated most readily using the Kerr metric, which describes the geometry of spacetime in the vicinity of a mass M rotating with angular momentum J:

c²dτ² = (1 − r_s r/ρ²) c²dt² − (ρ²/Λ) dr² − ρ² dθ² − (r² + α² + (r_s r α²/ρ²) sin²θ) sin²θ dφ² + (2 r_s r α c sin²θ/ρ²) dφ dt

where r_s is the Schwarzschild radius

r_s = 2GM/c²

and where the following shorthand variables have been introduced for brevity:

α = J/(Mc),  ρ² = r² + α² cos²θ,  Λ = r² − r_s r + α²

We may re-write the Kerr metric in the following form:

c²dτ² = (g_tt − g_tφ²/g_φφ) dt² + g_rr dr² + g_θθ dθ² + g_φφ (dφ + (g_tφ/g_φφ) dt)²

This metric is equivalent to a co-rotating reference frame rotating with an angular speed Ω that depends on both the radius r and the colatitude θ:

Ω = −g_tφ/g_φφ = r_s α r c / (ρ²(r² + α²) + r_s α² r sin²θ)

In the plane of the equator this simplifies to:

Ω = r_s α c / (r³ + α² r + r_s α²)

Thus, an inertial reference frame is entrained by the rotating central mass to participate in the latter’s rotation; this is frame-dragging. Frame-dragging occurs about every rotating mass and at every radius r and colatitude θ.

The Anderson Time Reactor

Twisted spacetime around the earth, or any rotating body, contains enormous levels of potential energy. This is due to the tension in the fabric of spacetime caused by inertial frame-dragging. Time-warped field theory shows how a properly configured energy beam can be used to initiate and maintain the coupling of two different areas of slightly twisted spacetime. This enables the discharge of significantly greater levels of stored potential energy and generates controllable fields of closed timelike curves. The system that couples these two regions of different spacetime potential is commonly referred to as an Anderson Time Reactor or spacetime battery. The Anderson Time Reactor is a system that couples two different areas of twisted spacetime, with two different spacetime tensions. The system can access and create a conduit to harvest that stored energy and, through the coupling process, create dense fields of Closed Timelike Curves (CTCs). A reactor consists of a region of spacetime, large or small, surrounding a rotating mass, where inertial frame-dragging effects are present, twisting spacetime between two regions of space.
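The equatorial frame-dragging formula derived above can be evaluated numerically. The sketch below uses approximate published values for the Earth’s mass and angular momentum (illustrative assumptions, not figures from the original text):

```python
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8      # speed of light, m/s
M = 5.972e24          # Earth's mass, kg
J = 5.86e33           # Earth's rotational angular momentum, kg m^2/s (approx.)

r_s = 2 * G * M / c**2      # Schwarzschild radius of the Earth, ~9 mm
alpha = J / (M * c)         # Kerr spin parameter, ~3.3 m

def omega_equatorial(r):
    """Frame-dragging angular speed in the equatorial plane:
    Omega = r_s * alpha * c / (r^3 + alpha^2 * r + r_s * alpha^2)."""
    return r_s * alpha * c / (r**3 + alpha**2 * r + r_s * alpha**2)

r_surface = 6.371e6  # Earth's mean radius, m
omega = omega_equatorial(r_surface)
omega_earth = 7.292e-5  # Earth's own rotation rate, rad/s

print(omega)                # ~3e-14 rad/s at the surface
print(omega / omega_earth)  # dragging is roughly 5e-10 of Earth's spin
```

The tiny result (tens of femtoradians per second) is consistent with the text’s statement that the effect is minute for the Earth; orbit-averaged predictions such as Gravity Probe B’s involve additional geometric factors not modeled here.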
David Lewis Anderson

A specialized beam emitter, with a localized source nearer to the rotating mass, is directed toward a more distant region of space, across the region of twisted spacetime created by inertial frame-dragging. A series of power collectors near and surrounding the beam emitter provide a conduit to then channel and control the received power. The resulting effect is that the potential energy in the twisted fabric of spacetime is coupled or bridged from the distant point to the local power collector array. The entire process is initiated and controlled by the system. The Anderson Time Reactor system achieves this by applying time-warped field theory to leak, tap into and control the greater energy stored in this spacetime tension (or energy potential difference) between the distant point and the localized point in spacetime. In the most basic terms, the Time Reactor can be looked at as a simple spacetime battery, accessing the significant potential energy that exists around any rotating body anywhere in spacetime.

Spacetime-Motive Force

Spectral image of energy pattern near time reactor emitter and power collector array showing coupling and discharge of spacetime-motive force, including energy drift in the direction of inertial frame-dragging of the Earth. New Mexico, USA, 2008.

The coupling of these two points accesses what Dr. Anderson labeled a “spacetime-motive force”, with the ability to produce high energy and time-warped fields allowing the containment and control of fields of closed timelike curves. The force between the localized and distant point is called the open spacetime-motive force. The open spacetime-motive force, even with the minimal effects of inertial frame-dragging, can be extremely large by present-day power generation standards.
It is estimated that a single next-generation time reactor may have the ability to produce more than all of the world’s combined power generation capabilities today. The amount of spacetime-motive force depends on several factors. These include the mass of the rotating body, its rotation speed, the relative orientation of the two points to the axis of rotation, and the medium and distance between the localized and distant points in space. More simply, it is a function of the degree of inertial frame-dragging and the characteristics of the medium through which the Time Reactor must operate between the two regions to open a “discharge path.” Also, the amount of energy that is accessed, or time-warped fields generated, can be controlled in several ways through phasing and other characteristics of the emitter and power collector array.

A Practical Approach to Achieving Time Control

Practical time control and time travel require significantly large energy levels, from some source, to operate effectively. To achieve time control we can attempt to generate this large energy level or, as an alternative, access and channel the energy already existing and inherent in natural processes and the basic makeup or fabric of spacetime surrounding our planet. As stated above, it is estimated that a single next-generation time reactor may have the ability to produce more than all of the world’s combined power generation capabilities today. Time-warped field theory demonstrates a practical way to generate the necessary concentrated CTCs and high power levels, without high input power, for practical time control. The fabric of spacetime is elastic and very powerful. It takes a tremendous amount of power to create even the slightest twist in spacetime. One can think of the fabric of spacetime surrounding a rotating mass, like the Earth, as a spring or a battery.
The rotating mass creates a twist in the fabric of spacetime whose natural state and desire is to unwind, just like a spring, or to discharge, just like a battery. Time-warped field technology uses relatively low input power to open a discharge path for this spacetime battery. This technology itself does not create the energy levels required for time control and time travel. Instead, it relies on and operates using the energy stored within twisted spacetime around a rotating body that is created by the inertial frame-dragging effect. With only a small amount of system input power, time-warped field theory shows how enormous power levels can be accessed. The coupling and discharge process, initiated and also defined by time-warped field theory and technology, generates significant levels of spacetime-motive force that can be used to generate very concentrated fields of closed timelike curves near the Time Reactor’s emitter and power collector array. These fields of closed timelike curves are concentrated and controllable and can permit both forward and backwards time control.

Circulating Light Beams

Circulating light beams can be created using gamma and magnetic fields to warp time. The approach can twist space, which in turn twists time, meaning you could theoretically walk through time as you walk through space. A number of interesting post-Newtonian phenomena are known to occur for rotating distributions of matter in Einstein’s general theory of relativity. Inertial frame dragging, for example, is a consequence of the weak gravitational field of a slowly rotating massive sphere. In addition, exact solutions of the Einstein field equations indicate the presence of closed timelike lines for rotating Kerr black holes, van Stockum rotating dust cylinders, and the rotating universe of Gödel. The key characteristics of the application of circulating light beams for time control and time travel are presented in the picture below.
This is followed by more detail describing the approach below. Recently, Ronald L. Mallett solved the linearized Einstein field equations to obtain the gravitational field produced by the electromagnetic radiation of a unidirectional ring laser. It was shown that a massive spinning neutral particle at the center of the ring laser exhibited inertial frame dragging.

Ronald L. Mallett

Traveling close to the speed of light will slow a clock, even an atomic clock. Likewise, a clock outside our atmosphere, far away from any gravitational pull, will run faster than a clock on earth. Therefore, if an artificial gravitational force were created, time travel would, in theory, be possible. Mallett believes he has found a way to make it happen. By trapping light inside a photonic crystal, he can cause it to circulate. The energy of the circulating light will cause the space inside the circle to twist, causing a gravitational force. This concept can be thought of as a spoon stirring a pot: the light is the spoon rotating around the inner rim of the pot, and the space is the liquid being swirled by the spoon. As the space twists, it will coil the normally linear passage of time with it, spiraling the past, present, and future together into one continuous loop. It is this twisting of space and time that Mallett believes will make time travel possible. Mallett and his partner at the University of Connecticut, Dr. Chandra Raychoudhuri, are seeking National Science Foundation funding for experiments that they hope will support their theories. Their first experiment will be to trap light in a crystal and observe the reaction of a neutron inside the circle. Mallett will insert polarized neutrons (neutrons that all spin in one direction) into the center of the circulating light. If he sees a change in their spin, he will know that space is indeed being twisted inside of the crystal.
Should this experiment prove successful, the team will apply for funding to conduct studies to see if time bending is evident inside the circle of light. Dr. Mark Silverman at Trinity College in nearby Hartford has suggested a possible way to see evidence of time bending: Two identical samples of a radioactive substance would be prepared with identical half-lives. One would be introduced into the time machine circulating in the same direction as the light, the other in the opposite direction. If, at the end of the experiment, one sample had decayed further than the other, Mallett’s theories of time travel would be supported. Where the experiments will go from there is unclear. There is a vast difference between slowing the decay rate of a radioactive particle and sending a human back in time. Science aside, sending people through time creates philosophical issues as well as physical ones. Consider the “Grandparent Paradox” in which a time traveler goes back in time and kills her grandparents, thus negating her entire existence. If she were never born, then she couldn’t go back in time in the first place. Mallett explains paradoxes such as these with a parallel-universe theory. He believes that with every decision we make, another version of us makes the opposite decision and splits off into a parallel universe. Thus the time traveler was born in the universe where she did not kill her grandparents. This is where the line between philosophy and physics seems to blur. “All of these things have their root in philosophy,” says Mallett. But he explains that the difference between physics and philosophy is experiment. “All of these things would be philosophy without experimentation,” he says. True, the parallel-universe theory has not been directly supported by experiment, but Mallett uses the Heisenberg Uncertainty Principle to explain why the parallel universe theory is probable. 
Heisenberg’s Uncertainty Principle says that we cannot know both the position of an electron and its momentum at any given moment. Without this principle, “the universe should have collapsed immediately after it was formed,” says Mallett. A hydrogen atom, one of the building blocks of our universe, consists of a proton and an electron. Since the proton and electron have opposite charges they should be attracted to each other, collide, and destroy the atom. But if that happened, we would know both the position of the electron (the point of impact with the proton) and its momentum (zero); therefore it is impossible for them to collide.

Sun distorting spacetime.

Like the Uncertainty Principle, quantum mechanics works on the principle that one can’t make a definite prediction about anything that will happen next. Therefore the parallel-universe theory works well: what will happen next can’t be predicted because, in fact, everything happens next. It has long been known(3, 4) that the van Stockum solution for the exterior metric of an infinitely long rotating dust cylinder contains closed timelike lines. Dr. Mallett has proposed that closed timelike curves also occur for an infinitely long circulating cylinder of light. This model also shares some of the same limitations as the van Stockum solution in that the metric is not asymptotically flat; however, it has been emphasized that certain aspects of an infinitely long rotating dust cylinder may be shared by a long but finite one. This may also apply to a long but finite circulating cylinder of light. Since the 1930s, physicists have speculated about the existence of “wormholes” in the fabric of space. Wormholes are hypothetical areas of warped spacetime with great energy that can create tunnels through spacetime; if traversable, they would allow a traveler to quickly move through great distances in space and also travel through time.
The difficulty lies in keeping the wormhole open while the traveler makes his journey: if the opening snaps shut, he will never survive to emerge at the other end. For years, scientists believed that the transit was physically impossible. But recent research, especially by the U.S. physicist Kip Thorne, suggests that it could be done using exotic materials capable of withstanding the immense forces involved. Even then, the time machine would be of limited use – for example, you could not return to a time before the wormhole was created. Using wormhole technology would also require a society so technologically advanced that it could master and exploit the energy within black holes.

Hermann Weyl

Spacetime can be viewed as a 2D surface (to simplify understanding) that, when ‘folded’ over, allows the formation of a wormhole bridge. A wormhole has at least two mouths that are connected to a single throat or tube. If the wormhole is traversable, then matter can ‘travel’ from one mouth to the other by passing through the throat. While there is no observational evidence for wormholes, spacetimes containing wormholes are known to be valid solutions in general relativity.

John Archibald Wheeler

The term wormhole was coined by the American theoretical physicist John Archibald Wheeler in 1957. However, the idea of wormholes had already been theorized in 1921 by the German mathematician Hermann Weyl in connection with his analysis of mass in terms of electromagnetic field energy: This analysis forces one to consider situations…where there is a net flux of lines of force through what topologists would call a handle of the multiply-connected space and what physicists might perhaps be excused for more vividly terming a ‘wormhole’. The key characteristics of the application of wormholes for time control and time travel are presented in the picture below. This is followed by more detail describing the science below.
The basic notion of an intra-universe wormhole is that it is a compact region of spacetime whose boundary is topologically trivial but whose interior is not simply connected. Formalizing this idea leads to definitions such as the following, taken from Matt Visser’s Lorentzian Wormholes: If a Minkowski spacetime contains a compact region Ω, and if the topology of Ω is of the form Ω ~ R × Σ, where Σ is a three-manifold of nontrivial topology, whose boundary has topology of the form ∂Σ ~ S², and if, furthermore, the hypersurfaces Σ are all spacelike, then the region Ω contains a quasi-permanent intra-universe wormhole. Characterizing inter-universe wormholes is more difficult. For example, one can imagine a ‘baby’ universe connected to its ‘parent’ by a narrow ‘umbilicus’. One might like to regard the umbilicus as the throat of a wormhole, but the spacetime is simply connected.

Schwarzschild wormholes

Diagram of a Schwarzschild Wormhole

Lorentzian wormholes known as Schwarzschild wormholes or Einstein-Rosen bridges are bridges between areas of space that can be modeled as vacuum solutions to the Einstein field equations by combining models of a black hole and a white hole. This solution was discovered by Albert Einstein and his colleague Nathan Rosen, who first published the result in 1935. However, in 1962 John A. Wheeler and Robert W. Fuller published a paper showing that this type of wormhole is unstable, and that it will pinch off instantly as soon as it forms, preventing even light from making it through. Before the stability problems of Schwarzschild wormholes were apparent, it was proposed that quasars were white holes forming the ends of wormholes of this type. While Schwarzschild wormholes are not traversable, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the ‘throat’ of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy).
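The non-traversability of a Schwarzschild wormhole is tied to the horizon, where the time-time metric factor (1 − r_s/r) goes to zero; a small numeric sketch of that factor (illustrative only, in units where r_s = 1):

```python
def schwarzschild_factor(r, r_s=1.0):
    """The magnitude of the g_tt coefficient (1 - r_s/r) of the
    Schwarzschild metric, in units where the Schwarzschild radius r_s = 1."""
    return 1.0 - r_s / r

# Far from the mass, spacetime is nearly flat (factor ~1); approaching the
# horizon the factor falls to zero. Per Wheeler and Fuller (1962), the full
# dynamical solution pinches off there before any signal can cross.
for r in (100.0, 10.0, 2.0, 1.1, 1.000001):
    print(r, schwarzschild_factor(r))
```

The vanishing of this coefficient at r = r_s is the coordinate signature of the horizon that separates the two mouths of an Einstein-Rosen bridge.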
Wormholes would act as shortcuts connecting distant regions of space-time. By going through a wormhole, it might be possible to travel between the two regions faster than a beam of light would through normal space-time. Lorentzian traversable wormholes would allow travel from one part of the universe to another part of that same universe very quickly, or would allow travel from one universe to another. The possibility of traversable wormholes in general relativity was first demonstrated by Kip Thorne and his graduate student Mike Morris in a 1988 paper; for this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter, is referred to as a Morris-Thorne wormhole. Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made in which the traversing path does not pass through a region of exotic matter. However, in the pure Gauss-Bonnet theory exotic matter is not needed in order for wormholes to exist – they can exist even with no matter. A type held open by negative-mass cosmic strings was put forth by Visser in collaboration with Cramer et al., in which it was proposed that such wormholes could have been naturally created in the early universe. Wormholes connect two points in spacetime, which means that they would in principle allow travel in time, as well as in space. In 1988, Morris, Thorne and Yurtsever worked out explicitly how to convert a wormhole traversing space into one traversing time.[4] However, it has been said that a time-traversing wormhole cannot take you back to before it was made, although this is disputed.

Faster-than-light travel

Special relativity only applies locally. A traveler through a wormhole can move slower than light and still reach the destination more quickly than a light beam taking the long route, because the distance through the wormhole is smaller.

Time travel

A wormhole could allow time travel.
For example, consider two clocks, one at each mouth, both showing the year 2000. After being taken on a trip at relativistic velocities, the accelerated mouth is brought back to the same region as the stationary mouth with the accelerated mouth’s clock reading 2005 while the stationary mouth’s clock reads 2010. A traveler who entered the accelerated mouth at this moment would exit the stationary mouth when its clock also read 2005, in the same region but now five years in the past. Such a configuration of wormholes would allow for a particle’s world line to form a closed loop in spacetime, known as a closed timelike curve. It is thought that it may not be possible to convert a wormhole into a time machine in this manner; some analyses using the semi-classical approach to incorporating quantum effects into general relativity indicate that a feedback loop of virtual particles would circulate through the wormhole with ever-increasing intensity, destroying it before any information could be passed through it, in keeping with the chronology protection conjecture. This has been called into question by the suggestion that radiation would disperse after traveling through the wormhole, therefore preventing infinite accumulation. The debate on this matter is described by Kip S. Thorne in the book Black Holes and Time Warps. There is also the Roman ring, a configuration of more than one wormhole. This ring seems to allow a closed time loop with stable wormholes when analyzed using semi-classical gravity, although without a full theory of quantum gravity it is uncertain whether the semi-classical approach is reliable in this case. Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel.
An example of a (traversable) wormhole metric is the following (the Ellis wormhole, where b is the radius of the throat):

ds² = −c²dt² + dl² + (b² + l²)(dθ² + sin²θ dφ²)

One type of non-traversable wormhole metric is the Schwarzschild solution:

ds² = −(1 − 2GM/(rc²)) c²dt² + dr²/(1 − 2GM/(rc²)) + r²(dθ² + sin²θ dφ²)

In fiction

Wing Commander ships are configured with jump drives to propel a spacecraft between two connecting stellar systems.

Wormholes are features of science fiction as they allow interstellar (and sometimes inter-universal) travel within human timescales. It is common for the creators of a fictional universe to decide that faster-than-light travel is either impossible or that the technology does not yet exist, but to use wormholes as a means of allowing humans to travel long distances in short periods. Military science fiction (such as the Wing Commander games) often uses a “jump drive” to propel a spacecraft between two fixed “jump points” connecting stellar systems. Connecting systems in a network like this results in a fixed “terrain” with choke points that can be useful for constructing plots related to military campaigns. The Alderson points used by Larry Niven and Jerry Pournelle in The Mote in God’s Eye and related novels are an example, although the mechanism does not seem to describe actual wormhole physics. David Weber has also used the device in the Honorverse and other books, such as those based upon the Starfire universe. Naturally occurring wormholes form the basis for interstellar travel in Lois McMaster Bujold’s Vorkosigan Saga. They are also used to create an Interstellar Commonwealth in Peter F. Hamilton’s Commonwealth Saga. In Jack L. Chalker’s The Rings of the Master series, interstellar-class spaceships are capable of calculating complex equations and punching wormholes in the fabric of the universe in order to enable rapid travel. The concept of wormholes is also used in The Wild Blue Yonder, a science fiction film by Werner Herzog.
Mass Relay map in the video game Mass Effect

The Mass Relays in the video game Mass Effect can be perceived as stabilized wormholes that allow for near-instantaneous, “faster-than-light” travel from one end to the other. The massively multiplayer online game EVE Online utilizes wormholes extensively, alongside the stargate technology which allows for interstellar travel in the game world. The Vega Strike first-person space trading and combat simulator features wormholes to travel through star systems. The engine is open-source and has various mods and total conversions which have wormholes too, like Vega Trek, a Vega Strike mod based on the Star Trek universe, or the Privateer Remake, a remake of Wing Commander: Privateer.

Bajoran Wormhole in Star Trek

Wormholes also play pivotal roles in science fiction where faster-than-light travel is possible though limited, allowing connections between regions that would be otherwise unreachable within conventional timelines. Several examples appear in the Star Trek franchise, including the Bajoran wormhole in the Deep Space Nine series. In 1979’s Star Trek: The Motion Picture, the USS Enterprise was trapped in an artificial wormhole caused by an imbalance in the calibration of the ship’s warp engines when it first achieved faster-than-light speed. In the Star Trek: Voyager series, the cybernetic species the Borg use what, in the Star Trek universe, are referred to as transwarp conduits, allowing ships to move nearly instantaneously to any part of the galaxy in which an exit aperture exists. Although these conduits are never described as “wormholes”, they appear to share several traits in common with them. The 1979 Disney film The Black Hole’s plot centers around a massive black hole, although it makes virtually no use of then-current wormhole physics, with only one rather desultory mention of an Einstein-Rosen bridge. A trip through the black hole turns theological, abandoning scientific rationale.
Wormhole transporter in the movie Contact

In Carl Sagan’s novel Contact and the subsequent 1997 film starring Jodie Foster and Matthew McConaughey, Foster’s character Ellie travels 26 light years through a series of wormholes to the star Vega. The round trip, which to Ellie lasts 18 hours, passes by in a fraction of a second on Earth, making it appear she went nowhere. In her defense, Foster mentions an Einstein-Rosen bridge and tells how she was able to travel faster than light and time. Analysis of the situation by Kip Thorne, at the request of Sagan, is quoted by Thorne as being his original impetus for analyzing the physics of wormholes. Wormholes play major roles in the television series Farscape, where they are the cause of John Crichton’s presence in the far reaches of our own galaxy, and in the Stargate series, where stargates create a stable artificial wormhole in which matter is dematerialized, converted into energy, and sent through to be rematerialized at the other side. In the latter series, the devices were discovered in Egypt by an archeologist, and were built by aliens known as the Ancients or the Alterans. In the science fiction series Sliders, a wormhole (or vortex, as it is usually called in the show) is used to travel between parallel worlds, and one is seen at least once or twice in every episode. In the pilot episode it was referred to as an “Einstein-Rosen-Podolsky bridge”.

Wormhole in the movie Donnie Darko

The central theme in the movie Donnie Darko revolves around Einstein-Rosen bridges. It is possible that the Webway technology used by the Eldar of the fictional Warhammer 40,000 universe could be perceived as wormhole technology. In Command & Conquer 3 and its expansion, the Scrin faction (an alien life form of unknown origin from outside the solar system) uses artificial wormholes for military purposes to convey infantry and vehicles behind enemy lines.
In the Invader Zim episode “A Room with a Moose”, Zim utilizes a wormhole to send his classmates into a parallel universe that consists entirely of a room with a large moose inside it. The television series Strange Days at Blake Holsey High is about a wormhole the science club found at their school. In an episode called “Wormhole” in the 13th season of the long-running American series Power Rangers, titled Power Rangers SPD, the SPD Rangers go through a wormhole to team up with the previous team, the Power Rangers Dino Thunder from the year 2004, after their enemy Emperor Grumm goes through one. In the video game Supreme Commander, the UEF faction utilizes aether-gates for long-distance military strikes.

Black hole in the video game Spore

In the video game Spore, the player can travel through various black holes, which act as wormholes leading to a counterpart usually located on the other side of the galaxy; something that would take much longer to do by flying there manually. In the 1995-1996 FOX military science fiction series Space: Above and Beyond, during the first several episodes, the United Earth Force travels through wormholes, called the “Kali Region” or “Galileo Region”, to arrive at exo-solar destinations. This idea is abandoned after the second episode. In the movie Race to Witch Mountain, the two aliens from a planet 3,000 light years from Earth use wormholes to travel to Earth. In the 2009 Doctor Who Easter special, Planet of the Dead, the Doctor and a group of passengers aboard a double-decker bus are transported to an alien world via a wormhole.

Cosmic Strings

Cosmic strings are hypothetical 1-dimensional (spatially) topological defects in the fabric of spacetime left over from the formation of the universe. Their interaction could create fields of closed time-like curves permitting backwards time travel. Some scientists have suggested using cosmic strings to construct a time machine.
By maneuvering two cosmic strings close together – or possibly just one string plus a black hole – it is theoretically possible to create a whole array of closed time-like curves. Your best bet is to fire two infinitely long cosmic strings past each other at very high speeds, then fly your ship around them in a carefully calculated figure eight. In theory, you would be able to emerge anywhere, anytime! At the moment, these are purely theoretical objects that might possibly be left over from the creation of the universe in the Big Bang. A black hole contains a zero-dimensional singularity – an infinitely small point in the space-time continuum. A cosmic string, if such a thing existed, would be a one-dimensional, infinitely thin line that has even stranger effects on the fabric of space and time. Although no one has actually found a cosmic string, astronomers have suggested that they may explain strange effects seen in distant galaxies. A cosmic string is a 1-dimensional (spatially) topological defect in various fields. Cosmic strings are hypothesized to form when the field undergoes a phase change in different regions of spacetime, resulting in condensations of energy density at the boundaries between regions. This is somewhat analogous to the imperfections that form between crystal grains in solidifying liquids, or the cracks that form when water freezes into ice. The phase changes that produce cosmic strings may have occurred in the earliest moments of the universe’s evolution. The key characteristics of the application of cosmic strings for time control and time travel are presented in the picture below, followed by more detail describing the theory. Cosmic strings, if they exist, would be extremely thin, with diameters on the same order as a proton. They would have immense density, however, and so would represent significant gravitational sources. A cosmic string 1.6 kilometers in length may be heavier than the Earth.
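As a rough back-of-the-envelope check of that last claim (illustrative arithmetic, not from the source), a string as heavy as the Earth per 1.6 km implies enormous linear and volumetric mass densities:

```python
# Densities implied by "1.6 km of string may be heavier than the Earth"
# (illustrative numbers only; the proton radius is a rough figure).
import math

M_EARTH = 5.97e24          # kg
LENGTH = 1.6e3             # m
R_PROTON = 0.8e-15         # m ("diameter on the order of a proton")

mu = M_EARTH / LENGTH      # linear mass density, kg/m
cross_section = math.pi * R_PROTON**2
rho = mu / cross_section   # volumetric mass density, kg/m^3

print(f"linear density  ~ {mu:.1e} kg/m")    # ~ 3.7e21 kg/m
print(f"volume density  ~ {rho:.1e} kg/m^3")
```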
However, general relativity predicts that the gravitational potential of a straight string vanishes: there is no gravitational force on static surrounding matter. The only gravitational effect of a straight cosmic string is a relative deflection of matter (or light) passing the string on opposite sides (a purely topological effect). A closed loop of cosmic string gravitates in a more conventional way. During the expansion of the universe, cosmic strings would form a network of loops, and their gravity could have been responsible for the original clumping of matter into galactic superclusters. A cosmic string’s vibrations, which would oscillate near the speed of light, can cause part of the string to pinch off into an isolated loop. These loops have a finite lifespan due to decay via gravitational radiation. Other types of topological defects in spacetime are domain walls, monopoles, and textures.

Observational evidence

It was once thought that the gravitational influence of cosmic strings might contribute to the large-scale clumping of matter in the universe, but all that is known today through galaxy surveys and precision measurements of the cosmic microwave background fits an evolution out of random, Gaussian fluctuations. These precise observations therefore tend to rule out a significant role for cosmic strings. Gravitational lensing of a galaxy by a straight section of a cosmic string would produce two identical, undistorted images of the galaxy. In 2003 a group led by Mikhail Sazhin reported the accidental discovery of two seemingly identical galaxies very close together in the sky, leading to speculation that a cosmic string had been found. However, observations by the Hubble Space Telescope in January 2005 showed them to be a pair of similar galaxies, not two images of the same galaxy. A cosmic string would produce a similar duplicate image of fluctuations in the cosmic microwave background, which might be detectable by the upcoming Planck Surveyor mission.
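The “relative deflection” of light passing a straight string, mentioned above, is conventionally quantified by the conical deficit angle δ = 8πGμ/c², where μ is the string’s mass per unit length – a standard general-relativity result. The value of μ used below is the illustrative “Earth mass per 1.6 km” figure, not a measured quantity:

```python
# Conical deficit angle of a straight cosmic string: delta = 8*pi*G*mu/c^2.
# mu is chosen to match "1.6 km of string heavier than the Earth".
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m/s
mu = 5.97e24 / 1.6e3     # kg/m

delta_rad = 8 * math.pi * G * mu / C**2
delta_arcsec = math.degrees(delta_rad) * 3600

print(f"deficit angle ~ {delta_rad:.2e} rad ~ {delta_arcsec:.1f} arcsec")
```

Light rays passing on opposite sides of such a string would converge by roughly this angle, which is what would produce the double galaxy images discussed above.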
A second piece of evidence supporting cosmic string theory is a phenomenon noted in observations of the “double quasar” Q0957+561A,B. Originally discovered by Dennis Walsh, Bob Carswell, and Ray Weymann in 1979, the double image of this quasar is caused by a galaxy positioned between it and the Earth. The gravitational lens effect of this intermediate galaxy bends the quasar’s light so that it follows two paths of different lengths to Earth. The result is that we see two images of the same quasar, one arriving a short time after the other (about 417.1 days later). However, a team of astronomers at the Harvard-Smithsonian Center for Astrophysics led by Rudolph Schild studied the quasar and found that during the period between September 1994 and July 1995 the two images appeared to have no time delay; changes in the brightness of the two images occurred simultaneously on four separate occasions. Schild and his team believe that the only explanation for this observation is that a cosmic string passed between the Earth and the quasar during that time period, traveling at very high speed and oscillating with a period of about 100 days. The Laser Interferometer Gravitational-Wave Observatory (LIGO) and upcoming gravitational wave observatories will search for cosmic strings as well as other phenomena that produce gravitational waves.

String theory and cosmic strings

There is no direct connection between string theory and the theory of cosmic strings (the names were chosen independently by analogy with ordinary string). However, work in string theory revived interest in cosmic strings in the early 2000s. In 2002 Henry Tye and collaborators predicted the production of cosmic strings during the last stages of brane inflation. It was also pointed out by string theorist Joseph Polchinski that the expanding universe could have stretched a “fundamental” string (the sort which superstring theory considers) until it was of intergalactic size.
Such a stretched string would exhibit many of the properties of the old “cosmic” string variety, making the older calculations useful again. Furthermore, modern superstring theories offer other objects which could feasibly resemble cosmic strings, such as highly elongated one-dimensional D-branes (known as “D-strings”). As theorist Tom Kibble remarks, “string theory cosmologists have discovered cosmic strings lurking everywhere in the undergrowth”. Older proposals for detecting cosmic strings could now be used to investigate superstring theory. Scientists at the LIGO Livingston Observatory in Louisiana are searching for evidence of gravitational waves. Superstrings, D-strings or other stringy objects stretched to intergalactic scales would radiate gravitational waves, which could presumably be detected using experiments like LIGO. They might also cause slight irregularities in the cosmic microwave background, too subtle to have been detected yet but possibly within the realm of future observability. Note, however, that most of these proposals depend on the appropriate cosmological fundamentals (strings, branes, etc.), and no convincing experimental verification of these has been performed.

Tipler Cylinder

A Tipler cylinder uses a massive, long cylinder spinning around its longitudinal axis. The rotation creates a frame-dragging effect and fields of closed time-like curves traversable in a way to achieve subluminal time travel to the past. Civilizations with the technology to harness black holes might be better advised to leave wormholes alone and try the time-warp method suggested by U.S. astronomer Frank Tipler. He has a simple recipe for a time machine: First take a piece of material 10 times the mass of the Sun, squeeze it together and roll it into a long, thin, super-dense cylinder – a bit like a black hole that has passed through a spaghetti factory. Then spin the cylinder up to a few billion revolutions per minute and see what happens.
Tipler predicts that a ship following a carefully plotted spiral course around the cylinder would immediately find itself on a “closed, time-like curve.” It would emerge thousands, even billions, of years from its starting point and possibly several galaxies away. There are problems, though. For the mathematics to work properly, Tipler’s cylinder has to be infinitely long. Also, odd things happen near the ends, and you need to steer well clear of them in your timeship. However, if you make the device as long as you can, and stick to paths close to the middle of the cylinder, you should survive the trip! The Tipler cylinder, also called a Tipler time machine, is a hypothetical object theorized to be a potential mode of time travel – an approach that is conceivably functional within humanity’s current understanding of physics, specifically the theory of general relativity, although later results have shown that a Tipler cylinder could only allow time travel if its length were infinite. The key characteristics of the application of Tipler cylinders for time control and time travel are presented in the picture below, followed by more detail describing the approach. The Tipler cylinder was discovered as a solution to the equations of general relativity by Kornel Lanczos in 1924 and Willem Jacob van Stockum in 1936, but not recognized as allowing closed timelike curves until an analysis by Frank Tipler in 1974. Tipler showed in his 1974 paper, “Rotating Cylinders and the Possibility of Global Causality Violation”, that in a spacetime containing a massive, infinitely long cylinder spinning along its longitudinal axis, the cylinder should create a frame-dragging effect. This frame-dragging effect warps spacetime in such a way that the light cones of objects in the cylinder’s proximity become tilted, so that part of the light cone then points backwards along the time axis on a spacetime diagram.
Therefore a spacecraft accelerating sufficiently in the appropriate direction can travel backwards through time along a closed timelike curve, or CTC.

Closed timelike curve formation using the rotating cylinder model

CTCs are associated, in Lorentzian manifolds which are interpreted physically as spacetimes, with the possibility of causal anomalies such as going back in time and potentially shooting your own grandfather, although paradoxes might be avoided using some constraint such as the Novikov self-consistency principle. They have an unnerving habit of appearing in some of the most important exact solutions in general relativity, including the Kerr vacuum (which models a rotating black hole) and the van Stockum dust (which models a cylindrically symmetrical configuration of rotating pressureless fluid, or dust). An objection to the practicality of building a Tipler cylinder was discovered by Stephen Hawking, who posited a conjecture implying that according to general relativity it is impossible to build a time machine in any finite region that satisfies the weak energy condition, meaning that the region contains no exotic matter with negative energy. The Tipler cylinder, notably, does not involve any negative energy. Tipler’s original solution involved a cylinder of infinite length, which is easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough, he did not prove this. But Hawking argues that because of his conjecture, “it can’t be done with positive energy density everywhere!
I can prove that to build a finite time machine, you need negative energy.” Hawking’s proof appears in his 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “there will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”

Casimir Effect

The Casimir effect is a physical force arising from a quantized field, for example between two uncharged plates. It can produce a locally mass-negative region of spacetime that could stabilize a wormhole to allow faster-than-light travel. In quantum field theory, the Casimir effect and the Casimir-Polder force are physical forces arising from a quantized field. The typical example is of two uncharged metallic plates in a vacuum, placed a few nanometers apart; in a classical description, the lack of an external field means there is no field between the plates, and no force would be measured between them. When this field is instead studied using quantum electrodynamics, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force – either an attraction or a repulsion depending on the specific arrangement of the two plates. The key characteristics of the application of the Casimir effect for time control and time travel are presented in the picture below, followed by more detail describing the effect. This force has been measured, and is a striking example of an effect purely due to second quantization. However, the treatment of boundary conditions in these calculations has led to some controversy. In fact, “Casimir’s original goal was to compute the van der Waals force between polarizable molecules” of the metallic plates.
Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) or virtual particles of quantum fields.

Hendrik Casimir

Dutch physicists Hendrik B. G. Casimir and Dirk Polder proposed the existence of the force and formulated an experiment to detect it in 1948 while participating in research at Philips Research Labs. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; and in applied physics, it is significant in some aspects of emerging microtechnologies and nanotechnologies.

Vacuum energy

The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a “field” in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. Canonically, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, all of these properties cancel out: the vacuum is, after all, “empty” in this sense. One important exception is the vacuum energy, or the vacuum expectation value of the energy.
The Casimir Effect: Simulation of the Casimir Force

In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the nth standing wave is E_n. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then

⟨E⟩ = (1/2) Σ_n E_n

with the sum running over all possible values of n enumerating the standing waves. The factor of 1/2 corresponds to the fact that the zero-point energies are being summed – it is the same 1/2 as appears in the zero-point energy E = ħω/2 of the quantum harmonic oscillator. In particular, one may ask how the zero-point energy depends on the shape s of the cavity. Each energy level E_n depends on the shape, and so one should write E_n(s) for the energy level and ⟨E(s)⟩ for the vacuum expectation value. At this point comes an important observation: the force at point p on the wall of the cavity is equal to the change in the vacuum energy if the shape s of the wall is perturbed a little bit, say by δs, at point p. That is, the force is the (functional) derivative of ⟨E(s)⟩ with respect to the perturbation δs at p. This value is finite in many practical calculations.

Casimir’s calculation

In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates a distance a apart. In this case the standing waves have wave vectors k_x and k_y in the directions parallel to the plates, and k_n = nπ/a perpendicular to them, so that the sum over modes becomes a sum over n together with a double integral over k_x and k_y. Polar coordinates were introduced to turn the double integral into a single integral, with the Jacobian and the angular integration contributing a factor of 2π. The integral is easily performed, and using the zeta-regularized value ζ(−3) = 1/120 one obtains the energy per unit plate area

⟨E⟩/A = −π²ħc/(720·a³)

The Casimir force per unit area, F_c/A, for idealized, perfectly conducting plates with vacuum between them, is then the derivative of this energy with respect to a:

F_c/A = −π²ħc/(240·a⁴)

where ħ is the reduced Planck constant, c is the speed of light, and a is the distance between the two plates. The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered.
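Two quick numerical sanity checks on this derivation (an illustrative sketch of mine, not from the source; the plate separations below are arbitrary choices): first, that the exponentially regulated sum Σ n³·e^(−nt), minus its divergent 6/t⁴ piece, really does approach ζ(−3) = 1/120 as t → 0; second, the size of the resulting pressure at laboratory separations.

```python
# (1) Zeta regularization: sum of n^3 * exp(-n*t) minus the divergent
#     6/t^4 piece approaches zeta(-3) = 1/120 ~ 0.008333 as t -> 0.
# (2) Casimir pressure F/A = -pi^2 * hbar * c / (240 * a^4).
import math

def regulated_sum(t, nmax=5000):
    return sum(n**3 * math.exp(-n * t) for n in range(1, nmax))

for t in (0.2, 0.1, 0.05):
    print(t, regulated_sum(t) - 6 / t**4)   # tends to 1/120 as t shrinks

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s

def casimir_pressure(a):
    """Attractive pressure (Pa, negative) between ideal plates a meters apart."""
    return -math.pi**2 * HBAR * C / (240 * a**4)

print(casimir_pressure(1e-6))   # ~ -1.3e-3 Pa at 1 micrometer
print(casimir_pressure(1e-8))   # ~ -1.3e+5 Pa (about one atmosphere) at 10 nm
```

The steep 1/a⁴ dependence is why the force is negligible at everyday scales yet significant in micro- and nanomechanical devices.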
The presence of ħ shows that the Casimir force per unit area F_c/A is very small and that, furthermore, the force is inherently of quantum-mechanical origin.

More recent theory

Concept of a zero-point energy module using the Casimir effect

In addition to these factors, complications arise due to surface roughness of the boundary and to geometry effects such as the degree of parallelism of the bounding plates. For boundaries at large separations, retardation effects give rise to a long-range interaction. For the case of two parallel plates composed of ideal metals in vacuum, the results reduce to Casimir’s. The heat kernel or exponentially regulated sum is

E(t) = (1/2) Σ_n ħω_n · e^(−t·ω_n)

where the limit t → 0⁺ is taken in the end. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called “virtual particles”.

Repulsive forces

Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium. An experimental demonstration of Casimir-based levitation was recently performed by the Capasso group at Harvard through experiments involving a gold-coated particle and a silica thin film immersed in bromobenzene.

Classical ‘Critical’ Casimir Effect

In 2008, physicists in Germany made the first direct measurements of the “critical Casimir effect”, a classical analogue of the quantum Casimir effect. This effect had been theoretically predicted in 1978 by Michael Fisher and Pierre-Gilles de Gennes, but all observations had been indirect. In this experiment, the critical Casimir effect arises in a mixed liquid that is close to its critical point.
The liquid used was a solution of water and the oil 2,6-lutidine, which has a critical point of 34 °C at normal atmospheric pressure. As this liquid approaches its critical point, the oil and water start to separate into small regions whose size and shape are subject to statistical fluctuations and that exhibit random Brownian motion. To demonstrate the effect, a tiny coated polystyrene ball is suspended in the liquid close to the wall of its coated glass container. The ball and container coatings are the same, and both have a preference for either oil or water. As the liquid nears its critical point, total internal reflection microscopy is used to detect displacements of the ball. From the sudden movements detected only towards the glass, the classical Casimir force was calculated to be approximately 600 fN (6 × 10⁻¹³ N). To tune the effect for repulsion, the coatings of the glass and the ball are changed so that one prefers oil and the other water. While the German physicists say this reverse critical Casimir effect could be useful in nanoelectromechanical systems, its dependence upon a very specific temperature presently limits its usefulness.
The Aharonov-Bohm effect

This title sounds very exciting. It is – or was, I should say – one of these things I thought I would never ever understand, until I started studying physics, that is. 🙂 Having said that, there is – incidentally – nothing very special about the Aharonov-Bohm effect. As Feynman puts it: “The theory was known from the beginning of quantum mechanics in 1926. […] The implication was there all the time, but no one paid attention to it.” To be fair, he also admits the experiment itself – proving the effect – is “very, very difficult”, which is why the first experiment that claimed to confirm the predicted effect was set up in 1960 only. In fact, some claim the results of that experiment were ambiguous, and that it was only in 1986, with the experiment of Akira Tonomura, that the Aharonov-Bohm effect was unambiguously demonstrated. So what is it about? In essence, it proves the reality of the vector potential—and of the (related) magnetic field. What do we mean with a real field? To put it simply, a real field cannot act on some particle from a distance through some kind of spooky ‘action-at-a-distance’: real fields must be specified at the position of the particle itself and describe what happens there. Now you’ll immediately wonder: so what’s a non-real field? Well… Some field that does act through some kind of spooky ‘action-at-a-distance.’ As for an example… Well… I can’t give you one because we’ve only been discussing real fields so far. 🙂 So it’s about what a magnetic (or an electric) field does in terms of influencing motion and/or quantum-mechanical amplitudes. In fact, we discussed this matter quite a while ago (check my 2015 post on it). Now, I don’t want to re-write that post, but let me just remind you of the essentials. The two equations for the magnetic field (B) in Maxwell’s set of four equations (the two others specify the electric field E) are: (1) ∇·B = 0 and (2) c²∇×B = j/ε₀ + ∂E/∂t.
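As a quick numerical aside (a sketch of mine, not from the post), the first equation can be checked for a concrete field – say, the familiar 1/r³ field of a point dipole, which is divergence-free everywhere away from the origin:

```python
# Numerical check that a physical magnetic field is divergence-free:
# the point-dipole field ~ (3(m.rhat)rhat - m)/r^3, constants dropped.
# Divergence is taken by central differences with step H.
import math

H = 1e-5
M = (0.0, 0.0, 1.0)  # dipole moment along z (arbitrary units)

def B(p):
    x, y, z = p
    r = math.sqrt(x * x + y * y + z * z)
    rhat = (x / r, y / r, z / r)
    mdotr = sum(mi * ri for mi, ri in zip(M, rhat))
    return tuple((3 * mdotr * rh - mi) / r**3 for rh, mi in zip(rhat, M))

def div(F, p):
    total = 0.0
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += H
        lo[i] -= H
        total += (F(hi)[i] - F(lo)[i]) / (2 * H)
    return total

print(abs(div(B, (0.4, -0.3, 0.8))))  # ~0, up to floating-point noise
```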
Now, you can temporarily forget about the second equation, but you should note that the ∇·B = 0 equation is always true (unlike the ∇×E = 0 expression, which is true for electrostatics only, when there are no moving charges). So it says that the divergence of B is zero, always. Now, from our posts on vector calculus, you may or may not remember that the divergence of the curl of a vector field is always zero. We wrote: div(curl A) = ∇·(∇×A) = 0, always. Now, there is another theorem that we can now apply, which says the following: if the divergence of a vector field, say D, is zero – so if ∇·D = 0 – then D will be the curl of some other vector field C, so we can write: D = ∇×C. When we now apply this to our ∇·B = 0 equation, we can confidently state the following: if ∇·B = 0, then there is an A such that B = ∇×A. We can also write this as follows: ∇·B = ∇·(∇×A) = 0 and, hence, B = ∇×A. Now, it’s this vector field A that is referred to as the (magnetic) vector potential, and so that’s what we want to talk about here. As a start, it may be good to write out all of the components of our B = ∇×A vector:

B_x = ∂A_z/∂y − ∂A_y/∂z
B_y = ∂A_x/∂z − ∂A_z/∂x
B_z = ∂A_y/∂x − ∂A_x/∂y

In that 2015 post, I answered the question as to why we’d need this new vector field in a way that wasn’t very truthful: I just said that, in many situations, it would be more convenient – from a mathematical point of view, that is – to first find A, and then calculate the derivatives above to get B. Now, Feynman says the following about this argument in his Lecture on the topic: “It is true that in many complex problems it is easier to work with A, but it would be hard to argue that this ease of technique would justify making you learn about one more vector field. […] We have introduced A because it does have an important physical significance: it is a real physical field.” Let us follow his argument here.

Quantum-mechanical interference effects

Let us first remind ourselves of the quintessential electron interference experiment illustrated below.
[For a much more modern rendering of this experiment, check out the Tout Est Quantique video on it. It’s much more amusing than my rather dry exposé here, but it doesn’t give you the math.] We have electrons, all of (nearly) the same energy, which leave the source – one by one – and travel towards a wall with two narrow slits. Beyond the wall is a backstop with a movable detector which measures the rate, which we call I, at which electrons arrive at a small region of the backstop at the distance x from the axis of symmetry. The rate (or intensity) I is proportional to the probability that an individual electron that leaves the source will reach that region of the backstop. This probability has the complicated-looking distribution shown in the illustration, which we understand is due to the interference of two amplitudes, one from each slit. So we associate the two trajectories with two amplitudes, which Feynman writes as A1·e^(iΦ1) and A2·e^(iΦ2) respectively. As usual, Feynman abstracts away from the time variable here because it is, effectively, not relevant: the interference pattern depends on distances and angles only. Having said that, for a good understanding, we should – perhaps – write our two wavefunctions as A1·e^(i(ωt + Φ1)) and A2·e^(i(ωt + Φ2)) respectively. The point is: we’ve got two wavefunctions – one for each trajectory – even if it’s only one electron going through the slit: that’s the mystery of quantum mechanics. 🙂 We need to add these waves so as to get the interference effect: R = A1·e^(i(ωt + Φ1)) + A2·e^(i(ωt + Φ2)) = [A1·e^(iΦ1) + A2·e^(iΦ2)]·e^(iωt). Now, we know we need to take the absolute square of this thing to get the intensity – or probability (before normalization). The absolute square of a product is the product of the absolute squares of the factors, and we also know that the absolute square of any complex number is just the product of the same number with its complex conjugate. Hence, the absolute square of the e^(iωt) factor is equal to |e^(iωt)|² = e^(iωt)·e^(−iωt) = e⁰ = 1.
So the time-dependent factor doesn't matter: that's why we can always abstract away from it. Let us now take the absolute square of the [A1·e^(iΦ1) + A2·e^(iΦ2)] factor, which we can write as:

|R|² = |A1·e^(iΦ1) + A2·e^(iΦ2)|² = (A1·e^(iΦ1) + A2·e^(iΦ2))·(A1·e^(−iΦ1) + A2·e^(−iΦ2)) = A1² + A2² + 2·A1·A2·cos(Φ1−Φ2) = A1² + A2² + 2·A1·A2·cosδ, with δ = Φ1−Φ2

OK. This is probably going a bit quick, but you should be able to figure it out, especially when remembering that e^(iΦ) + e^(−iΦ) = 2·cosΦ and cosΦ = cos(−Φ). The point to note is that the intensity is equal to the sum of the intensities of both waves plus a correction factor, which is equal to 2·A1·A2·cos(Φ1−Φ2) and, hence, ranges from −2·A1·A2 to +2·A1·A2. Now, it takes a bit of geometrical wizardry to be able to write the phase difference δ = Φ1−Φ2 as δ = 2π·a/λ = 2π·(x/L)·d/λ —but it can be done. 🙂 Well… […] OK. 🙂 Let me quickly help you here by copying another diagram from Feynman – one he uses to derive the formula for the phase difference on arrival between the signals from two oscillators. A1 and A2 are equal here (A1 = A2 = A), so that makes the situation below somewhat simpler to analyze. However, instead, we have the added complication of a phase difference (α) at the origin – which Feynman refers to as an intrinsic relative phase. When we apply the geometry shown above to our electron passing through the slits, we should, of course, equate α to zero. For the rest, the picture is pretty similar to the two-slit picture. The distance a in the two-slit set-up – i.e. the difference in the path lengths for the two trajectories of our electron(s) – is, obviously, equal to the d·sinθ factor in the oscillator picture. Also, because L is huge as compared to x, we may assume that trajectory 1 and 2 are more or less parallel and, importantly, that the triangles in the picture – small and large – are right-angled. Now, trigonometry tells us that sinθ is equal to the ratio of the opposite side of the triangle and the hypotenuse (i.e. the longest side of a right triangle).
The opposite side of the triangle is x and, because x is very, very small as compared to L, we may approximate the length of the hypotenuse with L. [I know—a lot of approximations here, but… Well… Just go along with it for now…] Hence, we can equate sinθ to x/L and, therefore, a = d·sinθ = d·x/L. Now we need to calculate the phase difference. How many wavelengths do we have in a? That's simple: a/λ, i.e. the total distance divided by the wavelength. Now these wavelengths correspond to 2π·a/λ radians (one cycle corresponds to one wavelength which, in turn, corresponds to 2π radians). So we're done. We've got the formula: δ = Φ1−Φ2 = 2π·a/λ = 2π·(x/L)·d/λ. Huh? Yes. Just think about it. I need to move on. The point is: when δ is equal to zero, the two waves are in phase, and the probability will have a maximum. When δ = π, then the waves are out of phase and interfere destructively (cosπ = −1), so the intensity (and, hence, the probability) reaches a minimum. So that's pretty obvious – or should be pretty obvious if you've understood some of the basics we presented in this blog. We now move to the non-standard stuff, i.e. the Aharonov-Bohm effect(s).

Interference in the presence of an electromagnetic field

In essence, the Aharonov-Bohm effect is nothing special: it is just a law – two laws, to be precise – that tells us how the phase of our wavefunction changes because of the presence of a magnetic and/or electric field. As such, it is not very different from previous analyses and presentations, such as those showing how amplitudes are affected by a potential − such as an electric potential, or a gravitational field, or a magnetic field − and how they relate to a classical analysis of the situation (see, for example, my November 2015 post on this topic). If anything, it's just a more systematic approach to the topic and – importantly – an approach centered around the use of the vector potential A (and the electric potential Φ).
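Putting the intensity formula and this expression for δ together, the pattern on the backstop is I(x) = A1² + A2² + 2·A1·A2·cos(2π·(x/L)·d/λ). A quick sketch in Python, with illustrative numbers of my own choosing (equal amplitudes, 10 μm slit spacing, 1 m to the backstop, a 50 pm electron wavelength):

```python
import math

A1, A2 = 1.0, 1.0              # equal amplitudes (assumed)
d, L, lam = 10e-6, 1.0, 50e-12 # slit spacing, distance to backstop, wavelength (made-up)

def intensity(x):
    delta = 2 * math.pi * (x / L) * d / lam  # phase difference δ = 2π·(x/L)·d/λ
    return A1**2 + A2**2 + 2 * A1 * A2 * math.cos(delta)

print(intensity(0.0))                # δ = 0: in phase, maximum (A1 + A2)² = 4
print(intensity(0.5 * lam * L / d))  # δ = π: out of phase, minimum ≈ (A1 − A2)² = 0
```

The maxima sit where δ is a multiple of 2π, i.e. at x = n·λ·L/d, and the minima halfway in between – which is the fringe pattern the illustration shows.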
Let me give you the formulas:

phase change (magnetic) = (q/ħ)·∫ A·ds along the trajectory
phase change (electric) = −(q/ħ)·∫ Φ·dt along the trajectory

The first formula tells us that the phase of the amplitude for our electron (or whatever charged particle) to arrive at some location via some trajectory is changed by an amount that is equal to the integral of the vector potential along the trajectory times the charge of the particle over Planck's constant. I know that's quite a mouthful but just read it a couple of times. The second formula tells us that, if there's an electrostatic field, it will produce a phase change given by the negative of the time integral of the (scalar) potential Φ. These two expressions – taken together – tell us what happens for any electromagnetic field, static or dynamic. In fact, they are really the (two) law(s) replacing the q(v×B) expression in classical mechanics. So how does it work? Let me further follow Feynman's treatment of the matter—which analyzes what happens when we'd have some magnetic field in the two-slit experiment (so we assume there's no electric field: we only look at some magnetic field). We said Φ1 was the phase of the wave along trajectory 1, and Φ2 was the phase of the wave along trajectory 2. Without magnetic field, that is, so B = 0. Now, the (first) formula above tells us that, when the field is switched on, the new phases will be the following:

Φ1 = Φ1(B = 0) + (q/ħ)·∫(1) A·ds
Φ2 = Φ2(B = 0) + (q/ħ)·∫(2) A·ds

Hence, the phase difference δ = Φ1−Φ2 will now be equal to:

δ = δ(B = 0) + (q/ħ)·∫(1) A·ds − (q/ħ)·∫(2) A·ds

Now, we can combine the two integrals into one that goes forward along trajectory 1 and comes back along trajectory 2. We'll denote this path as 1-2 and write the new integral as follows:

δ = δ(B = 0) + (q/ħ)·∮(1-2) A·ds

Note that we're using a notation here which suggests that the 1-2 path is closed, which is… Well… Yet another approximation of the Master. In fact, his assumption that the new 1-2 path is closed proves to be essential in the argument that follows the one we presented above, in which he shows that the inherent arbitrariness in our choice of a vector potential function doesn't matter, but… Well… I don't want to get too technical here.
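The closed-path integral has a nice property we can check numerically: by Stokes' theorem, ∮ A·ds around any loop equals the flux of B enclosed by that loop – even where B itself vanishes. Here is a sketch using the textbook vector potential outside an idealized, infinitely long solenoid (the flux value and loop radii are made-up numbers of mine):

```python
import math

Phi = 2.5e-3  # flux enclosed by the solenoid (Wb) — an assumed value

# Vector potential outside an idealized infinite solenoid:
# A = Φ/(2πr)·φ̂, i.e. A(x, y) = Φ/(2πr²)·(−y, x), with B = 0 out there
def A(x, y):
    r2 = x * x + y * y
    return (-Phi * y / (2 * math.pi * r2), Phi * x / (2 * math.pi * r2))

# Closed line integral ∮ A·ds around a circle of given radius (midpoint rule)
def loop_integral(radius, n=20000):
    total = 0.0
    for k in range(n):
        t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
        tm = (t0 + t1) / 2
        ax, ay = A(radius * math.cos(tm), radius * math.sin(tm))
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        total += ax * dx + ay * dy
    return total

print(loop_integral(0.1))  # ≈ Φ = 0.0025
print(loop_integral(1.7))  # ≈ Φ again: the integral only sees the enclosed flux

# The corresponding extra phase difference δ = (q/ħ)·∮A·ds = q·Φ/ħ for an electron:
e, hbar = 1.602176634e-19, 1.054571817e-34
print(e * Phi / hbar)  # an enormous number of radians, even for a modest flux
```

Note the punchline for the interference experiment: the electron trajectories run through a region where B = 0, yet the loop integral of A – and, hence, the phase difference – is set by the flux they enclose.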
Let me conclude this post by noting we can re-write our grand formula above in terms of the flux of the magnetic field B – the loop integral of A is, by Stokes' theorem, just the flux of B through the loop:

δ = δ(B = 0) + (q/ħ)·[flux of B between the two trajectories]

So… Well… That's it, really. I'll refer you to Feynman's Lecture on this matter for a detailed description of the 1960 experiment itself, which involves a magnetized iron whisker that acts like a tiny solenoid—small enough to match the tiny scale of the interference experiment itself. I must warn you though: there is a rather long discussion in that Lecture on the 'reality' of the magnetic and the vector potential field which – unlike Feynman's usual approach to discussions like this – is rather philosophical and partially misinformed, as it assumes there is zero magnetic field outside of a solenoid. That's true for infinitely long solenoids, but not true for real-life solenoids: if we have some A, then we must also have some B, and vice versa. Hence, if the magnetic field (B) is a real field (in the sense that it cannot act on some particle from a distance through some kind of spooky 'action-at-a-distance'), then the vector potential A is an equally real field—and vice versa. Feynman admits as much as he concludes his rather lengthy philosophical excursion with the following conclusion (out of which I already quoted one line in my introduction to this post): "This subject has an interesting history. The theory we have described was known from the beginning of quantum mechanics in 1926. The fact that the vector potential appears in the wave equation of quantum mechanics (called the Schrödinger equation) was obvious from the day it was written. That it cannot be replaced by the magnetic field in any easy way was observed by one man after the other who tried to do so. This is also clear from our example of electrons moving in a region where there is no field and being affected nevertheless.
But because in classical mechanics A did not appear to have any direct importance and, furthermore, because it could be changed by adding a gradient, people repeatedly said that the vector potential had no direct physical significance—that only the magnetic and electric fields are "real" even in quantum mechanics. It seems strange in retrospect that no one thought of discussing this experiment until 1956, when Bohm and Aharonov first suggested it and made the whole question crystal clear. The implication was there all the time, but no one paid attention to it. Thus many people were rather shocked when the matter was brought up. That's why someone thought it would be worthwhile to do the experiment to see if it was really right, even though quantum mechanics, which had been believed for so many years, gave an unequivocal answer. It is interesting that something like this can be around for thirty years but, because of certain prejudices of what is and is not significant, continues to be ignored."

Well… That's it, folks! Enough for today! 🙂
Tag Archives: Theoretical physics

Minkowski's Spacetime | Relativity 23 (June 15, 2016)

Einstein's heart palpitations | Relativity 17 (May 22, 2016)

Einstein's happiest thought | Relativity 14 (May 11, 2016)

How did Newton figure out gravity? | Relativity 13 (May 08, 2016)

Space contraction and time dilation | Relativity 6 (March 30, 2016)

The Unlikely Correctness of Newton's Laws (April 30, 2014)
Do moving objects exhaust? Does the Moon accelerate? How strong is the gravity pull of the Moon on the Earth compared to that of the Earth on the Moon? While we've all learned Newton's laws of motion, many of us would get several of these questions wrong. That's not so surprising, as Newton's laws are deeply counter-intuitive. By stressing their weirdness with Veritasium videos, this article dives into a deep understanding of classical mechanics.

Spacetime of General Relativity (June 02, 2013)
Most popular science explanations of the theory of general relativity are very nice-looking. But they are also deeply misleading. This article presents a more accurate picture of the spacetime envisioned by Albert Einstein.

Does God play dice? (May 28, 2013)
For Albert Einstein, the answer is no. But what did he mean? Has the greatest theoretical physicist of all time really missed the bandwagon of quantum physics? What are the real issues of the controversy that opposed him to the Copenhagen School (Bohr, Heisenberg…)? Back to the physics of the early twentieth century, its history, philosophy and ideas.

Dynamics of the Wave Function: Heisenberg, Schrödinger, Collapse (March 04, 2013)
On one hand, the dynamics of the wave function can follow the Schrödinger equation and satisfy simple properties like the Heisenberg uncertainty principle. But on the other hand, it can be probabilistic. This doesn't mean that it's totally unpredictable, since the unpredictability is amazingly predictable. Find out how these two dynamics work!

The Essence of Quantum Mechanics (January 21, 2013)
Quantum mechanics is the most accurate and tested scientific theory. Its applications to real life are countless, as all new technologies are based on its principles. Yet, it's also probably the most misunderstood theory, because it constantly contradicts common sense. This article presents the most important features of the theory.

Spacetime of Special Relativity (October 14, 2012)
Einstein's theory of relativity is the best-known breakthrough of the history of science. The reason for that isn't only the accuracy of the theory, but also and mainly its beauty. As Einstein once said: "Most of the fundamental ideas of science are essentially simple, and may, as a rule, be expressed in a language comprehensible to everyone." This is what the article aims at showing: Einstein's simple ideas of special relativity and their beauty.

The forces of Nature: from Newton to String Theory (September 27, 2012)
We live in a very complex universe, and as far as one can remember, men have always wanted to know everything about it: where do we come from? What is our world made of? To answer these questions, sciences were developed, and among them, theoretical physics: using mathematics to describe certain aspects of Nature. From […]
A Scientific Free-Will: In Opposition To Deterministic Free-Will

This is a review of the points covered in this paper: Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates by Björn Brembs. Thanks to Bruno for the link. The paper dismisses the 'real' (metaphysical) free-will of dualism and theism (the soul?), and aims instead at giving a scientific account of free-will – one that is not constrained to the completely illusory free-will suggested by determinism. It is not as clear as this paper claims that the universe is not deterministic. It might not be, but we as human animals have a specific difficulty in establishing this. I'm quite happy to say we don't know what ultimate reality is, if there is an ultimate reality. I'm quite happy to say we can't be sure that the universe is actually deterministic. But all of science seems to be based on determinism. Well, at least it depends on causality. There is the notion of causality without determinism, but this seems a bit of a cheat. In most respects, causation and determinism can be used interchangeably. This paper seems to be based on the conflation of the underlying state of affairs (the universe is deterministic or not) and what humans can deduce from it (to humans the universe is indeterminate). So throughout I'll try to distinguish between these: Determinism, non-determinism – the extent to which fundamentally the universe is deterministic in the sense that any event is caused by one or more other events in a causal chain, so that the outcome is determined by prior events. This is really an ontological position, about what the universe is and how it behaves. There are variations. Determinacy, indeterminacy – the extent to which any one part of a deterministic system can or cannot know all about (and possibly may know nothing about) some other part.
This determinacy is essentially an epistemological notion, in human terms, or an informational notion in a more general sense.

2 – The rejection of determinism

I don't see any reason to suppose that current science demonstrates that the universe is non-deterministic. Section 2 is not very convincing in its rejection of determinism since at least some of the examples specifically do not refute determinism. The 'chance' aspects of quantum mechanics are not unanimously agreed to be non-deterministic – though our limited understanding of it may give us the impression it is non-deterministic. Double-slit experiments do not speak to determinism or non-determinism – they only imply that our models (wave v particle) are insufficient alone to describe such phenomena. Even Heisenberg's uncertainty principle does not refute determinism as clearly as some people make out. Determinism isn't about being able to actually measure what will happen; it's about what does happen as the result of an event that occurs. There are all sorts of details that are confusing about the 'measurement' of a particle's position and velocity, to do with what is actually doing the measuring (and lots of nonsense about it being a conscious being, as opposed to mere interaction with anything, including other particles). The real issues of determinism are not to do with quantum mechanics. There's the possibility that quantum-mechanical phenomena are deterministic. The problem is more fundamental than current science can explain. For example, if this universe is deterministic, then was it, too, 'caused', or is it just deterministic from the Big Bang onwards? An infinite regress of deterministic universes seems to be unpalatable for some reason, but I can't figure out why. It's not as if we have direct experience of anything outside our universe to come to any opinion about whether infinite regress is de rigueur for universal creation systems or not. We're simply in the dark on all of this.
So, what we are left with is that the universe appears deterministic, and much of our classical science uses that fact – and brain science is classical science down to the level of molecules and the chemistry of the brain. There's no convincing argument that quantum mechanics is truly non-deterministic, as opposed to simply being indeterminate. Applying quantum ideas to brain science is just as much a shot in the dark as the 'metaphysical' free-will it is supposed to avoid. Deterministic models are sufficient for brain science, until such time as real evidence to the contrary appears.

3. Behavioural variability as an adaptive trait

Some scholars have resorted to quantum uncertainty in the brain as the solution, providing the necessary discontinuity in the causal chain of events. This is not unrealistic, as there is evidence that biological organisms can evolve to take advantage of quantum effects. For instance, plants use quantum coherence when harvesting light in their photosynthetic complexes.

There are forms of indeterminism that are still causal. I'm not sure where this discontinuity in the causal chain might be. This does nothing but introduce the above uncertainties in our understanding of physics, and it doesn't refute determinism at this level. If some quantum event occurs in a plant, and that causes a molecule in the plant to absorb some light with the consequential result of photosynthesis in action, then that quantum event 'caused', 'determined' that the reaction would take place. What quantum uncertainty fails to do in such cases is explain how anything is remotely certain or predictable. But to attribute this to free-will is no different than talking about sodium ions.
Whether one particular sodium ion makes it through a sodium channel in a particular neuron will 'determine' whether that neuron fires or not – and if that particular neuron constitutes a tipping point in some micro-decision that the brain makes, then that micro-decision will fire or not, and that in turn will contribute to the way a larger decision occurs. Quantum events are so far below the level at which we can analyse human decisions that for any particular decision they are not worth considering. There is sufficient indeterminacy in any classical assessment of the brain without having to look for quantum effects to explain indeterminacy. Quantum events are a fundamental part of electronics, but you can bet that most proponents of free-will very specifically do not attribute consciousness and free-will to electronic systems – i.e. computers.

Moreover, and more importantly, the pure chance of quantum indeterminism alone is not what anyone would call 'freedom'. 'For surely my actions should be caused because I want them to happen for one or more reasons rather than happen by chance'. This is precisely where the biological mechanisms underlying the generation of behavioural variability can provide a viable concept of free will.

Part of the problem here is that this paper is essentially re-defining free-will in a materialistic scientific sense, yet it still requires a 'degree' of freedom to describe personal agency. But on the whole this paper still makes 'real' free-will just as illusory as is described by determinism. 'Unpredictable' to whom? To the animals that are in the middle of the evolutionary process. The selection pressures are deterministic pressures that drive individual animals' behaviour, but those behaviours can still be adequately indeterminate to other animals, and, to a great extent, to themselves. This is a fine example of conflating non-determinism with the indeterminacy of knowledge (information) to an individual entity.
Escape behaviours are analysed at a macro level of a complex individual, and at best the response of bulk areas of the brain of a complex individual. The C-start example is illustrating the causal complexity of events – the snake does 'cause' or 'determine' that the fish responds, to the snake's advantage. This is hardly a refutation of determinism. Note that if the Mauthner cell were to respond to 'randomness' then its response would be non-deterministic, and the fish would not respond with the C-start behaviour so predictably – the snake has learned (in the evolutionary sense) to take advantage of that predictable response, the determinacy of the outcome of an action. The whole notion of non-determinism is its own demise, or else nothing would be predictable at all. The unpredictability of behaviour we find in biological systems can be sufficiently described by the indeterminacy of complex classical systems. All the examples in section 3 are examples of how deterministic systems are subject to influences that to those systems are indeterminate; so looked at in isolation it looks like the system has some unpredictability. But that does not mean it isn't part of some wider system where all the component events are 'determined' by prior events. In evolutionary terms it is put as random mutation and natural selection. But here the 'random' mutation is only apparently random to us, because of the vast complexity and the inaccessibility of the DNA that is mutating. But for any DNA molecule that mutates there will be an obvious causal event at the molecular level that caused that molecule to mutate (e.g. chemically driven mutation), or it might result from some atomic decay process, maybe triggered by a passing subatomic particle. Some of these physical events at this level are at the forefront of particle physics, but do not as yet refute a deterministic mutation, and so do not imply that evolution is a non-deterministic process.
The best adapted survive (the natural selection bit) because of causal events in their environment (their environment includes their own bodies; and brains, for entities that have them).

4. Brains are in control of variability

These observations suggest that there must be mechanisms by which brains control the variability they inject into their motor output. Some components of these mechanisms have been studied. For instance, tethered flies can be trained to reduce the range of the variability in their turning manoeuvres.

Well, then the training has causally determined that their behaviour should change, by mechanisms relating to how all animals with brains learn (see Eric Kandel and others on memory, learning, conditioning). Variability is not shown to be non-deterministic by this section. In fact it gives some good examples to support the deterministic world view – even though the determinism is many levels removed, to the extent that most animal behaviour patterns are statistical outcomes of extremely complex causal systems.

5. What are the neural mechanisms generating behavioural variability?

Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions and amplifying them exponentially [63]. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres.

Instability is not non-determinism. It just means that a particular system or part of a system is finely tuned to respond to (i.e. be caused to change by) small changes in its inputs (its environment). It is still a deterministic system, just less predictable to other systems nearby, particularly those trying to predict the outcome based on immediate stimulus alone. Of course there are all the precursor developments that put the system into that unstable state.
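The 'amplifying exponentially' point is just sensitive dependence on initial conditions, and a toy one-dimensional map already shows it – this is an illustration of the mathematics only, not a model of a fly brain. Two trajectories of the chaotic logistic map that start a trillionth apart end up completely different, while the map itself stays perfectly deterministic:

```python
def logistic(x, r=4.0):
    # the chaotic logistic map: x → r·x·(1 − x)
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-12   # two initial conditions differing by 10^-12
max_sep = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

print(max_sep)  # the 10^-12 difference gets amplified by many orders of magnitude
```

Every step is a simple deterministic rule, yet after a few dozen iterations the two trajectories are, for any practical observer, unrelated – unpredictable-in-practice without being non-deterministic, which is exactly the distinction being argued here.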
The various learning and conditioning examples given by Eric Kandel illustrate the variability of neuronal systems depending on the frequency and type of stimulus. This does not mean that within these neurons the processes are not deterministic.

6. Determinism versus indeterminism is a false dichotomy

Together with Hume, most would probably subscribe to the notion that ''tis impossible to admit of any medium betwixt chance and an absolute necessity' [75]. For example, Steven Pinker (1997, p. 54) concurs that 'A random event does not fit the concept of free will any more than a lawful one does, and could not serve as the long-sought locus of moral responsibility' [76]. However, to consider chance and lawfulness as the two mutually exclusive sides of our reality is only one way to look at the issue.

The problem here is that this paper is confusing 'determinism', the underlying mechanism that 'drives' events, with 'indeterminacy', the inability of any system (including but not restricted to humans) to 'determine' or predict what a particular outcome will be.

The unstable nonlinearity, which makes brains exquisitely sensitive to small perturbations, may be the behavioural correlate of amplification mechanisms such as those described for the barrel cortex [74]. This nonlinear signature eliminates the two alternatives, which both would run counter to free will, namely complete (or quantum) randomness and pure, Laplacian determinism.

No, it does not! The stability or instability of particular mechanisms only relates to how sensitive a system is to being 'determined' to change by deterministic precursors, its stimulus inputs, and its current state in detail. This has been a problem for psychology – the treatment of the brain as a black box. Various stimuli can elicit the same behaviour, and the same stimuli can elicit different behaviour – even in the same subject – because there is insufficient knowledge about what's going on inside.
These represent opposite and extreme endpoints in discussions of brain functioning, which hamper the scientific discussion of free will.

They only hamper the science in that many philosophers and theists want there to be some magical 'real' free-will that is outside the causal reach of a deterministic universe, and those philosophers and theists are in some cases getting involved in the debate (Bill Klemm being an example of a theist scientist who lets his theism dictate his view in this regard). So this issue of the nature of free-will at a more fundamental level is important, and ongoing.

Instead, much like evolution itself, a scientific concept of free will comes to lie between chance and necessity, with mechanisms incorporating both randomness and lawfulness.

Here the term 'chance' can just mean trivial 'indeterminacy', but it does not refute philosophical determinism, upon which all science is based.

The Humean dichotomy of chance and necessity is invalid for complex processes such as evolution or brain functioning.

In the sense that the distinction is unimportant once the general notion of determinism is accepted and the science moves on, regardless of what some philosophers and theists want to be the case. Brain science can proceed with a deterministic model – it can hardly be said that this model has been exhausted.

Such phenomena incorporate multiple components that are both lawful and indeterminate.

This seems more correct, using the term 'indeterminate'. It can be said that it is all lawful (obeying physical laws) and as such any part of it, and the interaction of that part with any other, produces a determinate outcome; but we cannot determine that outcome, primarily because of the complexity.

This breakdown of the determinism/indeterminism dichotomy …

The dichotomy is not determinism/indeterminacy, but determinism/non-determinism. It's perfectly reasonable in a deterministic universe for it to have parts that are indeterminate to other parts – i.e.
one part cannot 'know' about another part until such time as the second part impacts on ('determines' change in) the first part.

Stochasticity is not a nuisance, or a side effect of our reality. Evolution has shaped our brains to implement 'stochasticity' in a controlled way, injecting variability 'at will'. Without such an implementation, we would not exist.

Yes, fine – if 'stochastic' just means unpredictably variable to us. This is not refuting determinism.

A scientific concept of free will cannot be a qualitative concept. The question is not any more 'do we have free will?'; the question is now: 'how much free will do we have?'; 'how much does this or that animal have?'. Free will becomes a quantitative trait.

This is really about the extent to which an animal (or any system) is autonomous, in the sense of the extent to which complex processes inside it (mostly its brain, for an animal) 'determine' its behaviour. A more autonomous system is less immediately dependent on its environment for its behaviour than a less autonomous one. But both are completely deterministic in that all the processes on the inside and outside are governed by deterministic physical laws – always depending of course on the extent to which low-level determinism actually does prevail.

7. Initiating activity: actions versus responses

This is more about the extent to which systems are indeterminate, not about the underlying determinism.

8. Freedom of choice

For instance, isolated leech nervous systems chose either a swimming motor programme or a crawling motor programme to an invariant electrical stimulus [78–80]. Every time the stimulus is applied, a set of neurons in the leech ganglia goes through a so far poorly understood process of decision-making to arrive either at a swimming or at a crawling behaviour.
The stimulus situation could not be more perfectly controlled than in an isolated nervous system, excluding any possible spurious stimuli reaching sensory receptors unnoticed by the experimenter. In fact, even hypothetical 'internal stimuli', generated somehow by the animal, must in this case be coming from the nervous system itself, rendering the concept of 'stimulus' in this respect rather useless.

This is expressing only how difficult it is to account for actions within neurons. Consider the inner action of a neuron: all its internal processes controlling the expression of neurotransmitters; the migration of triggers up and down the inner pathways, such as those determining gene expression and inhibition; all the outside details of what allows the action potential to fire; the stimuli that determine how and when it grows synapses in the local learning and memory process; and so on. A neuron is already a complex system. It doesn't matter how precise an external stimulus may be, the subsequent outcomes will be variable. But that does not mean that the countless molecular events going on inside and around the neuron are not deterministic.

Yet, under these 'carefully controlled experimental circumstances, the animal behaves as it damned well pleases' (Harvard Law of Animal Behaviour) [34].

This itself is just an expression of the indeterminacy of the measured system, not that it actually does have 'real' free-will, or that the underlying physics is non-determinate.

Seymour Benzer, one of the founders of Neurogenetics, captured this phenomenon in the description of his first phototaxis experiments in 1967: ' … if you put flies at one end of a tube and a light at the other end, the flies will run to the light. But I noticed that not every fly will run every time. If you separate the ones that ran or did not run and test them again, you find, again, the same percentage will run. But an individual fly will make its own decision'.

Distinguish 'real' free-will from indeterminacy.
That each fly 'will make its own decision' is an expression of this indeterminacy, not only in the minds of the experimenters, but also to the fly. The fly does not 'know' or decide of its own 'real' free-will – it simply 'behaves' in accordance with the multitude of complex deterministic operations that are going on inside its tiny little brain, and within that brain's 100,000 neurons. One hundred thousand neurons in a fruit fly! How the hell is a simple light-box experiment supposed to expose the determinism or non-determinism of the underlying countless number of molecules within each of those neurons to an extent that would make the fly behaviour 'non-determinate'? The behaviour is only 'indeterminate' due to this complexity. All these experiments are bulk-property statistical experiments, at least on some scale. When trying to measure the behaviour of flies with a light box, the outcome is bound to be a statistical measure of the indeterminate behaviour of countless deterministic events at the scale of the neuron, and below that at the molecule, and below that of the atomic and subatomic activity.

John Searle has described free will as the belief 'that we could often have done otherwise than we in fact did' [92]. Taylor & Dennett cite the maxim 'I could have done otherwise' [93]. Clearly, leeches and flies could and can behave differently in identical environments.

But the crucial point here is that they could not know that they could have done otherwise, or that they would have done. In some cases we may lose nearly all our autonomy. A man falls off a cliff and smashes on the rocks below. I say, "Wow, once he started falling, did he have to die?" and John Searle says, "He could have done otherwise." – What? He could have used his free-will to fly back up to the cliff? We acknowledge some obvious restrictions to our free-will.
In other cases, when a bunch of neurons spark around in our heads and ‘decide’ to raise our left hand or right hand, the notion that we ‘could have done otherwise’ doesn’t really capture the internal complexity of that event, and certainly doesn’t demonstrate ‘real’ free-will, and certainly doesn’t refute determinism. While some argue that unpredictable (or random) choice does not qualify for their definition of free will [2], it is precisely the freedom from the chains of causality that most scholars see as a crucial prerequisite for free will. This confuses indeterminacy, chance (whatever that is) and ‘real’ free-will – unless of course we re-define free-will just to mean the outcomes of complex deterministic yet indeterminate systems.

9. Consciousness and freedom

It thus is no coincidence that we all feel that we possess a certain degree of freedom of choice. That is because we cannot determine all the micro-deterministic events that drive our internal decision-making processes. It’s quite plausible, and consistent with classical deterministic physics, that a system that is limited-self-aware (has some data about itself, but cannot monitor most of itself, particularly its central control system) would have some representation of itself as spontaneously making decisions. It makes sense that depriving humans of such freedom is frequently used as punishment and the deprived do invariably perceive this limited freedom as undesirable. We only feel this is the case because we have innate (determined by evolution and development) physiological drives that emerge as emotional desires to have freedom of motion. One feature that distinguishes most animals from plants is that they must move to survive – to hunt and to avoid being hunted.
It seems a good evolutionary adaptation to make restriction of movement an undesirable situation that the whole body fights against – again expressed in some animals, particularly humans, as an emotional discomfort in having freedom of movement restricted. But again, this does not refute determinism. The concept that we can decide to behave differently even under identical circumstances underlies not only our justice systems. The circumstances are never the same! Every time an organism responds to some stimulus it changes the organism, which, in whatever minor degree it may be, has the potential to change the response next time the stimulus is applied. And all the time, time is ticking by and the environment is changing. But be careful, because this link to justice is part of the problem – our illusion that we have ‘real’ free-will can lead to injustice by attributing all responsibility only to the individual. Thankfully there is at least some consideration of extenuating circumstances in many cases – at least in sentencing if not in judgement of guilt. Electoral systems, our educational systems, parenting and basically all other social systems also presuppose behavioural variability and at least a certain degree of freedom of choice. This caters to our desire for freedom in that it allows our complex brains a psychological freedom. Many people do question the extent to which democracy implies real freedom (and even question the notion of freedom). It may be that its greatest importance is that it makes us feel free, so satisfying our psychological and physiological desire for freedom of movement – which, translated into the more abstract terms used by humans, means political freedom. The data reviewed above make clear that the special property of our brain that provides us with this freedom surely is independent of consciousness. Consciousness is not a necessary prerequisite for a scientific concept of free will.
This is a good point – but note that ‘free-will’ here is the re-defined free-will, which from my perspective is still subject to deterministic physical mechanisms. But I agree they are distinct. A system can be autonomous (free) to some degree without being conscious. A tossed stone is free to fly through the air and fall to the ground – but of course this then begs the question of what the ‘free’ in free-will really means. Can a system lack all autonomy (not sure that can be the case) and still be conscious? Not so sure about that one. We sometimes have to work extremely hard to constrain our behavioural variability in order to behave as predictably as possible. Yes. Which shows that our will is not as free as we would like it to be. Which raises the question, for the religious: if God wanted to give us free-will, why is it so un-free from deterministic constraints? Therefore, the famous experiments of Benjamin Libet and others since then [2,4,5,98–100] only serve to cement the rejection of the metaphysical concept of free will and are not relevant for the concept proposed here. Here the ‘metaphysical concept of free-will’ is referring to what I’ve been calling ‘real’ free-will. But Libet’s experiments do not cement the rejection of ‘real’ free-will, and I’d have thought they were of interest to this re-defined ‘scientific’ free-will, in that they refer to the timing of brain events and choices made, and the conscious awareness of those choices. Conscious reflection, meditation or discussion may help with difficult decisions, but this is not even necessarily the case. The degree to which our conscious efforts can affect our decisions is therefore central to any discussion about the degree of responsibility our freedom entails, but not to the freedom itself. This is the interesting point when it comes to responsibility and the autonomy of an individual.
If two men are walking towards me and one attacks me and the other then defends me, then I can attribute immediate causation (identify the most significant entities in the causal chain of events). I can say that the action of one and not the other determined that I had a nose bleed. But there may be many prior causes that determined why I was struck by the first man, and this is where responsibility and determinism and the extent of autonomy come into play.

10. The Self and Agency

In contrast to consciousness, an important part of a scientific concept of free will is the concept of ‘self’. It is important to realize that the organism generates an action itself, spontaneously. In chemistry, spontaneous reactions occur when there is a chemical imbalance. The system is said to be far from thermodynamic equilibrium. Biological organisms are constantly held far from equilibrium; they are considered open thermodynamic systems. However, in contrast to physical or chemical open systems, some of the spontaneous actions initiated by biological organisms help keep the organism away from equilibrium. Every action that promotes survival or acquires energy sustains the energy flow through the open system, prompting Georg Litsche to define biological organisms as a separate class of open systems (i.e. ‘subjects’; [101]). Because of this constant supply of energy, it should not be surprising to scientists that actions can be initiated spontaneously and need not be released by external stimuli. In controlled situations where there cannot be sufficient causes outside the organism to make the organism release the particular action, the brain initiates behaviour from within, potentially using a two-stage process as described above. The boy ceases to play and jumps up. This sort of impulsivity is a characteristic of children every parent can attest to. We do not describe the boy’s action with ‘some hidden stimuli made him jump’ – he jumped of his own accord.
The jump has all the qualities of a beginning. The inference of agency in ourselves, others and even inanimate objects is a central component of how we think. Assigning agency requires a concept of self. How does a brain know what is self? This paragraph describes the illusion of self and free-will quite well. That the processes that initiate action are sometimes predominantly, and on a small time scale maybe wholly, attributable to internal processes, is the cause of our illusion. Those internal processes are still deterministic at the lower levels, with various collections of internal events coming together to trigger an externally visible behaviour. It’s the fact that we the observers, and sometimes the subject that is performing the behaviour, are not aware of the precursor internal causes that makes it look so spontaneous to us – and this is the root of the attribution of the concept of free-will. Free-will seems more like a psychological perception than a reality. One striking characteristic of actions is that an animal normally does not respond to the sensory stimuli it causes by its own actions. The best examples are that it is difficult to tickle oneself… This is still a comparison of outcomes from deterministic sequences of events. It relates to the complexity of the system and the availability of internal feedback that makes tickling oneself different from being tickled by someone else. If you doubt this distinction then look up Dead Hand (definition 1). Thus, in order to understand actions, it is necessary to introduce the term self. The concept of self necessarily follows from the insight that animals and humans initiate behaviour by themselves. As a general convenience in many circumstances I’d agree that this is a good model for such complex systems as humans, with the degree of complex indeterminate autonomous behaviour we exhibit.
It would make no sense to assign a behaviour to an organism if any behavioural activity could, in principle, be traced back by a chain of causations to the origin of the universe. I would agree with this to some extent. In the mugger example I gave above I don’t have to trace causes back to the Big Bang to determine that the most predominant immediate cause of my pain was the mugger, not my defender. This is simple cause and effect, nothing to do with agency in the free-will sense. In my house the circuit breaker keeps tripping. I discover that unplugging my fridge prevents this, but unplugging all other appliances doesn’t. I blame the fridge and replace it. The problem persists with the new fridge. On further investigation I find the fault is with the wall socket behind the fridge – plug anything in there and the breaker trips. This illustrates the problem with the simplistic notion of free-will and personal responsibility. Sometimes we do have to look further than the immediate agent for the behaviour we witness. It might save hanging the wrong man – or in my case replacing a working fridge. An animal or human being is the agent causing a behaviour, as long as no sufficient causes for this activity to occur are coming from outside the organism. And here lies the tricky bit. Sometimes those apparently spontaneous and ‘freely-willed’ actions of animals and people are pre-determined by circumstances that conspire to form the decision-making process we are witnessing in the present. We could blame a drug user for ‘choosing’ to do drugs – but if such a person is from an abusive drug-taking family then what would we expect them to do? That a man born and raised in Iran is a strident Muslim need be no surprise to us in the West – though Christians don’t necessarily see their own route to Christianity as being so conformant to prior causes.
That a child spontaneously leaps around or shouts odd words might be an indication he has Tourette syndrome, whereas some observers might think him rude. Many undesirable human behaviours previously attributed to free-will have subsequently been attributed to specific conditions beyond the control of the subject. The free-will model – particularly the religious one associated with sinning – isn’t that helpful a model. Agency is assigned to entities who initiate actions themselves. Agency is crucial for moral responsibility. Behaviour can have good or bad consequences. It is the agent for whom the consequences matter the most and who can be held responsible for them. And so it is believed by Libertarians and fundamentalist theists alike. There are no limits to how this simplistic view of our animal nature can be used to limit our freedoms, in the very act of declaring them free.

11. Why still use the term free-will today?

By providing empirical data from invertebrate model systems supporting a materialistic model of free will, I hope to at least start a thought process that abandoning the metaphysical concept of free will does not automatically entail that we are slaves of our genes and our environment, forced to always choose the same option when faced with the same situation. I do think the ‘materialistic model of free will’ shows the ‘real’ (metaphysical) free-will model to be illusory – or at least illustrates that it is not so straightforward that we can go on attributing blame and dishing out punishment willy-nilly. But I do think we can accept quite easily that we are slaves to our genes and environment – but to an indeterminate extent that makes this particular piece of knowledge non-constraining psychologically. As put earlier in the article, though not quite expressed in this sense, it’s the indeterminate nature of games that makes them interesting.
Flipping a double-headed coin is not as interesting a game as flipping a normal coin – and in the case of the latter it makes no difference how deterministic the outcome is from the point of view of the physical laws of the universe, because to us it’s indeterminate. So, we cannot say we are always ‘forced to always choose the same option’, because we are not – the options are determined, but indeterminate to us: psychologically this is free-will. We may be constrained by determinism to make a specific choice on a specific occasion, but the same determinism, acting on subsequent states, may result in a different choice next time. This time-based indeterminacy makes arguments that ‘I could have chosen otherwise’ quite meaningless. In fact, I am confident I have argued successfully that we would not exist if our brains were not able to make a different choice even in the face of identical circumstances and history. We have not the slightest clue about rerunning history; if determinism pertains then history cannot be rerun, but if it could be, we’d end up with the same outcome. Only if the universe is truly non-deterministic could it be said that running the universe again would result in different outcomes – but doing so would result in a different universe altogether, at least one in which the person wanting to try this would not exist, and probably the earth would not exist either. If quantum indeterminacy were at work then even with the same starting state we would end up with quite a different universe. The only sense in which this notion of rerunning history and making different decisions makes sense is in fact if ‘real’ free-will were something above and beyond and independent of the otherwise deterministic material reality of the universe. In this article, I suggest re-defining the familiar free will in scientific terms rather than giving it up, only because of the historical baggage all its connotations carry with them.
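The coin-flip point – fully determined at the physical level, yet indeterminate to the player – can be made concrete with a pseudo-random number generator. What follows is a minimal illustrative sketch of my own (nothing from the article; the generator is a standard textbook linear congruential generator with commonly quoted constants): at the bit level every ‘flip’ is completely determined by the seed, so rerunning ‘history’ from the same seed replays exactly the same sequence, yet to anyone who cannot see the seed or internal state the stream looks random.

```python
# A minimal linear congruential generator (LCG). Every flip is fully
# determined by the seed, yet the stream looks random to an observer
# who cannot inspect the internal state.
def coin_flips(seed, n):
    state = seed
    flips = []
    for _ in range(n):
        # Commonly used textbook LCG constants; illustrative only.
        state = (1664525 * state + 1013904223) % 2**32
        # Use the high bit of the state, which is well mixed.
        flips.append('H' if (state >> 31) & 1 else 'T')
    return flips

# Rerunning 'history' with the same seed replays the identical sequence:
print(coin_flips(42, 10) == coin_flips(42, 10))  # prints True
# A different seed (a different starting state of the 'universe', as it
# were) yields a different, equally deterministic sequence.
```

To us, without knowledge of the seed, the flips are indeterminate; to the physics of the program they never were.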
One may argue that ‘volition’ would be a more suitable term, less fraught with baggage. However, the current connotations of volition as ‘willpower’ or the forceful, conscious decision to behave against certain motivations render it less useful and less general a term than free will. Fair points. Deciding what to call it is tricky, given the baggage. Finally, there may be a societal value in retaining free will as a valid concept, since encouraging a belief in determinism increases cheating [103]. But this is a misconception about what is implied by it, as illustrated by Jesus and Mo. And if determinism is the case, and free-will is illusory, then is it really scientifically sound to deny this because some people will entertain this misconception and think they can cheat? Look at it this way. If I decide to save a drowning man then I was driven to it, deterministically, by all my genetic, developmental and personal societal history and the current state of my brain as I weigh up the danger to myself and the pleas of the drowning man – my action is determined in that sense. But if I say, ah well, what does it matter, I cannot help leaving him to drown – then it’s determined that I do say that, and yes, this then is the determined outcome. Whichever action I take is the determined action. And it may well be that initially the acquisition of determinism as a philosophy of the mind does lead to the outcome that the man drowns. But then so could the ‘real’ free-will model, in that it can be used as an excuse too: he shouldn’t have been messing about near dangerous water, it’s his fault he’s drowning. And in all this either excuse may be a psychological mask for a fear that is preventing me saving the drowning man – my brain deterministically invented excuses either way. In the end we just do what we do. The psychological approach we have towards it is itself determined.
The point is that it is indeterminate to us, so we go on appearing to make choices, and apparently sometimes rationalising those choices later, and that rationalisation is itself a deterministic process going on in the brain. So the extent to which this entity, me, is autonomous and can make decisions seems to be down to influences that drive me one way or another. That I will change is inevitable – until my component parts die and disperse so that there is no longer any value in the concept of ‘me’. That I will change in a way that suits my biological drives is not under my control, beyond this degree of autonomy. I cannot help, it seems, but view the world this way, and go on making the case for determinism this way. Unless this entity, me, is persuaded to some other point of view – entirely deterministically, though as yet indeterminate to me. I no longer agree that ‘ ‘free will’ is (like ‘life’ and ‘love’) one of those culturally useful notions that become meaningless when we try to make them ‘scientific’ ‘ [96]. The scientific understanding of common concepts enriches our lives; it does not impoverish them, as some have argued [100]. This is why scientists have tried, and will continue to try, to understand these concepts scientifically, or at least to see where and how far such attempts will lead them. It is not uncommon in science to use common terms and later realize that the familiar, intuitive understanding of these terms may not be all that accurate. Initially, we thought atoms were indivisible. Today we do not know how far we can divide matter. Initially, we thought species were groups of organisms that could be distinguished from each other by anatomical traits. Today, biologists use a wide variety of species definitions. Initially, we thought free will was a metaphysical entity.
Today, I am joining a growing list of colleagues who are suggesting it is a quantitative, biological trait, a natural product of physical laws and biological evolution, a function of brains, maybe their most important one. Yep. That’s more like it. The trouble is, Björn, you can’t help it. You are driven to this point of view by the deterministic causal universe.

14 thoughts on “A Scientific Free-Will: In Opposition To Deterministic Free-Will”

1. All arguments seem to derive from a flawed premise that the issue should be addressed by the obvious fact that all actions have a cause. Of course they are determined by their causes, but that sheds little light on free-will, where the real issue is whether the cause of a person’s action was caused by a choice that was inevitable or was freely selected among more or less equally valid alternatives. There is too much other stuff in this post to discuss. But I would like to stress that linking free-will to invertebrate brains is nonsense. By definition, free will, if it exists, requires conscious-mind choice, which is physiologically not possible in most animals. Their actions, even when “voluntary,” are more likely to be driven by their brain anatomy and experiential programming — obviously not free-will. I recently published a comprehensive critique of the “free-will” experiments of Libet and others (Klemm, W. R. 2010. Free will debates: simple experiments are not so simple. Advances in Cognitive Psychology 6:47–65). The paper is available free, online. W. R. Klemm

2. Great critique, Ron! Thanks – it always feels good to see that at least one person has read a paper all the way through! :-) I don’t have much to disagree with. Even when rejecting Quantum Mechanics (and you’re in good company, Einstein did as well), you could still study the biology of spontaneous behavior.
As I wrote in the paper: “Because of this nonlinearity, it does not matter (and it is currently unknown) whether the ‘tiny disturbances’ are objectively random as in quantum randomness or whether they can be attributed to system, or thermal noise.” The philosopher requires principled chance, the biologist not necessarily. However, with respect to Quantum Mechanics, I usually don’t make the decisions myself. I talk to physicists, and as long as the physicists tell me the universe is non-determinate, that’s where I’m standing – with the professionals. In physics, I’m a mere amateur, so when physicists tell me that the wave function collapses whether or not someone’s watching (Hawking radiation), I go with the professionals. If the physicists tell me the math predicts genuine stochasticity for certain events, I go with the pros; if the physicists tell me the observed violations of the Bell inequalities rule out local realism, I take their word for it. As always, that doesn’t mean Quantum Mechanics ought to be rejected and some form of determinism take its place! However, given that scientific theories are notoriously difficult to raise from the dead, I’ll keep Newton buried until the physicists dig him out again :-) One minor point: with the examples of determinate behavior in evolution being non-evolutionarily stable, I was referring to evolution and not the individual animal or its brain: the evolutionary process would probably be able to predict even the most complex determinate system if it ever were important enough. That doesn’t mean the brain making the prediction ‘knows’ what it’s doing while it is doing that.
But those are minor points and you might be correct when you say: “Quantum events are so far below the level at which we can analyse human decisions that for any particular decision they are not worth considering.” One might also say that, of all the myriads of contributing factors such as genes, history, environment, etc., every factor contributes so little to the actual decision that saying the action was generated by the agent ‘himself/herself’ makes the most sense. It is precisely because of the nonlinear nature of the decision-making circuits that any of these irrelevant events may be the event that sets a cascade of events in place which causes the action, even the sodium quantum jump. Again, we’re almost in total agreement, except for the Quantum Mechanics, and that I leave to the professionals. P.S.: Thanks for the determinism/determinacy language lesson. I was unaware of that distinction (English is only my fourth language). Is this commonly accepted usage or a particular definition?

3. Hi Bill, Thanks for responding. Just a reminder, we were discussing your paper, or more specifically your post, over at Mind Blogger, and we got so far but you felt unable to continue. But I would be interested in your opinion on what you think free-will really is. What’s the mechanism involved? How does it square with the laws of conservation? For example, if free-will is something above and beyond the material, then it seems like something outside the material realm is imposing itself on the material realm when someone supposedly makes a decision that causes a motor action. Also, given that human brains are physical objects that in many respects are like animal brains, what specifically is it about human brains that introduces this free-will? I’m not sure why you think conscious mind-choice is not possible in animals if it is possible in humans. Many animals are clearly conscious, though we may agree or disagree about the extent to which various species are self-aware.
“Their actions, even when ‘voluntary’, are more likely to be driven by their brain anatomy and experiential programming — obviously not free-will.” I do agree with you here, but only because that’s the way I view human actions. So neither humans nor animals have ‘real’ free-will. Or if humans do, what specifically is different about other animals that prevents them having it?

4. Hi Björn, Thanks for responding. I take your point about letting physicists tell us about physics, but I do find that if you listen to them in conversation they do seem to be more cautious about the extent to which the universe is deterministic or not. The determinism/determinacy distinction is one that I don’t see made often in this type of debate, maybe because of the cross-over of science and philosophy, and the many ways that these terms are used. I’m surprised the distinction isn’t made more often. I don’t think it’s a language issue, since Determinism is a philosophical term, and determinacy seems to be used more in maths, computing, and science, all with slightly different meanings. Part of the problem is that what is considered a non-determinate system in one respect might be determinate in another. To me there seems to be a failure to acknowledge the degree to which complexity contributes. So, in terms of the Cashmore paper (The Lucretian swerve), even if we could establish actual determinism at all levels of physics, to humans the brain with all its neurons and molecules would still appear indeterminate, stochastic. With sufficient digits a finite pseudo-random number generator can give the appearance of stochasticity, and is treated as such in simulations – and yet at the level of bits it is entirely deterministic. The outcome is that to the science of the brain it may be that physical determinism and non-determinism are indistinguishable, and maybe that’s sufficient.

1.
“The outcome is that to the science of the brain it may be that physical determinism and non-determinism are indistinguishable, and maybe that’s sufficient.” Indeed, we don’t know how decision-making works, so we don’t know to what extent this distinction is relevant. As long as quantum mechanics (and the non-determinism it espouses) remains the state of the art in physics, we have actual ‘new’ beginnings in the universe and hence in the brain. This is all that is required, because in principle the nonlinearity of the brain would be capable of amplifying such events. To find out if this is a part of what is actually going on is my research :-)

5. I just have an objection when you talk about “material realm”. What is your definition of “matter” in the first place? I think that matter is nowadays in science a concept as vague as free will seems to be. Any concept of “matter” made from the ancient Greeks to classical physics was shattered by quantum mechanics and relativity. And if we are to say that the brain can be reduced to its interacting molecules then we must inevitably have to use quantum mechanics to describe its behavior. In some simple systems such as gases this is not necessary, because we may disregard molecular interactions. But in such a complex system as the brain, we will have no choice but to use quantum simulations to understand it. Well, there is a problem with treating the whole brain with quantum mechanics. Since it’s an open system, it cannot be treated as an independent system; we will need to treat the whole body and its environment. This is because the Hamiltonian loses its Hermitian character in an open system, unless it’s surrounded by a zero-flux surface on the electron density, which is not the case (this can be easily demonstrated using the Lagrangian approach to the Schrödinger equation). The consequence is that everything we calculate from the brain alone using quantum mechanics will be physically nonsense.
Even if we could make such a calculation, since the brain is a dynamical system, we would have to use wave-packets to describe its temporal evolution, which is a probabilistic process. For simple macroscopic objects, this temporal evolution reduces to a deterministic process. But can we really be sure that the same will happen to the brain? We have also to consider that it is at non-zero temperatures, so I wouldn’t say yes or no; I just think we are still too far from answering this. By the way, the appearance of quantum cryptography may answer one of your questions. If the universe all started again, would everything be the same? Well, since a code generated by quantum cryptography is truly random, if the universe starts all over again to the point where the first quantum code was generated, then there exists the probability that this code will not be the same as the one that happened the first time. (Unless we are able to find the “hidden” variables that some people claim to exist that make quantum mechanics a deterministic theory.) Anyways this is just philosophy; we will never know what will happen if the universe starts all over again (if it really had a “start”).

6. Hi Bruno, I agree that this is mostly philosophy, but it does determine how we view the free-will issue. There seems to be a reluctance to accept determinism as being important to brain function because we don’t like the idea that we are not really ‘free’ in some sense that is more significant than it being merely epistemologically indeterminate. Even Björn’s paper and some of those he references seem to want to avoid determinism because of the consequences, resulting in a necessity to reclaim free-will. This philosophical problem persists today, even in the minds of materialist monists, whether they are specifically determinists or non-determinists (classifying the ‘quantum’ understanding as the primary science-based non-determinism).
Which brings me to my use of ‘material realm’, merely as a general term to distinguish it from the ‘spiritual realm’. For all that the early science of people like Copernicus and Newton did to dispel the more literal understanding of the spiritual (heaven, earth, hell), it was still possible to reasonably believe in the ‘spiritual realm’, and the connection to it through man’s soul – one reason being that it seemed ludicrous to suppose that intelligence could emerge from inanimate matter. Some sort of dualism seemed obvious, with or without God. Though Darwinism gave a reason to think monistically, that mind can come from matter and matter is everything, it still remained, and does remain, philosophically plausible that everything is spiritual – solipsism, for one, cannot be refuted, and various idealisms, including a monistic supernatural-based spiritualism, are still easy presuppositions to adopt. And these monisms, and spiritually based dualism, still pertain as belief systems, even among scientists who work on evolution, particle physics and cosmology. If anything, quantum mechanics and relativity, in their dispelling of the strict classical interpretations, give support to a more spiritual inclination, since classical matter seems to evaporate before our eyes. Only the abundant, in-your-face, perpetual persistence of material experiences and the corresponding overwhelming results of science, i.e. evidence, are persuasive to a ‘material’ or ‘naturalistic’ or ‘non-supernatural’ monism. Only the total lack of reliable evidence (plenty of unreliable) drives us to abandon spiritual monism, or other idealisms. We only have to ask: what would it feel like if solipsism were true, and all this physical reality were imaginary, but overwhelmingly imaginary as it is? The answer is that it would feel just like this – and so why not just accept the way it feels and discard solipsism? But that’s still not enough for many – certainly not for the religious, and not for many atheists.
We seem to have a psychological drive to be free, and that freedom seems to require non-determinism. A certain theistic view would have it that determinism can’t be true ‘because’ it would refute free-will as given by God – almost as if determinism can’t be the case because of its inconvenience to religion. But there are atheists who have a similar perspective because they don’t like the idea that it challenges our ‘humanism’, what it is to be a human with free-will. Raymond Tallis comes to mind – he’s currently doing the rounds of the British Humanist Association with his book Aping Mankind. Tallis suffers the double dread of not only the challenge of our free-will being illusory, but also our ‘bestial’ animal nature – combining a chronic distaste for both in a stream of books that have re-hashed these views for some time. And Tallis was a respected neurologist (retired – ‘was’ referring to his profession, not the respect, which he retains). The philosophical debate over our human nature and our free-will is alive and well. And at the forefront of science we are still stuck with doing philosophy – as a guide for where to look and what to consider, and also as a discipline to challenge complacency. This is not a done deal. And nor is the determinism/non-determinism debate – with some physicists giving a nod to ‘adequate determinism’, the jury is still out on the case for or against determinism. That there is some determinism at some level still poses the main challenge to free-will. What Björn is describing as free-will still has sufficient deterministic elements in it with regard to the brain to make it difficult to accept anything other than a trivial free-will that is indistinguishable from some description of automata. To see that the debate is still alive, try the latest Sam Harris post on The Mystery of Consciousness.
Though not specifically about free-will it does address the philosophical and scientific problems relating to consciousness (though I disagree with some points). Consciousness (specifically self-awareness) appears to be required to actually recognise that we have, or appear to have, free-will. Part of the problem is that not all physicists are of the same opinion about the significance of philosophy. Some might see the chance nature of quantum events as pretty much good enough evidence that determinism is dead, while others might think not, and yet others prefer not to commit. And of course many scientists have a certain disdain for philosophy, and many philosophers feel their role is most significant on these issues, so we can’t really say that all parties explore the full possibilities. “Well, since a code generated by quantum cryptography is truly random…” What does ‘truly random’ mean? How is that actually established? Or, how is that ‘determined’? Any experiment that produces unpredictable results merely has results that we cannot predict. How do you distinguish true randomness from our epistemological limitations? Do you think that chance is ontologically true, or an epistemological phenomenon? Even when ‘random’ events occur in the brain their outcome has a deterministic effect once they occur – in a ‘bulk’ sense – so it still seems that any way you look at it human behaviour is determined, or not ‘free’, which was my main objection to the “In Opposition To Deterministic Free-Will” aspect of Björn’s paper. You really do have to refute the philosophical determinism before there is a conclusive challenge to determinism. Given this comment… “Anyways this is just philosophy, we will never know what will happen if the universe starts all over again” I’m not sure this one applies… Since we know so little about universes I don’t think we can rule out the possibility that even ‘random’ events, as our current science witnesses them, would in fact re-run just the same.
I take on board though that trying to ‘determine’ to what extent quantum events matter in the brain is part of Björn’s research. But that has a specific relevance to the nature of brain operations, and it seems to me to be too far off addressing ‘real’ free-will – by assuming that particular debate to be over. This may be the case – and Harris addresses this in terms of what entities can be conscious. Maybe there is some element of consciousness in some other species, and maybe even in non-biological systems (e.g. computers – as in main post). But that might still be the case even if determinism ruled, because we don’t understand consciousness, either scientifically or philosophically, enough to say it would not. 7. Hi Björn, I appreciate the concept of ‘new beginnings’, which would avoid the predestination view of determinism, but that still doesn’t remove the causal aspect of determinism, so that once events have occurred they are ‘determining’ further activity. Plus see my comments to Bruno on the philosophy surrounding quantum events. From your paper… Given the vast moment-to-moment variability at the physics, chemistry, biology levels I don’t think “forced to always choose the same option” is meaningful. Even in a fully deterministic universe consecutive ‘choices’ would always be different in the detail, even when we do in fact make the same choice on consecutive occasions at the macro level (e.g. raising one’s left arm or right arm). Only actually re-running the universe would address this; but for now there is the unknown of whether random events in this current run would turn out different in another run. So with this I see a problem with talking about ‘new beginnings’ – how do you distinguish, in a single-run universe, what these ‘new beginnings’ are, since looking at that level we could consider all events to be new beginnings.
And without these new beginnings I think we’re unable to address the determinism issue to the extent that it can be ruled out by the study of quantum effects in the brain. The significant point from my perspective was expressed by Sam Harris, “Whether they are predictable or not, we do not cause our causes.” It’s significant because this turns on the understanding of what this ‘we’ represents, who or what the ‘I’ is. In your terms you classify the physical entity, the body and brain, as the thing that is the most recent, local, focused conglomeration of pre-causes that cause some behaviour, and as such define free-will in that sense. But that seems just as much the behaviour of an automaton as when I might say free-will is illusory and we are destined (leave for the moment the epistemological issue of pre-destination) to do what we do. Only a ‘real’ free-will is worthy of the name ‘free-will’, so that any other kind should be labelled ‘illusory’ free-will, or ‘scientific’ free-will. Given all the above, including my comments on how we all seem to have a psychological distaste for the lack of free-will, could I put a specific question to you personally. Is your opposition (as implied by the title of your paper) to a deterministic free-will based solely on your impartial perception of the science, or is it influenced by your psychological distaste for a total lack of free-will? I ask because I wonder if you would object to considering humans as biological automata, and all that this (might) imply for our ethical and legal perceptions. Back to your question about the difference between determinism and determinacy. I suppose in English these are pretty close. Perhaps the more correct way is to describe it in terms of ontology and epistemology, but for me the notion of the indeterminacy of a determinate system seems to bring home both the difference and the relation between them. And more specifically the indeterminacy of Determinism.
I thought this looked interesting, but I don’t have access to it at the moment: Are deterministic descriptions and indeterministic descriptions observationally equivalent? Thanks for the link to your other post. I’ll take a look at that soon. 8. “but for now there is the unknown of whether random events in this current run would turn out different in another run.” Well, this is where Quantum Mechanics (aka ‘the evidence’) tells us that things would be rather different. Thus, Einstein’s determinism today is not evidence-based any longer (more like ‘faith-based’). I think over a hundred years of failed searches for determinism should be fairly convincing to anyone following the evidence that determinism is dead (until some new evidence brings it back) – very much like 150 years of failing to falsify evolutionary theory pretty much doom any form of creationism – until the creator comes and reveals herself :-) New beginnings(TM) would only be those events which provide independence from the events at the big bang, as far as I understand it. In macroscopic terms, this would probably get non-trivial: one would have to understand the processes leading up to the event well enough to be able to say which role any quantum events in them played. If there’s one with a causal role present, one would call it a new beginning (or, as some have called them (e.g. Richard Windeker; see citations of our fly paper in PLoS ONE), ‘information sources’). I think the main concept about brain function that makes brains (rather than other nonlinear systems) a ‘self’ is their ability to control the gain of the circuits that amplify the system noise in an adaptive way. I hypothesize that, for instance, it is via this ability that consciousness might exert its influence on our decisions: by allowing seeding randomness to gain more or less influence on our brain states.
‘Real’ free-will is dead anyway and so if anything it should get the qualifying adjective (like, in this case, perhaps ‘colloquial’, ‘former’, ‘ancient’, ‘metaphysical’, etc.), while for the presently valid one, the adjective (why not ‘scientific’?) might be dropped. This is a common convention, I think, to refer to old notions with a qualifying adjective. But I actually don’t care too much about the nomenclature, as long as it’s reasonable and consensus-based. 9. Hi Björn, I still don’t see how QM is relevant. QM is about indeterminacy now, in this run of the universe. I don’t know that there is anything that would count as evidence regarding multiple runs of the universe. Even quantum indeterminacy might be quite determinate in a re-run – i.e. the same quantum events occur on each run – though of course indeterminate to us during each run, since during each run it would look like the one and only run to us. But even so, I don’t think the evidence is as strong as you imply, when Sean Carroll has this to say: “Quantum mechanics is where things get interesting. When a quantum state is happily evolving along according to the Schrödinger equation, everything is perfectly deterministic; indeed, more so than classical mechanics, because the space of states (Hilbert space) doesn’t allow for the kind of non-generic funny business that let non-deterministic classical solutions sneak in. But when we make an observation, we are unable to deterministically predict what its outcome will be. (And Bell’s theorem at least suggests that this inability is not just because we’re not smart enough; we never will be able to make such predictions.) At this point, opinions become split about whether the loss of determinism is real, or merely apparent.
This is a crucial question for both physicists and philosophers, but not directly relevant for the question of free will.” (my emphasis) On one of your other points… “I hypothesize that, for instance, it is via this ability [ability to control the gain of the circuits that amplify the system noise in an adaptive way] that consciousness might exert its influence on our decisions: by allowing seeding randomness to gain more or less influence on our brain states.” Wouldn’t there be sufficient ‘randomness’, i.e. complex indeterminate interactivity, between neurons and other components, to account for any un-conscious fluctuations contributing to what are larger-scale patterns that constitute plans and intentions? Without quantum effects being necessarily relevant? 10. Indeed, there may be enough ‘classical’ fluctuations to provide sufficient seeds for spontaneous behavior. Because we don’t know how these biological processes work, we don’t know the quantity (and hence relevance) of QM contributions. That being said, unless brains are bubbles in which QM is excluded from the rest of the universe, QM randomness also occurs in brains. This entails that the QM contribution must be >0 – we just don’t know by how much. For the principal question of theoretical predictability, QM contributions >0 are sufficient, no matter how small. For a collection of publications interpreted by me as dismissing causal determinism as religion-like, see: In particular, WRT the quote by Carroll above: it’s not only when we measure that the wave collapses. The wave of course collapses every single time any particles interact. The randomness thus introduced has been shown by Bell’s theorem (and others) to be an inherent property of the universe. 11.
Even if we accept that there may be randomness, or something we perceive as indeterminate, is it not worth noting that Bohr himself scoffed openly, in very hostile fashion, at Arthur Compton’s speech regarding quantum mechanics and the theory indicating evidence of free will? Bohr, a man by all accounts appreciative of such a perspective given how deeply influenced by Kierkegaard he was, rebuked such a statement. Bohr, the proponent of quantum mechanics against Einstein’s opposition, as stated above in the comments. Quantum determinacy (or indeterminacy) doesn’t therefore mean free of cause; it’s a stretch, Bohr knew it, and Bohr refused to make it at the time. 12. Hi Filipe, Thanks for dropping by. “Free of cause” Yes, exactly. With regard to that, what I wonder about is the following. Not specifically about free-will, but related. What is randomness? Really, what is it? I’m not talking about probability, which is a model we use. Probability can be used to model uncertain and chaotic systems that might be determinate, but which to us, because of complexity, are indeterminate. For example, take a penny and hold it horizontally, heads up, 1 mm above a flat horizontal surface and drop it. It will land on the surface heads up. To all intents and purposes this is a determinate system – at least to the degree that we rely on for all our mechanical systems. Introduce more uncertainty, by dropping it from a greater height and introducing spin, and we humans start to use probability to model the outcomes of multiple trials because we can’t determine deterministically if it will land heads or tails. It may still be a practically determinate system, but is epistemologically indeterminate to us. So, this is merely the use of the probabilistic model for what are essentially deterministic systems – to our macro approximation. But on the small scale of quantum mechanics we are told that events are inherently ‘random’. OK, so these ‘random’ events fit a probabilistic model – very inconvenient.
But what ‘causes’ this ‘randomness’, and what is ‘randomness’? Does a specific random event have a cause? If it’s uncaused, but eventually has causal consequences (triggers a sensor), is it an uncaused cause? I’ve yet to get an answer, and so to me it seems premature to merely claim ‘randomness’. So, two particles each behave ‘randomly’, but appear entangled so that they provide compatible outcomes that would not otherwise be expected of independently random events. All we can say is that these particles are indeed entangled in some undetermined way, but their behaviour matches a probabilistic model. I’m not sure we can say that the events are truly random – unless ‘random’ here trivially means fitting a probabilistic model. Perhaps the better question is: what is causation?
Schematic representation of evanescent waves propagating along a metal-dielectric interface. The charge density oscillations, when associated with electromagnetic fields, are called surface plasmon-polariton waves. The exponential dependence of the electromagnetic field intensity on the distance away from the interface is shown on the right. These waves can be excited very efficiently with light in the visible range of the electromagnetic spectrum. An evanescent wave is a near-field wave with an intensity that exhibits exponential decay without absorption as a function of the distance from the boundary at which the wave was formed. Evanescent waves are solutions of wave-equations, and can in principle occur in any context to which a wave-equation applies. They are formed at the boundary between two media with different wave motion properties, and are most intense within one third of a wavelength from the surface of formation. In particular, evanescent waves can occur in the contexts of optics and other forms of electromagnetic radiation, acoustics, quantum mechanics, and "waves on strings".[1][2] Evanescent wave applications[edit] In optics and acoustics, evanescent waves are formed when waves traveling in a medium undergo total internal reflection at its boundary because they strike it at an angle greater than the so-called critical angle.[1][2] The physical explanation for the existence of the evanescent wave is that the electric and magnetic fields (or pressure gradients, in the case of acoustical waves) cannot be discontinuous at a boundary, as would be the case if there were no evanescent wave field. In quantum mechanics, the physical explanation is exactly analogous—the Schrödinger wave-function representing particle motion normal to the boundary cannot be discontinuous at the boundary.
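As a concrete illustration of the critical angle mentioned above, Snell's law gives θc = arcsin(n2/n1) for light passing from a denser medium of index n1 into a rarer medium of index n2; beyond that angle no sinusoidal transmitted wave exists and only the evanescent solution remains. A minimal sketch (the index values below are illustrative, not taken from the text):

```python
import math

def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle in degrees for total internal reflection,
    for light passing from refractive index n1 into n2 (requires n1 > n2)."""
    if n2 >= n1:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Glass (n ~ 1.5) to air (n = 1.0): TIR sets in at about 41.8 degrees.
theta_c = critical_angle_deg(1.5, 1.0)
```

Any incidence angle larger than `theta_c` produces only the exponentially decaying evanescent field in the second medium.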
Electromagnetic evanescent waves have been used to exert optical radiation pressure on small particles to trap them for experimentation, or to cool them to very low temperatures, and to illuminate very small objects such as biological cells or single protein and DNA molecules for microscopy (as in the total internal reflection fluorescence microscope). The evanescent wave from an optical fiber can be used in a gas sensor, and evanescent waves figure in the infrared spectroscopy technique known as attenuated total reflectance. In electrical engineering, evanescent waves are found in the near-field region within one third of a wavelength of any radio antenna. During normal operation, an antenna emits electromagnetic fields into the surrounding near-field region, and a portion of the field energy is reabsorbed, while the remainder is radiated as EM waves. Recently, a graphene-based Bragg grating (a one-dimensional photonic crystal) has been fabricated, and its ability to excite surface electromagnetic waves in the periodic structure has been demonstrated using a prism coupling technique.[3] In quantum mechanics, the evanescent-wave solutions of the Schrödinger equation give rise to the phenomenon of wave-mechanical tunneling. In microscopy, systems that capture the information contained in evanescent waves can be used to create super-resolution images. Matter radiates both propagating and evanescent electromagnetic waves. Conventional optical systems capture only the information in the propagating waves and hence are subject to the diffraction limit.
Systems that capture the information contained in evanescent waves, such as the superlens and near field scanning optical microscopy, can overcome the diffraction limit; however these systems are then limited by the system's ability to accurately capture the evanescent waves.[4] The limitation on their resolution is given by k \propto \frac{1}{d} \ln{\frac{1}{\delta}}, where k is the maximum wave vector that can be resolved, d is the distance between the object and the sensor, and \delta is a measure of the quality of the sensor. More generally, practical applications of evanescent waves can be classified in the following way: 1. Those in which the energy associated with the wave is used to excite some other phenomenon within the region of space where the original traveling wave becomes evanescent (for example, as in the total internal reflection fluorescence microscope) 2. Those in which the evanescent wave couples two media in which traveling waves are allowed, and hence permits the transfer of energy or a particle between the media (depending on the wave equation in use), even though no traveling-wave solutions are allowed in the region of space between the two media. An example of this is so-called wave-mechanical tunnelling, and is known generally as evanescent wave coupling. Total internal reflection of light[edit] Top to bottom: representation of a refracted incident wave and an evanescent wave at an interface. For example, consider total internal reflection in two dimensions, with the interface between the media lying on the x axis, the normal along y, and the polarization along z. One might naively expect that for angles leading to total internal reflection, the solution would consist of an incident wave and a reflected wave, with no transmitted wave at all, but there is no such solution that obeys Maxwell's equations. Maxwell's equations in a dielectric medium impose a boundary condition of continuity for the components of the fields E||, H||, Dy, and By. 
For the polarization considered in this example, the conditions on E|| and By are satisfied if the reflected wave has the same amplitude as the incident one, because these components of the incident and reflected waves superimpose destructively. Their Hx components, however, superimpose constructively, so there can be no solution without a non-vanishing transmitted wave. The transmitted wave cannot, however, be a sinusoidal wave, since it would then transport energy away from the boundary, but since the incident and reflected waves have equal energy, this would violate conservation of energy. We therefore conclude that the transmitted wave must be a non-vanishing solution to Maxwell's equations that is not a traveling wave, and the only such solutions in a dielectric are those that decay exponentially: evanescent waves. Mathematically, evanescent waves can be characterized by a wave vector where one or more of the vector's components has an imaginary value. Because the vector has imaginary components, it may have a magnitude that is less than its real components. If the angle of incidence exceeds the critical angle, then the wave vector of the transmitted wave has the form \mathbf{k} \ = \ k_y \hat{\mathbf{y}} + k_x \hat{\mathbf{x}} \ = \ i \alpha \hat{\mathbf{y}} + \beta \hat{\mathbf{x}}, which represents an evanescent wave because the y component is imaginary. (Here α and β are real and i represents the imaginary unit.) For example, if the polarization is perpendicular to the plane of incidence, then the electric field of any of the waves (incident, reflected, or transmitted) can be expressed as \mathbf{E}(\mathbf{r},t) = \mathrm{Re} \left \{ E(\mathbf{r}) e^{ i \omega t } \right \} \mathbf{\hat{z}} where \scriptstyle\mathbf{\hat{z}} is the unit vector in the z direction. 
Substituting the evanescent form of the wave vector k (as given above), we find for the transmitted wave: E(\mathbf{r}) = E_o e^{-i ( i \alpha y + \beta x ) } = E_o e^{\alpha y - i \beta x } where α is the attenuation constant and β is the propagation constant. Evanescent-wave coupling[edit] In optics, evanescent-wave coupling is a process by which electromagnetic waves are transmitted from one medium to another by means of the evanescent, exponentially decaying electromagnetic field.[5] Plot of the 1/e penetration depth of the evanescent wave against angle of incidence, in units of wavelength, for different refractive indices. Coupling is usually accomplished by placing two or more electromagnetic elements such as optical waveguides close together so that the evanescent field generated by one element does not decay much before it reaches the other element. With waveguides, if the receiving waveguide can support modes of the appropriate frequency, the evanescent field gives rise to propagating-wave modes, thereby connecting (or coupling) the wave from one waveguide to the next. Evanescent-wave coupling is fundamentally identical to near field interaction in electromagnetic field theory. Depending on the impedance of the radiating source element, the evanescent wave is either predominantly electric (capacitive) or magnetic (inductive), unlike in the far field where these components of the wave eventually reach the ratio of the impedance of free space and the wave propagates radiatively. The evanescent wave coupling takes place in the non-radiative field near each medium and as such is always associated with matter; i.e., with the induced currents and charges within a partially reflecting surface. This coupling is directly analogous to the coupling between the primary and secondary coils of a transformer, or between the two plates of a capacitor.
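The 1/e penetration depth mentioned above can be computed directly. One common convention (used, for instance, in TIRF microscopy) takes the intensity to fall to 1/e of its interface value at d = λ / (4π √(n1² sin²θ − n2²)). A small sketch under that convention; the interface parameters are illustrative values, not taken from the text:

```python
import math

def penetration_depth(wavelength: float, n1: float, n2: float, theta_deg: float) -> float:
    """1/e intensity penetration depth of the evanescent wave beyond the
    interface (in the same units as wavelength), valid for incidence angles
    above the critical angle: d = wavelength / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))."""
    s = (n1 * math.sin(math.radians(theta_deg))) ** 2 - n2 ** 2
    if s <= 0:
        raise ValueError("angle of incidence must exceed the critical angle")
    return wavelength / (4.0 * math.pi * math.sqrt(s))

# 488 nm light at a glass-water interface (n1 = 1.52, n2 = 1.33), incident at 70 degrees:
d = penetration_depth(488.0, 1.52, 1.33, 70.0)  # roughly 75 nm
```

As the plot described above suggests, the depth diverges as the angle of incidence approaches the critical angle from above and shrinks toward a fraction of a wavelength at grazing incidence.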
Mathematically, the process is the same as that of quantum tunneling, except with electromagnetic waves instead of quantum-mechanical wavefunctions. See also[edit] 1. ^ a b Tineke Thio (2006). "A Bright Future for Subwavelength Light Sources". American Scientist (American Scientist) 94 (1): 40–47. doi:10.1511/2006.1.40.  2. ^ a b Marston, Philip L.; Matula, T.J. (May 2002). "Scattering of acoustic evanescent waves...". Journal of the Acoustical Society of America 111 (5): 2378. Bibcode:2002ASAJ..111.2378M. doi:10.1121/1.4778056.  3. ^ Sreekanth, Kandammathe Valiyaveedu; Zeng, Shuwen; Shang, Jingzhi; Yong, Ken-Tye; Yu, Ting (2012). "Excitation of surface electromagnetic waves in a graphene-based Bragg grating". Scientific Reports 2. Bibcode:2012NatSR...2E.737S. doi:10.1038/srep00737.  4. ^ Neice, A., "Methods and Limitations of Subwavelength Imaging", Advances in Imaging and Electron Physics, Vol. 163, July 2010 5. ^ Zeng, Shuwen; Yu, Xia; Law, Wing-Cheung; Zhang, Yating; Hu, Rui; Dinh, Xuan-Quyen; Ho, Ho-Pui; Yong, Ken-Tye (2012). "Size dependence of Au NP-enhanced surface plasmon resonance based on differential phase measurement". Sensors and Actuators B: Chemical 176: 1128. doi:10.1016/j.snb.2012.09.073.  6. ^ Fan, Zhiyuan; Zhan, Li; Hu, Xiao; Xia, Yuxing (2008). "Critical process of extraordinary optical transmission through periodic subwavelength hole array: Hole-assisted evanescent-field coupling". Optics Communications 281 (21): 5467. Bibcode:2008OptCo.281.5467F. doi:10.1016/j.optcom.2008.07.077.  7. ^ Karalis, Aristeidis; J.D. Joannopoulos, Marin Soljačić (February 2007). "Efficient wireless non-radiative mid-range energy transfer". Annals of Physics 323: 34. arXiv:physics/0611063v2. Bibcode:2008AnPhy.323...34K. doi:10.1016/j.aop.2007.04.017.  8. ^ "'Evanescent coupling' could power gadgets wirelessly", Celeste Biever, NewScientist.com, 15 November 2006 9. ^ Wireless energy could power consumer, industrial electronicsMIT press release 10. ^ Axelrod, D. 
(1 April 1981). "Cell-substrate contacts illuminated by total internal reflection fluorescence". The Journal of Cell Biology 89 (1): 141–145. doi:10.1083/jcb.89.1.141. PMC 2111781. PMID 7014571.  External links[edit] Original courtesy of Wikipedia: http://en.wikipedia.org/wiki/Evanescent_wave — Please support Wikipedia.
Qualitative Behaviour and Controllability of Partial Differential Equations / Comportement qualitatif et controlabilité des EDP (Org: Holger Teismann, Acadia University) DAVID AMUNDSEN, Carleton University Resonant Solutions of the Forced KdV Equation The forced Korteweg-de Vries (fKdV) equation provides a canonical model for the evolution of weakly nonlinear dispersive waves in the presence of additional effects such as external forcing or variable topography. While the symmetries and integrability of the underlying KdV structure facilitate extensive analysis, in this generalized setting such favourable properties no longer hold. Through physical and numerical experimentation it is known that a rich family of resonant steady solutions exists, yet qualitative analytic insight into them is limited. Based on hierarchical perturbative and matched asymptotic approaches we present a formal mathematical framework for the construction of solutions in the small dispersion limit. In this way we obtain not only accurate analytic representations but also important a priori insight into the response of the system as it is detuned away from resonance. Specific examples and comparisons in the case of a fundamental periodic resonant mode will be presented. Joint work with M. P. Mortell (UC Cork) and E. A. Cox (UC Dublin). SEAN BOHUN, Penn State The Wigner-Poisson System with an External Coulomb Field This system of equations describes the time evolution of the quantum mechanical behaviour of a large ensemble of particles in a vacuum where the long range interactions between the particles can be taken into account. The model also facilitates the introduction of external classical effects. As tunneling effects become more pronounced in semiconductor devices, models which are able to bridge the gap between the quantum behaviour and external classical effects become increasingly relevant. The WP system is such a model.
Local existence is shown by a contraction mapping argument which is then extended to a global result using macroscopic control (conservation of probability and energy). Asymptotic behaviour of the WP system and the underlying SP system is established with a priori estimates on the spatial moments. Finally, conditions on the energy are given which (a) ensure that the solutions decay and (b) ensure that the solutions do not decay. SHAOHUA CHEN, University College of Cape Breton Boundedness and Blowup for the Solution of an Activator-Inhibitor Model We consider a general activator-inhibitor model u_t = ε Δu − μ u + u^p/v^q, v_t = D Δv − ν v + u^r/v^s with Neumann boundary conditions, where rq > (p−1)(s+1). We show that if r > p−1 then the solutions exist for all time for all initial values, and if r > p−1 and q < s+1 then the solutions are bounded for all initial values. However, if r < p−1 then, for some special initial values, the solutions will blow up. STEPHEN GUSTAFSON, University of British Columbia, Mathematics Department, 1984 Mathematics Rd., Vancouver, BC V6T 1Z2 Scattering for the Gross-Pitaevskii Equation The Gross-Pitaevskii equation, a nonlinear Schroedinger equation with non-zero boundary conditions, models superfluids and Bose-Einstein condensates. Recent mathematical work has focused on the finite-time dynamics of vortex solutions, and the existence of vortex-pair traveling waves. However, little seems to be known about the long-time behaviour (e.g. scattering theory, and the asymptotic stability of vortices). We address the simplest such problem – scattering around the vacuum state – which is already tricky due to the non-self-adjointness of the linearized operator, and the "long-range" nonlinearity. In particular, our present methods are limited to higher dimensions. This is joint work in progress with K. Nakanishi and T.-P. Tsai.
HORST LANGE, Universitaet Köln, Weyertal 86-90, 50931 Köln, Germany Noncontrollability of the nonlinear Hartree-Schrödinger and Gross-Pitaevskii-Schrödinger equations We consider the bilinear control problem for the nonlinear Hartree-Schrödinger equation [HS] (which plays a prominent role in quantum chemistry), and for the Gross-Pitaevskii-Schrödinger equation [GPS] (of the theory of Bose-Einstein condensates); for both systems we study the case of a bilinear control term involving the position operator or the momentum operator. A target state u_T ∈ L^2(R^3) is said to be reachable from an initial state u_0 ∈ L^2(R^3) in time T > 0 if there exists a control such that the system allows a solution state u(t,x) with u(0,x) = u_0(x), u(T,x) = u_T(x). We prove that, for any T > 0 and any initial datum u_0 ∈ L^2(R^3) \ {0}, the set of non-reachable target states (in time T > 0) is relatively L^2-dense in the sphere {u ∈ L^2(R^3) : ||u||_{L^2} = ||u_0||_{L^2}} (for both [HS] and [GPS]). The proof uses the Fourier transform, estimates for Riesz potentials for [HS], and estimates for the Schrödinger group associated with the Hamiltonian -Δ + x^2 for [GPS]. HAILIANG LI, Department of Pure and Applied Mathematics, Osaka University, Japan On Well-posedness and Asymptotics of Multi-dimensional Quantum Hydrodynamics In the modelling of semiconductor devices at the nanoscale, for instance MOSFETs and RTDs, where quantum effects (like particle tunnelling through potential barriers and build-up in quantum wells) take place, the quantum hydrodynamical equations are important and dominant in the description of the motion of electron or hole transport under the self-consistent electric field. These quantum hydrodynamic equations consist of conservation laws of mass, balance laws of momentum forced by an additional nonlinear dispersion (caused by the quantum (Bohm) potential), and the self-consistent electric field.
In this talk, we shall review recent progress on the multi-dimensional quantum hydrodynamic equations, including the mathematical modelling based on the moment method applied to the Wigner-Boltzmann equation, rigorous analysis of well-posedness for general nonconvex pressure-density relations and regular large initial data, long-time stability of steady states under a quantum subsonic condition, and the global-in-time relaxation limit from the quantum hydrodynamic equations to the quantum drift-diffusion equations. Joint work with A. Jüngel, P. Marcati, and A. Matsumura. DONG LIANG, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3 Analysis of the S-FDTD Method for Three-Dimensional Maxwell Equations The finite-difference time-domain (FDTD) method for Maxwell's equations, first introduced by Yee, is a very popular numerical algorithm in computational electromagnetics. However, the traditional FDTD scheme is only conditionally stable. Computing three-dimensional problems with the scheme requires much more computer memory, and becomes extremely difficult, when the spatial steps become very small. Recently, there has been considerable interest in developing efficient schemes for such problems. In this talk, we will present a new splitting finite-difference time-domain scheme (S-FDTD) for the general three-dimensional Maxwell's equations. Unconditional stability and convergence are proved for the scheme by using the energy method. The technique of reducing the perturbation error is further used to derive a high-order scheme. Numerical results are given to illustrate the performance of the methods. This research is joint work with L. P. Gao and B. Zhang. KIRSTEN MORRIS, University of Waterloo Controller Design for Partial Differential Equations Many controller design problems of practical interest involve systems modelled by partial differential equations. Typically, a numerical approximation is used at some stage in controller design.
However, not every scheme that is suitable for simulation is suitable for controller design. Misleading results may be obtained if care is not taken in selecting a scheme. Sufficient conditions for a scheme to be suitable for linear quadratic or H∞ controller design have been obtained. Once a scheme is chosen, the resulting approximation will in general be a large system of ordinary differential equations. Standard control algorithms are only suitable for systems with model order less than 100, and special techniques are required. KEITH PROMISLOW, Michigan State University Nonlocal Models of Membrane Hydration in PEM Fuel Cells Polymer electrolyte membrane (PEM) fuel cells are unique energy conversion devices, efficiently generating useful electric voltage from chemical reactants without combustion. They have recently captured public attention for automotive applications, for which they promise high performance without the pollutants associated with combustion. From a mathematical point of view, the device is governed by coupled systems of elliptic, parabolic, and degenerate parabolic equations describing the heat, mass, and ion transport through porous media and polymer electrolyte membranes. This talk will describe the overall functionality of the PEM fuel cell, presenting analysis of the slow, nonlocal propagation of hydration fronts within the polymer electrolyte membrane. TAI-PENG TSAI, University of British Columbia, Vancouver Boundary regularity criteria for suitable weak solutions of Navier-Stokes equations I will present some new regularity criteria for suitable weak solutions of the Navier-Stokes equations near the boundary in space dimension 3. Partial regularity is also analyzed. This is joint work with Stephen Gustafson and Kyungkeun Kang. Copyright © Canadian Mathematical Society - Société mathématique du Canada.
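The remark that the classical Yee/FDTD scheme is only conditionally stable can be seen in a few lines. The toy below is hypothetical and written for this summary (it is not the S-FDTD scheme of the talk): it runs the standard 1D staggered leapfrog update at two Courant numbers, below and above the CFL limit c*dt/dx = 1.

```python
import numpy as np

# Hypothetical 1D illustration of why the classical Yee/FDTD scheme is only
# conditionally stable: the explicit staggered leapfrog update stays bounded
# for Courant number c*dt/dx <= 1 and grows explosively beyond it.

def run_yee_1d(courant, nsteps=600, nx=200):
    c = 1.0                       # wave speed (normalized units)
    dx = 1.0 / nx
    dt = courant * dx / c
    # Gaussian Ez pulse in the middle of the domain; ends act as fixed walls.
    ez = np.exp(-(((np.arange(nx) - nx // 2) * dx) / 0.05) ** 2)
    hy = np.zeros(nx - 1)         # Hy lives on the staggered half-grid
    for _ in range(nsteps):
        hy += dt / dx * (ez[1:] - ez[:-1])         # Faraday's law update
        ez[1:-1] += dt / dx * (hy[1:] - hy[:-1])   # Ampere's law update
    return float(np.abs(ez).max())

stable = run_yee_1d(0.9)    # Courant number 0.9: fields remain bounded
unstable = run_yee_1d(1.1)  # Courant number 1.1: short-wavelength blowup
print(stable, unstable)
```

The blowup above is exactly the restriction that the S-FDTD splitting scheme in the abstract removes, since its unconditional stability allows the time step to be chosen independently of the spatial steps.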
Thursday, January 14, 2016 What is Life? The other day, I came across a paper by Erwin Schrödinger, first published in 1944, titled simply, "What is Life?" I didn't read it and have no intention of reading it. And I highly recommend you not read it either! According to Wikipedia, Schrödinger is known for his "Schrödinger's cat" thought-experiment. He was a Nobel Prize-winning Austrian physicist who developed a number of fundamental results in the field of quantum theory, which formed the basis of wave mechanics: he formulated the wave equation (stationary and time-dependent Schrödinger equation) and revealed the identity of his development of the formalism and matrix mechanics. Schrödinger proposed an original interpretation of the physical meaning of the wave function. In addition, he was the author of many works in various fields of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In other words, Schrödinger is an intelligent and accomplished man! Or is he? After all, who are we to question a Nobel Prize winner who has contributed so much to the advancement of Science in the 20th century? I'm not trying to be flippant here. The reason I question his thesis on life is the same reason I question modern civilization and its stories. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. And therein lies the problem: looking at the phenomenon of life from the point of view of physics. I wrote about Scientism before, where I questioned the applicability of Science beyond the world of the hard Physical Sciences. Indeed, it was another Nobel Prize winner, an Economist, who brought this to our attention. Friedrich A. Hayek, who is also Austrian, argued against applying Science to the field of Economics.
And just as well, Science is not applicable to an understanding of existential matters. Schrödinger's thesis on the definition of life falls squarely in the madness of Scientism. How did civilized man decide that Science has an answer to such questions? And how did civilized man even come up with questions like that? I wonder if this is yet another artifact of the process of separation from nature and from our true selves that civilization enables and is enabled by. It wasn't always this way. We didn't ask questions like this before, much less proceed to employ arbitrary tools to answer them. There is no Scientific explanation for the origin of the Universe... there is the Big Bang theory, but what was there before the Big Bang? Rupert Sheldrake jokes about this. He quotes Scientists as saying, "give us one free miracle and we will explain the rest." :) How did life originate? That's a central question for many Scientists. But first, how does one go about defining "life"? Somehow, we have a definition for what's life and what's not life. And according to modern Science, at some point, ages ago, non-life became life, non-alive chemical molecules got together to have a party and decided to become "alive". One wouldn't need to ponder over this too long before one realizes that the notion of "alive" is rather arbitrary. It's as if we write up a research paper and voilà, we have drawn a distinction between what we will now declare to be alive and what we will now declare to be not-alive. So we have taken the creation around us, drawn an arbitrary line, and gone around marking things as alive and not-alive. This is the work of fiction, not Science. An indigenous person would laugh at such insanity. He would say, all of creation is alive. Alive and vibrant, every stream, every mountain, every rock, every molecule. Even every electron, as the double-slit experiment amply shows.
Modern man would rather run around in circles trying to explain how life came from non-life, asking for a free miracle, but would never admit that the very distinction between life and non-life itself is rather arbitrary, entirely made-up, conjured out of thin air... OK, right, written up by a civilized Scientist called Schrödinger. Once we make that distinction, a whole hierarchy starts forming... humans at the top, of course, animals next, plants, multi-cellular organisms, single-cell organisms, etc. Again, who's to say a human being is more alive than an animal? That a cow is more of an animal than a fish? When I tell people I am vegetarian, sometimes I am asked if I eat fish. I tell them no, because a fish is an animal. Then they tell me that a fish doesn't have as many feelings as a cow, so it is more like a vegetable than an animal. Where do people get these ideas from, I do not know! Chickens are less than cows, they say. And plants are the least because they don't move. We're obsessed with categories and hierarchies. Guess that's pretty much the way of Empire. A king is more alive than the commoner, right? I am not sure we need to make such arbitrary distinctions. The indigenous person who lived in harmony with land for 200,000 years never made such distinctions. The indigenous person is an animist at heart. He saw in things a certain spirit that we don't see. They are alive in their own way. Just like the Earth is alive. Scientists would deny that. An indigenous person wouldn't. Calling something non-alive or dead points to something in our own psyches, our own consciousness that is dead. Not being able to see the entire Universe as alive and conscious is a symptom of our own impaired consciousness. Calling a rock dead and inanimate points to an ossification of our own minds, a hardening of our own otherwise soft nature. It takes being partly dead to see death. Things mean a lot to me. I try to fix things before tossing them in the garbage. 
I reuse paper napkins multiple times, if I have to use them at all. Those things came from living trees. There's hardly a distinction. Everything is made of the same elements which go round and round making one thing today and another tomorrow. Any distinctions we make point to our own fragmented minds. Things are as alive as people, maybe more so... Love them with all your heart, if you're so moved to! And they will love you back! We live in a conscious Universe imbued with spirit and we can participate in it just as other life forms do. And everything is alive. This, we have known for a long time. We just need to remember it. Check out this fascinating film: The Animal Communicator and find out what else we've forgotten. 1. Dear Satish, Your words above are so good you might have written us into a corner with no more to say. Maybe just us folks trying to live day to day. Some little God somewhere, gave up scripting the movements of every living particle, forming everything. DETERMINED to relax and float in a sea of phosphorescent stars the little God tinkered with Intelligent Design (ID) Let there be FREEWILL. Let it go, let it all Flow. With omnipotence ID ensures any element can and will intercede...if you please. LOVE do not fight, the creation of Poetry in motion and Physics and Photons and fishy physical things are what the waves of manifestation form. Easy breezy to look behind a billion big bangs. You have total freewill to determine what you can and cannot see. Little little little fractals - forever - in an infinite sea. Determine what you will, or be very still, you're floating in it. Composed of it. Physics and LIFE OF PI...Cosmic oceans of fishy things. LOVE it all --- if you will () Crowfoot, of the Blackfoot Nation, 1821-1877 Native Americans have long viewed the animals as 'nations' and 'societies' unto themselves.
Transforming spirits & souls & your essence. Scanning sorting and completing lessons. Higher Education I.D. LOVES to invite you to better planets. They pre-exist. Something for everyone...Some keep their populations and problems very small. You can get there from right here. When you're truly ready, you will make the quantum leap from with-in. Are we, are we, are we ourselves? Well most spirit returns Now maybe we've learned To stop this whirl of a lie To this earth we are bound I ask you Are we, are we, are we ourselves? A planet with perfect land use already designed by you, for you, from the one you artfully drew from thin air. Honey, your poodle was more than a little doG. Your puppyman and farm friends were always reading you. Now you know, what you always knew, they were hatching love places...Running wild & free, together, reunited to the one where you really want to go. No more trauma in store. No more glass ceiling or ocean floor. No Marco. No polo after the dark matter and blue sky collide. Mo oh Mo, Welcome to the worm's a bit cozy as time flies by out there. Both hot & cold, you know how troubles grow old. So good to see your human side - test after test your strong heart is always in the right place. Face to Face I shake hands in a new place with you. No more secrets left to reveal. Shep - A crystal river of love flows to you. The past is present. This weekend I go with the flow to visit your childhood place. Here we are but ghosts in a machine. But in full living reality your footprints and imprints on the Crystal River ways "our" existence is too hard to explain. From Cotterell to Kierkegaard you touch on it all. I leave you all here (for now) as the Crystal River nuclear plant decommissioning is nearly done. Be fearless, Be happy, A universe of other kingdoms will come. One for the Kogi, ONE for all. It is now completely up to each of you to choose. You have every free option & determined belief in your heart. Very big & very small. 1.
Mark, I love the way you weave poetry into your discourse. I picture sitting with you at a table and hearing you casually speaking like this. Your Mark-speak is what inspired me to post my poem below. Please don't ever stop being Mark. And Satish, your blog post topic today has spoken to my most passionate cause: the questioning of scientism's overemphasis on reason and logic and the negative influence that was brought to our global culture as a result. Once the scientism worldview became imbalanced and exclusive, we were set upon a dark path indeed. It never should have been used as an exclusive worldview, it was just another tool. Instead, it became a religion, one complete with its own priests and even its own inquisition too. Thanks for commenting about it. And Mark, more poem speak ... I love it! 2. "became a religion, one complete with its own priests and even its own inquisition too." LWA, yes, Scientism is the new religion. The Richard Dawkinses of the world, its high priests and pontiffs. And modern schools the temples where children are immersed in a STEM-heavy curriculum. First graders in Silicon Valley are being taught computer programming. A whole generation of machine men in the making. They'd know all about spaghetti code! We had some interesting discussions on Scientism along similar lines on NBL last year. Plenty of apologists for this religion! 3. They'd know all about spaghetti code! You know, the other day I had to resist cracking a joke about how 'of course I know what those things are.' Nuclear reactor software needn't be precise, only just accurate. And spaghetti code is a party where a bunch of software engineers get together and eat spaghetti instead of pizza for a change. Dude, I was in that industry! But then I thought, naw, it's not a good enough joke to use in a gun fight, lol. I'm so glad to be speaking with a group of people who can identify this false road we went down.
I read a book once (don't ask me what it was though) that delved into symbolic thinking, sort of like Jung was into. It posited that 'reason' was actually like a sickness that crept into our minds somewhere along the way. Imagine if it was true that we lived in a sort of holographic quantum universe where manifestations were purely generations produced from or in our subjective minds. If this were actually the case, then reason and objective science would be the ultimate delusion and mental illness indeed. My view is that at the very least it shouldn't be the exclusive determinant for defining our universe. All things in balance and moderation, at the very least. And Dawkins is certainly not moderate. As a matter of fact, every time I hear Dawkins pontificate, I never even hear him make any actual arguments anyway. He's always just performing to his crowd by placing his hands on his forehead dramatically, or sighing, or cracking an insult at someone for the laugh it gets him from his audience. What's up with that? I use reason, and I'm not a fundamental religionist either, and I just don't get the guy (Dawkins). Maybe his books are different, but unfortunately his behavior and debates have discouraged me from ever reading one. You know, come to think of it, for proper al dente noodles, spaghetti does need to be cooked more precisely than accurately. Damn dude, I passed their cheesy tests. Mmmm, drippy cheesy spaghetti code, yum. 4. LWA, allow me to return the favor, if only to a small comparative degree. I've read some Dawkins, and watched several of his debates. Don't go there. It's painful how little he understands, and how often he lies and just makes stuff up. He made a statement something to the effect that if there was a God, he was more incomprehensible than any theologian ever considered. Well, the unknowable, incomprehensible, mysterious nature of God is a pretty well established theological and general religious foundational precept.
Anyone who knows anything about religion knows that. There is even an ancient Latin term for it, that I can't find at the moment; I tell you, Google has gone to computer hell lately. He made another comment once that really caught my attention. He talked about how complex the universe is, how incomprehensibly large and complex, and if there was a God, God would have to be more complex than the universe by far, and how was that possible. Or something very close to that. I was stunned that he misses entirely the elegant simplicity of the universe, its fundamental forms, its basis in the most dense, fertile simplicity that is replicated and adapted. The universe isn't founded in complexity, but simplicity, replication, and consciousness, all within a larger context of ongoing change. At least, that's how I see it. Anyway, he is the worst philosopher I've ever had the misfortune to listen to. Not as bright as he and his followers think. I'm not qualified to judge his work as a biologist, but as a philosopher and deep thinker, he's remarkably limited. His logic is lacking, too. 5. Thanks oldgrowth. I agree, I only ever watched a few of his most very brief clips, and it was to be able to debate 'about' him with some people who seemed to be fans of his. So I watched a few clips. I agree, and that's exactly what I meant; he seems to make no real logical arguments at all. What seemed to me to be impressing these fans of his was exactly what I described above. 'Well you're a fool' Hahahahahaha goes the crowd. 'Well that's preposterous.' Hahahahahaha goes the crowd. 'It boggles the mind.' Hahahahahaha goes the crowd. And I was left thinking, sheeesh, I don't even see an argument here, he's just a boorish comedian is all. He just hurls insults and acts all exasperated. So ya, no worries. I've been thoroughly turned off of Dawkins, and it took very little to do it too. I just didn't get him at all, or what the big deal was about him. 
People must think being the biggest ass wins the argument. Okey dokey then. Thanks oldgrowthforest for making sure I didn't hurt myself with him, lol. :) 4. A "radical" commentary about human behavior toward animals that was on AlterNet today. 1. Shockingly, there are registered non-profits that promote hunting for fun... This was quite moving when I first saw it - Earthlings - a documentary that shows the many ways humans mistreat and murder animals. "Earthlings is a 2005 documentary film about humanity’s use of animals as pets, food, clothing, entertainment, and for scientific research. Since we all inhabit the Earth, all of us are considered earthlings. There is no sexism, no racism, or speciesism in the term earthling. It encompasses each and every one of us, warm or cold-blooded, mammal, vertebrate or invertebrate, bird, reptile, amphibian, fish, and human alike." 2. This is going to be hard, isn't it? :( 3. Yes, OGF. The friend I watched it with covered her eyes many times. 4. Well, even though I know about and already have opinions about everything I saw in that video, Satish, I watched every bit of it, without looking away once. I guess all I can say about it is ... I watched it. You don't need to see it, oldgrowthforest; how about we just say I watched it for the both of us and leave it at that. It's the sort of thing that makes one pray for extinction, real soon. I am at a loss for anything more to say about it. Love's a bit low at the moment, so I'll just sign off for now I think. Peace and love gang. 5. - "It's the sort of thing that makes one pray for extinction, real soon." I have seen that "Earthlings" film too some years ago. I was extremely shocked, shocked like some little child, angry, sad and extremely shocked. No matter how disconnected from Nature a person has become, you will always find suffering at his core.
Adolf Hitler was a suffering person, disconnected from Nature, disconnected from his own Heart, from his own emotions, disconnected from empathy. Adolf Hitler was a suffering person. We can find many such disconnected persons in modern society. No matter how criminal and shameless they are, you will always find some suffering little child at the core of their hearts. When someone does evil to other beings, it is because he is disconnected and suffering. To be disconnected means to suffer for sure. A person who has realized real freedom and peace within won't do evil to others. A truly free and balanced person does not do evil things to other beings. A joyful, peaceful and free person wants to share joy, peace and freedom. Buddha, for example, said that we should try to have some empathy, some compassion even for evil beings, because those beings are badly suffering beings, but at their very core there lives some cosmic spark, some cosmic essence, the same cosmic essence that is within ourselves, within everyone, everything. I have to remember this every single day when I feel hatred against the machine, against Empire, against others, these evil, suffering beings. I have to learn that Balance anew every single day, like some Yoga. I bow before the Kogi, who showed so much compassion, even for their stupid, little, younger Brothers _()_ 6. Sorry, I wanted to speak to LWA with my last comment. 7. Apologies, LWA, if the video drained your energy... I mean, how can it not? I watched it twice over the years, once by myself and once with a friend. It's terrible. Just grotesque. Worse than "The Cove". I sometimes think of this blog as a place that provides cookie crumbs for that person who's a bit lost. He/she would somehow land here, go through the posts and the comments and start putting together a picture of just what goes on in this world. Not just the ugly parts, but the beautiful parts too.
I was closed off to both for the longest time, and only in the last few years did I start seeing the connections between the dots. I am certain those of us who are here now are well aware of the extent of the predicament we find ourselves in. Most of us, anyway. But I have a feeling that once bizarre things start happening, people would want to know what's going on, how we ended up here, etc. Of course, by the time we get there, we will have crossed the point of no return. So all that will be left to do is to understand (intellectual work) and pray (spiritual work). Assuming we will have more than one shock along the way, there will be opportunities for people to try and get closer to their true selves. My hope is that all that we discuss here, all the thoughts and ideas we commit to the space here, all the energy we put into the morphic field via this and other blogs will one day serve others, when they need it the most. 8. No problem Satish, it was my choice, even though I was already aware of everything it covered. After meditating a bit on it, I came to realize why I stayed watching it. I don't really wind up doing much that doesn't turn out to have some reason or another as to why, I have pretty solid guidance in that regard. So, no worries. It's all good. I admonish oldgrowth not to view it though. No deep animal empath should ever view that. Cheers Satish, it's all good. Energy is more or less back now. It was just a temporary despondency. ;) 9. Thanks for your wise reminder Nemesis. Forgive thine enemy. I agree, at the core of every freak is often just a suffering abused person who is only just lashing out. Thanks for reminding me too, Nemesis, that I have yet to watch the Kogi videos. I know, I know, where have I been, out partying or something? I think tonight might turn out to be the night I settle in and check those out, so I thank you for bringing that back into my awareness. Thank you for having this message for me. 
Cheers Nem, get off your computer and get wanking on that guitar already ... emote, emote, emote, Quas-emoto ! :) 10. Nice! And maybe this would be a good antidote to Earthlings - Anna Breytenbach interview (in case you haven't watched it yet) I just watched it. Anna is amazing... I like how she explains the process of inter-species communication in plain English. 11. Yeah, those Kogi videos are worth watching a few times... thanks, Nemesis, for pointing us to them. And thanks, OGF, for introducing us to Anna Breytenbach. 12. @LWA " Thanks for your wise reminder Nemesis." It was a reminder to me as well ;-) Man, sometimes I feel like I will grab that long, heavy sword and clean that evil bastards off the planet 8-) Thank you for your comforting thoughts about this place here. Yeah, these morphic fields. I read some book written by Sheldrake many years ago. It is about resonance, morphic resonance (isn't Music the same in some sense?). When I saw the video about the "Animal Communicator" that oldgrowthforest posted, I thought of morphic resonance too. These indigenous people in the film talked about it as well in some sense: The world, the Kosmos is like a net, everything inter-connected through morphic resonance like a web, through all times and spaces. Some noise, some vibration here makes some noise, some vibration there. The Kogi tell us about Aluna, the Great Mother, who weaves the Web of the Kosmos, the Cosmic Web. Maya too weaves her cosmic web. Mayas/Alunas web isn't just illusion, you only get lost in her web, when you don't respect the Dharma, IMO. This web is the Dream of the Great Mother (Nagual/Intuition), she gives birth to the Spirit (Tonal/Ratio). These ancient caves, with all these wonderful paintings from the stoneage in it, tell the same story, I think: The animistic interconnection of beings and things within a web of natural, cosmic, morphic resonance, Natural Mystic so to say. 
When times get real tough, when times get "bizarre", then people could start to panic. When people realize what direction we are heading in, many of them will start to panic, that's one of my big concerns. When too many people panic, the situation gets completely out of control. No, I am not in panic for now, I studied the weird, modern planetary situation my whole life, so I got used to it. But I smell upcoming panic in the not too distant future out there in the streets. Climate change gains ever more momentum; when this just goes on exponentially, many people will start to panic. Panic transmits via morphic resonance very fast, like shock-waves moving through some swarm of birds. This will be the moment when civ finds its end. Maybe good for the planet, but surely bad for all the children of this world. Hey, thank you very much for "Animal Communicator". It touched my heart and bones. I liked the black leopard the most, "Diabolo", hahaha, later "Spirit". I love both names. I feel like some diabolic, spiritual black panther sometimes, hahaha: "His vision, from the constantly passing bars, has grown so weary that it cannot hold anything else. It seems to him there are a thousand bars; and behind the bars, no world. As he paces in cramped circles, over and over, the movement of his powerful soft strides is like a ritual dance around a center in which a mighty will stands paralyzed. Only at times, the curtain of the pupils lifts, quietly--. An image enters in, rushes down through the tensed, arrested muscles, plunges into the heart and is gone." Rainer Maria Rilke
I was always extremely sad and angry about how Empire grew big through slavery and exploitation. "I’m a black ocean, leaping and wide, Welling and swelling I bear in the tide." To me this is more than just about the black people. To me it's about cosmic Blackness, the Nagual, Kali, the Great Mother, who gives life and takes it. To me the color of Empire is just grey, ugly, lifeless, nagging grey. Those busy men in their grey $1000 suits, with their grey faces, their grey bloody money, their grey, machinelike hearts and minds. I like black very much, black like "Diabolo/Spirit" in the "Animal Communicator". "The Panther" is one of my favorite Rilke poems as well. Everybody knows what it's about immediately. Living in a prison, on prisonplanet. But there is this image, this remembrance of freedom, freedom is still alive within that Panther, not forgotten, not dead. Freedom lives on, even behind bars (like within Nelson Mandela during his time in apartheid-prison): 16. I have suspected that it's easier for Germans than Anglos to appreciate blackness. 5. Hopsilophodon hops along. On three toes he goes; grubbling. And the hummina-hummina hums and numbs and hums and numbs, Like summoned drums, As hopsilophodon hops along. On three toes he goes; grubbling. And the hummina-hummina is all around him. (Grubble – to feel or grope in the dark.) A Hypsilophodon was a little three-toed, bipedal, vegetarian dinosaur that existed during the Cretaceous. Hypsilophodon may have foraged around at night to avoid being eaten. This is a poem I wrote many years ago as a metaphor for incarnation. I thought you all might enjoy it.
we (western civ science we) first give ourselves the right to make that definition. then we forget this was completely arbitrary, and created out of thin air, and then we give our definition the power of God. and we now have that power, ourselves! voila, and we get to mold and move the world as we see fit, based on this definition. isn't that a nifty parlor trick? like you say, this is not Science. this is complete fiction. the worst kind of story making, because it is both unconscious, and driven by (deeply unconscious) ill intent ~ the desire to control, at all costs.

lwa, I love your poem! very evocative of your metaphor, and also harks nicely to the Jabberwocky, one of my faves. more poetry!

so beautiful, this whole post, Mark.

1. @mo flow It is my fault on NBL, I know that. I invest too much time on the internet while my guitar gets very jealous; she demands lots and lots of devotion, she is more jealous than any woman I ever met :D So I guess that I secretly/unconsciously dare to get banned on NBL. I also seem to seek farewells because of my own bio. When one has had to say farewell to so many people during a lifetime, you start to seek farewells all the time; it gets repetitive. At least, that's a big part of my sometimes strange personality, kind of a firebug, burning down everything all the time, leaving everything behind all the time, clinging to nobody and nothing. But I know that this is not Balance the right way. I think I should leave NBL alone for a while and just write some spare comments here on Kuku. I feel more comfortable here, more at home.

2. hey Nem ~ I knew exactly what your secret desire was, and why. the way you said Please alone was enough to clue me in, but I kinda caught on a while ago about certain things you mention here ~ knowing your bio as you have mentioned it on NBL. you are right. it's not balanced. but what exactly does it mean in today's world to find true balance? is this actually possible?
it is about peace, that much is clear. peace within flow. I'm working on this all the time.

3. @mo flow Yeah, you know me quite well. And you are right, it's about Peace and it's about Balance. And yes, it is hard sometimes to stay in Balance in today's world. An important thing seems to be Comprehension. You got Comprehension and I appreciate that very much.

It's late in the evening. I am sitting here with guitar in one hand, coffee in the other hand, trying to get the next song into flesh and blood. My fingers are ice-cold, which makes it harder; the heater shut off automatically already, I am actually freezing, brrrr. It is one thing to learn lyrics and chords, but it's another thing to breathe Life into a song. Well, I am on track with Satish's essay now, cool: "What is Life?" From a musical point of view, one has to breathe real Life into a song. You need to be in Balance. It's not enough to learn the chords and the lyrics. It can be just one easy chord and one simple line of lyrics; if you don't breathe Life into it, it sounds lifeless, stiff, mechanical, without breathing, without vibration. The air must vibrate; there has to be a certain vibration, a certain Resonance (hat tip to Rupert Sheldrake). You can feel if a song performance is alive or not. When it's really alive, then there is a certain kind of communion between the performer and the audience, a certain vibration that vibrates in the air, back and forth between audience and performer; they actually become One on some level, on a musical level. That's why a live performance is called a "live" performance. In communication it's the same, I think. "Comm-unication", "Comm-union"... quite the same, isn't it? Thank you for your comprehension, dear mo flow! ... brrrr, it's really cold now here in my little living room... brrrr... I have to get more coffee now, brrr

4. Resonance, yeah... Re-sonare

7. Dear Satish, I haven't looked here on your blog for a long time, and I'm so happy that I did!
What a wonderful essay. You say what I feel and know to be true. I don't think I could watch "Earthlings" because I've seen too much already and my imagination makes me see the rest. For example, every time I see a horse (or a donkey), I can't help but imagine all the cruel ways these lovely, powerful animals have been exploited and misused for the "benefit" of man, especially in all those "glorious wars". I'm just re-reading Tolstoy's War and Peace, which contains some very explicit scenes of how horses were used like (throw-away) machines. The way they went through artillery horses, so many of them just for servicing one gun: utterly dreadful. This is not the sort of thing they show in movies or TV adaptations. But yes, that's the reality when humans are disconnected from life. Cruelty and exploitation become second nature.

And dear Mark too, I agree with LWA, you have a truly lyrical way of expressing yourself. Remember last year, at this time, I said here that I live from season to season? You picked up on that in your poetic way. So here we are again, another season, a very early spring. I have some lovely hellebores (Helleborus orientalis) or Lenten roses in the garden. They don't care about the deluge-like rain we have here or the cold, and look positively exotic, more like plants from a tropical climate. The cold makes them bend their heads to the ground, but as soon as the sun comes out, they stand up straight again, drinking in the light with pleasure. I'll send some pictures to your email address. And OGF and MO here again. I've missed you both!

1. Hi Sabine. I think we spoke face to face on NBL just once. I've read a bit of you here on kuku's past threads, and I just wanted to say hi. So ... hi! Nice to see you drop in again. :)

2. Welcome back, Sabine... so good to see you here again :) I don't think it's a random coincidence that we were talking about you here and you decide to show up and say Hi. There's more going on here.
It looks like your approach of taking it season by season is becoming the norm for all of us. May we all have one more season.

8. Sabine! so good to see you here again. I've been thinking about you, and wondering how you are doing. great to hear about your hellebores. yes, please stay FAR away from Earthlings. I am not going there, myself.

Satish, solivagant, all ~ here is something from the Bhagavad Gita that goes straight to the heart of what we were discussing in the previous thread. I am moving this discussion here to make it less unwieldy.

from The Ninth Teaching ~ The Sublime Mystery (Krishna speaking, paragraphs 8, 9, 10):

"Gathering in my own nature, again and again I freely create this whole throng of creatures, helpless in the force of my nature.

These actions do not bind me, since I remain detached in all my actions, Arjuna, as if I stood apart from them.

Nature, with me as her inner eye, bears animate and inanimate beings; and by reason of this, Arjuna, the universe continues to turn."

so in paragraph 8 here, we have, perfectly, the idea of both "freely create" and "helpless in the force of my nature". meaning, in a very real way, helpless as in out of control. this is exactly how it was. as I was describing previously, when the One surrenders to its own nature, it is utterly helpless. it is completely caught up in the force of its own nature. that force is extreme, beyond all conception.

interestingly, relating to this current thread, paragraph 10 here talks about animate and inanimate. but says first he is behind all of this, with his inner eye. by reason of his own living nature, it all turns. it is all alive.

Krishna specifies over and over, in various ways, that he is the Creator, without being identified with his creation. earlier in this same chapter, he says "my self quickens creatures, sustaining them without being in them." this is absolutely the case. there is no way the One, the true being of Krishna itself, can be IN nature.
every atom would be instantly annihilated by the extreme force of that Being, if the One itself lived in that atom. the One supplies all the life force that is behind everything, sustaining everything. and the universe turns, alive.

1. I love the Bhagavad Gita.

2. Hi, Sabine! I mentioned you in a post just last night, and here you are. So happy to see you here. I hope you watch the Anna Breytenbach documentary if you haven't seen it before. Are you familiar with her? I thought of you and your love for nature when I posted it here.

3. mo, that's quite interesting... what you experienced being articulated by the Gita. I am going to be pondering over this for a while, of course... for now, these thoughts come to mind... there has to be a seed, a very small seed of non-control, of agency, even when we're very much out of control. It's like a thin thread that sprouts and becomes a rope strong enough to reel in an otherwise out-of-control scenario. It's as if the yin has shrunk to an infinitesimal size while the yang is fully expressing itself, as fully as it can while still leaving a little space for a diminutive yin. Without that almost non-existent yin, the yang wouldn't know itself anyway. And the yin will never yield completely. And when the yang is done, when it has had enough fun, when it is ready to return to balance with the yin, the seed of yin sprouts and grows and occupies the space that the yang is relinquishing, and they are back in balance, until, of course, it's time for yin to express itself fully...

In that eternal vibration, pictured as a wave, there is still a force that acts at the extremes, one that seeks to restore balance. In fact, the force is maximum at the extremes. So I wonder if, if anything is out of control, if there's something that we cannot escape from, it's this duality itself. We simply can't be totally out of control, nor can we be totally in control.
Now, that idea, that phenomenon, is perhaps the only thing that is out of our control. We simply can't vanquish the little seed, whether it's the yin or the yang.

As usual, I seek to relate these musings on the cosmic game to the story of humanity. And I think of sociopathy as that seed that was ever present in the best of times, in the most harmonious and balanced periods of our existence on this planet. The seed must have existed all along, throughout our evolution, and it finally sprouted at some point along the way, whether it was 10,000 years ago with the dawn of agriculture, or even further back, with the invention of language and the beginning of the loss of telepathic abilities. At some point, that seed sprouted, and what we're seeing today is the full-blown expression of sociopathy and all that it engenders. The sheer virtue and skill of our indigenous ancestors was in knowing about this seed, and keeping it from sprouting just yet, which they did by ostracizing, banishing, or killing the one or two sociopaths that showed up in every tribe every now and then. But at some point, the sociopath ran over his tribe, perhaps when the tribe was caught in a state of weakness due to a natural calamity or some such situation that threw the members into a bit of chaos, when they couldn't keep tabs on the sociopath. This is perhaps the "out-of-control" aspect of the One. And it answers the question, "why would a decent God let the world go to hell?"

It's also interesting to see that in current times, the indigenous peoples and the meditators and the monks are the ones who are being the most resilient in their psyches and minds, because, at some level, they understand what's going on and the inevitable nature of some of the ongoing events.

Well, those are just some thoughts... I don't know if that's the way it is. Although I never stop looking for "the way things are", despite my doubt that there is a way that things are :) OGF would know what I mean!

4.
I like your thoughts here, Satish. I want to be careful about this one idea you mention. the "helpless in the force of my nature" ~ meaning the out-of-control aspect of the One ~ creates everything. every manifest thing, and every unmanifest possibility, arise out of that. the infinite ways a system could achieve different forms of harmonious balance, for an arbitrary amount of time, arise out of that infinitely "surrendered to" force of the One's nature.

this is a key way I felt the infinity of the One. ALL of manifest and unmanifest Creation has to happen right NOW. the only way that is possible is if the One allows its INFINITY to dominate all other considerations. infinite force, power, foresight, intelligence, sensitivity, and who knows how many other aspects of its true nature that we have no words for whatsoever.

yes, that a sociopath could arise out of this infinitely forceful creation is definitely part of the deal. but everything else that exists, or might exist, also arises out of that same infinite creation.

5. mo, you seem to have experienced the tremendous power and strength that's behind all of creation, that gives life to everything, life not in biological terms, but that spirit of existence in everything. It's as if a little piece of nuclear fuel can be demonstrated to be made up of an immense amount of energy. But that is still physical energy, measurable in kilotons or megatons, and there's something even more powerful about the kind of force you perceived, it being infinite. It's interesting to imagine all of creation as being due to the out-of-control nature of the One.

6. infinitely satisfied and infinitely hungry for MORE! the definition of OC peaceful paradox. that's me.

"But you cannot see me with your own eye; I will give you a divine eye to see the majesty of my discipline."

Krishna speaking to Arjuna, from The Eleventh Teaching ~ The Vision of Krishna's Totality

"If the light of a thousand suns were to rise in the sky at once..."
the Vision of Krishna's Totality is a powerful, if very poetic, description of "seeing from the outside" the true nature of Krishna's Totality. it is so powerful and complete, and accurate in some key nuances of detail, that taken along with everything else in the Gita, I am convinced that the original author or authors of the Gita knew from the inside exactly what they were talking about.

anyone can be there, or be IT in Totality, as ultimately, we arise from IT, and our awareness is IT's awareness. that's really the only point of the Gita and all similar works. just to communicate the reality that you are IT, and you can know this again, in complete Truth.

9. Dear SABINE, I wanted to be sure to remind everyone on NBL of you as we started this year. I said something along the lines of "If Horton hears a Who, I hope SABINE is out in her garden praying for one more year." Both NEMESIS & LWA are so much like Mayan reflections of myself. I feel like I can read them rather than write. I love how each of them is so wise, yet down-to-earth, stumbling around making the same mistakes I did in comments last year. Yikes, now I just stepped in it again saying that. Praying my brothers above know exactly what I'm trying to say here with a laugh.

Very stormy on the other side of Fla tonight. Earthlings & The Cove are horror movies. Worse yet, the Cove slaughter has increased. I try to balance with the art & poetry aspects to reflect what beauty there is all over the world. I'm at the Crystal River lodge in this amazing cypress jungle. Thunder & wind. A vase of fresh lilies on the table. A flickering candle in this old cabin room. Thinking of how some future advanced intel might be able to read what we recorded about these times. I might not spell this right, but I bet each of you has heard of the Akashic record.
Since everything in space is always in motion, all you have to do is capture exactly where every star and atom is right now... and wow... that's a perfect code system to return to this exact moment in time. I used to say that only radiation would remain, back when I was dealing with that very serious issue. However, tonight I'm sure the force of love lasts longer. Stronger. Strangers in the night, we will all begin... again.

1. You didn't step in it, Mark. But I did over there. Then I smeared it all over myself and ran around naked right through the center of the place, waving my arms and shouting. It was all in good fun though. I like your mental picture of the Akashic record. It's also sort of based, I believe, on there really being no time anyway. No future, no past, just now. Shamanic time. All of it happening right now. And me covered with it and running around naked waving my arms. Don't blow away, Mark, we need you. Peace and Love All.

2. Of course that was love I smeared myself with, and I was only naked underneath my clothes. Jeepers, people. But the arm waving was a bit weird.

3. @Mark Austin Sure it does. Yes, Lake'ch, I and I

10. About Life, Balance, the Self, Individuation, Duality and Oneness: "C. G. Jung - The Self"

C. G. Jung about Synchronicity:

1. SYNCHRONICITY, the constant, meaningful, conscious, spiritual relationship between inner and outer world: this, the inner and outer relationship of Psyche and Kosmos, is exactly what the indigenous people, the Kogi, never have lost. But modern man lost this Lake'ch, this meaningful relationship to himSelf, to others, to Nature, to the Kosmos. There is a crystal-clear relation between the destruction of the ecologic web and the psyche of modern man: an eco-spiritual crisis, the loss of Balance between inner and outer world, psyche and Nature/Kosmos.

2.
"There is a crystal-clear relation between the destruction of the ecologic web and the psyche of modern man"

Yes, this is indeed an eco-spiritual crisis we're going through. I like that term. Our collective psyche, currently wayward, is related to the ongoing destruction on the planet. A disconnection from source, one that leads us to a rather restrictive and limited worldview that characterizes the richer aspects of the Universe as superstition, is similar to the disconnection from Mother Earth, from Earthlings, from Nature and from all of Creation.

3. Exactly. And modern, purely materialistic science or technology cannot solve that crisis (although not all modern scientists are purely materialistic). I like what Jung said about the relationship (meaningful synchronicity) of inner psyche and outer matter: it might be one and the same entity, just viewed from within and viewed from outside. Aluna's web connects both through synchronicity in a meaningful sense (Love? Consciousness? Inter-connectedness). The inner and the outer world mirror each other.

11. Kundalini Yoga meditation for atomic radiation. It can't hurt.

1. Yogi Bhajan is the person who brought Kundalini Yoga to the West. I wrote briefly about him in the previous thread. One of Yogi Bhajan's sutras for the Aquarian Age is "Recognize that the other person is you." All these old cultures seemed to know this. Cheers, other me's.

2. "If your inside is in a turmoil, this meditation will prevent you from dying. It can be done anytime, and its effect will be to calm you, to energize you, and to relax you." – Yogi Bhajan

Yes, mon! Calmed, energized, prevented from dying and relaxed, now I can read the more toxic, radioactive comments on NBL without reacting to them, cool 8-)

3. And yes, I was born under the sign of Aquarius, coincidentally. I am baptized in the waters of the black river.

4. The black river Styx: To me, Death is a big part of Life.
When I was baptized in the black river Styx, I found out that the source of Life and Death is within, manifested as/within Breath (Anima, Psyche, Odem), within the eternal tides of breathing in and breathing out, the tides of the oceans, the tides of the Kosmos. This Breath wanders, travels around eternally, but it doesn't get lost while we travel the passage of Death, for womb and grave are the same, fertile humus.

5. Nemesis, thank you for sharing all these gems of spiritual insight here. I connect with them. You have said it very well about the inseparability of Oneness and dualism, the relationship between and inseparability of the inner and outer worlds, etc.

6. Satish, yes, I enjoy sharing this joy. If I ever babble too much, then please give me a hint. I get out of control sometimes, swept away by inspiration pouring itself into words ;-)

12. Still comments on NBL that try to "prove scientifically" that every single man, that humankind, Homo sapiens (that's what science calls it), as a species is a beast, a cancer all in all, who exploits and destroys and kills everything, ha ha. Comments that try to prove that those who respect and handle Mother Earth with care and live in Balance with the Cosmic Law are the same as those who rape and exploit and destroy Mother Earth. There have always been countless tribes, countless human beings, who respect Mother Earth and prove those "scientific evidences" wrong. And they still exist today. There is no scientific way to turn a lie into truth.

1. These Agents of Empire, who say that all human beings are unbalanced rapists, beasts, thieves, exploiters, liars, cheaters etc., suffer from a psychological projection. Those people didn't integrate the shadow (dark side) of their own psyche and therefore project it onto all others and everything else. And they end up saying stuff like this:

"The only thing naked apes do that is sustainable is bullshit and moralize."

This quote implies two things: that 1.
sustainability is futile/impossible, and that 2. trying to live sustainably means to "moralize", to "bullshit". This is a perfect example of ignorance that Donald Trump, every redneck and the like would be proud of. What does "moralize" mean here? To live in Balance with Nature? Not to exploit and rape Nature? To warn about the consequences if we don't live in Balance with Nature? Then "moralizing" to me is a very good thing.

Let me quote Guy McPherson here:

"... Instead of changing, people embedded within the dominant paradigm prefer to disparage others. 'China is horrible,' they proclaim. 'They're burning all that coal, polluting the air.' Or maybe it's Brazil this week. Or India, using all those 'resources' we need here in the homeland."

Exactly. And one could add:

"And all others would have done the same as I do, as we do, all indigenous people, bullshit, no matter who, all human beings, all naked apes are as ignorant as I am, as we are. All the same. Period."

Yes, an ugly, bullying, psychological projection that is.

"Carl Jung – Shadow Projection": The "Bardo Thodol" tells us, in a more mythical language, that in the Bardo of Death we will inevitably be confronted with our own, inner shadow. In fact:

See you in the Bardo,

2. hey Nem ~ really loving everything you've been saying here. keep it up!

3. hey mo flow, glad you can relate to it. The way Empire clings to denial is just insane, suicidal. Empire will just go on with denial until its end. Well, that's the way Samsara (fueled by greed, hatred, ignorance) works.

13. What a wonderful way to laugh till I cry. Waving my arms, and yep, I'm actually naked but fully clean and showered. Hot & muggy after the rain. Gosh, I was going to say something serious about the Decommissioning tour today, but I just read a letter from MO about not caring... and well, I have to admit all my 6-dollar bionic bits are too tired to care about big stuff anymore either.
I've seen more than I ever imagined I could in this life sailing the seas. D.C. ABC Cern, cc me on all of infinity. Still, I have this wild spark of love for everything. Now how can I possibly love a dolphin slaughter... shit... I don't. I'm just grateful that I got to witness so much. Certainly learned what I would not do if I got to create another tiny speck like this just across the galaxy. The Milky Way is so vast. Forget the universe. Just the scale of our own solar system is really, really huge. So I guess it only makes sense that just for a nanosec infinity let technology and everything go. It just blinked. That's all. And it still has no idea what we are experiencing here. Blogging isn't even real shouting.

Oh, but LWA, you sure made me laugh out loud, and now I still feel all happy and cozy out of nowhere. Maybe it's only the feelings that matter? I do not feel like an Illusion. LOVE everything said above. Nemesis, warm up your guitar fingers. Nothing Mayan tonight. This is real, and real can be really nice despite all its deadly flaws. Better cover my thighs with diamonds; the spirit of Maya Angelou just told me I look like a manatee... oh dear me... not even diamonds are forever. Does love radiate?

Once I sailed all day thinking I was on autopilot, but when I went to switch the autohelm off, I discovered it was a rare occurrence of every sail, rope, winch, wind, waves, everything pulling together so perfectly that NATURE was the autopilot. I should write something beautiful about how stunned I was to discover that harmony. But the biggest lesson was that I only knew it at the END of that perfect balance day on Banderas Bay.

1. I think even better than Mark's know-it poet speak is his sneaky speak. It's almost like that tiny spot of yin barely detectable in the teardrop of the crying, hurting yang.

"Maybe it's only the feelings that matter?"
You know, one thing I've learned from my magical mystery tour, and I told mo about this once, is that being able to stay still with the love and the peace, and to still be able to find the beauty and the ability to have compassion, while right in the center of hell, is somewhat of an attainment of sorts. It could even possibly win you a prize somewhere down the line. Maybe you could win a small candy treat or something by staying with the love.

I've also seen that when collapsing waves of choice in the quantum, it's more the feelings that determine the outcomes anyway, rather than the intent or desire or the entrainment on something. Although entrainment certainly can arise out of feelings, especially out of dark and scary ones. That's why the fear has worked so well for the naughty masters. Specks of doubt are deadly little things, like pieces of plutonium. It's not easy to trust and to believe, compared to the easy certainty of doubt. And above all, it's tough to conjure peace and relaxation and love in the midst of pain and horror and the goings-on at the likes of the Cove. But if you slip into fear, then that's what you'll project and create around yourself... a fearful existence is what fear manifests. That's why I think the preceding sneaky speak was such a powerful message from Mark.

This is why I get concerned sometimes about all those nice folks at NBL, especially the thoughts of apneaman last night. So ugly and fearful. But alas, I've never actually been able to save anybody, really. Not a single one. Not even that fellow in the south of France who lost his girlfriend, and then his forest, and then his hope, and who then turned to dreaming of violence. But, with Lake'ch, maybe just getting myself there will be just enough to make a ripple that will grow and touch and bump, and without so much effort even.

It wasn't until I withdrew mentally from NBL a month back that I even found out why I was there to begin with. Just like your sailing trip, Mark.
Funny how that goes, like time running in reverse or something, the idea of retrospect. Seeing how it was all perfection to begin with anyway, and just needed to be left alone to fall into place, without the need of interference by conscious thought. No fussing required. I just needed to be willing to walk along the way was all, knowing it would all make sense later. Sneaky speak. I do love it.

Chuck chuck chuck chuck ... tee hee hee hee hee Ha ha ha ha ... Ho ho hoo HOO HOO

(Oops, that was my outside laugh, wasn't it? Sorry 'bout that. Apparently I've gone kuku now.)

Mark, you're absolutely incredible!!!!! And then there were the comments of LWA. You guys are so amazingly "out there!" Nothing to say. Just glad you're there to light up the way.

3. @Mark Austin Yeah, despite all these shiny technology gadgets, they still have no clue what "conscious experience" means. They can track the traces of conscious experience in the brain, but not experience itself. They are searching for consciousness within brains, within neurons, within time and space, but not within. They'd really like to grab that human spirit and put it in a box to fulfill total, infinite control. They will end up with their own human experience inside, from within, when times get tough and shiny techno-gadgets don't help anymore.

"Maybe it's only the feelings that matter? I do not feel like an Illusion."

To feel is a great gift. To feel means to be interconnected, connected to oneself and the rest of it. The ability to feel is vital in evolution, is vital for survival. Emotional resonance, balanced emotional perception, is the basis of effective, rational intelligence, IMO. Without emotion and ratio in Balance, shit happens. Modern man can explain the splitting of the atom, but can't feel the suffering of Mother Earth. They can fly to the moon, but obviously can't reduce unnecessary suffering on this planet.
Sail, rope, winch, wind, wave, yin, yang, nature and you as One great stream of consciousness on autopilot. Great, inspiring image. When Oneness happens, when YinYang, the Tao, is in Balance, I can't even say anymore if it's autopilot or manual control. The same when I breathe in and out like a swing: I just can't even say if Nature breathes me or if I am breathing Nature, hahaha. Sail on, Captain Austin!

4. "I just needed to be willing to walk along the way was all, knowing it would all make sense later."

That's exactly it for me... what doesn't make sense now will make sense later. That's part of the deal. Part of the infinite and unfailing justice that the Universe is made out of. Maybe there will be a point when it doesn't have to make sense, and that's good too, because there's enough going on in the moment that's more than good enough. It's like trying to remember something that was just on my mind, trying hard for a split second, before suddenly realizing that it doesn't matter because the current moment is plenty rich. And if it's important enough, it will come back. It all works out. Every time. Always.

5. "Nature creates all beings without erring: this is its straightness. It is calm and still: this is its foursquareness. It tolerates all creatures equally: this is its greatness. Therefore it attains what is right for all without artifice or special intentions. Man achieves the height of wisdom when all that he does is as self-evident as what nature does. For this very reason the earth has no need of a special purpose. Everything becomes spontaneously what it should rightly be, for in the law of heaven life has an inner light that it must involuntarily obey. Therefore in all matters the individual hits upon the right course instinctively and without reflection, because he is free of all those scruples and doubts which induce a timid vacillation and lame the power of decision."
The I Ching (Wilhelm)

A rather scathing indictment of the folly of reason, I would say.

14. Nemesis and Mark, you both talk about yin and yang and express what I feel too in almost everything you write here, but from the point of view of an old woman, I'm not entirely happy with this symbol of Oneness. I have one or two thoughts on that I'd like to share. I'll get back to this tomorrow (17.35 h here) because I have to start making some food. Enjoy the rest of the light today.

15. Hola Satish, I hope the day finds you at peace. The indigenous person is an animist at heart, and so am I! I remember a conversation with my daughter over a Thanksgiving dinner many years ago about what is alive and what is not. I made the comment that even the plate she was eating off of was alive... She called me crazy, of course, and we all laughed... I said that even though the plate looked solid and would break into a thousand pieces, there were in fact molecules moving within the plate. Even a grain of sand is alive in the world. My daughter, my wife and I have all been vegan for about 5 years now. We love and respect all life and its creation in the world. I wish that everyone in the world could see life as the indigenous people see it. Humanity, for all its good things, has failed to grasp this basic truth: that our world is a living, breathing entity and will in the end self-regulate. We see it happening every day. As your article states... we will become another part of the great cosmic plan. Something to be joyful about!!!!!! Peace, my brother

1. Great comments. I have not read about animism, but seem to be thoroughly animistic anyhow. I suppose that not knowing how it's supposed to work gives me a chance to discover. I find that the reason I tend to ignore or mistreat my environment is that I have been programmed not to see it as living. And when (all too rarely) I see it as living, I'm considerably kinder and more careful in how I relate to it.

16. What is Life?
What is the source of life? I'd like to share some further thoughts on that question: It is obvious that the environment isn't just some kind of "surrounding"; we are a part of it, we share the cosmic essence with it, not just spiritually but literally. It's all about sharing and inter-connectedness: We are connected to Mother Earth through the air we breathe, through the food we eat, the water we drink, the wind we feel on our skin, through just everything. We come from Mother Earth, we live now through Mother Earth, and our body/mind will go back to Mother Earth. The indigenous people always knew it and still know it today. They had and have their rituals, often celebrated in caves. The caves are the womb of Mother Earth, and the indigenous people (the people of the Stone Age as well) see those caves as the place where all life comes from and where all life goes back to. It's the Nagual; we call it "darkness", but it's the source of light and dark. The source where dreams, thoughts and feelings also emerge. It's the inner, mental body that Buddhists call the "subtle" body, but it's no physical body at all (though it's the source of the physical body and the physical world); it doesn't have any extension or mass in time and space in a materialistic sense; its middle is everywhere and nowhere; it is hidden within. Therefore, modern, materialistic science can't find it, hahaha. But it's there, all the time. Do, for instance, stories, myths, archetypes, or the web of culture, ideas etc. have extension in time and space? Well, obviously, in some sense. But that extension, in a materialistic sense, just can't be measured exactly. But it's there. How big or how long or how heavy is a thought, an emotion, an inspiration? Do dreams have extension in time and space? I think that trees are dreaming, Mother Earth is dreaming, Nature is dreaming, Man is dreaming, the whole Kosmos is dreaming in some sense.
Those dreams can't be found through microscopes or within some particles at CERN, hahaha. But it's there, it's all there, within. Imagination, inspiration, meditation, Samadhi. The timeless source of dreams and reality, the source of time and space, is within. There is an ultra-deep connection between the ancient, timeless, inner dreamworld and the real world (Synchronicity is part of that connection as well). Modern man got lost in physical, rational time and space alone; he lost the connection to his inner, immaterial world of visions and dreams and inspiration, and therefore he lost the meaningful connection to everything else. He lost his soul, his self, his natural Home; modern man is a restless wandering orphan, searching for himself, but the faster and farther he searches, the more he loses himself. When we look at Nature, when we look at that stunning, vast Kosmos, when we look at others, we see the material "outside" of the miracle, so to say. But to find the source of this miracle, we have to search within; we have to turn our sight inwards in the most intimate way. It can't be found in laboratories or hadron colliders, never ever. Whoever searches for that source within will find it for sure; it's never gone, always there, the cosmic source, the infinite source. It's within me, it's within you, forever and always. 1. Well said, Nemesis! Excellent thoughts. Very clear... Love it! 2. Thank you very much for your positive resonance, Sir Satish Musunuru! I'd like to go on, then, with some further thoughts about our relationship to the essential, cosmic source within and our relationship to anything and anybody else... It's not about superpowers or becoming all-knowing in the ordinary sense. It's not about "controlling" anything or "conquering" anything or "profiting from" anything. It's about relationship. What is my relationship to it all, what is yours, what is anybody's relationship to it all?
And what kind of unavoidable responsibility is connected to that relationship? We have to look and listen deep, deep within to find out... When we look at a tree, don't we often see more of our own personal relationship to that tree, our own prejudices, instead of seeing the real tree? To say that "this tree only exists as long as I exist" tells something about a certain kind of relationship to that tree. It implies that the tree has no life of its own, that I am the creator of that tree. But I am definitely NOT the creator of that tree; within myself I am only the creator of my own relationship to that tree. I am not the creator, the father of that tree, but I am a brother of that tree. Maybe that tree is just some "hallucination" of some brain? Some say that. But that kind of statement tells more about the personal, individual relationship to that tree and to life in general than anything else. It is a childish inflation of the Ego to say that "the tree, the Kosmos, only exists as long as I exist; when I am gone, then the tree, everything else, the Kosmos will be gone too". It is great confusion, Ego-confusion. But: The tree and I share the same source within. It is as if the source were a river and both the tree and I shared the same source of life, this river, this water of life. The tree is its own being, I am my own being, but we share the same cosmic river, the same cosmic water within. We can call this source "Big Bang" or "God" or "Shunyata" or "Tao" or whatever; names don't matter much. It is One source, but it creates Many things and beings. This source is anywhere and nowhere. It is the Cosmic Breath that breathes within the high tide and low tide of the oceans, that breathes within the circulation of the rivers on planet Earth; it breathes within the winds, it breathes within the circulation of the galaxies, it breathes within life and death, it breathes within ourselves and within Music.
This Cosmic Breath, this Cosmic Source, is within everything and all of us, within matter and beings, every second, infinitely. It is within the dance of the atoms and molecules and stones and bacteria and plants, animals, human beings, gods. We can't find it through microscopes, large, larger, largest hadron colliders or Hubble telescopes, but we can find it in our own individual, personal relationship to it all, within. Our relationship to it all defines who we are and who we will be. Some call this unavoidable, individual relationship Karma/Vipaka. No chance to run away from it, no chance to run away from yourself. The innermost essence, the innermost cosmic source within, can't be destroyed, can't die, can't cease, can't dissolve, for it is the source of it all, for it is everywhere and nowhere. Find out what your most intimate, real, living relationship to that source, to the Kosmos, is, from within. You don't need any guru, any master, any scientist, any superpower, any money to find your own relationship to it all, within. All you need is awareness. 3. This is an eco-spiritual crisis we're facing... there's no separating the two anymore... we have lived as if there's a great distinction between the worldly matters we face here and the spiritual matters we face inside, or after death, or anywhere but here and now, living in the world. The longer we go, the more we see them merge, the more we're confronted by their inter-relatedness, the more we're forced to deal with them both at once and see them as a single crisis. The ecological crisis is a spiritual crisis. "King’s sacred view of nature, based in African American tradition, aligns with African and other indigenous traditions, mystical traditions, and much of the eco-spiritual thinking that would later develop. “Although God is beyond nature he is also immanent in it,” King wrote.
“Probably many of us who have been so urbanized and modernized need at times to get back to the simple rural life and commune with nature… We fail to find God because we are too conditioned to seeing man-made skyscrapers, electric lights, aeroplanes, and subways.”" 4. - @Satish Musunuru Thank you very much for the information about Martin Luther King’s thinking and feeling! I find some inspiring similarities to my own thinking and feeling in your quote, which I didn’t know of until now. Respect to Martin Luther King. Yes, that's my own thinking. We are connected to the eco-spiritual Reality through our relationship to it. But I see many beings, who all have their own life through their own, individual relationship to the same cosmic source we all share. I say the source "within" because we cannot find this source without our very own, inner, individual relationship to it. I am not talking about separation here, but about a respectful relationship to all things and beings; I respect their own, individual Karma here. When I love a human being or any other being, then I love it because it has its own individual being; I love it for itself. Everybody has his own Karma; I can never experience anyone else's experience, I can never experience anyone else's Karma, only my own. But I can relate to other beings. You see it through your individual eyes, through your individual relationship to it; I see it through mine. You walk in your shoes, I walk in mine. We both walk on the same road, but everybody walks on his own trail. The fish sees it from below, the eagle sees it from above. This is my respectful relationship to all things and beings; I respect their own "So-Sein", their own, individual "Suchness", Tathātā. "In its very origin suchness is of itself endowed with sublime attributes. It manifests the highest wisdom which shines throughout the world, it has true knowledge and a mind resting simply in its own being.
It is eternal, blissful, its own self-being and the purest simplicity; it is invigorating, immutable, free... Because it possesses all these attributes and is deprived of nothing, it is designated both as the Womb of Tathagata and the Dharma Body of Tathagata." 17. Nicolas Wessberg Natural Reserve is part of the Costa Rican National Park System. It’s a gorgeous piece of beachfront land just north of Montezuma. You can easily reach it by walking 15-20 minutes, just past Ylang Ylang Resort. Named for Olof “Nicolas” Wessberg, the Swedish man who lived in Montezuma decades ago and became the founder of the country’s national park system, this park was founded in 1994 as a permanent landmark dedicated to his memory. It has no services, camp sites, etc. and is a “Reserva Absoluta”, meaning that no one can normally go inside except park rangers. But you can walk past it on the trail to Romelia park and Playa Grande.
Structure of the atom
Category: matter and particles. Updated November 06, 2013.
Everything we see is made up of atoms, a great many atoms. It was by studying the smallest constituents of matter that scientists were able, in the twentieth century, to explain the workings of the entire universe. An atom consists of a nucleus around which one or more electrons move. What characterizes the nucleus is its number of protons (Z), ranging from 1 to about 110; it determines the element. For example, iron (Fe) has 26 protons, so its atomic number is 26. The number of neutrons (N), ranging from 0 to about 160, characterizes the isotopes of the element. For example, hydrogen (H-1) has one proton and no neutron, deuterium (H-2) has one proton and one neutron, and tritium (H-3) has one proton and two neutrons. These three forms of hydrogen each have a single electron, since there is only a single electric charge, the one proton. Note that hydrogen is the only element whose isotopes are given different names; in all other cases we indicate only the number of nucleons (the mass number), from which the number of neutrons follows. For example, iron (Z = 26) has several isotopes: Fe-56 has 30 neutrons, Fe-57 has 31 neutrons, and Fe-58 has 32 neutrons; the isotopes differ precisely in their number of neutrons. In the atom it is the electrons that give the material its consistency, yet the electron is very light: its mass is about 10^-27 grams, while the proton is roughly 2000 times heavier, so the nucleus concentrates the vast majority of the mass of the atom (99.99 %). For stable atoms, the mass ranges from about 1.674 × 10^-24 g for hydrogen to 3.953 × 10^-22 g for uranium. Since 1811 we have also known the approximate size of an atom: Amedeo Avogadro (1776-1856) estimated it at 10^-10 meter, i.e. about a tenth of a millionth of a millimeter.
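The neutron counts quoted above follow directly from the rule the article states: the number of neutrons is the mass number minus the atomic number, N = A − Z. A minimal sketch (the function name and the loop values are illustrative, not from the article):

```python
# Neutron count of an isotope: N = A - Z, where A is the mass number
# (number of nucleons) and Z is the atomic number (number of protons).
def neutron_count(mass_number, atomic_number):
    """Return N = A - Z for a given isotope."""
    return mass_number - atomic_number

Z_FE = 26  # iron: 26 protons define the element
for A in (56, 57, 58):
    print(f"Fe-{A}: {neutron_count(A, Z_FE)} neutrons")
# prints 30, 31 and 32 neutrons, matching the isotopes listed in the text
```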
In 1911, Ernest Rutherford (1871-1937) discovered the atomic nucleus and clarified the structure of the atom by bombarding gold foil with particles from the radioactive decay of uranium. He gave a nucleus size on the order of 10^-14 meters. There are a little more than one hundred different atoms; these are the elements, such as hydrogen, carbon, oxygen or iron. The New Zealand physicist also proposed a pictorial representation of the atom: each atom as a miniature solar system, with the nucleus at the center and the electrons orbiting it like planets. The nucleus itself is drawn as a cluster of grains, like a blackberry (picture to the right). This pictorial representation is false, but it has two advantages: it clearly differentiates the two particles, the proton and the neutron, and it conveys that the nucleus, very compact, is confined within a definite volume. Since the advent of quantum mechanics in the 1920s, however, this image of the nucleus has become troubling: the nucleus is no longer a system of balls bound together. The nucleus is governed by quantum mechanics; in other words, it exists only insofar as it is observable, and observing the protons and neutrons inside the nucleus as they appear in the picture is not possible, because it would require illuminating the particles with light so intense that the nucleus would instantly disintegrate. The blackberry-like representation hides the quantum nature of matter. The same goes for the electron: one no longer represents the electron as a particle moving on a regular orbit around the nucleus. The electron is both a wave and a particle; wave-particle duality is the foundation of quantum mechanics. In quantum mechanics the electron does not follow a single path; it is delocalized in a region around the nucleus called the electron cloud, or atomic orbital.
Classic representation of the atom. Image: The representation of the atomic nucleus as a cluster of grains, like a blackberry, has two advantages: it differentiates the two particles, the proton and the neutron, and it makes clear that the nucleus, very compact, is confined within a definite volume. All nuclei, across all isotopes, have between 1 and about 110 protons and between 0 and about 160 neutrons. However, this representation is false because it hides the quantum picture. Since the 1920s, the nucleus is no longer a system of balls bound together; it is a quantum system, far more troubling.
Electron cloud or atomic orbital
Since 1924, all matter has been associated with a wave; this is the hypothesis of Louis de Broglie (1892-1987). With this hypothesis he generalized to all particles of matter the wave-particle duality brought to light by Max Planck (1858-1947) in the early 20th century. Every subatomic particle therefore has a wavelength. The wavelength λ of a subatomic particle and its momentum p are related by the equation λ = h / p, where h is Planck's constant and p is the momentum, i.e. the product of mass and velocity (p = mv). We also know, thanks to Einstein's famous formula, that all matter has an associated energy (E = mc^2). For a photon, energy and wavelength are related by E = hc / λ: the smaller the wavelength, the higher the energy. This energy shapes the form of atoms. The foundations of quantum mechanics were then in place. In short, matter is really composed of very small particles, fermions (electrons, neutrinos, quarks), which have mass, charge, energy, extension, a wave, a spin. But what do these particles look like in the world of the infinitely small? In 2013 we still cannot see the particles of the atomic nucleus, only the outer layer of the atom, i.e. the electron cloud. The electron cloud occupies essentially the whole spatial extent of the atom, as it is about 10,000 times larger than the nucleus.
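The relation λ = h/p can be checked with a short calculation. A sketch, assuming a free electron with non-relativistic momentum p = mv (the chosen speed, 1% of the speed of light, is an illustrative value; constants are rounded):

```python
# de Broglie wavelength: lambda = h / p, with p = m * v (non-relativistic).
H = 6.626e-34           # Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def de_broglie_wavelength(mass, speed):
    """Wavelength in meters for a particle of given mass (kg) and speed (m/s)."""
    return H / (mass * speed)

# An electron moving at 1% of the speed of light:
lam = de_broglie_wavelength(M_ELECTRON, 0.01 * 3.0e8)
print(f"{lam:.3e} m")  # on the order of 1e-10 m, comparable to atomic size
```

The result, a few angstroms, shows why electron wavelengths are comparable to atomic dimensions, which is what makes electron diffraction by crystals possible.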
In wave quantum mechanics a particle is represented by a wave function, but it is very difficult to picture this fundamental concept of quantum mechanics, the quantum state of a system. In 1927, Max Born (1882-1970) gave an interpretation of the wave function: its squared modulus represents the probability, when a measurement is made, of finding the particle at a specific location. A wave function is a probability amplitude; its squared modulus gives the probability density of the system's presence at a given position at a given instant. This function takes complex values. If a real number can be pictured as the length of a segment on a line, a complex value is represented by a vector in a plane: this vector has not only a length but also a phase, which corresponds to the direction of the vector. If the electron can no longer be represented as a point particle on a regular orbit around a nucleus, how can we picture it? The electron does not follow a single path around the nucleus; it is somewhere in a vast region that we call the electron cloud, or atomic orbital. The state of an electron is represented by the volume of space around the nucleus in which it is localized. The ground state of hydrogen is about one angstrom across, i.e. 10^-10 meters. To picture the electron in this region, imagine a grain of rice about 5 mm long moving inside a sphere about 50 meters in diameter. Moreover, the shape of this region of atomic space depends on the energy of the electron and on its angular momentum, which is what the adjacent image shows. Thus the electron's orbital may take various forms depending on the state of the atom: for example, the hydrogen orbital in the first row at the top has a spherical shape, the orbital in the second row has the shape of two drops of water, and the orbital in the third row has the form of four drops of water.
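Born's rule can be illustrated numerically: integrating the probability density |ψ|² of the hydrogen ground state over all space must give 1 (the electron is certainly somewhere in the cloud). A sketch using the textbook 1s wave function ψ = e^(−r/a₀)/√(π a₀³) and a simple midpoint sum; the step count and the 20·a₀ cutoff are arbitrary numerical choices:

```python
import math

A0 = 5.29e-11  # Bohr radius, m

def psi1s_sq(r):
    """|psi_1s(r)|^2 for hydrogen: psi = exp(-r/a0) / sqrt(pi * a0^3)."""
    return math.exp(-2 * r / A0) / (math.pi * A0 ** 3)

# Radial probability density P(r) = |psi|^2 * 4*pi*r^2,
# integrated from 0 to 20*a0 with a midpoint rule.
n, r_max = 20000, 20 * A0
dr = r_max / n
total = sum(
    psi1s_sq((i + 0.5) * dr) * 4 * math.pi * ((i + 0.5) * dr) ** 2
    for i in range(n)
) * dr
print(round(total, 4))  # ~1.0: total probability of finding the electron
```

The same radial density P(r) peaks at r = a₀, which is one way to make precise the statement that the ground-state cloud is "about one angstrom" across.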
In summary, within the orbital, the region of space where the electron is delocalized, the electron's state is a superposition of all possible positions inside the atomic orbital, whose shape varies. The shape of the orbital changes when the atom is excited, as in the first row; if the atom is excited even more, the shape of the orbital changes again, as in the second row or electronic shell. In a very excited state, called a "Rydberg state", the electrons are delocalized in a torus of large radius that can measure up to 1000 angstroms; the principal quantum number n (the shell number) is very high, between 50 and 100. Note: an electron attracted by the positive charge of the nucleus cannot "stick to the nucleus", because that would mean the spatial extension of its wave function is reduced to a point. The Schrödinger equation says that an electron in the vicinity of the nucleus occupies an orbital whose geometry is determined by the quantum numbers that satisfy this equation. In short, an electron is confined near the nucleus by the electrostatic potential well: when the potential energy rises on all sides of a region, we say the particle sits in a potential well.
Electronic orbitals
Image: Representation of the first electronic orbitals of hydrogen as a function of the energy of the electron and its angular momentum. The energy level increases from top to bottom (n = 1, 2, 3) and the angular momentum increases from left to right (l = s, p, d, f, g). The image shows the probability density of finding the electron: black represents density 0, i.e. the region where the electron never ventures; white represents the maximum density, i.e. the region where the electron is found most often; between black and white, in the orange-red zone, the probability density increases. The quantum numbers are denoted by letters: n is the principal quantum number; it defines the energy level of the electron.
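The growth of orbital size with the principal quantum number n, from the ~1 angstrom ground state to the enormous Rydberg states, can be estimated with the simple Bohr-model scaling r ≈ n²·a₀. This is a rough sketch of the scaling law only, not the exact expectation value for a particular (n, l, m) state:

```python
A0_ANGSTROM = 0.529  # Bohr radius, in angstroms

def orbital_scale(n):
    """Rough orbital size ~ n^2 * a0 (Bohr-model estimate, in angstroms)."""
    return n ** 2 * A0_ANGSTROM

print(orbital_scale(1))   # ground state: about half an angstrom
print(orbital_scale(50))  # Rydberg state n=50: over a thousand angstroms
```

For n = 50 this gives roughly 1300 angstroms, consistent with the article's statement that Rydberg orbitals with n between 50 and 100 reach sizes on the order of 1000 angstroms.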
l is the orbital (or secondary, azimuthal) quantum number; it defines the electronic subshells: s (sharp) for l = 0, p (principal) for l = 1, d (diffuse) for l = 2, f (fundamental) for l = 3, and then (for the excited states) g, h, i, ... The quantum number m is the magnetic (or tertiary) quantum number. Credit image: GNU Free Documentation License.
Thursday, June 30, 2016
1- Change is not merely necessary to life - it is life. (Alvin Toffler)
2- Change is the process by which the future invades our lives. (Alvin Toffler)
3- Man has a limited biological capacity for change. When this capacity is overwhelmed, the capacity is in future shock. (Alvin Toffler)
4- The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn and relearn. (Alvin Toffler)
5- The future always comes too fast and in the wrong order. (Alvin Toffler)
6- Knowledge is the most democratic source of power. (Alvin Toffler)
7- One of the definitions of sanity is the ability to tell real from unreal. Soon we'll need a new definition. (Alvin Toffler)
8- The great growling engine of change - technology. (Alvin Toffler)
9- Our technological powers increase, but the side effects and potential hazards also escalate. (Alvin Toffler)
10- Technology feeds on itself. Technology makes more technology possible. (Alvin Toffler)
11- It is better to err on the side of daring than the side of caution. (Alvin Toffler)
12- Rational behavior ... depends upon a ceaseless flow of data from the environment. It depends upon the power of the individual to predict, with at least a fair success, the outcome of his own actions. To do this, he must be able to predict how the environment will respond to his acts. Sanity, itself, thus hinges on man's ability to predict his immediate, personal future on the basis of information fed him by the environment.
13- Change is the only constant. (Heidi Toffler)
a) In memoriam Alvin Toffler
So Alvin Toffler died last Monday, and I remembered with deep nostalgia reading first "Future Shock" and later "The Third Wave", both important and influential. Toffler's role is explained in the following sentence: "His insights about how society behaves when too much change happens too quickly helps to guide our new direction for The World Future Society."
We (I am speaking about my wife and me plus our closer circles) liked these writings very much; Toffler had a huge literary talent and great persuasiveness. However, back then we were living in an anomalous society (here the term is perfect; for LENR it must be avoided!), see my Septoe: 42. "The Future Shock was amortized by irrationality." Then, after 1990, our world became a bit more rational politically, socially and economically, and the Future has arrived, is accelerating, and we can participate in it more actively. It did not happen exactly as Toffler predicted, but we have understood that in predictions there are things fundamentally more important than inerrancy, such as catching the Spirit of the time, and Toffler did this masterfully. Did he predict the Internet/Web? It seems yes, in a way! Perhaps Toffler exaggerated a bit with the SHOCK; I asked my grand-daughter Nora if it was difficult to advance from Mama's laptop to her own tablet and then to the smartphone she received for her birthday. Not at all; it was much more painful to learn to read, write and the basics of math. IT is more human and rational.
b) LENR's specific shock(s)
The case of Cold Fusion/LENR: its past was shocking enough, its present is a real shock, and its future has to be made one too, but in the best sense! I was repeatedly shocked by the slow development of the LENR field, in contrast with the Tofflerian future in action. Now, when this has started to change, dark forces conspire to kill the LENR technology dream. Please complete the details yourself. Of Mice, Materials and Men. Who is talking about LENR on social media forums? A poem for IH and their silence at the death of their LENR dreams:
Do not go gentle into that good night
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.
Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.
Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.
And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.
(Dylan Thomas)
Space team discovers universe is self-cleaning (it is also about a source of cosmic energy). A good slogan: "Face your fears". Fear comes from a deep-seated place of feeling rather than thinking. (Jeff Wise) Tensions of modern learning.
Wednesday, June 29, 2016
The Rhino Principle. A rhino is not a particularly subtle or intelligent creature, yet it has managed to dominate the savanna through sheer determination and aim. It takes the initiative when it sees something it wants and puts everything into what it does best: charge! I've always been suspicious of collective truths. (Eugene Ionesco) It is not the answer that enlightens, but the question. (Eugene Ionesco)
a) Why is IH fabricating so many memes?
More readers have asked why the supporters of IH try to create so many anti-Rossi and, again, so many pro-IH memes. A first sign that they are not good in the art of Memetics is the "optimal density" of memes: exactly as in agriculture, where there is an optimal density of plants in a crop, memes that are too dense start to mutually annihilate each other; just remember the incompatible "the plant is impossible" (the heat, not removable, would have killed people) and the other, "the plant does not work at all". The question is whether a list of memes is made and used according to a plan, or whether an ineffective "everything goes" mentality rules without control. Why do they do this?
Because, very probably, they do not have anything better! If they did, they would not have made another motion to dismiss. The situation is: Rossi wants to go to Court, IH wants to escape. You can guess three causes why this is so... please!
b) The stoppable subtleties of Jed Rothwell's personal memes
Admirably hyperactive for his followers, Jed Rothwell has created his own discussion thread serving simultaneously as a meme generator, a guillotine for everything Andrea Rossi, and a training camp for his very specific subtleties. Here it is: I was wrong about Rossi, but what I fear most is that I might be partly right. Subtle as a rhinoceros, Jed writes here about the very document that has irreversibly convinced him that there was no excess heat in the 1MW experiment; he has received this from Andrea Rossi. Subtle as a rhinoceros, he seemingly suggests here: "Andrea Rossi is a collection of sins and evilness; however, even he is able to appreciate a genius, a genuine high-class expert, and this must be the reason that this otherwise very secret document has found me. Unfortunately, contrary to Rossi's expectation, the paper convinced me almost instantly, but beyond any doubt, that the plant does not work at all; the Test is worse than a disaster." The text is imagined, but this must be the idea. A possible, but morally impossible, alternative is that Jed invented the ghost-document story as, again, a subtle trap for Rossi: Rossi, desperate at seeing Jed convince everybody that there was no excess heat, would come with his data, and Jed would officially massacre and annihilate them; an easy prey for him! Rossi did not want to comment on Rothwell's calorimetric genius; he knows why?!
I have to confess that I have not searched the rothwellisms thoroughly; surely I will not be one of his biographers. However, the following statement, a strong pro-IH meme, has an even higher degree of rhinocerian subtlety; it is text and context, not an extract: "Our enemies will put fraud front and center. This will be another blow against cold fusion, thanks to Rossi. By the grace of God we may still have money from I.H., without which this field would be dead, dead, dead." Some critics have found it needs a few definitions; Jed has never spoken about the grace of God, as far as I remember. The message is crystal clear: "IH is the savior of LENR!" Isn't it too much to say that without money from IH, LENR would be three times dead; simply dead is not sufficient? Will money from SKINR, possibly from the ARMY, money outside the US as in Japan, India, Russia, China, the EU etc. not be able to keep LENR even in a half-dead state? For Jed, is the idea that LENR needs new ideas and young researchers even more than (micro-)funding too subtle? He says directly to all LENR researchers: "Be against Rossi, be with IH; otherwise your research will die, die, die!" Further, no comment!
c) Re-read Eugene Ionesco's play; we LENR fighters have to avoid rhinocerization!
There is plenty of absurdity in this process of creating the memes of IH. I well remember this play about people losing their humanity. I am terrified and will not tell more; my readers can decide if the meme factory has something rhinocerian in it or if, on the contrary, it is a congregation of angels?! However, I will finish in Jed's NEW style, asking: "For God's sake, IH, please go boldly and openly to the Trial!"
1) Is Clueless Jed Rothwell Paid or Played to Slander Penon and the ERV Reports on the MW COP~50 E-Cat Plant?
2) Andrea Rossi answers Gerard McEk, June 28, 2016 at 8:11 AM: Dear Andrea, You recently said that the light of the QuarkX has given you an idea of how the Rossi effect may work.
(In other words: you may have seen the light.) 1. Do you make any progress with the theory, and 2. Do you expect it to lead to new patents? In the past you said that you were preparing many patents. 3. Do you expect some of these to be published soon? 4. Is there any progress on the domestic QuarkX, or 5. Do you expect the lower-temperature E-Cat to be the most suitable solution? Thank you for answering our questions. Kind regards, Gerard. Andrea Rossi, June 28, 2016 at 3:55 PM: Gerard McEk: 1. yes 2. yes 3. no 4. yes 5. I do not know yet. Thank you for your attention, Warm Regards.
3) Andrea Rossi does not answer and does not comment: June 28, 2016 at 1:52 PM. Dear Dr Andrea Rossi: sifferkoll link given here. My comment: IH again tries to escape from the litigation. If 1/100 of the slanders and lies deposited in the blogs by the mad dogs of IH were true, IH would be eager to go to court… the fact that they are trying to delay and to suffocate the litigation makes clear that they are afraid of it. Evidently they know that you have evidence that will defeat them in Court, where what counts is not the chattering of the mad dogs but the real evidence. In fact it appears that you are fighting to go to Court, and they are trying to run away.
4) Russian-language video: "News re LENR and CNF philosophical storm". Seminar "Philosophical Storm", June 28, 2016, presentation of Igor Iurievich Danilov. First part: QuarkX of Andrea Rossi. Second part: Microbes of Tamara Vladimirovna Sahno and Viktor Mihailovich Kurashov.
5) Did Jed Rothwell Admit Being an IH Contracted Spin Doctor with a Freudian Slip? The correct link to the Calaon paper is this: Understanding of molecular hydrogen has implications from industry to medicine.
Tuesday, June 28, 2016
The rule or domination by a meme or memes, which are cultural practices or ideas transmitted verbally or by repeated actions from one person's mind to the minds of other people. My Septoe: "20.
We live in memecracies, ideas dominate us." In the original introduction to the word meme in the last chapter of 'The Selfish Gene,' I did actually use the metaphor of a 'virus.' So when anybody talks about something going viral on the Internet, that is exactly what a meme is, and it looks as though the word has been appropriated for a subset of that. (Richard Dawkins) a) IH's plan seems to be based on memes - killer memes for Rossi and friendly ones for themselves. 'Meme', the cultural equivalent of 'gene', is a concept and word of vital importance; however, paradoxically, it is not a strong meme itself - it is a bit too intellectual. However, you cannot think well if you do not consider the existence of memes. I have written a lot about them, including in this Blog. If you are not familiar with memes, please read at least: It is my pleasure to announce that now again memes have helped me to solve a problem I found at first very difficult - in retrospect I was slow, non-creative and rigid in thinking. It is about the enigma of the furious and seemingly senseless character, plant and technology assassination campaign of the IH propagandists led by Jed Rothwell (see a new opus by him below). Why, for Hermes's sake, if they are right and can automatically win the Trial? Why, for Minerva's sake, if they are wrong, does it help when facts speak at the Trial? First, it is obvious that IH manifests a totally negative enthusiasm toward the Trial and tries very hard to escape from it - see the papers at 4) Legal battle. No traces of the noble spirit of "Fiat justitia, et pereat mundus." Justice at any price - but perhaps the cost is too high and the chances to win not so very high. So what they actually do is clear: stay calm but angry, inventive, efficient, and make MEMES of two types: A - killer anti-Rossi and anti-whatever-belongs-to-Rossi memes; B - friendly, nice pro-IH memes. A-memes are cheap, free, but B-memes have a cost and need more fantasy...
and money. PLEASE read for that the opinions of Doug Marker. The plan is to disseminate these memes on the Web and make them contagious; the Press, the public opinion and perhaps even the jurors from the Court will be memefied, so the 'obviously good' will increase tremendously its chances to defeat the 'evidently malefic.' We live in memecracies. Indeed? b) Jed Rothwell's new opus: "Did Rossi and IH have a valid contract that states that if the general performance test were successful . . ." I have not read the contract carefully, and I know little about contracts. Here is what I know: the performance was not successful. The data from Rossi proves that the machine did not work. "IH has not paid and said the Test was not good; where is the first written document with serious warnings from IH to Rossi saying this; was it after the 1st, 2nd or 3rd ERV report?" This is not a technical question. This has no bearing on calorimetry or science. This question illustrates how you have missed the point. You cannot judge a technical question by looking at people's behavior, or by examining business contracts. This question is fluff. The dispute between Rossi and I.H. is about calorimetry. It is about flow rates, temperatures, steam quality and instruments. Your questions are irrelevant. Even if you knew the answers, they would not bring you one millimeter closer to knowing whether the machine worked or not. Instead of waiting to learn the technical details, you obsess over these unrelated, non-scientific questions and gossip that has no bearing on the technical issues. I do not understand how a person with a technical background could make such a mistake. So, dear Jed, you say there are only technical questions, yet later you say we have to apply the Scientific Method. Please apply it to Rossi's question regarding the persuasion of the investors, OK? 1) Ok, So What Did Really Happen When Industrial Heat F*cked Up the Deal with Leonardo/Rossi? And Why?
2) Jones Day Lawyer Drones on Repeat in Another MTD. However, again Showing the Malicious Intent of IH! 6) The mystery of the irrational withdrawal of the E-Cat support 7) TheNewFire - LENR News 8) Yet Another LENR Theory: Electron-Mediated Nuclear Reactions (EMNR) 9) Andrea Calaon*, Independent Researcher, Monza, Italy. An attempt is made to build an LENR theory that does not contradict any basic principle of physics and gives a relatively simple explanation of the plethora of experimental results. A single unconventional assumption is made, namely that nuclei are kept together by a magnetic attraction mechanism, as proposed in the 1980s by Valerio Dallacasa and Norman Cook. This assumption contradicts a non-proven detail of the standard model, which instead attributes the nuclear force to a residual effect of the strong interaction. The theory is based also on a property of the electron which has long been known, but has rarely been used: the Zitterbewegung (ZB). This property should allow the magnetic attraction mechanism that binds nucleons together to manifest also between the electron and any isotope of hydrogen, leading to the formation of three neutral pseudo-particles (the component particles remain separate entities), collectively named here Hydronions (or Hyd). These pseudo-particles can then couple with other nuclei and lead to a fusion reaction "inside" the electron. The Coulomb barrier is not overcome kinetically, but through what could be interpreted as a range extension of the nuclear force itself, realized by the electron when some specific conditions are satisfied. The most important of these necessary conditions is that the electron has to "orbit" the hydrogen nucleus at a frequency of 2.055 × 10¹⁶ Hz. This frequency corresponds to photons with an energy of about 85 eV or, equivalently, a wavelength of 14.6 nm in the Extreme Ultra Violet (EUV).
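The three numbers quoted in the abstract - the "orbital" frequency, the photon energy and the wavelength - are tied together by E = hf and λ = c/f, so they can be cross-checked. A minimal back-of-envelope sketch of that arithmetic (my own check, not part of the paper):

```python
# Cross-check of the values quoted in the Calaon abstract:
# f = 2.055e16 Hz should give ~85 eV photons at ~14.6 nm.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_LIGHT = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19        # 1 eV in joules

f = 2.055e16                       # Hz, frequency quoted in the abstract
energy_eV = H_PLANCK * f / EV      # photon energy E = h*f, in eV
wavelength_nm = C_LIGHT / f * 1e9  # wavelength lambda = c/f, in nm

print(round(energy_eV, 1))         # -> 85.0
print(round(wavelength_nm, 1))     # -> 14.6
```

Both quoted figures are mutually consistent with the stated frequency.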
So the large quanta of nuclear energy fractionate into EUV photons during the formation of the Hydronions and during the coupling of Hydronions to other nuclei. The formation of Hydronions requires the so-called Nuclear Active Environment (NAE), which is what makes LENR so rare and difficult to reproduce. The numbers suggest that the NAE forms when an unshielded atomic core electron orbital that has an "orbital frequency" near to the coupling frequency is stricken by a naked Hydrogen Nucleus (HNu). This theory therefore implies that the NAE is not inside the metal matrix, but in its immediate neighbourhood. The best candidate atoms for a NAE are listed, based on their ionization energies. The coincidence with the most common LENR materials appears noteworthy. The Electron-Mediated Nuclear Reactions (EMNR) theory can also explain very rapid runaway conditions, radio emissions, biological NAE, and the so-called "strange radiation". © 2016 ISCMNS. All rights reserved. ISSN 2227-3123 Keywords: EMNR theory, Extreme ultraviolet, Hydronion 10) Electron Deep Orbits of the Hydrogen Atom, J. L. Paillet¹, A. Meulenberg², ¹Aix-Marseille University, France, ²Science for Humanity Trust, Inc., USA. This work continues our previous work [1] (and in a more developed form [2]) on electron deep orbits of the hydrogen atom. An introduction shows the importance of the deep orbits of hydrogen (H or D) for research in the LENR domain, and gives some general considerations on the EDO (Electron Deep Orbits) and on other works about deep orbits. A first part recalls the known criticism against the EDO and how we face it. On this occasion we highlight the difference in the resolution of these problems between the relativistic Schrödinger equation and the Dirac equation, which leads, for the latter, to considering a modified Coulomb potential with finite value inside the nucleus.
In the second part, we consider the specific work of Maly and Va'vra [3], [4] on deep orbits as solutions of the Dirac equation, so-called Deep Dirac Levels (DDLs). As a result of some criticism about the matching conditions at the boundary, we verified their computation, but by using a more complete ansatz for the "inside" solution. We can confirm the approximate size of the mean radii of DDL orbits and that they decrease when the Dirac angular quantum number k increases. This latter finding is a self-consistent result since (as distinct from the atomic-electron orbitals) the binding energy of the DDL electron increases (in absolute value) with k. We observe that the essential element for obtaining deep-orbit solutions is special relativity. Some thoughts on why Jed Rothwell is so surprisingly persistent with his comments and claims that the entire Rossi 12-month test was a failure (and, by implication, his blunt claims that Andrea Rossi and his claims are dishonest). Firstly, those of us who made published comments about IH's highly questionable behavior (such as my own comments published here) homed in on these aspects... 1) That IH had paid two lots of money to Rossi for E-Cat tech. 2) That IH had been in a relationship with Rossi for close on 3 years and had not given any clear indication before the 12-month test of any issues with that relationship. 3) Rossi's claim that IH used the early phases of the 12-month test to do fundraising (this was very damning to IH's position). 4) Especially, that when this all exploded, the loudest anti-Rossi voices were almost all known anti-LENR people as well and thus were clearly biased and opportunistically leaping in to exploit the Rossi-IH rift, while still using the situation to attack LENR in general. What I am seeing is that IH have adjusted their tactics in their publicity battle with Andrea Rossi by privately enlisting the support of people we all know are pro-LENR (such as Jed Rothwell - and 1 or 2 others).
Jed is currently, and without doubt, a champion for IH's position. The issue you (Peter) raise regarding this is how Jed is able to be so certain when few others have access to whatever it was/is that he has been given access to. The outcome: Jed is proclaiming (probably with some sense of justification) that we now need to accept 'Jed says' vs 'Rossi says'. IMHO, it is clear that IH have set out to enlist the support of recognized pro-LENR identities to counter the quite effective anti-IH messages that Andrea Rossi put out around the time he filed his lawsuit. But the questions you (Peter) raise are valid questions and deserve to be answered. Clearly there will be no answers from Jed, who argues he doesn't know anything other than the material passed to him to enlist his active anti-Rossi remarks. So, by enlisting people who are known to be pro-LENR, IH are effectively countering the harsh criticism some of us directed at them, and also those people who we know are both anti-LENR and anti-Rossi and who leaped on the IH bandwagon as champions of their battle with Rossi. It seems to me IH saw itself in difficulty in the word war until it was able to associate its position with pro-LENR people and break away from being supported only by the opportunity-grabbing anti-LENR voices we all know so well. In regard to IH actively enlisting pro-LENR people to publicly join in their defense: if I were in their shoes (they are clearly in a difficult position), I would do what they are doing too. But it does raise the question as to what kind of inducements or assistance IH might be offering these people to go public on their behalf and in their defense against Andrea Rossi. I know that all LENR researchers need and seek support, so when pro-LENR people clearly become vociferous supporters of the IH position against Andrea Rossi, it justifiably raises the question as to what 'rewards', tangible or intangible, these pro-IH voices are being offered.
All such questions are valid and deserve answers. Doug Marker Monday, June 27, 2016 What is impossible in LENR? That Andrea Rossi will give it up before everybody is able to get energy from it. That is the only impossibility I can be sure of. (Andrea Rossi) "Citius, Altius, Fortius. Faster, Higher, Stronger." (Olympic motto) In soccer, LENR and Life, Citius always wins! Yesterday my wife and I were watching soccer, UEFA EURO 2016, Belgium vs. Hungary. After a few minutes, remembering my career as an apprentice chronometrist for athletics in the 1950s, my father acting then as coach, I told my wife: "See, the Belgians are running so much faster, under 10.5 seconds per 100 meters, while the Hungarians are slower than 11 seconds per 100 meters; it will be a catastrophe." Something similar happened a few days ago when the Albanians "outran" the Romanian team. Speed is so important in other sports too; I remember that as a kid I was sent to maestro Pellegrini's fencing school and he was contented with me - clumsy but very fast-moving hands. It was in our period of reading Dumas and Michel Zevaco etc., so there was a cult of fencing and duelling, but it is over now; I have some duels with LENR people, as below. It is not easy to speak about speed in classic LENR. However, Rossi obviously loves speed, including in development. b) Facts can be understood only in their context. Jed Rothwell: You have not seen the data, so you have no basis to be convinced. Or not convinced. This is a technical issue. Opinions don't count. Everything hinges on flow rates, temperatures, instrument specifications, and so on. Based on these factors, experts at I.H. concluded that the reactor is not producing any excess heat. I am far less capable than those experts, but to the best of my ability, looking at a sample of that data, I too reached that conclusion.
You, Peter Gluck and everyone else will have to wait to see the data, and also the analysis of it from Rossi and from I.H. You cannot decide anything until then. You cannot even have an opinion. The rules of engineering and science say that every judgement must be grounded in facts, and you have no facts. I think it is a grave mistake for Peter to assume he knows what is going on, and to assume that Rossi is right in this dispute, and that I.H. and I are lying. Since he has no facts, this reaction is purely emotional. It is irrational. Since he has no engineering details, he trots out all kinds of half-baked notions about business contracts, or the timing of announcements, or he quotes lies spread by Rossi -- as if you can draw a technical conclusion from such fluff! It is pathetic. Peter is wrong. He will regret it if the facts are ever revealed. In science, you must never let your emotions or wishful thinking overrule rational, objective, fact-based analysis. Jed continues stubbornly not answering my 5 "stupid, nosy and irrelevant" questions and, as a symptom of something I still do not want to define exactly, he answers an imaginary question I have never put. This question, his, not mine, can be formulated as: "Rossi says test good, IH says test bad. Being a Rossi fan and having ab ovo great prejudices against IH, I believe Rossi. Why, on what basis, are you, Jed, certain that IH is right?" IT IS A NONEXISTENT QUESTION! I will repeat and explain my questions in a form a bit more accessible for you, supposing you are 100% right: Rossi wrong, IH right. We will state together whether this manoeuvre contributes to the missing IQ of the questions, makes them less impertinent and gives them a minimum of sense and relevance. NOTE.
I see logical, rational, straight thinking and discussion are not on your list of strengths, so I must have more patience with you; just first I want to tell you about FACTS, which are your privilege and not given to ordinary people: facts have significance only in context. A first fast example. You read: "Edmond Dantes has mercilessly ruined the lives of three rich and happy men." "What a sadistic rascal!" is the natural reaction to this; but if you put the fact in its proper context - the story of The Count of Monte Cristo by Alexandre Dumas - it changes the understanding of the facts completely, doesn't it? Now, your facts being OK, let's return, for the last time, to the lowly evaluated questions. 1- Did Rossi and IH have a valid contract that states that if the general performance test were successful, they should pay a great sum to Rossi? Possible answers: Yes, and No - the contract was broken by IH. Seemingly, facts missing, it was not, and it has opened Rossi's way to a Trial. Jed, please do not tell me that IH is happy with the Trial; I am stupid, but not sooo stupid! It means, if Rossi's results were indeed such a total and continuous catastrophe, why maintain the contract... for 's sake (the Greek God of Greed)? What could be confidential or secret in such a document of angelic honesty - "you are in trouble, we do not see excess heat even with the magnifier glass!"? Harmony between thoughts, words and action is essential, even to a company. It did not happen even at the end of the test or at the receipt of ERV report no. 4. It happened when the Trial started. What is your fact, in what context? 3- IH employees have participated at the test in parallel with Rossi's men; is there a written document showing they were in any way discontent with the test, the test being "a disaster"? Is this a toxic question? Rossi says they were there - what is the fact you know and those who ask do not? 4- When was the total incompetence of the ERV discovered, i.e.
the inadequacy of the measuring instruments, and when was it stated that the measurements are fatally flawed? (a document dated in 2015?) As far as I can understand, the methods of measurement were the same - dreadful from start to finish - so they were never good. This is a sad but explosive fact in the context of a valid $94 million contract for a successful test. 5- Rossi claims: "All I know is that Darden and JT Vaughn collected $150 million after the test of the 1 MW E-Cat began, using the first and second report of the ERV as a tool to get the money, then after the 4th report (equal to the former ones) they said what they said and did not pay." Is this slander or false accusation? This question deserves its color; it can be an infamous accusation, but it comes from Rossi, and who knows the facts related to the 1 MW plant better? Is it a stupid question? Not at all, because it is disturbing. It is nosy only if it is completely false. It is not relevant for Jed, but it can be relevant for many people, some of them quite influential, due to its deeper significance. So, Jed, I ask you not to invent my questions, to retract at least "nosy", and to feel free to play with your facts that are flawed like the ultraviolet unicorns - invisible, intangible, unverifiable, missing birth certificates - and... prepare to get facts from the Trial. My own sources say - but I cannot reveal their identity - that the trial will take place in the first 5 days of September. A new rule: all the witnesses will be obliged to perform an IQ test before testifying. You have arranged this? 1) Excess Heat Generation in Ni + LiAlH4 System (New Report by I.N. Stepanov and V.A. Panchelyuga) 2) LENR afternoon with Ubaldo Mastromatteo - more videos Pomeriggio Lenr Ubaldo Mastromatteo (5) Claudio Pace - we have to ask Ubaldo to send us the text!
It is not clear yet to what extent it is about LENR in the frame of rational mysticism. 3) An interesting paper signalled by EGO OUT on June 25 is discussed here: [Vo]:Ukrainian Paper on the active particle of LENR 4) A cold fusion paper in Dutch: 5) Andrey Illich Fursov (Russian historian, sociologist, politologist and publicist): About the Nuclear Cold Fusion of Ivan Stepanovich Filimonenko 6) On June 21, 2016 in Geneva, Switzerland there was a press conference about an epochal discovery of transmutation of chemical elements by a biochemical method. At the press conference participated Tamara Sahno and Viktor Kurashov, the scientists who made this discovery, and Vladislav Karabanov, administrator and leader of this project. Link to the patent for this invention. Very interesting; I started to discuss this with Vladimir Vysotskii - he is the greatest specialist in biochemical transmutations. 7) Also see the above info, here: Russian Team "Actinides" Announces Discovery of Industrial Biochemical Method of Elemental Transmutation (Press Conference and Press Release) 8) Greg Goble Energy 54+ Black Swans listed by Paul Maher Umair Haque: "The Art of Awakening" It is time for a LENR awakening! Why rudeness at work is contagious and difficult to stop
Physicists in France have used pairs of bouncing droplets on a fluid surface to simulate the Zeeman effect – a phenomenon that played an important role in the early development of quantum mechanics. The ability to simulate purely quantum effects using such a classical system could provide insight into how the mathematics of quantum mechanics should be interpreted. What does the Schrödinger equation mean? The question has been debated by physicists since this central tenet of quantum mechanics was introduced nearly 90 years ago. While its predictive power has been verified many times over in laboratories all around the world, exactly how the solutions to the equation (the wave-functions) should be interpreted is still not clear. The most popular school of thought is the famous "Copenhagen interpretation", formulated by Niels Bohr and Werner Heisenberg in the 1920s. This probabilistic interpretation of quantum mechanics holds that the observable properties of a particle do not have definite values until they are measured. However, this view is not universally accepted and another interpretation of quantum mechanics favoured by some physicists is the so-called "pilot wave" interpretation, formulated by Louis de Broglie in 1927 and later developed by David Bohm. This assumes that the observable properties of quantum particles are defined at all times but that they are guided by a wave, which neatly explains wave–particle duality. This is an example of a hidden variable theory because it explains the measurable properties of quantum mechanics as the consequence of a physically real, but experimentally inaccessible, feature – the wave. Contrived or intuitive? The two theories are mathematically indistinguishable, so some physicists see the so-called "Bohm interpretation" as a contrived attempt to explain the experimental results of quantum mechanics without embracing the weirdness of the Copenhagen interpretation.
However, in 1980 Michael Berry and colleagues at the University of Bristol in the UK used an analogy with surface waves in a classical fluid to come up with a more intuitive explanation of a bizarre quantum phenomenon called the Aharonov–Bohm effect (discovered by the same Bohm). Now, Yves Couder at the University of Paris Diderot and colleagues have explored this analogy further by looking at the behaviour of tiny, bouncing droplets called "walkers" as they move across the surface of a vibrating bath of silicone oil. The drops create waves on the surface of the fluid and are, in turn, influenced by these waves. According to Couder, this provides an interesting parallel with the "pilot wave" model of quantum mechanics. "There is a symbiosis between the droplet and the wave," explains Couder, "because if there is no droplet there is no wave. And if there is no wave the droplet doesn't move." Couder and his colleagues believe that this interaction between a walker and the waves that it creates is an example of wave-particle duality in a classical system because, while the droplet is localized in space like a particle, its motion can be influenced by anything that affects the pilot wave. Bound states Couder is emphatic that his group's system is not an exact analogy to quantum mechanics because, for example, it requires a continuous input of energy by vibrating the bath. Nevertheless, in previous research his group has managed to use walkers to create classical analogies to the quantum effects of single-particle diffraction and tunnelling. It has also shown that two walkers can orbit each other to form bound states, in an analogy to the quantized bound states in an atom. In the new research, the group investigated the Zeeman effect – a quantum effect whereby the energy levels in an atom split in the presence of an external magnetic field. An atom is a bound state of a nucleus and one or more electrons – and this is simulated using a bound state of two walkers. 
To create an analogy to an applied magnetic field, the researchers rotated the bath. The two-walker bound state was then free to rotate either with or against the rotation of the bath – simulating the orbital angular-momentum states of an atom. In the absence of the simulated magnetic field, both of these rotational states have the same energy. However, when the bath is rotated the energy of the rotational states splits, with one increasing and the other decreasing – just like the angular-momentum states of an atom in a magnetic field. The team also saw abrupt transitions between energy levels. Fernando Lund of the University of Chile in Santiago, who has headed a research group looking at similar problems but who was not involved in the current research, says: "The most significant feature of this paper, and of others by the same team, is the masterful use of state-of-the-art technology to bring out analogies between classical and quantum physics that can be easily visualized." He suggests that it might be interesting to try to visualize other quantum phenomena using classical means. "My favourite candidate would be the half-integer spin of some particles like electrons," he says. "Not that I can see any way of going about answering this question!" The research is published in Physical Review Letters.
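For scale, the quantum effect the walkers mimic is a splitting between adjacent orbital angular-momentum levels of ΔE = μ_B·B in the normal Zeeman effect. A minimal sketch of that magnitude, where the field value is my own example and not taken from the article:

```python
# Normal Zeeman splitting between adjacent m_l levels: dE = mu_B * B.
# The field strength B below is an assumed example value for illustration.
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
EV = 1.602176634e-19      # 1 eV in joules

B = 1.0                   # tesla (assumed example field)
shift_eV = MU_B * B / EV  # energy splitting in eV
print(f"{shift_eV:.2e} eV")
```

At one tesla the splitting is only of order 10⁻⁵ eV, far smaller than typical optical transition energies, which is why the effect shows up as a fine splitting of spectral lines.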
• Chemistry & Biochemistry • Courses 1010C Essentials of General Chemistry 4 credits Introduces students to the essential theories and principles of general chemistry and their application to modern society. Topics include chemical reactions, atomic and molecular structure, stoichiometry, bonding, the periodic table, acid-base theory, equilibrium, properties of gases, liquids and solids, and kinetics. The lecture course emphasizes problem-solving techniques while the laboratory portion introduces students to the methods of scientific investigation and basic laboratory techniques. (lecture: 3 hours; lab: 2 hours) Laboratory fee. 1045C, 1046C General Chemistry 4 credits Lecture and laboratory course for students going into the biological, chemical, health, or physical sciences. Atomic structure and stoichiometry; properties of gases, liquids, and solids; thermochemistry; quantum theory; electronic structures of atoms and molecules; chemical bonding; properties of solutions; thermodynamics; chemical equilibria including acid-base and solubility; kinetics; electrochemistry; nuclear chemistry. Laboratory experiments enhance understanding of principles taught in lectures. Emphasis on quantitative techniques; computer interfacing and spreadsheet applications. Second semester includes semimicro qualitative analysis. (lecture: 3 hours; recitation: 1 hour; lab: 3 hours) Laboratory fee. 1125C Analytical Chemistry 4 credits Theory and practice of classical and modern analytical chemistry. Laboratory applications of volumetric, gravimetric, and instrumental methods including potentiometry, spectrophotometry, and chromatography. One laboratory hour is a conference hour. (lecture: 2 hours; lab: 5 hours) Laboratory fee. Prerequisite: CHEM 1046C. 1213C Organic Chemistry I 5 credits The structure, properties, synthesis and reactions of hydrocarbons and alkyl halides, reaction mechanisms, stereochemistry.
A brief discussion of carboxylic acids, their derivatives, carbohydrates and amino acids. Laboratory experiments are designed to illustrate methods of separation, purification, identification, and synthesis of organic compounds. Spectroscopic measurements and molecular modeling are included. (lecture: 3 hours; recitation: 1 hour; lab: 4 hours) Laboratory fee. Prerequisite: CHEM 1046C. 1214R Organic Chemistry II 3 credits Conjugated unsaturated systems, aromatic hydrocarbons, structure, properties, syntheses and reactions of the main classes of organic compounds, spectroscopy, polymers and compounds of biological importance. (lecture: 3 hours; recitation: 1 hour) Prerequisite: CHEM 1213C. 1376R Biochemistry—Lecture 3 credits Structure and function of biomolecules; kinetics and mechanism of enzymes; bioenergetics and metabolism; membrane structure and dynamics; signal transduction. Prerequisite: CHEM 1213C or permission of the instructor. 1377L Biochemistry Lab 2 credits Laboratory experiments are designed to illustrate methods of purification, separation, and characterization of proteins; acid-base titration of amino acids; biomembranes; enzyme kinetics; molecular modeling, computational chemistry, and bioinformatics of biologically relevant molecules. Prerequisite: CHEM 1376R. 1415R Physical Chemistry—Lecture 3 credits Thermodynamics, chemical equilibrium, solutions, electrochemistry. Applications to biological and biochemical problems are used to illustrate general principles. Prerequisites: CHEM 1046C; MATH 1412 (or higher) 1416R Physical Chemistry—Lecture 3 credits Quantum chemistry; the Schrödinger equation and some simple applications; extension to three-dimensional systems; H-atom; many-electron atoms; structure of molecules; introduction to computational methods (molecular mechanics, ab initio methods); molecular spectroscopy; statistical mechanics; kinetic theory; chemical kinetics.
Prerequisites: CHEM 1046C; PHYS 1031C or 1041C; MATH 1413 1930, 1931 Current Topics 2 or 3 credits Selected subjects in chemistry. Discussion of current developments, problems, and literature. Open to seniors and selected juniors majoring in chemistry. Prerequisite: permission of the instructor. 1937 Seminar in Advanced Chemistry 1 credit Topics in all fields of chemistry presented by students and guest lecturers. Seminar meets two hours every two weeks. Pre- or co-requisite: CHEM 1214R or permission of the instructor. 4901, 4902 Independent Study Yeshiva University, 500 West 185th Street, New York, NY 10033
Issue 62, Infinite Energy. The 2005 MIT Cold Fusion Colloquium, Honoring Eugene Mallove. Scott Chubb. Dr. Mitchell Swartz (MIT '70, '84) is to be commended for arranging another scientific meeting at MIT for cold fusion scientists, other scientists and engineers, potential investors, and patent lawyers who have monitored the field, as well as individuals who are beginning to be aware of the field. Though he has organized conferences since 1991, this one had two purposes: 1) To commemorate and honor the memory of the cold fusion scientist and engineer, and editor and founder of Infinite Energy, Dr. Eugene Mallove (MIT '69); and 2) To help to advance communication about cold fusion. Dr. Swartz also is to be commended for organizing a gathering where related issues of a controversial nature about cold fusion could be discussed. These included an open-ended discourse about a new theory involving a potential form of coherent nuclear reaction, additional discourse about issues related to patents, the breakdown of the patent process in areas related to cold fusion, and the lack of scientific dialogue about the field that resulted from particular events associated with the early, inappropriate, and inaccurate assessments by mainstream scientists (including prominent individuals at MIT) about particular experiments. A number of individuals who either are employed by MIT or actively interact with individuals who work there should also be commended for trying to alter the existing dynamic. These include Dr. Mitchell Swartz and Profs.
Peter Hagelstein and Keith Johnson, who have been directly involved not only with MIT scientists but with members of the cold fusion community. Dr. Swartz is also to be commended for organizing and publicizing this event, with modest support and in a short period of time. Swartz's former colleague, Richard Shyduroff, was helpful in arranging the particular location (the Physics Department at MIT) for the event. In particular, Shyduroff, who was a co-founder of MIT's E-Club (entrepreneurial club), arranged that in exchange for a lecture on cold fusion to the E-Club's Transportation Forum, the E-Club would work with JET Thermal Products and Cold Fusion Times to get a room at MIT. The hope was to use the same physics room (the Kodak Room, a.k.a. "6-120") where Mitchell, Gene Mallove, and Richard had held cold fusion meetings twice in the early 1990s. MIT E-Club alumni helpers included Kurt Kevelle (MIT '90), Dave Sirkin ('92), Nancy Gardner ('81), Geoff Day, Nick Haschka, John Hawksley, Robert Tompkins, and Corey Fucetola. Gayle Verner from JET Thermal Products was a major help throughout. This enthusiasm soon led to securing not only the Kodak Room, but also the nearby Killian Room, where the tribute to Gene Mallove was held. The event, held on May 21, 2005, and its location were especially appropriate. The event was timely because although Eugene Mallove was tragically murdered slightly more than a year earlier, many people who have been involved with cold fusion research had not had the opportunity to publicly acknowledge and mourn their deeply-felt sense of loss. The timing of the event and its location reflect well not only on Mitchell but on Eugene.
In the truest sense, it goes without saying that the sincerity of these people conveyed the essence of the most important attributes of Eugene's character and life, and of his involvement with science: to speak up robustly and truthfully, with a sense of awe and fascination, about issues that count in life and in science. Scientific talks related to cold fusion, the associated controversy, and related science were scheduled in the Physics Department, Room 6-120, at MIT, between 8:30 a.m. and 6:30 p.m. Following the scientific talks, additional informal discussions and presentations took place, including a screening of a movie about cold fusion, "Breaking Symmetry," produced and written by former MIT professor Dr. Keith Johnson. The movie evolved as a direct consequence of a misrepresentation of the relevant science, including a fraudulent negative excess heat measurement, that took place at the MIT Plasma Fusion Center immediately after the initial cold fusion announcements in 1989. The movie has not been widely distributed, but is available from Infinite Energy. It reportedly has inspired a collaborative effort between the well-known Boston PBS television station WGBH and Disney (through ABC television) to develop a television series, titled "The Institute," that will document failures by MIT administrators to deal with fraudulent claims of "negative" cold fusion results. Approximately 90 to 150 people were in attendance at any one time throughout the program. The audience included mainstream scientists associated with the field, scientists from the academic and private sectors, potential investors, and students. The day was kicked off with an array of morning refreshments and then a brief introduction by Dr. Mitchell Swartz, Science Coordinator of the event and President of JET Thermal Products, Inc. Dr. Swartz began the colloquium by pointing out that cold fusion can decrease our reliance on foreign energy sources and make the Kyoto Protocol moot.
The heavy water contained in one cubic mile of ocean holds, if tapped through cold fusion, the energy equivalent of the known oil reserves on Earth. If cold fusion were to substitute for current energy production systems, pollution and harmful "greenhouse" gases would disappear: the present pollution from 9,000 tons of coal per gigawatt-day (30,000 tons of carbon dioxide, 600 tons of sulfur dioxide, and 80 tons of nitrogen oxide) would be replaced by only four pounds of safe, inert helium. Professor David J. Nagel, from George Washington University, presented the first scientific talk, which was an overview of many of the effects that have been observed in experiments related to cold fusion (and low-energy nuclear reactions, LENR). Professor Nagel became involved with cold fusion almost from the outset, while he was working as superintendent of the Condensed Matter and Radiation Science Division at the Naval Research Laboratory (NRL). In this capacity, he initially observed events in the field and subsequently (to a limited extent) sponsored research related to cold fusion and related areas at NRL; he also wrote important review papers about the field. In 2004, he, Peter Hagelstein, Mike McKubre, Randall Hekmann, and Talbot Chubb prepared a detailed white paper in which key effects from cold fusion experiments were summarized for the DOE review panel that re-assessed research in this area. In his talk, Nagel summarized some of the more important effects that (my paraphrase of his words) "are so convincing that they simply will not go away." In particular, Nagel provided a detailed summary of evidence that "cold fusion" involves nuclear reactions.
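The "cubic mile of ocean" claim can be checked with a back-of-envelope calculation. The sketch below is my own, not from the talk: it assumes complete d + d → helium-4 conversion at 23.8 MeV per pair, the standard deuterium abundance in seawater, and an illustrative world oil reserve figure of roughly 1.2 trillion barrels.

```python
# Back-of-envelope check of the "cubic mile of ocean" claim.
# All reserve and conversion figures are assumptions for illustration.

CUBIC_MILE_M3 = 4.168e9           # one cubic mile in cubic meters
SEAWATER_DENSITY = 1025.0         # kg/m^3
H_MASS_FRACTION = 2.016 / 18.015  # hydrogen mass fraction of water
D_ATOM_FRACTION = 1.56e-4         # deuterium abundance (atoms per H atom)
MEV_TO_J = 1.602e-13
E_PER_PAIR_J = 23.8 * MEV_TO_J    # energy released per d + d -> 4He pair
M_DEUTERON_KG = 3.344e-27

seawater_kg = CUBIC_MILE_M3 * SEAWATER_DENSITY
# deuterium mass: each D atom weighs ~2x an ordinary H atom
deuterium_kg = seawater_kg * H_MASS_FRACTION * D_ATOM_FRACTION * 2
n_pairs = deuterium_kg / (2 * M_DEUTERON_KG)
fusion_energy_j = n_pairs * E_PER_PAIR_J

OIL_RESERVES_J = 1.2e12 * 6.1e9   # assumed: ~1.2e12 barrels x ~6.1 GJ/barrel

print(f"deuterium mass: {deuterium_kg:.2e} kg")
print(f"fusion energy:  {fusion_energy_j:.2e} J")
print(f"oil reserves:   {OIL_RESERVES_J:.2e} J")
```

Under these assumptions the deuterium in one cubic mile of seawater yields on the order of 10^22-10^23 J, comfortably exceeding the assumed oil-reserve energy, so the claim is at least order-of-magnitude plausible.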
He began by citing the types of nuclear evidence, including: 1) Evidence that the amounts of excess heat are so large that it would violate the laws of physics and chemistry to explain them as being anything but the result of nuclear reactions; 2) Evidence that additional amounts of the usual, garden-variety helium isotope (helium-4) that is most abundant in nature appear after the excess heat is produced, at the levels that would be expected if virtually all of the mass lost in the deuteron (d) + d → helium-4 nuclear reaction is converted into heat, in a manner consistent with the E=mc2 relationship identified by Einstein; 3) Evidence that radioactive tritium (the unstable isotope, hydrogen-3) has been observed at levels well beyond background, but only when deuterium is used; 4) Evidence that in a number of experiments, significant amounts of neutrons, X-rays, and gamma-rays have been observed; 5) Evidence that large, localized forms of energy release are taking place, as manifested by unusual deformations ("craters") at the surfaces of electrodes that (as emphasized subsequently by Russ George) are difficult to account for by normal physics and chemistry; and 6) Evidence of hot spots involving significant local heating at the surfaces of cathodes. In addition, Professor Nagel pointed out that a body of evidence is accumulating that indicates that new elements are being created in cold fusion reactions. To support the idea that forms of transmutation might be taking place, Nagel cited a non-random pattern in the isotopes that has been observed independently in the work of Mizuno and Miley.
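The mass-energy bookkeeping behind point 2 is a standard textbook calculation. As a quick check (my own illustration, using standard atomic mass values), the Q-value of d + d → helium-4 follows directly from E = mc²:

```python
# Mass-defect check for the d + d -> helium-4 pathway,
# using standard atomic mass values in unified atomic mass units (u).
M_D = 2.014102      # deuterium atom, u
M_HE4 = 4.002602    # helium-4 atom, u
U_TO_MEV = 931.494  # energy equivalent of 1 u, via E = mc^2

q_value = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q = {q_value:.2f} MeV per fusion")  # ~23.85 MeV
```

This is the ~24 MeV per helium-4 atom against which the measured excess-heat-to-helium correlations described in the article are compared.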
He also emphasized the remarkable findings of Kevin Wolf, in which characteristic gamma-ray signatures throughout the spectrum of elements were observed, suggesting that many different kinds of nuclear decays and reactions appear to be possible as a consequence of LENR. Professor Nagel also provided a hand-out that summarized a new, novel theory by Widom and Larsen, involving a speculation that weak interactions might play an important role in a number of effects. Professor Yeong Kim mentioned that he had thought of a similar idea in the past, but had concluded that the effect probably would not account for most of the phenomena. In fact, Widom and Larsen appear to have made use of ideas that were also suggested by former University of Tennessee Professor Lali Chatterjee (now primary U.S. Editor for the Institute of Physics Publishing) during a symposium held June 7-11, 1998, in Nashville, Tennessee, by the American Nuclear Society, organized by Professor George Miley. In contrast to these earlier formulations, Widom and Larsen have developed a more refined model that potentially has a useful triggering mechanism for initiating the process. On the other hand, as I pointed out, their theory is incomplete: although it does suggest a possible cooperative effect (based on potentially large changes in the electromagnetic field that could be induced through surface plasmons in the neighborhood of a palladium-deuteride or deuterium-rich nickel surface), the theory does not explain how the resulting reaction (which involves the formation of neutrons, from proton-electron capture) would lead to the observed by-products at rates commensurate with experiments.
An interesting point, in this context, is that magnetic effects could potentially trap the neutrons (which are dispersed over many unit cells) in surfaces of nickel (where magnetism is present) in a way that could explain why nickel (Ni) might favor excess heat reactions involving a "normal" or "light" hydrogen nucleus (a single proton), as opposed to a heavy hydrogen nucleus (a proton-neutron pair). I also suggested, in this context, that the situation involving deuterium in palladium, as opposed to the apparent situation involving "normal" (or "light") water with nickel, could be very different, and that their theory could potentially apply in the nickel situation (where it has been difficult to understand how a reaction could occur involving protons exclusively), but not in the palladium-deuteride situation (where the ordinary deuteron (d) + d → helium-4 reaction appears to be the more logical one). Professor Nagel suggested that although he was not capable of responding to the points that Professor Kim and I had made, the fact that we had raised them appeared to imply that the theory suggested by Widom and Larsen might have important consequences for guiding future experiments. An interesting point, in this context, involves a potentially testable hypothesis: that electric fields as much as 100-1,000 times greater than those present in near-equilibrium situations at conventional palladium-deuteride surfaces would be involved in the potential triggering mechanism. Russ George spoke next and provided a summary of work in cold fusion. But as opposed to providing a more general overview of the many different effects that have been observed, he focused on particular experiments that are related to the most promising effects.
In particular, George provided an excellent summary, similar to the one he presented at the APS meeting that took place on March 24, which is discussed in a separate article in this issue (see p. 42). He discussed some of his work in sonofusion, electrolysis, and helium measurements. He also provided background about a number of key effects associated with the field, and identified a new approach (which he referred to as a form of argument, and which I would paraphrase as "forensic evidence of nuclear reactions") for justifying the idea that nuclear reactions are taking place, based on a comparison of scanning electron microscope (SEM) images of changes in surface morphology of the electrodes used in his work on sonofusion and in the work of Pamela Mosier-Boss. George further suggested that this kind of analysis applies to known features associated with collisions of alpha (and other) particles at the surfaces of metals in fission reactions. In particular, even crude estimates of the local energy densities required to create the kinds of crater-like structures that he and Pamela Mosier-Boss observed, which also appear to be present in these alpha particle emissions (from fission reactions), are so large that it would be difficult to account for their appearance based on the normal situation in metals. Similar structures were also observed by Vittorio Violante (and co-workers) in experiments carried out at the ENEA Frascati cold fusion facility in Italy. He also suggested, based on work that he did with Arata in Japan, and with gas-loading experiment procedures developed by Les Case (which George performed at SRI), that the key to reproducible excess heat may involve the use of nano-scale Pd crystals.
In keeping with the philosophy of emphasizing important, common trends associated with most cold fusion experiments, Russ George did not emphasize a number of nuclear effects that have been difficult to reproduce (such as the production of neutrons, high energy particles, and tritium), which Professor Nagel mentioned in his talk. An important point, in this context, is associated with perspective and emphasis. Nagel made his comments in order to emphasize the breadth and depth of the unknown effects, while George emphasized effects and trends that are reasonably well-accepted within the community and that may have practical utility. Emeritus Professor John Dash, from Portland State University, presented a talk titled "Characterization of Titanium Cathodes After Electrolysis in Heavy Water." Dash presented preliminary results involving high resolution transmission electron microscopy of a thin titanium cathode after electrolysis in heavy water. The technique provides a new method for resolving lattice-related effects (through fringe patterns) at spacings on the order of a billionth of a meter (a nanometer). These patterns allow him and his co-workers to isolate and identify individual crystals and the arrangement of crystals in a larger matrix. At these scales, collections of particles in square, cross-section-like patterns were observed, indicating particular structures (similar in length scale to the kinds of structures that George suggested might be important). These square structures contained Ti, Ni, Cu, Zn, and Pt atoms, some of which could be the result of nuclear transmutations. The Pt probably originated from the Pt anode that was used in the electrolysis, but the origin of the other elements is not known. Further studies are necessary and are being undertaken. Secondary Ion Mass Spectroscopy (SIMS) was used to analyze the masses of the unusual elements that were found.
MIT Electrical Engineering Professor Peter Hagelstein summarized some of the more important developments in his theory (a 16-year effort) to understand anomalies in metal-D systems. His newer, more important results are based on new ideas involving potential forms of coupling between nuclear coordinates and the coordinates associated with lattice vibrations (phonons) and electrons, which change considerably more slowly and over longer distances. To make this coupling occur, he suggests that coherent phonons (similar to coherent photons in lasers) can cause phonon-nuclear coupling that can result in nuclear reactions. I summarized some of this material in my article on the APS session (see p. 43). (Also, during the APS session, I presented some of the material that he presented at MIT; an audio recording of this is available.) Important points are: 1) Lattice effects (through phonons) involving changes in nuclear coordinates have been ignored in the past; 2) Changes in these coordinates can lead to coherent forms of coupling between the lattice and nuclear positions that do not occur in free space; and 3) These forms of coupling can lead to the formation of new and novel forms of compact deuteron-deuteron pairs that potentially can be capable of interacting with Pd nuclei and isotopes of Pd. Hagelstein further speculates that these forms of coupling can cause neutrons to be released with intermediate forms of momentum (i.e., between the extremely low momentum associated with conventional interactions with a lattice, and the higher momentum where this cannot take place) through a distribution that can be capable of coupling neutrons and/or charged particles in new and novel ways.
The theory can potentially explain a number of "fast alpha reactions" (i.e., reactions that involve energetic helium-4 nuclei) and multi-deuteron reactions (i.e., reactions involving many proton-neutron pairs), associated with potential multi-deuteron effects observed by Kasagi and with alpha emissions observed by Lipson. Hagelstein also suggested that neutrons, released through coupling involving compact deuteron pairs with Pd nuclei, might provide a mechanism for creating neutron clusters. These clusters, in turn, based on ideas that John Fisher suggested at ICCF10 and ICCF11, could lead to the formation of charged particle clusters that could explain anomalous alpha particle tracks that Oriani has observed in CR-39 films located outside cold fusion cells. At the conclusion of his talk, Hagelstein was asked a question that has been at the forefront of cold fusion research since it began: What possible impact could cold fusion science have on society? In response, Professor Hagelstein optimistically suggested that he had recently become aware of something quite new: the possibility that a commercial cold fusion device would be marketed soon, by a group working with an individual from South Korea who had received a patent for a cold fusion-related technology. After Professor Hagelstein's talk, at about 11:30 a.m., attendees made their way to the Killian Room for a memorial service for Gene Mallove. Individuals involved with cold fusion, as well as other individuals who knew and admired Gene, expressed their deeply-felt sorrow about his tragic murder. His widow, Joanne, and their son, Ethan, expressed their gratitude for honoring Gene and remembering him with this conference. Then friends and colleagues came to the microphone to share very poignant memories of their friendship and interactions with this cold fusion advocate and scientist whose life was so brutally cut short on May 14, 2004.
Mitchell Bogart recited Kaddish, the Jewish prayer for those who have passed on. The admiration, sincerity, and heartfelt remarks at this event were truly moving and inspiring. As I arrived home after the conference, thinking about all of the fascinating things I had heard throughout the day, this event stood out because it really reminded me about what counts: truly sincere, idealistic people like Gene bring out the best in all of us. The events at the MIT colloquium mirrored this. Gene brought us there, if not in the flesh, in the spirit of the people, what they said, and the intensity of their words. After the memorial service, an informal luncheon took place. This was the first opportunity for attendees of the colloquium to talk to each other informally as they enjoyed a fabulous array of Vietnamese and Thai foods. It was at this point that I began to appreciate both the depth (in a number of cases) and the breadth of the interest of the individuals who attended the conference. After the luncheon, a new session took place that focused primarily on theoretical talks. The first talk was presented by Professor Yeong Kim, from Purdue University, and was titled "Theory of Boson Ground-State Fusion Mechanism for Cold Fusion and Acoustic-Induced Cold Fusion with Micro/Nano-Scale High-Density Plasmas in Deuterated Metals/Liquids." By way of introduction, Professor Kim began by emphasizing that there have been many reports of experimental evidence for LENR processes in condensed matter, as documented in a recent DOE review (discussed in IE #58 and #61) and through the various ICCF Proceedings. However, he also emphasized that most experimental results cannot be reproduced on demand.
This situation has prevented the development of a coherent theoretical understanding or working theoretical model of the phenomenon, which could be used to guide the design and execution of new experimental tests to sort out the essential parameters and controls needed to achieve reproducibility on demand (ROD). In the talk, Kim outlined a procedure for doing this, through a theoretical model that he has developed based on a Bose-Einstein (BE) fusion mechanism that is applicable to the results of many different types of LENR and transmutation experiments. Kim pointed out that a number of recent experimental results indicate that LENR processes in condensed matter are surface phenomena (SP) that occur in micro- and nano-scale active (hot) spots in the surface regions, rather than bulk phenomena (BP) in the bulk of the deuterated metals. He and his colleague A.L. Zubarev have carried out a number of theoretical studies involving the proposed BE fusion mechanism, using an approximate solution to the many-body Schrödinger equation for a system of N identical charged, integer-spin nuclei ("Bose" nuclei) confined in micro- and nano-scale cavities (Y.E. Kim and A.L. Zubarev, Fusion Tech., 37, 151, 2000; Y.E. Kim and A.L. Zubarev, Italian Physical Society Proceedings, 70, 37, 2000; Y.E. Kim and A.L. Zubarev, Physical Review A, 64, 013603, 2001; Y.E. Kim, Progress of Theoretical Physics Supplement, 154, 379, 2004; Y.E. Kim and A.L. Zubarev, "Mixtures of Charged Bosons Confined in Harmonic Traps and Bose-Einstein Condensation Mechanism for Low Energy Nuclear Reactions and Transmutation Processes in Condensed Matters," Proceedings of ICCF11, Marseille, France, 2004). To apply their BE fusion mechanism, the ground-state solution is used to obtain theoretical estimates of the probabilities and rates of nuclear fusion for N identical Bose nuclei confined in an ion trap or an atomic cluster.
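For scale, it is worth seeing why conventional two-body tunneling forbids thermal d-d fusion in the first place, which is the barrier any mechanism like Kim's must circumvent. The sketch below is my own illustration, not from the talk, using the standard free-space Gamow exponent 2πη = 31.29 · Z₁Z₂ · √(μ/E), with the reduced mass μ in atomic mass units and the center-of-mass energy E in keV:

```python
import math

# Conventional free-space Gamow tunneling exponent for two bare nuclei.
# Standard textbook formula; my own illustration, not from Kim's talk.
def gamow_exponent(z1, z2, mu_u, e_kev):
    """2*pi*eta for charges z1, z2, reduced mass mu_u (u), energy e_kev (keV)."""
    return 31.29 * z1 * z2 * math.sqrt(mu_u / e_kev)

mu_dd = 2.014 / 2         # reduced mass of the d-d pair, in u
e_thermal_kev = 0.025e-3  # room-temperature thermal energy, ~0.025 eV

exponent = gamow_exponent(1, 1, mu_dd, e_thermal_kev)
# Tunneling probability ~ exp(-exponent): astronomically small at
# thermal energies, which is why conventional theory rules out d-d
# fusion at room temperature.
print(f"2*pi*eta ~ {exponent:.0f}")
print(f"log10(tunneling probability) ~ {-exponent / math.log(10):.0f}")
```

The exponent comes out in the thousands, so the conventional tunneling probability is suppressed by thousands of orders of magnitude; Kim's many-body argument is precisely a claim that this factor may be absent under confinement.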
One of the main predictions is that (due to the many-body nature of the problem) the Coulomb interaction between two charged bosons may be suppressed when a sufficiently large number N of Bose nuclei are included, and that, as a consequence, the conventional Gamow factor (associated with deuteron-deuteron fusion) may be absent. Recently, he and A.L. Zubarev have generalized the one-species LENR theory of the BE fusion mechanism that they have used for reactions such as (D+D) to the two-species case, and applied it to (D+Li) reactions (Y.E. Kim and A.L. Zubarev, Proceedings of ICCF11, Marseille, France, 2004). The only unknown parameter of the theory is the probability of the BE ground-state occupation, W. Since W is expected to increase as the effective temperature of the BE ground-state decreases, the nuclear reaction rates for the BE fusion mechanism are expected to increase at lower temperatures. Kim suggested that a number of effects could be explained through the BE fusion principle (and its extension to the two-species case), including the following: 1) deuteron beam experiments (Kasagi et al., Rolfs et al., and others), in which anomalously large cross-sections for high energy particle emission are observed when low energy deuterons strike a Ti (or other metal) target that has been loaded with deuterium; 2) electrolysis experiments (for example, along the lines of the work by Fleischmann and Pons); 3) gas experiments (Arata and Zhang, Case, and others); 4) nuclear emissions (Jones et al., ICCF10, and others); 5) transient acoustic cavitation experiments (Stringham et al. and others); 6) neutron-induced acoustic cavitation experiments; and 7) bubble fusion results (Taleyarkhan et al., Science, 295, 1868, 2002), involving cavitating bubbles imploding upon themselves at nucleation centers. The basis of the principle that Kim suggested could account for all of these effects involves two steps: 1) The formation of a high density plasma (D+'s, e-'s, etc.)
trapped in a micro/nano-scale cavity in a metal surface region; and 2) Trapped within the cavity, the plasma's kinetic energy becomes stifled, and the individual D+'s are forced into a common (lowest energy) bosonic many-body state. (This kind of state can effectively mimic a Bose-Einstein condensate, so that momentum can be transferred to many entities in the state instantaneously.) In contrast to the situation involving other Bose condensates, in the many-body state suggested by Professor Kim the Coulomb repulsion between each D+ and the remaining D+'s is included explicitly, in order for nuclear reactions to take place. The associated barrier is overcome through an energy minimization procedure that includes a competition between the attractive forces provided by a harmonic trapping potential (similar to the potentials provided by the magneto-optical traps used to trap alkali atoms in atomic Bose-Einstein condensates) and Coulomb repulsion. The predictions of the BE fusion mechanism can be tested in well-designed experiments, in order to find out whether it is a correct unifying theory for the LENR and transmutation processes in condensed matter. An interesting feature of the BE mechanism is that it predicts that the reaction rate can actually go up with decreasing temperature (which is consistent with a similar prediction that Talbot Chubb and I have made). Dr. Mitchell Swartz gave the next talk, entitled "Possible Parameter to Describe Optimal Operating Point." In it, he emphasized a number of important but not widely appreciated, somewhat subtle points about hydrogen absorption, loading, flux, the ratio of energies controlling loading (the organizing energy of the applied electric field intensity versus the random disorder of thermal energy), and how to obtain the excess heat effect. He then developed a parameter which he named in honor of Eugene Mallove.
Swartz began with equations showing that the loading of hydrogen into a metal lattice is very much a non-equilibrium process, because the mass transfer leading to metal loading (or, worse, by Swartz's continuum calculations, gas evolution) is driven by the changes the applied electric field evokes in the surface and bulk electrons and deuterons. These changes occur inside, outside, and at the surface of the metal. The non-equilibrium mass transfer process, driven by the applied electric field, plays a key role in the generation of excess heat. Dr. Swartz's continuum loading flux equations include diffusion down concentration gradients and electrophoretic drift from an applied electric field. Rather than fitting curves, Dr. Swartz uses deuteron diffusivity and electrophoretic mobility to understand, and control, these systems. He converts these parameters with the Einstein relation to illustrate that the loading rate and ultimate deuteron availability (secondary to the applied electric field) to the palladium lattice is basically at odds with gas evolution at the cathode. Simply put, not all of the deuterons enter, and load, the metal. This is important because Swartz's continuum deuteron flux equations predict that the loading of hydrogen isotopes into the metal is controlled by the ratio of the applied electric field energy to the thermal energy kBT (kB = Boltzmann constant, T = temperature) and is opposed by competitive gas-evolving reactions at the metal electrode surface. Swartz said that in the equations describing flux, flux rate, and related quantities during the loading process, electrolytically-induced effects that have been widely viewed as necessary for initiating cold fusion at high loading (for example, bubble formation) effectively oppose the desired cold fusion reactions; this possibly suggests that fluctuations induced by these effects (as opposed to the effects themselves) are probably what is important.
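The kind of continuum flux balance described above can be sketched generically. This is my own minimal illustration with illustrative numbers, not Swartz's actual equations or parameter values: a deuteron flux written as Fickian diffusion plus electrophoretic drift, with diffusivity and mobility linked by the Einstein relation D = μ·kB·T/q, and the dimensionless field-to-thermal energy ratio he emphasizes as the control parameter.

```python
# Generic drift-diffusion sketch (my own, illustrative values only).
KB = 1.380649e-23  # Boltzmann constant, J/K
Q = 1.602177e-19   # deuteron charge, C
T = 300.0          # temperature, K

def deuteron_flux(conc, dconc_dx, e_field, mobility):
    """Flux = -D * dc/dx + mu * E * c, with D from the Einstein relation."""
    d_coeff = mobility * KB * T / Q    # Einstein relation: D = mu*kB*T/q
    diffusion = -d_coeff * dconc_dx    # Fick term, down the gradient
    drift = mobility * e_field * conc  # electrophoretic term
    return diffusion + drift

def field_to_thermal_ratio(e_field, length):
    """Ratio of field energy gained over `length` to thermal energy kB*T."""
    return Q * e_field * length / (KB * T)

# e.g. an assumed 1 MV/m field acting over 1 nm at room temperature:
print(field_to_thermal_ratio(1e6, 1e-9))
```

With the field term dominant the drift loads the lattice; with the ratio small, thermal disorder and competing gas evolution win, which is the qualitative competition Swartz describes.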
In particular, in one equation from a paper he wrote in 1992 (Swartz, M., Fusion Technology, 22, 296-300, 1992), the desired contributions to the loading process are shown to have kinetics that are proportional not to the "electrolysis-related terms," but (in appropriate units) to "1 minus the associated electrolysis-related terms." Swartz emphasized that the flow equations are especially important because they predict the optimal operating point (OOP) that is implicit in the behavior of cold fusion palladium-heavy water systems and nickel-light water systems that produce excess heat. The OOPs (or OOP manifolds) can be found by plotting excess power (in the case of OOP manifolds, excess heat or helium, or other excess product) as a function of input electrical power. The OOP is the relatively narrow peak (maximum) of the biphasic production rate curve for the products obtained by the desired reactions (heat, helium-4) as a function of input electrical power. At the center of the OOP, the peak power ratio and system output are at a relative maximum, and so the peak is where each system should be (optimally) operated. Driving with electrical input power beyond this operating point yields a typical fall-off of the observed power ratio for increasing input power or current levels, towards a power gain ratio of 1 and then less. In particular, Swartz's plots of product and heat output vs. input electrical power in a number of experiments (his, Arata/Zhang, Miles, and others) all show this distinctive pattern, in which output power can rise and decline suddenly as the input power varies by only a relatively small amount. He refers to the locations of these sharp variations in output as the OOP manifold, with the peak referred to as the OOP. At the MIT meeting, Swartz showed new graphs with information on how the OOP manifold, with the OOP at the top of the manifold "tent," is dynamic and grows with loading and electrode maturation (which takes weeks).
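The procedure of locating an OOP from such a plot can be sketched numerically. The curve below is synthetic and entirely my own construction (not Swartz's data or model): a biphasic response that rises to a narrow peak and then falls off, from which the peak input power and the power gain at the peak are read off.

```python
import math

# Synthetic biphasic excess-power curve (illustrative only, not real data).
n = 200
p_in = [0.1 + i * (9.9 / (n - 1)) for i in range(n)]  # input power, W
# Output rises with input, peaks, then falls off toward gain <= 1:
p_excess = [2.0 * p * math.exp(-((p - 3.0) / 1.5) ** 2) for p in p_in]

# The OOP is the maximum of excess power vs. input power.
i_peak = max(range(n), key=lambda i: p_excess[i])
oop_input = p_in[i_peak]
gain_at_oop = (oop_input + p_excess[i_peak]) / oop_input  # total out / in
print(f"OOP at ~{oop_input:.2f} W input, power gain ~{gain_at_oop:.2f}")
```

Driving past the peak in this toy curve reproduces the qualitative fall-off Swartz describes: the gain decays back toward 1 and below as input power keeps rising.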
Swartz also revealed a new parameter which describes the shape of the OOP manifold, and which might herald significant differences between different types of OOP manifolds. This parameter may enable further understanding of how to control these devices and systems, and may predict which can be engineered for precise control by servo- and other systems. Swartz named the parameter to honor his friend and colleague, Dr. Eugene Mallove. Those OOP manifolds that are very tall and thin ("high Malloves") have the highest likelihood of such servo-control, as Dr. Bass suggested. For example, the Arata data, described by Russ George, is characterized by Swartz as a "19 Mallove" OOP, indicating a very high, very narrow peak. The heat and helium-4 production data for Pd/D2O of Dr. Swartz and Dr. Miles have "4 Mallove" and "5 Mallove" OOP manifolds, which are also robust, though less narrow, peaks of important cold fusion systems. Swartz also presented new data obtained from his dual-ohmic control calorimeters, which not only measure calibrated excess heat but also elucidate the boil-off effect (first observed by Pons and Fleischmann, and subsequently by several other groups), in which excess heat continues during electrolysis experiments after all of the electrolyte has boiled away. In particular, Swartz has shown that measurement requires examining not just the integral of the released energy (the "heat after death"), but also the kinetics of how it is produced in the absence of input power. In that sense, he feels it is more appropriate to view the effect as being triggered by a form of delay in excess power production (possibly caused by a redistribution of material). Within this context, he suggested that a more appropriate term for "heat after death" might be "tardive thermal power," which, in a figurative sense, Dr. Swartz referred to as "the first time-derivative of 'heat after death.'" Dr. Talbot Chubb gave a talk titled "How Physics Supports Cold Fusion."
He suggested that Fleischmann-Pons (F-P) cold fusion does not require new physics. Instead, it requires that the right physics be used to model the right problem. Normal nuclear physics uses quantum mechanical scattering theory to model the nuclear changes that occur when high energy charged particles like protons and deuterons impact nuclei. To understand cold fusion, he said, it is necessary to model the right problem using the right rules of the game. A good starting point involves recognizing the difference between free space and a lattice. In free space, deuterons are free to scatter and are not constrained in their motion by other charged particles. In a lattice, deuterons are not free to scatter but are constrained to move in particular ways by the particles in the lattice. For this reason, the scattering rules that apply to fusion in free space are not the appropriate rules for describing fusion and other forms of LENR in solids. In solids, the appropriate rules involve an energy minimization procedure that includes all of the charged particles in the lattice. This, in turn, can lead to counter-intuitive results, in which appreciable overlap between charged particles can take place. In particular, he cited a well-known example of how charged particles (two electrons) can have appreciable overlap in an environment (a neutral helium atom) where the particles are constrained (through energy minimization) to "move" in a particular way as a result of interacting with additional charged particles (the two protons in the helium nucleus).
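The lattice-versus-free-space distinction Chubb draws rests on a standard piece of solid-state quantum mechanics: a particle in a periodic potential takes a Bloch form, ψ_k(x) = e^(ikx)·u(x) with u cell-periodic, so its probability density repeats identically in every unit cell. The toy illustration below is my own (standard textbook material, not Chubb's calculation):

```python
import cmath
import math

# Toy Bloch-form wavefunction psi_k(x) = exp(i k x) * u(x), with u(x)
# periodic over the lattice constant a. |psi|^2 is then identical in
# every unit cell: the particle is equally partitioned among cells.
a = 1.0                # lattice constant (arbitrary units)
k = 0.7 * math.pi / a  # crystal momentum within the first Brillouin zone

def u(x):
    """Any cell-periodic envelope function (illustrative choice)."""
    return 1.0 + 0.5 * math.cos(2 * math.pi * x / a)

def psi(x):
    return cmath.exp(1j * k * x) * u(x)

# Probability density at the same point within two different cells:
x0 = 0.3
print(abs(psi(x0)) ** 2, abs(psi(x0 + 5 * a)) ** 2)  # equal densities
```

Because |e^(ikx)| = 1, the density is just u(x)² repeated cell by cell, which is the "coherent partitioning" picture: a part of each Bloch particle resides in every unit cell of the host crystal.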
Chubb said that, as opposed to some form of scattering or collision (which would require high energy particles to be released) in a lattice, the "trick" that can make cold fusion possible is the discovery of how, as a result of the deuterons being in a metal lattice, it can become possible to greatly reduce the potential energy and the associated proton-proton and deuteron-deuteron repulsion, without requiring that the deuterons acquire a high velocity. He said a "trick" is required because the kinetic energy doesn't normally dominate the repulsion in a lattice (or elsewhere), especially when the deuterons have low velocity. Chubb thinks the trick is what he calls "coherent partitioning." He said that the idea of "coherent partitioning" goes back to the 1920s, when Felix Bloch showed that valence electrons assume a wave-like form when they exist in a metal. Their wave functions have the array symmetry of their hosting metal crystal. This geometry has come to be called Bloch-function symmetry. He said the valence electrons are "coherently partitioned," so that a part of each Bloch electron occurs in each unit cell in its metal crystal host. The trick is to get some of the resident deuterons to assume the Bloch form. Dr. Chubb emphasized that most of the rules that he believes are needed to understand Pons and Fleischmann cold fusion are spelled out in standard textbooks (F. Seitz, The Modern Theory of Solids, McGraw Hill, New York, 1940, pp. 195-234.; E. Merzbacher, Quantum Mechanics, John Wiley & Sons, New York, 1961, pp. 64-75, 466-471) and are based on well-known ideas about quantum mechanics that have been known since the time he learned the subject in the 1940s. The other rules, he said, come from ideas and experimental results that have been published in cold fusion literature and from known effects associated with the resonance structure of alpha-alpha collisions and of 8Be that are explained in a standard nuclear physics textbook (K. 
Heyde, Basic Ideas and Concepts in Nuclear Physics, Institute of Physics Publishing, Bristol, 1994, pp. 54, 299-323). Chubb likes to think of cold fusion and related LENR as a mystery story. Recently, he has come to believe that a key set of clues was provided not only in the initial work of Fleischmann and Pons (M. Fleischmann and S. Pons, J. Electroanal. Chem., 261, 301, 1989), but in the much more recent work involving transmutations by Iwamura et al. (Y. Iwamura, M. Sakano, and T. Itoh, "Elemental Analysis of Pd Complexes: Effects of D2 Gas Permeation," Jpn. J. Appl. Phys., 41, 4642, 2002) and in the observations by Oriani and Fisher (R.A. Oriani and J.C. Fisher, "Energetic Charged Particles Produced in the Gas Phase by Electrolysis," Proc. ICCF10, World Scientific, 2005, in press) of alpha particle showers, outside electrolytic cells. [Editor's Note: We are publishing an article by Dr. Chubb, summarizing these and related ideas, in the current issue of Infinite Energy—see p. 19.] After Talbot Chubb spoke, I gave a talk, titled, "Understanding LENR Processes and Cold Fusion, Using Conventional Condensed Matter Physics." I summarized the relationship of the ion band state theory (that has formed the basis of Talbot Chubb's and my theories about LENR) to the more general theoretical framework (based on generalized multiple scattering theory) that can be used to understand the relationships among a number of theories (our earlier theory, my more recent improvements of this theory, associated with broken gauge symmetry, Peter Hagelstein's theory, Yeong Kim's theory, Julian Schwinger's theory) that include mathematical expressions that relate nuclear reaction rate to the underlying many-body physics. I also presented a time-line that shows the history of our theory and its relationship to experiments.
In particular, in this context, I pointed out that as a result of the agreement between four ion band state theory predictions and later experimental observations, the credibility of the theory helped to inspire a ten-year, collaborative effort (involving the Naval Air Warfare Center, Weapons Division, the Naval Space Warfare Systems Center, and the Naval Research Laboratory) that focused on understanding heat and anomalous nuclear effects in palladium-deuterium systems. The four predictions (which were subsequently confirmed by experimental observations) were that in the Pons and Fleischmann excess heat experiments: 1) The primary products should be heat and helium-4; 2) The heat should be produced without high energy products and should be at levels commensurate with the energy that results when helium-4 is produced from deuteron+deuteron fusion; 3) The helium-4 should be found primarily outside and near the surfaces of PdD electrodes; and 4) High-loading would be required to initiate the excess heat effect in PdD. Subsequently, in the time-line that documented the relationship of our theoretical predictions to experimental observations, I also pointed out an additional success: based on our suggestion of embedding a number of micro-scale (and smaller) PdD crystals into a porous medium, Dr. Bahkta Rath of the Naval Research Laboratory (NRL) suggested to Dr. Ashraf Imam (also from NRL) that it might be useful for him to prepare Pd-B alloys, with a Pd concentration that is sufficiently small that the two metals would effectively remain segregated (i.e., they would form an immiscible alloy). Subsequently, Dr. Imam made various immiscible Pd-B alloys that Melvin Miles used as electrodes in excess heat experiments. Seven out of eight of these alloys produced excess heat.
In the one instance that the alloy failed to produce excess heat, it was possible to identify a reason for the failure: The alloy developed a significant fracture that made it impossible to load it with deuterium at the levels that are required for producing excess heat. I concluded my talk by discussing a very new result that I had reported at the APS meeting: It is possible to identify a potential triggering mechanism that not only is consistent with the underlying electronic structure and behavior of fully-loaded PdD but can provide an explanation of the long incubation times (which range between seconds or minutes and days, and even weeks) that are required to initiate excess heat, after full-loading has been achieved. A novel, and potentially key, aspect of this mechanism is that it suggests that the triggering effect is related to crystal size: A critical range of sizes (involving characteristic dimensions ranging between ~6 and 60 nanometers) should be optimal for triggering the effect; while at sizes larger than this, triggering times rapidly increase to the point that the reaction will never take place. I have included additional information about this aspect of my talk in the article about the APS meeting that is also included in this issue [p. 40]. Dr. Robert Bass, a senior engineer at Innoventech Inc., presented the final talk that was explicitly related to cold fusion. Over the years, Dr. Bass, using concepts that have evolved from efforts involving a number of well-known physicists, has developed an interesting, semi-classical model that suggests LENR effects can be triggered by resonant phenomena, defined by situations in which odd or even multiples of a de Broglie wavelength of potentially interacting particles approximately equal the separation between neighboring unit cells, in a one-dimensional lattice.
According to this theory, resonant fusion phenomena can be triggered through effects associated with the behavior of a particular quantity (that he refers to as the Schwinger Ratio). Because this quantity is proportional to the square root of the mass of the potentially interacting particles, multiplied by the lattice spacing, its behavior is material and mass dependent. Based on criteria associated with an observation that he has made concerning potential tunneling, he has concluded that whether or not tunneling can take place can be inferred from the requirement that the Schwinger Ratio be either odd or even. From these criteria, Bass suggests it is possible to predict that excess heat will be produced with both heavy water and "light" water in Ni, but with heavy water only and not "light" water in Pd. Bass mentioned that this particular prediction, to his knowledge, passes a test which Rabinowitz (in Int. J. Theor. Phys.), after reviewing 371 cold fusion-theory papers, had argued that "no" known or foreseeable cold fusion theory could ever possibly pass. Following Dr. Bass, Emeritus MIT Professor Keith H. Johnson presented an interesting talk concerning an idea that he and D.P. Clougherty published in July 1989 (in Modern Physics Letters B, 3, 10, 795-803), concerning the hypothesis that some form of d-d fusion reaction might be possible, as a result of a structural phase transition that they predicted would occur (based on ab initio quantum chemistry calculations) involving nano-crystalline forms of Pd. In particular, through this mechanism (which could induce a strong, anharmonic distortion of the lattice), he said they were able to find fusion rates that were comparable to those observed by Jones et al. (based on their neutron measurements) but that they could not account for the heat results (based on standard fusion models) observed by Pons and Fleischmann.
During his talk, Professor Johnson also provided some background about the screenplay that he wrote for the movie, "Breaking Symmetry," which deals with a fictitious story about cold fusion (that is based, in a peripheral way, on the events that took place at MIT in 1989). He showed a number of film clips from the movie. One particular scene was especially memorable because, indirectly, one of the characters immortalized Eugene Mallove's book, Fire from Ice: Searching for the Truth Behind the Cold Fusion Furor, by referring to the book as being the authoritative source of correct information about cold fusion. It was especially fitting that Professor Johnson showed this clip at a colloquium that was dedicated to the memory of Gene. There were also additional talks that were not directly related to the science of cold fusion. These included a talk by Professor Robert Rines, former head of MIT's patent office department and dean and founder of the Franklin Pierce Law School for Intellectual Property, about the deliberate decision by the Patent and Trademark Office to block cold fusion. There were also talks about non-cold fusion alternative clean energy systems. These included talks by: Ken Shoulders, Peter Graneau, and Brian Ahern. A copy of Ken Shoulders' talk is available online, in a pdf file titled "EVOs and Hutchinson Effect.pdf." Additional information about work that Ken and his son Steve have done on charge clusters and related effects can be found in Infinite Energy #61. Peter Graneau's talk dealt with the idea of developing an alternative energy storage process by partially extracting some of the latent heat energy that is present at all air-water interfaces. Brian Ahern spoke about a new, clean diesel technology. I would like to thank Dr. Mitchell Swartz and Gayle Verner for helping me organize and prepare this article and for providing material about their work.
I would also like to thank Talbot Chubb, John Dash, Yeong Kim, David Nagel, and Ken Shoulders for providing additional material. Thanks also go to Steven Krivit and John Coviello for providing a pre-publication version of an article that John prepared about the MIT Colloquium, which will appear in the July edition of New Energy Times. Mitchell Swartz adds: "We are all grateful to the Massachusetts Institute of Technology for hosting the meeting and encouraging the scientific arts thereby. Every one of the lecturers went out of their way to educate about cold fusion and alternative energy technologies, and we thank them for their time. Outside support for the 2005 MIT Cold Fusion Colloquium was provided by JET Thermal Products, the New Energy Foundation, Cold Fusion Times, the MIT E-Club, ZeroPoint, and GCR Consulting LLC. As a result of their generosity, the event was a major contribution to further advancing the dialogue and research on cold fusion and alternative energy. Gene would have been very proud."
Section 8.5: Towards a Wave Packet Solution

Considering our failure (the lack of localization and normalization) with using only one solution to the Schrödinger equation (a single time-dependent energy eigenfunction) for the free-particle problem, what about a superposition of the plane wave solutions explored in Sections 8.3 and 8.4? While these constructions approach a localized solution, there are always copies of the localized solution created. Instead of a sum of individual solutions, consider an integral,

Ψ(x) = 1/(2πħ)^(1/2) ∫ Φ(p) e^(ipx/ħ) dp   [integral from −∞ to +∞]   (8.11)

which is called a Fourier transform. The Fourier transform adds a continuum of plane wave solutions, e^(ipx/ħ), weighted by a function of momentum, Φ(p). This function is called the momentum-space wave function since it plays the same role in momentum space as Ψ(x) does in position space. The momentum-space wave function, Φ(p), is itself the inverse Fourier transform of Ψ(x) and is given by:

Φ(p) = 1/(2πħ)^(1/2) ∫ Ψ(x) e^(−ipx/ħ) dx   [integral from −∞ to +∞]   (8.12)

Now, we seek to understand the generic wave function as defined by the Fourier transform in Eq. (8.11) by substituting a reasonable function for Φ(p) and calculating the position-space wave function. Consider a normalized Gaussian distribution in momentum centered on a momentum p0, such that

Φ(p) = (α^(1/2)/π^(1/4)) exp[−α^2 (p − p0)^2 / 2].   (8.13)

Note that |Φ(p)|^2 goes to 1/e of its maximum value when p = p0 ± 1/α. Therefore 1/α tells us something about the spread of the momentum-space wave function. This momentum-space wave function is shown in the bottom panel of the animation. In the animation, ħ = 2m = 1. To find the position-space wave function, we must use Eq. (8.13) in Eq. (8.11) and evaluate the resulting integral. When we do this Gaussian integral, we get:4

Ψ(x) = [π^(−1/4) (αħ)^(−1/2)] exp(ip0x/ħ − x^2/(2α^2ħ^2))
(8.14)

Look at the animation to see how the position-space wave function is related to the original momentum-space wave function. The bottom panel shows momentum space and the top panel shows position space. Vary p0 and α and see what happens. As p0 gets larger and positive, the momentum-space wave function shifts to the right and is centered on the new value of p0. The position-space wave function now has bands of color which represent the exp(ip0x/ħ) factor in the wave function. As α increases, the momentum-space wave function narrows and the position-space wave function widens (which is a result of the Heisenberg uncertainty principle). Our packet has almost all of the right features we want in a packet that simulates a particle. However, it does not have a time dependence and it does not allow us to shift the initial position, x0, of the packet to any value we like. We will add these features next.

4Since the momentum-space wave function was normalized, so is the resulting position-space wave function. In general, due to the relationship between Ψ(x) and Φ(p) as expressed in Eqs. (8.11) and (8.12), we have that: ∫ |Ψ(x)|^2 dx = ∫ |Φ(p)|^2 dp [integrals from −∞ to +∞], and hence if one is normalized, so is the other.

Physlet Quantum Physics
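The Gaussian transform pair in Eqs. (8.11)-(8.14) can be spot-checked numerically. The following is a minimal sketch (not part of the original page), assuming the animation's units ħ = 2m = 1 and the illustrative values p0 = 2, α = 1; a plain Riemann sum stands in for the Fourier integral:

```python
import numpy as np

hbar = 1.0                      # the animation's units: hbar = 2m = 1
alpha, p0 = 1.0, 2.0            # illustrative values; vary them as in the animation

# momentum grid and the normalized Gaussian of Eq. (8.13)
p = np.linspace(-40.0, 40.0, 4001)
dp = p[1] - p[0]
phi = (np.sqrt(alpha) / np.pi**0.25) * np.exp(-alpha**2 * (p - p0)**2 / 2)

# evaluate the Fourier integral of Eq. (8.11) on a position grid
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
psi = (phi * np.exp(1j * np.outer(x, p) / hbar)).sum(axis=1) * dp / np.sqrt(2 * np.pi * hbar)

# closed form, Eq. (8.14)
psi_exact = (np.exp(1j * p0 * x / hbar - x**2 / (2 * alpha**2 * hbar**2))
             / (np.pi**0.25 * np.sqrt(alpha * hbar)))

print(np.abs(psi - psi_exact).max())   # agreement up to discretization error
print((np.abs(psi)**2).sum() * dx)     # the packet stays normalized
```

Because the Gaussian and all of its derivatives vanish at the ends of the momentum grid, the plain sum converges far faster than the crude step size would suggest.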
Continuing from my first question titled, Simple Quantum Mechanics question about the Free particle, (part1): Griffiths goes on and says, "A fixed point on the waveform corresponds to a fixed value of the argument and hence to x and t such that," x $\pm$ vt = constant, or x = $\mp$ vt + constant. What does this mean? I am so confused. He goes on and says that $\psi(x,t)$ might as well be the following: $$\psi(x,t) = A e^{i(kx-(\hbar k^2/2m)t)}$$ because the original equation, $$\psi(x,t) = A e^{ik(x-(\hbar k/2m)t)} + B e^{-ik(x + (\hbar k/2m)t)}$$ only differs by the sign in front of k. He then says let k run negative to cover the case of waves traveling to the left, $k = \pm \sqrt{2mE}/\hbar$. Then after trying to normalize $\psi(x,t)$, you find out you can't do it! He then says the following, "A free particle cannot exist in a stationary state; or, to put it another way, there is no such thing as a free particle with a definite energy." How did he come to this logical deduction? I don't follow. Can someone please explain Griffiths' statement to me?

Take for example a sine wave at 90 degrees as the point of the function. It means that the function moves/changes value in time, when you sit on a specific x. The 90 degree point passes over you. As I said in the comment to the previous: plot. – anna v May 19 '11 at 4:00

I think he's just saying that a maximum of a wave is moving much like the particle itself, so its group velocity may be interpreted as the particle's velocity. Of course, this is not the proper canonical way to deal with the observable called "velocity" in QM. – Luboš Motl May 19 '11 at 4:03

The "A free particle cannot exist in a stationary state; or, to put it another way, there is no such thing as a free particle with a definite energy." seems to me wrong, as far as experimental observations go. Otherwise the LHC is an exercise in futility.
He probably means "a free particle cannot be represented by a psi function of the plane wave type shown". This is true, one needs wavepackets to represent a free particle mathematically. – anna v May 19 '11 at 5:20

p.s. to my above: The plane wave represented by the psi above, once it starts it goes on to infinity inexorably by the functional form as t is not stopped. Thus it cannot represent a particle, stationary or not. Particles are all around us. – anna v May 19 '11 at 5:37

The first part about velocities says that we're looking at a function $$\psi(x,t) = \psi(u)$$ for $u=x-vt$. For example, $\psi = \cos(x-vt)$. Now pick some fixed value for $\psi$, say $0.4$. Find a place where $\psi = 0.4$, such as $x=1.16$. If you let some small amount of time $\textrm{d}t$ go by, then look at $x=1.16$ again, $\psi$ has changed a little, but if you look at the point $x = 1.16 + v\,\textrm{d}t$, you will find $\psi = 0.4$ there, so we could say that the place where $\psi$ is $0.4$ is moving at speed $v$. This is not the same as saying that the particle is moving at that speed. It is saying that $v$ is the phase velocity of the wave. The second part about normalizability says that a wavefunction must be an element in a vector space called a Hilbert space (by physicists; I think mathematicians call it $L_2$). The Hilbert space consists of wavefunctions that are square-normalizable; you can square them, integrate from negative infinity to infinity, and get a finite value. Things that die off exponentially on their tails do this, for example. The sine function doesn't die off exponentially, or at all. If you square it and integrate from negative infinity to infinity, you get something infinite. Thus, the sine wave doesn't represent a reasonable probability density function for the location of a particle. The particle would be "infinitesimally likely to be observed everywhere in an infinite region", which physically does not make sense.
Instead, for a real particle, we must have a normalizable wavefunction. Since a free particle with definite energy would have a pure sinusoidal wavefunction, a free particle with definite energy is physically not possible.

I submit that the correct statement should be: "particles cannot be represented by plane waves." Particles exist, they are free, they have a definite energy given by the momentum measurement within the Heisenberg uncertainty principle, but they are represented by wave packets because they cannot be represented by plane waves, from the reasoning above. It is putting the cart in front of the horse to talk of plane waves as free particles. – anna v May 19 '11 at 5:46

@anna I'm not sure I understand completely. If the free particle has a definite momentum, its wavefunction is a plane wave because that is the eigenfunction of the momentum operator. No plane waves and no exact momentum then go hand in hand, whichever one comes first. – Mark Eichenlaub May 19 '11 at 6:15

Yes, true. I object to the statement that the plane wave could represent a free particle ever. We choose the representations to fit reality, and should not a priori fit reality to the representations: because there exists a plane wave solution it must represent a particle. – anna v May 19 '11 at 6:25

@anna I think it's pretty standard to find a solution to an equation and then ask whether that solution is physically meaningful. That's all I was trying to do. – Mark Eichenlaub May 19 '11 at 6:28

Fine, I understand that. The statement in the book quoted by OP (btw what are the words from which OP is derived?) assumes that the plane wave describes a particle and concludes that a particle cannot be stationary (which our experimental data manifestly say is wrong), instead of concluding that "this wavefunction cannot describe a physical particle". – anna v May 19 '11 at 8:59

I would normally just add this as a comment, but I still can not comment.
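The normalizability argument in the accepted answer is easy to see numerically. A small sketch (my illustration, not from the thread) compares the integral of |ψ|² over growing intervals for a plane wave and for a Gaussian wave packet:

```python
import numpy as np

def norm_on_interval(psi, L, n=20001):
    # crude Riemann sum for the integral of |psi(x)|^2 over [-L, L]
    x = np.linspace(-L, L, n)
    return np.sum(np.abs(psi(x))**2) * (x[1] - x[0])

plane_wave = lambda x: np.exp(1j * 3.0 * x)          # definite momentum, k = 3
packet = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)  # a normalized Gaussian packet

for L in (10.0, 100.0, 1000.0):
    print(L, norm_on_interval(plane_wave, L), norm_on_interval(packet, L))

# The plane wave's integral grows like 2L without bound, so it cannot be
# normalized; the packet's integral converges to 1.
```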
I would say the author is just saying that because the arguments are effectively the same, and the modulus of each number is arbitrary, one can use the distributive law to simplify the expression. If one tries to normalize using the standard procedure, one gets the constant $$|A|^2$$ as the integrand. If one does not specify finite boundary conditions, one must find the integral of this constant value from $-\infty$ to $\infty$, which gives an infinite value, and thus the wavefunction is not normalizable.

Well here is a +1 so you can reach the 50 needed for comments. – anna v May 19 '11 at 10:57

cool! – Unassuminglymeek May 19 '11 at 23:54

I will answer this part of the question since it is important to distinguish what we are doing as physicists. As Mark says in his comment above, "it's pretty standard to find a solution to an equation and then ask whether that solution is physically meaningful." Which is fine. When the answer is "no", as in this case, one goes ahead to find physically meaningful solutions that will describe a free particle. There is ample experimental evidence that particles exist, from atomic physics to nuclear physics to particle physics. Solving Schroedinger's equation for a potential well not only gives well defined state functions, the solutions describe the behaviour of the particle quite well to excellently. One then solves the "no potential" equation and finds the plane waves. First thought: no potential means free particle. Second thought: are these plane wave functions well behaved so that they can be used to describe quantum mechanically a free particle? Third thought should be: No, they cannot be normalized to a probability function. Fourth thought: Could one use these solutions as a complete set to find a function that can be normalized and whose probability describes a physical free particle? Bingo: wave packets. The statement "there is no such thing as a free particle with a definite energy" is wrong for a physical particle. It should be: plane waves cannot describe free particles. Free particles (experiment) trump plane waves (theory). You were right to be confused.
share|cite|improve this answer There is also experimental evidence that the existence of particles is not a simple matter. Bell, Kochen-Specker, the usual suspects. At high energies we see events (macroscopic or mesoscopic thermodynamic transitions in meticulously tuned experimental apparatus) that are obviously linked together --having a common cause, one might say-- but at the lower energies of non Particle Physics, the statistics of events that are separated by relatively large distances are more effectively modeled by probabilities over field configurations [which, sorry, is overcomplicated relative to the Question]. – Peter Morgan May 19 '11 at 13:07 There seems to be a difference of opinion, unexpressed, as to what we mean when we say 'free particle'. It has to mean, at the very least, that it does not interact with any potential energy term. We can, then, consider it as part of a closed system in which there is no potential energy, so its Hamiltonian has no potential energy term. In fact, we usually mean to say that its Hamiltonian has the usual form from quantising a classical free Hamiltonian, and it is clear that that is what Mr. Eichenlaub, and some other contributors, meant. (The web interface here for some ungodly reason turns my backslash into I merely refer to the Hamiltonian written above.) Now no such particles exist in Nature for the trivial reason that there are, indeed potential energy terms all over the place, no particle is really isolated from the rest of the universe. So the question is about a hypothetical situation, and is a reasonable question. A free particle cannot be in an eigenstate of the Hamiltonian because the Hilbert space of states does not possess any such eigenstates: this is simply another way of saying what Mr. Eichenlaub wrote, and it cannot be really criticised. The next step is that if it had a definite energy it would be an eigenstate, so that is why it cannot have a definite energy. 
It is rather unreal to first criticise this line of reasoning by saying one must look at reality only, since really there are no free particles at all. But it is then positively inconsistent to go on and talk about experimentally observed free particles. Obviously there can be particles which are approximately free, but then, they could have a wave function that was approximately a plane wave: i.e., a narrow-band superposition in which their position would have a very large variance (but not infinite), and so the probabilities of finding it in any very small location would be practically zero (but not exactly zero). What would be the limits of this approximation? We would have to be justified in neglecting the potential energy, of which there are chiefly two kinds to worry about: forces exerted by other particles, and gravity. So if the particle were far away from all other particles, this might be justified. But the larger the variance of its position, the harder it is to arrange this... I know that the practicalities of the existence of a free particle were not what the questioner was asking about, but anyone who wishes to make reality trump the formulation of a simple, sensible question about the free Hamiltonian is obligated to come to grips with reality and analyse the limits of this kind of approximation. If my tentative analysis in the previous paragraph is correct, this kind of approximation is justified when we are dealing with a more or less monochromatic wave... The situation of a monochromatic wave is not what you might think. It is stationary, so nothing is moving. When regarded as a wave, it doesn't change its position, it is not moving that way. When regarded as a particle, it is not moving either. That is the whole point of being in a stationary state... and is the difference between phase velocity and particle velocity.

Quantum mechanics states that the wave function must be square integrable.
So, even though the plane wave is a solution of the stationary Schrödinger equation, it's not a physically acceptable wave function. Now wave packets, or linear combinations of plane waves, are physically acceptable solutions but are NOT solutions of the stationary, time-independent Schrödinger equation. So, your professor is right: no free particle can exist in a stationary state. Nevertheless, the time-dependent Schrödinger equation can describe the evolution of the wave packet starting from an initial wave packet.

It is the time-independent Schroedinger equation $$\frac {d^2 \psi} {d x^2} = - k^2 \psi$$ where $k^2 = \frac {2mE} {\hbar^2}$ that quantum mechanics uses to describe a free particle, not any other equation. Said equation has the following function as the solution: $$ \psi(x) = A e^{ikx} + Be^{-ikx}$$ which is a plane wave and not a wave packet. If the solution of the above equation were not a plane wave but a wave packet, then it should be clearly shown and plane waves should not be discussed at all. A theory, however, should have a predictive quality. Quantum mechanics predicts the state of a free particle to be described by a plane wave (because, as said, that is what the solution of the Schrodinger equation is) but that turns out to be non-physical (does not comply with experiment) for reasons already discussed. This is a major deficiency of quantum mechanics, along with a number of other deficiencies, which should be clearly spelled out and not sidetracked by offering constructs unrelated to Schrodinger's equation. Any comments?

"Quantum mechanics predicts the state of a free particle to be described by a plane wave" is simply wrong. Plane waves form a (note the indefinite article here) basis for the solution space, but that is very much a different thing from saying that QM describes particles as plane waves.
Wave packets (which are linear combinations of plane waves, and so are also in the solution space) are the physically realized solutions. – dmckee Jun 28 '13 at 17:50

-1. Please stop talking about deficiencyies in QM.! It is spam already. – centralcharge Jun 29 '13 at 13:03
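The general solution $\psi(x) = A e^{ikx} + B e^{-ikx}$ quoted in the answer above can be verified symbolically. A minimal sympy sketch (my addition): it confirms that the free time-independent equation is satisfied for any $A$ and $B$, which is exactly why normalizability, and not the equation itself, has to rule solutions out.

```python
import sympy as sp

x = sp.symbols('x', real=True)
k = sp.symbols('k', positive=True)
A, B = sp.symbols('A B')

# the general solution quoted in the answer
psi = A * sp.exp(sp.I * k * x) + B * sp.exp(-sp.I * k * x)

# the free time-independent Schroedinger equation: psi'' = -k^2 psi
residual = sp.simplify(sp.diff(psi, x, 2) + k**2 * psi)
print(residual)   # 0 for any A, B: the equation alone cannot single out
                  # the physically acceptable (normalizable) solutions
```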
I am trying to solve a Schrödinger equation for a particle hitting a step potential using NDSolve in Mathematica. Here is my code:

mu = 6.; m = mu; R = 5.;
Vs2 = 4./(2*m*R^2);
Vs = -10./(2*m*R^2) + Vs2;
Energy = 0.001;
VCC[r_] = Vs*UnitStep[R - r] + Vs2*UnitStep[r - R];
L = 0;
system = {RC''[r] + 2/r*RC'[r] + (-L*(L + 1)/r^2 - 2*mu*(VCC[r] - Energy))*RC[r] == 0,
   RC[0.001] == 1.0, RC'[0.001] == 0.0};
syssol = NDSolve[system, {RC[r]}, {r, 0.001, 1000.}, MaxSteps -> 10000000];
Plot[Evaluate[{RC[r]} /. syssol], {r, 0.001, 200.0}, PlotRange -> {-1.1, 1.1}]

There should be a decaying wave when the particle hits the potential step, but NDSolve gives an increasing result. I am sure there is some trick to fix this, so I am waiting for your help.

Before posting this question, you should first have followed the advice in the comments to your identical question on StackOverflow: stackoverflow.com/questions/9855619/… – Jens Mar 25 '12 at 18:38

What is your m? Also take a look at this (it's a bit different case but you can see how everything is set up): demonstrations.wolfram.com/ScatteringOverPotentialStep – Vitaliy Kaurov Mar 25 '12 at 18:39

By the way, this has exact solutions, so I'm guessing this could be a homework problem and hence not appropriate for this forum. – Jens Mar 25 '12 at 18:43

you forgot to define m here (this is to make it simpler for someone to cut and paste this code directly) – acl Mar 25 '12 at 18:43

and, the root of your problem is mathematical, not an error in programming. did you try to do it analytically, for instance? – acl Mar 25 '12 at 18:44

noeckel's answer on StackOverflow is spot on. This is not a Mathematica issue, this is a mathematical issue. Namely, Mathematica is giving you the correct solution to the system of differential equations and boundary conditions given.
The conditions given (and in particular the derivative imposed at the origin) are incompatible with the expected decay. Bear in mind that, at $r \geq 5$, your wavefunction will have two components of the form $\exp(\alpha r)$ and $\exp(-\alpha r)$. For each set of boundary conditions, you get a different linear combination of these two, and the only conditions that make sense are those for which the diverging term is zero.

+1. If memory serves me well, usual conditions for the radial wave function are the absence of the increasing exponent at infinity and finiteness at zero (or at least, convergence of the integral of Abs[f[r]]^2 over the volume around zero. The divergence of this integral would correspond to the classical fall-on-the-center scenario, for potentials ~ 1/r^a with a > 2 IIRC). This rules out fixing the derivative at zero, as you said, and fixing the w.f. value itself does not make sense either, since it is fixed by w.f. normalization (that is, unless I forgot everything, it's been a while :)) – Leonid Shifrin Mar 25 '12 at 19:24

sorry guys, m = mu. Also, I guess this is not exactly a mathematical issue because Mathematica does not realize bounded problems (f''(r)-k^2*f(r)==0 type problems), so you have to tune one of the parameters (energy, potential etc.) in order to get a decaying solution. I am able to do that up to r=50-60, but I would like to get it for farther distances. – serelha Mar 25 '12 at 23:11

By the way, I have an analytic solution for this type of problem, and my purpose is to compare my analytic result to the numerical calculation. This is not homework. – serelha Mar 25 '12 at 23:17
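The answer's point is easy to reproduce with any ODE integrator, not just NDSolve. Below is a hypothetical Python analogue of the Mathematica code above (my sketch, using scipy with the same parameter values): since the energy lies below the step height for r > R, generic initial data excite the growing exponential branch and the solution blows up instead of decaying.

```python
import numpy as np
from scipy.integrate import solve_ivp

# same parameter values as the Mathematica code above
mu = 6.0
R = 5.0
Vs2 = 4.0 / (2 * mu * R**2)
Vs = -10.0 / (2 * mu * R**2) + Vs2
E = 0.001

def V(r):
    return Vs if r < R else Vs2

def rhs(r, y):
    # radial equation with L = 0:  RC'' = -(2/r) RC' + 2 mu (V - E) RC
    RC, dRC = y
    return [dRC, -(2.0 / r) * dRC + 2 * mu * (V(r) - E) * RC]

sol = solve_ivp(rhs, (0.001, 60.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

# For r > R we have E < Vs2, so the two independent solutions behave like
# exp(+a r)/r and exp(-a r)/r.  Generic boundary conditions at the origin
# excite the growing branch, so |RC| increases instead of decaying:
print(abs(sol.sol(20.0)[0]), abs(sol.sol(60.0)[0]))
```

Eliminating the growing branch requires either tuning the energy (a shooting method for bound states) or integrating inward from large r, which is the standard cure for this kind of one-sided instability.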
I was watching Allan Adams' lecture on energy eigenfunctions, and there's one part (around 43 minutes into the lecture) that confuses me. Suppose we have the initial wave function $\Psi (x,0)$ such that $\hat{E}\,\Psi (x,0)=E \,\Psi (x,0)$ for some constant $E$. Then, plugging this into the Schrödinger equation, we'd get: \begin{align} i \hbar \frac{\partial}{\partial t} \Psi (x, 0) &= E \, \Psi (x,0) \\ \frac{\partial}{\partial t} \Psi (x, 0) &= \frac{E}{i \hbar} \, \Psi (x,0) \tag{1}\\ \therefore \Psi (x, t) &= \exp\left({-i \frac{E\,t}{\hbar}}\right) \Psi(x,0) \tag{2} \end{align} I'm a bit confused about how to go from $(1)$ to $(2)$. Now if we make the additional assumption that $\hat{E}\,\Psi (x,t)=E \,\Psi (x,t)$ for all $t$, then the Schrödinger equation becomes: \begin{align} \frac{\partial}{\partial t} \Psi (x, t) &= \frac{E}{i \hbar} \, \Psi (x,t) \end{align} and I can solve this differential equation easily and get $(2)$. But from watching that part of the lecture, it seems we only need to assume a weaker statement: that the initial wave function is an energy eigenfunction. But then it's not clear to me how I can get the solution $(2)$ from $(1)$. Am I missing something? Update: Thanks for all the answers. After reading through the accompanying lecture notes, we indeed need to assume that the energy operator is constant over time.

I'm sure this is a duplicate, but after some searching I can't find previous instances. Anyhow, you assume $\Psi(x,t) = \psi(x)\phi(t)$, i.e. the wavefunction is a product of space-dependent and time-dependent functions.
Feed this back into the Schrödinger equation, use the fact that $\psi(x) = \Psi(x,0)$ is an eigenfunction, and you get the equation for $\phi(t)$ – John Rennie Jul 1 '14 at 5:20

By the simple form of the equation (1) you wrote down, Allan really meant $$ \frac{\partial}{\partial t} \Psi(x,t)|_{t=0} = \frac{E}{i\hbar}\Psi(x,0) $$ He just used the notation where $t=0$ is substituted from the beginning, but he clearly did mean that $\Psi(x)$ is first considered as a general function of $t$, then differentiated, and then we substitute $t=0$. This equation says that the time derivative of $\Psi(x,t)$ at $t=0$ is proportional to the same wave function. By itself, it does not imply that $\Psi(x,t)$ for an arbitrary later $t$ will be given by equation (2): if we only constrain the derivative at one moment $t=0$, the wave function may do whatever it wants at later (or earlier) moments $t$. However, we may generalize (1) to any moment $t$, which is what you wrote down $$ \frac{\partial}{\partial t} \Psi(x,t) = \frac{E}{i\hbar}\Psi(x,t) $$ and this equation does imply (2). If the $t$-derivative of $\Psi(x,t)$ is proportional to the same $\Psi(x,t)$, then $\Psi(x,t)$ and $\Psi(x,t')$ are proportional to each other for each $t,t'$. That implies that $\Psi(x,t)$ must factorize as $\Psi(x)f(t)$, and the function $f(t)$ is the simple complex exponential that solves the equation with the right coefficient. Because you are more or less writing the same things, I find it plausible that you are not missing anything.

Going from your (1) to your (2) is bogus, as you suspect. Your (1) isn't even really a differential equation: the independent variable $t$ does not appear. (1) is just using the initial condition to find what the time derivative is at $t = 0$. Indeed, to directly conclude that the solution is (2), we would need your stronger assumption.
One correct way to arrive at (2) is to assume that the solution takes the form $\Psi(x,t) = f(t) \Psi(x,0)$, which will give you $f(t) = \exp(Et/i\hbar)$. This clearly matches the initial condition. You could also just make the stronger assumption. This amounts to making an educated guess about the solution to the differential equation. This is a perfectly valid way of solving differential equations, as long as you can match the initial conditions.

If this isn't at a higher level than the original lectures (which I haven't seen, actually), the correct way to go from $\psi (x,0)$ to $\psi (x,t)$ in quantum mechanics is to employ the time evolution operator $\exp(-i{\hat H}t/\hbar)$, where ${\hat H}$ is the Hamiltonian, on $\psi (x,0)$, i.e. $$\psi(x,t) = {\hat T} \left(\psi (x,0)\right) = \exp(-i{\hat H}t/\hbar) \ \psi (x,0)$$ Now, the exponential can be expanded into the series $\sum_n (-i{\hat H}t/\hbar)^n/n!$, i.e. an infinite series in powers of ${\hat H}$. That can be simplified by using your original time-independent Schrödinger equation, $${\hat H} \ \psi (x,0) = E \ \psi (x,0),$$ for each term. This would lead you to another infinite series (exponential), but in terms of $E$ rather than ${\hat H}$ this time around. On simplifying, you would reach the desired equation $$\psi (x,t) = \exp(-iEt/\hbar) \ \psi (x,0)$$
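The phase-only evolution of an energy eigenstate described in these answers can be checked numerically. A minimal sketch (Python/NumPy; the 2×2 Hamiltonian is an arbitrary illustrative choice, not a physical system, and ħ is set to 1): build U(t) = exp(-iHt/ħ) from the eigendecomposition of a Hermitian H, apply it to an eigenvector, and confirm the state only picks up the phase exp(-iEt/ħ).

```python
import numpy as np

hbar = 1.0
t = 0.7  # arbitrary evolution time

# An arbitrary real-symmetric (hence Hermitian) "Hamiltonian"
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Eigendecomposition: H = V diag(E) V^dagger
E, V = np.linalg.eigh(H)

# Time-evolution operator U(t) = V diag(exp(-i E_k t / hbar)) V^dagger
U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

# Evolve an energy eigenvector...
psi0 = V[:, 0]
psi_t = U @ psi0

# ...and it differs from psi0 only by the overall phase exp(-i E_0 t / hbar)
phase = np.exp(-1j * E[0] * t / hbar)
print(np.allclose(psi_t, phase * psi0))  # True
```

A generic superposition of eigenstates would not factor this way: each component picks up a different phase, which is exactly why the stronger assumption (eigenstate at all times, not just at t = 0) is needed.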
Erwin Schrödinger

Born: Erwin Rudolf Josef Alexander Schrödinger, August 12, 1887, Erdberg, Vienna, Austria-Hungary
Died: January 4, 1961 (aged 73), Vienna, Austria
Residence: Austria, Ireland
Nationality: Austrian, Irish
Field: Physics
Institutions: University of Breslau (Wrocław), University of Zurich, University of Berlin, University of Oxford, University of Graz, Dublin Institute for Advanced Studies
Alma mater: University of Vienna
Academic advisor: Friedrich Hasenöhrl
Known for: Schrödinger's equation
Notable prizes: Nobel Prize in Physics (1933)

[Photographs: Schrödinger in 1933, when he was awarded the Nobel Prize in Physics; Erwin Schrödinger's gravesite.]

Erwin Rudolf Josef Alexander Schrödinger (August 12, 1887 – January 4, 1961) was an Austrian-Irish physicist who achieved fame for his contributions to quantum mechanics, especially the Schrödinger equation, for which he received the Nobel Prize in 1933.
In 1935, after extensive correspondence with his personal friend Albert Einstein, he proposed the Schrödinger's cat thought experiment, which magnifies the Copenhagen interpretation of quantum experiments to the patent absurdity of a cat that is simultaneously alive and dead, in a superposition of states. In 1887, Schrödinger was born in Erdberg, Vienna, to Rudolf Schrödinger (cerecloth producer, botanist) and Georgine Emilia Brenda (daughter of Alexander Bauer, Professor of Chemistry, k.u.k. Technische Hochschule Vienna). His father was a Catholic and his mother was a Lutheran. In 1898, he attended the Akademisches Gymnasium. Between 1906 and 1910, Schrödinger studied in Vienna under Franz Serafin Exner (1849-1926) and Friedrich Hasenöhrl (1874-1915). He also conducted experimental work with Friedrich Kohlrausch. In 1911, Schrödinger became an assistant to Exner.

Middle years

In 1914, Erwin Schrödinger achieved Habilitation (venia legendi). Between 1914 and 1918, he participated in war work as a commissioned officer in the Austrian fortress artillery (Gorizia, Duino, Sistiana, Prosecco, Vienna). On April 6, 1920, Schrödinger married Annemarie Bertel. The same year, he became the assistant to Max Wien, in Jena, and in September 1920, he attained the position of ao. Prof. (Ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (U.S.), in Stuttgart. In 1921, he became O. Prof. (Ordentlicher Professor, that is, full professor), in Breslau (now Wrocław, Poland). In 1918, he had made up his mind to abandon physics for philosophy, but the city in which he had hoped to obtain a post was ceded by Austria in the peace treaties ending World War I. Schrödinger, therefore, remained a physicist. In 1922, he moved to the University of Zürich. In January 1926, Schrödinger published in the Annalen der Physik the paper "Quantisierung als Eigenwertproblem" (tr.
"Quantisation as an Eigenvalue Problem") on wave mechanics and what is now known as the Schrödinger equation. In this paper, he gave a "derivation" of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for the hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century, and it created a revolution in quantum mechanics, and indeed in all of physics and chemistry. A second paper, submitted just four weeks later, solved the quantum harmonic oscillator, the rigid rotor, and the diatomic molecule, and gave a new derivation of the Schrödinger equation. A third paper, in May, showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this most remarkable series showed how to treat problems in which the system changes with time, as in scattering problems. These papers were the central achievement of his career and were at once recognized as having great significance by the physics community. In 1927, he joined Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, however, Schrödinger decided to leave Germany; he disliked the Nazis' antisemitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize together with Paul Adrien Maurice Dirac. His position at Oxford did not work out; his unconventional personal life (Schrödinger lived with two women) was not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have posed a problem. He had the prospect of a position at the University of Edinburgh, but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936.
Later years

In 1938, after Hitler occupied Austria, Schrödinger had problems because of his flight from Germany in 1933 and his known opposition to Nazism. He issued a statement recanting this opposition. (He later regretted doing so, and he personally apologized to Einstein.) However, this did not fully appease the new dispensation, and the university dismissed him from his job for political unreliability. He suffered harassment and received instructions not to leave the country, but he and his wife fled to Italy. From there he went to visiting positions at Oxford and Ghent universities. In 1940, he received an invitation to help establish an Institute for Advanced Studies in Dublin, Ireland. He became the Director of the School for Theoretical Physics and remained there for 17 years, during which time he became a naturalized Irish citizen. He wrote about 50 further publications on various topics, including his explorations of unified field theory. In 1944, he wrote What is Life?, which contains a discussion of negentropy and the concept of a complex molecule with the genetic code for living organisms. According to James D. Watson's memoir, DNA: The Secret of Life, Schrödinger's book gave Watson the inspiration to research the gene, which led to the discovery of the DNA double helix structure. Similarly, Francis Crick, in his autobiographical book What Mad Pursuit, described how he was influenced by Schrödinger's speculations about how genetic information might be stored in molecules. Schrödinger stayed in Dublin until retiring in 1955. During this time, he remained committed to his particular passion. Scandalous involvements with students occurred, and he fathered two children by two different Irish women. In 1956, he returned to Vienna (chair ad personam). At an important lecture during the World Energy Conference, he refused to speak on nuclear energy because of his skepticism about it, and gave a philosophical lecture instead.
During this period, Schrödinger turned from mainstream quantum mechanics' definition of wave-particle duality and promoted the wave idea alone, causing much controversy.

Personal life

Schrödinger decided, in 1933, that he could not live in a country in which persecution of Jews had become a national policy. Frederick Alexander Lindemann, the head of physics at Oxford University, visited Germany in the spring of 1933, to try to arrange positions in England for some young Jewish scientists from Germany. He spoke to Schrödinger about posts for one of his assistants and was surprised to discover that Schrödinger himself was interested in leaving Germany. Schrödinger asked for a colleague, Arthur March, to be offered a post as his assistant. The request for March stemmed from Schrödinger's unconventional relationships with women. His relations with his wife had never been good and he had had many lovers with his wife's knowledge. Anny had her own lover for many years, Schrödinger's friend Hermann Weyl. Schrödinger asked for March to be his assistant because, at that time, he was in love with March's wife, Hilde. Many of the scientists who had left Germany spent the summer of 1933 in Alto Adige/Südtirol. Here, Hilde became pregnant with Schrödinger's child. On November 4, 1933, Schrödinger, his wife, and Hilde March arrived in Oxford. Schrödinger had been elected a fellow of Magdalen College. Soon after they arrived in Oxford, Schrödinger heard that, for his work on wave mechanics, he had been awarded the Nobel prize. In the spring of 1934, Schrödinger was invited to lecture at Princeton University and while there he was made an offer of a permanent position. On his return to Oxford, he negotiated about salary and pension conditions at Princeton but in the end he did not accept. It is thought that the fact that he wished to live at Princeton with Anny and Hilde both sharing the upbringing of his child was not found acceptable.
The fact that Schrödinger openly had two wives, even if one of them was married to another man, was not well received in Oxford either. Nevertheless, his daughter, Ruth Georgie Erica, was born there on May 30, 1934.[1] On January 4, 1961, Schrödinger died in Vienna of tuberculosis at the age of 73. He left a widow, Anny (born Annamaria Bertel on Dec. 3, 1896, died Oct. 3, 1965), and was buried in Alpbach (Austria).

Color science

• "Theorie der Pigmente von größter Leuchtkraft," Annalen der Physik, (4), 62, (1920), 603-622 (Theory of pigments of greatest luminosity)
• "Grundlinien einer Theorie der Farbenmetrik im Tagessehen," Annalen der Physik, (4), 63, (1920), 397-426; 427-456; 481-520 (Outline of a theory of color measurement for daylight vision)
• "Farbenmetrik," Zeitschrift für Physik, 1, (1920), 459-466 (Color measurement)

Legacy: The equation

In the quantum view, an atom is a quantum probability wave with the particles jittering about in it over time. Schrödinger's enduring contribution to the development of science was to describe the atom in terms of the wave and particle aspects of the electron established by the pioneers of quantum mechanics. His insight, while derived by considering the electron, applies equally to the quarks and all other particles discovered after his time. In particle physics, the electron, like all fundamental particles, is a unified entity of wave and particle (wave-particle duality). The wavefunction tells the particle what to do over time, while the interactions of the particle tell the wave how to develop and resonate. The wave aspect of the atomic electron is atom-sized, while the particle aspect is point-like even at scales thousands of times smaller than the proton. The electron jitters so rapidly about in the wave that, over even fractions of a second, it behaves as a 'solid' cloud with the shape of the wave.
Applying the classical wave equation to the electron, Schrödinger derived an equation—which now bears his name—that gave the shape of the wave and thus the shapes of the atoms. The wavefunction, unlike classical waves that are measured with real numbers, is measured with complex numbers involving the square root of minus one. In English, his equation states that the negative rate of change in the value of the wavefunction at a point at a distance from the tiny central nucleus equals the product of

1. the value of the wavefunction at that point,
2. the difference in total and potential energy at that point (the local kinetic energy), and
3. the inertia of the particle—its rest mass-energy.

This requirement determines the shape of the wavefunction, or orbital, and hence the shape of the atom. In math symbols, this is written:

−(ħ²/2m) ∇²ψ = (E − V) ψ

where the Greek letter psi, ψ, is the complex value of the wavefunction at a distance from the nucleus, m is the mass, E is the total energy, V is the potential energy at that distance from the nucleus, and the h-bar squared—Planck's constant divided by 2π, squared—simply converts the mass and energy measured in human units—such as grams and ergs—into natural units. The electron density over time at any point equals the absolute square of the complex value of the wavefunction, |ψ|², which is always a real number. While this equation, remarkably enough, explains everything about the nature of atoms, the current state of math is such that it can be solved exactly only for the simplest atom, hydrogen (and other one-electron ions). Perturbation theory is used to generate approximate—and quite accurate—solutions for the more complex atoms.
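The claim that the equation yields the energies and shapes of atoms can be illustrated numerically. The sketch below (Python/NumPy, in atomic units ħ = m = e = 1; the grid size and cutoff radius are arbitrary choices) discretizes the hydrogen radial equation for s-states, −½u'' − u/r = Eu with u = rψ, on a uniform grid and diagonalizes the resulting matrix. The lowest eigenvalue should come out near the exact ground-state energy of −0.5 hartree (−13.6 eV).

```python
import numpy as np

# Radial grid (atomic units); u(r) = r * psi(r), with u(0) = u(r_max) = 0
n, r_max = 1500, 30.0
h = r_max / (n + 1)
r = h * np.arange(1, n + 1)

# Finite-difference Hamiltonian for l = 0:  -1/2 u'' - u/r = E u
main = 1.0 / h**2 - 1.0 / r          # diagonal: kinetic term + Coulomb potential
off = -0.5 / h**2 * np.ones(n - 1)   # off-diagonal: kinetic coupling between nodes
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(E[0], E[1])  # ≈ -0.5 and ≈ -0.125 hartree (exact: -1/2 and -1/8)
```

The corresponding eigenvectors are the 1s and 2s orbitals, which is the sense in which solving the equation gives "the shape of the atom."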
Max Born, one of the quantum founding fathers, stated his opinion, in 1926, that: "The Schrödinger Equation enjoys in modern physics the same place as in classical physics do the equations derived by Newton, Lagrange, and Hamilton."[2] Born's opinion of the legacy and fame was correct; the Schrödinger Equation is the very foundation of all atomic physics and chemistry.

Books by Erwin Schrödinger

• Schrödinger, Erwin. 1996. Nature and the Greeks, and Science and Humanism. Cambridge: Cambridge University Press. ISBN 0521575508
• Schrödinger, Erwin. 1995. The Interpretation of Quantum Mechanics. Woodbridge, CT: Ox Bow Press. ISBN 1881987094
• Schrödinger, Erwin. 1989. Statistical Thermodynamics. New York: Dover Publications. ISBN 0486661016
• Schrödinger, Erwin. 1984. Collected Papers. Braunschweig, DE: Friedr. Vieweg & Sohn. ISBN 3700105738
• Schrödinger, Erwin. 1983. My View of the World. Woodbridge, CT: Ox Bow Press. ISBN 0918024307
• Schrödinger, Erwin. 1956. Expanding Universes. Cambridge: Cambridge University Press.
• Schrödinger, Erwin. 1950. Space-Time Structure. Cambridge: Cambridge University Press. ISBN 0521315204
• Schrödinger, Erwin. 1946. What is Life? New York: Macmillan.

1. Walter John Moore, Schrödinger: Life and Thought (Cambridge University Press, 1992). ISBN 0-521-43767-9
2. L.I. Ponomarev, The Quantum Dice (Moscow: Mir Publishers, 1988). ISBN 5030002162

• Asimov, Isaac. 1964. Asimov's Biographical Encyclopedia of Science and Technology: The Living Stories of More than 1000 Great Scientists from the Age of Greece to the Space Age. Garden City, N.Y.: Doubleday. ISBN 20144414
• Cropper, William H. 2001. Great Physicists: The Life and Times of Leading Physicists from Galileo to Hawking. Oxford: Oxford University Press. ISBN 0195137485
• Grosse, Harald, and André Martin. 2005. Particle Physics and the Schrödinger Equation. Cambridge: Cambridge University Press. ISBN 0521017785
• Moore, Walter John. 1992.
Schrödinger: Life and Thought. Cambridge: Cambridge University Press. ISBN 0521437679
The least physics you need is a lot in 'Quantum Mechanics'
Leonard Susskind and Art Friedman offer the theoretical minimum you need to know to start doing physics
10:05am, June 1, 2014

Quantum Mechanics: The Theoretical Minimum
Leonard Susskind and Art Friedman
Basic Books, $26.99

If you're ever banished to a desert island and allowed to take just one book, here it is. Given enough time, with no distractions, you could use it to eventually master quantum mechanics. Susskind's latest book, this one with coauthor Friedman, is the second in a series on the "theoretical minimum" you'd need to know to actually be able to do physics. The first focused on the basics of classical physics. This one takes you deep into the weird realm of quantum theory. There's no popularization here. Just clear, straightforward exposition of basic quantum principles and their mystifying implications. Susskind starts with qubits, the quantum units of information typically associated with the spins of subatomic particles. How such spins are measured, and the mathematical apparatus needed to account for the results of those measurements, form the core of the quantum theoretical toolkit. Eventually you'll encounter quantum entanglement, the puzzling connection between some quantum particles that so disturbed Einstein. Further on comes the Heisenberg Uncertainty Principle, which Susskind is able to explain thanks to the preceding foundation, rather than just stating it, as most popular books do. Ultimately the Schrödinger equation itself emerges, the key mathematical tool for the bulk of quantum research. By then, you'd be ready to attempt quantum mechanical calculations at home, if you weren't trapped on that island. But don't think it will be easy.
Without a solid background in relatively high-level math, it's a formidable challenge to master all the symbolic notation and complex concepts like vector spaces and tensor products. Just reading this book won't produce quantum competence. You would have to work through it thoroughly, several times, and you'd probably need to reread the first volume, on classical physics, as well. Still, even without mastering all the calculational complexities, a careful read at the very least offers deeper insight into the logic and mathematical substance of quantum physics than you'll get from any popular account. It may still be true, as Feynman said, that "nobody understands quantum mechanics." But by carefully studying Susskind's presentation of it, you might at least be able to come close.
Wave–particle duality

Wave–particle duality is the fact that every elementary particle or quantic entity exhibits the properties of not only particles, but also waves. It addresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do".[1] Various opinions have arisen about this. Initiated by Louis de Broglie, before the discovery of quantum mechanics, and developed later as the de Broglie-Bohm theory, the pilot wave interpretation does not regard the duality as paradoxical, seeing both particle and wave aspects as always coexisting. According to Schrödinger the domain of the de Broglie waves is ordinary physical space-time.[2] This formal feature exhibits the pilot wave theory as non-local, which is considered by many physicists to be a grave defect in a theory.[3] Still in the days of the old quantum theory, another pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane,[4] and developed by others including Alfred Landé.[5] Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was due to quantal translative momentum transfer from the spatially regular structure of the diffracting crystal.[6] Fourier analysis reveals the wave–particle duality as a simple mathematical equivalence, always present and universal for all quanta. The same reasoning applies, for example, to diffraction of electrons by a crystal.
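Electron diffraction by a crystal works because the de Broglie wavelength λ = h/p of a laboratory electron is comparable to atomic lattice spacings, while for macroscopic objects λ is immeasurably small. A quick sketch (Python; the chosen masses and speeds are illustrative values, not taken from any particular experiment):

```python
# de Broglie wavelength: lambda = h / p = h / (m v)
h = 6.626e-34  # Planck's constant, J*s

def de_broglie(mass_kg, speed_m_s):
    """Wavelength in meters for a particle of given mass and speed."""
    return h / (mass_kg * speed_m_s)

# An electron at a modest 10^6 m/s: wavelength ~ 7e-10 m,
# comparable to atomic spacings, so crystal diffraction is observable.
electron = de_broglie(9.109e-31, 1.0e6)

# A 1 kg ball at 1 m/s: wavelength ~ 7e-34 m, some twenty orders of
# magnitude smaller than a nucleus; wave behavior is undetectable.
ball = de_broglie(1.0, 1.0)

print(electron, ball)
```

The twenty-four orders of magnitude between the two wavelengths is the quantitative content of the statement, made later in this article, that macroscopic particles show no detectable wave properties.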
In the light of de Broglie's ideas, Erwin Schrödinger developed his wave mechanics by referring the universal wave aspect not to ordinary physical space-time, but rather to a profoundly different and more abstract 'space'. The domain of Schrödinger's wave function is configuration space.[2] Ordinary physical space-time allows more or less direct visualization of cause and effect relations. In contrast, configuration space does not directly display cause and effect linkages. Sometimes, nevertheless, it seemed as if Schrödinger visualized his own waves as referring to ordinary space-time, and there was much debate about this.[7] Niels Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity.[8] Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.[9] Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields which exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.[10][11] Because of the difference of views of Bohr and Heisenberg, the main sources of the so-called Copenhagen interpretation, the position of that interpretation on wave–particle duality is ill-defined.
In a modern perspective, wave functions arise naturally in relativistic quantum field theory in the formulation of free quantum fields. They are necessary for the Lorentz invariance of the theory. Their form and the equations of motion they obey are dictated by the representation of the Lorentz group under which they transform.[12]

Origin of theory

The idea of duality originated in a debate over the nature of light and matter that dates back to the 17th century, when Christiaan Huygens and Isaac Newton proposed competing theories of light: light was thought either to consist of waves (Huygens) or of particles (Newton). Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[13] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[14]

Brief history of wave and particle viewpoints

Aristotle was one of the first to publicly hypothesize about the nature of light, proposing that light is a disturbance in the element aether (that is, it is a wave-like phenomenon). On the other hand, Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[15] At the beginning of the 11th century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics, describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light.
In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and, subsequently supported by Thomas Young's 1803 discovery of double-slit interference, was the beginning of the end for the particle light camp.[16][17]

[Figure: Thomas Young's sketch of two-slit diffraction of waves, 1803]

The final blow against corpuscular theory came when James Clerk Maxwell discovered that he could combine four simple equations, which had been previously discovered, along with a slight modification to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.
While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. In 1789, Antoine Lavoisier securely differentiated chemistry from alchemy by introducing rigor and precision into his laboratory techniques, allowing him to deduce the conservation of mass and categorize many new chemical elements and compounds. However, the nature of these essential chemical elements remained unknown. In 1799, Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to resurrect Democritus' atom in 1803, when he proposed that elements were indivisible subcomponents; which explained why the varying oxides of metals (e.g. stannous oxide and cassiterite, SnO and SnO2 respectively) possess a 1:2 ratio of oxygen to one another. But Dalton and other chemists of the time had not considered that some elements occur in monatomic form (like helium) and others in diatomic form (like hydrogen), or that water was H2O, not the simpler and more intuitive HO—thus the atomic weights presented at the time were varied and often incorrect. Additionally, the formation of H2O by two parts of hydrogen gas and one part of oxygen gas would require an atom of oxygen to split in half (or two half-atoms of hydrogen to come together). This problem was solved by Amedeo Avogadro, who studied the reacting volumes of gases as they formed liquids and solids. By postulating that equal volumes of elemental gas contain an equal number of atoms, he was able to show that H2O was formed from two parts H2 and one part O2. By discovering diatomic gases, Avogadro completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner.
The final stroke in classical atomic theory came when Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry. But there were holes in Mendeleev's table, with no known element to fill them. His critics initially cited this as a fatal flaw, but were silenced when new elements were discovered that perfectly fit into these holes. The success of the periodic table effectively converted any remaining opposition to atomic theory; even though no single atom had ever been observed in the laboratory, chemistry was now an atomic science.

Animation showing the wave–particle duality with a double-slit experiment and the effect of an observer. Particle impacts make visible the interference pattern of waves. A quantum particle is represented by a wave packet. Interference of a quantum particle with itself.

Turn of the 20th century and the paradigm shift

Particles of electricity

At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself, determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum.
This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself), an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete.

Radiation quantization

Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. This worked well when describing thermal objects, whose vibrational modes were defined as the speeds of their constituent atoms, and the speed distribution derived from egalitarian partitioning of these vibrational modes closely matched experimental results. Speeds much higher than the average speed were suppressed by the fact that kinetic energy is quadratic—doubling the speed requires four times the energy—thus the number of atoms occupying high energy modes (high speeds) quickly drops off because the constant, equal partition can excite successively fewer atoms. Low speed modes would ostensibly dominate the distribution, since low speed modes would require ever less energy, and prima facie a zero-speed mode would require zero energy and its energy partition would contain an infinite number of atoms.
But this would only occur in the absence of atomic interaction; when collisions are allowed, the low speed modes are immediately suppressed by jostling from the higher energy atoms, exciting them to higher energy modes. An equilibrium is swiftly reached where most atoms occupy a speed proportional to the temperature of the object (thus defining temperature as the average kinetic energy of the object). But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. It had been long known that thermal objects emit light. Hot metal glows red, and upon further heating, white (this is the underlying principle of the incandescent bulb). Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was trivial to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose when determining the vibrational modes of light. To simplify the problem (by limiting the vibrational modes) a longest allowable wavelength was defined by placing the thermal object in a cavity. Any electromagnetic mode at equilibrium (i.e. any standing wave) could only exist if it used the walls of the cavity as nodes. Thus there were no waves/modes with a wavelength larger than twice the length (L) of the cavity.

Standing waves in a cavity

The first few allowable modes would therefore have wavelengths of 2L, L, 2L/3, L/2, etc. (each successive wavelength adding one node to the wave). However, while the wavelength could never exceed 2L, there was no such limit on decreasing the wavelength, and adding nodes to reduce the wavelength could proceed ad infinitum.
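The mode-counting argument above can be made concrete with a short sketch; the cavity length and the number of modes below are arbitrary example values:

```python
# Allowed standing-wave wavelengths in a one-dimensional cavity of length L:
# each mode must have nodes at both walls, so lambda_n = 2L / n.
def cavity_wavelengths(L, n_modes):
    """Return the first n_modes allowed wavelengths, longest first."""
    return [2.0 * L / n for n in range(1, n_modes + 1)]

# For L = 1 the sequence begins 2L, L, 2L/3, L/2, ...
print(cavity_wavelengths(1.0, 4))
```

Because n may grow without bound, arbitrarily short wavelengths remain allowed, which is exactly what made equipartition fail here.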
Suddenly it became apparent that the short wavelength modes completely dominated the distribution, since ever shorter wavelength modes could be crammed into the cavity. If each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe. The solution arrived in 1900 when Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency. The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. 
These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than hν. However, once realizing that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.

Photoelectric effect illuminated

While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black-body radiation would yield a fully continuous and 'wave-like' electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since their discovery eight years previously, electrons had been the thing to study in physics laboratories worldwide. In 1902 Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low-energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy; there are merely more of them. The more light there is, the more electrons are ejected.
Whereas in order to get high-energy electrons, one must illuminate the metal with high-frequency light. Like black-body radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[18] If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum hν, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency; the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions. While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, of which a modern experiment can be performed in undergraduate-level labs.[19] This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect).
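The linear relation Millikan confirmed can be sketched directly; the work function below is an illustrative assumed value, not tied to any particular metal:

```python
# Kinetic energy of ejected photoelectrons versus light frequency: the plot is
# a straight line whose gradient is Planck's constant. The work function phi
# is an illustrative assumed value, not a measured one.
h = 6.626e-34    # Planck constant, J s
phi = 3.2e-19    # assumed work function, J (roughly 2 eV)

def max_kinetic_energy(f):
    """Maximum photoelectron energy at frequency f (Hz), valid above threshold phi/h."""
    return h * f - phi

f1, f2 = 1.0e15, 1.5e15   # two example frequencies above the threshold
slope = (max_kinetic_energy(f2) - max_kinetic_energy(f1)) / (f2 - f1)
print(slope)  # recovers h, the gradient Millikan measured
```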
When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Developmental milestones

Huygens and Newton

The earliest comprehensive theory of light was advanced by Christiaan Huygens, who proposed a wave theory of light, and in particular demonstrated how waves might interfere to form a wavefront, propagating in a straight line. However, the theory had difficulties in other matters, and was soon overshadowed by Isaac Newton's corpuscular theory of light. That is, Newton proposed that light consisted of small particles, with which he could easily explain the phenomenon of reflection. With considerably more difficulty, he could also explain refraction through a lens, and the splitting of sunlight into a rainbow by a prism. Newton's particle viewpoint went essentially unchallenged for over a century.[20]

Young, Fresnel, and Maxwell

In the early 19th century, the double-slit experiments by Young and Fresnel provided evidence for Huygens' wave theories. The double-slit experiments showed that when light is sent through a grid, a characteristic interference pattern is observed, very similar to the pattern resulting from the interference of water waves; the wavelength of light can be computed from such patterns.
The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.[21] In the late 19th century, James Clerk Maxwell explained light as the propagation of electromagnetic waves according to the Maxwell equations. These equations were verified by experiment by Heinrich Hertz in 1887, and the wave theory became widely accepted.

Planck's formula for black-body radiation

Main article: Planck's law

In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. It was Einstein who later proposed that it is the electromagnetic radiation itself that is quantized, and not the energy of radiating atoms.

Einstein's explanation of the photoelectric effect

Main article: Photoelectric effect

In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities. In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all.
According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so. Einstein explained this conundrum by postulating that the electrons can receive energy from an electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E that was related to the frequency f of the light by

E = h f\,

where h is Planck's constant (6.626 × 10−34 J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. More intense light above the threshold frequency could release more electrons, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[22] Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

De Broglie's wavelength

Main article: Matter wave

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa—the result is an alternating amplitude: a wave. Top: Plane wave. Bottom: Wave packet.
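The potassium example above can be checked numerically. The work function used here (~2.29 eV) is an assumed literature value, and the blue/red wavelengths are representative choices:

```python
# Photon energy E = h*f = h*c/lambda compared with a threshold (work function).
# The potassium work function (~2.29 eV) is an assumed literature value, and
# the blue/red wavelengths are representative choices.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Energy of a single photon, in electronvolts."""
    return h * c / wavelength_m / eV

work_function = 2.29  # eV, assumed value for potassium

blue = photon_energy_eV(450e-9)  # ~2.76 eV: above threshold, ejects electrons
red = photon_energy_eV(700e-9)   # ~1.77 eV: below threshold, ejects none
print(blue > work_function, red > work_function)  # True False
```

No matter how many red photons arrive (intensity), each one individually lacks the energy to free an electron, matching Lenard's observation.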
In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[23][24] not just light, has a wave-like nature; he related wavelength (denoted as λ) and momentum (denoted as p):

\lambda = \frac{h}{p}

This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = \tfrac{E}{c} and the wavelength (in a vacuum) by λ = \tfrac{c}{f}, where c is the speed of light in vacuum. De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid. De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:

\Delta x \, \Delta p \ge \frac{\hbar}{2}

where \Delta here indicates standard deviation, a measure of spread or uncertainty; x and p are a particle's position and linear momentum respectively, and \hbar is the reduced Planck's constant (Planck's constant divided by 2\pi). Heisenberg originally explained this as a consequence of the process of measuring: measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. It is now thought, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.
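As a numerical aside, de Broglie's formula fixes the length scale of the electron-diffraction experiments mentioned above; a sketch for an electron accelerated through a modest voltage (non-relativistic, rounded constants):

```python
import math

# de Broglie wavelength lambda = h / p for an electron accelerated from rest
# through a potential difference V (non-relativistic; rounded constants).
h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C

def electron_wavelength(volts):
    """Wavelength in metres; momentum p = sqrt(2 * m_e * e * V)."""
    return h / math.sqrt(2.0 * m_e * e * volts)

# At ~100 V the wavelength is ~0.12 nm, comparable to the atomic spacing in a
# crystal -- which is why electron beams diffract from crystalline grids.
print(electron_wavelength(100.0))  # ~1.23e-10 m
```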
In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (which corresponds to the inverse of wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.

de Broglie–Bohm theory

Couder experiments,[25] "materializing" the pilot wave model.

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it. In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[26] the wave-particle duality vanishes: the apparent wave behaviour arises because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored",[27] J. S. Bell.
The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[28] demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.[25]

Wave behavior of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[29] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves. A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer.[30] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum-mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before. In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[31] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength is 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments could be extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively.
In these experiments the build-up of such interference patterns could be recorded in real time and with single-molecule sensitivity.[32][33] In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin[34]—a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot–Lau interferometer.[35][36] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[34] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[37][38] In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer.[39] In 2013, the interference of molecules beyond 10,000 u was demonstrated.[40] Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.[41] Recently Couder, Fort, et al.
showed[42] that we can use macroscopic oil droplets on a vibrating surface as a model of wave–particle duality—a localized droplet creates periodic waves around itself, and interaction with them leads to quantum-like phenomena: interference in the double-slit experiment,[43] unpredictable tunneling[44] (depending in a complicated way on the practically hidden state of the field), orbit quantization[45] (the particle has to 'find a resonance' with the field perturbations it creates—after one orbit, its internal phase has to return to the initial state) and the Zeeman effect.[46]

Treatment in modern quantum mechanics

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation; their wave-like propagation is instead described by other wave equations, such as Maxwell's equations for light. The particle-like behavior is most evident due to phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles.
It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction. Following the development of quantum field theory, the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines are for some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the limits of the wave-particle duality the quantum field theory gives the same results. There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived. Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized, so the possible momentum components the particle could have are more widespread.
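This Fourier-transform reciprocity can be demonstrated numerically; a sketch using NumPy's FFT for a Gaussian wave packet in units where hbar = 1 (grid size, box length and packet widths are illustrative choices):

```python
import numpy as np

# Position- and momentum-space widths of a Gaussian wave packet, computed with
# a discrete Fourier transform in units where hbar = 1.
def widths(sigma, n=4096, box=80.0):
    x = np.linspace(-box / 2, box / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4.0 * sigma**2))   # |psi|^2 is Gaussian with std sigma
    prob_x = np.abs(psi)**2
    prob_x /= prob_x.sum() * dx              # normalize as a density in x

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # momentum grid (hbar = 1)
    dk = 2.0 * np.pi / box
    prob_k = np.abs(np.fft.fft(psi))**2
    prob_k /= prob_k.sum() * dk              # normalize as a density in k

    std_x = np.sqrt((x**2 * prob_x).sum() * dx)
    std_k = np.sqrt((k**2 * prob_k).sum() * dk)
    return std_x, std_k

for sigma in (2.0, 0.5):
    sx, sk = widths(sigma)
    print(sx, sk, sx * sk)  # wide in x -> narrow in k, and vice versa
```

Halving the position width doubles the momentum width; for a Gaussian the product of the two sits at the uncertainty principle's minimum of hbar/2.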
Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.

Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity (%) of the particles corresponds to the probability density of finding the particle with position x or momentum component p. Top: If wavelength λ is unknown, so are momentum p, wave-vector k and energy E (de Broglie relations). As the particle is more localized in position space, Δx is smaller than for Δpx. Bottom: If λ is known, so are p, k, and E. As the particle is more localized in momentum space, Δp is smaller than for Δx.

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.

Both-particle-and-wave view

The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which will direct them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[47] At least one physicist considers the "wave-duality" as not being an incomprehensible mystery. L. E. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains:

When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves"? In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but whose behaviour is very different from what classical physics would have us expect.

It has been claimed[citation needed] that the Afshar experiment[48] (2007) shows that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, rejected by other scientists.[citation needed]

Wave-only view

At least one scientist proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Carver Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[49]

Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter.
Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms. Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:[50]

This double nature of radiation (and of material corpuscles)...has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation...appears to me as only a temporary way out...

The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett, who referred to MWI as "the wave interpretation".[51] The Three Wave Hypothesis of R. Horodecki relates the particle to wave.[52][53] The hypothesis implies that a massive particle is an intrinsically spatially as well as temporally extended wave phenomenon by a nonlinear law.

Neither-wave-nor-particle view

It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington[54] coined the name "wavicle" to describe the objects, although it is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[55]

"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."
Relational approach to wave–particle duality[edit] Relational quantum mechanics has been developed as an approach that regards the detection event as establishing a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle, and thus wave–particle duality, is thereby avoided.[56] Image of wave-particle nature of light has been captured[edit] Researchers at the École Polytechnique Fédérale de Lausanne have been able to image the wave-particle nature of light waves experimentally.[57][58] The full-length technical paper, published on 2 March 2015, is available for access.[59] Applications[edit] Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea. • Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than what is visible using visible light. • Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids. See also[edit] Notes and references[edit] 1. ^ Harrison, David (2002). "Complementarity and the Copenhagen Interpretation of Quantum Mechanics". UPSCALE. Dept. of Physics, U. of Toronto. Retrieved 2008-06-21.  2. ^ a b Schrödinger, E. (1928). Wave mechanics, pp. 185–206 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, pp. 185–186; translation at p. 447 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8. 3. ^ Bransden, B.H., Joachain, C.J. (1989/2000).
Quantum Mechanics, second edition, Pearson, Prentice Hall, Harlow UK, ISBN 978-0-582-35691-7, p. 760. 4. ^ Duane, W. (1923). The transfer in quanta of radiation momentum to matter, Proc. Natl. Acad. Sci. 9(5): 158–164. 5. ^ Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman and Sons, London, pp. 19–22. 6. ^ Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press, Chicago, pp. 77–78. 7. ^ Heisenberg, W., (1967). Quantum theory and its interpretation, quoted on p. 56 by eds. J.A. Wheeler, W.H. Zurek, (1983), Quantum Theory and Measurement, Princeton University Press, Princeton NJ, from ed. S. Rozental, Niels Bohr: his Life and Work as seen by his Friends and Colleagues, North Holland, Amsterdam. 8. ^ Kumar, Manjit (2011). Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality (Reprint ed.). W. W. Norton & Company. p. 242, 375-376. ISBN 978-0393339888.  9. ^ Bohr, N. (1927/1928). The quantum postulate and the recent development of atomic theory, Nature Supplement April 14 1928, 121: 580–590. 10. ^ Camilleri, K. (2009). Heisenberg and the Interpretation of Quantum Mechanics: the Physicist as Philosopher, Cambridge University Press, Cambridge UK, ISBN 978-0-521-88484-6. 11. ^ Preparata, G. (2002). An Introduction to a Realistic Quantum Physics, World Scientific, River Edge NJ, ISBN 978-981-238-176-7. 12. ^ Weinberg, S. (2002), The Quantum Theory of Fields 1, Cambridge University Press, ISBN 0-521-55001-7  Chapter 5. 13. ^ Walter Greiner (2001). Quantum Mechanics: An Introduction. Springer. ISBN 3-540-67458-6.  15. ^ Nathaniel Page Stites, M.A./M.S. "Light I: Particle or Wave?," Visionlearning Vol. PHY-1 (3), 2005. http://www.visionlearning.com/library/module_viewer.php?mid=132 16. ^ Young, Thomas (1804). "Bakerian Lecture: Experiments and calculations relative to physical optics". Philosophical Transactions of the Royal Society 94: 1–16. Bibcode:1804RSPT...94....1Y. 
doi:10.1098/rstl.1804.0001.  17. ^ Thomas Young: The Double Slit Experiment 18. ^ Lamb, Willis E.; Scully, Marlan O. (1968). "The photoelectric effect without photons" (PDF).  19. ^ "Observing the quantum behavior of light in an undergraduate laboratory". American Journal of Physics 72: 1210. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397.  20. ^ "light", The Columbia Encyclopedia, Sixth Edition. 2001–05. 21. ^ Buchwald, Jed (1989). The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. Chicago: University of Chicago Press. ISBN 0-226-07886-8. OCLC 18069573 59210058.  22. ^ Zhang, Q (1996). "Intensity dependence of the photoelectric effect induced by a circularly polarized laser beam". Physics Letters A 216 (1-5): 125–128. Bibcode:1996PhLA..216..125Z. doi:10.1016/0375-9601(96)00259-9.  23. ^ Donald H Menzel, "Fundamental formulas of Physics", volume 1, page 153; Gives the de Broglie wavelengths for composite particles such as protons and neutrons. 24. ^ Brian Greene, The Elegant Universe, page 104 "all matter has a wave-like character" 25. ^ a b See this Science Channel production (Season II, Episode VI "How Does The Universe Work?"), presented by Morgan Freeman, https://www.youtube.com/watch?v=W9yWv5dqSKk 26. ^ Bohmian Mechanics, Stanford Encyclopedia of Philosophy. 27. ^ Bell, J. S., "Speakable and Unspeakable in Quantum Mechanics", Cambridge: Cambridge University Press, 1987. 28. ^ Y. Couder, A. Boudaoud, S. Protière, Julien Moukhtar, E. Fort: Walking droplets: a form of wave-particle duality at macroscopic level? , doi:10.1051/epn/2010101, (PDF) 29. ^ Estermann, I.; Stern O. (1930). "Beugung von Molekularstrahlen". Zeitschrift für Physik 61 (1-2): 95–125. Bibcode:1930ZPhy...61...95E. doi:10.1007/BF01340293.  30. ^ R. Colella, A. W. Overhauser and S. A. Werner, Observation of Gravitationally Induced Quantum Interference, Phys. Rev. Lett. 34, 1472–1474 (1975). 31. ^ Arndt, Markus; O. Nairz; J. Voss-Andreae, C. 
Keller, G. van der Zouw, A. Zeilinger (14 October 1999). "Wave–particle duality of C60". Nature 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170.  32. ^ Juffmann, Thomas; et al. (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. Retrieved 27 March 2012.  33. ^ Quantumnanovienna. "Single molecules in a quantum interference movie". Retrieved 2012-04-21.  34. ^ a b Hackermüller, Lucia; Stefan Uttenthaler; Klaus Hornberger; Elisabeth Reiger; Björn Brezger; Anton Zeilinger; Markus Arndt (2003). "The wave nature of biomolecules and fluorofullerenes". Phys. Rev. Lett. 91 (9): 090408. arXiv:quant-ph/0309016. Bibcode:2003PhRvL..91i0408H. doi:10.1103/PhysRevLett.91.090408. PMID 14525169.  35. ^ Clauser, John F.; S. Li (1994). "Talbot–von Lau interferometry with cold slow potassium atoms". Phys. Rev. A 49 (4): R2213–17. Bibcode:1994PhRvA..49.2213C. doi:10.1103/PhysRevA.49.R2213. PMID 9910609.  36. ^ Brezger, Björn; Lucia Hackermüller; Stefan Uttenthaler; Julia Petschinka; Markus Arndt; Anton Zeilinger (2002). "Matter-wave interferometer for large molecules". Phys. Rev. Lett. 88 (10): 100404. arXiv:quant-ph/0202158. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334.  37. ^ Hornberger, Klaus; Stefan Uttenthaler; Björn Brezger; Lucia Hackermüller; Markus Arndt; Anton Zeilinger (2003). "Observation of Collisional Decoherence in Interferometry". Phys. Rev. Lett. 90 (16): 160401. arXiv:quant-ph/0303093. Bibcode:2003PhRvL..90p0401H. doi:10.1103/PhysRevLett.90.160401. PMID 12731960.  38. ^ Hackermüller, Lucia; Klaus Hornberger; Björn Brezger; Anton Zeilinger; Markus Arndt (2004). "Decoherence of matter waves by thermal emission of radiation". Nature 427 (6976): 711–714. arXiv:quant-ph/0402146. Bibcode:2004Natur.427..711H. doi:10.1038/nature02276. PMID 14973478.  39. ^ Gerlich, Stefan; et al. (2011). "Quantum interference of large organic molecules".
Nature Communications 2 (263). Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521. PMID 21468015.  40. ^ Eibenberger, S.; Gerlich, S.; Arndt, M.; Mayor, M.; Tüxen, J. (2013). "Matter–wave interference of particles selected from a molecular library with masses exceeding 10 000 amu". Physical Chemistry Chemical Physics 15 (35): 14696–14700. doi:10.1039/c3cp51500a. PMID 23900710.  41. ^ Peter Gabriel Bergmann, The Riddle of Gravitation, Courier Dover Publications, 1993 ISBN 0-486-27378-4 online 42. ^ http://www.youtube.com/watch?v=W9yWv5dqSKk - You Tube video - Yves Couder Explains Wave/Particle Duality via Silicon Droplets 43. ^ Y. Couder, E. Fort, Single-Particle Diffraction and Interference at a Macroscopic Scale, PRL 97, 154101 (2006) online 44. ^ A. Eddi, E. Fort, F. Moisy, Y. Couder, Unpredictable Tunneling of a Classical Wave–Particle Association, PRL 102, 240401 (2009) 45. ^ Fort, E.; Eddi, A.; Boudaoud, A.; Moukhtar, J.; Couder, Y. (2010). "Path-memory induced quantization of classical orbits". PNAS 107 (41): 17515–17520. doi:10.1073/pnas.1007386107.  46. ^ http://prl.aps.org/abstract/PRL/v108/i26/e264503 - Level Splitting at Macroscopic Scale 47. ^ (Buchanan pp. 29–31) 48. ^ Afshar S.S. et al: Paradox in Wave Particle Duality. Found. Phys. 37, 295 (2007) http://arxiv.org/abs/quant-ph/0702188 arXiv:quant-ph/0702188 49. ^ David Haddon. "Recovering Rational Science". Touchstone. Retrieved 2007-09-12.  50. ^ Paul Arthur Schilpp, ed, Albert Einstein: Philosopher-Scientist, Open Court (1949), ISBN 0-87548-133-7, p 51. 51. ^ See section VI(e) of Everett's thesis: The Theory of the Universal Wave Function, in Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X, pp 3–140. 52. ^ Horodecki, R. (1981). "De broglie wave and its dual wave". Phys. Lett. A 87 (3): 95–97. Bibcode:1981PhLA...87...95H. doi:10.1016/0375-9601(81)90571-5.  
53. ^ Horodecki, R. (1983). "Superluminal singular dual wave". Lett. Novo Cimento 38: 509–511.  54. ^ Eddington, Arthur Stanley (1928). The Nature of the Physical World. Cambridge, UK.: MacMillan. p. 201.  55. ^ Penrose, Roger (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage. p. 521, §21.10. ISBN 978-0-679-77631-4.  56. ^ http://www.quantum-relativity.org/Quantum-Relativity.pdf. See Q. Zheng and T. Kobayashi, Quantum Optics as a Relativistic Theory of Light; Physics Essays 9 (1996) 447. Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240. 57. ^ [1] 58. ^ EPFL News 2015-02-03 The first ever photograph of light as both a particle and wave 59. ^ [2] External links[edit]
Hawking and unitarity The previous blog article about the very same topic was here. There have been roughly three major groups of answers that people proposed. 1. One of them is essentially dead today; it is the remnant theory. It argued that the black hole does not evaporate completely. Instead, a small light remnant with a large entropy remains after the evaporation process - and this remnant is what preserves the information. This approach is highly disfavored today because such small seeds simply should not be able to carry large entropy (because it violates holography). Moreover, this approach does not save unitarity anyway because the scenario still assumes the thermal radiation to be in a mixed state. 2. The other two general answers are obvious. One of them says that the information is lost, indeed. The qualitative features of Hawking's semiclassical calculations - the evolution into mixed states - survive in the exact analysis, too. Such an approach is popular among the General Relativity fundamentalists who believe that the fabric of spacetime is exactly what we think it is classically; causality in particular must be exact and no information can ever get out from a black hole. I formulated the argument in a way that makes it clear that it looks dumb to me - especially today when we know that topology of space may change and that black holes exist in unitary backgrounds of string theory. The Hawking process itself is an example of a violation of the strict rules of locality and causality by black hole physics! 3. The last answer, the only one that has always respected the principles of the 20th century physics, says that the information is preserved in the same way as in any other process in the world - burning books is an example. (Only later, I noticed that Hawking has independently chosen the very same example.) 
When we burn books, it looks as though we are destroying information, but of course the information about the letters remains encoded in the correlations between the particles of the smoke that remains; it's just hard to read a book from its smoke. The smoke otherwise looks universal, much like the thermal radiation of a black hole. But we know that if we look at the situation in detail, using the full many-body Schrödinger equation, the state of the electrons evolves unitarily. The same thing must hold for black holes. And the feeling that such a transfer of information is impossible because of the horizon is just an illusion; it is an artifact of the semiclassical approximation that paints the rules of locality and causality as more strict than they are in the full theory. Locality and causality are, in general, approximate emergent concepts that appear in the (semi)classical limit. The power of the full theory of quantum gravity to violate locality and causality in a subtle way is manifested whenever horizons develop, and it is responsible for the conservation of the information. Note that the conservation of the information is the only answer that can be acceptable for a physicist who treats the postulates of quantum mechanics seriously. No doubt, the postulates of quantum mechanics seem rigid and un-modifiable, while the exact degrees of freedom and terms in the Lagrangian that describe general relativity are flexible. The quantum mechanical postulates have a higher priority, and they tell us that the information must be preserved in the details of the nearly thermal Hawking radiation that remains after the black hole disappears. While Stephen Hawking had believed that the information was lost - and he has made bets of this kind - he eventually switched to our side in the summer of 2003 or 2004 (I am uncertain now). As you could hear from CNN and other major global news agencies, he officially admitted that his opinion was incorrect.
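The burning-book logic can be made concrete with the smallest possible toy model: a pure two-qubit entangled state (the "book" plus the "smoke" it correlates with), whose global state is exactly pure while either subsystem alone looks maximally mixed, i.e. thermal. This is only a sketch of the bookkeeping, nothing black-hole-specific:

```python
import math

# Bell state (|00> + |11>)/sqrt(2): globally pure (no information destroyed),
# yet perfectly correlated, so each half alone looks thermal.
psi = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]  # amplitudes for |00>,|01>,|10>,|11>

# Global density matrix rho = |psi><psi| (real amplitudes, so no conjugation needed)
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

def purity(m):
    """Tr(m^2): equals 1 for a pure state, less than 1 for a mixed one."""
    n = len(m)
    return sum(m[i][k] * m[k][i] for i in range(n) for k in range(n))

# Reduced state of the first qubit: trace out the second (sum over its index k)
rho_a = [[sum(rho[2 * i + k][2 * j + k] for k in range(2)) for j in range(2)]
         for i in range(2)]

# purity(rho) is 1 (unitary evolution preserves this); purity(rho_a) is 1/2,
# the maximally mixed value -- the "smoke" looks thermal even though nothing was lost.
```

The information is entirely in the correlations, exactly as in the smoke of the burned book.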
The deep insights in string theory have convinced him that John Preskill was right and the bet is lost; Hawking gave an encyclopedia to Preskill as promised. Among these insights that have convinced Hawking, you find Matrix theory and especially the AdS/CFT correspondence. Gravity in asymptotically AdS spaces has an equivalent description in terms of a conformal field theory living on its boundary. This conformal field theory is manifestly unitary and has no room for destruction of the information. This answers the equivalent question about gravity, too. This brings most sane physicists to the opinion that the information is preserved and gravitational physics is not that special after all. But it does not give us a quantitative, calculable framework that would explain how the information gets out of the black holes and what the subtle correlations that remember the initial state look like. Hawking's recent solution Hawking has announced that he had solved the problem. The main ideas of his solution are the following: • The scattering S-matrix is the main "nice" observable that should be calculated in a theory of quantum gravity. (I fully agree.) • The scattering does not prevent a black hole from being formed, but such a black hole is just like any other intermediate state or resonance. (I fully agree.) • The thermal nature of the resulting radiation is a consequence of an approximation (that becomes accurate for large black holes) but there is no qualitative difference between black hole intermediate states and other intermediate states; the transition is smooth. (It was actually just me who formulated this point in this way.) • Just like in quantum field theory, the Euclidean setup combined with the Wick rotation is an essential technical tool to do the calculations; Hawking refers to Euclidean gravity as the "only sane way" to do quantum gravity. In the gravitational context, this approach was promoted and improved by Hawking and Gibbons.
In fact, the Euclidean approach may be even more important in quantum gravity than it is in quantum field theory and its procedures may represent an even larger fraction of the derivations in the gravitational context. (I agree, and as far as I know, the people who disagree - such as Jacques Distler - have not offered any rational and valid arguments so far.) OK, so Hawking tells you to calculate the S-matrix by a Euclidean path integral over topologically trivial configurations (spacetimes) - those that are continuously connected to the empty spacetime. Such a process may involve a production of a large number of particles in the final state, which is a hallmark of an intermediate black hole. Once you calculate the Euclidean S-matrix, you Wick rotate the results to get the amplitudes for the Minkowski signature. Note that we have only included the topologically trivial spacetimes and this is a good choice that preserves unitarity. On the second page, Hawking proceeds with some technical subtleties. He wants to allow strong gravitational fields to occur even in the initial and final states, it seems. (It does not seem necessary when one talks about the generic S-matrix elements but it is conceivable that these strong fields appear in the Euclidean spacetime anyway.) With strong gravitational fields in place, one can't meaningfully define the wavefunction at time "t" because there is no preferred diff-invariant way of slicing the spacetime. Hawking solves this by a seemingly bizarre operation. He calculates a partition sum with periodic Euclidean time instead of the transition amplitude; it is not 100% clear at this point how he will introduce the initial and final states into this setup. (Note that the Euclidean time is spacelike and it should therefore not be interpreted as a source of the usual violation of causality.) Moreover, this partition sum has a volume-extensive divergent factor.
Hawking regulates this infrared problem by introducing a small negative (anti-de-Sitter-like) cosmological constant that does not change the local physics of small black holes much. He obviously deforms the picture into an AdS one in order to get a background that is as well-defined as the usual AdS/CFT backgrounds in string theory. Hawking states that because we are making all measurements at infinity, we can never be sure whether a black hole is present inside or not. This looks like cheating to me; equivalently, it suggests that no true solution is being looked for. Of course, if we only work with the boundary degrees of freedom, we will see no unitarity violations and no problems associated with the black hole dynamics. It's simply because all these things are encoded in the CFT, which is unitary. The true surviving question is how this unitary description is reconciled with the bulk interpretation in which a macroscopic black hole is demonstrably present and has the potential to cause information loss headaches. Hawking does not have a working convergent path integral beyond the semiclassical approximation, but let us join Hawking and pretend that this problem is absent. He computes the partition sum over geometries whose boundaries are topologically S^2 (the sphere at infinity) times an S^1 (the periodic Euclidean time) at infinity; he works in four spacetime dimensions. There are two simple spacetimes with this boundary: B^3 times S^1 is the empty flat (or anti-de-Sitter) spacetime while S^2 times D^2 is the anti-de-Sitter Schwarzschild topology. While the empty spacetime can be foliated, the S^2 times D^2 cannot because it has no S^1 factor, roughly speaking. Because it can't be foliated, you can't even define what the conservation of the information should mean in this topologically non-trivial case.
The contributions to the correlators coming from the topologically trivial case are conserved as the Lorentzian time T grows; the contributions from the topologically non-trivial backgrounds decay. On page 3, Hawking confirms that he was inspired by Maldacena's hep-th/0106112 about the eternal black holes in anti de Sitter space. In that case, you also have two - actually three - geometries that fit into the S^1 times S^2 boundary: empty space, small black holes, and large black holes (compared to the radius of curvature). The large black holes dominate the ensemble; they have a large negative action. Nevertheless, using the bulk techniques you may calculate that a correlator of O(x)O(y) on the boundary decays for large separations (while it has the usual flat-space behavior if x,y are nearby). Such a decrease looks much like other cases of information loss; nevertheless, in this case you may argue that there is a unitary CFT behind it and the exponential decrease may in principle be reduced to repeated scattering. Maldacena also showed that the contribution of the empty spacetime does not decay and it has the right magnitude to be consistent with unitarity; Hawking argues that he strengthened this observation by having shown that the path integral over topologically trivial spacetimes only is unitary. (Again, it is not obvious whether his formal argument holds in reality because of the usual loop UV problems of general relativity.) The large black holes are not too interesting because they don't evaporate. Instead, we want to look at the small black holes. Hawking has been trying to find a Euclidean geometry corresponding to an evaporating Lorentzian black hole for years. Now he says that he failed because there is no such geometry. In the Euclidean setup, only the metrics that can be foliated - empty space and eternal black holes - should be added to the path integral.
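Maldacena's observation about the non-decaying contribution can be mimicked with an elementary toy model: in any unitary system with a finite, discrete spectrum, a correlator dephases at early times (which looks like information loss) but its long-time average settles at a nonzero floor of order 1/N. This is a hedged sketch with a random spectrum and no gravity in it, illustrating only the spectral mechanism:

```python
import cmath
import random

rng = random.Random(0)
N = 200
energies = [rng.uniform(0.0, 10.0) for _ in range(N)]  # finite, discrete spectrum

def survival(t):
    """|<psi(0)|psi(t)>|^2 for an equal-weight superposition of N energy eigenstates."""
    amp = sum(cmath.exp(-1j * e * t) for e in energies) / N
    return abs(amp) ** 2

initial = survival(0.0)   # exactly 1: nothing has dephased yet
early = survival(5.0)     # random phases: the amplitude collapses, as if information were lost
# Long-time average: a floor of order 1/N survives forever --
# a truly unitary theory with discrete spectrum cannot decay to zero.
late_avg = sum(survival(float(t)) for t in range(1000, 2000)) / 1000
```

The same counting (with N replaced by e^S) is why the empty-spacetime contribution of order e^(-S) matters for unitarity even though it is exponentially small.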
One of the main questions that you must certainly ask is: Why does dynamics over topologically trivial spacetimes look like the creation of a long-lived black hole with horizons in the Lorentzian signature? I believe that Hawking does not fully answer this question; he only says that "thermal fluctuations may occasionally be large enough to cause a gravitational collapse that creates a small black hole". Let me re-iterate that such a short comment is deeply unsatisfactory. What we want to understand in the first place is the bulk description of the process in which we can see that the usual long-lived black hole is there; we want to see how the concepts of locality and causality are corrected so that the information can escape. Hawking only says that this solution of the information loss puzzle is possible. We could have said the same thing just because there is a dual unitary CFT description. But the local bulk dynamical mechanisms that make these things possible remain nearly as cloudy as before. Some of Hawking's conclusions say: • There are no baby universes branching off - which is what Hawking used to think. The information is preserved purely in our Universe. • The black hole can form while remaining topologically trivial because its evaporation may be viewed as a tunnelling process (Hartle-Hawking). Although this comment can't be considered to be a quantitative answer to my main question, I like it, and let me describe an analogy. Imagine quantum mechanics of a particle on a line. The classically inaccessible regions (E smaller than V) may be compared to the black hole interior. Classically, these are qualitatively different regions from the rest. However, quantum mechanically, the qualitative difference disappears because of tunnelling. All points on the line are qualitatively on equal footing. You can get there. This is why the black hole should be thought of as having a trivial topology quantum mechanically.
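The tunnelling analogy can be made quantitative with the standard WKB estimate for a square barrier: a finite classically forbidden region is always penetrable, with a probability that falls exponentially in its width. A sketch in natural units (m = hbar = 1), purely illustrative of the particle-on-a-line analogy:

```python
import math

def wkb_transmission(V0, E, width, m=1.0, hbar=1.0):
    """WKB tunnelling probability through a square barrier of height V0 > E:
    T ~ exp(-2 * kappa * width), with kappa = sqrt(2 m (V0 - E)) / hbar."""
    kappa = math.sqrt(2.0 * m * (V0 - E)) / hbar
    return math.exp(-2.0 * kappa * width)

# A finite forbidden region is always penetrable...
t_narrow = wkb_transmission(V0=2.0, E=1.0, width=1.0)
# ...but the probability dies exponentially as the region widens,
# and vanishes only in the limit of an infinitely wide barrier.
t_wide = wkb_transmission(V0=2.0, E=1.0, width=10.0)
```

The exponential suppression, rather than a strict prohibition, is the quantum-mechanical sense in which "you can get there".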
The situation would change for an infinite inaccessible region (infinite black hole) where you can't tunnel. Let me summarize: Hawking's argument why the evolution is unitary probably works and The Reference Frame agrees with virtually all of Hawking's broader opinions, but such a solution is not much different from the observation that the dual CFT is unitary. The question why these unitary processes look like a small long-lived black hole and how the necessary correlations are created remains mostly unanswered. Hawking has lost a bet but he seems to think that he has made the critical steps to solve the information loss puzzle. While he has given the encyclopedia of baseball to John Preskill, next time he will give him the ashes from a burned book (or the nearly thermal Hawking radiation) because John Preskill can always reconstruct the information out of them. snail feedback (3) : reader Quantoken said... Lubos said: Not so easily, Hawking would have to give not just the ashes, but also every little bit of debris that could be flying away, and every single photon emitted during the burning, as well as preserve the exact direction and time at which the photons fly away. You need every little bit of quantum information to reconstruct the book. Hawking might as well give Preskill a time machine to allow him to travel back to the moment right before the book was burned, or just give him the book. reader Olias said... Lubos, I believe there is a hidden variable contained within the end-script by Hawking. The Ashes: may be with reference to Cricket, rather than Baseball? There is also a saying in the English Language: It's Just not Cricket!..which has the meaning of, Rules have been Broken? So in the context of say, Baseball, one can state that if a player has gained an unfair advantage, by say bending the Rule Book, English people tend to shout: It's just not Cricket!
the Hawking paragraph again, and one can see a little bit of Hawking 'handwaving', tinged with a famous sense of humour? reader esc said... Thanks for an accessible explanation of this situation. I caught part of the recent Discovery Channel program on this tonight in a bar with poorly-written subtitles, and needed a quick catch-up on the current state of things in this world. I can't wait for a book to come out that is as comprehensible to a casual user of high science like myself as Gleick's Chaos was back in high school, but covering more recent events.
Branches of physics From Wikipedia, the free encyclopedia Domains of major fields of physics Physics deals with the combination of matter and energy. It also deals with a wide variety of systems, about which theories have been developed that are used by physicists. In general, theories are experimentally tested numerous times before they are accepted as correct as a description of Nature (within a certain domain of validity). For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at much less than the speed of light. These theories continue to be areas of active research: for instance, a remarkable aspect of classical mechanics known as chaos was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Isaac Newton (1642–1727). These "central theories" are important tools for research in more specialized topics, and any physicist, regardless of his or her specialization, is expected to be literate in them. Classical mechanics[edit] Classical mechanics is a model of the physics of forces acting upon bodies. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. Thermodynamics and statistical mechanics[edit] The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge was lost.[1] By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based.
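The hard-sphere picture leads directly to quantitative predictions: kinetic theory relates temperature to molecular speeds via (3/2) k_B T = (1/2) m <v^2>. A minimal sketch using standard constants, with nitrogen chosen only as an illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
u = 1.66054e-27      # atomic mass unit, kg
m_n2 = 28.0 * u      # mass of an N2 molecule

def v_rms(T, m):
    """Root-mean-square molecular speed from kinetic theory:
    (3/2) k_B T = (1/2) m <v^2>  =>  v_rms = sqrt(3 k_B T / m)."""
    return math.sqrt(3.0 * k_B * T / m)

speed = v_rms(300.0, m_n2)  # nitrogen at room temperature, roughly 517 m/s
```

That air molecules move at roughly the speed of sound at room temperature is one of the classic early successes of the kinetic theory.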
Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems on the macroscopic scale, and the transfer of energy as heat.[2][3] Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines.[4] The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work.[5] They also postulate the existence of a quantity named entropy, which can be defined for any system.[6] In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.

Maxwell's equations of electromagnetism:

\begin{align}
&\nabla \cdot \mathbf{D} = \rho_f \\
&\nabla \cdot \mathbf{B} = 0 \\
&\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \\
&\nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t}
\end{align}

The special theory of relativity enjoys a relationship with electromagnetism and mechanics; that is, the principle of relativity and the principle of stationary action in mechanics can be used to derive Maxwell's equations,[7][8] and vice versa. The theory of special relativity was proposed in 1905 by Albert Einstein in his article "On the Electrodynamics of Moving Bodies". The title of the article refers to the fact that special relativity resolves an inconsistency between Maxwell's equations and classical mechanics. The theory is based on two postulates: (1) that the mathematical forms of the laws of physics are invariant in all inertial systems; and (2) that the speed of light in a vacuum is constant and independent of the source or observer.
Reconciling the two postulates requires a unification of space and time into the frame-dependent concept of spacetime. General relativity is the geometrical theory of gravitation published by Albert Einstein in 1915/16.[9][10] It unifies special relativity, Newton's law of universal gravitation, and the insight that gravitation can be described by the curvature of space and time. In general relativity, the curvature of spacetime is produced by the energy of matter and radiation. Quantum mechanics[edit] Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction with radiation. It is based on the observation that all forms of energy are released in discrete units or bundles called "quanta". Remarkably, quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wavefunctions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics—i.e., it predicts the future behavior of a dynamic system—and is a wave equation that is used to solve for wavefunctions. In 1924, Louis de Broglie proposed that not only do light waves sometimes exhibit particle-like properties, but particles may also exhibit wave-like properties. Two different formulations of quantum mechanics were presented following de Broglie's suggestion. The wave mechanics of Erwin Schrödinger (1926) involves the use of a mathematical entity, the wave function, which is related to the probability of finding a particle at a given point in space. The matrix mechanics of Werner Heisenberg (1925) makes no mention of wave functions or similar concepts but was shown to be mathematically equivalent to Schrödinger's theory. 
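The eigenvalue role of the Schrödinger equation can be checked explicitly for the textbook infinite square well, where the wave mechanics described above gives closed-form wavefunctions. A minimal numerical sketch in natural units (hbar = 1), not taken from the article itself:

```python
import math

# Infinite square well of width L: psi_n(x) = sin(n pi x / L) should satisfy
# the time-independent Schrödinger equation -(1/2m) psi'' = E_n psi
# with E_n = (n pi / L)^2 / (2 m)   (hbar = 1, V = 0 inside the well).
L, m, n = 1.0, 1.0, 3
E_n = (n * math.pi / L) ** 2 / (2.0 * m)

def psi(x):
    return math.sin(n * math.pi * x / L)

# Verify the eigenvalue equation at one interior point by central finite differences
x, h = 0.3, 1e-5
psi_second = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h ** 2
lhs = -psi_second / (2.0 * m)   # kinetic term acting on psi
rhs = E_n * psi(x)              # energy eigenvalue times psi
```

The two sides agree to the accuracy of the finite-difference stencil, which is the sense in which solving the Schrödinger equation means finding functions that the Hamiltonian merely rescales.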
A particularly important discovery of the quantum theory is the uncertainty principle, enunciated by Heisenberg in 1927, which places an absolute theoretical limit on the accuracy of certain measurements; as a result, the assumption by earlier scientists that the physical state of a system could be measured exactly and used to predict future states had to be abandoned. Quantum mechanics was combined with the theory of relativity in the formulation of Paul Dirac. Other developments include quantum statistics; quantum electrodynamics, concerned with interactions between charged particles and electromagnetic fields; and its generalization, quantum field theory.

Interdisciplinary fields

A number of interdisciplinary fields partially define sciences of their own. The table below lists the core theories along with many of the concepts they employ.

Classical mechanics
Major subtopics: Newton's laws of motion, Lagrangian mechanics, Hamiltonian mechanics, kinematics, statics, dynamics, chaos theory, acoustics, fluid dynamics, continuum mechanics
Concepts: Density, dimension, gravity, space, time, motion, length, position, velocity, acceleration, Galilean invariance, mass, momentum, impulse, force, energy, angular velocity, angular momentum, moment of inertia, torque, conservation law, harmonic oscillator, wave, work, power, Lagrangian, Hamiltonian, Tait–Bryan angles, Euler angles

Electromagnetism
Major subtopics: Electrostatics, electrodynamics, electricity, magnetism, magnetostatics, Maxwell's equations, optics
Concepts: Capacitance, electric charge, current, electrical conductivity, electric field, electric permittivity, electric potential, electrical resistance, electromagnetic field, electromagnetic induction, electromagnetic radiation, Gaussian surface, magnetic field, magnetic flux, magnetic monopole, magnetic permeability

Thermodynamics and statistical mechanics
Major subtopics: Heat engine, kinetic theory
Concepts: Boltzmann's constant, conjugate variables, enthalpy, entropy, equation of
state, equipartition theorem, thermodynamic free energy, heat, ideal gas law, internal energy, laws of thermodynamics, Maxwell relations, irreversible process, Ising model, mechanical action, partition function, pressure, reversible process, spontaneous process, state function, statistical ensemble, temperature, thermodynamic equilibrium, thermodynamic potential, thermodynamic processes, thermodynamic state, thermodynamic system, viscosity, volume, work, granular material

Quantum mechanics
Major subtopics: Path integral formulation, scattering theory, Schrödinger equation, quantum field theory, quantum statistical mechanics
Concepts: Adiabatic approximation, black-body radiation, correspondence principle, free particle, Hamiltonian, Hilbert space, identical particles, matrix mechanics, Planck's constant, observer effect, operators, quanta, quantization, quantum entanglement, quantum harmonic oscillator, quantum number, quantum tunneling, Schrödinger's cat, Dirac equation, spin, wavefunction, wave mechanics, wave–particle duality, zero-point energy, Pauli exclusion principle, Heisenberg uncertainty principle

Relativity
Major subtopics: Special relativity, general relativity, Einstein field equations
Concepts: Covariance, Einstein manifold, equivalence principle, four-momentum, four-vector, general principle of relativity, geodesic motion, gravity, gravitoelectromagnetism, inertial frame of reference, invariance, length contraction, Lorentzian manifold, Lorentz transformation, mass–energy equivalence, metric, Minkowski diagram, Minkowski space, principle of relativity, proper length, proper time, reference frame, rest energy, rest mass, relativity of simultaneity, spacetime, special principle of relativity, speed of light, stress–energy tensor, time dilation, twin paradox, world line

2. ^ Perrot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6.
4. ^ Clausius, Rudolf (1850). On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat.
Poggendorff's Annalen der Physik, LXXIX (Dover Reprint). ISBN 0-486-59065-8.
5. ^ Van Ness, H.C. (1969). Understanding Thermodynamics. Dover Publications, Inc. ISBN 0-486-63277-6.
7. ^ Landau and Lifshitz (1951, 1962). The Classical Theory of Fields. Library of Congress Card Number 62-9181, Chapters 1–4 (3rd edition is ISBN 0-08-016019-0).
8. ^ Corson and Lorrain. Electromagnetic Fields and Waves. ISBN 0-7167-1823-5.
9. ^ Einstein, Albert (November 25, 1915). "Die Feldgleichungen der Gravitation". Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin: 844–847. Retrieved 2006-09-12.
Roger Penrose

Roger Penrose thinks that new physical phenomena, as yet unobserved, may be responsible for consciousness and free will. In particular, he has developed a theory of "correct" quantum gravity, later called "objective reduction," that allows the superposition of quantum states to collapse into a single state without the randomness or indeterminacy of standard quantum mechanics. Penrose thinks the mysteries of consciousness and free will can be explained by quantum mysteries. In his 1989 book The Emperor's New Mind, Penrose considers the role of the unconscious mind in generating alternative possibilities for original thoughts.

What, then, is my view as to the role of the unconscious in inspirational thought? I admit that the issues are not so clear as I would like them to be.
This is an area where the unconscious seems indeed to be playing a vital role, and I must concur with the view that unconscious processes are important. I must agree, also, that it cannot be that the unconscious mind is simply throwing up ideas at random. There must be a powerfully impressive selection process that allows the conscious mind to be disturbed only by ideas that 'have a chance'. I would suggest that these criteria for selection — largely 'aesthetic' ones, of some sort — have been already strongly influenced by conscious desiderata (like the feeling of ugliness that would accompany mathematical thoughts that are inconsistent with already established general principles). In relation to this, the question of what constitutes genuine originality should be raised. It seems to me that there are two factors involved, namely a 'putting-up' and a 'shooting-down' process. I imagine that the putting-up could be largely unconscious and the shooting-down largely conscious. Without an effective putting-up process, one would have no new ideas at all. But, just by itself, this procedure would have little value. One needs an effective procedure for forming judgements, so that only those ideas with a reasonable chance of success will survive. In dreams, for example, unusual ideas may easily come to mind, but only very rarely do they survive the critical judgements of the wakeful consciousness. (For my own part, I have never had a successful scientific idea in a dreaming state, while others, such as the chemist Kekule in his discovery of the structure of benzene, may have been more fortunate.) In my opinion, it is the conscious shooting-down (judgement) process that is central to the issue of originality, rather than the unconscious putting-up process; but I am aware that many others might hold to a contrary view. 
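Penrose's "putting-up" and "shooting-down" description is essentially a generate-and-test loop. As a loose analogy only (a toy sketch, with arbitrary numbers standing in for "ideas" and a made-up selection criterion, none of it from Penrose), it could be caricatured as:

```python
import random

random.seed(1)  # reproducible toy run

def put_up(n):
    """'Putting-up' stage: propose n random candidate ideas (numbers here)."""
    return [random.randint(1, 100) for _ in range(n)]

def shoot_down(candidates, criterion):
    """'Shooting-down' stage: keep only candidates that survive judgement."""
    return [c for c in candidates if criterion(c)]

# Many candidates are put up; few survive the critical judgement step.
survivors = shoot_down(put_up(20), lambda c: c % 7 == 0)
print(survivors)
```

The point of the analogy is Penrose's claim that the filtering step, not the generation step, is where the real work of originality happens.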
Penrose is very concerned about determinism (in which the future is completely determined) and a form of "strong" determinism in which every event in the universe has been pre-determined from the beginning of the universe. He calls the deterministic evolution of the Schrödinger equation of motion U, and the random collapse (or reduction) of the wave function R. CQG is his theory of "correct quantum gravity." Determinism and strong determinism So far I have said little about the question of 'free will' which is normally taken to be the fundamental issue of the active part of the mind—body problem. Instead, I have concentrated on my suggestion that there is an essential non-algorithmic aspect to the role of conscious action. Normally, the issue of free will is discussed in relation to determinism in physics. Recall that in most of our SUPERB theories there is a clear-cut determinism, in the sense that if the state of the system is known at any one time, then it is completely fixed at all later (or indeed earlier) times by the equations of the theory. In this way there seems to be no room for 'free will' since the future behaviour of a system seems to be totally determined by the physical laws. The determinism of U and the indeterminism and randomness of R leads Penrose to the standard argument against free will Even the U part of quantum mechanics has this completely deterministic character. However, the R 'quantum-jump' is not deterministic, and it introduces a completely random element into the time-evolution. Early on, various people leapt at the possibility that here might be a role for free will, the action of consciousness perhaps having some direct effect on the way that an individual quantum system might jump. But if R is really random, then it is not a great deal of help either, if we wish to do something positive with our free wills. 
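The distinction between deterministic U and random R can be shown in miniature for a single qubit (a schematic sketch, not Penrose's proposal: a Hadamard-type rotation stands in for U, and a Born-rule measurement stands in for R):

```python
import numpy as np

rng = np.random.default_rng(0)

# U-process: deterministic unitary evolution (here a Hadamard rotation).
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
psi = np.array([1.0, 0.0])        # start in the basis state |0>
psi = U @ psi                     # the same input always gives the same output

# R-process: the probabilistic 'quantum jump' with Born-rule weights.
probs = np.abs(psi) ** 2          # [0.5, 0.5] for this state
outcome = rng.choice(2, p=probs)  # genuinely random choice of alternative
psi = np.eye(2)[outcome]          # the state reduces to the measured alternative
print(probs, outcome)
```

Repeating the U step reproduces the same state every time; repeating the R step scatters outcomes with the Born-rule frequencies, which is the randomness Penrose says is "not a great deal of help" for free will.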
My own point of view, although it is not very well formulated in this respect, would be that some new procedure (CQG) takes over at the quantum—classical borderline which interpolates between U and R (each of which are now regarded as approximations), and that this new procedure would contain an essentially non-algorithmic element. This would imply that the future would not be computable from the present, even though it might be determined by it. I have tried to be clear in distinguishing the issue of computability from that of determinism, in my discussions in Chapter 5. It seems to me to be quite plausible that CQG might be a deterministic but non-computable theory. Sometimes people take the view that even with classical (or U-quantum) determinism there is no effective determinism, because the initial conditions cannot ever be well-enough known that the future could actually be computed. Sometimes very small changes in the initial conditions can lead to very large differences in the final outcome. This is what happens, for example, in the phenomenon known as 'chaos' in a (classical) deterministic system — an example being the uncertainty of weather prediction. However, it is very hard to believe that this kind of classical uncertainty can be what allows us our (illusion of?) free will. The future behaviour would still be determined, right from the big bang, even though we would be unable to calculate it. The same objection might be raised against my suggestion that a lack of computability might be intrinsic to the dynamical laws — now assumed to be non-algorithmic in character — rather than to our lack of information concerning initial conditions. Even though not computable, the future would, on this view, still be completely fixed by the past — all the way back to the big bang. In fact, I am not being so dogmatic as to insist that CQG ought to be deterministic but non-computable. 
My guess would be that the sought-for theory would have a more subtle description than that. I am only asking that it should contain non-algorithmic elements of some essential kind. To close this section, I should like to remark on an even more extreme view that one might hold towards the issue of determinism. This is what I have referred to as strong determinism. According to strong determinism, it is not just a matter of the future being determined by the past; the entire history of the universe is fixed, according to some precise mathematical scheme, for all time. Such a viewpoint might have some appeal if one is inclined to identify the Platonic world with the physical world in some way, since Plato's world is fixed once and for all, with no 'alternative possibilities' for the universe! (I sometimes wonder whether Einstein might have had such a scheme in mind when he wrote 'What I'm really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all!' (letter to Ernst Strauss; see Kuznetsov 1977, p. 285).)

In his 1994 book Shadows of the Mind, Penrose speculated further that free will might result from a dualistic mind influencing the random R process. This was the "interactionist" view of neuroscientist John Eccles and philosopher Karl Popper. Penrose doubts that responsibility and control are possible, given the standard argument against free will (§1.11).

The issue of 'responsibility' raises deep philosophical questions concerning the ultimate causes of our behaviour. It might well be argued that each of our actions is ultimately determined by our inheritance and by our environment — or else by those numerous chance factors that continually affect our lives. Are not all of these influences 'beyond our control', and therefore things for which we cannot ultimately be held responsible?
Is the matter of 'responsibility' merely one of convenience of terminology, or is there actually something else — a 'self' lying beyond all such influences — which exerts a control over our actions? The legal issue of 'responsibility' seems to imply that there is indeed, within each one of us, some kind of an independent 'self' with its own responsibilities — and, by implication, rights — whose actions are not attributable to inheritance, environment, or chance. If it is other than a mere convenience of language that we speak as though there were such an independent 'self', then there must be an ingredient missing from our present-day physical understandings. The discovery of such an ingredient would surely profoundly alter our scientific outlook. This book will not supply an answer to these deep issues, but I believe that it may open the door to them by a crack — albeit only by a crack. It will not tell us that there need necessarily be a 'self' whose actions are not attributable to external cause, but it will tell us to broaden our view as to the very nature of what a 'cause' might be. A 'cause' could be something that cannot be computed in practice or in principle. I shall argue that when a 'cause' is the effect of our conscious actions, then it must be something very subtle, certainly beyond computation, beyond chaos, and also beyond any purely random influences. Whether such a concept of 'cause' could lead us any closer to an understanding of the profound issue (or the 'illusion'?) of our free wills is a matter for the future. (p.36)

6.8 Is it consciousness that reduces the state vector?

Among those who take |Ψ> seriously as a description of the physical world, there are some who would argue — as an alternative to trusting U at all scales, and thus believing in a many-worlds type of viewpoint — that something of the nature of R actually takes place as soon as the consciousness of an observer becomes involved.
The distinguished physicist Eugene Wigner once sketched a theory of this nature (Wigner 1961). The general idea would be that unconscious matter — or perhaps just inanimate matter — would evolve according to U, but as soon as a conscious entity (or 'life') becomes physically entangled with the state, then something new comes in, and a physical process that results in R takes over actually to reduce the state. There need be no suggestion, with such a viewpoint, that somehow the conscious entity might be able to 'influence' the particular choice that Nature makes at this point. Such a suggestion would lead us into distinctly murky waters and, as far as I am aware, there would be a severe conflict with observed facts with any too simplistic suggestion that a conscious act of will could influence the result of a quantum-mechanical experiment. Thus, we are not requiring, here, that 'conscious free will' should necessarily be taking an active role with regard to R (but cf. §7.1, for some alternative viewpoints). No doubt some readers might expect that, since I am searching for a link between the quantum measurement problem and the problem of consciousness, I might find myself attracted by ideas of this general nature. I should make myself clear that this is not the case. It is probable, after all, that consciousness is a rather rare phenomenon throughout the universe. There appears to be a good deal of it occurring in many places on the surface of the earth, but as far as evidence has presented itself to us to this date, there is no highly developed consciousness — if, indeed, any at all — right out into depths of the universe many light centuries away from us. It would be a very strange picture of a 'real' physical universe in which physical objects evolve in totally different ways depending upon whether or not they are within sight or sound or touch of one of its conscious inhabitants. (p.329) 7.1 Large-scale quantum action in brain function? 
Brain action, according to the conventional viewpoint, is to be understood in terms of essentially classical physics — or so it would seem. Nerve signals are normally taken to be 'on or off' phenomena, just as are the currents in the electronic circuits of a computer, which either take place or do not take place — with none of the mysterious superpositions of alternatives that are characteristic of quantum actions. Whilst it would be admitted that, at underlying levels, quantum effects must have their roles to play, biologists seem to be generally of the opinion that there is no necessity to be forced out of a classical framework when discussing the large-scale implications of those primitive quantum ingredients. The chemical forces that control the interactions of atoms and molecules are indeed quantum mechanical in origin, and it is largely chemical action that governs the behaviour of the neurotransmitter substances that transfer signals from one neuron to another — across tiny gaps that are called synaptic clefts. Likewise, the action potentials that physically control nerve-signal transmission itself have an admittedly quantum-mechanical origin. Yet it seems to be generally assumed that it is quite adequate to model the behaviour of neurons themselves, and their relationships with one another, in a completely classical way. It is widely believed, accordingly, that it should be entirely appropriate to model the physical functioning of the brain as a whole, as a classical system, where the more subtle and mysterious features of quantum physics do not significantly enter the description. This would have the implication that any possible significant activity that might take place in a brain is indeed to be taken as either 'occurring' or 'not occurring'. The strange superpositions of quantum theory, that would allow simultaneous 'occurring' and 'not occurring' — with complex-number weighting factors — would, accordingly, be considered to play no significant role. 
Whilst it might be accepted that at some submicroscopic level of activity such quantum superpositions do 'really' take place, it would be felt that the interference effects that are characteristic of such quantum phenomena would have no role at the relevant larger scales. Thus, it would be considered adequate to treat any such superpositions as though they were statistical mixtures, and the classical modelling of brain activity would be perfectly satisfactory FAPP [for all practical purposes]. There are certain dissenting opinions from this, however. In particular, the renowned neurophysiologist John Eccles has argued for the importance of quantum effects in synaptic action (see, in particular, Beck and Eccles (1992), Eccles (1994)). He points to the presynaptic vesicular grid — a paracrystalline hexagonal lattice in the brain's pyramidal cells — as being an appropriate quantum site. Also, some people (even including myself; cf. ENM [The Emperor's New Mind ], pp. 400-401, and Penrose 1987) have tried to extrapolate from the fact that light-sensitive cells in the retina (which is technically part of the brain) can respond to a small number of photons (Hecht et al. 1941) — sensitive even to a single photon (Baylor et al. 1979), in appropriate circumstances—and to speculate that there might be neurons in the brain, proper, that are also essentially quantum detection devices. With the possibility that quantum effects might indeed trigger much larger activities within the brain, some people have expressed the hope that, in such circumstances, quantum indeterminacy might be what provides an opening for the mind to influence the physical brain. Here, a dualistic viewpoint would be likely to be adopted, either explicitly or implicitly. Perhaps the 'free will' of an 'external mind' might be able to influence the quantum choices that actually result from such non-deterministic processes. 
On this view, it is presumably through the action of quantum theory's R-process that the dualist's 'mind-stuff' would have its influence on the behaviour of the brain. The status of such suggestions is unclear to me, especially since, in standard quantum theory, quantum indeterminacy does not occur at quantum-level scales, since it is the deterministic U-evolution that always holds at this level. It is only in the magnification process from the quantum to the classical levels that the indeterminacy of R is deemed to occur. On the standard FAPP viewpoint, this indeterminacy is something that 'takes place' only when sufficient amounts of the environment become entangled with the quantum event. In fact, as we have seen in §6.6, on the standard view it is not even clear what 'taking place' actually means. It would be hard, on conventional quantum-physical grounds, to maintain that the theory does actually allow an indeterminacy to occur just at the level where a single quantum particle, such as a photon, atom, or small molecule, is critically involved. When (for example) a photon's wavefunction encounters a photon-sensitive cell, it sets in train a sequence of events that remains deterministic (action of U) so long as the system can be considered to stay 'at the quantum level'. Eventually, significant amounts of the environment become disturbed and, on the conventional view, one considers that R has occurred FAPP. One would have to contend that the 'mind-stuff' somehow influences the system only at this indeterminate stage. According to the viewpoint on state reduction that I have myself been promoting in this book (cf. §6.12), to find the level at which the R-process actually becomes operative, we must look to the quite large scales that become relevant when considerable amounts of material (microns to millimetres in diameter — or perhaps a good deal more, if no significant mass movement is involved) become entangled in the quantum state. 
(I shall henceforth denote this fairly specific but putative procedure by OR, which stands for objective reduction*.) In any case, if we try to adhere to the above dualist viewpoint, where we are looking for somewhere where an external 'mind' might have an influence on physical behaviour — presumably by replacing the pure randomness of quantum theory by something more subtle — then we must indeed find how the 'mind's' influence could enter at a much larger scale than single quantum particles. We must look to wherever the cross-over point occurs between the quantum and classical levels. As we have seen in the previous chapter, there is no general agreement about what, whether, or where such a cross-over point might be.

The question of the absolute nature of morality is relevant to the legal issues of §1.11. There is relevance, also, to the question of 'free will', as was raised at the end of §1.11: might there be something that is beyond our inheritance, beyond environmental factors, and beyond chance influences — a separate 'self' that has a profound role in controlling our actions? I believe that we are very far from an answer to this question. As far as the arguments of this book go, all that I could claim with any confidence would be that whatever is indeed involved must lie in principle beyond the capabilities of those devices that we presently call 'computers'. (p.401)

The Andromeda Paradox

In his 1989 book The Emperor's New Mind, Penrose developed a form of the Rietdijk–Putnam argument that claims to prove the universe is pre-determined for special relativistic reasons. This is a more sophisticated version of "block universe" arguments for determinism, like those of Hermann Minkowski and J. J. C. Smart. Penrose's argument is called the Andromeda Paradox. He shows that two people walking past each other in the street could have very different present moments.
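The magnitude of the effect follows from the relativity of simultaneity: two observers with relative speed v disagree about "now" at a distance d by Δt = v·d/c². A rough estimate (the walking speed and distance are assumed round numbers, not values taken from Penrose):

```python
# Shift of the simultaneity surface at Andromeda's distance: dt = v * d / c^2.
c = 299_792_458.0   # speed of light, m/s
ly = 9.4607e15      # metres in one light year
d = 2.5e6 * ly      # distance to the Andromeda Galaxy, ~2.5 million light years
v = 2.0             # two pedestrians passing at ~1 m/s each: relative speed, m/s
dt = v * d / c**2   # disagreement, in seconds, about 'now' at Andromeda
print(dt / 86_400)  # ≈ 6 days
```

Even a strolling pace, multiplied by an intergalactic distance, shifts the "present moment" at Andromeda by days, which is the crux of the paradox.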
If one of the people were walking towards the Andromeda Galaxy, events in that galaxy might be hours or even days in advance of the Andromeda events that count as "now" for the person walking in the other direction. If this occurs, it would have dramatic effects on our understanding of time. Penrose highlighted the consequences by discussing a potential invasion of Earth by aliens living in the Andromeda galaxy. On Earth, one person might live in a universe where the Andromedeans have not yet decided to invade, whilst someone passing them in the street could live in a universe where alien spaceships are already under way. Penrose describes the situation: The observers cannot see what is happening in Andromeda. It is light-years away. The paradox is that they have different ideas of what is happening "now" in Andromeda.

The Arrow of Time

Also in his 1989 book The Emperor's New Mind, Penrose speculated on the connection between information, entropy, and the arrow of time.

Recall that the primordial fireball was a thermal state — a hot gas in expanding thermal equilibrium. Recall, also, that the term 'thermal equilibrium' refers to a state of maximum entropy. (This was how we referred to the maximum entropy state of a gas in a box.) However, the second law demands that in its initial state, the entropy of our universe was at some sort of minimum, not a maximum! What has gone wrong? One 'standard' answer would run roughly as follows: True, the fireball was effectively in thermal equilibrium at the beginning, but the universe at that time was very tiny. The fireball represented the state of maximum entropy that could be permitted for a universe of that tiny size, but the entropy so permitted would have been minute by comparison with that which is allowed for a universe of the size that we find it to be today. As the universe expanded, the permitted maximum entropy increased with the universe's size, but the actual entropy in the universe lagged well behind this permitted maximum.
The second law arises because the actual entropy is always striving to catch up with this permitted maximum.

Penrose's "standard" answer is the work of David Layzer, in his 1975 Scientific American article "The Arrow of Time." Here is the Layzer picture:

A Summary of Quantum Mechanics

At the end of his chapter on "Quantum Magic and Quantum Mystery" Penrose summarized the situation:

Let us briefly review what standard quantum theory has actually told us about how we should describe the world, especially in relation to these puzzling issues — and then ask: where do we go from here? Recall, first of all, that the descriptions of quantum theory appear to apply sensibly (usefully?) only at the so-called quantum level — of molecules, atoms, or subatomic particles, but also at larger dimensions, so long as energy differences between alternative possibilities remain very small. At the quantum level, we must treat such 'alternatives' as things that can coexist, in a kind of complex-number-weighted superposition. The complex numbers that are used as weightings are called probability amplitudes. Each different totality of complex-weighted alternatives defines a different quantum state, and any quantum system must be described by such a quantum state. Often, as is most clearly the case with the example of spin, there is nothing to say which are to be 'actual' alternatives composing a quantum state and which are to be just 'combinations' of alternatives. In any case, so long as the system remains at the quantum level, the quantum state evolves in a completely deterministic way. This deterministic evolution is the process U, governed by the important Schrödinger equation. When the effects of different quantum alternatives become magnified to the classical level, so that differences between alternatives are large enough that we might directly perceive them, then such complex-weighted superpositions seem no longer to persist.
Instead, the squares of the moduli of the complex amplitudes must be formed (i.e. their squared distances from the origin in the complex plane taken), and these real numbers now play a new role as actual probabilities for the alternatives in question. Only one of the alternatives survives into the actuality of physical experience, according to the process R (called reduction of the state vector or collapse of the wavefunction; completely different from U). It is here, and only here, that the non-determinism of quantum theory makes its entry. The quantum state may be strongly argued as providing an objective picture. But it can be a complicated and even somewhat paradoxical one. When several particles are involved, quantum states can (and normally 'do') get very complicated. Individual particles then do not have 'states' on their own, but exist only in complicated 'entanglements' with other particles, referred to as correlations. When a particle in one region is 'observed' in the sense that it triggers some effect that becomes magnified to the classical level, then R must be invoked — but this apparently simultaneously affects all the other particles with which that particular particle is correlated. Experiments of the Einstein-Podolsky-Rosen (EPR) type (such as that of Aspect, in which pairs of photons are emitted in opposite directions by a quantum source, and then separately have their polarizations measured many metres apart) give clear observational substance to this puzzling, but essential fact of quantum physics: it is non-local (so that the photons in the Aspect experiment cannot be treated as separate independent entities)! If R is considered to act in an objective way (and that would seem to be implied by the objectivity of the quantum state), then the spirit of special relativity is accordingly violated. No objectively real space-time description of the (reducing) state-vector seems to exist which is consistent with the requirements of relativity!
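The squared-moduli rule described above can be illustrated numerically. This is a generic sketch, not code from Penrose's text; the two amplitudes chosen here are arbitrary illustrative values, not anything specified in the original.

```python
import math

# A two-state superposition with complex probability amplitudes.
# These particular amplitudes are illustrative assumptions only.
alpha = complex(0.6, 0.0)  # amplitude (weighting) for alternative A
beta = complex(0.0, 0.8)   # amplitude (weighting) for alternative B

# Under the process R, each amplitude's squared modulus (its squared
# distance from the origin in the complex plane) becomes a probability.
p_a = abs(alpha) ** 2
p_b = abs(beta) ** 2

print(f"P(A) = {p_a:.2f}, P(B) = {p_b:.2f}")

# For a normalized state the probabilities must sum to one.
assert math.isclose(p_a + p_b, 1.0)
```

Note that the phase of an amplitude (here beta is purely imaginary) drops out of its individual probability; phases matter only when amplitudes for indistinguishable alternatives are added before squaring, which is the source of quantum interference.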
However, the observational effects of quantum theory do not violate relativity. Note that any theory that "explained" when and where and in which direction a random event like photon emission or nuclear decay happens would change it from an indeterministic to a deterministic event! Quantum theory is silent about when and why R should actually (or appear to?) take place. Moreover, it does not, in itself, properly explain why the classical-level world 'looks' classical. 'Most' quantum states do not at all resemble classical ones! Where does all this leave us? I believe that one must strongly consider the possibility that quantum mechanics is simply wrong when applied to macroscopic bodies — or, rather, that the laws U and R supply excellent approximations, only, to some more complete, but as yet undiscovered, theory. It is the combination of these two laws together that has provided all the wonderful agreement with observation that present theory enjoys, not U alone. If the linearity of U were to extend into the macroscopic world, we should have to accept the physical reality of complex linear combinations of different positions (or of different spins, etc.) of cricket balls and the like. Common sense alone tells us that this is not the way that the world actually behaves! Cricket balls are indeed well approximated by the descriptions of classical physics. They have reasonably well-defined locations, and are not seen to be in two places at once, as the linear laws of quantum mechanics would allow them to be. If the procedures U and R are to be replaced by a more comprehensive law, then, unlike Schrödinger's equation, this new law would have to be non-linear in character (because R itself acts non-linearly). Some people object to this, quite rightly pointing out that much of the profound mathematical elegance of standard quantum theory results from its linearity.
However, I feel that it would be surprising if quantum theory were not to undergo some fundamental change in the future — to something for which this linearity would be only an approximation. There are certainly precedents for this kind of change. Newton's elegant and powerful theory of universal gravitation owed much to the fact that the forces of the theory add up in a linear way. Yet, with Einstein's general relativity, this linearity was seen to be only an (albeit excellent) approximation — and the elegance of Einstein's theory exceeds even that of Newton's!

For Scholars

Penrose, Roger, 1987, "Newton, Quantum Theory, and Reality," in 300 Years of Gravity (Cambridge: Cambridge University Press)
Penrose, Roger, 1989, The Emperor's New Mind (New York: Penguin Books)
Penrose, Roger, 1994, Shadows of the Mind (New York: Vintage)
Penrose, Roger, 1997, The Large, the Small, and the Human Mind (Cambridge: Cambridge University Press)
Materia Prima
10 December 2012

The Beginning of the End

So this is it for me and Bernard d’Espagnat’s On Physics and Philosophy. In the final chapter d’Espagnat allows himself to speculate on the philosophical and spiritual importance of his veiled reality (which he capitalizes) in particular, and the results of modern physics in general. The chapter is entitled “The Ground of Things.” It is in these concluding sections that d’Espagnat makes his final defence of a materia prima, a mind-independent reality, before the objections of both realists (who concentrate on empirical reality) and antirealists (who say mind is all). Some of those arguments say there’s no reality-in-itself, some say it exists but is inaccessible, and others say empirical reality is “reality.”

Kant vs d’Espagnat

D’Espagnat believes “the Real” is a mystery as it is (in his opinion) not accessible through discursive knowledge. He notes Immanuel Kant distinguished between phenomena and “reality-in-itself,” but disagrees with Kant that a mind-independent reality is just a boring “limiting concept” filled with “pure x.”

Cassirer vs d’Espagnat

Ernst Cassirer strongly objected to being content with a “mystery,” which he felt would be an unbearable block to scientific inquiry. D’Espagnat says when possible the search for clarity is admirable, but the true spirit of science is to follow where the facts lead it. The quantum entanglement shown in “Aspect-like experiments” (by Alain Aspect and others) is just part of our evolving scientific knowledge.

Materialists vs Mystics

Sometimes one should approach “mystery” the way mystics, poets, or composers have done (though more often in the past). Realists (materialists) have no reason to believe they hold all the keys to knowledge, even in principle.
As for the antirealists (and instrumentalists), if they think reality is something we ourselves build up, then mystery can hardly be called an exceptional “illusion.”

Affect vs Effect

The “affective” element of human existence is an aspect that seems to circumvent our rationality. Kant felt the “affective mind” was not “ordered on concepts” and therefore could shine no light on Being. D’Espagnat is more sympathetic to Descartes. Thought leads to the self-evidence of existence (“I think, therefore I am”), but d’Espagnat says just as self-evident will be our “joys and pains.” We base our conjectures on what we know most intimately, and what could be closer to us than our “affective consciousness”? This too should be able to inform us of Being, perhaps in some circumstances even better than science can.

Realism vs “the Real”

We can take a very realist position and imagine that if mankind disappeared the stars would continue in their courses. This is an argument for a mind-independent reality—just not the one d’Espagnat has in mind. D’Espagnat says just because our present existence is usually most conveniently described in realist terms (such as conventional space and time) doesn’t mean the realist position is actually true. Even particle physicists who use the realist language of minuscule points and well-defined trajectories know that’s not what’s “really” going on.

Radical Idealism vs “the Real”

On the other hand radical idealism believes there is no reality outside the mind. In other words, there’s no mind-independent reality. D’Espagnat says his earlier arguments, based either on no miracles or on intersubjective agreement (see chapter five), undermine idealism but not his veiled reality position.

Mathematical Realism vs “the Real”

Whether it’s “Pythagorism” or “Mathematical Platonism” there’s a belief that mathematical developments are discovered not created. Again, this would be a mind-independent reality, but mathematically based.
Physical reality is either grounded on a pre-existing mathematical reality or there’s some strong connection between the two. D’Espagnat reminds us that quantum formalism refers to observational predictions. It’s possible “the Real” is mathematically based, but quantum theory isn’t going to get you there.

Brains in a Vat vs “the Real”

D’Espagnat disagrees with Hilary Putnam’s thought experiment that places brains in vats. Connect electrodes to the brains and some supersmart being could send (in theory) images and other sensations directly to the brain. Putnam says a vat individual could not truthfully say, “We are brains in a vat.” That’s because his concept of a vat is based on an illusion. So there’s no connection between this particular version of a “ground of being” and our knowledge. D’Espagnat disagrees with the assumption that knowledge springs only from the senses. Also, Putnam’s imaginary statement refers to specific entities. D’Espagnat’s concept of “the Real” is “conceptually prior to any such description.”

Self-Modification vs “the Real”

Francisco Varela and collaborators proposed “enaction” theory. The brain’s main function is to modify its internal states rather than reflect the external world. External reality is neither a projection of our mind’s contents nor the source of those contents. There’s no need to imagine a “pre-given” reality. D’Espagnat faults Varela’s book for vague terminology. Does Varela mean “empirical reality” or “mind-independent reality” when he talks of “reality”? Is the “subjective” an individual’s subjectivity or intersubjectivity? D’Espagnat disagrees with Varela’s use of “secondary qualities” such as colour to make his arguments. Even Varela’s arguments about attention and perception fail to convince d’Espagnat. The mind may display selective attention but that’s a far cry from proving that mind and world somehow arise together.
Structure vs “the Real”

D’Espagnat says arguments against veiled reality will fail if they’re based on discursive (descriptive and rational) knowledge. In other words, arguments based on what structures we see or don’t see are irrelevant to “the Real” as “the Real” doesn’t have structures in the way we’re accustomed to think of them.

Buddhism vs d’Espagnat

D’Espagnat notes Varela’s frequent references to Buddhism. Buddhism speaks of “sunyata” or “emptiness” in rejecting objects in the world as intrinsically existing in the way we perceive them. Furthermore our living “selves” have no absolute existence as individuals. D’Espagnat hopes his veiled reality viewpoint will interest Buddhists, especially as there’s a pretty thick veil between consciousness and “the Real.”

Heisenberg vs “the Real”

D’Espagnat rejects Werner Heisenberg’s (posthumously published) view that empirical reality is a product of our human-made knowledge. Heisenberg felt there were various “regions of reality” such that our knowledge of biology, for instance, wasn’t entirely dependent on our knowledge of physics. Heisenberg did think there might be something that’s “truly real,” vaguely reflected upon human consciousness. However, he felt this level of reality would still be situated within ordinary space and time. It’s on that count that d’Espagnat rejects Heisenberg’s arguments as irrelevant, as “the Real” is not located in space and time.

Pro vs Con

In the end d’Espagnat finds arguments both for and against a “ground of things” dubious. You can argue against a “ground of things” but only in the sense of a “pregiven,” describable “world-per-se.” D’Espagnat finds the “pro” arguments based on “commonsense” or a pre-existing mathematical reality also unconvincing. D’Espagnat believes a “more fact-based reasoning” is called for.
Universality vs Events

D’Espagnat says over the past half-century interest in chaos and complexity led some scientists to demote scientific laws and promote the role of the “event,” previously seen as more or less accidental. He says he argued against rejecting the “universal” in a 1990 book. He’s more ambivalent about the emphasis on “events,” which he says takes place within empirical reality. That reflects the way we’re “apprehending the Real” but doesn’t mean that’s what “the Real” is all about. For instance, we don’t see objects as nonseparable, but that’s what quantum theory tells us. D’Espagnat says Edgar Morin and others in this school of thought have somewhat retreated from their emphasis on events, complexity, and disorder. Morin acknowledges that “Aspect-type” experimental results have shown some limitations in his approach.

Nominalism vs “the Real”

D’Espagnat is unimpressed with the revival of nominalism among “cultivated, literary, avant-garde people.” It’s a belief system promoted in the Middle Ages by William of Ockham and others. Nominalists nowadays reject the universal while applauding individual initiative, which they feel is a product of individual knowledge. The problem is that nominalism is an all-encompassing philosophy, referring to all things, not just living beings. The discrete atoms of classical physics have given way to “collective modes of existence.” And again such arguments apply to empirical reality not “the Real.”

D’Espagnat vs the “Enlightenment”

D’Espagnat believes many sophisticated members of society are still enthralled by outmoded ideas of the “Enlightenment” (d’Espagnat’s quotes). D’Espagnat acknowledges that research on chaos and events may eventually back nominalism. However, he thinks quantum theory and inseparability will win out.
Spinoza vs “the Real”

D’Espagnat cautions against thinking Spinoza was a committed materialist when he talked of “God, in other words, nature.” Although Spinoza’s natura naturans sounds like “the Real,” his natura naturata sounds like phenomena. D’Espagnat does not agree there’s a willful, personal God behind all this, however. Veiled reality is not “intelligible,” unlike Spinoza’s view of Substance.

Phenomenology vs “the Real”

Classical physics introduced mechanical, then mathematical, idealizations of objects. How things supposedly “really are” became separated from our “direct experience.” Quantum theory reintroduced a role for the human mind to account for our experiences. In some ways quantum theory reinforces phenomenology. Phenomenology sees an act of creativity in the human mind. It takes various pieces of sensation and constructs some entity that shares these qualities. However, on some level the source of these sensations still independently exists. Quantum theory states that some physical quantities can only be observed through human intervention, thus undermining phenomenology’s belief in independently existing sources of phenomena.

Modern “Sages” vs “the Real”

In “developed” societies there are “sages” who take rather contradictory views. They say there is a reality independent of us. But they also say it’s “obvious” we rely on our perceptions to gain access to that world. So they conclude it is illogical to speak of an “unreachable” reality. We should make only statements relying on sense data or tautologies (statements that are always logically true). However, d’Espagnat continues to oppose the view that our perceptions necessarily reflect reality as it really is. Our modern “sages” try to combine realism and positivism, converting “reality-per-se” into “observation-per-se.” But there is no “observation-per-se” as observations involve human intention and selection.
The Describable vs “the Real”

If we reject the materialists’ rejection of “the Real,” does that force us into the camp of the radical idealists? D’Espagnat says we shouldn’t confuse “the Real” and “the describable.” First, existence takes precedence over knowledge. Secondly, there is something that says “no” to any arbitrary constructions of reality. Third, it’s hard to imagine an “a priori” that evolves. And fourthly, there are universal laws that make predictions, and it’s hard to envision how laws could do so unless you believe in miracles. Even Michel Bitbol and Hervé Zwirn have not entirely rejected the concept of “the Real” even as they critique it. D’Espagnat says thinkers should avoid pushing deductive reasoning into areas where it may not strictly apply. As a sidenote, d’Espagnat says classical instrumentalism believes a concept’s meaning and “reference,” the collection of data about the concept, are the same. Even if you replace “data” with “prediction” it’s not a universal position as predictions require a predictor. And that predictor is some being who’s doing the predicting.

Laws vs “the Real”

Bitbol and Zwirn may move a bit toward Platonism when they acknowledge something may constrain us that is not entirely attributable to us. However, they believe this “something” is totally inaccessible. D’Espagnat disagrees, and thinks Plato would disagree too. “The Real” must have some influence on empirical reality’s structure as Maxwell’s laws (for instance) are obeyed by phenomena. D’Espagnat’s “extended causality” links not instances of phenomena but rather phenomena and “the Real.” These structural “extended causes” move beyond Kantian causality and recall Plato’s Ideas.

Structures vs Hints of Structures

D’Espagnat says “the Real” is prior to mind-matter splitting, so the mind may detect hints of the mind’s source, which is “the Real.” That veiled reality is not the same as the underlying reality described by structural realism.
D’Espagnat says mind-independent reality is not the source of our physical laws. At best these laws are distortions of the “great structures” of “the Real.” At worst they’re just very obscure “traces.” In the end “the Real” isn’t describable, indescribable, or partly describable. The first two options imply a total presence or lack of description, and the third option implies “the Real” has parts, which isn’t the case, says d’Espagnat.

Conceptualization vs Meaning

If “the Real” can’t be conceptualized can it have any “meaning”? D’Espagnat cites Zwirn’s argument imagining a creature as far ahead of humans as humans are ahead of dogs or monkeys. We can conceptualize things that dogs or monkeys can’t, so surely a superhuman being could conceptualize things we can’t. D’Espagnat believes that poets can allude to things that we somehow know exist even if these concepts can’t be made explicit.

Plato’s Cave vs “the Real”

At first glance Plato’s Cave approximates d’Espagnat’s view of veiled reality. It suggests the emergence of (shadowy) empirical reality (seen in the cave) from “the Real” (the porters who place their Platonic Ideas in front of the light). However, the fable doesn’t deal with how consciousness (the prisoners) would have emerged from “the Real.” Furthermore, “the Real” cannot be separated into parts (while the porters hold separate objects). We cannot conceptualize “the Real” yet Plato conceptualized his Ideas. Finally, even without prisoners there’d still be the shadows, while in d’Espagnat’s system phenomena would exist only in relation to consciousness.

Traditional Thought vs “the Real”

D’Espagnat warns against a syncretism of old cultural elements and new philosophical points, but he wonders if “the Real” has any bearing on traditional systems. Religions speak of an “immortality,” which suggests some absolute time that physics no longer can support. However, perhaps the other term “eternity” suggests escaping this illusory time.
And perhaps there is a “continuous creation” of Being in a process independent of time.

Heisenberg vs d’Espagnat

Heisenberg, says d’Espagnat, doubted thought could illuminate deep matters as (according to Heisenberg) thought returns to its source. But d’Espagnat notes that new science has allowed us to move past old science’s viewpoints, such as materialism. So thought has been able to illuminate some deep matters.

Aristotle vs d’Espagnat

D’Espagnat sees similarities between his view of causality and Aristotle’s. Aristotle was a realist and was concerned with causality not just in the realm of phenomena but in “reality-in-itself.” Furthermore, Aristotle was not beholden to the idea that causes precede effects. Instead there could be “final causes” to which things might tend under the influence of Aristotle’s God. As d’Espagnat’s veiled reality is beyond time, “the Real” could impart such a “final cause” on empirical reality. Also, Aristotle’s interest in causation beyond mere phenomena reminds d’Espagnat of his own interest in causation between “the Real” and empirical reality. Aristotle distinguished between “power” and “act” while Newton supposedly saw just “act.” Aristotle saw matter as the seat of a vague potentiality. Materia prima is pure potentiality. “Informed matter” exists on more and more complex levels. Simple beings can be the “matter” for more complex beings. These complex beings in this process are more “real” as their potentiality is expressed. Therefore the deep meaning of reality lies not in the tiny components of complex beings, but rather the meaning is the complex beings themselves. In a similar fashion, in empirical reality the wave functions have an “epistemological reality” at a lower level than, say, macroscopic objects in the wake of decoherence. Although Heisenberg did not cite decoherence he did ponder the possible role of wave functions as a “materia prima.” Abner Shimony went on exploring this issue.
However, they’ve both admitted it’s hard to formulate these ideas precisely.

Plato vs d’Espagnat

As for Plato, d’Espagnat reminds us of his earlier concerns about Plato’s Cave. However, for Plato the deeper meaning was not in the things themselves. They didn’t reside just in “us” either. He wasn’t a radical idealist. Platonic Ideas (and his concept of the “Good”) bear resemblance to “the Real.” However, Platonic Ideas are conceptualizable while “the Real” is not. Many scientists still believe that analyzing more and more sense data will get us closer to the deeper meaning of reality. However, advances in science have relied on a “rapprochement” between science and a philosophical position (Platonism) that questions such a program. D’Espagnat notes that “Platonism” is a term nowadays often interpreted as “Pythagorism” with real mathematical objects. D’Espagnat does not agree with “Pythagorism,” but notes that there’s some relationship between it and Platonism. Even veiled reality has a smidgeon of Pythagorism in it as empirical reality’s objects are somehow a dim reflection of “the Real.”

Einstein vs d’Espagnat

Albert Einstein appears to have believed “the Real” could in principle be apprehended in its details, even if in practice that was rarely possible. However, the goal remained to explore this deeper world by discovering universal laws. Einstein also believed in three levels of religious experience. The first was based on fear, the second morals, and the third transcends ordinary human views of God. At this third level, Einstein thought, a sublime order is reflected in nature and in thought. Even scientific materialists no longer believe the common materialism that the mass media disseminates. However, there have also been developments that make us question some of Einstein’s philosophical positions. D’Espagnat sees some compatibility between his views and Einstein’s even if Pythagorism doesn’t have to be entirely correct.
“The Real” does not have to be totally intelligible. The human mind may tend toward the structures and qualities of “the Real” in the sense that Max Planck had a strong affective experience in his theoretical work. It’s not necessary that mathematics reveals everything about “the Real.” Rather, as long as we have some concept of “the Real” that we can tend to, the structures and qualities of the mind may be drawn to it even as it never fully understands it due to the mind’s limitations.

The Spiritual vs the Scientific

Maybe this idea is closer to Einstein’s third-level religious experience than to a completely knowable “Real.” The human mind tends toward quest and exploration, though it is never able to fully accomplish what it desires. Einstein was still grounded in physical materialism. Later developments in physics have shown us something more human-oriented. We can’t limit Being to just material components. The mind may somehow “recall” aspects of Being as consciousness is not just a product of matter. Archetypes of some of our feelings may lie with “the Real.” There’s no way to prove this, or disprove this. But crucially we can no longer see science as an impediment to the “spiritual impetus that moves mankind,” an impetus, according to Einstein, that makes us desire to live “the whole of what is.” And it is an impetus that possesses both unity and meaning.

Making an Appearance
8 December 2012

Mind the Details

Bernard d’Espagnat delves into finer and finer distinctions between his veiled reality position and similar (though not identical) views. The eighteenth chapter of his On Physics and Philosophy is entitled “Objects and Philosophy,” and there’s only one chapter to go after this.

Philosophers vs Consciousness Researchers

D’Espagnat says he takes mostly a philosophical approach in this book.
Philosophers question the basis of our reality while consciousness researchers (such as neurologists) take physical realism as a given (whether they’re conscious of this or not).

Mind vs Reality

Radical idealists, who think mind is “primeval,” may wonder about the relationship between mind and “basic reality.” Supporters of d’Espagnat’s “veiled reality” or “open realism” approach are even more motivated to investigate.

Truth vs Reality

A physical realist can say that a true statement is “adequate to what reality really is.” This is the “similitude theory” of truth.

Reality vs Representations

But if we don’t have access to reality as it “really is” then we might say we have access only to “human representations” of “the Real.” Instead of worrying about whether statements are true to reality you might worry more about the verifiability of statements.

Knowable vs Unknowable Reality

Another problem with the “similitude” approach is that quantum mechanics, the best model of the world we have, fundamentally deals with observational probabilities not plain and simple facts. Even resorting to a de Broglie-Bohm approach doesn’t help as “hidden variables” will be inaccessible to the observer even in principle.

Idealism vs Veiled Reality

A radical idealist or Kantian rejects the similitude approach anyway. A supporter of the veiled reality approach has to take a somewhat nuanced tack. Very broad statements about physical constants or “existences prior to knowledge” may hint at “the Real” without claiming to say anything directly about “the Real” as it “really is.”

Appearances vs Veiled Reality

If we’re not supposed to trust in “appearances” then what is reality really like? We might think that “the Real” is just an updated version of “appearances.” Or maybe mind-independent reality is so independent that it’s entirely inaccessible. D’Espagnat says both approaches are too extreme.
Causal Links vs Predictive Laws

We like our ordinary, everyday version of “realism” because it lets us imagine particular cause-and-effect relationships. It’s easier to explain things when we can point to particular causes rather than just patterns of observational predictions. D’Espagnat says some causal links are genuine and independent of us, but our interpretation of these links is very much our own. For instance, causality is closely related in our minds to the notion of “will,” which entails a very anthropomorphic (human-centred) view of reality.

Intersubjective Agreement vs Appearances

But what if a group of humans (and maybe even non-humans!) agree on certain observations? D’Espagnat says that this agreement combined with rules of observational prediction mean this is our “reality.” Saying they’re just “appearances” is misleading. It’s a kind of “reality.” However, modern physics reminds us that humans tend to “reify” (think of the world as a set of objects). So we still have to keep in mind that empirical reality is not the same as “the Real.”

Empirical Reality vs Mind-Independent Reality

Although d’Espagnat is comfortable with the term “reality” to describe our empirical reality, he says we have to remember these are two “orders” or “levels” of reality. Empirical reality isn’t just a mere variant on “the Real.”

Identity Theory vs Efflorescence Theory

In some of the more nuanced sections of the chapter d’Espagnat makes a distinction between identity theory and efflorescence theory. Identity theory states that a genuine sensation or awareness (perhaps even thought in general) is traceable to neurons or their components. The material aspect of these neurons is the ultimate cause of our sensations. Efflorescence theory attributes sensations and awareness to “neuronal activity” rather than the material aspects of neurons or their components.

Strong vs Weak Completeness

D’Espagnat’s main line of attack against identity theory is the completeness principle.
In its strong version, quantum mechanics is assumed to be able to describe anything at all. In its weak version, if any theory can describe something then quantum mechanics can do so as well. This leaves open the concept of hidden variables. Since quantum mechanics is antirealist it’s hard to imagine how the strong completeness principle is compatible with identity theory. Even if you take the weak version of the completeness principle all you can conclude is that the identity theory may be true—but we can never show it to be so. But what if you reject the completeness principle entirely? If you used the de Broglie-Bohm model you’d still have to deal with an entangled wave function, so sensations can’t be attributed just to some limited coordinates of a particular neuron. Or you can take the Roger Penrose approach by adding nonlinear terms to the Schrödinger equation. D’Espagnat says that approach may work, but he finds it too ad hoc. It’s also work still at an early stage, yet to face the scrutiny a full theory would need to endure.

Brain vs Neuron States

Now, efflorescence theory relies on neuronal activity not the material aspects of neurons to explain sensation, awareness, and (perhaps) thought itself. But neurologists believe brain states not neuronal states are what drives awareness. You can’t pinpoint a particular neuron or group of neurons that are responsible. It’s the collective action spread across the brain that is associated with awareness. D’Espagnat notes the parallel to quantum entanglement.

Protomentality vs Mentality

Alfred North Whitehead and other thinkers in the past have wondered whether simple organisms or even inorganic entities can have awareness. Abner Shimony’s “potentiality” might satisfy some objections to this concept of protomentality. Various entities have the potentiality of consciousness, but this potentiality isn’t actualized unless a nervous system is present.
Consciousness vs Components of Consciousness

As a final objection to the efflorescence theory, d’Espagnat says that any component we cite will be part of our empirical reality. Empirical reality depends on our consciousness. Therefore how can something that depends on our consciousness be the cause of our consciousness?

D’Espagnat vs The “Received” View

The “received” view that thought is produced by matter is, according to d’Espagnat, “slightly useful” as a model but must be rejected as a plausible philosophical stance.

Relative Quantum States vs Relative Consciousness

Because the observer decides what to measure and how, quantum states are “relative” to these procedures. However, some quantum rules may be considered “in isolation.” They’re not predictive observational rules and hence don’t involve probability. They’re more like descriptions. However, to understand the quantum world you have to consider all quantum rules not just pick and choose the non-probabilistic ones. D’Espagnat says states of consciousness are somewhat similar.

Definite vs Indefinite States of Consciousness

Imagine a sealed-off laboratory. Paul makes a measurement. His state of consciousness is definite but Peter doesn’t know that until Paul, say, phones him with the measurement. This is a version of Wigner’s friend, and can be extended over and over again, with an observer outside a sealed room, which contains an observer outside a sealed room, etc. Peter thinks Paul’s state of consciousness is not just unknown (before the phone call) but also undefined. It’s a superposition of possible results (pointer values, for instance). Yet once Paul makes the measurement, Paul’s state of consciousness is definite from Paul’s point of view.

Consciousness vs The Absolute

This apparent conflict doesn’t change the fact that physics is all about predicting observations, says d’Espagnat. However, there’s a related issue.
We shouldn’t think that “predictive states of consciousness” are like some Absolute or can even be a substitute for the Absolute. Quantum states are relative, and so are states of consciousness. More precisely, states of consciousness that are predictive are relative.

Physical vs Mental

So we see some sort of “solidarity” between the physical and the mental, but that doesn’t mean the mental can be reduced to the physical.

Wigner’s Friends vs Ultimate Reality

The series of “Wigner’s friends” who occupy increasingly large rooms is suggestive of an ultimate reality that we cannot gain access to. Wigner’s friends don’t have access to the overall wave function.

Predictive vs Non-Predictive Consciousness

However, nothing prevents us from pondering non-predictive states of consciousness. When Paul makes the observation, his state of consciousness becomes well-defined. It’s no longer predictive.

Veiled Reality vs Co-Emergence

Michel Bitbol, Hervé Zwirn, and other authors speak of thought and empirical reality “co-emerging” at the same time. It’s a “self-qualifying” process by which structure emerges from an initial and total lack of structure. D’Espagnat says his veiled reality viewpoint has an “ultimate ground” endowed with general structures even if they are “far from being knowable.” This ultimate ground may form the basis for not just scientific laws but also creative and mystical endeavours.

Emergence vs Non-Emergence

So, according to d’Espagnat, structures emerge but don’t co-emerge. They pre-exist. Co-emergence serves merely to connect consciousness and empirical (not ultimate) reality. D’Espagnat acknowledges that in the past he has talked of consciousness and empirical reality existing “in virtue of one another.” This does not mean that empirical reality emerged from consciousness. Furthermore, these words are meant to be evocative rather than a precise philosophical statement.
He reiterates the impossibility of appearances, which depend on consciousness, somehow creating consciousness.

Indexed vs Non-Indexed States of Consciousness

Adopting Bitbol’s terminology, d’Espagnat says some beings may possess non-indexed states of consciousness. That means these states of consciousness are not relative to any particular experimental setup. However, these states of consciousness must therefore be non-predictive.

Microscopic vs Macroscopic

An idealized miniature version of a being would be too small to interact with the environment to become predictive. In the intermediate state between microscopic and macroscopic, such beings could accurately predict one class of observations but would wrongly predict another class of observations. For macroscopic beings that first class of observations would still be correctly predictable but the second class of observations would be essentially impossible. The practically possible observations are conveniently describable in realist language, while the practically impossible observations are not. So if we want to talk about co-emergence then we should imagine the co-emergence of “public and predictive” states of consciousness and empirical, physical reality. This co-emergence is constrained by the class of observations that macroscopic beings can perform. Co-emergence draws from a mind-independent reality that presumably, according to d’Espagnat, is beyond intersubjective description. And returning to the idea of potentiality, d’Espagnat says that in moving from the microscopic to the macroscopic the “ontological potentiality” of consciousness becomes empirical actuality. “The Real” is not in itself thought, but can give rise to thought.

One World vs Many Egos

There appears to be one universe but many minds. Radical idealists have trouble reconciling this situation. Schrödinger called this the “arithmetical paradox” and proposed two solutions.
There’s “Leibniz’ fearful doctrine of monads,” and there’s the belief that the multiplicity is only apparent. Schrödinger preferred the second approach, akin to the Upanishads, which states there is unity behind the illusion.

Veiled Reality vs Radical Idealism

The multiple-room experimental setup showed that predictive states of consciousness are relative. It’s hard to see how all those observers could be part of just one mind. However, perhaps various observers are making mutually compatible observations, calculable using the general Born rule. This is the same as one observer making simultaneous measurements. This sounds compatible with Schrödinger’s viewpoint. However, that doesn’t solve the problem of the observer in that sealed-off inner room. It also doesn’t take decoherence into account. On the other hand, this decoherence also hides any theoretical possibility of discovering contradictions between multiple minds and the quantum structure of physical laws. D’Espagnat thinks more work needs to be done on this issue.

Traces of the Real

18 November 2012

Traces of Reality

The Process of Elimination

Bernard d’Espagnat gets ever deeper into familiar, and largely friendly, territory. It’s a chapter about large agreements and small disagreements, as these particular critics seem to agree as much as disagree with him. His major challengers and comrades will be Michel Bitbol and (less prominently) Hervé Zwirn.

Form vs Content

D’Espagnat first examines Bitbol’s “verbal issues” and questions about d’Espagnat’s logical arguments. Then he moves on to more substantive issues.

Veiled Reality vs Dualism

Bitbol suspects “Veiled Reality” is dualistic. Classically, dualism means there’s mind and there’s matter, though even in Descartes’ time philosophers puzzled over how the two could interact. Materialists later on would say mind is just a manifestation of matter, but d’Espagnat says Bitbol isn’t a materialist.
D’Espagnat says if Bitbol’s objection is about interactions then he’s got it wrong. D’Espagnat says he doesn’t believe mind and matter are the building blocks of “reality as it really is.” Instead mind and matter emerge from the ground reality, an “Independent Reality.” Coming from the same source, mind and matter aren’t fundamentally split from each other.

“Veiled Reality” vs Veiled Reality

Next is the issue of whether the term “Veiled Reality” is misleading. Although d’Espagnat admits the term might suggest a world of objects behind some veil, he said it’s simply hard to compress the concept into two words. He admits he used to prefer a “non-watered-down structural realism,” but since then he’s undergone an “evolution” rather than a “revolution.”

Objectivist Language vs Objectivist Philosophy

D’Espagnat says it’s convenient to talk about an instrument dial pointing to a particular spot. But objectivist language is a matter of convenience; it’s not to be taken literally. D’Espagnat uses that kind of language to talk about “impressions,” not events independent of “the existence of thought.” In any event, it seems Bitbol acknowledged the misinterpretation and moved on. D’Espagnat’s approach is an “essentially negative approach” of showing what capital-R “Reality” can’t be: plural, atomistic, embedded in space-time, for instance. He says Bitbol eventually realized this about d’Espagnat’s position.

Broglie-Bohm vs Dualism

D’Espagnat says the Broglie-Bohm models are logically consistent and follow a mostly “classically dualistic conception.” But the subject still isn’t “face to face” with the world, as there are hidden variables and a “Universal nonseparable wave function.” Hence Broglie-Bohm isn’t a fully classical dualism.

“A Priori” Dualism vs Observed Dualism

Kant used “a priori” arguments to support his “thing-in-itself.” But can we use the data of modern physics instead, as d’Espagnat has done?
Bitbol said d’Espagnat based his arguments not on quantum mechanics in general. Rather, he based them on a particular interpretation, one that rejects hidden variables. D’Espagnat says he’s made no secret of that. Science chooses among various explanations and tends to be wary of “an all-powerful Zeus, for example.” Bitbol calls these factors “ampliative” criteria. And even Bitbol acknowledges that Bohm’s theories lead to a “crisis in atomism” with their “nonlocality and contextuality.” D’Espagnat says nonlocality doesn’t force the “thing-in-itself” to be inaccessible. But it undermines the hope that “the Real” can be progressively unveiled.

Knowledge Of vs Knowledge About

Bitbol complained that d’Espagnat’s book Veiled Reality talked about “Independent Reality” as “something.” But wouldn’t that make this supposedly independent reality an empirical reality? D’Espagnat says he was careful to say the data would have “something to do” with this reality. It would be knowledge about this reality, but not knowledge of this reality.

To Sketch vs Not to Sketch

Bitbol says Kantian and neo-Kantian philosophers would object to the idea that an “Independent Reality” is “prestructed.” D’Espagnat says Bitbol needs to do more than just cite possible objectors. He needs to present an actual argument. Bitbol says d’Espagnat is implying observed phenomena let one “sketch” various features of “Independent Reality.” D’Espagnat says that to talk about sketching is misleading. By giving up on the “locality principle” he’s also giving up on “sketching” Independent Reality. Nonetheless d’Espagnat acknowledges that it’s not just a process of elimination.
He does conjecture that observational data may “in a distorted and incomprehensible way” somehow reflect some structures of “the Real.”

Reflected Reality vs Reflected Thought

Bitbol wonders if maybe predictive laws should be considered “distorted reflections” of our own mental contributions rather than of some “Independent Reality.” D’Espagnat says it’s important to distinguish between what’s sufficient and what’s necessary. Of course “our perceptive” context is important, but probably not enough to produce those perceptions. Furthermore, anyone can come up with an interesting principle and follow its consequences. One can choose to believe all connections between perception and reality are illusory. But that doesn’t mean you’ve proved your case. In the end d’Espagnat remains confident of his “prestructure” hypothesis, though it’s “but a plausible and admittedly unverifiable conjecture.”

Evidence vs Other Factors

Bitbol and Zwirn also wondered if one theory could be replaced by another for reasons other than evidence. D’Espagnat replies that you can’t tweak a “realist local theory” and make it work. Nonlocality isn’t nudging one theory out of the way—it’s demanding a different theory. If Zwirn and Bitbol believe perceptions come solely from us, then we could believe in an experimentally refuted theory. This may be somehow “rational” but a scientist won’t follow such a path that undermines “science and empirical knowledge in general.” Bitbol proposes some kind of transformation groups that would explain our sensory data’s “structural invariants.” D’Espagnat thinks the analogy from group theory is inexact. In any event, it’s not particularly interesting that nonlocality could appear in some “acceptable realist theory.” What’s important is that it tells us we can never use a local realist theory to explain all of our observed data.
Nonseparability vs Unity

D’Espagnat admits he went too far in saying the nonseparability of Independent Reality implies some kind of unity in that Independent Reality. He agrees with Bitbol that this statement demands a principle of the excluded middle such that rational categories cover all that is possible. The transcendent may not be so intelligible. Instead of Plotinus’s “One” we should think of a unity that is “the absolutely inexpressible” (pantè aporeton). This view is still consistent with the (unprovable) view that “poetry, music, painting etc.” may provide us with glimpses of “the Real.” Similarly, physical laws and their mathematical structure may be some sort of “traces” of an underlying structure. Nonetheless the connection between those traces and that structure “may well be undecipherable.” This is definitely less than what “structural realism” would expect.

Critic vs Critiqued

D’Espagnat turns from being critiqued to critiquing Bitbol and Zwirn. He doesn’t see how replacing a static “a priori” with a functional one improves matters. Either way, how do you explain how Newton’s law of gravity ended up with its precise form? D’Espagnat says it “partakes very much of utopia” to expect formalism to overcome observation, which is what he thinks Bitbol believes. An all-encompassing theory of symmetries and so on is unlikely to render it immune to experimental contradiction. Furthermore, quantum theory’s axioms (a framework theory) may someday be justifiable just on their formal basis. But those axioms form the basis of quantum theories, and these are theories “in the ordinary sense.” And it’s those ordinary theories that provide the evidence against locality, for instance. Because “all men, all civilizations” share the intuition of a reality outside of us, d’Espagnat is willing to give up on Independent Reality only if it’s proved false. And it’s a conjecture that can’t be proved false.
Maybe some day a conjecture of greater plausibility will supplant the concept of an Independent Reality. For now, Bitbol’s conjecture doesn’t do that. Bitbol is reverting to a medieval approach of arguing from the general to the specific, says d’Espagnat. As for Zwirn, d’Espagnat heartily approves of his analysis of modern science’s conceptual challenges. D’Espagnat believes Zwirn commits some minor errors in summarizing d’Espagnat’s approach. It’s not based on “structural realism,” as Zwirn seems to imply. However, these aren’t a big deal, and the two thinkers agree on much, says d’Espagnat. In fact, he says Zwirn may have come up with an even more detailed version of “Veiled Reality” than he has.
Tuesday, November 28, 2006

Chemistry is often called the "central science" because it connects other sciences, such as physics, material science, nanotechnology, biology, pharmacy, medicine, bioinformatics, and geology.[2] These connections are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. For example, physical chemistry involves applying the principles of physics to materials at the atomic and molecular level.

Chemistry pertains to the interactions of matter. These interactions may be between two material substances or between matter and energy, especially in conjunction with the First Law of Thermodynamics. Traditional chemistry involves interactions between substances in chemical reactions, where one or more substances become one or more other substances. Sometimes these reactions are driven by energetic (enthalpic) considerations, such as when two highly energetic substances such as elemental hydrogen and oxygen react to form the less energetic substance water. Chemical reactions may be facilitated by a catalyst, which is generally another chemical substance present within the reaction media but unconsumed (such as sulfuric acid catalyzing the electrolysis of water) or a non-material phenomenon (such as electromagnetic radiation in photochemical reactions). Traditional chemistry also deals with the analysis of chemicals both in and apart from a reaction, as in spectroscopy.

All ordinary matter consists of atoms or the subatomic components that make up atoms: protons, electrons, and neutrons. Atoms may be combined to produce more complex forms of matter such as ions, molecules or crystals. The structure of the world we commonly experience and the properties of the matter we commonly interact with are determined by properties of chemical substances and their interactions. Steel is harder than iron because its atoms are bound together in a more rigid crystalline lattice.
Wood burns or undergoes rapid oxidation because it can react spontaneously with oxygen in a chemical reaction above a certain temperature.

Substances tend to be classified in terms of their energy or phase as well as their chemical compositions. The three phases of matter at low energy are solid, liquid and gas. Solids have fixed structures at room temperature which can resist gravity and other weak forces attempting to rearrange them, due to their tight bonds. Liquids have limited bonds, with no structure, and flow with gravity. Gases have no bonds and act as free particles. Another way to view the three phases is by volume and shape: roughly speaking, solids have fixed volume and shape, liquids have fixed volume but no fixed shape, and gases have neither fixed volume nor fixed shape.

Water (H2O) is a liquid at room temperature because its molecules are bound by intermolecular forces called hydrogen bonds. Hydrogen sulfide (H2S), on the other hand, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions. The hydrogen bonds in water have enough energy to keep the water molecules from separating from each other but not from sliding around, making it a liquid at temperatures between 0 °C and 100 °C at sea level. Lowering the temperature or energy further allows for a tighter organization to form, creating a solid, and releasing energy. Increasing the energy (see heat of fusion) will melt the ice, although the temperature will not change until all the ice is melted. Increasing the temperature of the water will eventually cause boiling (see heat of vaporization) when there is enough energy to overcome the polar attractions between individual water molecules (100 °C at 1 atmosphere of pressure), allowing the H2O molecules to disperse enough to be a gas. Note that in each case there is energy required to overcome the intermolecular attractions and thus allow the molecules to move away from each other.
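The latent-heat bookkeeping described above can be made concrete with a short calculation. This is a minimal sketch using standard textbook values for water (heat of fusion ≈ 334 kJ/kg, specific heat ≈ 4.186 kJ/(kg·K), heat of vaporization ≈ 2256 kJ/kg); the function name is our own.

```python
# Energy needed to take a mass of ice at 0 C all the way to steam at 100 C.
# Constants are standard approximate values for water.
L_FUSION = 334.0         # kJ/kg, heat of fusion of ice at 0 C
C_WATER = 4.186          # kJ/(kg*K), specific heat of liquid water
L_VAPORIZATION = 2256.0  # kJ/kg, heat of vaporization at 100 C

def energy_ice_to_steam(mass_kg):
    melt = mass_kg * L_FUSION           # melt the ice (temperature stays at 0 C)
    heat = mass_kg * C_WATER * 100.0    # warm the liquid from 0 C to 100 C
    boil = mass_kg * L_VAPORIZATION     # vaporize (temperature stays at 100 C)
    return melt + heat + boil           # total energy in kJ

print(round(energy_ice_to_steam(1.0), 1))  # -> 3008.6 (kJ per kilogram)
```

Note how the vaporization term dominates: breaking the hydrogen bonds entirely costs far more energy than warming the liquid.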
Scientists who study chemistry are known as chemists. Most chemists specialize in one or more sub-disciplines. The chemistry taught at the high school or early college level is often called "general chemistry" and is intended to be an introduction to a wide variety of fundamental concepts and to give the student the tools to continue on to more advanced subjects. Many concepts presented at this level are often incomplete and technically inaccurate, yet they are of extraordinary utility. Chemists regularly use these simple, elegant tools and explanations in their work because they have been proven to accurately model a very wide array of chemical reactivity, are generally sufficient, and more precise solutions may be prohibitively difficult to obtain.

History of chemistry

The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. Alchemy was practiced by many cultures throughout history and often contained a mixture of philosophy, mysticism, and protoscience (see Alchemy). Alchemists discovered many chemical processes that led to the development of modern chemistry. As history progressed, the more notable alchemists (esp. Geber and Paracelsus) evolved alchemy away from philosophy and mysticism and developed more systematic and scientific approaches. The first alchemist considered to apply the scientific method to alchemy and to distinguish chemistry from alchemy was Robert Boyle (1627–1691); however, chemistry as we know it today was invented by Antoine Lavoisier with his law of conservation of mass in 1783.
The discovery of the chemical elements has a long history, culminating in the creation of the periodic table of the chemical elements by Dmitri Mendeleyev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery in the past 100 years. In the early part of the 20th century the subatomic nature of atoms was revealed, and the science of quantum mechanics began to explain the physical nature of the chemical bond. By the mid 20th century chemistry had developed to the point of being able to understand and predict aspects of biology, spawning the field of biochemistry.

• Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Essentially, from reductionism, theoretical chemistry is just physics, just like fundamental biology is just chemistry and physics.

Other fields include Astrochemistry, Atmospheric chemistry, Chemical Engineering, Chemo-informatics, Electrochemistry, Environmental chemistry, Flow chemistry, Geochemistry, Green chemistry, History of chemistry, Materials science, Medicinal chemistry, Molecular Biology, Molecular genetics, Nanotechnology, Organometallic chemistry, Petrochemistry, Pharmacology, Photochemistry, Phytochemistry, Polymer chemistry, Solid-state chemistry, Sonochemistry, Supramolecular chemistry, Surface chemistry, and Thermochemistry.

Fundamental concepts

The most convenient presentation of the chemical elements is in the periodic table of the chemical elements, which groups elements by atomic number.
Due to its ingenious arrangement, groups, or columns, and periods, or rows, of elements in the table either share several chemical properties or follow a certain trend in characteristics such as atomic radius, electronegativity, electron affinity, etc. Lists of the elements by name, by symbol, and by atomic number are also available. In addition, several isotopes of an element may exist.

An ion is a charged species, or an atom or a molecule that has lost or gained one or more electrons. Positively charged cations (e.g. sodium cation Na+) and negatively charged anions (e.g. chloride Cl−) can form neutral salts (e.g. sodium chloride NaCl). Examples of polyatomic ions that do not split up during acid-base reactions are hydroxide (OH−) and phosphate (PO43−).

A compound is a substance with a fixed ratio of chemical elements which determines the composition, and a particular organization which determines chemical properties. For example, water is a compound containing hydrogen and oxygen in the ratio of two to one, with the oxygen between the hydrogens, and an angle of 104.5° between them. Compounds are formed and interconverted by chemical reactions.

Chemical bond

A chemical bond is the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, Valence Bond Theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to predict molecular structure and composition. Similarly, theories from classical physics can be used to predict many ionic structures.
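The "fixed ratio" definition of a compound lends itself to a quick worked example: the molar mass of a compound is just the sum of its elements' atomic masses weighted by that ratio. The atomic masses below are standard values; the dictionary layout and function name are our own illustration.

```python
# Molar mass from a compound's fixed element ratio.
# Atomic masses in g/mol (standard approximate values).
ATOMIC_MASS = {"H": 1.008, "O": 15.999, "Na": 22.990, "Cl": 35.45}

def molar_mass(formula):
    """formula: dict mapping element symbol -> number of atoms."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

water = {"H": 2, "O": 1}    # hydrogen and oxygen in a two-to-one ratio
salt = {"Na": 1, "Cl": 1}   # sodium chloride, a neutral salt of Na+ and Cl-

print(round(molar_mass(water), 3))  # -> 18.015 (g/mol)
```

The same ratio-based bookkeeping underlies stoichiometry: converting between grams and moles of reactants and products in a balanced reaction.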
With more complicated compounds, such as metal complexes, valence bond theory fails and alternative approaches, primarily based on principles of quantum chemistry such as the molecular orbital theory, are necessary. See diagram on electronic orbitals.

States of matter

Chemical reactions

A chemical reaction is a process that results in the interconversion of chemical substances. Such reactions can result in molecules attaching to each other to form larger molecules, molecules breaking apart to form two or more smaller molecules, or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. For example, substances that react with oxygen to produce other substances are said to undergo oxidation; similarly a group of substances called acids or alkalis can react with one another to neutralize each other's effect, a phenomenon known as neutralization. Substances can also be dissociated or synthesized from other substances by various different chemical processes.

Quantum chemistry

Solutions of the Schrödinger equation for the hydrogen atom give the form of the wave function for atomic orbitals, and the relative energies of, say, the 1s, 2s, 2p and 3s orbitals. The orbital approximation can be used to understand the other atoms, e.g. helium, lithium and carbon.

Chemical Laws

Interpersonal chemistry

In the fields of sociology, behavioral psychology, and evolutionary psychology, with specific reference to intimate relationships or romantic relationships, interpersonal chemistry is a reaction between two people or the spontaneous reaction of two people to each other, especially a mutual sense of attraction or understanding.[4] In a colloquial sense, it is often intuited that people can have either good chemistry or bad chemistry together.
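The relative energies of hydrogen's orbitals mentioned above follow the standard result E_n = −13.6 eV / n², which depends only on the principal quantum number n. A short sketch (the function name is ours; 13.6 eV is the standard Rydberg energy):

```python
# Hydrogen-atom energy levels from the Schrodinger equation:
# E_n = -13.6 eV / n^2, where n is the principal quantum number.
RYDBERG_EV = 13.6  # ionization energy of hydrogen, in electron-volts

def orbital_energy(n):
    """Energy of a hydrogen orbital with principal quantum number n, in eV."""
    return -RYDBERG_EV / n**2

for label, n in [("1s", 1), ("2s", 2), ("2p", 2), ("3s", 3)]:
    print(f"{label}: {orbital_energy(n):.2f} eV")
```

Notice that 2s and 2p share n = 2 and therefore the same energy: in hydrogen these orbitals are degenerate, differing in shape but not in energy.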
Other related terms are team chemistry, a phrase often used in sports, and business chemistry, as between two companies.[5] Recent developments in neurochemistry have begun to shed light on the nature of the "chemistry of love", in terms of measurable changes in neurotransmitters such as oxytocin, serotonin, and dopamine.

The word chemistry comes from the earlier study of alchemy, which is basically the quest to make gold from earthen starting materials. As to the origin of the word “alchemy” the question is a debatable one; it certainly has Greek origins, and some, following E. Wallis Budge, have also asserted Egyptian origins. Alchemy, generally, derives from the old French alkemie and the Arabic al-kimia: "the art of transformation." The Arabs borrowed the word “kimia” from the Greeks when they conquered Alexandria in the year 642 AD.

Saturday, November 25, 2006

INTRODUCTION: The fungal functional type is composed of sessile heterotrophs with cell walls. Rather than ingesting food as animals do, fungal organisms absorb food across the cell wall. The assemblage of organisms termed fungi are classified into two general categories. First are the true fungi (Kingdom Fungi), which evolved from motile, aquatic protozoa that are also ancestors to the animal kingdom. True fungi first evolved as the chytrids (Phylum Chytridiomycota), which produce an enlarged globular cell from which numerous filaments grow into the food source. Chytrids produce motile spores and gametes, and the vegetative cells are coenocytic (many nuclei float around in one big cell). Chytrids gave rise to the Zygomycetes (Phylum Zygomycota), which produce no motile cells but form coenocytic hyphae. From the Zygomycetes, the advanced fungi arose and in time formed the Phylum Dikaryomycota. Organisms in the Dikaryomycota produce hyphae composed of individual nucleated cells separated by walls (septate hyphae), with each cell having two haploid nuclei in what is called an N + N configuration.
The two main groups of dikaryotic fungi are the ascomycetes (sac-forming fungi) and the basidiomycetes (club-forming fungi). The majority of fungi affecting humans are ascomycetes and basidiomycetes. The second category of fungal organisms is the pseudofungi, made up of various unrelated protist groups. The pseudofungi were formerly classified into the catch-all kingdom Protista but have recently been reclassified into more-specific kingdoms that reflect genetic relationships. Important pseudofungi are the Oomycetes (egg fungi and water molds), and the slime molds. Oomycetes are closely related to the stramenopile algae - the brown algae, golden-brown algae and diatoms of the Kingdom Stramenopila. The close relationship between the oomycetes and the brown algae is evident in that both have cellulose walls, and they share the same type of flagella. Oomycetes descend from algae that lost their chloroplasts, and hence have adopted a heterotrophic life form. Oomycetes also produce filamentous hyphae to better absorb nutrients from the food source. Because they have no relationship with the chytrids, it is clear that the oomycetes and chytridiomycetes independently evolved the mycelial life form. Slime molds evolved from various ancient protozoa and have little genetic affinity to other fungi or algae groups. They are animal-like in that they ingest food early in the life-cycle, but are fungal-like in that they produce walled sporangia and spores. In today’s lab, you will have a chance to examine the diversity of both false and true fungi. The lab will begin with the false fungi and then follow an evolutionary sequence through the true fungi. Many specimens illustrate important reproductive phases in the life-cycles of these organisms. You should examine these carefully because reproductive features are important to understanding and distinguishing the various groups of fungi. Although not required, we urge you to draw and label the specimens you examine.
Our experience has been that drawings with appropriate labels are the best way to learn the features emphasized in lab and lecture. Drawings are also an excellent study tool to refresh your memory just prior to the exam.

A) Slime molds (Kingdoms Myxomycota and Dictyosteliomycota):

Slime molds are largely saprophytic and are typically found on decaying wood in moist forests. During the vegetative phase of the life cycle (see figure 16-6 on page 353 of your text), they begin life as independent amoebae, ingesting microscopic bits of organic debris. The free-living amoebae eventually swarm together to produce a multicellular blob called a plasmodium. After a while, the plasmodium forms cellulose walls around the nuclei and produces sporangia (or fructifications). The sporangia release large numbers of air-borne spores which germinate in the presence of water to form free-living amoebae, thus completing the life cycle. Slime molds defy simple categorization. They are animal-like in that they ingest food during the amoeboid and plasmodial phases. They are plant-like in their formation of cell walls and sporangia during reproduction. Because of the cell walls formed during the reproductive phase they are considered fungal in nature. There are two major types of slime molds: A) the Myxomycota are the acellular, or true plasmodial, slime molds: the plasmodium stage of this group is made up of large blobs of coenocytic protoplasm with many nuclei inside. B) the Dictyosteliomycota are the cellular slime molds, where the plasmodium is made of individual cells separated by membranes (but not cell walls).

Observations on Display: Fruiting bodies of slime molds are readily found in Ontario forests in the autumn. Some will be on display for you to examine, along with a diagram of the life cycle.
B) Oomycetes: the water molds, or egg fungi (Kingdom Stramenopila)

Oomycetes are characterized by the formation of large egg-bearing cells, termed oogonia, on the tips of specialized hyphae (see figure 17-4 on page 374 of your text). Large, non-motile eggs form inside the oogonia and are fertilized by male-like hyphae termed antheridia. The antheridia grow into the egg and deposit the male gametes, which then fuse with the egg to form a zygote, termed the oospore. The oospore undergoes mitosis and forms a sporangium. The spores that are produced disperse to infect leaves, seedlings, fish and dead organisms. Oomycetes are important saprophytes in aquatic habitats. In terrestrial habitats, they are generally parasitic. Important diseases caused by water molds are downy mildews (Peronospora), potato late blight (Phytophthora infestans), and damping off disease (Pythium spp.). We will examine three species: Achlya, a saprophytic water mold; Albugo candida, the white rust of mustard plants; and Phytophthora infestans. Examine the following cultures with the dissecting microscope:

1. Achlya whole mounts: Achlya is a water mold that grows on organic debris in lakes and rivers. It forms floating mycelial mats arising from the food material, and in these mats sexual reproduction occurs. On your bench are prepared slides for you to examine with the light microscope. Focus on the blue-green material in the center of the slide. This is a hyphal mat with oogonia. Observe the large conspicuous oogonia within the mycelium. Inside the oogonia you can see zygotes (oospores) that will later divide to form sporangia. Examine the oogonia closely to see if additional hyphae are attached to them. These would be antheridia, which present the sperm nucleus to the egg cells within the oogonia. Note the properties of the vegetative hyphae. Do you see cross walls, or are the cells continuous within a filament?

2.
Phytophthora infestans: This organism causes potato late blight, one of the worst crop diseases in the history of humanity. In the 1840s, Phytophthora infestans was introduced to Europe from Peru. It rapidly spread across the continent, destroying much of the potato crop. In Ireland, peasants were particularly dependent on the potato for survival, and the crop losses of 1845-1848 killed millions of Irish and forced the migration of millions more to North America. In continental Europe, the loss of the potato crop led to widespread economic failure and social revolt. Many of the radical worker movements that would later influence world history, such as Marxism-Leninism, arose in the wake of the food crisis caused by the Phytophthora outbreak. Examine the Phytophthora slide prepared fresh from a culture stained with cotton blue. Identify the round oogonia in the slide and examine the hyphae for cross walls between the cells. Are there any oospores within the oogonia? Can you see any antheridial hyphae attached to the oogonia? If you do, show other students in your group.

3. Albugo candida (White Rust of Mustards): Albugo infects plants of the mustard family, forming white pustules on the leaves. These pustules are composed of many asexual conidiospores bursting through the plant's epidermis. Inside the plant, fungal hyphae form oogonia and antheridia, which mate and form oospores. The oospores develop sporangia, which disperse genetically distinct spores. The two prepared slides show the extent of the infection by the parasitic white rust fungus. Examine the prepared cross section showing Albugo conidiospores bursting through the epidermis of a mustard fruit. Note the strings of conidiospores forming under the epidermis. The bulge formed by the mass of conidia produces the rust pustule. Upon rupturing, the conidiospores are released on the wind to start a series of new infections. Examine the prepared slide of the Albugo sexual organs (oogonia and antheridia).
The oogonia are evident as dense, red-staining circles among the cells of the leaf tissue. Notes and Drawings:

We have assembled a range of specimens from the Kingdom Fungi, arranged for you to examine beginning with the most primitive (the chytrids) and then progressing through the Zygomycetes to the Ascomycetes and Basidiomycetes. Chytrids are mainly aquatic or parasitic. They are important decomposers of pollen, dead insects and seeds that fall into ponds and rivers. Others are parasites of algae, higher fungi, mosquitoes, rotifers, and water molds. The general body plan is a large, central coenocytic globule that either directly invades the body of the host, or produces diminutive hyphae, termed rhizomycelium, which invade the surface cells of the host. The rhizomycelium grows into the food source and absorbs nutrients across the chitinous cell wall. Chytrids are difficult to maintain in culture and have to be baited from natural sources. We have purchased slides showing a common parasitic chytrid, Synchytrium, invading a plant host, and have prepared a fresh culture of chytrids on snake skin. Chytrids are important decomposers of animal epithelial tissue in aquatic habitats. To capture them, we have placed strips of snake skin in pond water. Chytrid spores swim to the snake skin and germinate on it, forming a simple globular body with rhizomycelium.

Synchytrium: Examine the prepared slide of Synchytrium that has infected either leaves or potato tubers. Note the simple globular body (the sorus) of the chytrid embedded in the tissue of the plant. Some of the globules may have matured into sporangia, and you may see spores inside.

Chytrids on snake skin: In the dissecting scope, you can see the rhizomycelia extending from the snake skin, along with many protozoa. Note the pinhead-like cells within the mycelia. These are either the central globules from which multiple filaments of rhizomycelium arise, or they are sporangia.
In the light microscope, examine the globular cells and the hyphae growing away from them. The globular cell and rhizomycelium form the basic body plan of the saprophytic chytrids. You will also see many tiny creatures swimming about the mycelium. Some of these may be zoospores released from sporangia. Examine the mycelium for any evidence of sporangia. Sporangia will be apparent because they have only one hypha attached to them.

The Zygomycetes are important saprophytes, including species that are major decomposers of dung and food. Members of this group have a zygotic life cycle (see figure 15-11 on page 316 of your text). The gametes are non-motile, and are borne on the tips of specialized, fertile hyphae termed gametangia. The gametangia contain many haploid nuclei. In zygomycetes, the sexual act consists of two fertile hyphae growing towards each other. As they approach, the ends of the hyphae form gametangia. The gametangia come in contact, after which the end walls disintegrate, releasing the haploid gamete nuclei into one common space. Pairs of haploid nuclei then fuse, creating many diploid nuclei. The many-nucleate cell formed from the remnants of the two gametangia is termed a zygosporangium. Zygosporangia typically develop a thick wall that protects the diploid nuclei from harsh conditions, forming a many-nucleate (coenocytic) resting cell. Zygosporangia germinate when the diploid nuclei undergo meiosis to produce many new haploid nuclei. The haploid nuclei are walled off into distinct spores, which are released from a dispersal sporangium that grows out of the zygosporangium. The most commonly encountered zygomycetes are the bread molds, which are important saprophytes that grow on carbohydrate-rich foods, including bread. The mycelium formed on the surface of the bread is a cottony mass that is initially white but soon darkens as the mycelium forms asexual sporangia.
Large numbers of mitospores are released, allowing the fungus to spread quickly. Examine the following, first under the dissecting scope and then with the light microscope:

1. Zygorrhynchus moelleri: This mold, growing on an agar plate, shows the major stages of a typical zygomycete life cycle. First examine the culture under the dissecting microscope, and then take a very small scraping of the agar with a dissecting needle or knife. Place this on a microscope slide, stain with the cotton blue stain on your bench, and examine at medium power with both phase contrast and normal visible light. With the dissecting scope, examine the cottony matrix of the mycelium and the sporangia rising above it, forming dark spheres on elongated stalks. These are mostly mitosporangia used in asexual reproduction. You may be able to see zygosporangia mixed in amongst the mycelium. They will be dark, barrel-shaped granules on the surface of the agar. With the light microscope, observe the slide of agar, noting the clear tubular nature of the hyphae and the absence of cross walls. Next observe any mitosporangia you may have opened while pressing down the cover slip. Finally, observe the zygosporangia. These are dark, barrel-shaped structures with rough walls. Note the hyphae attached to the zygosporangia. These are the stalks of the gametangia, and are termed suspensors.

2. Rhizopus stolonifer: This is the common bread mold, a regular feature of most pantries. We have provided you with a Petri plate of Rhizopus to examine. Note the following with the dissecting scope under high power: Examine the mycelium and sporangia. You may be able to see elongated, horizontal hyphae connecting the sporangial stalks. Rhizopus spreads by these elongated hyphae, termed stolons (after the strawberry runners of the same name). Where stolons settle onto a food source, they produce anchoring hyphae that penetrate the food. Sporangia form above this contact point.
This habit allows for rapid spread of Rhizopus over a loaf of bread. Typically, zygomycetes reproduce asexually by mitospores when conditions are good, allowing for rapid spread over a new food source. They switch to sexual reproduction when the food is exhausted and conditions deteriorate.

3. Ungulate dung: We may display some moose or cow feces, which may show a range of dung zygomycetes, possibly including the hat-throwing fungus Pilobolus. If anything of interest appears to be present, examine it with the dissecting scope and note the nature of the sporangia.

The Dikaryomycota were formerly classified as the phyla Ascomycota and Basidiomycota (for example, see chapter 15 in your text), but recent advances in systematics have led to the merging of these two groups into a single phylum of higher fungi, the Dikaryomycota, with the ascomycetes and basidiomycetes being separated into subphyla termed Ascomycotina and Basidiomycotina. We will focus on these two groups. The common feature of these groups is the formation of dikaryotic hyphae. The dikaryon arises when the protoplasts of haploid hyphae fuse (they undergo plasmogamy). The nuclei do not initially fuse, and the resulting mycelium is made up of cells that are dikaryotic, or in the N + N state. Fusion of the two nuclei occurs in the fruiting body of the fungus, forming a diploid cell that immediately undergoes meiosis and mitosis to produce four to eight spores. The spores are released from sporangia formed in the fruiting body. In the ascomycetes, eight spores are released from sacs termed asci (singular ascus, from the Greek word for sac). In the basidiomycetes, the spores are formed on the end of a club-like sporangium termed a basidium (from the Greek basidion, a little base). The fruiting body of an ascomycete is termed an ascocarp (ascoma in your text), and that of a basidiomycete a basidiocarp (basidioma in your text). The mushroom cap is a basidiocarp.
We have a number of specimens for you to examine today from each subphylum.

1. Subphylum Ascomycotina

a. Unicellular forms: the yeasts: These are single-celled fungi that typically live within the food medium. Most are saprophytic, although some can become parasitic. The yeast fungus Candida albicans is an important pathogen in humans, causing diaper rash, vaginal and urethral tract infections, and potentially deadly systemic candidiasis. The common yeast Saccharomyces cerevisiae is the yeast of baking, brewing and enology (wine making). This yeast is preferred in fermentations as it grows rapidly, produces pleasant as opposed to noxious or toxic waste products, and is tolerant of high (>10%) concentrations of ethanol.

Saccharomyces cerevisiae: The common brewer's yeast is growing on agar. Take a very small portion of the culture and smear it into a drop of water on a microscope slide. Add a cover slip, and examine at low and then high power with the compound scope. Look for budding cells amongst the large numbers of indistinct single yeast cells. These are apparent from the blob-like cellular extensions, termed buds, that arise from mature yeast cells. Rather than simply dividing in two as most algae and plant cells do, yeasts divide by extruding protoplasm into a bud. This extrusion is then encapsulated in a wall and split off to form a new independent cell. Occasionally, you may see some yeast forming asci. Yeasts live in both a haploid and a diploid state. When conditions are harsh, two haploid yeast cells fuse to form a diploid zygote, which then undergoes meiosis to produce a four-celled sac, the ascus. The ascus splits open to release the four cells, which then bud to start a new population of yeast cells. You may be able to see four-celled asci floating among the many cells in your slide. If so, show your classmates.

b. Filamentous Ascomycetes: Multicellular ascomycetes produce hyphae and mycelium, and form ascocarps.
Three types of ascocarps are produced by these fungi: cleistothecia (enclosed spheres), perithecia (vase-like) and apothecia (cup-shaped). You should examine examples of each.

b.1 Cleistothecial species: i. Powdery mildew (Uncinula spp.): Powdery mildews are common pathogenic fungi that infect leaves, forming a powdery mycelium on the surface. Powdery mildews reproduce asexually by forming chains of spores (conidiospores) on special hyphae termed conidiophores. During sexual reproduction, they form a simple enclosed ascocarp, the cleistothecium. Cleistothecia are completely enclosed, with no opening for the developing spores to escape. When mature, the ascocarp wall ruptures, allowing the enclosed asci with their ascospores to spill out and disperse. Often, cleistothecia have barbs and hooks, which can help disperse the entire ascocarp by clinging to the fur of passing animals. Examine the following: Dissecting scope: Scan across the leaf infected with Uncinula to note the powdery mycelium, with chains of conidia rising above it. Periodically, you will see a pepper-grain-like object with multiple elongated hooks attached. This is a cleistothecium. Light microscope: Scrape some cleistothecia onto a microscope slide and cover gently with a cover slip. Examine at low power. Next, press on the cover slip to rupture the ascocarp and release the spores inside. ii. Powdery mildew on leaves: We also have specimens of unknown powdery mildews collected on leaves from around Toronto. Examine these under the dissecting scope for cleistothecia and conidia.

b.2 Perithecial species: The perithecium is a vase-shaped ascocarp with a narrow, open neck. Inside are multiple asci with spores. When mature, the asci protrude from the neck of the perithecium and forcibly eject the spores into the air. Sordaria is a dung saprophyte that is closely related to Neurospora, the fungus that has become one of the leading model organisms in genetic research.
Dissecting scope: Sordaria is growing on agar plates, and the perithecia can be seen as dark, pepper-like grains mixed in a mass of conidia-forming hyphae. Examine the perithecia closely and note the pear-like shape of the ascocarp. Are any asci protruding from the perithecia? Light microscope: Scoop some perithecia onto a slide, cover and examine at low to medium power. Gently push on the cover slip to squash open the perithecia. Note any football-shaped spores and asci that emerge.

b.3 Apothecial species: The apothecium is an ascocarp where the asci are directly exposed to the air on a cup, dome or invaginated surface. These fungi are commonly called the saucer, or cup, fungi, and they include many beautiful, brightly colored forest species. The delectable morel is an apothecial ascocarp. To demonstrate the apothecium structure and form, we have a set of prepared slides and live specimens from a number of species. i. Bisporella citrina (yellow fairy cups; live specimens): These are wood-decomposing fungi that form small, bright yellow, cup-shaped apothecia. Examine the stick with the fairy cups closely. You may take the stick to your bench to examine with a dissecting scope. The asci are formed on the inner surface of the cup. Return the stick to the display area when finished, as we have only a few specimens. ii. Peziza (prepared slides): Peziza is a cup fungus that grows on wood and is similar in form to Bisporella. A life cycle of Peziza is shown in Fig. 15-14 on page 318 of your textbook. Examine the prepared cross sections of the Peziza ascocarp under medium power with your light microscope. You will note the sac-like structure of the ascus, with 8 haploid spores inside. These arose from meiosis and one subsequent round of mitosis. Note the zone of fertile tissue where the asci form. Below this are flattened vegetative cells that form the support structure of the ascocarp.

b.4 Ascomycetes of special note i.
Claviceps purpurea (Ergot of Rye): We have display specimens of rye shoots infected with ergot, caused by the perithecia-forming ascomycete Claviceps purpurea. Claviceps is an example of an endoparasite, a fungus that grows within the stems and leaves of grasses. The fungus retards growth, but does not kill the host plant. In many instances, toxins produced by the fungus deter herbivory, and so the grass host can actually show superior performance relative to a non-infected plant that is eaten. In the case of ergot, the toxin produced is lysergic acid amide, from which the hallucinogenic drug lysergic acid diethylamide (LSD) was derived. Examine the infected rye and note the grain heads with enlarged, dark-colored protrusions extending out from the grass stalk. These are sclerotia, which occur where the fungus has completely infected a developing grain and replaced the grain with a tight mat of interwoven mycelium. As the growing season ends, the sclerotia fall to the ground and overwinter. In the spring, they produce perithecia and, in turn, large numbers of spores that infect the new rye crop. Sclerotia break free and mix with the rye grain at harvest. People eating rye contaminated with ergot sclerotia experience severe poisoning, called ergotism. Symptoms include wild hallucinations coupled with extreme burning sensations in the extremities. Constriction of minor veins is common, leading to limbs dying and falling off. The pain is severe, and a typical victim would scream in agony while madly hallucinating. Before modern science explained the cause, people in the past would interpret the symptoms as an attack of demons, and in regions affected by ergot outbreaks, the citizens often turned to extreme religious practices to exorcise the devil. Throughout history, witch hunts, new religious movements, and mass hysteria have been attributed to ergot outbreaks. Today, ergot poisoning is rare, and rye grain is routinely screened to filter out the larger sclerotia.
Sclerotia are now intentionally grown as a source of drugs to control internal bleeding, treat migraine headaches, and alter mental states in psychiatric patients.

ii. Peach Brown Rot (Monilinia fructicola): Many ascomycetes are severe pathogens of fruit crops. One of the worst is peach brown rot, which stunts trees and destroys mature peaches, apricots, cherries and related fruit. Infected trees form cankers on the twigs and leaves. Conidia erupting from the cankers are dispersed to infect other trees by asexual means. Fruits are infected as they near maturity. After infection, lesions develop on the fruit and it prematurely rots, falls to the ground and dries to a mummified carcass. Peach mummies are completely infected with the mycelium, and in this form the fungus overwinters. In the spring, the fungus in the mummy forms apothecia, from which spores are released in huge numbers to infect new trees. Examine the Monilinia cultures on agar with the dissecting microscope and note the lemon-shaped conidiospores arising from the mycelium. You can examine these more closely by taking a small piece of agar and preparing it on a slide for examination with the light microscope.

iii. Penicillium: Many ascomycetes are saprophytes that infect food and building materials in the home. Some are also sources of important drugs, while others produce powerful carcinogens. Penicillium is one of the most common molds in the household pantry, where it infects bread, fruits and milk products. Penicillium species are also important in making strong-flavored cheeses such as Roquefort, Gorgonzola, Camembert, Brie and Danish Blue. The blue-green color is actually the reproductive conidia of the Penicillium mold. Penicillium is also the source of penicillin, the antibiotic that prevents cell wall synthesis in gram-positive bacteria. Examine the culture with the dissecting scope and note the green-blue, broom-like conidiospore masses rising above the mycelium.
These masses give Penicillium molds their characteristic color. Take a sample and prepare a microscope slide of it. Examine the conidiophores with conidia under the compound microscope. Note the broom-like structure of the spore-bearing mass. We also have some blue cheese on display. Examine the Penicillium colony through the dissecting scope and try to identify the sporangia.

iv. Aspergillus: Aspergillus species are common black-colored molds in the household environment. They are frequently found on bread, drywall, and grains. Many species produce aflatoxins, which are powerful carcinogens of the liver found in stored grains, peanuts and cereals, including corn flakes. It is unwise to eat foods contaminated with wild Aspergillus species, as they likely contain aflatoxins. (For example, never eat wild peanuts, or musty old grain.) Beneficial Aspergillus spp. are used to produce soy sauce and miso (fermented soy paste), and to ferment rice in an early step of sake production. Examine with a dissecting scope the culture on the agar plate and note the fan-shaped masses of conidia rising above the mycelium. Next, examine a piece of the mycelium to see the bulbous conidiophore. The dark masses of conidia give this fungus its particular color and shape. Take a small chunk of infected agar and prepare a slide of the sporangia for the light microscope. Examine the swollen tops of the conidiophores and the attached fan-shaped arrays of conidia.

2. The subphylum Basidiomycotina

The most familiar fungi are the basidiomycetes. The fruiting bodies of the basidiomycetes (the basidiocarps) are the recognizable features of mushrooms, toadstools, coral fungi, shelf fungi and tooth fungi. In each, the main body lives underground or in wood as a dispersed mycelium. Although all basidiomycetes reproduce by forming spores on club-shaped basidia, there are actually two main groups: the homobasidiomycetes and the heterobasidiomycetes.
The homobasidiomycetes produce one type of spore, the basidiospore. The heterobasidiomycetes produce two types of spores during the sexual life cycle. We will focus on the homobasidiomycete life cycle as exemplified by the common food mushroom, Agaricus campestris. Heterobasidiomycetes include the pathogenic rusts and smuts.

a. Basidiomycete yeast (Rhodotorula rubra): Some basidiomycetes have also evolved the unicellular life form. A common basidiomycete yeast is the red yeast, Rhodotorula rubra, a contaminant of bathroom curtains, tile and grout. The pink scum in filthy bathtubs and showers is caused by Rhodotorula. (You may remember the battle between the Cat in the Hat and pink bathroom scum.) Examine the red yeast culture on display. If time permits, you may prepare a microscope slide of the cells from the agar culture. Examine them for budding and for basidia, which are distinguished by an elongated shape and horn-like points on one end of the cell.

b. Heterobasidiomycetes: The heterobasidiomycetes include the rust diseases of grasses and the smut diseases of maize. Other members of this group are wood decomposers such as the jelly fungi. We may have a jelly fungus in the wild mushroom display. Examine the specimens of grasses infected with wheat rust (Puccinia graminis). Note the rust-colored pustules forming on the blades of the grass. These are where asexual spores are formed to allow for continued infection of healthy plants during the summer. Black pustules appear in the late summer. These are where teliospores are formed. Teliospores are overwintering spores that form basidiospores in the spring. If available, examine any corn smut (Ustilago maydis) that may be on display. Corn smuts attack developing corn kernels and produce large, grey-colored smutballs that are filled with dark spores. Immature smutballs are served as a delicacy in Latin American cuisines.
Rusts and smuts are virulent parasites of grain crops, with the potential to wipe out the production of an entire region in any given year. The primary means of preventing infestation is to breed crop cultivars that are resistant to rust infections. The rusts eventually evolve new ways to infect the cultivar, so government agencies continuously breed resistance into new varieties to stay ahead of the rusts' capacity to re-evolve virulence. Should breeding efforts fall behind (for example, via cost-cutting measures by governments and agribusiness), major rust outbreaks could result, ruining grain crops and causing food prices to skyrocket.

c. Homobasidiomycetes: the Mushrooms

We have fresh specimens of the common store-bought mushroom for examination, along with cultures of the inky cap mushroom and a range of wild mushrooms from southern Ontario. To aid in examining fine detail, we have prepared slides showing cross sections of mushroom caps for you to examine under the light microscope. A detailed diagram of the life cycle of the mushroom is presented in Figure 15-19 of your textbook (page 321).

c.1. The common food mushroom Agaricus bisporus: Examine a) the mycelium of the spawn blocks on display, b) the young button mushrooms and c) mature, spore-producing mushrooms from the collection of fresh mushrooms provided. i. Mycelial stage: Sample mycelia of the mushroom spawn that is available. Stain with cotton blue and view with the light microscope under both normal light and phase contrast. Find some isolated hyphae and examine them under high power. Note the septate nature of the hyphae. This is one of the diagnostic features of the basidiomycetes. Each cell contains two haploid nuclei in the N + N configuration. A key feature of basidiomycetes is the presence of clamp connections, which form after cell division in order to keep the N + N dikaryotic configuration intact (see figure 15-21 in your text).
Clamp connections may be visible along the end walls of the hyphal cells, forming bulges or loops around the septate wall. ii. The mushroom button stage: The basidiocarp forms from tightly woven mycelia. Initially, the basidiocarp forms a button, or egg, stage. Cut open a button and examine a) the immature stalk (or stipe), b) the young, white to pink gills, and c) the developing cap, which extends down over the stipe. With a razor blade, cut a thin slice of a gill, and look at the slice with the light microscope. Stain the gill with Melzer's stain. You may be able to see developing basidia with miniature spores. iii. The mature mushroom: Note the features of the basidiocarp structure. The main parts are a) a well-developed stalk, termed the stipe, b) the cap, termed the pileus, and c) the gills, where the spore-bearing tissues occur. The gills have turned chocolate brown as millions of spores mature. Cut a thin slice of the mature gill and examine it for spores and horned basidia. Stain with the Melzer's stain placed on your bench. You should be able to isolate one or two good basidia from the mass of tissue. iv. Prepared slide of the mushroom cap: Examine under the light microscope the prepared cross sections of an Agaricus pileus. The cross sections of gills clearly demonstrate the club-shaped basidia arising from the zone of fertile tissue (the hymenium). Examine the basidia under high power and note how the spores are attached to the horn-like basidial tips (the sterigmata). The spores appear as party balloons taped to a club. v. Spore prints: As the spores mature, they are released and drift into the air below the cap. If the cap is placed over paper, the spores cannot disperse and instead settle onto the paper to form a print of the mushroom gills. Spore prints have been prepared for you to examine. Spore prints are often used to identify particular mushrooms, and they are an easy way to verify the color of the spores. c.2. The inky cap mushroom, Coprinus cinereus.
We have displays of the inky cap mushroom for you to examine. The cultures show how the mushroom arises from the mycelium. Prepared slides are also available if you wish to examine the gill structure and the basidia of Coprinus. The cap self-digests upon maturity, forming a purple-ink-colored mass of goo.

c.3. The oyster mushroom Pleurotus ostreatus: Oyster mushrooms are delightful edible mushrooms that grow on logs and decaying stumps. They produce white spores on short gills, and exhibit a large, fan-shaped pileus with a short stipe. They are now commonly cultivated and are available in many supermarkets. Examine the oyster mushrooms for morphological structure, then thinly slice a gill and examine for the basidia and white spores under the microscope.

c.4. The chanterelle (Cantharellus cinereus): Chanterelles are a wonderful delicacy prized for their gentle, buttery flavor. Unlike the gilled mushrooms, chanterelles form their basidia on gently folded tissue underneath a wavy pileus. Assuming we have these available, take a small piece of the pileus of a chanterelle and examine for basidia and spores. What color are the spores?

c.6. The wild mushroom display: We have collected a variety of wild mushrooms for you to examine for variation in form. Examine each carefully, paying attention to the pileus, the stipe (if present) and where the spore-bearing tissues are located. In many of the samples, gills are not present. Instead, the spores are produced on elongated tubes that form pores on the underside of the pileus, on teeth-like protrusions that hang below the pileus, on coral-like prongs, or on rumpled folds of tissue that resemble elbow skin. These traits distinguish the major families of mushroom-forming species. You may take some of the specimens and examine them under the dissecting scope in order to better see the pores, teeth or prongs of the spore-bearing tissue.
Lichens are structures formed by a close symbiotic relationship between an alga and a fungus. Both green and blue-green algae can serve as the algal symbiont, while the fungus is typically an ascomycete. Because the visible sexual stage of the lichen is that of the fungal partner (the mycobiont), the lichen is typically named after the fungus. In the symbiosis, the alga provides carbohydrates from photosynthesis, while the fungus shelters the alga and gathers water and nutrients. Lichens can completely desiccate with no harm to the organisms inside. Upon wetting, they rapidly rehydrate and resume activity. This ability allows them to live on extremely harsh surfaces, such as the branches and trunks of trees, the sides of rocks, and bare ground in deserts. In the boreal zone, lichens are important ground covers on bare soil, fallen branches and the surfaces of rocks. They are also common as epiphytes on trees. In general, they grow extremely slowly, reflecting the harsh conditions of the habitats they live in. Lichens come in three general categories based on morphology. Examine the specimens displayed in the lab room.

A. Foliose lichens: Lichens that exhibit a leafy shape are termed foliose lichens. These are common in wetter habitats, for example, forest interiors in eastern Canada. Foliose lichens are important epiphytes growing on the branches of standing trees.

B. Fruticose lichens: These lichens are shrubby in appearance, with many narrow, highly branched, stem-like structures. Fruticose lichens are common in the boreal forest, forming dense ground covers in spruce forests. When dry, they are extremely flammable and can be used as a fire starter. They also help wildfires spread and thus contribute to some of the severe forest fires that occur every summer in Canada. Fruticose lichens are common on the sides of trees, and often hang from the branches in dense growths termed witch's hair, or old man's beard. C.
Crustose lichens: Crustose lichens occur in the most extreme terrestrial environments where life is possible. They grow on the sides of rocks and buildings and on bare soil, and are common in arid and polar deserts, including the dry valleys of Antarctica. Crustose lichens form brilliant yellow, orange, red and yellow-green patches on the rock, and are some of the most beautiful features in what are otherwise barren landscapes.

Study Guide: You should be familiar with the major categories of fungi and the names of the common species of yeast, store mushrooms, and major disease organisms displayed in the lab. You need to know the terms presented in bold font, and should recognize an organism well enough to classify it to phylum or, where relevant, to subphylum. Know and understand the distinguishing characteristics of the major phyla presented in lab, as well as those of the ascomycetes and basidiomycetes.

Significance of bryophytes
• Mosses (Bryophyta)
• Liverworts (Marchantiophyta)
• Hornworts (Anthocerotophyta)

Scientific Name, Common Name:
• Plantae, Land Plants
• Embryophytes, Green Plants

The Bryophyta, or mosses, unlike the liverworts, are present in most terrestrial habitats (even deserts) and may sometimes be the dominant plant life. As with the liverworts, the plant that we commonly see is the gametophyte. It shows the beginnings of differentiation into stem and leaves, but has no roots. Mosses may have rhizoids, and these may be multicellular, but they do little more than hold the plant down. The stem shows some internal differentiation into hydroids and leptoids, which are like the xylem and phloem of higher plants but are very simply organized, with no connection to leaves or branching stems. The leaves are mostly one cell thick; sometimes the midrib is several cells thick, but this does not contain conducting tissue, so it is not equivalent to the vein of a leaf. Male and female gametophytes look identical except when they produce reproductive structures.
The male plant produces clusters of antheridia which contain thousands of ciliate sperm. The female produces archegonia, each containing a single egg. Fertilization is dependent on water - sperm are splashed or swim to the archegonia. The zygote grows into the diploid sporophyte, which remains attached to the female gametophyte. It is a leafless stem with a seta or foot at one end, drawing nutrients from the gametophyte. At the other end is a capsule in which meiosis occurs to form spores. The archegonium grows around the developing sporophyte for a while but becomes separated from the gametophyte and is carried up to form a cap or calyptra over the sporangium. Curiously, the sporangia of some mosses have stomata much like those on the leaves of vascular plants. Immature moss capsules with calyptra. The calyptra is lost when the sporangium is mature, as is the operculum or lid on the end of the capsule. Underneath the operculum there are often peristome teeth which open under dry conditions and control spore release. A spore germinates to produce a filamentous protonema which sooner or later produces buds that grow into new gametophytes. Ecology of mosses Mosses require abundant water for growth and reproduction. They can tolerate dry spells by drying out or, in the case of mosses like Sphagnum, by holding huge amounts of water in dead cells in the leaves. They look pretty lowly and insignificant, but have become dominant in particular habitats, and Sphagnum itself is said to occupy 1% of the earth's surface (half the area of the USA). Because of its ability to soak up blood and its relative freedom from bacterial contamination, Sphagnum was used in dressings. The moss itself is used in some horticultural media and it is an important source of peat. Polytrichum commune, one of the larger mosses, with mature sporophytes. If you have tried to grow a lawn in a shady location you have probably been troubled by mosses as weeds.
Like many lower organisms they are very sensitive to copper salts and can be controlled in this way. On the other hand, mosses are green and better adapted to shade than most grasses, so maybe we should accept them in this situation. Natural Perspective The Plant Kingdom: Mosses and Allies Mosses (Phylum: Bryophyta) Leafy Liverworts (Phylum: Hepatophyta, Class: Jungermanniidae) Hornworts (Phylum: Anthocerotophyta) Suggestions for the Use of Keys 1. Select appropriate keys for the materials to be identified. The keys may be in a flora, manual, guide, handbook, monograph, or revision (see Chapter 30). If the locality of an unknown plant is known, select a flora, guide, or manual treating the plants of that geographic area (see Guides to Floras in Chapter 30). If the family or genus is recognized, one may choose to use a monograph or revision. If the locality is unknown, select a general work. If the materials to be identified were cultivated, select one of the manuals treating such plants, since most floras do not include cultivated plants unless naturalized. 2. Read the introductory comments on format details, abbreviations, etc., before using the key. 3. Read both leads of a couplet before making a choice. Even though the first lead may seem to describe the unknown material, the second lead may be even more appropriate. 4. Use a glossary to check the meaning of terms you do not understand. 5. Measure several similar structures when measurements are used in the key, e.g., measure several leaves, not a single leaf. Do not base your decisions on a single observation. It is often desirable to examine several specimens. 6. Try both choices when dichotomies are not clear or when information is insufficient, and make a decision as to which of the two answers best fits the descriptions. 7. Verify your results by reading a description, comparing the specimen with an illustration or an authentically named herbarium specimen. Suggestions for Construction of Keys 1.
Identify all groups to be included in a key. 2. Prepare a description of each taxon (see Chapter 24 for details on description and descriptive format). 3. Select "key characters" with contrasting character states. Use macroscopic, morphological characters and constant character states when possible. Avoid characteristics that can only be seen in the field or on specially prepared specimens, i.e., use those characteristics that are generally available to the user. 4. Prepare a Comparison Chart (see Figure 25-3). 5. Construct strictly dichotomous keys. 6. Use parallel construction and comparative terminology in each lead of a couplet. 7. Use at least two characters per lead when possible. 8. Follow key format (indented or bracketed; see Figures 25-1 and 25-2). 9. Start both leads of a couplet with the same word if at all possible, and successive leads with different words. 10. Mention the name of the plant part before descriptive phrases, e.g., leaves or flowers blue, not blue flowers; leaves alternate, not alternate leaves. 11. Place those groups with numerous variable character states in a key several times when necessary. 12. Construct separate keys for dioecious plants, for flowering or fruiting materials, and for vegetative materials when pertinent.

Shrub or woody vine.
  Woody vine; petals 7 or more ... 3. Decumaria
  Shrub; petals 4 or 5.
    Leaves alternate or on short spur branches.
      Leaves pinnately veined; ovary superior; fruit a capsule ... 1. Itea
      Leaves palmately veined; ovary inferior; fruit a berry ... 2. Ribes
    Leaves opposite.
      Petals usually 4; stamens 20-40; fruit longitudinally dehiscent, not ribbed ... 4. Philadelphus
      Petals usually 5; stamens 8-10; fruit poricidally dehiscent, 10- to 15-ribbed ... 5. Hydrangea
Herbs.
  Staminodia present; petals more than 10 mm long ... 6. Parnassia
  Staminodia absent; petals less than 10 mm long.
    Leaves ternately decompound ... 7. Astilbe
    Leaves simple.
      Flowers solitary in leaf axils, or in short, leafy cymes.
        Sepals 4; carpels 2 ... 8. Chrysosplenium
        Sepals 5; carpels 3 ... 9. Lepuropetalon
      Flowers in racemes or panicles.
        Petals pinnatifid or fringed; stem leaves opposite ... 10. Mitella
        Petals not pinnatifid or fringed; stem leaves alternate or absent.
          Ovary 1-celled.
            Inflorescence paniculate; stamens 5 ... 11. Heuchera
            Inflorescence racemose; stamens 10 ... 12. Tiarella
          Ovary 2-celled.
            Stamens 5; leaves palmately lobed ... 13. Boykinia
            Stamens 10; leaves not palmately lobed ... 14. Saxifraga

Figure 25-1. Example of an indented key.

1. Shrub or woody vine ... 2.
1. Herbs ... 6.
2. Woody vine; petals 7 or more ... Decumaria.
2. Shrub; petals 4 or 5 ... 3.
3. Leaves alternate or on short spur branches ... 4.
3. Leaves opposite ... 5.
4. Leaves pinnately veined; ovary superior; fruit a capsule ... Itea.
4. Leaves palmately veined; ovary inferior; fruit a berry ... Ribes.
5. Petals usually 4; stamens 20-40; fruit longitudinally dehiscent, not ribbed ... Philadelphus.
5. Petals usually 5; stamens 8-10; fruit poricidally dehiscent, 10-15 ribbed ... Hydrangea.
6. Staminodia present; petals more than 10 mm long ... Parnassia.
6. Staminodia absent; petals less than 10 mm long ... 7.
7. Leaves ternately decompound ... Astilbe.
7. Leaves simple ... 8.
8. Flowers solitary in leaf axils, or in short, leafy cymes ... 9.
8. Flowers in racemes or panicles ... 10.
9. Sepals 4; carpels 2 ... Chrysosplenium.
9. Sepals 5; carpels 3 ... Lepuropetalon.
10. Petals pinnatifid or fringed; stem leaves opposite ... Mitella.
10. Petals not pinnatifid or fringed; stem leaves alternate or absent ... 11.
11. Ovary 1-celled ... 12.
11. Ovary 2-celled ... 13.
12. Inflorescence paniculate; stamens 5 ... Heuchera.
12. Inflorescence racemose; stamens 10 ... Tiarella.
13. Stamens 5; leaves palmately lobed ... Boykinia.
13. Stamens 10; leaves not palmately lobed ... Saxifraga.

Figure 25-2. Example of a bracketed key. (Modified from Radford, A. E., H. E. Ahles, and C. R. Bell. 1968. Manual of the Vascular Flora of the Carolinas. University of North Carolina Press, Chapel Hill, North Carolina.
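The bracketed key above is, in effect, a chain of two-way decisions, so it can also be written as a small data structure and traversed mechanically. The sketch below is hypothetical code, not part of the exercise; only the first few couplets are encoded. Each couplet is a pair of leads, and each lead points either to another couplet or to a genus name.

```python
# Hypothetical encoding of part of the bracketed key: couplet number -> two leads,
# each lead = (lead text, next couplet number or genus name).
KEY = {
    1: [("Shrub or woody vine", 2), ("Herbs", 6)],
    2: [("Woody vine; petals 7 or more", "Decumaria"), ("Shrub; petals 4 or 5", 3)],
    3: [("Leaves alternate or on short spur branches", 4), ("Leaves opposite", 5)],
    4: [("Leaves pinnately veined; ovary superior; fruit a capsule", "Itea"),
        ("Leaves palmately veined; ovary inferior; fruit a berry", "Ribes")],
    5: [("Petals usually 4; stamens 20-40", "Philadelphus"),
        ("Petals usually 5; stamens 8-10", "Hydrangea")],
    6: [("Staminodia present; petals more than 10 mm long", "Parnassia"),
        ("Staminodia absent; petals less than 10 mm long", 7)],
}

def identify(choices):
    """Follow the key, taking lead 0 or 1 at each couplet, until a name is reached."""
    node = 1
    for c in choices:
        lead_text, target = KEY[node][c]
        if isinstance(target, str):   # reached a genus name: identification done
            return target
        node = target                 # otherwise move on to the next couplet
    raise ValueError("ran out of choices before reaching a name")

# A shrub with alternate, palmately veined leaves and an inferior ovary:
print(identify([0, 1, 0, 1]))  # Ribes
```

Following leads 0, 1, 0, 1 here retraces the same path (1 → 2 → 3 → 4 → Ribes) you would follow in the printed key.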
Used with permission.) 1. Identification of an unknown. Select an unknown specimen and identify it by keying in an appropriate manual, flora, or monograph. Verify your results by reading a description, by comparing with an illustration, or by checking with your instructor. 2. Preparation of a comparison chart. Select 5 or more specimens from the group provided by your instructor. Identify each by keying. Verify your results. Prepare a description of each similar to those in a flora or manual. Be sure characters and character states are in the same order. Select contrasting character states and prepare a comparison chart (see Figure 25-3). 3. Construction of keys. Construct a dichotomous key to these specimens using the information in the comparison chart.

Character          Decumaria    Itea       Ribes                      Parnassia          Heuchera          Saxifraga
Habit              Woody vine   Shrub      Shrub                      Herb               Herb              Herb
Leaf arrangement   Opposite     Alternate  Alternate or on spur roots Basal (rosulate)   Basal (rosulate)  Basal
Petal number       7-10         5          5                          5                  5                 5
Locule number      7-10         2          1                          1                  1                 2
Stamen number      7+           5          5                          5 (staminodia 5)   5                 10
Fruit type         Capsule      Capsule    Berry                      Capsule            Capsule           Capsule

Figure 25-3. A comparison chart used in the construction of keys (for six of the genera in Figures 25-1 and 25-2). In almost every ditch in Holland with reasonably clean water we will in summer find slimy masses of filamentous algae, floating as scum on the surface. It looks rather distasteful, but a ditch like that is not polluted, only eutrophic (rich in nutrients). In spring these filamentous algae grow under water, but when there is enough sunlight and the temperatures are not too low, they produce a lot of oxygen, which sticks in little bubbles between the tangles of the algae. These come to the surface and become visible as slimy green masses. In these tangles we will find mainly three types of filamentous algae: Spirogyra, Mougeotia and Zygnema. In this article we will mainly write about Spirogyra.
From a distance these slimy tangles look perhaps a bit dirty, but under the microscope the filaments are very beautiful and, moreover, they have a spectacular way of reproducing. Spirogyra owes its name to a chloroplast (the green part of the cell) that is wound into a spiral, a unique property of this genus which makes it easy to recognise. In the Netherlands more than 60 species of Spirogyra have been found up till now; in the whole world, more than 400. For the determination of a species it is necessary to look for reproducing specimens with spores. But a precise determination is not necessary for learning a lot of interesting facts about Spirogyra. It is easy to see that there are many species; in a clean, eutrophic ditch with hard water in Holland we will easily find 20 different species. If we look at a filament of Spirogyra with the microscope, the first thing that attracts attention is the chloroplast, a narrow, banded spiral with serrated edges. The small round bodies in the chloroplast are pyrenoids, centres for the production of starch. In the middle of the cell we see the transparent nucleus, with fine strands linking it to the peripheral protoplasm. The filaments contain cells of different sizes and it is easy to find a new cell, just formed after a division. The really interesting part comes as Spirogyra reproduces sexually. When two filaments are close together, the process starts. Cell outgrowths form connections between the filaments and a sort of ladder is formed. The contents of the cells in one filament will go through the connection tubes to the cells in the other filament. A zygospore is formed with a thick cell wall, round or oval and with a brownish colour. This conjugation process takes place especially between mid-May and mid-June. The spores are liberated, sink to the bottom and germinate in the next spring to form a new filament. It is very worthwhile to look in a sample of algae for the different stages of this conjugation process.
It is always a nice surprise to find the conjugating filaments. Apart from the ladder-like conjugation, Spirogyra can also exhibit another form of conjugation: two neighbouring cells in the same filament can connect via a tube. There are several other genera of related filamentous algae: Zygnema and Mougeotia, with star-like and plate-like chloroplasts respectively. These genera live in general in more acid, soft fresh water. The conjugation figures look different from those in Spirogyra, for instance X-like. Dune pools are a rich biotope for Spirogyra. In ditches the number of species declines when the water becomes very eutrophic. Other filamentous algae then replace it, like Cladophora, Vaucheria and Enteromorpha. In the end we will only find duckweed. Then the ditch receives little light, with disastrous consequences for the growth of plants and the production of oxygen. [Micrographs: the filamentous algae (darkfield, x120); central portion of a Spirogyra cell showing nucleus and chloroplasts (brightfield, x1000); conjugation in Spirogyra - once seen, never forgotten (darkfield, x400-x1000).] Cladophora and Microspora. The filamentous alga Cladophora is a common inhabitant of freshwater locations. It is called blanket weed in some places -- not an inappropriate name when in late summer dense floating rafts of Cladophora can be found both at the pond's edge and in the open water, buoyed up with the oxygen generated by its own photosynthesis. Unlike Spirogyra, Cladophora is capable of branching, and seems to produce little or no mucilaginous secretion. This, and the fact that salts tend to crystallize on the filaments of older specimens, gives it a rougher, grittier feel than other filamentous algae.
It is also more readily colonized by epiphytic diatoms and other algae, and provides a protected foraging environment for the smaller pond creatures such as protozoa, worms, small crustaceans and insect larvae. Its springiness also makes it more difficult to prepare the thin, flat specimens required by the microscope. Branching in this filament of Cladophora has begun with an outgrowth of the cell at the upper end near the cell wall junction. As the branch grows, differential growth of the main cell wall causes the branch to grow forwards rather than at right angles to the original cell. An interesting feature of the picture is the distribution of plastids in the two cells shown. Since the plastids are the energy converters of the cell, large numbers have migrated into the growing branch, where the energy requirement is greatest. The cell on the right shows a distribution of plastids normal to a resting cell. Darkfield, x300. Picture shows Cladophora at a branching point. The filaments are encrusted with diatoms (Gomphonema) and crystals of calcium carbonate which give the plant its rough, gritty feel. Darkfield, x400. Microspora is common in ponds, especially in the winter months. It can be recognized by its reticulated chloroplast which covers the inside wall of the cell, including the cell walls between one cell and the next. Darkfield, x600. Pteridium aquilinum Bracken Fern Photo © by Earl J.S. Rook Flora, fauna, earth, and sky... The natural history of the northwoods Name: • Pteridium, from the Greek pteris, "fern" • aquilinum, from the Latin, "eagle-like" • Bracken, an old English word for all large ferns, eventually applied to this species in particular.
• Other common names include: Brake, Brake Fern, Eagle Fern, Female Fern, Fiddlehead, Hog Brake, Pasture Brake, Western Brackenfern, Grande fougere, Fougere d'aigle, Warabi (Qué), Örnbräken, Bräken, Slokörnbräken, Taigaörnbräken, Vanlig Örnbräken (Swe), Einstape (Nor), Ørnebregne (Dan), Sananjalka (Fin), Adlerfarn (Ger), Kilpjalg, Kotkajalg, Põldsõnajalg, Seatinarohi, Sõnajalg (Estonia) Taxonomy: • Kingdom Plantae, the Plants o Division Polypodiophyta, the True Ferns  Class Filicopsida  Order Polypodiales  Family Dennstaedtiaceae  Genus Pteridium • Taxonomic Serial Number: 17224 • Also known as Pteris aquilina, Asplenium aquilinum, Allosorus aquilinus, Ornithopteris aquilina, Filix aquilina, Filix-foemina aquilina, Pteris latiuscula • Considered a single, worldwide species, although some disagree Description: • A large, deciduous, rhizomatous fern • Fronds 1'-3' w/leaf stalk up to 3'' but usually shorter than leaf blade. Blades of frond divided into pinnae, the bottom pair sometimes large enough to suggest a three part leaf. Pinna divided into pinnules. On fertile fronds the spores are borne in sori beneath the outer margins of the pinnules. Fronds are killed by frost each winter and new fronds grow in spring. Dead fronds form a mat of highly flammable litter that insulates the below-ground rhizomes from frost when there is no snow cover. This litter also delays the rise in soil temperature and emergence of frost-sensitive fronds in the spring. • Rhizomes are the main carbohydrate and water storage organs (87% water). Rhizomes can be up to 1" diameter and branching is alternate. The rhizome system has two components. The long shoots form the main axis or stem of the plant. They elongate rapidly, have few lateral buds, do not produce fronds, and store carbohydrates. Short shoots, or leaf-bearing lateral branches, may be closer to the soil surface. They arise from the long shoots, are slow growing, and produce annual fronds and many dormant frond buds. 
Transition shoots start from both short and long shoots and may develop into either. • Roots thin, black, brittle, extending from the rhizome to over 20" into the soil. • Brackenfern is a large, coarse, perennial fern that has almost horizontal leaves and can grow 1½ to 6½ feet tall (sometimes up to 10 feet). Unlike our more typical broadleaf perennials, this primitive perennial lacks true stems. Each leaf arises directly from a rhizome (horizontal underground stem), and is supported on a rigid leaf stalk. In addition, brackenfern does not produce flowers or seeds. Instead, it reproduces by spores and creeping rhizomes. This species often forms large colonies. • Root system - The black, scaly, creeping rhizomes (horizontal underground stems) are ½ inch thick, and can grow as much as 20 feet long and 10 feet deep. Stout, black, wide-spreading roots grow sparsely along the rhizomes. • Seedlings & Shoots - The curled leaves (fiddleheads) emerging from rhizomes in the spring are covered with silvery gray hair. • Stems - The leaf stalk (not a true stem) is tall (about the same length as the leaf), smooth, rigid and grooved in front. It is green when young, but turns dark brown later in the season. • Leaves - The leaf stalk supports a broad (3 feet long, 3 feet wide), triangular, dark green, leathery and coarse-textured leaf that often bends nearly horizontal. The leaf is divided into 3 parts, 1 terminal and 2 opposite. Each of the leaf parts is triangular and composed of numerous oblong, pointed leaflets, which are in turn composed of narrow, blunt-tipped subleaflets. • Fruits & Seeds - A continuous line of spore cases (spore-producing structures) is formed along the underside edge of leaflets, but the spore cases are partially or completely covered by inrolled leaf margins and are difficult to see. Spore cases produce minute, brown spores. • Biology: Spores of brackenfern are produced August through September.
Brackenfern is one of the earliest ferns to appear in spring or after a fire. It sometimes forms large colonies of nearly solid stands. In the fall, it is one of the first plants to be killed by frost, resulting in large patches of crisp, brown foliage. • Brackenfern is resistant to many herbicides and is tolerant of various forms of mechanical control. However, effective control has been obtained by repeated removal of aboveground growth, which eventually exhausts the food reserves in the rhizomes. Identification: • Distinguished from other large North Country ferns by the large three part leaf atop a tall stalk. • Field Marks o broad triangular leaf held almost parallel to the ground o smooth, grooved, rigid stalk about as long as the leaf o narrowed tip to leaflets Distribution: • Global; throughout the world with the exception of hot and cold deserts • Grows on a variety of soils with the exception of heavily waterlogged sites. Efficient stomatal control allows it to succeed on sites that would be too dry for most ferns, and its distribution does not normally seem limited by moisture. Grows best on deep, well-drained soils with good water-holding capacity, and may dominate other vegetation on such sites. • Rhizomes are particularly effective at mobilizing phosphorus from inorganic sources into an available form for plant use. Bracken fern contributes to potassium cycling on sites and is associated with high levels of potassium. • A shade intolerant pioneer and succession species that is sufficiently shade tolerant to survive in light spots in old growth forests. • Light, windborne spores allow colonization of newly vacant areas. • Competition: Invades cultivated fields and disturbed areas, effectively competing for soil moisture and nutrients. Rhizomes grow under the roots of herbs and tree or shrub seedlings, and when the fronds emerge, they shade the smaller plants. In the winter dead fronds may bury other plants and press them to the ground. 
On some sites shading may protect tree seedlings and increase survival. • Allelopathy: Bracken fern's production and release of allelopathic chemicals is an important factor in its ability to dominate other vegetation. Farther north, no allelopathic chemicals are released from the green fronds, but they are readily leached from standing dead fronds. Herbs may be inhibited for a full growing season after bracken fern is removed, apparently because active plant toxins remain in the soil. Fire: • A fire-adapted species throughout the world. Not merely well adapted to fire, it promotes fire by producing a highly flammable layer of dried fronds every fall. Repeated fires favor bracken. • Its primary fire adaptation is deeply buried rhizomes, which sprout vigorously following fires, before most competing vegetation is established. Windborne spores may disperse over long distances. • Fire removes competition and creates the alkaline soil conditions suitable for its establishment from spores. • Fuel loading in areas dominated by bracken fern can be quite high. Associates: • Shrubs: Bunchberry (Cornus canadensis), Twinflower (Linnaea borealis) • Herbs: Wild Sarsaparilla (Aralia nudicaulis), Large Leaf Aster (Aster macrophyllus), Blue Bead Lily (Clintonia borealis), Gold Thread (Coptis trifolia), Bedstraws (Galium spp.), Oak Fern (Gymnocarpium dryopteris), Canada Mayflower (Maianthemum canadense), Bishop's Cap (Mitella nuda), One Flowered Pyrola (Moneses uniflora), One Sided Pyrola (Pyrola secunda), Rose Twisted Stalk (Streptopus rosea), Starflower (Trientalis borealis), Kidney Leaf Violet (Viola renifolia), Violets (Viola spp.) • Mammals: Palatability is usually nil to poor. History: • Considered so valuable during the Middle Ages that it was used to pay rents. • Used as roofing thatch and as fuel when a quick, hot fire was desired. • The ash was used as a source of potash in the soap and glass industry until 1860 and for making soap and bleach.
The rhizomes were used in tanning leathers and to dye wool yellow. • Also used as a green mulch and compost. • Fronds may release hydrogen cyanide (HCN) when they are damaged (cyanogenesis), particularly the younger fronds. Herbivores, including sheep, selectively graze young fronds that are acyanogenic (without HCN). Lignin, tannin, and silicate levels tend to increase through the growing season, making the plants less palatable. Cyanide (HCN) levels fall during the season, as do the levels of a thiaminase which prevents utilization of B vitamins. • Toxicity: Known to be poisonous to livestock throughout the US, Canada, and Europe. Simple-stomach animals like horses, pigs, and rats develop a thiamine deficiency within a month. Acute bracken poisoning affects the bone marrow of both cattle and sheep, causing anemia and hemorrhaging which is often fatal. Blindness and tumors of the jaws, rumen, intestine, and liver are found in sheep feeding on bracken fern. • Toxicity: All parts of brackenfern, including rootstocks, fresh or dry leaves, fiddleheads and spores, contain toxic compounds, and are poisonous to livestock and humans. Consumption of brackenfern causes vitamin B1 deficiency in horses, and toxins can pass into the milk of cattle. Young leaves of brackenfern have been used as a human food source, especially in Japan, and may be linked to increased incidence of stomach cancer. Humans working outdoors near abundant stands of the plant may be at risk from cancer-causing compounds in the spores. • Facts and Folklore: It was once thought that, if the spores of the brackenfern were gathered on St. John's Eve, they would make the possessor invisible. In the 17th century, live brackenfern was set on fire in hopes of producing rain. Brackenfern fiddleheads have been used as a food source; however, their consumption has been linked to various types of cancer in humans. Reproduction: • Reproduces by spores and vegetatively by rhizomes. • Most regeneration is vegetative.
Many have searched for young plants growing from spores, but few have found them. However, spores do germinate and grow readily in culture. • Young plants produce spores by the end of the second growing season in cultivation, but normally do not produce spores until the third or fourth growing season. A single fertile frond can produce 300,000,000 spores annually. Spore production varies from year to year depending on plant age, frond development, weather, and light exposure. Production decreases with increasing shade. The wind-borne spores are extremely small. Dry spores are very resistant to extreme physical conditions, although the germination of bracken fern spores declines from 95-96% to around 30-35% after 3 years' storage. The spores germinate without any dormancy requirement. Under favorable conditions, young plants can be found 6 to 7 weeks after the spores are shed. Under normal conditions the spores may not germinate until the spring after they are shed. • Sufficient moisture and shelter from wind are important factors in fern spore germination. Bracken fern spore germination appears to require soil sterilized by fire. On unsterilized soils spores may germinate, but the new plants are quickly overwhelmed by other growth. Temperatures between 59° and 86°F are generally best for germination, although bracken fern is capable of germination at 33°-36°F. • A pH range of 5.5 to 7.5 is optimal for germination. Germination is indifferent to light quality; it is one of the few ferns that can germinate in the dark. Despite limitations on spore germination, genotype analysis in the Northeast indicates that many stands of bracken fern represent multiple establishment of individuals from spores. • When spores germinate, they produce bisexual, gamete-bearing plants about ¼" in diameter and one cell thick. These tiny plants have no vascular system and require very moist conditions to survive.
The young spore-bearing plant which develops from the fertilized egg is initially dependent on the gametophyte until it develops its first leaf and roots. The first fronds are simple and lobed. They develop into thin, delicate fronds divided into lobed pinnae. They do not look like adult plants and are frequently not recognized as bracken fern. Cultivated plants begin to resemble adult ferns after 18 weeks. The rhizomes begin to develop after there are a number (up to 10) of fronds and a well-developed root system, or in the fifteenth week of growth under optimal conditions. In the first year rhizomes may grow to 86 inches long. By the end of a second year the rhizome system may exceed 6' in diameter. • The aggressive rhizome system gives it the ability to reproduce vegetatively and reduces dependence on water for reproduction. The rhizomatous clones can be up to 400' in diameter and hundreds of years old; some clones alive today may be over 1,000 years old. • Shaded plants produce fewer spores than plants in full sun. • Bracken fern is a survivor. The fronds are generally killed by fire, but some rhizomes survive. The rhizomes are sensitive to elevated temperatures; during fires the rhizome system is insulated by mineral soil. Depth of the main rhizome system is normally between 3½" and 12"; short rhizomes may be within 1½" of the surface, and some rhizomes may be as deep as 40". • A well-known postfire colonizer in eastern pine and oak forests. Fire benefits bracken by removing competition while it sprouts profusely from surviving rhizomes. New sprouts are more vigorous following fire, and bracken fern becomes more fertile, producing far more spores than it does in the shade. • Spores germinate well on alkaline soils, allowing them to establish in the basic conditions created by fire.
Propagation: • Division is the most successful method. Cultivation: • Hardy to USDA Zone 3 (average minimum annual temperature -40°F). • Characteristically found on soils with medium to very rich nutrients. • Cultivated and shaded plants produce fewer, thinner but larger fronds than open-grown plants. Population Genetics and Evolution In 1908, G. H. Hardy and W. Weinberg independently suggested a scheme whereby evolution could be viewed as changes in the frequency of alleles in a population of organisms. In this scheme, if A and a are alleles for a particular gene locus and each diploid individual carries two alleles at that locus, then p can be designated as the frequency of the A allele and q as the frequency of the a allele. For example, in a population of 100 individuals (each with two alleles) in which 40% of the alleles are A, p would be 0.40. The remaining 60% of the alleles would be a, and q would equal 0.60. p + q = 1. These are referred to as allele frequencies. The frequency of the possible diploid combinations of these alleles (AA, Aa, aa) is expressed as p² + 2pq + q² = 1. Hardy and Weinberg also argued that if 5 conditions are met, the population's allele and genotype frequencies will remain constant from generation to generation. These conditions are as follows: • The breeding population is large. (Reduces the problem of genetic drift.) • Mating is random. (Individuals show no preference for a particular mating type.) • There is no mutation of the alleles. • No differential migration occurs. (No immigration or emigration.) • There is no selection. (All genotypes have an equal chance of surviving and reproducing.) The Hardy-Weinberg equation describes an existing situation. Of what value is such a rule? It provides a yardstick by which changes in allelic frequencies can be measured. If a population's allelic frequencies change, it is undergoing evolution.
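The genotype-frequency arithmetic above can be checked in a few lines. A minimal sketch using the 40%/60% example from the text (the variable names are mine, not part of the lab):

```python
# Hardy-Weinberg arithmetic for the example in the text: 40% A alleles, 60% a.
p = 0.40          # frequency of allele A
q = 1.0 - p       # frequency of allele a, since p + q = 1

AA = p ** 2       # expected frequency of homozygous dominant individuals
Aa = 2 * p * q    # expected frequency of heterozygotes
aa = q ** 2       # expected frequency of homozygous recessive individuals

# The three genotype frequencies sum to 1: p^2 + 2pq + q^2 = (p + q)^2 = 1.
print(round(AA, 2), round(Aa, 2), round(aa, 2))  # 0.16 0.48 0.36
```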
Estimating Allele Frequencies for a Specific Trait within a Sample Population: Using the class as a sample population, the allele frequency of a gene controlling the ability to taste the chemical PTC (phenylthiocarbamide) can be estimated. A bitter taste reaction is evidence of the presence of a dominant allele in either a homozygous (AA) or heterozygous (Aa) condition. The inability to taste PTC is dependent on the presence of the two recessive alleles (aa). Instead of using the PTC paper, the trait for tongue rolling may be substituted. To estimate the frequency of the PTC-tasting allele in the population, one must find p. To find p, one must first determine q (the frequency of the non-tasting allele). 1. Using the PTC taste test paper, tear off a short strip and press it to your tongue tip. PTC tasters will sense a bitter taste. 2. A decimal number representing the frequency of tasters (p² + 2pq) should be calculated by dividing the number of tasters in the class by the total number of students in the class. A decimal number representing the frequency of the non-tasters (q²) can be obtained by dividing the number of non-tasters by the total number of students. You should then record these numbers in Table 8.1. 3. Use the Hardy-Weinberg equation to determine the frequencies (p and q) of the two alleles. The frequency q can be calculated by taking the square root of q². Once q has been determined, p can be determined because 1 - q = p. Record these values in Table 8.1 for the class, and also calculate and record values of p and q for the North American population.

Table 8.1 Phenotypic Proportions of Tasters and Nontasters and Frequencies of the Determining Alleles

                            Tasters (p² + 2pq)   Nontasters (q²)   p      q
Class Population            #=       %=          #=       %=
North American Population            0.55                 0.45

Topics for Discussion: 1. What is the percentage of heterozygous tasters (2pq) in your class? ______________________. 2.
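Step 3 above can be carried out directly in code. A minimal sketch using the North American figures from Table 8.1 (55% tasters, 45% nontasters); the variable names are mine, not part of the lab:

```python
import math

# Recovering allele frequencies from phenotype proportions, as in step 3.
tasters = 0.55       # fraction showing the dominant phenotype (p^2 + 2pq)
nontasters = 0.45    # fraction showing the recessive phenotype (q^2)

q = math.sqrt(nontasters)   # only q^2 is directly observable, so q = sqrt(q^2)
p = 1.0 - q                 # then p follows from p + q = 1
heterozygotes = 2 * p * q   # estimated fraction of heterozygous (Aa) tasters

print(round(p, 3), round(q, 3), round(heterozygotes, 3))  # 0.329 0.671 0.442
```

Note that q is recoverable only because the recessive phenotype uniquely identifies the aa genotype; tasters are a mix of AA and Aa, which is why the calculation must start from q², not from the taster fraction.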
What percentage of the North American population is heterozygous for the taster allele? _____________

Case Studies

Case 1 (Test of an Ideal Hardy-Weinberg Community)

The entire class will represent a breeding population, so find a large open space for the simulation. To ensure random mating, choose another student at random. In this simulation, we will assume that gender and genotype are irrelevant to mate selection. The class will simulate a population of randomly mating heterozygous individuals with an initial gene frequency of 0.5 for the dominant allele A and the recessive allele a, and genotype frequencies of 0.25 AA, 0.50 Aa, and 0.25 aa. Record this on the Data page at the end of the lab. Each member of the class will receive four cards: two cards marked A and two marked a. The four cards represent the products of meiosis. Each "parent" will contribute a haploid set of chromosomes to the next generation.
1. Turn the four cards over so the letters are not showing, shuffle them, and take the top card to contribute to the production of the first offspring. Your partner should do the same. Put the two cards together; they represent the alleles of the first offspring. One of you should record the genotype of this offspring in the Case 1 section at the end of the lab. Each student pair must produce two offspring, so all four cards must be reshuffled and the process repeated to produce a second offspring.
2. The other partner should then record the genotype of the second offspring in the Case 1 section at the end of the lab. Using the genotypes produced from the mating, you and your partner will mate again using the genotypes of the two offspring. That is, student 1 assumes the genotype of the first offspring, and student 2 assumes the genotype of the second offspring.
3. Each student should obtain, if necessary, new cards representing the alleles in his or her respective gametes after the process of meiosis.
For example, student 1 becomes genotype Aa and obtains cards A, A, a, a; student 2 becomes aa and obtains cards a, a, a, a. Each participant should randomly seek out another person with whom to mate in order to produce offspring of the next generation. Follow the same mating procedure as for the first generation, being sure to record your new genotype after each generation in the Case 1 section. Class data should be collected after each generation for five generations. At the end of each generation, remember to record the genotype that you have assumed. Your teacher will collect class data after each generation by asking you to raise your hand to report your genotype.

Allele frequency: The allele frequencies, p and q, should be calculated for the population after five generations of simulated random mating.

Number of A alleles present at the fifth generation:
Number of offspring with genotype AA _____________ × 2 = _______________ A alleles
Number of offspring with genotype Aa _____________ × 1 = _______________ A alleles
Total = ____________ A alleles

p = (total number of A alleles) / (total number of alleles in the population)

In this case, the total number of alleles in the population is equal to the number of students in the class × 2.

Number of a alleles present at the fifth generation:
Number of offspring with genotype aa _____________ × 2 = _______________ a alleles
Number of offspring with genotype Aa _____________ × 1 = _______________ a alleles
Total = ____________ a alleles

q = (total number of a alleles) / (total number of alleles in the population)

1. What does the Hardy-Weinberg equation predict for the new p and q?
2. Do the results you obtained in this simulation agree? __________ If not, why not?
3. What major assumption(s) were not strictly followed in this simulation?

Case 2 (Selection)

In this case you will modify the simulation to make it more realistic.
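The Case 1 tally reduces to a few lines of code. Here is a hedged Python sketch of that counting procedure; the genotype counts passed in are made-up illustration values, not data from a real class:

```python
# Recover p and q from class genotype counts after the fifth generation,
# counting alleles exactly as the worksheet tally does.

def allele_frequencies(n_AA, n_Aa, n_aa):
    total_alleles = 2 * (n_AA + n_Aa + n_aa)  # number of students x 2
    count_A = 2 * n_AA + 1 * n_Aa             # each AA gives 2 A's, each Aa gives 1
    count_a = 2 * n_aa + 1 * n_Aa             # each aa gives 2 a's, each Aa gives 1
    return count_A / total_alleles, count_a / total_alleles

# Hypothetical class of 30 students after five generations:
p, q = allele_frequencies(n_AA=7, n_Aa=16, n_aa=7)
print(p, q)  # 0.5 0.5 for these particular counts
```

In an ideal Hardy-Weinberg population the tally should stay near the initial p = q = 0.5; some drift in a small class is expected.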
In the natural environment, not all genotypes have the same rate of survival; that is, the environment might favor some genotypes while selecting against others. An example is the human condition sickle-cell anemia, caused by a mutation in one allele; homozygous recessive individuals do not survive to reproduce. For this simulation you will assume that the homozygous recessive individuals never survive, while heterozygous and homozygous dominant individuals always survive.

The procedure is similar to that for Case 1. Start again with your initial genotype, and produce your "offspring" as in Case 1. This time, however, there is one important difference: every time your offspring is aa, it does not reproduce. Since we want to maintain a constant population size, the same two parents must try again until they produce two surviving offspring. You may need to get new allele cards from the pool. Proceed through five generations, selecting against the homozygous recessive offspring 100% of the time. Then add up the genotype frequencies that exist in the population and calculate the new p and q frequencies in the same way as in Case 1.

Number of A alleles present at the fifth generation:
Number of offspring with genotype AA _____________ × 2 = _______________ A alleles
Number of offspring with genotype Aa _____________ × 1 = _______________ A alleles
Total = ____________ A alleles

p = (total number of A alleles) / (total number of alleles in the population)

Number of a alleles present at the fifth generation:
Number of offspring with genotype Aa _____________ × 1 = _______________ a alleles
Total = ____________ a alleles

q = (total number of a alleles) / (total number of alleles in the population)

1. How do the new frequencies of p and q compare to the initial frequencies in Case 1?
2. How has the allele frequency of the population changed?
3. Predict what would happen to the frequencies of p and q if you simulated another 5 generations.

Hardy-Weinberg Problems

1.
In Drosophila, the allele for normal-length wings is dominant over the allele for vestigial wings. In a population of 1,000 individuals, 360 show the recessive phenotype. How many individuals would you expect to be homozygous dominant and heterozygous for this trait?
2. The allele for the ability to roll one's tongue is dominant over the allele for the lack of this ability. In a population of 500 individuals, 25% show the recessive phenotype. How many individuals would you expect to be homozygous dominant and heterozygous for this trait?
3. The allele for the hair pattern called "widow's peak" is dominant over the allele for no "widow's peak." In a population of 1,000 individuals, 510 show the dominant phenotype. How many individuals would you expect of each of the three possible genotypes for this trait?
4. In a certain population, the dominant phenotype of a certain trait occurs 91% of the time. What is the frequency of the dominant allele?

Data Page

Case 1 (Hardy-Weinberg Equilibrium)
Initial Class Frequencies: AA ________ Aa ________ aa ________
My initial genotype: _______________
F1 Genotype ______ F2 Genotype ______ F3 Genotype ______ F4 Genotype ______ F5 Genotype ______
Final Class Frequencies: AA ________ Aa ________ aa ________ p ________ q ________

Case 2 (Selection)
Initial Class Frequencies: AA ________ Aa ________ aa ________
My initial genotype: _______________
F1 Genotype ______ F2 Genotype ______ F3 Genotype ______ F4 Genotype ______ F5 Genotype ______
Final Class Frequencies: AA ________ Aa ________ aa ________ p ________ q ________

Biology 198 Hardy-Weinberg Practice Questions

Remember the basic formulas:
p = frequency of the dominant allele in the population
q = frequency of the recessive allele in the population
p^2 = percentage of homozygous dominant individuals
q^2 = percentage of homozygous recessive individuals
2pq = percentage of heterozygous individuals

1. PROBLEM #1.
A. The frequency of the "aa" genotype. Answer: 36%, as given in the problem itself.
B. The frequency of the "a" allele. Answer: Since q^2 = 0.36, q (the square root of q^2) is 0.6 (60%).
C. The frequency of the "A" allele. Answer: Since q = 0.6, and p + q = 1, then p = 0.4; the frequency of A is by definition equal to p, so the answer is 40%.
D. The frequencies of the genotypes "AA" and "Aa." Answer: The frequency of AA is equal to p^2, and the frequency of Aa is equal to 2pq. So, using the information above, the frequency of AA is 16% (i.e. p^2 = 0.4 x 0.4 = 0.16) and Aa is 48% (2pq = 2 x 0.4 x 0.6 = 0.48).
E. The frequencies of the two possible phenotypes if "A" is completely dominant over "a." Answer: Because "A" is completely dominant over "a," the dominant phenotype will show if either the homozygous "AA" or heterozygous "Aa" genotype occurs, while the recessive phenotype is produced by the homozygous aa genotype. Therefore, the frequency of the dominant phenotype equals the sum of the frequencies of AA and Aa, and the recessive phenotype is simply the frequency of aa. The dominant frequency is 64% and, as shown in the first part of this question, the recessive frequency is 36%.
2. PROBLEM #2. Sickle-cell anemia is an interesting genetic disease. Normal homozygous individuals (SS) have normal blood cells that are easily infected with the malarial parasite. Thus, many of these individuals become very ill from the parasite and many die. Individuals homozygous for the sickle-cell trait (ss) have red blood cells that readily collapse when deoxygenated. Although malaria cannot grow in these red blood cells, individuals often die because of the genetic defect. However, individuals with the heterozygous condition (Ss) have some sickling of red blood cells, but generally not enough to cause mortality.
In addition, malaria cannot survive well within these "partially defective" red blood cells. Thus, heterozygotes tend to survive better than either of the homozygous conditions. If 9% of an African population is born with a severe form of sickle-cell anemia (ss), what percentage of the population will be more resistant to malaria because they are heterozygous (Ss) for the sickle-cell gene? Answer: 9% = 0.09 = ss = q^2. To find q, simply take the square root of 0.09 to get 0.3. Since p = 1 - 0.3, p must equal 0.7. 2pq = 2 (0.7 x 0.3) = 0.42, so 42% of the population are heterozygotes (carriers).
3. PROBLEM #3.
A. The frequency of the recessive allele. Answer: Since the homozygous recessive genotype for this gene (q^2) represents 4% (i.e. 0.04), the square root (q) is 0.2 (20%).
B. The frequency of the dominant allele. Answer: Since q = 0.2, and p + q = 1, then p = 0.8 (80%).
C. The frequency of heterozygous individuals. Answer: The frequency of heterozygous individuals is equal to 2pq. In this case, 2pq = 2 (0.8)(0.2) = 0.32, which means that the frequency of individuals heterozygous for this gene is 32%.
4. PROBLEM #4.
A. The percentage of butterflies in the population that are heterozygous.
B. The frequency of homozygous dominant individuals.
Answers: The first thing you'll need to do is obtain p and q. Since white is recessive (i.e. bb), and 40% of the butterflies are white, bb = q^2 = 0.4. To determine q, which is the frequency of the recessive allele in the population, simply take the square root of q^2, which works out to be 0.632 (i.e. 0.632 x 0.632 = 0.4). So, q = 0.63. Since p + q = 1, p must be 1 - 0.63 = 0.37. Now to answer the questions. First, the percentage of butterflies in the population that are heterozygous is 2pq = 2 (0.37)(0.63) = 0.47. Second, the frequency of homozygous dominant individuals is p^2 = (0.37)^2 = 0.14.
5. PROBLEM #5.
A.
The allele frequencies of each allele. Answer: Before you start, note that the allele frequencies are p and q, and that we don't have nice round numbers: the total number of individuals counted is 396 + 557 = 953. The recessive individuals are all red (q^2), and 396/953 = 0.416. Therefore, q (the square root of q^2) is 0.645. Since p + q = 1, p must equal 1 - 0.645 = 0.355.
B. The expected genotype frequencies. Answer: AA = p^2 = (0.355)^2 = 0.126; Aa = 2(p)(q) = 2(0.355)(0.645) = 0.458; and finally aa = q^2 = (0.645)^2 = 0.416 (you already knew this from part A above).
C. The number of heterozygous individuals that you would predict to be in this population. Answer: That would be 0.458 x 953 = about 436.
D. The expected phenotype frequencies. Answer: The "A" phenotype = 0.126 + 0.458 = 0.584, and the "a" phenotype = 0.416 (you already knew this from part A above).
E. Conditions happen to be really good this year for breeding, and next year there are 1,245 young "potential" biology instructors. Assuming that all of the Hardy-Weinberg conditions are met, how many of these would you expect to be red-sided and how many tan-sided? Answer: The "A" phenotype = 0.584 x 1,245 = 727 tan-sided, and the "a" phenotype = 0.416 x 1,245 = 518 red-sided (or 1,245 - 727 = 518).
6. PROBLEM #6. A very large population of randomly mating laboratory mice contains 35% white mice. White coloring is caused by the double recessive genotype, "aa". Calculate allelic and genotypic frequencies for this population. Answer: 35% white mice = 0.35, which represents the frequency of the aa genotype (q^2). The square root of 0.35 is 0.59, which equals q. Since p = 1 - q, p = 1 - 0.59 = 0.41. Now that we know the frequency of each allele, we can calculate the frequency of the remaining genotypes in the population (AA and Aa individuals).
AA = p^2 = 0.41 x 0.41 = 0.17; Aa = 2pq = 2 (0.59)(0.41) = 0.48; and as before, aa = q^2 = 0.59 x 0.59 = 0.35. If you add up all these genotype frequencies, they should equal 1.
7. PROBLEM #7. After graduation, you and 19 of your closest friends (let's say 10 males and 10 females) charter a plane to go on a round-the-world tour. Unfortunately, you all crash land (safely) on a deserted island. No one finds you, and you start a new population totally isolated from the rest of the world. Two of your friends carry (i.e. are heterozygous for) the recessive cystic fibrosis allele (c). Assuming that the frequency of this allele does not change as the population grows, what will be the incidence of cystic fibrosis on your island? Answer: There are 40 total alleles in the 20 people, of which 2 alleles are for cystic fibrosis. So, 2/40 = 0.05 (5%) of the alleles are for cystic fibrosis. That represents q. Thus, cc or q^2 = (0.05)^2 = 0.0025, so 0.25% of the F1 population will be born with cystic fibrosis.
8. PROBLEM #8.

Phenotype   Genotype   Number   Frequency
M           MM         490      0.49
MN          MN         420      0.42
N           NN         90       0.09

Using the data provided above, calculate the following:
A. The frequency of each allele in the population. Answer: Since MM = p^2, MN = 2pq, and NN = q^2, p (the frequency of the M allele) must be the square root of 0.49, which is 0.7. Since q = 1 - p, q must equal 0.3.
B. Supposing the matings are random, the frequencies of the matings. Answer: This is a little harder to figure out. Try setting up a "Punnett square" type arrangement using the three genotypes and multiplying the numbers in a manner something like this:

            MM (0.49)   MN (0.42)   NN (0.09)
MM (0.49)   0.2401*     0.2058      0.0441
MN (0.42)   0.2058      0.1764*     0.0378
NN (0.09)   0.0441      0.0378      0.0081*

C. Note that three of the six possible crosses are unique (*), but the other three occur twice (i.e. the probability of a mating occurring between these genotypes is TWICE that of the other three "unique" combinations). Thus, three of the possibilities must be doubled.
D.
MM x MM = 0.49 x 0.49 = 0.2401
MM x MN = 0.49 x 0.42 = 0.2058 x 2 = 0.4116
MM x NN = 0.49 x 0.09 = 0.0441 x 2 = 0.0882
MN x MN = 0.42 x 0.42 = 0.1764
MN x NN = 0.42 x 0.09 = 0.0378 x 2 = 0.0756
NN x NN = 0.09 x 0.09 = 0.0081
E. The probability of each genotype resulting from each potential cross. Answer: You may wish to do a simple monohybrid Punnett square cross and, if you do, you'll come out with the following result:
MM x MM → 1.0 MM
MM x MN → 0.5 MM, 0.5 MN
MM x NN → 1.0 MN
MN x MN → 0.25 MM, 0.5 MN, 0.25 NN
MN x NN → 0.5 MN, 0.5 NN
NN x NN → 1.0 NN
9. PROBLEM #9.
A. The frequency of the recessive allele in the population. Answer: We know from the above that q^2 is 1/2,500, or 0.0004. Therefore, q is the square root, or 0.02. That is the answer to our first question: the frequency of the cystic fibrosis (recessive) allele in the population is 0.02 (or 2%).
B. The frequency of the dominant allele in the population. Answer: The frequency of the dominant (normal) allele in the population (p) is simply 1 - 0.02 = 0.98 (or 98%).
C. The percentage of heterozygous individuals (carriers) in the population. Answer: Since 2pq equals the frequency of heterozygotes or carriers, the equation is as follows: 2pq = (2)(0.98)(0.02) = 0.04, or about 1 in 25 are carriers.
10. PROBLEM #10. In a given population, only the "A" and "B" alleles are present in the ABO system; there are no individuals with type "O" blood or with O alleles in this particular population. If 200 people have type A blood, 75 have type AB blood, and 25 have type B blood, what are the allelic frequencies of this population (i.e., what are p and q)? Answer: To calculate the allele frequencies for A and B, we need to remember that individuals with type A blood are homozygous AA, individuals with type AB blood are heterozygous AB, and individuals with type B blood are homozygous BB. The frequency of A equals 2 x (number of AA) + (number of AB), divided by 2 x (total number of individuals).
Thus, [2 x (200) + 75] / [2 x (200 + 75 + 25)] = 475/600 = 0.792 = p. Since q is simply 1 - p, q = 1 - 0.792 = 0.208.
11. PROBLEM #11. The ability to taste PTC is due to a single dominant allele, "T". You sampled 215 individuals in biology and determined that 150 could detect the bitter taste of PTC and 65 could not. Calculate all of the potential frequencies. Answer: First, let's go after the recessives (tt), or q^2. That is easy, since q^2 = 65/215 = 0.302. Taking the square root of q^2, you get 0.55, which is q. To get p, simply subtract q from 1, so that 1 - 0.55 = 0.45 = p. Now you want to find out what TT, Tt, and tt represent. You already know that q^2 = 0.302, which is tt. TT = p^2 = 0.45 x 0.45 = 0.2025. Tt = 2pq = 2 x 0.45 x 0.55 = 0.495. To check your own work, add 0.302, 0.2025, and 0.495; these should equal 1.0 or very close to it. This type of problem may be on the exam.
12. PROBLEM #12. (You will not have this type of problem on the exam.) What allele frequency will generate twice as many recessive homozygotes as heterozygotes? Answer: We need to solve the following equation: q^2 (aa) = 2 x the frequency of Aa, i.e. q^2 = 2(2pq), or q^2 = 4pq. We only want q, so let's eliminate p. Since p = 1 - q, we can substitute 1 - q for p, giving q^2 = 4(1 - q)q. Multiplying out the right side gives q^2 = 4q - 4q^2. Dividing both sides by q gives q = 4 - 4q. Adding 4q to both sides gives 5q = 4, so q = 4/5 = 0.8. You are unlikely to see this type of problem in this general biology course.
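Most of the problems above follow one pattern: read off q^2 from the recessive phenotype frequency, take the square root, and fill in the rest. A minimal Python sketch of that pattern; the two checks reproduce the worked answers for the sickle-cell and PTC problems above:

```python
import math

def solve_hw(recessive_freq):
    """Given the frequency of the recessive phenotype (q^2), return the
    allele and genotype frequencies predicted by Hardy-Weinberg."""
    q = math.sqrt(recessive_freq)
    p = 1.0 - q
    return {"p": p, "q": q, "AA": p * p, "Aa": 2.0 * p * q, "aa": q * q}

# Sickle-cell problem: ss = 9%, so carriers 2pq should come out to 42%.
print(round(solve_hw(0.09)["Aa"], 2))        # 0.42

# PTC problem #11: 65 nontasters out of 215 students.
print(round(solve_hw(65 / 215)["Aa"], 3))    # 0.495
```

The same function answers problems 1 through 6 as well; only the observed q^2 changes.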
John Wright joins UT Austin

I'm delighted to announce that quantum computing theorist John Wright will be joining the computer science faculty at UT Austin in Fall 2020, after he finishes a one-year postdoc at Caltech.

John made an appearance on this blog a few months ago, when I wrote about the new breakthrough by him and Anand Natarajan: namely, that MIP* (multi-prover interactive proofs with entangled provers) contains NEEXP (nondeterministic double-exponential time). Previously, MIP* had only been known to contain NEXP (nondeterministic single-exponential time). So, this is an exponential expansion in the power of entangled provers over what was previously known and believed, and the first proof that entanglement actually increases the power of multi-prover protocols, rather than decreasing it (as it could've done a priori). Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!). For more, see for example this Quanta article, or this post by Thomas Vidick, or this short story [sic] by Henry Yuen.

John grew up in Texas, so he's no stranger to BBQ brisket or scorching weather. He did his undergrad in computer science at UT Austin—my colleagues remember him as a star—and then completed his PhD with Ryan O'Donnell at Carnegie Mellon, followed by a postdoc at MIT. Besides the work on MIP*, John is also well-known for his 2015 work with O'Donnell pinning down the sample complexity of quantum state tomography. Their important result, a version of which was independently obtained by Haah et al., says that if you want to learn an unknown d-dimensional quantum mixed state ρ to a reasonable precision, then ~d² copies of ρ are both necessary and sufficient. This solved a problem that had personally interested me, and already plays a role in, e.g., my work on shadow tomography and gentle measurements.

Our little quantum information center at UT Austin is growing rapidly.
Shyam Shankar, a superconducting qubits guy who previously worked in Michel Devoret's group at Yale, will also be joining UT's Electrical and Computer Engineering department this fall. I'll have two new postdocs—Andrea Rocchetto and Yosi Atia—as well as new PhD students. We'll continue recruiting this coming year, with potential opportunities for students, postdocs, faculty, and research scientists across the CS, physics, and ECE departments as well as the Texas Advanced Computing Center (TACC). I hope you'll consider applying to join us.

With no evaluative judgment attached, I can honestly say that this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: "but isn't this really CS?" / CS departments: "isn't it really physics?" / everyone: "couldn't this whole QC thing, like, all blow over in a year?"), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we're fortunate to be doing as well as we are. We hope to continue to expand.

Meanwhile, this was an unprecedented year for CS hiring at UT Austin more generally. John Wright is one of at least four new faculty (probably more) who will be joining us. It's a good time to be in CS.

A huge welcome to John, and hook 'em Hadamards! (And for US readers: have a great 4th! Though how could any fireworks match the proof of the Sensitivity Conjecture?)

29 Responses to "John Wright joins UT Austin"

1. L Says:
Wait, isn't BBQ a complexity class?

2. asdf Says:
Wait, what? I thought entanglement and QM could be simulated classically in exponential time. So if classical MIP is NEXP, how can entanglement do better than add another layer or two of exponentials? I read Thomas Vidick's and Kevin Hartnett's articles, but not yet Henry Yuen's, but don't see much help. The whole thing seems paradoxical.
I guess it's a non-physical system because of the unbounded computation available to the provers. But still, before believing a MIP* proof, maybe we have to worry that the provers know something we don't about QM. Do the provers "merely" have unbounded Turing computation (i.e. pi-0-1 oracles), or can they go further, like to the truth set of arithmetic with arbitrary quantifier nesting? If the latter, they can generate fake "random" strings that fool pi-0-n unbounded verifiers for any finite n…

3. asdf Says:
Aha, regarding the previous: Thomas Vidick's article does help understand this, and I'll try to read it carefully and to finish my thought from earlier. The Henry Yuen story wasn't of much help though. It says what the MIP* provers can do, but not how they do it.

4. Scott Says:
L #1: I've seen BQP misspelled BPQ (including at the STOC business meeting this year), at which point you're only edit distance 1 from Texas' state food.

5. Scott Says:
asdf: The key point is that, with MIP*, no a priori upper bound is placed on how many entangled qubits the provers could have. So, imagine the verifier asks them to implement some crazy strategy involving a stack-of-exponentials number of entangled qubits. In that case, simulating the provers by brute force—by which I mean, enumerating over all their possible actions to see whether they can make the verifier accept or not—would also take a stack-of-exponentials amount of time. Nothing similar happens in the unentangled case, where the provers' strategies are always fully specified by just writing down exponentially-large truth tables. If it seems shocking to you that such a weird hypothetical scenario could actually rear its head … well, yes! Now you know why this subject has become so interesting.

6. Marshall Flax Says:
I apologize for bringing politics into this, but neither can we proceed as though nothing is happening: I must say that a country with concentration camps doesn't deserve fireworks.

7.
John Wright Says:
Thanks for the very kind blog post, Scott! I'm excited to join UT 🙂

8. Scott Says:
Marshall #6: Speaking only for myself, and not necessarily for anyone else mentioned in this post: you might be right. I'm as desirous as anyone of ending the monstrous cruelties on the border, and all the other assaults on the country's ideals being perpetrated by the criminal in the White House. We'll have an opportunity to do exactly that a year from now. It's crucial that we don't let that opportunity slip away, especially given the Democrats' unique genius for losing unlosable elections. That will mean: staying united, not tearing each other apart by factional infighting and moral preening, and (especially) refusing to cede that the other side is "the real Americans" while we're just cosmopolitan elitists. That itself, I think, would be a sufficient reason to go see the fireworks tonight, though to be honest it's not why we'll go. We'll go because they're happening regardless, the prime viewing area is like a 10-minute walk from us, we don't have other plans for tonight, and the kids will like them.

9. asdf Says:
Scott #5: thanks, that's interesting. A stack-of-exponentials number of entangled qubits I can deal with, but how do you do anything useful with them in polynomial time? Can you get that many into polynomially bounded physical space, so that the speed-of-light communication time to get at them doesn't grow exponentially or worse? Otherwise, if the number of qubits and unbounded prover computation weren't already tip-offs, ignoring the communication delays is another sign this model of computation is non-physical. And I still don't see how to get from there to the halting problem. I mean, you can solve the halting problem on a classical TM given an initial tape with enough consecutive "1"s written on it.
You just have to know that the number of 1's is finite but larger than BB(n), where n is the size of the query 😉 Meanwhile, somebody claims to have observed wavefunction collapse and been able to reverse it. You've probably seen that already, but if not: There is no Wikipedia article about "quantum trajectory theory," which is probably a bad sign ;). Some web searching indicates that it might have something to do with pilot-wave theory.

10. Scott Says:
asdf #9: In interactive proof models, unless specified otherwise, only the verifier is restricted to polynomial time. The provers can use all the resources they want. To get up to the halting problem, we'd have to let the number of entangled qubits shared by the provers (and the running time of the provers) be literally infinite. No, of course this isn't "physically realistic." When did I ever say anything about "physically realistic" in this context? 😀 Realistic or not, there's still a fundamental question about the nature of entanglement here—can our piddling, polynomial-time verifier do a test to find out, with high statistical confidence, whether the provers have a literally infinite amount of entanglement, rather than "merely" Ackermann(n) of it? We don't yet know the answer to that question, but there's now a very good chance that we'll learn it soon. Exciting, right? Certainly more exciting than that article you inflicted on me in a drive-by linking, in blatant violation of this blog's policies! 🙂 I started reading it but couldn't finish. The total unwillingness to admit that "what happens in the middle of a measurement" is perfectly well captured by the standard Schrödinger equation, if we include the coupling to the environment, and that this is the entire point of decoherence theory … no, not on July 4th. Not while I'm on vacation with the family.

11. Sebastian Oberhoff Says:
Do you mind sharing who the three other new faculty members you've alluded to are?

12.
Scott Says:
Sebastian #11: I'm not absolutely certain whether the info is public yet (but if not, it will be soon).

13. asdf Says:
Scott, sorry! I thought it was OK to post a link as long as I didn't ask you to comment on it. Is Mike & Ike still your recommended textbook for understanding enough QC to make sense of this MIP* paper? Good point about only the verifier having to be polynomially bounded and about the infinite entanglement allowed to the provers. It sounds like there's still an exponential amplification step that has to be iterated infinitely many times, giving a new(?) model of computation, since if we believe the verifier can produce arithmetically random bits, the prover system can't be classically simulated even with what you once called super duper pooper machines. Something still seems paradoxical. I've heard that essentially all the math found in physics, including QM, can be encoded in Peano arithmetic. But if MIP* can convince a verifier that a TM does or doesn't halt, it can (interactively) prove the consistency or inconsistency of PA, or your favorite stronger effective theory, to arbitrarily high certainty. I thought there were some theorems preventing that sort of thing.

14. Scott Says:
asdf #13: You're sort of mixing together a bunch of different subjects. MIP* is not so much a "new model of computation" as just an unusual complexity class—unusual because of the lack of any obvious ceiling on how much entanglement the provers might need. It's been shown, however, that MIP* can't give more power than either the halting problem or its complement (depending on exactly how MIP* with infinite entanglement is defined). So, MIP* might (or might not) go beyond the computable functions, but it doesn't go up into the arithmetical hierarchy. If MIP* contained the complement of the halting problem, then yes, the verifier could ask the provers to prove the consistency of Peano arithmetic, ZFC, etc., and efficiently verify their claim.
But keep in mind that not only would this involve provers who we already agreed were physically unrealistic—even if such provers existed, the verifier still wouldn't end up with a finite-sized proof of Con(PA) or Con(ZFC) with which to convince anyone else. PA would still be unable to prove its own consistency, and likewise for ZFC. Neither Gödel nor any other known theorem rules this possibility out. Mike & Ike is still a standard reference for the field. But if you want to learn about MIP* specifically, you'd do much better to read the literature on that subject (some of it quite recent) that precedes Natarajan-Wright, like Cleve-Hoyer-Toner-Watrous, Ito-Vidick, Reichardt-Unger-Vazirani, and Slofstra.

15. Mike Says:
asdf #13: Vidick & Watrous also have a (long) survey entitled "Quantum Proofs" that is excellent.

16. gentzen Says:
asdf, Scott: Thanks for that discussion. This possibility that MIP* could swallow the halting problem surprises and confuses me too. Here is my first confusion: Based on my understanding of MIP, I thought that MIP* only needs to convince a verifier that a TM halts (in case it actually does). If it doesn't halt, then the probability that the verifier gets convinced that it halts must be smaller than 1/3. Even with this restriction, I still have trouble accepting that MIP* might have this power (= my second confusion). But your discussion indicates that even this restriction might just be a matter of the exact definition: Can you explain how it happens that there can be an ambiguity between the halting problem or its complement, depending on the exact details of a seemingly unrelated point? Back to my second confusion: Even with this restriction, I still have trouble accepting that MIP* might have this power.

"…even if such provers existed, the verifier still wouldn't end up with a finite-sized proof of … with which to convince anyone else."
Even if a TM halts, no amount of information (+computation time) bounded by a computable function of the length of the description of the TM is in general sufficient to verify this. Can one work around that barrier by using randomness and “the verifier still cannot convince anybody else” alone, or is entanglement crucial in (possibly) overcoming that barrier? 17. Scott Says: gentzen #16: The entanglement between the provers—and moreover, unbounded entanglement between them—is crucial to everything we’re talking about here. But there are two different things one can mean by “unbounded entanglement.” First, we could say that the provers share a finite number of entangled qubits, but there’s no upper bound on how many. In that case, the halting problem is always an upper bound on what the provers can prove to the verifier, because a verifier with a HALT oracle could simply enumerate over all possible prover strategies (each of which is finite) looking for one that accepts. Alternatively, we can say that the provers share a literally infinite amount of entanglement, and can implement any strategy whatsoever where their operators commute with each other (the usual criterion for spatial separation in quantum field theory). In this case, the complement of the halting problem is always an upper bound on what the provers can prove to the verifier. The reason is harder to explain, but it follows from work by Navascues, Pironio, and Acin on semidefinite programming hierarchies. It’s currently unknown whether these two notions of “unbounded entanglement” are equivalent to each other. It’s known that the answer is yes if and only if the Connes Embedding Conjecture holds—that’s a famous unsolved problem in math from the 1970s. If MIP* turns out to be able to do the halting problem, that will imply that the two notions of entanglement are not equivalent, and therefore that the Connes Embedding Conjecture is false. I am not making any of this up. 18. 
Aula Says: Scott #17: [the provers] can implement any strategy whatsoever where their operators commute with each other Wouldn’t that kick the provers out of MIP* back into plain old MIP? I mean, the whole point of MIP* is that the provers must make non-commutative measurements, isn’t it? 19. Scott Says: Aula #18: Nope. Faraway operators always commute with each other, even if they’re applied to two halves of an entangled state. If they didn’t commute, then you wouldn’t even need entanglement, as you’d simply have a direct channel of communication. Of course the outcomes of the measurement operators won’t have a local hidden variable explanation, but that’s different. 20. gentzen Says: Scott #17: Thanks for your explanations. I am not making any of this up. Sorry, if my question implied anything like that. It was just that I had similar expectations to asdf’s, and your answers indicated that those expectations for MIP* were inappropriate. So I tried to be careful and explain where my expectations came from. In addition, it often turns out that my intuitive expectations about things related to probabilities are wrong. So I queried whether that was also the reason here why my expectations for MIP* were inappropriate. (In order to get a better feeling for the power of randomness, I now worked through proofs of Adleman’s theorem and Toda’s theorem. Turns out I had worked through a proof of Adleman’s theorem before, and that PP and Toda’s theorem are actually much less about probability than I had expected. Guess I will have to work through proofs related to IP and MIP to get a better feeling for the power of randomness when it comes to proofs.) 21. asdf Says: Scott, it really does seem to be a new model of computation since these infinite-entanglement machines can do something that classical oracle machines can’t, namely convince us puny humans of the truth or falsehood of otherwise undecidable propositions. I still don’t see how it can work.
Let’s say I believe ZFC is consistent, so there is no integer n that is the Gödel number of a proof of 1=0. But if all you and I can agree on about the integers is PA, then there *is* such an n in some (maybe nonstandard) model of PA. So we ask the MIP* verifier to settle the question and it says ZFC is inconsistent. How does it convince us there is an actual inconsistency rather than that the provers are working in a nonstandard model of arithmetic? The theorems I was thinking of earlier were the “randomness doesn’t help computability” ones (e.g. by Sacks) mentioned in [this MO thread](https://mathoverflow.net/questions/58060) but I haven’t studied them. 22. Scott Says: asdf #21: If ZFC was inconsistent, then the provers could prove that to you—and moreover, prove that they were talking about “the standard model of the integers” (i.e., what outside of logic we simply call “the integers”)—by just showing you the damn inconsistency! So in some sense, the more interesting question is whether MIP* provers could prove to you that ZFC is consistent (or more generally, that a given TM runs forever). As I said, the answer is not yet known. Having said that, there’s also a question about whether they could prove to you that a given TM halts, in time that can be upper-bounded by some computable function of the size of the TM, with no dependence on the TM’s running time. That’s also an open problem right now. Both of these problems are intimately connected to the Connes Embedding Conjecture, and to the equivalence or inequivalence of the two different notions of infinite entanglement. But if the answers to the questions are yes, then there’s just a protocol that witnesses it, and (presumably) a proof of the protocol’s completeness and soundness that can be formalized in ZFC. In which case, an MIP* proof that a given TM halts is a proof that it halts, and a proof that it doesn’t is a proof that it doesn’t … in the real world.
You don’t need to breathe a word about the formal figments of logic known as “nonstandard models of arithmetic,” any more than you would with ordinary zero-knowledge or interactive protocols, or for that matter with any other ordinary mathematical reasoning. It’s just a total irrelevancy. 23. asdf Says: Scott, with an unbounded verifier, the prover could possibly say “I’ll give you the contradiction one bit at a time, just send me another query when you’re ready for the next bit and I promise there are only finitely many of them”. Then it gives bit 1, bit 2, etc. but somehow never seems to finish: “I’m almost done! Just keep querying!”. Remember that the actual contradiction may be enormous, like the 10 zillionth iterated busy beaver function of whatever. It *might* stop someday but who knows. With a polynomially bounded verifier even that doesn’t work. Am I missing something? “Show the contradiction” seems like “show the forced checkmate” in the claim Black has a forced win in chess. The amazing thing about IP is that it shows the existence of the forced win without actually outputting the whole game tree. So I thought the MIP* equivalent would only show existence since demonstrating would be unbounded in size. 24. Scott Says: asdf #23: Yes, I said the same thing in my comment. If MIP* contained the halting problem, it would mean that the provers could prove that ZFC was inconsistent (if indeed it was) in time that was polynomially bounded in the length of the ZFC axioms, totally independent of the size of the contradiction. Conceptually, MIP* is extremely similar to IP, so if you understand the latter, you shouldn’t have a problem with MIP*. The entire point of an interactive protocol is that the verifier can become convinced of statements, without ever seeing an explicit line-by-line derivation (in this case, the actual ZFC contradiction) or being able to convince anyone else. All that matters is that the completeness and soundness conditions are satisfied. 
To review: Completeness: If a given statement is true, then the prover(s) can act in a way that causes the verifier to accept with overwhelming probability. Soundness: If a given statement is false, then no matter how the prover(s) act, the verifier will reject with overwhelming probability. That’s it. In CS theory since the 1980s, any system whatsoever that satisfies the two conditions above, no matter how weird it looks otherwise, gets to be dignified with the name “proof system.” 25. Alec Says: Just a note that Shyam is from Michel Devoret’s group at Yale (not Rob’s), although of course the two groups collaborate heavily. Excited to see him bring superconducting qubits to UT! 26. Scott Says: Alec #25: Thanks!! Fixed. 27. fred Says: The current number of people in the field is proportional to the current qubit record. 28. matt Says: If the Connes embedding conjecture is true, does that imply any limitations on the power of entangled provers? (the conjecture does have some relation to quantum games) 29. Scott Says: matt #28: Yes, I believe the Connes embedding conjecture implies MIP* is computable, since the definition that’s in RE (recursively enumerable) and the definition that’s in coRE would then coincide, and RE∩coRE = computable.
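The two conditions can be made concrete with a toy simulation. In the sketch below the single-round probabilities (0.9 when the statement is true, 1/3 when it is false) are illustrative stand-ins, not values from any actual MIP* protocol; the point is simply that a majority vote over independent rounds drives both error probabilities down exponentially, which is what "overwhelming probability" buys you.

```python
import random

def run_protocol(statement_true, rounds=1001, seed=0):
    """Toy interactive proof.  A single round passes with probability
    0.9 when the statement is true (completeness) and at most 1/3 when
    it is false (soundness).  A majority vote over many independent
    rounds makes both error probabilities exponentially small."""
    rng = random.Random(seed)
    single_round_pass = 0.9 if statement_true else 1 / 3
    passes = sum(rng.random() < single_round_pass for _ in range(rounds))
    return passes > rounds // 2  # verifier accepts on a majority

# A true statement is (almost surely) accepted, a false one rejected.
print(run_protocol(True), run_protocol(False))
```

By a Chernoff bound, the probability that the majority vote comes out wrong shrinks like exp(-c·rounds), so the exact per-round constants don't matter, only the gap between them.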
Loop quantum gravity

Loop quantum gravity is one of the main subjects of research on the problem of devising a theory capable of describing the quantum aspect of gravitation. A quantum theory of gravitation is needed to understand the birth of the universe and what happens inside black holes. In classical general relativity, these situations contain singularities with undesirable divergences of certain physical quantities. The requirement for consistency in the laws of physics also demands the unification of the laws of quantum mechanics and the laws of space-time. This is why theoretical physicists have been searching for the mythical theory of quantum gravitation for decades.

The subject of quantum gravitation is extremely vast, and it would probably take hundreds of pages to do it justice. It is well known that applying quantum mechanics to Einstein's equations of general relativity leads to infinite divergences when an attempt is made to couple the gravitational field to matter. There are, however, situations in which approximate calculations of quantum gravity can be made without uncontrollable infinite quantities emerging. This is the case in some simple cosmological models described by what is called canonical quantum gravity, introduced in the 1960s by John Wheeler and above all Bryce DeWitt.

Quantum cosmology

To summarise, the aim is to apply the standard, so-called canonical, rules of quantization to Einstein's equations, meaning that an attempt is made to cast the latter in the Hamiltonian form well known from analytical mechanics. To do this, as for the mechanics of a particle system, a phase space and a Hamiltonian function H must be introduced, the latter representing the total energy of the gravitational field + matter system.
Just as a point in the phase space represents a set of possible positions and momenta for particles moving under the action of forces, a point in the phase space of the gravitational field will represent a possible state for the geometry of space-time curved by the presence of matter, and more generally of momenta and energies. This space-time configuration space has been named superspace (not to be confused with supergravity). A Schrödinger equation can then be constructed with a wave function whose square gives the probability of finding the geometry of space-time in a given state. This is precisely the Wheeler-DeWitt equation. The problem is that, unlike with N particles, the geometry of space-time is described by a tensor field with 10 components defined at each point in space-time. As there are infinitely many such points, it is easy to understand that solving such an equation is no easy task. However, if a class of possible geometries is first defined depending only on a small number of parameters, certain calculations do become possible. This amounts to truncating the preceding phase space by "freezing" degrees of freedom so as to keep only a mini superspace.
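Schematically, and glossing over operator-ordering and regularization subtleties, the Wheeler-DeWitt equation introduced above is a constraint on the wave functional of the spatial geometry; unlike an ordinary Schrödinger equation, no time derivative appears in it:

```latex
% Wheeler-DeWitt equation: the Hamiltonian constraint annihilates the
% wave functional \Psi of the spatial metric h_{ij} and matter field \phi.
\hat{H}\,\Psi[h_{ij}, \phi] = 0
```

The absence of an explicit time variable in this constraint is the origin of the famous "problem of time" in canonical quantum gravity.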
On a sphere, transporting a vector tangent to a curve parallel to itself along a loop does not bring the vector back to itself. Thus the orientations of a tangent vector transported parallel to itself from the north pole along the path NBA and along the path NA differ by an angle α. On a flat surface, this angle would be zero. Considering a set of loops of this kind on a curved space-time, knowing the results of the parallel transport of vectors along these loops characterises the curvature and shape of space-time. It is the quantum treatment of these loops that is used to produce loop quantum gravity. © Luca Antonelli-wikipedia

The simplest case is when the homogeneous and isotropic Friedmann-Robertson-Walker (FRW) models are taken, with the simplest imaginable field as the source of the gravitational field: a scalar field φ described by a Klein-Gordon equation with a potential V(φ). The evolution over time of the gravitational field is here reduced to a single degree of freedom a(t). The Hamiltonian function of the system then takes a form similar to the one describing a particle with two position coordinates, here a(t) and φ(t), moving in a complicated potential. The quantization rules for such a system are well known in wave mechanics, and the quantum equation describing these simple models of the universe is, if anything, less complicated than those encountered in atomic or molecular physics.

The problem of the initial cosmological singularity

As hinted at above, in classical general relativity the FRW models are problematic, as are many others, because it can be shown that at t=0 the curvature of space-time becomes infinite. The very notion of space-time collapses, blocking everything. The beginning of the universe, if this notion has a meaning, is then completely out of the reach of human knowledge.
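The angle α in the caption above can be computed explicitly for geodesic triangles: by the Gauss-Bonnet theorem, the holonomy of parallel transport around such a loop on the unit sphere equals the spherical excess of the triangle, the amount by which its interior angles exceed the flat-space value of π. A minimal numerical sketch (the function name is our own):

```python
import math

def triangle_holonomy(interior_angles):
    """Holonomy angle (radians) picked up by parallel transport around
    a geodesic triangle on the unit sphere: by Gauss-Bonnet it equals
    the spherical excess, sum(angles) - pi.  On a flat plane the
    interior angles sum to exactly pi, so the holonomy vanishes."""
    return sum(interior_angles) - math.pi

# Octant loop (pole -> equator -> quarter turn along it -> back):
# three right angles, so the transported vector returns rotated by
# pi/2 -- a nonzero alpha, exactly as in the caption.
print(triangle_holonomy([math.pi / 2] * 3))  # pi/2
print(triangle_holonomy([math.pi / 3] * 3))  # flat-like triangle: ~0
```

It is the quantum version of exactly this loop data (holonomies) that loop quantum gravity promotes to operators.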
Similarly, an identical situation occurs when a star collapses to give a black hole: in classical general relativity a space-time singularity forms and the laws of physics break down. But this is not the first time that physics has been confronted with this type of problem. When the first models of atoms were being constructed, the electron orbiting the nucleus was in an unstable situation and ought to have ended up collapsing into the nucleus, creating a singularity there too, though not in space-time. The introduction of quantum mechanics and wave mechanics, with a wave function, then showed that there was only a discrete series of dynamical states accessible to the electron, the famous energy levels of the Bohr atom. The wave function describing the probability of finding the electron in a region of space "spread out" this very position, making the previous collapse impossible. John Wheeler and Bryce DeWitt had clearly indicated that a similar process should occur with their Schrödinger equation of space-time. The singularities of general relativity would therefore probably be "smoothed" by quantum treatment, preventing their formation. Results pointing in this direction had already been obtained, in the late 1960s and especially in the famous Hartle-Hawking model with imaginary time in the early 1980s. Unfortunately, as indicated above, each time it was the case of a very special situation in which the geometry of the universe was assumed not to depart very much from a certain form of homogeneity and isotropy that made it possible to considerably simplify the calculations. This is not satisfactory, because such assumptions, though justifiable in certain respects, nevertheless amount to wishful thinking. The theory should start from an arbitrary space-time, not a partially predetermined one, and it is the calculations that should give the state of this space-time.
To do this the Wheeler-DeWitt equation would have to be solved in a general, or generic, manner, but how?

Ashtekar variables and loops

A considerable breakthrough was made in the mid-1980s. The Indian physicist Abhay Ashtekar, who had been a post-doctoral student of the great Roger Penrose, introduced a formulation of Einstein's equations in the phase space of space-time that considerably simplified their Hamiltonian formulation. What he showed was a situation formally very close to that obtained with the Yang-Mills equations, especially those of QCD. The techniques of that gauge theory of the strong nuclear interaction, quantum chromodynamics, could then be transposed. It became apparent that the right variables to use were related to what are called holonomies, quantities that are calculated by parallel transport of vectors along loops in space-time.

The work of Carlo Rovelli led to the discovery that LQG predicted a discrete structure for space-time near the Planck scale. © John Baez

Lacking a general solution to the Wheeler-DeWitt equation, Lee Smolin and Carlo Rovelli managed to find large classes of solutions to it, but above all they managed to rigorously set out the solution space of this equation. Like those of all Schrödinger equations, the solutions can be brought together in an abstract vector space similar to ordinary space: the famous Hilbert space. A solution is then described by a point in that space, located by a "position vector". In the language of quantum mechanics, the wave functions corresponding to a particular space-time geometry are state vectors.
The principle of superposition of quantum states then implies that the geometry of space-time can be in a superposition of states, given by the vector sum of these vectors. The most spectacular result was that it was possible, for the geometry of space-time, to construct area and volume operators whose spectra are discrete!

A discrete space-time for cosmology and the entropy of black holes

It is known that in quantum mechanics, quantities such as energy or momentum are given by operators. Acting on the wave function, which is mathematically similar to the function describing a light wave, the energy operator extracts the various components of the spectrum of this wave. In the case of the hydrogen atom this gives discrete energy levels, and orbits with a discrete series of distances of the electron from the nucleus. The situation is really very similar, because Bohr's correspondence principle also applies to the spectrum of areas and volumes. As the quantum number of larger and larger orbits increases, the difference between the energy levels becomes smaller and smaller, as do the spatial distances separating the orbitals. The discrete spectrum becomes continuous and quantum physics merges into classical physics. Thus, for increasingly large volumes, the classical notion of continuous space-time reappears. Martin Bojowald was one of the first to see the implications in cosmology of the discrete character of space-time at the Planck scale given by the LQG equations. He applied these to the FRW models above and discovered several things. Even within the mini-superspace approximation, the situation is much more under control, because the discreteness of the geometry of space-time already considerably reduces the number of degrees of freedom available to space-time on approaching the primordial singularity.
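The correspondence-principle behaviour described here is easy to check numerically with the Bohr formula E_n = -13.6 eV / n² (the helper function below is purely illustrative): the spacing between adjacent levels falls off roughly as 1/n³, so for large quantum numbers the discrete spectrum is effectively continuous.

```python
def hydrogen_level(n):
    """Bohr energy of hydrogen level n, in electron-volts."""
    return -13.6 / n**2

# Gap between level n and n+1, for increasingly "classical" orbits:
# the gap collapses toward zero as n grows.
for n in (1, 10, 100):
    gap = hydrogen_level(n + 1) - hydrogen_level(n)
    print(f"n={n:3d}  gap = {gap:.2e} eV")
```

The same limiting behaviour is what lets the discrete area and volume spectra of LQG reproduce a continuous-looking space-time at scales far above the Planck length.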
This makes it possible to better justify the use of mini superspaces from the very structure of LQG. It then appears that a curvature operator for space-time and a "position" operator a(t) can be defined for the factor describing the expansion of the FRW universe. As with the results obtained in full LQG, the spectrum of these operators is discrete. Amazingly, whereas the spectrum of the latter operator has a value 0 corresponding to zero volume, the curvature has an upper bound, and the space-time singularity is eliminated! There is then no problem in extending the structure of space-time to before what for us corresponds to time 0: there is a "before the Big Bang". If the expansion factor of the universe is plotted against time, it behaves like a ball bouncing elastically forever. This type of theory is called a "bouncing Universe". It might be thought that each expansion-contraction cycle reproduces an identical universe each time, but Bojowald pointed out that these quantum equations make the parameters describing each new cycle indeterminate, each time resulting in a loss of some of the information about the previous state. The discrete nature of areas in LQG has also been used to derive the relationship between the entropy of a black hole and the area of its horizon. This value had been obtained previously using string theory for some supersymmetric black holes. In LQG the result is, however, valid for a Schwarzschild black hole, of the type known to exist in nature.

One of the creators of loop quantum gravity, the theoretical physicist Abhay Ashtekar. © Kavli Institute
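The bounce described above is commonly summarized by the effective Friedmann equation of loop quantum cosmology, a standard result (not derived in this article) in which a quantum correction switches off gravitational attraction at a critical, Planck-scale density:

```latex
% Effective Friedmann equation of loop quantum cosmology, with
% H = \dot{a}/a the Hubble rate and \rho_c a critical density of
% the order of the Planck density:
H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)
% H = 0 when \rho = \rho_c: the contraction halts there and
% reverses, producing the "bounce" instead of a singularity.
```

For ρ much smaller than ρ_c the correction term is negligible and the ordinary Friedmann equation of classical cosmology is recovered.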
Quantum Hamilton-Jacobi equation Vipul Periwal Department of Physics, Princeton University, Princeton, New Jersey 08544 The nontrivial transformation of the phase space path integral measure under certain discretized analogues of canonical transformations is computed. This Jacobian is used to derive a quantum analogue of the Hamilton-Jacobi equation for the generating function of a canonical transformation that maps any quantum system to a system with a vanishing Hamiltonian. A formal perturbative solution of the quantum Hamilton-Jacobi equation is given. A remarkable formulation of classical dynamics is provided by the Hamilton-Jacobi equation: If satisfies where is the Hamiltonian, then the canonical transformation defined by maps the dynamical system governed by the Hamiltonian to a trivial dynamical system, one with vanishing Hamiltonian. To see this, note that using eq. 1. Boundary terms do not affect the phase space equations of motion, so this mapping determines identical classical dynamics[1]. The function is Hamilton’s principal function, or action, which acquires a greater significance in quantum mechanics[2, 3]. Quantum mechanically, canonical transformations of the form considered above do not generate equivalent quantum systems[4, 5, 6]. There is no natural action of the group of symplectomorphisms on the quantum Hilbert space. Alternatively, in Feynman’s formulation of quantum mechanics[3], the phase space path integral is not invariant under canonical transformations. The non-invariance of phase space (and coördinate space) path integral measures has been the focus of a great deal of work[6]. In the present work, the general problem of symplectic transformations will not be considered—I shall just consider the properties of the phase space path integral under the discretized analogues of canonical transformations of a particular type. The motivation is to answer the following: Is there a deformation of eq. 
1 which allows a quantum mechanical map from an arbitrary quantum system to one with a vanishing Hamiltonian? This question has attracted some attention in the recent literature[7, 8]. After a short review of the path integral formulation to make the measure precise, I will compute the transformation of the measure under the transformations that keep the discretized term in the action invariant (up to total derivatives). These transformations differ from canonical transformations due to the discretization of the phase space path integral, so the Jacobian for the change of variables in the path integral is nontrivial. An important consistency check is satisfied by the result: the change is consistent with the group property of the canonical transformations considered, in the continuum limit. A particular application of this result gives the desired deformation of the Hamilton-Jacobi equation, with deformation parameter the Planck constant. From this, the quantum Hamilton-Jacobi equation, eq. 17, is immediate. The solution of eq. 17 as a formal perturbative series takes a simple form, eq. LABEL:series. We compute as a functional integral, choosing the momentum state to position state amplitude to obtain a symplectically invariant form for the path integral measure. Note and if is ordered so that all momentum operators appear on the left, Assume that the Hamiltonian is time-independent for notational simplicity, since the generalization to arbitrary Hamiltonians is trivial. Since with using between every factor of we find where Here, and and and are integrated over. In the continuum limit, and the measure can be described heuristically as an integration over all phase space paths satisfying with and integrated over. For the pitfalls in such continuum descriptions, see [4, 5, 6]. Eq. 5 can now be used to consider the properties of the phase space path integral under canonical transformations. 
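For reference, the classical Hamilton-Jacobi equation (the paper's eq. 1) and the trivializing canonical transformation generated by Hamilton's principal function S take the standard textbook form:

```latex
% Classical Hamilton-Jacobi equation (eq. 1) for Hamilton's
% principal function S(q, t):
\frac{\partial S}{\partial t} + H\!\left(q,\, \frac{\partial S}{\partial q}\right) = 0
% The canonical transformation generated by S, with momenta
%   p = \partial S / \partial q,
% has the new Hamiltonian K = H + \partial S/\partial t = 0, so the
% transformed dynamics is trivial (a vanishing Hamiltonian).
```

The paper's question is then whether this construction survives quantization, i.e. whether a deformed S can trivialize the phase space path integral rather than just the classical equations of motion.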
The measure is clearly invariant under arbitrary –dependent canonical transformations as a straightforward mathematical fact. However, is not invariant under such transformations. The point of the following exercise is to find a transformation of integration variables that changes the term in in a simple way, and then to compute the Jacobian for this transformation. Consider defining functions implicitly by means of the following definitions, for arbitrary functions Now observe that with the first term in a telescoping series when summed over Note that eq. 7 has no dependence on Thus one finds Comparing eq. 8 with eq. 1, this is the form expected if time is discretized. I must now compute the effect of the substitutions in eq. 6 on the measure. Keeping fixed, I find that The Jacobian for the change of variables is therefore non-trivial. It is not possible to proceed further without some knowledge of the relation between the canonical variables with subscripts and the variables with subscripts in other words, without some restriction on the sequences and as I will come back to these restrictions momentarily. At a formal level, assuming that and are small as , it follows from eq.’s 9,10 that We can also derive the analogue of eq. 11 for Eq.’s 11,12 determine Jacobians that differ by the sign of the total time derivative contribution, indicating that this is a non-universal artifact of the discretization. Such contributions are, of course, to be expected, since the relation of the index to the continuum time variable for need not be the same. We use the ultralocality of the phase space measure to eliminate this total derivative contribution by averaging the Jacobians determined by eq.’s 11,12—heuristically, one can interpret this as setting the time associated with midway between and So, finally, assuming that is chosen to become a differentiable function of as we find where we use Eq. 
13 has exactly the form that one expects, in the continuum limit, since successive canonical transformations obey a group law that is consistent with the form of the Jacobian. This is an important consistency check on the calculation. In hindsight, therefore, one merely needed to fix the coefficient in front of this term. We can check this Jacobian by performing an explicit calculation in any quantum mechanics problem, since the measure’s transformation properties are universal, i.e., independent of the Hamiltonian. A simple choice of Hamiltonian is the harmonic oscillator. In this case, one knows[3] that Choose This choice of amounts to with and satisfies the classical Hamilton-Jacobi equation, eq. 1. According to the calculations above (eq.’s 5,8), performing some trivial integrations the transition amplitude should equal Comparing this form to eq. 14, we find exact agreement. Eq. 13 implies that under the transformation defined by eq. 6, Thus, using eq. 8 and restoring , if satisfies eq. 6 will map the quantum system to a quantum system with a vanishing Hamiltonian. The telescoping terms in eq. 7 give rise to boundary terms in the path integral of and What are the conditions for the validity of the formal manipulations that lead from eq.’s 9,10 to eq. 13? The measure on phase space with the Hamiltonian must be concentrated on paths such that tends to zero with and similarly for with the measure determined by the transformed Hamiltonian. This is true with quite mild restrictions[5] on for and similar restrictions on for The smoothness of paths is trivially true after the change of variables if satisfies eq. 17, since the action is just In this context, it should be noted that the form of the transformed Hamiltonian, is only valid in the limit—for finite one must work with the discrete forms for all quantities, including the substitutions for in the Hamiltonian. It is difficult to make general statements about the discretized theory, as is well-known. 
The applications of eq. 17 to field-theoretic problems may be more interesting, for ordering difficulties in field theory are usually absorbed into renormalization constants[5]. Eq. 17 may appear to be a simple deformation of eq. 1, but in fact it is not. According to Jacobi’s theorem[1], finding a sufficient number of solutions of eq. 1 allows one to solve the dynamics of the system—the key point is that the variables are integration constants for these solutions, an interpretation possible since they do not appear in eq. 1 explicitly. This interpretation is not possible for eq. 17, so a priori one has to find appropriate choices of before one can even attempt to solve this equation, unless one treats as a perturbation parameter. Since such a perturbative solution is not a good approximation in general, one may be led to conclude that eq. 17 is of less practical value in quantum mechanics than eq. 1 is in classical mechanics. Nevertheless, eq. 17 is simple, and of conceptual value in understanding the classical limit of quantum mechanics. A formal solution to eq. 17 can be found as follows: Let Then The solution to this set of equations is obtained by the method of characteristic projections. Let be a complete integral of eq. 1, which of course coincides with the first equation in eq. LABEL:series, and a solution of which is just one of the classical equations of motion. Then is a solution of with analogous equations for We see, therefore, that the integral surfaces, indexed by of eq. LABEL:series, depend on the behaviour of integral surfaces as functions of Thus, the perturbative solution of eq. 17 incorporates information about quantum fluctuations by its dependence on the complete integral of eq. 1 at neighbouring values of It would be interesting to see if exactly solvable quantum mechanics problems can be interpreted as explicit solutions of eq. 17. Eq. 
13 shows, further, that the transformation to classical action-angle variables leaves behind a non-trivial Hamiltonian, which takes into account quantum fluctuations. Classical canonical transformations that solve eq. 1, and satisfy will also solve the quantum dynamics, with the anomalous term serving as a computation of the fluctuation determinant about classical solutions, as in the harmonic oscillator considered above. The formulation considered above for canonical transformations may be too limited. The variables have a fundamentally different rôle to play in eq. 17 as compared to eq. 1, and it may be natural to look for solutions in which describe a non-commutative symplectic manifold. This is suggested by the fact that the quantum energy spectrum could have discrete and/or continuous components, and such a space cannot always be described as a commuting symplectic manifold[9]. In such a case the form of the anomaly will be different. It would be fascinating if quantum mechanics on a commuting phase space could be mapped to a vanishing Hamiltonian on a (possibly) non-commuting phase space. To conclude, I mention that two recent works[7, 8] have addressed related issues. In [7], it is claimed that the complete solution of the classical Hamilton-Jacobi equation, eq. 1, determines the quantum mechanical amplitude by means of a single momentum integration instead of a path integral. While the path integration of the trivial quantum mechanics with vanishing Hamiltonian indeed reduces to (a variant of) a phase space integration as mentioned above (and explicitly found in the case of the harmonic oscillator, eq. 15), eq. 17 is distinct from the classical equation, so it appears to contradict [7]. [8] postulates a diffeomorphic covariance principle, based partly on an SL(2,C) algebraic symmetry of a Legendre transform, and finds a modification of the classical Hamilton-Jacobi equation that has appropriate covariance properties for the postulated equivalence. 
Their function satisfies an equation quite different from eq. 17, and it is argued that is related to solutions of the Schrödinger equation. Functional integrals of any sort do not appear in [8], and there is no relation to the present result, eq. 13. I am grateful to S. Treiman and A. Anderson for helpful conversations, and I. Klebanov and W. Taylor for comments. This work was supported in part by NSF grant PHY96-00258.
zbMATH — the first resource for mathematics Some remarks on the nonlinear Schrödinger equation in the critical case. (English) Zbl 0694.35170 Nonlinear semigroups, partial differential equations and attractors, Proc. Symp., Washington/DC 1987, Lect. Notes Math. 1394, 18-29 (1989). [For the entire collection see Zbl 0673.00012.] For the nonlinear Schrödinger equation \(iu_t+\Delta u=g(u)\), \(u(0,\cdot)=\phi(\cdot)\) in \(\mathbb{R}^n\), with \(g(u)=\lambda|u|^{\alpha}u\), it was proved, among other things, that when \(\alpha=4/n\), for every \(\phi\in L^2(\mathbb{R}^n)\), there exists a unique maximal solution \(u\in C([0,T^*);L^2(\mathbb{R}^n))\cap L^{\alpha+2}_{loc}([0,T^*);L^{\alpha+2}(\mathbb{R}^n))\). If \(n\geq 3\) and \(\alpha=4/(n-2)\), then, for every \(\phi\in H^1(\mathbb{R}^n)\), there exists a maximal solution \(u\in C([0,T^*);H^1(\mathbb{R}^n))\cap C^1([0,T^*);H^{-1}(\mathbb{R}^n))\). The cases discussed are critical. The approach is based on some sharp dispersive estimates for the linear Schrödinger equation. Reviewer: J.Yong 35Q99 Partial differential equations of mathematical physics and other areas of application
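The two exponents in the review are critical in the scaling sense. A standard check (my addition, not part of the review): the equation with \(g(u)=\lambda|u|^{\alpha}u\) is invariant under the rescaling \(u_\mu(t,x)=\mu^{2/\alpha}u(\mu^2 t,\mu x)\), and the rescaled data satisfy

```latex
\|u_\mu(0,\cdot)\|_{L^2}
  = \mu^{\frac{2}{\alpha}-\frac{n}{2}}\,\|u(0,\cdot)\|_{L^2},
\qquad
\|\nabla u_\mu(0,\cdot)\|_{L^2}
  = \mu^{\frac{2}{\alpha}+1-\frac{n}{2}}\,\|\nabla u(0,\cdot)\|_{L^2},
```

so the \(L^2\) norm of the data is scale-invariant exactly when \(\alpha=4/n\), and the \(\dot H^1\) norm exactly when \(\alpha=4/(n-2)\) — the two cases treated above.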
Re-writing Feynman’s Lectures? I have a crazy new idea: a complete re-write of Feynman’s Lectures. It would be fun, wouldn’t it? I would follow the same structure—but start with Volume III, of course: the lectures on quantum mechanics. We could even re-use some language—although we’d need to be careful so as to keep Mr. Michael Gottlieb happy, of course. 🙂 What would you think of the following draft Preface, for example? The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is—the theory of relativity, quantum mechanics, and other modern ideas—and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. The problem is whether or not we can make a course which would save them by maintaining their enthusiasm. The lectures here are not in any way meant to be a survey course, but are very serious. I thought it would be best to re-write Feynman’s Lectures to make sure that most of the above-mentioned enthusiastic and smart people would be able to encompass (almost) everything that is in the lectures. 🙂 This is the link to Feynman’s original Preface, so you can see how my preface compares to his: same-same but very different, they’d say in Asia. 🙂 Doesn’t that sound like a nice project? 🙂 Jean Louis Van Belle, 22 May 2020 Post scriptum: It looks like we made Mr. Gottlieb and/or MIT very unhappy already: the link above does not work for us anymore (see what we get below). That’s very good: it is always nice to start a new publishing project with a little controversy. 🙂 We will have to use the good old paper print edition. We recommend you buy one too, by the way. 🙂 I think they are just a bit over US$100 now. Well worth it! 
To set the historical record straight, the reader should note we started this blog before Mr. Gottlieb brought Feynman’s Lectures online. We actually wonder why he would be bothered by us referring to it. That’s what classical textbooks are for, aren’t they? They create common references to agree or disagree with, and why put a book online if you apparently don’t want it to be read or discussed? Noise like this probably means I am doing something right here. 🙂 Post scriptum 2: Done! Or, at least, the first chapter is done! Have a look: here is the link on ResearchGate and this is the link on Phil Gibbs’ site. Please do let me know what you think of it—whether you like it or not or, more importantly, what logic makes sense and what doesn’t. 🙂 Making sense of it all In recent posts, we have been very harsh in criticizing mainstream academics for not even trying to make sense of quantum mechanics—labeling them as mystery wallahs or, worse, as Oliver Consa does, frauds. While we think the latter criticism is fully justified – we can and should think of some of the people we used to admire as frauds now – I think we should also acknowledge that most professional physicists are actually doing what we all are doing and that is to, somehow, try to make sense of it all. Nothing more, nothing less. However, they are largely handicapped: unlike them, we can say or write whatever we want, and we do not need to think about editorial lines. In other words: we are free to follow logic and practice real science. Let me insert a few images here to lighten the discussion. One is a cartoon from the web and the other was sent to me by a friendly academic. As for the painting, if you don’t know him already, you should find out for yourself. 🙂 Both mainstream as well as non-mainstream insiders and outsiders are having very heated discussions nowadays.
When joining such discussions, I think we should start by acknowledging that Nature is actually difficult to understand: if it were easy, we would not be struggling with it. Hence, anyone who wants you to believe it is actually all easy and self-evident is a mystery wallah or a fraud too—at the other end of the spectrum! For example, I really do believe that the ring current model of elementary particles elegantly combines wave-particle duality and, therefore, avoids countless dichotomies (such as the boson-fermion dichotomy, for example) that have hampered mankind’s understanding of what an elementary particle might actually be. At the same time, I also acknowledge that the model raises its own set of very fundamental questions (see our paper on the nature of antimatter and some other unresolved issues) and can, therefore, be challenged as well. In short, I don’t want to come across as being religious about our own interpretation of things because it is what it is: an interpretation of things we happen to believe in. Why? Because it happens to come across as being more rational, simpler or – to use Dirac’s characterization of a true theory – just beautiful. So why are we having so much trouble accepting the Copenhagen interpretation of quantum mechanics? Why are we so shocked by Consa’s story on man’s ambition in this particular field of human activity—as opposed to, say, politics or business? It’s because people like you and me thought these men were like us—much cleverer, perhaps, but, otherwise, totally like us: people searching for truth—or some basic version of it, at least! That’s why Consa’s conclusion hurts us so much: “QED should be the quantized version of Maxwell’s laws, but it is not that at all.
[…] QED is a bunch of fudge factors, numerology, ignored infinities, hocus-pocus, manipulated calculations, illegitimate mathematics, incomprehensible theories, hidden data, biased experiments, miscalculations, suspicious coincidences, lies, arbitrary substitutions of infinite values and budgets of 600 million dollars to continue the game.” Amateur physicists like you and me thought we were just missing something: some glaring (in)consistency in their or our theories which we just couldn’t see but that, inevitably, we would suddenly stumble upon while wracking our brains trying to grind through it all. We naively thought all of the sleepless nights, all the agony and all the sacrifices in terms of time and trouble would pay off, one day, at least! But, no, we’ve been wasting countless years to try to understand something which one can’t understand anyway—something which is, quite simply, not true. It was nothing but a bright shining lie and our anger is, therefore, fully justified. It sure did not do much to improve our mental and physical well-being, did it? Such indignation may be justified but it doesn’t answer the more fundamental question: why did we even bother? Why are we so passionate about these things? Why do we feel that the Copenhagen interpretation cannot be right? One reason, of course, is that we were never alone here. The likes of Einstein, Dirac, and even Bell told us all along. Now that I think of it, all mainstream physicists that I know are critical of us – amateur physicists – but, at the same time, are also openly stating that the Standard Model isn’t satisfactory—and I am really thinking of mainstream researchers here: the likes of Zwiebach, Hossenfelder, Smolin, Gasparan, Batelaan, Pohl and so many others: they are all into string theory or, else, trying to disprove this or that quantum-mechanical theorem. [Batelaan’s research on the exchange of momentum in the electron double-slit experiment, for example, is very interesting in this regard.]
In fact, now that I think of it: can you give me one big name who is actually passionate about the Standard Model—apart from one or two Nobel Prize winners who got an undeserved prize for it? If no one thinks it can be right, then why can’t we just accept it just isn’t? I’ve come to the conclusion the ingrained abhorrence – both of professional as well as of amateur physicists – is rooted in this: the Copenhagen interpretation amounts to a surrender of reason. It is, therefore, not science, but religion. Stating that it is a law of Nature that even experts cannot possibly understand Nature “the way they would like to”, as Richard Feynman put it, is both intuitively as well as rationally unacceptable. Intuitively—and rationally? That’s a contradictio in terminis, isn’t it? We don’t think so. I think this is an outstanding example of a locus in our mind where intuition and rationality do meet each other. Mainstream QM: A Bright Shining Lie Dear fellow scientist, Something is Rotten in the State of QED Best Regards, Oliver Consa The Mystery Wallahs I’ve been working across Asia – mainly South Asia – for over 25 years now. You will google the exact meaning but my definition of a wallah is someone who deals in something: it may be a street vendor, or a handyman, or anyone who brings something new. I remember I was one of the first to bring modern mountain bikes to India, and they called me a gear wallah—because they were absolutely fascinated with the number of gears I had. [Mountain bikes are now back to a 2 by 10 or even a 1 by 11 set-up, but I still like those three plateaux in front on my older bikes—and, yes, my collection is becoming way too large but I just can’t do away with it.] In any case, let me explain the title of this post. I stumbled on the work of the research group around Herman Batelaan in Nebraska. Absolutely fascinating!
Not only did they actually do the electron double-slit experiment, but their ideas on an actual Stern-Gerlach experiment with electrons are quite interesting: I also want to look at their calculations on momentum exchange between electrons in a beam: Outright fascinating. Brilliant! […] It just makes me wonder: why is the outcome of this 100-year-old battle between mainstream hocus-pocus and real physics so undecided? I’ve come to think of mainstream physicists as peddlers in mysteries—whence the title of my post. It’s a tough conclusion. Physics is supposed to be the King of Science, right? Hence, we shouldn’t doubt it. At the same time, it is kinda comforting to know the battle between truth and lies rages everywhere—including inside of the King of Science. Quantum math: garbage in, garbage out? This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post. Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who wouldn’t appreciate the casual style of what follows can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.] My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt, I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them otherwise. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on.
When re-reading it, I realize why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually being explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits so much: “Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.” So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂 Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the word is very misleading, because it actually doesn’t describe an actual physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it? Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the base of our reference frame doesn’t matter: because we’re using real vectors (such as the electric or magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.
In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – has only two degrees of freedom, so to speak: its so-called real and its imaginary dimension. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave. I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object! Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality.
However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real. Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16) Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem now. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such an approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense. I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so either – for reasons I’ll explain later.
🙂 Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective. Wavefunctions, perspectives, reference frames, representations and symmetries Ouff ! This title is quite a mouthful, isn’t it? 🙂 So… What’s the topic of the day? Well… In our previous posts, we developed a few key ideas in regard to a possible physical interpretation of the (elementary) wavefunction. It’s been an interesting excursion, and I summarized it in another pre-publication paper on the open site. In my humble view, one of the toughest issues to deal with when thinking about geometric (or physical) interpretations of the wavefunction is the fact that a wavefunction does not seem to obey the classical 360° symmetry in space. In this post, I want to muse a bit about this and show that… Well… It does and it doesn’t. 
It’s got to do with what happens when you change from one representational base (or representation, tout court) to another which is… Well… Like changing the reference frame but, at the same time, it is also more than just a change of the reference frame—and so that explains the weird stuff (like that 720° symmetry of the amplitudes for spin-1/2 particles, for example). I should warn you before you start reading: I’ll basically just pick up some statements from my paper (and previous posts) and develop some more thoughts on them. As a result, this post may not be very well structured. Hence, you may want to read the mentioned paper first. The reality of directions Huh? The reality of directions? Yes. I warned you. This post may cause brain damage. 🙂 The whole argument revolves around a thought experiment—but one whose results have been verified in zillions of experiments in university student labs so… Well… We do not doubt the results and, therefore, we do not doubt the basic mathematical results: we just want to try to understand them better. So what is the set-up? Well… In the illustration below (Feynman, III, 6-3), Feynman compares the physics of two situations involving rather special beam splitters. Feynman calls them modified or ‘improved’ Stern-Gerlach apparatuses. The apparatus basically splits and then re-combines the two new beams along the z-axis. It is also possible to block one of the beams, so we filter out only particles with their spin up or, alternatively, with their spin down. Spin (or angular momentum or the magnetic moment) as measured along the z-axis, of course—I should immediately add: we’re talking the z-axis of the apparatus here. [Figure: rotation about z] The two situations involve a different relative orientation of the apparatuses: in (a), the angle is 0°, while in (b) we have a (right-handed) rotation of 90° about the z-axis.
He then proves—using geometry and logic only—that the probabilities and, therefore, the magnitudes of the amplitudes (denoted by C+ and C− and C′+ and C′− in the S and T representation respectively) must be the same, but the amplitudes must have different phases, noting—in his typical style, mixing academic and colloquial language—that “there must be some way for a particle to tell that it has turned a corner in (b).” The various interpretations of what actually happens here may shed some light on the heated discussions on the reality of the wavefunction—and of quantum states. In fact, I should note that Feynman’s argument revolves around quantum states. To be precise, the analysis is focused on two-state systems only, and the wavefunction—which captures a continuum of possible states, so to speak—is introduced only later. However, we may look at the amplitude for a particle to be in the up- or down-state as a wavefunction and, therefore (but do note that’s my humble opinion once more), the analysis is actually not all that different. We know, from theory and experiment, that the amplitudes are different. For example, for the given difference in the relative orientation of the two apparatuses (90°), we know that the amplitudes are given by C′+ = e^(i·φ/2)C+ = e^(i·π/4)C+ and C′− = e^(−i·φ/2)C− = e^(−i·π/4)C− respectively (the amplitude to go from the down to the up state, or vice versa, is zero). Hence, yes, we—not the particle, Mr. Feynman!—know that, in (b), the electron has, effectively, turned a corner. The more subtle question here is the following: is the reality of the particle in the two setups the same? Feynman, of course, stays away from such philosophical questions. He just notes that, while “(a) and (b) are different”, “the probabilities are the same”. He refrains from making any statement on the particle itself: is it or is it not the same? The common sense answer is obvious: of course, it is! The particle is the same, right?
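The transformation law just quoted is easy to check numerically. Here is a minimal sketch of mine (the function name and the sample amplitudes are made up for illustration; the phase convention is the one quoted above): the phases change, the probabilities do not, and a full 360° turn flips the overall sign of the amplitudes—which is exactly the 720° symmetry we keep running into.

```python
import numpy as np

def rotate_z(c_plus, c_minus, phi):
    """Transform spin-1/2 amplitudes under a rotation by phi about z:
    C'+ = e^(+i*phi/2) C+,  C'- = e^(-i*phi/2) C-."""
    return np.exp(1j * phi / 2) * c_plus, np.exp(-1j * phi / 2) * c_minus

c_plus, c_minus = 0.6, 0.8j                      # an arbitrary normalized state
cp, cm = rotate_z(c_plus, c_minus, np.pi / 2)    # the 90-degree case in the text

# Probabilities are unchanged; only the phases shift (by +pi/4 and -pi/4).
print(abs(cp) ** 2, abs(cm) ** 2)                # ≈ 0.36, ≈ 0.64 — same as before

# A full 360-degree turn multiplies both amplitudes by e^(i*pi) = -1:
cp2, _ = rotate_z(c_plus, c_minus, 2 * np.pi)
print(cp2)                                       # ≈ -0.6: only 720° restores the state
```

The sign flip after 360° is invisible in the probabilities, which is why it takes an interference experiment (or a belt trick) to notice it.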
In (b), it just took a turn—so it is just going in some other direction. That’s all. However, common sense is seldom a good guide when thinking about quantum-mechanical realities. Also, from a more philosophical point of view, one may argue that the reality of the particle is not the same: something might—or must—have happened to the electron because, when everything is said and done, the particle did take a turn in (b). It did not in (a). [Note that the difference between ‘might’ and ‘must’ in the previous phrase may well sum up the difference between a deterministic and a non-deterministic world view but… Well… This discussion is going to be way too philosophical already, so let’s refrain from inserting new language here.] Let us think this through. The (a) and (b) set-ups are, obviously, different but… Wait a minute… Nothing is obvious in quantum mechanics, right? How can we experimentally confirm that they are different? Huh? I must be joking, right? You can see they are different, right? No. I am not joking. In physics, two things are different if we get different measurement results. [That’s a bit of a simplified view of the ontological point of view of mainstream physicists, but you will have to admit I am not far off.] So… Well… We can’t see those amplitudes and so… Well… If we measure the same thing—same probabilities, remember?—why are they different? Think of this: if we look at the two beam splitters as one single tube (an ST tube, we might say), then all we did in (b) was bend the tube. Pursuing the logic that says our particle is still the same even when it takes a turn, we could say the tube is still the same, despite us having wrenched it over a 90° corner. Now, I am sure you think I’ve just gone nuts, but just try to stick with me a little bit longer. Feynman actually acknowledges the same: we need to experimentally prove (a) and (b) are different.
He does so by adding a third apparatus (U), as shown below, whose relative orientation to T is the same in both (a) and (b), so there is no difference there. [Figure: third apparatus] Now, the axis of U is not the z-axis: it is the x-axis in (a), and the y-axis in (b). So what? Well… I will quote Feynman here—not (only) because his words are more important than mine but also because every word matters here: “The two apparatuses in (a) and (b) are, in fact, different, as we can see in the following way. Suppose that we put an apparatus in front of S which produces a pure +x state. Such particles would be split into +z and −z beams in S, but the two beams would be recombined to give a +x state again at P1—the exit of S. The same thing happens again in T. If we follow T by a third apparatus U, whose axis is in the +x direction and, as shown in (a), all the particles would go into the + beam of U. Now imagine what happens if T and U are swung around together by 90° to the positions shown in (b). Again, the T apparatus puts out just what it takes in, so the particles that enter U are in a +x state with respect to S. But U now analyzes for the +y state (with respect to S), which is different. By symmetry, we would now expect only one-half of the particles to get through.” I should note that (b) shows the U apparatus wide open so… Well… I must assume that’s a mistake (and should alert the current editors of the Lectures to it): Feynman’s narrative tells us we should also imagine it with the minus channel shut. In that case, it should, effectively, filter approximately half of the particles out, while they all get through in (a). So that’s a measurement result which shows the direction, as we see it, makes a difference. Now, Feynman would be very angry with me—because, as mentioned, he hates philosophers—but I’d say: this experiment proves that a direction is something real. Of course, the next philosophical question then is: what is a direction?
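Before answering, note that Feynman’s “one-half of the particles” expectation can be checked with textbook spinor algebra. A sketch of mine (the column vectors are the standard z-basis representations of spin-up along x and along y; nothing here is from the Lectures themselves):

```python
import numpy as np

# Spin-1/2 states 'up along x' and 'up along y', written in the z-basis.
plus_x = np.array([1, 1]) / np.sqrt(2)
plus_y = np.array([1, 1j]) / np.sqrt(2)

# Probability that a +x particle passes a U apparatus filtering for +y.
# np.vdot conjugates its first argument, giving the inner product <+y|+x>.
p_pass = abs(np.vdot(plus_y, plus_x)) ** 2
print(p_pass)                                # ≈ 0.5 — half the beam gets through

# For comparison: a +x filter passes everything, as in set-up (a).
print(abs(np.vdot(plus_x, plus_x)) ** 2)     # ≈ 1.0
```

So the half-transmission in (b) versus full transmission in (a) is precisely the measurable difference the wrenched-tube argument was looking for.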
I could answer this by pointing to the experiment above: a direction is something that alters the probabilities between the STU tube as set up in (a) versus the STU tube in (b). In fact—but, I admit, that would be pretty ridiculous—we could use the varying probabilities as we wrench this tube over varying angles to define an angle! But… Well… While that’s a perfectly logical argument, I agree it doesn’t sound very sensical. OK. Next step. What follows may cause brain damage. 🙂 Please abandon all pre-conceived notions and definitions for a while and think through the following logic. You know this stuff is about transformations of amplitudes (or wavefunctions), right? [And you also want to hear about that special 720° symmetry, right? No worries. We’ll get there.] So the questions all revolve around this: what happens to amplitudes (or the wavefunction) when we go from one reference frame—or representation, as it’s referred to in quantum mechanics—to another? Well… I should immediately correct myself here: a reference frame and a representation are two different things. They are related but… Well… Different… Quite different. Not same-same but different. 🙂 I’ll explain why later. Let’s go for it. Before talking representations, let us first think about what we really mean by changing the reference frame. To change it, we first need to answer the question: what is our reference frame? It is a mathematical notion, of course, but then it is also more than that: it is our reference frame. We use it to make measurements. That’s obvious, you’ll say, but let me make a more formal statement here: The reference frame is given by (1) the geometry (or the shape, if that sounds easier to you) of the measurement apparatus (so that’s the experimental set-up) and (2) our perspective of it.
If we wanted to sound academic, we might refer to Kant and other philosophers here, who told us—230 years ago—that the mathematical idea of a three-dimensional reference frame is grounded in our intuitive notions of up and down, and left and right. [If you doubt this, think about the necessity of the various right-hand rules and conventions that we cannot do without in math, and in physics.] But so we do not want to sound academic. Let us be practical. Just think about the following. The apparatus gives us two directions: (1) The up direction, which we associate with the positive direction of the z-axis, and (2) the direction of travel of our particle, which we associate with the positive direction of the y-axis. Now, if we have two axes, then the third axis (the x-axis) will be given by the right-hand rule, right? So we may say the apparatus gives us the reference frame. Full stop. So… Well… Everything is relative? Is this reference frame relative? Are directions relative? That’s what you’ve been told, but think about this: relative to what? Here is where the object meets the subject. What’s relative? What’s absolute? Frankly, I’ve started to think that, in this particular situation, we should, perhaps, not use these two terms. I am not saying that our observation of what physically happens here gives these two directions any absolute character but… Well… You will have to admit they are more than just some mathematical construct: when everything is said and done, we will have to admit that these two directions are real, because… Well… They’re part of the reality that we are observing, right? And the third one… Well… That’s given by our perspective—by our right-hand rule, which is… Well… Our right-hand rule. Of course, now you’ll say: if you think that ‘relative’ and ‘absolute’ are ambiguous terms and that we, therefore, may want to avoid them a bit more, then ‘real’ and its opposite (unreal?) are ambiguous terms too, right? Well… Maybe.
What language would you suggest? 🙂 Just stick to the story for a while. I am not done yet. So… Yes… What is their reality? Let’s think about that in the next section. Perspectives, reference frames and symmetries You’ve done some mental exercises already as you’ve been working your way through the previous section, but you’ll need to do plenty more. In fact, they may become physical exercise too: when I first thought about these things (symmetries and, more importantly, asymmetries in space), I found myself walking around the table with some asymmetrical everyday objects and papers with arrows and clocks and other stuff on it—effectively analyzing what right-hand screw, thumb or grip rules actually mean. 🙂 So… Well… I want you to distinguish—just for a while—between the notion of a reference frame (think of the xyz reference frame that comes with the apparatus) and your perspective on it. What’s our perspective on it? Well… You may be looking from the top, or from the side and, if from the side, from the left-hand side or the right-hand side—which, if you think about it, you can only define in terms of the various positive and negative directions of the various axes. 🙂 If you think this is getting ridiculous… Well… Don’t. Feynman himself doesn’t think this is ridiculous, because he starts his own “long and abstract side tour” on transformations with a very simple explanation of how the top and side view of the apparatus are related to the axes (i.e. the reference frame) that comes with it. You don’t believe me? This is the very first illustration of his Lecture on this: [Figure: modified Stern-Gerlach apparatus] He uses it to explain the apparatus (which we don’t do here because you’re supposed to already know how these (modified or improved) Stern-Gerlach apparatuses work). So let’s continue this story.
Suppose that we are looking in the positive y-direction—so that's the direction in which our particle is moving—then we might imagine what it would look like if we made a 180° turn and looked at the situation from the other side, so to speak. We do not change the reference frame (i.e. the orientation) of the apparatus here: we just change our perspective on it. Instead of seeing particles going away from us, into the apparatus, we now see particles coming towards us, out of the apparatus. What happens—but that's not scientific language, of course—is that left becomes right, and right becomes left. Top is still top, and bottom is bottom. We are looking now in the negative y-direction, and the positive direction of the x-axis—which pointed right when we were looking in the positive y-direction—now points left. I see you nodding your head now—because you've heard about parity inversions, mirror symmetries and what have you—and I hear you say: "That's the mirror world, right?" No. It is not. I wrote about this in another post: the world in the mirror is the world in the mirror. We don't get a mirror image of an object by going around it and looking at its back side. I can't dwell too much on this (just check that post, and another one that talks about the same thing), but don't try to connect it to the discussions on symmetry-breaking and what have you. Just stick to this story, which is about transformations of amplitudes (or wavefunctions). [If you really want to know—but I know this sounds counterintuitive—the mirror world doesn't really switch left for right. Your reflection doesn't do a 180-degree turn: it is just reversed front to back, with no rotation at all. It's only your brain which mentally adds (or subtracts) the 180-degree turn that you assume must have happened from the observed front-to-back reversal. So the left-to-right reversal is only apparent. It's a common misconception, and… Well… I'll let you figure this out yourself. I need to move on.]
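The front-to-back (not left-to-right) nature of a mirror image is easy to verify with matrices. Here is a minimal sketch in Python (NumPy assumed; the specific setup, looking along the +y-axis at a mirror, is my own choice of illustration, not from any post):

```python
import numpy as np

# Looking along +y at a mirror on the wall: the reflection flips
# front-back (the y-axis) and leaves left-right (the x-axis) alone.
mirror = np.diag([1.0, -1.0, 1.0])

# Walking around the object (a 180° turn about the z-axis) flips both x and y.
walk_around = np.diag([-1.0, -1.0, 1.0])

# A rotation preserves handedness (determinant +1); a reflection does not (-1).
assert np.isclose(np.linalg.det(walk_around), 1.0)
assert np.isclose(np.linalg.det(mirror), -1.0)

# So the mirror image is *not* what you'd see by going around the object:
p = np.array([2.0, 3.0, 5.0])             # a point on some asymmetric object
assert (mirror @ p)[0] == p[0]            # left-right preserved in the mirror
assert (walk_around @ p)[0] == -p[0]      # left-right flipped when we walk around
```

The determinant is the whole story here: no rotation, however clever, can reproduce a reflection.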
Just note the following:

1. The xyz reference frame remains a valid right-handed reference frame. Of course it does: it comes with our beam splitter, and we can't change its reality, right? We're just looking at it from another angle. Our perspective on it has changed.

2. However, if we think of the real and imaginary part of the wavefunction describing the electrons that are going through our apparatus as perpendicular oscillations (as shown below)—a cosine and sine function respectively—then our change in perspective might, effectively, mess up our convention for measuring angles. I am not saying it does. Not now, at least. I am just saying it might. It depends on the plane of the oscillation, as I'll explain in a few moments.

Think of this: we measure angles counterclockwise, right? As shown below… But… Well… If the thing below were some funny clock going backwards—you've surely seen them in a bar or so, right?—then… Well… If it were transparent, and you went around it, you'd see it as going… Yes… Clockwise. 🙂 [This should remind you of a discussion on real versus pseudo-vectors, or polar versus axial vectors, but… Well… We don't want to complicate the story here.] Now, if we assume this clock represents something real—and, of course, I am thinking of the elementary wavefunction e^(iθ) = cosθ + i·sinθ now—then… Well… Then it will look different when we go around it. When going around our backwards clock above and looking at it from… Well… The back, we'd describe it, naively, as… Well… Think! What's your answer? Give me the formula! 🙂 We'd see it as e^(−iθ) = cos(−θ) + i·sin(−θ) = cosθ − i·sinθ, right? The hand of our clock now goes clockwise, so that's the opposite direction of our convention for measuring angles. Hence, instead of e^(iθ), we write e^(−iθ), right? So that's the complex conjugate. So we've got a different image of the same thing here. Not good. Not good at all. :-/ You'll say: so what? We can fix this thing easily, right?
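Before answering that, note that the back-view-equals-complex-conjugate claim can be checked in a few lines. A quick sanity check (Python, standard library only; the particular angle is arbitrary):

```python
import cmath

theta = 0.7
front = cmath.exp(1j * theta)    # counterclockwise phasor, seen from the front
back = cmath.exp(-1j * theta)    # the same hand, seen from the back: clockwise

# The back view is the complex conjugate of the front view:
assert abs(back - front.conjugate()) < 1e-12
# the real (cosine) part is unchanged, the imaginary (sine) part flips sign
assert abs(back.real - front.real) < 1e-12
assert abs(back.imag + front.imag) < 1e-12
```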
You don't need the convention for measuring angles or for the imaginary unit (i) here. This particle is moving, right? So if you'd want to look at the elementary wavefunction as some sort of circularly polarized beam (which, I admit, is very much what I would like to do, but its polarization is rather particular, as I'll explain in a minute), then you just need to define left- and right-handed angles as per the standard right-hand screw rule (illustrated below). To hell with the counterclockwise convention for measuring angles!

You are right. We could use the right-hand rule more consistently. We could, in fact, use it as an alternative convention for measuring angles: we could, effectively, measure them clockwise or counterclockwise depending on the direction of our particle. But… Well… The fact is: we don't. We do not use that alternative convention when we talk about the wavefunction. Physicists use the counterclockwise convention all of the time and just jot down these complex exponential functions, without realizing that, if they are to represent something real, our perspective on the reference frame matters. To put it differently, the direction in which we are looking at things matters! Hence, the direction is not… Well… I am tempted to say… Not relative at all, but then… Well… We wanted to avoid that term, right? 🙂 I guess that, by now, your brain may have suffered various short-circuits. If not, stick with me a while longer. Let us analyze how our wavefunction model might be impacted by this symmetry—or asymmetry, I should say.

The flywheel model of an electron

In our previous posts, we offered a model that interprets the real and the imaginary part of the wavefunction as oscillations which each carry half of the total energy of the particle. These oscillations are perpendicular to each other, and the interplay between both is how energy propagates through spacetime. Let us recap the fundamental premises: 1.
The dimension of the matter-wave field vector is force per unit mass (N/kg), as opposed to the force per unit charge (N/C) dimension of the electric field vector. The N/kg dimension reduces to an acceleration (m/s²), which is the dimension of the gravitational field. 2. We assume this gravitational disturbance causes our electron (or a charged mass in general) to move about some center, combining linear and circular motion. This interpretation reconciles the wave-particle duality: fields interfere but if, at the same time, they also drive a pointlike particle, then we understand why, as Feynman puts it, "when you do find the electron some place, the entire charge is there." Of course, we cannot prove anything here, but our elegant yet simple derivation of the Compton radius of an electron is… Well… Just nice. 🙂 3. Finally, and most importantly in the context of this discussion, we noted that, in light of the direction of the magnetic moment of an electron in an inhomogeneous magnetic field, the plane which circumscribes the circulatory motion of the electron should also comprise the direction of its linear motion. Hence, unlike an electromagnetic wave, the plane of the two-dimensional oscillation (so that's the polarization plane, really) cannot be perpendicular to the direction of motion of our electron. Let's say some more about the latter point here. The illustrations below (one from Feynman, and the other is just open-source) show what we're thinking of. The direction of the angular momentum (and the magnetic moment) of an electron—or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling—cannot be parallel to the direction of motion. On the contrary, it must be perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center (see the illustration on the left-hand side), then the disk it circumscribes (i.e.
the plane of the polarization) has to comprise the direction of motion. Of course, we need to add another detail here. As my readers will know, we do not really have a precise direction of angular momentum in quantum physics. While there is no fully satisfactory explanation of this, the classical explanation—combined with the quantization hypothesis—goes a long way in explaining this: an object with an angular momentum J and a magnetic moment μ that is not exactly parallel to some magnetic field B will not line up: it will precess—and, as mentioned, the quantization of angular momentum may well explain the rest. [Well… Maybe… We have detailed our attempts in this regard in various posts (just search for spin or angular momentum on this blog, and you'll get a dozen posts or so), but these attempts are, admittedly, not fully satisfactory. Having said that, they do go a long way in relating angles to spin numbers.] The thing is: we do assume our electron is spinning around. If we look from the up-direction only, then it will be spinning clockwise if its angular momentum is down (so its magnetic moment is up). Conversely, it will be spinning counterclockwise if its angular momentum is up. Let us take the up-state. So we have a top view of the apparatus, and we see something like the sketch of the electron wave below. I know you are laughing aloud now, but think of your amusement as a nice reward for having stuck to the story so far. Thank you. 🙂 And, yes, do check it yourself by doing some drawings on your table or so, and then look at them from various directions as you walk around the table as—I am not ashamed to admit this—I did when thinking about this. So what do we get when we change the perspective? Let us walk around it, counterclockwise, let's say, so we're measuring our angle of rotation as some positive angle.
Walking around it—in whatever direction, clockwise or counterclockwise—doesn't change the counterclockwise direction of our… Well… That weird object that might—just might—represent an electron that has its spin up and that is traveling in the positive y-direction. When we look in the direction of propagation (so that's from left to right as you're looking at this page), and we abstract away from its linear motion, then we could, vaguely, describe this by some wrenched e^(iθ) = cosθ + i·sinθ function, right? The x- and y-axes of the apparatus may be used to measure the cosine and sine components respectively. Let us keep looking from the top but walk around it, rotating ourselves over a 180° angle so we're looking in the negative y-direction now. As I explained in one of those posts on symmetries, our mind will want to switch to a new reference frame: we'll keep the z-axis (up is up, and down is down), but we'll want the positive direction of the x-axis to… Well… Point right. And we'll want the y-axis to point away, rather than towards us. In short, we have a transformation of the reference frame here: z' = z, y' = −y, and x' = −x. Mind you, this is still a regular right-handed reference frame. [That's the difference with a mirror image: a mirrored right-handed reference frame is no longer right-handed.] So, in our new reference frame, which we choose to coincide with our perspective, we will now describe the same thing as some −cosθ − i·sinθ = −e^(iθ) function. Of course, −cosθ = cos(θ + π) and −sinθ = sin(θ + π), so we can write this as: −cosθ − i·sinθ = cos(θ + π) + i·sin(θ + π) = e^(i·(θ+π)) = e^(iπ)·e^(iθ) = −e^(iθ). Sweet! But… Well… First note this is not the complex conjugate: e^(−iθ) = cosθ − i·sinθ ≠ −cosθ − i·sinθ = −e^(iθ). Why is that? Aren't we looking at the same clock, but from the back? No. The plane of polarization is different. Our clock is more like those in Dalí's painting: it's flat. 🙂 And, yes, let me lighten up the discussion with that painting here.
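For the skeptical reader, both claims in this paragraph (the flipped frame is still right-handed, and −cosθ − i·sinθ is a phase shift by π rather than the complex conjugate) can be verified numerically. A sketch in Python with NumPy (my own check, not from any of the posts referred to):

```python
import cmath
import numpy as np

# The 180° walk-around: x' = -x, y' = -y, z' = z. Determinant +1, so the
# new frame is still right-handed (a mirrored frame would give -1).
T = np.diag([-1.0, -1.0, 1.0])
assert np.isclose(np.linalg.det(T), 1.0)

# In the new frame the oscillation reads -cosθ - i·sinθ, which is a phase
# shift by π, i.e. -e^(iθ) = e^(i(θ+π)), and not the complex conjugate.
theta = 1.1
new_view = -cmath.cos(theta) - 1j * cmath.sin(theta)
assert abs(new_view - cmath.exp(1j * (theta + cmath.pi))) < 1e-12
assert abs(new_view - (-cmath.exp(1j * theta))) < 1e-12
assert abs(new_view - cmath.exp(-1j * theta)) > 0.5   # not the conjugate (for this θ)
```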
🙂 We need to have some fun while torturing our brain, right? So, because we assume the plane of polarization is different, we get a −e^(iθ) function instead of an e^(iθ) function. Let us now think about the e^(i·(θ+π)) function. It's the same as −e^(iθ) but… Well… We walked around the z-axis taking a full 180° turn, right? So that's π in radians. So that's the phase shift here. Hey! Try the following now. Go back and walk around the apparatus once more, but let the reference frame rotate with us, as shown below. So we start left and look in the direction of propagation, and then we start moving about the z-axis (which points out of this page, toward you, as you are looking at this), let's say by some small angle α. So we rotate the reference frame about the z-axis by α and… Well… Of course, our e^(iθ) now becomes an e^(i·(θ+α)) function, right? We've just derived the transformation coefficient for a rotation about the z-axis, didn't we? It's equal to e^(i·α), right? We get the transformed wavefunction in the new reference frame by multiplying the old one by e^(i·α), right? It's equal to e^(i·α)·e^(iθ) = e^(i·(θ+α)), right? Well… No. The answer is: no. The transformation coefficient is not e^(i·α) but e^(i·α/2). So we get an additional 1/2 factor in the phase shift. Huh? Yes. That's what it is: when we change the representation, by rotating our apparatus over some angle α about the z-axis, then we will, effectively, get a new wavefunction, which will differ from the old one by a phase shift that is equal to only half of the rotation angle. Huh? Yes. It's even weirder than that. For a spin down electron, the transformation coefficient is e^(−i·α/2), so we get an additional minus sign in the argument. Huh? Yes. I know you are terribly disappointed, but that's how it is. That's what hampers an easy geometric interpretation of the wavefunction.
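The half-angle business is easy to play with numerically. The sketch below (Python; the function names are mine) just encodes the e^(i·α/2) and e^(−i·α/2) coefficients and confirms the behavior described above: a 360° rotation flips the sign of the amplitudes, a 720° rotation restores them, and the probabilities never change:

```python
import cmath

def up_coeff(alpha):
    """Transformation coefficient for the spin-up amplitude (rotation about z)."""
    return cmath.exp(1j * alpha / 2)

def down_coeff(alpha):
    """Spin-down coefficient: same thing with a minus sign in the argument."""
    return cmath.exp(-1j * alpha / 2)

pi = cmath.pi

# A full 360° turn multiplies both amplitudes by -1...
assert abs(up_coeff(2 * pi) - (-1)) < 1e-12
assert abs(down_coeff(2 * pi) - (-1)) < 1e-12
# ...and only a 720° turn brings them back to +1:
assert abs(up_coeff(4 * pi) - 1) < 1e-12

# Probabilities are unaffected either way: |e^(±iα/2)·C|² = |C|².
C = 0.6 + 0.8j
assert abs(abs(up_coeff(2 * pi) * C) - abs(C)) < 1e-12
```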
Paraphrasing Feynman, I'd say that, somehow, our electron not only knows whether or not it has taken a turn, but it also knows whether or not it is moving away from us or, conversely, towards us. But… Hey! Wait a minute! That's it, right? What? Well… That's it! The electron doesn't know whether it's moving away or towards us. That's nonsense. But… Well… It's like this: our e^(i·α) coefficient describes a rotation of the reference frame. In contrast, the e^(i·α/2) and e^(−i·α/2) coefficients describe what happens when we rotate the T apparatus! Now that is a very different proposition. Right! You got it! Representations and reference frames are different things. Quite different, I'd say: representations are real, reference frames aren't—but then you don't like philosophical language, do you? 🙂 But think of it. When we just go about the z-axis, a full 180°, but we don't touch that T-apparatus, we don't change reality. When we were looking at the electron while standing to the left of the apparatus, we watched the electrons going in and moving away from us, and when we go about the z-axis, a full 180°, looking at it from the right-hand side, we see the electrons coming out, moving towards us. But it's still the same reality. We simply change the reference frame—from xyz to x'y'z', to be precise: we do not change the representation. In contrast, when we rotate the apparatus over a full 180°, our electron now goes in the opposite direction. And whether that's away from or towards us doesn't matter: it was going in one direction while traveling through S, and now it goes in the opposite direction—relative to the direction it was going in S, that is. So what happens, really, when we change the representation, rather than the reference frame? Well… Let's think about that. 🙂

Quantum-mechanical weirdness?
The transformation matrix for the amplitude of a system to be in an up or down state (and, hence, presumably, for a wavefunction) for a rotation about the z-axis is the rotation matrix shown below: it multiplies C+ by e^(i·φ/2) and C− by e^(−i·φ/2). Feynman derives this matrix in a rather remarkable intellectual tour de force in the 6th of his Lectures on Quantum Mechanics. So that's pretty early on. He's actually worried about that himself, apparently, and warns his students that "This chapter is a rather long and abstract side tour, and it does not introduce any idea which we will not also come to by a different route in later chapters. You can, therefore, skip over it, and come back later if you are interested." Well… That's how I approached it. I skipped it, and didn't worry about those transformations for quite a while. But… Well… You can't avoid them. In some weird way, they are at the heart of the weirdness of quantum mechanics itself. Let us re-visit his argument. Feynman immediately gets that the whole transformation issue here is just a matter of finding an easy formula for that phase shift. Why? He doesn't tell us. Lesser mortals like us must just assume that's how the instinct of a genius works, right? 🙂 So… Well… Because he knows—from experiment—that the coefficient is e^(i·α/2) instead of e^(i·α), he just says the phase shift—which he denotes by λ—must be proportional to the angle of rotation—which he denotes by φ rather than α (so as to avoid confusion with the Euler angle α). So he writes: λ = m·φ. Initially, he also tries the obvious thing: m should be one, right? So λ = φ, right? Well… No. It can't be. Feynman shows why that can't be the case by adding a third apparatus once again, as shown below. Let me quote him here, as I can't explain it any better: "Suppose T is rotated by 360°; then, clearly, it is right back at zero degrees, and we should have C'+ = C+ and C'− = C−, or, what is the same thing, e^(i·m·2π) = 1. We get m = 1. [But no!] This argument is wrong!
To see that it is, consider that T is rotated by 180°. If m were equal to 1, we would have C'+ = e^(i·π)·C+ = −C+ and C'− = e^(−i·π)·C− = −C−. [Feynman works with states here, instead of the wavefunction of the particle as a whole. I'll come back to this.] However, this is just the original state all over again. Both amplitudes are just multiplied by −1, which gives back the original physical system. (It is again a case of a common phase change.) This means that if the angle between T and S is increased to 180°, the system would be indistinguishable from the zero-degree situation, and the particles would again go through the (+) state of the apparatus. At 180°, though, the (+) state of the T apparatus is the (−x) state of the original S apparatus. So a (+x) state would become a (−x) state. But we have done nothing to change the original state; the answer is wrong. We cannot have m = 1. We must have the situation that a rotation by 360°, and no smaller angle, reproduces the same physical state. This will happen if m = 1/2." The result, of course, is this weird 720° symmetry. While we get the same physics after a 360° rotation of the T apparatus, we do not get the same amplitudes. We get the opposite (complex) number: C'+ = e^(i·2π/2)·C+ = −C+ and C'− = e^(−i·2π/2)·C− = −C−. That's OK, because… Well… It's a common phase shift, so it's just like changing the origin of time. Nothing more. Nothing less. Same physics. Same reality. But… Well… C'+ ≠ C+ and C'− ≠ C−, right? We only get our original amplitudes back if we rotate the T apparatus two times, so that's by a full 720 degrees—as opposed to the 360° we'd expect. Now, space is isotropic, right? So this 720° business doesn't make sense, right? Well… It does and it doesn't. We shouldn't dramatize the situation. What's the actual difference between a complex number and its opposite? It's like x and −x, or t and −t. I've said this a couple of times already, and I'll keep saying it many times more: Nature surely can't be bothered by how we measure stuff, right?
In the positive or the negative direction—that's just our choice, right? Our convention. So… Well… It's just like that −e^(iθ) function we got when looking at the same experimental set-up from the other side: our e^(iθ) and −e^(iθ) functions did not describe a different reality. We just changed our perspective. The reference frame. As such, the reference frame isn't real. The experimental set-up is. And—I know I will anger mainstream physicists with this—the representation is. Yes. Let me say it loud and clear here: a different representation describes a different reality. In contrast, a different perspective—or a different reference frame—does not. While you might have had a lot of trouble going through all of the weird stuff above, the point is: it is not all that weird. We can understand quantum mechanics. And in a fairly intuitive way, really. It's just that… Well… I think some of the conventions in physics hamper such understanding. Well… Let me be precise: one convention in particular, really. It's that convention for measuring angles. Indeed, Mr. Leonhard Euler, back in the 18th century, might well be "the master of us all" (as Laplace is supposed to have said) but… Well… He couldn't foresee how his omnipresent formula—e^(iθ) = cosθ + i·sinθ—would, one day, be used to represent something real: an electron, or any elementary particle, really. If he had known, I am sure he would have noted what I am noting here: Nature can't be bothered by our conventions. Hence, if e^(iθ) represents something real, then e^(−iθ) must also represent something real. [Coz I admire this genius so much, I can't resist the temptation. Here's his portrait. He looks kinda funny here, doesn't he?
:-)] Frankly, he would probably have understood quantum-mechanical theory as easily and instinctively as Dirac, I think, and I am pretty sure he would have noted—and, if he had known about circularly polarized waves, probably agreed to—that alternative convention for measuring angles: we could, effectively, measure angles clockwise or counterclockwise depending on the direction of our particle—as opposed to Euler's 'one-size-fits-all' counterclockwise convention. But so we did not adopt that alternative convention because… Well… We want to keep honoring Euler, I guess. 🙂 So… Well… If we're going to keep honoring Euler by sticking to that 'one-size-fits-all' counterclockwise convention, then I do believe that e^(iθ) and e^(−iθ) represent two different realities: spin up versus spin down. Yes. In our geometric interpretation of the wavefunction, these are, effectively, two different spin directions. And… Well… These are real directions: we see something different when they go through a Stern-Gerlach apparatus. So it's not just some convention to count things like 0, 1, 2, etcetera versus 0, −1, −2, etcetera. It's the same story again: different but related mathematical notions are (often) related to different but related physical possibilities. So… Well… I think that's what we've got here. Think of it. Mainstream quantum math treats all wavefunctions as right-handed but… Well… A particle with up spin is a different particle than one with down spin, right? And, again, Nature surely cannot be bothered about our convention of measuring phase angles clockwise or counterclockwise, right? So… Well… Kinda obvious, right? 🙂 Let me spell out my conclusions here: 1. The angular momentum can be positive or, alternatively, negative: Jz = +ħ/2 or −ħ/2. [Let me note that this is not obvious. Or less obvious than it seems, at first. In classical theory, you would expect an electron, or an atomic magnet, to line up with the field.
Well… The Stern-Gerlach experiment shows they don't: they keep their original orientation. Well… If the field is weak enough.] 2. Therefore, we would probably like to think that an actual particle—think of an electron, or whatever other particle you'd think of—comes in two variants: right-handed and left-handed. They will, therefore, either consist of (elementary) right-handed waves or, else, (elementary) left-handed waves. An elementary right-handed wave would be written as: ψ(θi) = ai·e^(iθi) = ai·(cosθi + i·sinθi). In contrast, an elementary left-handed wave would be written as: ψ(θi) = ai·e^(−iθi) = ai·(cosθi − i·sinθi). So that's the complex conjugate. So… Well… Yes, I think complex conjugates are not just some mathematical notion: I believe they represent something real. It's the usual thing: Nature has shown us that (most) mathematical possibilities correspond to real physical situations so… Well… Here you go. It is really just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! [As for the differences—different polarization plane and dimensions and what have you—I've already summed those up, so I won't repeat myself here.] The point is: if we have two different physical situations, we'll want to have two different functions to describe them. Think of it like this: why would we have two—yes, I admit, two related—amplitudes to describe the up or down state of the same system, but only one wavefunction for it? You tell me. Authors like me are looked down upon by the so-called professional class of physicists. The few who bothered to react to my attempts to make sense of Einstein's basic intuition in regard to the nature of the wavefunction all said pretty much the same thing: "Whatever your geometric (or physical) interpretation of the wavefunction might be, it won't be compatible with the isotropy of space. You cannot imagine an object with a 720° symmetry.
That's geometrically impossible." Well… Almost three years ago, I wrote the following on this blog: "As strange as it sounds, a spin-1/2 particle needs two full rotations (2×360° = 720°) until it is again in the same state. Now, in regard to that particularity, you'll often read something like: 'There is nothing in our macroscopic world which has a symmetry like that.' Or, worse, 'Common sense tells us that something like that cannot exist, that it simply is impossible.' [I won't quote the site from which I took these quotes, because it is, in fact, the site of a very respectable research center!] Bollocks! The Wikipedia article on spin has a wonderful animation illustrating this 720-degree symmetry: look at how the spirals flip between clockwise and counterclockwise orientations, and note that it's only after spinning a full 720 degrees that this 'point' returns to its original configuration." So… Well… I am still pursuing my original dream which is… Well… Let me re-phrase what I wrote back in January 2015: yes, we can actually imagine spin-1/2 particles, and we actually do not need all that much imagination! In fact, I am tempted to think that I've found a pretty good representation or… Well… A pretty good image, I should say, because… Well… A representation is something real, remember? 🙂

Post scriptum (10 December 2017): Our flywheel model of an electron makes sense, but also leaves many unanswered questions. The most obvious question, perhaps, is: why the up and down state only? I am not so worried about that question, even if I can't answer it right away, because… Well… Our apparatus—the way we measure reality—is set up to measure the angular momentum (or the magnetic moment, to be precise) in one direction only. If our electron is captured by some harmonic (or non-harmonic?)
oscillation in multiple dimensions, then it should not be all that difficult to show its magnetic moment is going to align, somehow, in the same or, alternatively, the opposite direction of the magnetic field it is forced to travel through. Of course, the analysis for the spin up situation (magnetic moment down) is quite peculiar: if our electron is a mini-magnet, why would it not line up with the magnetic field? We understand the precession of a spinning top in a gravitational field, but… Hey… It's actually not that different. Try to imagine some spinning top on the ceiling. 🙂 I am sure we can work out the math. 🙂 The electron must be some gyroscope, really: it won't change direction. In other words, its magnetic moment won't line up. It will precess, and it can do so in two directions, depending on its state. 🙂 […] At least, that's what my instinct tells me. I admit I need to work out the math to convince you. 🙂 The second question is more important. If we just rotate the reference frame over 360°, we see the same thing: some rotating object which we, vaguely, describe by some e^(+iθ) function—to be precise, I should say: by some Fourier sum of such functions—or, if the rotation is in the other direction, by some e^(−iθ) function (again, you should read: a Fourier sum of such functions). Now, the weird thing, as I tried to explain above, is the following: if we rotate the object itself, over the same 360°, we get a different object: our e^(iθ) or e^(−iθ) function (again: think of a Fourier sum, so that's a wave packet, really) becomes a −e^(±iθ) thing. We get a minus sign in front of it. So what happened here? What's the difference, really? Well… I don't know. It's very deep. If I do nothing, and you keep watching me while turning around me, for a full 360°, then you'll end up where you were when you started and, importantly, you'll see the same thing. Exactly the same thing: if I was an e^(+iθ) wave packet, I am still an e^(+iθ) wave packet now.
Or if I was an e^(−iθ) wave packet, then I am still an e^(−iθ) wave packet now. Easy. Logical. Obvious, right? But so now we try something different: I turn around, over a full 360° turn, and you stay where you are. When I am back where I was—looking at you again, so to speak—then… Well… I am not quite the same any more. Or… Well… Perhaps I am, but you see me differently. If I was an e^(+iθ) wave packet, then I've become a −e^(+iθ) wave packet now. Not hugely different but… Well… That minus sign matters, right? Or if I was a wave packet built up from elementary a·e^(−iθ) waves, then I've become a −e^(−iθ) wave packet now. What happened?

Re-visiting the Complementarity Principle: the field versus the flywheel model of the matter-wave

Note: I have published a paper that is very coherent and fully explains what's going on. There is nothing magical about these things. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus. Jean Louis Van Belle, 23 December 2018

Original post: This post is a continuation of the previous one: it is just going to elaborate on the questions I raised in the post scriptum of that post. Let's first review the basics once more.

The geometry of the elementary wavefunction

In the reference frame of the particle itself, the geometry of the wavefunction simplifies to what is illustrated below: an oscillation in two dimensions which, viewed together, form a plane that would be perpendicular to the direction of motion—but then our particle doesn't move in its own reference frame, obviously. Hence, we could be looking at our particle from any direction and we should, presumably, see a similar two-dimensional oscillation. That is interesting because… Well… If we rotate this circle around its center (in whatever direction we'd choose), we get a sphere, right? It's only when it starts moving that it loses its symmetry. Now, that is very intriguing, but let's think about that later.
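The puzzle of the last two paragraphs (rotating the frame versus rotating the object) is exactly the double-cover relation between ordinary rotations and spin-1/2 transformations. A small numerical illustration (Python with NumPy; the diag(e^(iφ/2), e^(−iφ/2)) matrix is the rotation-about-z transformation discussed earlier, and the function names are mine):

```python
import numpy as np

def rotate_frame(phi):
    """Ordinary 3D rotation about z: what happens when *we* walk around."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotate_spinor(phi):
    """Spin-1/2 transformation for the same rotation: diag(e^(iφ/2), e^(-iφ/2))."""
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

full_turn = 2 * np.pi
# The vector rotation is back to the identity after 360°...
assert np.allclose(rotate_frame(full_turn), np.eye(3))
# ...but the spinor picks up a minus sign, and needs 720° to come home:
assert np.allclose(rotate_spinor(full_turn), -np.eye(2))
assert np.allclose(rotate_spinor(2 * full_turn), np.eye(2))
```

Two distinct spinor matrices (differing by a sign) thus correspond to one and the same frame rotation, which is the mathematical shadow of the 720° business above.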
Let's assume we're looking at it from some specific direction. Then we presumably have some charge (the green dot) moving about some center, and its movement can be analyzed as the sum of two oscillations (the sine and cosine) which represent the real and imaginary component of the wavefunction respectively—as we observe it, so to speak. [Of course, you've been told you can't observe wavefunctions so… Well… You should probably stop reading this. :-)] We write:

ψ = a·e^(−i∙θ) = a·e^(−i∙E·t/ħ) = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

So that's the wavefunction in the reference frame of the particle itself. When we think of it as moving in some direction (so relativity kicks in), we need to add the p·x term to the argument (θ = (E·t − p·x)/ħ). It is easy to show this term doesn't change the argument (θ), because we also get a different value for the energy in the new reference frame: Ev = γ·E0 and so… Well… I'll refer you to my post on this, in which I show the argument of the wavefunction is invariant under a Lorentz transformation: the way Ev and pv and, importantly, the coordinates x and t relativistically transform ensures the invariance. In fact, I've always wanted to read de Broglie's original thesis because I strongly suspect he saw that immediately. If you click this link, you'll find an author who suggests the same. Having said that, I should immediately add this does not imply there is no need for a relativistic wave equation: the wavefunction is a solution for the wave equation and, yes, I am the first to note the Schrödinger equation has some obvious issues, which I briefly touch upon in one of my other posts—and which is why Schrödinger himself and other contemporaries came up with a relativistic wave equation (Oskar Klein and Walter Gordon got the credit, but others (including Louis de Broglie) also suggested a relativistic wave equation when Schrödinger published his).
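The claimed invariance of the argument is easy to check with numbers. The sketch below (Python, natural units with c = 1; the particular velocities are arbitrary) boosts both the energy-momentum pair and the time-space coordinates and confirms that E·t − p·x comes out the same in both frames:

```python
import math

# Natural units: c = 1. A particle of unit rest mass moving at 0.6c.
m0 = 1.0
v = 0.6
gamma_v = 1 / math.sqrt(1 - v**2)
E, p = gamma_v * m0, gamma_v * m0 * v     # relativistic energy and momentum

t, x = 2.0, 1.7                           # an arbitrary event (t, x)

# Boost everything to a frame moving at u = 0.3c:
u = 0.3
g = 1 / math.sqrt(1 - u**2)
E2, p2 = g * (E - u * p), g * (p - u * E)
t2, x2 = g * (t - u * x), g * (x - u * t)

# The phase E·t − p·x is the same number in both frames:
assert abs((E2 * t2 - p2 * x2) - (E * t - p * x)) < 1e-12
```

This is just the statement that E·t − p·x is a Minkowski inner product of two four-vectors, so any boost leaves it alone.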
In my humble opinion, the key issue is not that Schrödinger’s equation is non-relativistic. It’s that 1/2 factor again but… Well… I won’t dwell on that here. We need to move on. So let’s leave the wave equation for what it is and go back to our wavefunction. You’ll note the argument (or phase) of our wavefunction moves clockwise—or counterclockwise, depending on whether you’re standing in front of or behind the clock. Of course, Nature doesn’t care about where we stand or—to put it differently—whether we measure time clockwise, counterclockwise, in the positive, the negative or whatever direction. Hence, I’ve argued we can have both left- as well as right-handed wavefunctions, as illustrated below (for p ≠ 0). Our hypothesis is that these two physical possibilities correspond to the angular momentum of our electron being either positive or negative: Jz = +ħ/2 or, else, Jz = −ħ/2. [If you’ve read a thing or two about neutrinos, then… Well… They’re kinda special in this regard: they have no charge, and neutrinos and antineutrinos are actually defined by their helicity. But… Well… Let’s stick to trying to describe electrons for a while.] The line of reasoning that we followed allowed us to calculate the amplitude a. We got a result that tentatively confirms we’re on the right track with our interpretation: we found that a = ħ/(me·c), so that’s the Compton scattering radius of our electron. All good! But we were still a bit stuck—or ambiguous, I should say—on what the components of our wavefunction actually are. Are we really imagining the tip of that rotating arrow is a pointlike electric charge spinning around the center? [Pointlike or… Well… Perhaps we should think of the Thomson radius of the electron here, i.e. the so-called classical electron radius, which is equal to the Compton radius times the fine-structure constant: rThomson = α·rCompton ≈ 3.86×10−13/137.] So that would be the flywheel model.
In contrast, we may also think the whole arrow is some rotating field vector—something like the electric field vector, with the same or some other physical dimension, like newton per charge unit, or newton per mass unit? So that’s the field model. Now, these interpretations may or may not be compatible—or complementary, I should say. I sure hope they are but… Well… What can we reasonably say about it? Let us first note that the flywheel interpretation has a very obvious advantage, because it allows us to explain the interaction between a photon and an electron, as I demonstrated in my previous post: the electromagnetic energy of the photon will drive the circulatory motion of our electron… So… Well… That’s a nice physical explanation for the transfer of energy. However, when we think about interference or diffraction, we’re stuck: flywheels don’t interfere or diffract. Only waves do. So… Well… What to say? I am not sure, but here I want to think some more by pushing the flywheel metaphor to its logical limits. Let me remind you of what triggered it all: it was the mathematical equivalence of the energy equation for an oscillator (E = m·a2·ω2) and Einstein’s formula (E = m·c2), which tells us energy and mass are equivalent but… Well… They’re not the same. So what are they then? What is energy, and what is mass—in the context of these matter-waves that we’re looking at? To be precise, the E = m·a2·ω2 formula gives us the energy of two oscillators, so we need a two-spring model which—because I love motorbikes—I referred to as my V-twin engine model, but it’s not an engine, really: it’s two frictionless pistons (or springs) whose directions of motion are perpendicular to each other, so they are at a 90° angle and, therefore, their motion is, effectively, independent. In other words: they will not interfere with each other. It’s probably worth showing the illustration just one more time. And… Well… Yes. I’ll also briefly review the math one more time.
V-2 engine If the magnitude of the oscillation is equal to a, then the motion of these pistons (or of a mass on a spring) will be described by x = a·cos(ω·t + Δ). Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator – think of one piston or one spring only – can then be calculated as: 1. K.E. = T = m·v2/2 = (1/2)·m·ω2·a2·sin2(ω·t + Δ) 2. P.E. = U = k·x2/2 = (1/2)·k·a2·cos2(ω·t + Δ) The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω2. Hence, the total energy—for one piston, or one spring—is equal to: E = T + U = (1/2)·m·ω2·a2·[sin2(ω·t + Δ) + cos2(ω·t + Δ)] = m·a2·ω2/2 Hence, adding the energy of the two oscillators, we have a perpetuum mobile storing an energy that is equal to twice this amount: E = m·a2·ω2. It is a great metaphor. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. However, we still have to prove this engine is, effectively, a perpetuum mobile: we need to prove the energy that is being borrowed or returned by one piston is the energy that is being returned or borrowed by the other. That is easy to do, but I won’t bother you with that proof here: you can double-check it in the referenced post or – more formally – in an article I posted online. It is all beautiful, and the key question is obvious: if we want to relate the E = m·a2·ω2 and E = m·c2 formulas, we need to explain why we could, potentially, write c as a·ω = a·√(k/m). We’ve done that already—to some extent at least.
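For what it’s worth, this bookkeeping is easy to verify numerically. The sketch below plugs arbitrary test values (mine, nothing physical) for m, a and ω into the T and U formulas above and checks that the two-oscillator total is m·a2·ω2 at any point in time:

```python
import math

# Arbitrary test values for mass, amplitude and angular frequency.
m, a, omega = 2.0, 0.5, 3.0
k = m * omega**2  # stiffness implied by the restoring force F = -k·x

def total_energy(t, delta):
    """T + U of one oscillator x = a·cos(omega·t + delta)."""
    T = 0.5 * m * omega**2 * a**2 * math.sin(omega * t + delta)**2
    U = 0.5 * k * a**2 * math.cos(omega * t + delta)**2
    return T + U

# Two oscillators, 90° out of phase, at a few arbitrary instants.
for t in (0.0, 0.1, 1.7, 4.2):
    both = total_energy(t, 0.0) + total_energy(t, -math.pi / 2)
    assert abs(both - m * a**2 * omega**2) < 1e-12
```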
The tangential velocity of a pointlike particle spinning around some axis is given by v = r·ω. Now, the radius is given by a = ħ/(m·c), and ω = E/ħ = m·c2/ħ, so v = [ħ/(m·c)]·[m·c2/ħ] = c. Another beautiful result, but what does it mean? We need to think about the meaning of the ω = √(k/m) formula here. In the mentioned article, we boldly wrote that the speed of light is to be interpreted as the resonant frequency of spacetime, but so… Well… What do we really mean by that? Think of the following. Einstein’s E = mc2 equation implies the ratio between the energy and the mass of any particle is always the same: E/m = c2. This effectively reminds us of the ω2 = C−1/L or ω2 = k/m formula for harmonic oscillators. The key difference is that the ω2 = C−1/L and ω2 = k/m formulas introduce two (or more) degrees of freedom. In contrast, c2 = E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light (c) emerges here as the defining property of spacetime: the resonant frequency, so to speak. We have no further degrees of freedom here. Let’s think about k. [I am not trying to avoid the ω2 = 1/LC formula here. It’s basically the same concept: the ω2 = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor, an inductor, and a capacitor. Writing the formula as ω2 = C−1/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring, so… Well… You get it, right? The ω2 = C−1/L and ω2 = k/m formulas sort of describe the same thing: harmonic oscillation. It’s just… Well… Unlike the ω2 = C−1/L formula, the ω2 = k/m formula is directly compatible with our V-twin engine metaphor, because it also involves physical distances, as I’ll show you here.]
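Just to check the arithmetic of that v = [ħ/(m·c)]·[m·c2/ħ] = c result, here it is with the CODATA values for the electron hard-coded (a quick sketch, not part of the argument):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J·s
m_e = 9.1093837015e-31  # electron mass, kg
c = 299792458.0         # speed of light, m/s

a = hbar / (m_e * c)        # Compton radius, about 3.86e-13 m
omega = m_e * c**2 / hbar   # about 7.76e20 rad/s
v = a * omega               # tangential velocity

assert abs(v - c) / c < 1e-12   # v comes out as c, as it should
```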
The k in the ω2 = k/m formula is, effectively, the stiffness of the spring. It is defined by Hooke’s Law, which states that the force that is needed to extend or compress a spring by some distance x is linearly proportional to that distance, so we write: F = k·x. Now that is interesting, isn’t it? We’re talking exactly the same thing here: spacetime is, presumably, isotropic, so it should oscillate the same in any direction—I am talking those sine and cosine oscillations now, but in physical space—so there is nothing imaginary here: all is real or… Well… As real as we can imagine it to be. 🙂 We can elaborate the point as follows. The F = k·x equation implies k is a force per unit distance: k = F/x. Hence, its physical dimension is newton per meter (N/m). Now, the x in this equation may be equated to the maximum extension of our spring, or the amplitude of the oscillation, so that’s the radius a in the metaphor we’re analyzing here. Now look at how we can re-write the a·ω = a·√(k/m) equation: c = a·ω = a·√(k/m) = a·√[(F/a)/m] = √(F·a/m) = √(E/m), so that c2 = E/m. In case you wonder about the E = F·a substitution: just remember that energy is force times distance. [Just do a dimensional analysis: you’ll see it works out.] So we have a spectacular result here, for several reasons. The first, and perhaps most obvious reason, is that we can actually derive Einstein’s E = m·c2 formula from our flywheel model. Now, that is truly glorious, I think. However, even more importantly, this equation suggests we do not necessarily need to think of some actual mass oscillating up and down and sideways at the same time: the energy in the oscillation can be thought of as a force acting over some distance, regardless of whether or not it is actually acting on a particle. Now, that energy will have an equivalent mass which is—or should be, I’d say… Well… The mass of our electron or, generalizing, the mass of the particle we’re looking at. Huh? Yes.
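That chain of substitutions can also be checked symbolically. The sketch below uses sympy and just encodes k = F/a and E = F·a as above:

```python
import sympy as sp

a, m, F = sp.symbols('a m F', positive=True)

k = F / a               # stiffness, with F the force at maximum extension a
E = F * a               # energy as force times distance
v = a * sp.sqrt(k / m)  # tangential velocity v = a·ω = a·√(k/m)

# v² = E/m, i.e. E = m·v², which is E = m·c² when v = c.
assert sp.simplify(v**2 - E / m) == 0
```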
In case you wonder what I am trying to get at, I am trying to convey the idea that the two interpretations—the field versus the flywheel model—are actually fully equivalent, or compatible, if you prefer that term. In Asia, they would say: they are the “same-same but different” 🙂 but, using the language that’s used when discussing the Copenhagen interpretation of quantum physics, we should actually say the two models are complementary. You may shrug your shoulders but… Well… It is a very deep philosophical point, really. 🙂 As far as I am concerned, I’ve never seen a better illustration of the (in)famous Complementarity Principle in quantum physics because… Well… It goes much beyond complementarity. This is about equivalence. 🙂 So it’s just like Einstein’s equation. 🙂 Post scriptum: If you read my posts carefully, you’ll remember I struggle with those 1/2 factors here and there. Textbooks don’t care about them. For example, when deriving the size of an atom, or the Rydberg energy, even Feynman casually writes that “we need not trust our answer [to questions like this] within factors like 2, π, etcetera.” Frankly, that’s disappointing. Factors like 2, 1/2, π or 2π are pretty fundamental numbers, and so they need an explanation. So… Well… I do lose sleep over them. :-/ Let me advance some possible explanation here. As for Feynman’s model, and the derivation of electron orbitals in general, I think it’s got to do with the fact that electrons do want to pair up when thermal motion does not come into play: think of the Cooper pairs we use to explain superconductivity (so that’s the BCS theory). The 1/2 factor in Schrödinger’s equation also has weird consequences (when you plug in the elementary wavefunction and do the derivatives, you get a weird energy concept: E = m·v2, to be precise). This problem may also be solved when assuming we’re actually calculating orbitals for a pair of electrons, rather than orbitals for just one electron only.
[We’d get twice the mass (and, presumably, twice the charge), so… Well… It might work—but I haven’t done it yet. It’s on my agenda—as so many other things, but I’ll get there… One day. :-)] So… Well… Let’s get back to the lesson here. In this particular context (i.e. in the context of trying to find some reasonable physical interpretation of the wavefunction), you may or may not remember (if not, check my post on it) that I had to use the I = m·r2/2 formula for the angular momentum, as opposed to the I = m·r2 formula. I = m·r2/2 (with the 1/2 factor) gives us the angular momentum of a disk with radius r, as opposed to a point mass going around some circle with radius r. I noted that “the addition of this 1/2 factor may seem arbitrary”—and it totally is, of course—but so it gave us the result we wanted: the exact (Compton scattering) radius of our electron. Now, the arbitrary 1/2 factor may or may not be explained as follows. In the field model of our electron, the force is linearly proportional to the extension or compression. Hence, to calculate the energy involved in stretching it from x = 0 to x = a, we need to calculate it as the following integral: E = ∫ k·x·dx = k·a2/2 (integrating from x = 0 to x = a). So… Well… That will give you some food for thought, I’d guess. 🙂 If it racks your brain too much—or if you’re too exhausted by this point (which is OK, because it racks my brain too!)—just note we’ve also shown that the energy is proportional to the square of the amplitude here, so that’s a nice result as well… 🙂 Talking food for thought, let me make one final point here. The c2 = a2·k/m relation implies a value for k which is equal to k = m·c2/a = E/a. What does this tell us? In one of our previous posts, we wrote that the radius of our electron appeared as a natural distance unit. We wrote that because of another reason: the remark was triggered by the fact that we can write the c/ω ratio as c/ω = a·ω/ω = a.
This implies the tangential and angular velocity in our flywheel model of an electron would be the same if we’d measure distance in units of a. Now, the E = a·k = a·F/x equation (just re-writing…) implies that the force is proportional to the energy—F = (x/a)·E—and the proportionality coefficient is… Well… x/a. So that’s the distance measured in units of a. So… Well… Isn’t that great? The radius of our electron appearing as a natural distance unit does fit in nicely with our geometric interpretation of the wavefunction, doesn’t it? I mean… Do I need to say more? I hope not because… Well… I can’t explain any better for the time being. I hope I sort of managed to convey the message. Just to make sure, in case you wonder what I was trying to do here, it’s the following: I told you c appears as a resonant frequency of spacetime and, in this post, I tried to explain what that really means. I’d appreciate it if you could let me know if you got it. If not, I’ll try again. 🙂 When everything is said and done, one only truly understands stuff when one is able to explain it to someone else, right? 🙂 Please do think of more innovative or creative ways if you can! 🙂 OK. That’s it but… Well… I should, perhaps, talk about one other thing here. It’s what I mentioned in the beginning of this post: this analysis assumes we’re looking at our particle from some specific direction. It could be any direction but… Well… It’s some direction. We have no depth in our line of sight, so to speak. That’s really interesting, and I should do some more thinking about it. Because the direction could be any direction, our analysis is valid for any direction. Hence, if our interpretation would happen to be true—and that’s a big if, of course—then our particle has to be spherical, right? Why? Well… Because we see this circular thing from any direction, so it has to be a sphere, right? Well… Yes.
But then… Well… While that logic seems to be incontournable, as they say in French, I am somewhat reluctant to accept it at face value. Why? I am not sure. Something inside of me says I should look at the symmetries involved… I mean the transformation formulas for the wavefunction when doing rotations and stuff. So… Well… I’ll be busy with that for a while, I guess. 😦 Post scriptum 2: You may wonder whether this line of reasoning would also work for a proton. Well… Let’s try it. Because its mass is so much larger than that of an electron (about 1836 times), the a = ħ/(m·c) formula gives a much smaller radius: 1836 times smaller, to be precise, so that’s around 2.1×10−16 m, which is about 1/4 of the so-called charge radius of a proton, as measured by scattering experiments. So… Well… We’re not that far off, but… Well… We clearly need some more theory here. Having said that, a proton is not an elementary particle, so its mass incorporates other factors than what we’re considering here (two-dimensional oscillations). Wavefunctions as gravitational waves This is the paper I always wanted to write. It is there now, and I think it is good – and that’s an understatement. 🙂 It is probably best to download it as a pdf-file from the site because this was a rather fast ‘copy and paste’ job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to ‘snip’ into this WordPress blog, as they don’t have an easy copy function for mathematical formulas. It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂 Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s2) and of gravitational fields.
The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) by the new N/kg = m/s2 dimension. The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively. While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter. This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.
The basic ideas in this paper stem from a simple observation: the geometry of the quantum-mechanical wavefunction and that of an electromagnetic wave are remarkably similar. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phases of the real and imaginary part of the (elementary) wavefunction (ψ = a·ei∙θ = a∙cosθ + i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way? We show the answer is positive and remarkably straightforward. If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both? The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a2·ω2/2) and Einstein’s relativistic energy equation E = m∙c2 inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wave function. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.
As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4] Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6] Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7] We will, therefore, start with Einstein’s relativistic energy equation (E = mc2) and wonder what it could possibly tell us. I. Energy as a two-dimensional oscillation of mass The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking: 1. E = m·c2 2. E = m·a2·ω2/2 3. E = m·v2/2 In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = m·a2·ω2/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·a2·ω2/2 = m·a2·ω2? That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below.
The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs. Figure 1: Oscillations in two dimensions If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11] At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as: 1. K.E. = T = m·v2/2 = (1/2)·m·ω2·a2·sin2(ω·t + Δ) 2. P.E. = U = k·x2/2 = (1/2)·k·a2·cos2(ω·t + Δ) To facilitate the calculations, we will briefly assume k = m·ω2 and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin2θ.
Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to: d(sin2θ)/dθ = 2∙sinθ∙cosθ Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ−π/2). Hence, its kinetic energy is equal to sin2(θ−π/2), and how it changes – as a function of θ – will be equal to: d[sin2(θ−π/2)]/dθ = 2∙sin(θ−π/2)∙cos(θ−π/2) = −2∙cosθ∙sinθ We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a2·ω2. We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc2 formula as an angular velocity? These are sensible questions. Let us explore them. II. The wavefunction as a two-dimensional oscillation The elementary wavefunction is written as: ψ = a·e−i[E·t − p∙x]/ħ = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ) When considering a particle at rest (p = 0) this reduces to: ψ = a·e−i∙E·t/ħ = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ) Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise. Figure 2: Euler’s formula If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ.
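Before moving on, the perpetuum-mobile bookkeeping above is easy to double-check symbolically (again with k = m·ω2 and a set to 1); a sympy sketch:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Kinetic and potential energies of the two pistons (k = m·ω² = 1, a = 1).
T1, U1 = sp.sin(theta)**2, sp.cos(theta)**2
T2, U2 = sp.sin(theta - sp.pi / 2)**2, sp.cos(theta - sp.pi / 2)**2

# What one piston gains, the other loses, at every value of theta...
assert sp.simplify(sp.diff(T1, theta) + sp.diff(T2, theta)) == 0

# ...and the total T + U stored in the system is constant.
assert sp.simplify(T1 + U1 + T2 + U2) == 2
```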
Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y– and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this. Figure 3: Geometric representation of the wavefunction Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time. Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e−i∙E·t/ħ. Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc2. Can we, somehow, relate this to the m·a2·ω2 energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a2. We may, therefore, think that the a2 factor in the E = m·a2·ω2 energy formula will surely be relevant as well. However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction.
We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both ai as well as Ei will matter. What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c2. The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense: [formula snipped] We can re-write this as: [formula snipped] What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass? Before we do so, let us quickly calculate the value of c2ħ2: it is about 1×10−51 N2∙m4. Let us also do a dimensional analysis: the physical dimensions of the E = m·a2·ω2 equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m2/s2 = (N∙s2/m)∙m2/s2 = N∙m = J. The dimension of the left- and right-hand side of the physical normalization condition is N3∙m5. III. What is mass? We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c2: [m] = [E/c2] = J/(m/s)2 = N·m∙s2/m2 = N·s2/m = kg This is not very helpful.
It only reminds us of Newton’s definition of a mass: mass is that what gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein’s E = mc2 equation implies the ratio between the energy and the mass of any particle is always the same, so we can write, for example: E/m = c2. This reminds us of the ω2 = C−1/L or ω2 = k/m formulas of harmonic oscillators once again.[13] The key difference is that the ω2 = C−1/L and ω2 = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c2 = E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here. The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f2. However, such reflections do not answer the fundamental question we started out with: what is mass?
At this point, it is hard to go beyond the circular definition that is implied by Einstein's formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges as the property of spacetime that defines how exactly. When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it. What we can do, however, is look at the wave equation again (Schrödinger's equation), as we can now analyze it as an energy diffusion equation.

IV. Schrödinger's equation as an energy diffusion equation

The interpretation of Schrödinger's equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows: "We can think of Schrödinger's equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger's equation are complex waves."[17] Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the U·ψ term disappears. Therefore, Schrödinger's equation reduces to:

∂ψ/∂t = i·(1/2)·(ħ/meff)·∇²ψ

The ubiquitous diffusion equation in physics is:

∂φ/∂t = D·∇²φ

The structural similarity is obvious.
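Feynman's point – real exponential decay versus complex waves – can be illustrated with a single Fourier mode e^(i·k·x). A minimal numerical sketch (the constants are arbitrary; D stands for the diffusion constant in one case and for ħ/2meff in the other):

```python
import numpy as np

D = 1.0          # diffusion constant (arbitrary units), playing the role of ħ/2m_eff below
k = 2.0          # spatial frequency of a single Fourier mode exp(i·k·x)
t = np.linspace(0, 2, 5)

# For that mode, ∂φ/∂t = D·∇²φ gives the amplitude exp(−D·k²·t): real exponential decay.
diffusion_amp = np.exp(-D * k**2 * t)

# ∂ψ/∂t = i·D·∇²ψ gives exp(−i·D·k²·t): a rotating phase whose magnitude stays 1.
schrodinger_amp = np.exp(-1j * D * k**2 * t)

print(diffusion_amp[-1])         # decays toward zero
print(np.abs(schrodinger_amp))   # stays exactly 1: the wave propagates, it does not damp out
```

The imaginary coefficient thus turns damping into rotation, which is the "completely different behavior" in the quote.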
The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:

1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

1. ∂B/∂t = −∇×E
2. ∂E/∂t = c²·∇×B

The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.

Figure 4: Propagation mechanism

The Laplacian operator (∇²), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger's equation, i.e. the (1/2)·(ħ/meff) factor:

1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.

Now, the ħ/meff factor is expressed in (N·m·s)/(N·s²/m) = m²/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: ∂ψ/∂t is a time derivative and, therefore, its dimension is s⁻¹ while, as mentioned above, the dimension of ∇²ψ is m⁻². However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction? At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger's equation. One may argue, effectively, that its argument, (p∙x − E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number.
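The pair of equations above can be verified symbolically. A sketch using sympy, for the elementary wavefunction ψ = e^(i·(k·x − ω·t)); the dispersion relation ω = ħ·k²/(2meff), which makes both equations hold, is anticipated here:

```python
import sympy as sp

x, t, hbar, m_eff, k = sp.symbols('x t hbar m_eff k', positive=True)
omega = hbar * k**2 / (2 * m_eff)            # dispersion relation, anticipated
psi = sp.exp(sp.I * (k * x - omega * t))     # elementary wavefunction, amplitude a = 1

d_t = sp.expand_complex(sp.diff(psi, t))     # ∂ψ/∂t, split into real and imaginary parts
lap = sp.expand_complex(sp.diff(psi, x, 2))  # ∇²ψ (one spatial dimension)

# 1. Re(∂ψ/∂t) = −(1/2)·(ħ/m_eff)·Im(∇²ψ)
assert sp.simplify(sp.re(d_t) + (hbar / (2 * m_eff)) * sp.im(lap)) == 0
# 2. Im(∂ψ/∂t) = (1/2)·(ħ/m_eff)·Re(∇²ψ)
assert sp.simplify(sp.im(d_t) - (hbar / (2 * m_eff)) * sp.re(lap)) == 0
```

Both identities reduce to zero identically, so each of the two real equations carries the same dispersion relation.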
To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be? We may have a closer look at Maxwell's equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)·ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)·ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)·ex×E = (1/c)·i·E. This allows us to also geometrically interpret Schrödinger's equation in the way we interpreted it above (see Figure 3).[20] Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton's and Coulomb's force laws:

F = G·m1·m2/r² and F = (1/(4π·ε0))·q1·q2/r²

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s²/m. Hence, our N/kg dimension becomes: N/kg = N/(N·s²/m) = m/s². What is this: m/s²? Is that the dimension of the a·cosθ term in the a·e⁻iθ = a·cosθ − i·a·sinθ wavefunction? My answer is: why not? Think of it: m/s² is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second.
It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent. In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.

V. Energy densities and flows

Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger's equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = (ε0/2)·(E·E + c²·B·B)

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

S = (1/μ0)·E×B, with ∂u/∂t = −∇·S

Needless to say, the ∇· operator is the divergence and, therefore, gives us the magnitude of a (vector) field's source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S. We can analyze the dimensions of the equation for the energy density as follows:

1. E is measured in newton per coulomb, so [E·E] = [E²] = N²/C².
2. B is measured in (N/C)/(m/s), so we get [B·B] = [B²] = (N²/C²)·(s²/m²). However, the dimension of our c² factor is (m²/s²) and so we're also left with N²/C².
3. ε0 is the electric constant, aka the vacuum permittivity.
As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C²/(N·m²) and, therefore, if we multiply that with N²/C², we find that u is expressed in J/m³.[21] Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ε0 for an equivalent constant. We may want to give it a try. If the energy densities can be calculated – which are also mass densities, obviously – then the probabilities should be proportional to them. Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)·i·E or for −(1/c)·i·E gives us the following result:

u = (ε0/2)·(E·E + c²·(±(1/c)·i·E)·(±(1/c)·i·E)) = (ε0/2)·(E² − E²) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.

Figure 5: Electromagnetic wave: E and B

Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u = a²·cos²θ + a²·sin²θ = a²

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!

|ψ|² = |a·e⁻i·E·t/ħ|² = a² = u

This is very deep.
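The contrast between the two cases is trivial to verify numerically. A sketch that, like the text, makes abstraction of all scaling constants:

```python
import numpy as np

a = 2.0                                    # amplitude (arbitrary)
theta = np.linspace(0.0, 4.0 * np.pi, 1000)

# Matter-wave: the real and imaginary components are 90° out of phase,
# so their (unscaled) energy densities always add up to the constant a².
u = (a * np.cos(theta))**2 + (a * np.sin(theta))**2
print(u.min(), u.max())                    # both ≈ a² = 4.0

# EM wave: E and B are in phase; the i² = −1 in the B = (1/c)·i·E substitution
# flips the sign of the second contribution, so the two terms cancel.
E_sq = np.cos(theta)**2
u_em = E_sq - E_sq
print(u_em.max())                          # 0.0
```

The phase difference is thus what makes the matter-wave density constant where the in-phase electromagnetic terms cancel.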
A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible. As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appear to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons, i.e. spin-zero particles, while elementary matter-particles are fermions with spin-1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman: “Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. 
This probably means that we do not have a complete understanding of the fundamental principle involved." (Feynman, Lectures, III-4-1) The physical interpretation of the wavefunction, as presented here, may provide some better understanding of 'the fundamental principle involved': the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more.

VI. Group and phase velocity of the matter-wave

The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle. Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for the energy flow. In addition, we showed that Schrödinger's equation itself then becomes a diffusion equation for energy.
However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:

ψ = a·e⁻i·[E·t − p∙x]/ħ = a·cos[(p∙x − E∙t)/ħ] + i·a·sin[(p∙x − E∙t)/ħ]

The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we'll assume we're looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:

ψ = a·e⁻i·E0·t/ħ = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest so it does not matter: t is, effectively, the proper time so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following:

T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0·c²/h

This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength?
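Before turning to the wavelength, this natural unit of time can be evaluated for an electron. A quick numerical sketch, using the standard values for h and for the electron's rest energy:

```python
h = 6.62607015e-34          # Planck's constant, J·s
E0 = 8.18710565e-14         # electron rest energy, J (≈ 0.511 MeV)

T = h / E0                  # natural period of the electron, s
f = E0 / h                  # natural frequency, Hz

print(T)                    # ≈ 8.09e-21 s
print(f)                    # ≈ 1.24e20 Hz
```

The natural period of an electron thus comes out at a few zeptoseconds times a thousandth, i.e. of the order of 10⁻²⁰ s.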
That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we've got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = (m·c²)/(m·vg) = c²/vg

This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c² as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as:

vp/c = βp = c/vg = 1/βg = 1/(vg/c)

Figure 6: Reciprocal relation between phase and group velocity

We can also write the mentioned relationship as vp·vg = c², which reminds us of the relationship between the electric and magnetic constant: (1/ε0)·(1/μ0) = c². This is interesting in light of the fact we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24] Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c²! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only.
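The reciprocal relation can be checked against the relativistic expressions for E and p. A sketch for an electron moving at vg = 0.6·c (the choice of velocity is arbitrary):

```python
import math

c = 2.99792458e8              # speed of light, m/s
m = 9.1093837015e-31          # electron mass, kg
v = 0.6 * c                   # group velocity = the classical velocity of the particle

gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
E = gamma * m * c**2          # relativistic energy
p = gamma * m * v             # relativistic momentum

v_phase = E / p               # vp = ω/k = (E/ħ)/(p/ħ) = E/p
print(v_phase / c)            # ≈ 1.667: the phase velocity is superluminal

# The γ·m factors cancel in E/p, so vp·vg = c² holds exactly.
assert abs(v_phase * v / c**2 - 1.0) < 1e-12
```

Since the γ·m factors cancel, the product vp·vg = c² holds for any sub-luminal group velocity, not just for this example.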
Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move. For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = (m·c²)/c = E/c. Using the relationship above, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c²/vp = c²/c = c

This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e⁻i·[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·e⁻i·E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity. To calculate a meaningful group velocity, we must assume that the derivative vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂Ei/∂pi exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger's equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ wave equation and, hence, re-write it as the following pair of equations: 1.
Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·[ħ/(2meff)]·sin(kx − ωt)
2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·[ħ/(2meff)]·cos(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k²/(2meff)

Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c². It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein's mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in.

VII. Explaining spin

The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5). We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But so here we are looking at a matter-wave.
The basic idea is the following: if we look at ψ = a·e⁻i·E·t/ħ as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.

Figure 7: Torque and angular momentum vectors

A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and rotational inertia (I), aka the moment of inertia or the angular mass. We write:

L = I·ω

Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to:

ω = 2π/[2π·(ħ/E0)] = E0/ħ

We also know the distance r, so that is the magnitude of r in the L = r×p vector cross-product: it is just a, i.e. the amplitude of ψ = a·e⁻i·E·t/ħ. Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r²/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c².
Hence, this is what we get:

L = I·ω = (m0·r²/2)·(E0/ħ) = (1/2)·a²·(E0/c²)·(E0/ħ) = a²·E0²/(2·ħ·c²)

Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won't check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m²·J² = m²·N²·m² in the numerator and N·m·s·m²/s² in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = m·c² equation allows us to re-write it as:

L = a²·m0²·c²/(2·ħ)

Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a²·m0²·c²/ħ to ±(1/2)∙ħ? Let us do a numerical example. The energy of an electron is typically 0.511 MeV ≈ 8.1871×10⁻¹⁴ N∙m, and a… What value should we take for a? We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius. Let us start with the Bohr radius, so that is about 0.529×10⁻¹⁰ m. We get L = a²·E0²/(2·ħ·c²) = 9.9×10⁻³¹ N∙m∙s. Now that is about 1.88×10⁴ times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10⁻³⁴ joule in energy. So our electron should pack about 1.24×10²⁰ oscillations.
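The Bohr-radius numbers can be reproduced with a few lines of Python. A sketch (the helper function L below is just for illustration; the constants are standard values):

```python
hbar = 1.054571817e-34         # reduced Planck constant, J·s
c = 2.99792458e8               # speed of light, m/s
E0 = 8.18710565e-14            # electron rest energy, J

def L(a):
    """Angular momentum L = a²·E0²/(2·ħ·c²) for a given radius a (in m)."""
    return a**2 * E0**2 / (2 * hbar * c**2)

a_bohr = 5.29177210903e-11     # Bohr radius, m
print(L(a_bohr))               # ≈ 9.9e-31 N·m·s
print(L(a_bohr) / (hbar / 2))  # ≈ 1.88e4 — the huge factor mentioned above
```

The same helper can be re-used for the other two candidate radii, since only a changes between the cases.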
The angular momentum (L) we get when using the value of 6.626×10⁻³⁴ joule for E0 and the Bohr radius for a is equal to 6.49×10⁻⁷¹ N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×10²⁰), we get about 8.01×10⁻⁵¹ N∙m∙s, so that is a totally different number. The classical electron radius is about 2.818×10⁻¹⁵ m. We get an L that is equal to about 2.81×10⁻³⁹ N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10⁻¹² m. This gives us an L of 2.08×10⁻³³ N∙m∙s, which is only 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a²·E0²/(2·ħ·c²) = ħ/2? Let us write it out:

L = a²·E0²/(2·ħ·c²) = ħ/2 ⇔ a² = ħ²·c²/E0² ⇔ a = ħ·c/E0 = ħ/(m0·c)

In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10⁻¹³ m), we get what we should find:

L = a²·E0²/(2·ħ·c²) = [ħ²·c²/E0²]·E0²/(2·ħ·c²) = ħ/2

This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper.

VIII. The boson-fermion dichotomy

Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e⁻i·[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·e⁻i·E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.
Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can't be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you'd think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:

ψ(θi) = ai·(cosθi + i·sinθi)

In contrast, an elementary left-handed wave would be written as:

ψ(θi) = ai·(cosθi − i·sinθi)

How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:

ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)

If we count time like −1, −2, −3, etcetera then we write it as:

ψ = a·cos(−E0∙t/ħ) − i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ)

Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to it. It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy.
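The observation that counting time backwards flips the handedness amounts to complex conjugation, which is easy to verify numerically. A sketch with an arbitrary natural frequency:

```python
import numpy as np

E0_over_hbar = 3.0                              # natural frequency ω = E0/ħ (arbitrary units)
t = np.linspace(0, 2 * np.pi, 100)

psi_forward = np.exp(-1j * E0_over_hbar * t)    # counting time as 1, 2, 3, …
psi_backward = np.exp(-1j * E0_over_hbar * -t)  # counting time as −1, −2, −3, …

# Reversing the time count is the same as taking the complex conjugate,
# i.e. flipping the handedness (the sense of rotation) of the wavefunction.
assert np.allclose(psi_backward, np.conj(psi_forward))
```

The two arrays trace the same circle in the complex plane in opposite senses, which is the matter-wave analogue of left- versus right-handed circular polarization.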
The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman's Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there. Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:

(vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c²

The final question then is: why are photons spin-zero particles? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows an explanation of quantum-mechanical spin requires both mass as well as charge.[26]

IX. Concluding remarks

There are, of course, other ways to look at the matter – literally.
For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.

Figure 8: Two-dimensional circular movement

The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition. The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus. The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of 'hook' the whole blob of energy, so to speak? The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.
Appendix 1: The de Broglie relations and energy

The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – a.k.a. the matter-wave equations – one may be tempted to derive the following energy concept: E = m·v²? This resembles the E = mc² equation and, therefore, one may be enthused by the discovery, especially because the m·v² also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the action integral ∫(KE − PE)dt. Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = m·v².[27] However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.

Appendix 2: The concept of the effective mass

The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as i·ħ·∂ψ/∂t = −(ħ²/2meff)·∇²ψ + U·ψ. This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as ∂ψ/∂t = i·(ħ/2meff)·∇²ψ. We just moved the i·ħ coefficient to the other side, noting that 1/i = –i.
Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·e^(i·[E·t − p·x]/ħ) for ψ), this implies the following:

a·i·(E/ħ)·e^(i·[E·t − p·x]/ħ) = −i·(ħ/2meff)·a·(p²/ħ²)·e^(i·[E·t − p·x]/ħ) ⇔ E = p²/(2meff) ⇔ meff = m·(v/c)²/2 = m·β²/2

It is an ugly formula: it resembles the kinetic energy formula (K.E. = m·v²/2) but it is, in fact, something completely different. The β²/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2·meffOLD), as a result of which the formula will look somewhat better: meff = m·(v/c)² = m·β². We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass as used is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing because the same mass is often defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]). In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m·(v/c)² = m·β² formula does not apply.

This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics. References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.
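The meff = m·β²/2 result of Appendix 2 can be checked numerically: with the relativistic E = m·c² and p = m·v (m being the relativistic mass), E = p²/(2·meff) forces meff = m·β²/2. A quick sketch (the rest mass and velocity are arbitrary illustrative values):

```python
import math

c = 299_792_458.0         # speed of light, m/s
m0 = 9.109e-31            # electron rest mass, kg (illustrative)
v = 0.3 * c               # arbitrary velocity
beta = v / c

gamma = 1.0 / math.sqrt(1.0 - beta**2)
m = gamma * m0            # relativistic mass, as used in the text
E = m * c**2              # total energy
p = m * v                 # momentum

m_eff = p**2 / (2.0 * E)  # solve E = p^2/(2*m_eff) for m_eff
print(m_eff / (m * beta**2 / 2.0))   # ≈ 1.0: m_eff = m * beta^2 / 2
```

Algebraically, p²/(2E) = (m·v)²/(2·m·c²) = m·v²/(2c²) = m·β²/2, so the ratio printed is 1 up to rounding.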
[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e^(i·θ) = a·e^(i·[E·t − p·x]/ħ) = a·cosθ + i·a·sinθ. We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek·t – pk·x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.

[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s²), thereby facilitating a direct interpretation in terms of Newton’s force law.

[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs.

[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.

[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.

[6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3).

[7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards.
But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)

[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meters per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a²·ω²/2. The additional factor (a) is the (maximum) amplitude of the oscillator.

[9] We also have a 1/2 factor in the E = m·v²/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mv·c² − m0·c² = γ·m0·c² − m0·c² = m0·c²·(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = m·v². Appendix 1 provides some notes on that.

[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.

[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω².
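Footnote [9]’s relativistic kinetic-energy formula K.E. = m0·c²·(γ − 1) reduces to the classical m·v²/2 when v ≪ c. A short numerical illustration (the rest mass and test velocity are arbitrary):

```python
import math

c = 299_792_458.0         # speed of light, m/s
m0 = 9.109e-31            # rest mass, kg (illustrative electron value)

def ke_relativistic(v):
    # K.E. = m0 * c^2 * (gamma - 1)
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return m0 * c**2 * (gamma - 1.0)

def ke_classical(v):
    # K.E. = m * v^2 / 2, the non-relativistic formula
    return 0.5 * m0 * v**2

v = 0.01 * c              # slow compared to c
ratio = ke_relativistic(v) / ke_classical(v)
print(ratio)              # ≈ 1.000075: the two formulas agree at low speed
```

The first correction term is 3β²/4 (from the expansion γ − 1 ≈ β²/2 + 3β⁴/8), which is why the ratio at β = 0.01 sits at about 1.000075; at everyday speeds the discrepancy is entirely negligible.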
[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.

[13] The ω² = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω² = (1/C)/L introduces the concept of elastance (1/C), which is the equivalent of the mechanical stiffness (k) of a spring.

[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γm and the resistance for a circuit as R = γL, respectively.

[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10⁻⁸ seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×10¹² oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.
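The numbers in footnote [15] are easy to verify: the number of oscillations in the wave train is f·τ and its spatial length is c·τ.

```python
tau = 3.2e-8        # decay time of sodium light, s (from the text)
f = 500e12          # frequency of sodium light, Hz (approximate, from the text)
c = 299_792_458.0   # speed of light, m/s

oscillations = f * tau   # number of oscillations in the wave train
length = c * tau         # spatial extent of the wave train

print(oscillations)      # 1.6e7, i.e. some 16 million oscillations
print(length)            # ≈ 9.6 m
```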
[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω²·a²·sin²(ω·t + Δ) and the P.E. = U = k·x²/2 = (1/2)·m·ω²·a²·cos²(ω·t + Δ) formulas for the linear oscillator.

[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation.

[18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i·(ħ/meff)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Now, remembering that i² = −1, you can easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c.

[19] The dimension of B is usually written as N/(m·A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A·s and, hence, 1 N/(m·A) = 1 (N/C)/(m/s).

[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by –i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)·i·E, we assume we are looking in the negative x-direction.
If we are looking in the positive x-direction, we should write: B = −(1/c)·i·E. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.

[21] In fact, when multiplying C²/(N·m²) with N²/C², we get N/m², but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).

[22] The illustration shows a linearly polarized wave, but the obtained result is general.

[23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2).

[24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c² equation like this. See: (retrieved on 29 September 2017).

[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase.

[26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.

[27] We detailed the mathematical framework and detailed calculations in the following online article:

[28] (retrieved on 29 September 2017).

Re-visiting the matter wave (II)

Pre-scriptum (dated 26 June 2020): This post did not suffer too much from the DMCA take-down of some material: only one or two illustrations from Feynman’s Lectures were removed. It is, therefore, still quite readable—even if my views on these matters have evolved quite a bit as part of my realist interpretation of QM.
I have actually re-written Feynman’s first lectures of quantum mechanics to replace de Broglie’s concept of the matter-wave with what I think is a much better description of ‘wavicles’: one that fully captures their duality. That paper has got the same title (Quantum Behavior) as Feynman’s first lecture but you will see it is a totally different animal. 🙂 So you should probably not read the post below but my lecture(s) instead. 🙂 This is the link to the first one, and here you can look at the second one. Both taken together are an alternative treatment of the subject-matter which Feynman discusses in Lectures 1 to 9 of his Lectures (I use a big L for his lectures to show the required reverence for all of the Mystery Wallahs—Feynman included). Let me know what you think of them (I mean the lectures here, not the mystery wallahs).

Original post:

Electron blobs

So… Does this help?

[Figures: example of a wave packet; a square wave packet]
Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle’s Chinese room experiment. Searle concocted it to convince us that computers don’t really “think” as we do; they manipulate symbols mindlessly, without understanding what they are doing. Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics. Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine “thinks.” Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine’s answers from the human’s, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end. Some AI enthusiasts insisted that “thinking,” whether carried out by neurons or transistors, entails conscious understanding. Marvin Minsky espoused this “strong AI” viewpoint when I interviewed him in 1993. After defining consciousness as a record-keeping system, Minsky asserted that LISP software, which tracks its own computations, is “extremely conscious,” much more so than humans. When I expressed skepticism, Minsky called me “racist.” Back to Searle, who found strong AI annoying and wanted to rebut it. He asks us to imagine a man who doesn’t understand Chinese sitting in a room. The room contains a manual that tells the man how to respond to a string of Chinese characters with another string of characters. 
Someone outside the room slips a sheet of paper with Chinese characters on it under the door. The man finds the right response in the manual, copies it onto a sheet of paper and slips it back under the door. Unknown to the man, he is replying to a question, like “What is your favorite color?,” with an appropriate answer, like “Blue.” In this way, he mimics someone who understands Chinese even though he doesn’t know a word. That’s what computers do, too, according to Searle. They process symbols in ways that simulate human thinking, but they are actually mindless automatons. Searle’s thought experiment has provoked countless objections. Here’s mine. The Chinese room experiment is a splendid case of begging the question (not in the sense of raising a question, which is what most people mean by the phrase nowadays, but in the original sense of circular reasoning). The meta-question posed by the Chinese Room Experiment is this: How do we know whether any entity, biological or non-biological, has a subjective, conscious experience? When you ask this question, you are bumping into what I call the solipsism problem. No conscious being has direct access to the conscious experience of any other conscious being. I cannot be absolutely sure that you or any other person is conscious, let alone that a jellyfish or smartphone is conscious. I can only make inferences based on the behavior of the person, jellyfish or smartphone. Now, I assume that most humans, including those of you reading these words, are conscious, as I am. I also suspect that Searle is probably right, and that an “intelligent” program like Siri only mimics understanding of English. It doesn’t feel like anything to be Siri, which manipulates bits mindlessly. That’s my guess, but I can’t know for sure, because of the solipsism problem. Nor can I know what it’s like to be the man in the Chinese room. He may or may not understand Chinese; he may or may not be conscious. 
There is no way of knowing, again, because of the solipsism problem. Searle’s argument assumes that we can know what’s going on, or not going on, in the man’s mind, and hence, by implication, what’s going on or not in a machine. His flawed initial assumption leads to his flawed, question-begging conclusion. That doesn’t mean the Chinese room experiment has no value. Far from it. The Stanford Encyclopedia of Philosophy calls it “the most widely discussed philosophical argument in cognitive science to appear since the Turing Test.” Searle’s thought experiment continues to pop up in my thoughts. Recently, for example, it nudged me toward a disturbing conclusion about quantum mechanics, which I’ve been struggling to learn over the last year or so. Physicists emphasize that you cannot understand quantum mechanics without understanding its underlying mathematics. You should have, at a minimum, a grounding in logarithms, trigonometry, calculus (differential and integral) and linear algebra. Knowing Fourier transforms wouldn’t hurt. That’s a lot of math, especially for a geezer and former literature major like me. I was thus relieved to discover Q Is for Quantum by physicist Terry Rudolph. He explains superposition, entanglement and other key quantum concepts with a relatively simple mathematical system, which involves arithmetic, a little algebra and lots of diagrams with black and white balls falling into and out of boxes. Rudolph emphasizes, however, that some math is essential. Trying to grasp quantum mechanics without any math, he says, is like “having van Gogh’s ‘Starry Night’ described in words to you by someone who has only seen a black and white photograph. One that a dog chewed.” But here’s the irony. Mastering the mathematics of quantum mechanics doesn’t make it easier to understand and might even make it harder. 
Rudolph, who teaches quantum mechanics and co-founded a quantum-computer company, says he feels “cognitive dissonance” when he tries to connect quantum formulas to sensible physical phenomena. Indeed, some physicists and philosophers worry that physics education focuses too narrowly on formulas and not enough on what they mean. Philosopher Tim Maudlin complains in Philosophy of Physics: Quantum Theory that most physics textbooks and courses do not present quantum mechanics as a theory, that is, a description of the world; instead, they present it as a “recipe,” or set of mathematical procedures, for accomplishing certain tasks. Learning the recipe can help you predict the results of experiments and design microchips, Maudlin acknowledges. But if a physics student “happens to be unsatisfied with just learning these mathematical techniques for making predictions and asks instead what the theory claims about the physical world, she or he is likely to be met with a canonical response: Shut up and calculate!” In his book, Maudlin presents several attempts to make sense of quantum mechanics, including the pilot-wave and many-worlds models. His goal is to show that we can translate the Schrödinger equation and other formulas into intelligible accounts of what’s happening in, say, the double-slit experiment. But to my mind, Maudlin’s ruthless examination of the quantum models subverts his intention. Each model seems preposterous in its own way. Pondering the plight of physicists, I’m reminded of an argument advanced by philosopher Daniel Dennett in From Bacteria to Bach and Back: The Evolution of Minds. Dennett elaborates on his long-standing claim that consciousness is overrated, at least when it comes to doing what we need to do to get through a typical day. We carry out most tasks with little or no conscious attention. 
Dennett calls this “competence without comprehension.” Adding insult to injury, Dennett suggests that we are virtual “zombies.” When philosophers refer to zombies, they mean not the clumsy, grunting cannibals of The Walking Dead but creatures that walk and talk like sentient humans but lack inner awareness. When I reviewed Dennett’s book, I slammed him for downplaying consciousness and overstating the significance of unconscious cognition. Competence without comprehension may apply to menial tasks like brushing your teeth or driving a car but certainly not to science and other lofty intellectual pursuits. Maybe Dennett is a zombie, but I’m not! That, more or less, was my reaction. But lately I’ve been haunted by the ubiquity of competence without comprehension. Quantum physicists, for example, manipulate differential equations and matrices with impressive competence—enough to build quantum computers!—but no real understanding of what the math means. If physicists end up like information-processing automatons, what hope is there for the rest of us? After all, our minds are habituation machines, designed to turn even complex tasks—like being a parent, husband or teacher—into routines that we perform by rote, with minimal cognitive effort. The Chinese room experiment serves as a metaphor not only for physics but also for the human condition. Each of us sits alone within the cell of our subjective awareness. Now and then we receive cryptic messages from the outside world. Only dimly comprehending what we are doing, we compose responses, which we slip under the door. In this way, we manage to survive, even though we never really know what the hell is happening.

Further Reading: Is the Schrödinger Equation True? Will Artificial Intelligence Ever Live Up to Its Hype? Can Science Illuminate Our Inner Dark Matter?
The nature of splitting worlds in the Everett interpretation This post is about an aspect of the Everett many-worlds interpretation of quantum mechanics. I’ve given brief primers of the interpretation in earlier posts (see here or here), in case you need one. Sean Carroll, as he does periodically, did an AMA on his podcast. He got a number of questions on the Everett interpretation, one of which in particular I want to look at, because it’s about an issue that bugged me for a long time. From the transcript: 0:26:50.1 SC: David H says, “When the universe splits a la Everett, is the split instantaneous across the whole pre-existing universe, or does it propagate at the speed of light?” So the nice answer is, it’s up to you. And this goes exactly back to what we were talking about, about Laplace’s demon earlier. The branching of the wave function of the universe into separate worlds is not part of the fundamental theory. The fundamental theory is, there’s a wave function and it evolves according to the Schrödinger equation. That’s the entire theory. The splitting into worlds is something that we human beings do for our convenience. So, the right way to ask this question is, is it more convenient to imagine the world splitting all at once across all of space, or propagating at the speed of light? 0:27:31.0 SC: And for that, it’s completely dependent on what your purpose is, right? I actually tend to think of it as simpler just to imagine the universe splitting all at once, pre-existing, simultaneously across the whole pre-existing universe. That bothers some people, because they say, “Well, that’s not compatible with special relativity, which says that signals can’t travel faster than the speed of light.” But there’s no signal traveling faster than the speed of light; it’s just our description is traveling faster than the speed of light, and that’s perfectly okay. While this answer makes sense to me now, I don’t think it would have when I was struggling with it. 
This post is my attempt to explore the answer in such a way that someone who doesn’t yet get it, might. Let’s start with an analogy, the Louisiana purchase. In 1803 France sold a large chunk of territory in North America to the United States. Consider this question. When did the territory become part of the US? From a legal perspective, that would have been when the US Senate ratified the purchase agreement with France, which happened on October 20, 1803. On that ratification, all of the territory became part of the US, and all of the inhabitants became US residents. Of course, news of the purchase took time to spread. There was a ceremony in New Orleans on December 20, 1803. But the news took longer to reach many residents. In particular, no one had really bothered to consult or inform most of the Native Americans living in the territory. So while the legal transfer happened instantly, the social results took time, years in fact, to be felt throughout the territory. Which way is the right way to look at when the Louisiana territory became part of the US? The legal transfer date? The boots on the ground occupation? Or the overall assimilation into US culture? There isn’t really a fact of the matter here. Borders and nationality are human conventions. The land is the land. Nature doesn’t care. So we can validly talk about it in different ways. That’s what Carroll is trying to get at when he talks about the raw theory, the universal wave function, versus our ways of talking about worlds or universes splitting. Similar to the transfer of the Louisiana territory, there are multiple ways of looking at and talking about the same reality. Here are three: 1. On a quantum measurement, the world begins splitting at the time and location of the measurement. The split propagates out at the speed of quantum interactions. The propagation can happen no faster than the speed of light. 2. 
On a quantum measurement, previously existing worlds, which had until then been identical, begin to diverge from each other at the time and location of the measurement. The divergence propagates out at the speed of quantum interactions, no faster than light. 3. On a quantum measurement, what we considered one world, we now instantly consider split into multiple whole worlds, which had until then been identical. They begin to diverge from each other at the time and location of the measurement, propagating out via quantum interactions no faster than light. The thing to remember here is that a “world” or “universe” in Everett is a slice of the universal wavefunction. But our divvying up of the wavefunction is a human convention. In nature it’s just a continuum. So we can talk about the slice we’re on “splitting” into two or more slices, or nearby slices “diverging” from each other, or even decide that what we once divvied up as one slice we’re considering multiple slices. It’s all different ways of talking about the same reality. Option 1 has historically made the most sense to me. It was how I needed to think of the Everett interpretation to consider it a viable possibility. It also makes more sense when considering something like an isolated quantum system, such as a quantum computer, which has qubit circuits in combined superposition. Under 1, these could be seen as world splits that are contained for a time, until the measurement magnifies the quantum state differences into the universe. But 1, which is us constantly being split into multiple people, is an existentially disconcerting way to think about this. It also makes the probability of observed measurement outcomes awkward to talk about since all possible outcomes happen. And each split effectively divides up the energy of the world among the new worlds, which many find difficult to accept. Option 2 is David Deutsch’s preferred way of looking at it.
In this view, we are who we are, and there are other people in parallel worlds identical to us but diverging away anytime a quantum event is magnified, so we can see ourselves as having a classical timeline. Isolated quantum superpositions are basically the conditions necessary to detect the interference between worlds. Talking about probabilities is much easier since we’re now talking about the probabilities of outcomes in this world. And the energy of this world is what it is. It’s also easier to understand why Bell’s theorem isn’t an issue for Everett within this view, because within any one world, the correlations can exist from the beginning. The drawback of this option is it requires more explanation. Option 3 is Carroll’s preference, and this is the way Everett is usually presented in quick summaries, although without the explanation of why it doesn’t violate relativity. It also seems to inherit the existential angst and other issues from 1. I’m not sure why Carroll prefers it. It might be because the existential issues can also be seen as exciting. And the hybrid model can be seen as preserving that while also making clear why Bell isn’t an issue. But it seems to have the highest explanatory burden. Of course, all of this is about a theory that already requires a lot of explanation, one most people won’t wait on before summarily dismissing the whole thing as absurd and outrageous. So maybe worrying about additional explanatory burden isn’t productive. Which option works for you? Or is the least problematic? Is there another way of looking at it?

83 thoughts on “The nature of splitting worlds in the Everett interpretation”

1. “…a theory that already requires a lot of explanation, one most people won’t wait on before summarily dismissing the whole thing as absurd and outrageous.” What about those who’ve given it considerable thought and analysis and still find it absurd?
Doesn’t the mere fact that proponents can’t even say for sure how the splitting works say something about how absurd the theory is? (Or, for that matter, define how energy can be “thinned”?) 1. I wasn’t describing you with that passage Wyrd. You’ve at least read about it and have often been willing to talk about it. On your question about the splitting, it seems clear I didn’t get my point across in the post, at least not to you. Oh well, maybe next time. (Doesn’t energy get thinned all the time in physics? What else is an explosion? Or the big bang?) 1. While I agree I’m not “most people” the way it’s written verges on the evangelistic ‘if you don’t agree with this, you don’t get it’ mode that I see as making MWI something of a case of groupthink. If your point is that it’s dealer’s choice, I got it, and it’s what I’m suggesting makes this not even a theory but a metaphysical belief. Too much is undefined, and there isn’t any math for any of it. Energy is never “thinned” out in the sense I think you know I mean (especially in light of any number of previous conversations). In physics, energy is conserved. I noticed each of your three options starts with “On a quantum measurement” but what really is a measurement under MWI? Measurements collapse the wave-function, which MWI explicitly denies. 1. Wyrd, my friend, based on our other conversations on this, I feel like if I address your points, things are just going to get progressively more heated. I acknowledge you think this theory has zero merit and is utterly misguided. Can we just agree to disagree on this particular topic? 2. I must read back into quantum mechanics to be able to comment better on your interesting speculations. What I am wondering is whether it doesn’t all result from the amplitude of the wave function being available to us but the phase always being unavailable and probabilistic.
When we make a measurement, the phase of the wave function gets translated into an amplitude accessible to us. A measurement is then just an interaction that is accessible to us. Is it then necessary for some 'wiring up' behind the scenes to track which particles are entangled (= have correlated phase?), or does that drop out of the universal wave function, evolving according to Schrödinger's equation?

1. I have to admit your first paragraph is pushing beyond my understanding. My reading about the phase is that it's a factor in maintaining coherence, and when it gets disrupted, we lose that coherence, that is, we get decoherence and the disappearance of quantum effects. That might match up with what you're describing, but I'm not sure.

It took me a while to appreciate how thoroughly entanglement features in the Everettian view. As I understand it, the wavefunction collapse in Copenhagen and other collapse interpretations ends entanglement. But under Everett, there is no collapse, just the evolution of the wavefunction. Decoherence is the quantum system becoming entangled with the environment. So with a universal wave function, entanglement is pervasive. When we talk about entanglement under Everett, it seems like we're talking about systems more entangled than the background levels. It's so pervasive that Carroll, working with others on their own theory of quantum gravity, has proposed that space may be emergent from entanglement.

1. You may regret asking… Given the canonical "zero" state, |0⟩, defined as |0⟩ = (1, 0)ᵀ, it's the case that e^(iθ)|0⟩ is indistinguishable from |0⟩ for any global phase angle θ. The reason, as PJMartin mentioned, is that the magnitude of that exponential is always 1.0, so the state always looks like the |0⟩ state. But given the states |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2, which differ by a relative phase, we can apply a rotation operator such that the states become |0⟩ and |1⟩, which we can distinguish.

2. Thanks.
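As an aside, the global-versus-relative phase point above can be checked numerically. A minimal sketch (my own illustration, not from the thread; assumes NumPy), using the standard Hadamard matrix as the rotation operator:

```python
import numpy as np

# Computational basis states as vectors
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

probs = lambda s: np.abs(s) ** 2  # measurement probabilities in this basis

# Global phase: e^(i*theta)|0> gives exactly the same probabilities as |0>
theta = 0.7  # arbitrary angle
phased = np.exp(1j * theta) * zero
assert np.allclose(probs(phased), probs(zero))  # indistinguishable

# Relative phase: |+> and |-> also give identical probabilities in this basis...
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)
assert np.allclose(probs(plus), probs(minus))

# ...but a rotation (the Hadamard) maps them to |0> and |1>, which we can tell apart
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(probs(H @ plus), probs(zero))
assert np.allclose(probs(H @ minus), probs(one))
```

The magnitude of e^(iθ) is always 1, so a global phase never shows up in any probability; a relative phase does, once a basis rotation is applied.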
I think I follow the mathematics, but I'm not sure if I follow the concept. Would it be accurate to say the global phase is the overall background phase of everything in the environment, and the relative phase is the local variance? If so, it makes sense that global phase could never be detected, since anything used to detect it would have the same phase, which would just cancel out.

3. Both global and relative phase are properties of the quantum system and have nothing to do with the background. A physical intuition might be something like: imagine a rotating ball. The vector pointing along the axis of rotation is, in some sense, rotating, but since its coordinates never change, there's no way to detect that rotation. For the vectors not aligned with the axis, their coordinates do change under rotation, and we can detect that change.

3. The way I read the quote you provided from Carroll is that there is no effective difference, at the level of physics we can do, between universes in which a split happens everywhere at once and worlds in which it spreads at the speed of light. It makes zero difference to the physics. If you want to imagine the whole universe splits everywhere at once, have a ball. Or if you want to imagine the split having a fixed location in space and spreading at the speed of light, knock yourself out.

But… if you start actually doing physics and you want to know which point on a detection screen a photon hit, and you're ten light-years away, you'll have to wait ten years to find out. Doesn't matter if you think you split instantly with the photon, or you split when it arrives. Neither depiction matters because they're indistinguishable in practice. When the radio signal reaches you with the information, then you'll know!

With your Louisiana Purchase example, imagine that what all those different people you described who are rambling around the Territory "know" about the purchase defines their phase correlation.
And imagine that the moment of congressional ratification was actually a moment that could have gone either way. That is the quantum system we're curious about. Let's say the world "splits" everywhere instantly. So then everyone in the Territory is replicated instantly: in one world there is a version of themselves who knows the purchase was ratified, and in the other world there is a version who knows the purchase was repudiated. But these two sets of people never interact because they know different things.

OR… the replication of all those people doesn't occur until the news actually reaches them (traveling at the speed of light), since prior to this news reaching them Louisiana was owned by both the US and France simultaneously (in the quantum sense). But when news reaches them, THEN a version of them takes up residence in both worlds, since the news must be one way or the other.

But at the end of the day, it doesn't matter which version of the splitting worlds story is "right", because there's no way for anyone in the territory to actually distinguish them… There's also a bunch of people who think it was ratified and a bunch of otherwise identical people who think it was repudiated. Right?

1. Hi Michael, You parsed it well! I think I agree with everything you wrote here, with a few minor but important quibbles. (Which might come down to just word choices.)

The first is I think the word "replicate" gives the wrong impression. It implies a copy is being made. But that's wrong. "Split" really is the right word for what's happening, if we want to think about one world becoming two. Think of it as every world having a certain "thickness", a certain energy. When a split happens, the resulting worlds are thinner. (Which raises the question of how thin things can get. Carroll says it may be infinite, but if not, based on a maximum entropy calculation, he estimates it should allow at least e^(10^122) slices of the observable universe.)
Or we can think about it as two worlds that were always there with the thinner thickness. They were identical, running side by side, until the ratification vote; then they started having differences after the vote went different ways in each one. Where we draw the boundaries and when we change them is really up to us, because the boundaries are just accounting, something to make it easier for us to think about it. No matter which way we do it, the actual dynamics only propagate under the speed of light (or the speed of early 1800s mail in the analogy).

So, on the second quibble, it's important to understand that it's not a matter of not knowing which of different ontologies is "right", but of all of them being compatible with the mathematics. It's that both versions are the same. The underlying ontology (if Everett is correct) is identical. The variance is just in how we choose to slice up the universal wave function in our accounting.

Hope that makes sense (and I got it all right on my end).

1. I have no quibbles with your quibbles, Mike. Replication was meant to suggest that when a split occurs there's potentially two of me now–one where the purchase was ratified and one where it wasn't. But I realized after I hit 'Send' that this was based on only one of the three scenarios you had described. Both could have been there all along in some of the others. So no issue.

On the issue of an ontology being "right" or not, this gets interesting to me in the following sense: I think what you're saying is that both scenarios are fictional representations of processes that don't exist quite as imagined to begin with, and because both are compatible with the observable processes that do exist, they are the "same." But to one disinterested in physics, they seem like they could be different. To the non-technical part of me, for instance, it sure seems like a split that happens everywhere at once is not the same as one that propagates in time.
I understand it is a difference without distinction, but maybe the pause that arises when we consider this is worth attending to… If the mathematics equates two scenarios which common sense tells us are not in all ways equal, then what is happening? This was prompted by thinking further about this equality of conditions that don't seem equal–but ultimately are in terms of how they cash out. It's not an objection… more of a curiosity.

1. I understand the difficulty, Michael. Remember, I did a whole post questioning Deutsch's view and had a hard time for months thinking of it as the same theory as Everett's. The idea that they're discussing the same reality isn't obvious.

What is a fiction is the idea that worlds are definite things in Everettian theory. It's more of a continuum in which we can interact with a narrow slice. I think about a post Chad Orzel did on the Everett interpretation, which I linked to in my post about Carroll's book. At the time, I misinterpreted his post as taking an anti-real stance toward the worlds because he used the word "metaphor". But when I recently went back to it, I realized that I (and many other people) had missed his meaning. He meant the same thing that Carroll meant. The main reality is the evolution of the wave function. What we call "worlds" or "universes" are just a convenient way for us to think about that reality, to relate it to our experiences.

Lev Vaidman, in the SEP article on the many-worlds interpretation, describes the theory as having two components:

The MWI consists of two parts:
i. A mathematical theory which yields the time evolution of the quantum state of the (single) Universe.
ii. A prescription which sets up a correspondence between the quantum state of the Universe and our experiences.

Part (i) is essentially summarized by the Schrödinger equation or its relativistic generalization. It is a rigorous mathematical theory and is not problematic philosophically.
Part (ii) involves "our experiences", which do not have a rigorous definition.

It's funny that we use the word "interpretation" to refer to theories like Copenhagen, de Broglie-Bohm, and Everett, when they have different postulates and make different predictions. They really are different theories. (I think the word "interpretation" in this case arose for historical reasons, an attempt to get these alternate theories past the old guard.) But what Vaidman calls Part (ii) is actually an interpretation of Everettian physics, and there are multiple. But unlike what we normally call "interpretations", these really are interpretations, all with exactly the same cash-out predictions.

I should also note that there are plenty of Everettians who do take either an anti-real stance toward the worlds, or an agnostic one. Stephen Hawking was one. He was an Everettian, but also an instrumentalist. His attitude was that Part (i) was the important part and that it was predictive of our observations. He stopped there. As someone with instrumentalist sympathies, it's a view I can understand.

1. Hi Mike, Thanks for the link; I enjoyed Chad Orzel's article. What it reinforced for me personally is that MWI suffers from the same problem every other form of QM suffers: there is no explicit connection between the mathematical theory and our experience of the world, which is to say, something in addition to the core mathematical theory is required to derive the world we experience. And that something in addition is always a little wonky compared to the underlying mathematical structure. This is Vaidman's point, I think.

If you posit the wave equation is describing what is real, then our collective, objective perception of a classical world is a shared hallucinatory negation of everything else, and some physical vehicles or mechanisms are required to explain how this "filtering" occurs. And it isn't just a filtering of conscious perceptions, but something more extensive.
We know this because the "me" on one branch doesn't bump into the "me" on another branch in the hallway. So if all branches are equally real, a mechanism for physical differentiation or divisibility is required. I'm not aware of any hypothetical means by which this shared hallucinatory negation or physical divisibility of elements of reality occurs. And I'm probably missing something, because physicists don't seem too bothered by this.

In this notion, the "we" that we think we are, are ghosts. We pass through everything else. It seems more likely to me, as an explanatory position, that the wave equation is describing a universe of possibilities that are all quite real to one another at the level they exist, and that they can interact on this level, but that only a subset of particular conditions or branches are then physically instantiated somehow through a process we have yet to even imagine. And that produces the collective, objective reality. In this notion what is "real" is only what is instantiated, and the wave equation is actually describing a realm of ghosts. The "we" that we think we are is what is "real" and everything else is a ghost.

Not sure it matters which is correct, but how do you reconcile the basic claim of MWI and the human experience without needing to define different notions of what is "real"?

Nor am I. (Normally the Fermi exclusion principle prohibits matter from coinciding.) The claim is that some magical form of decoherence is responsible, but decoherence as we know it does the opposite. You don't sink through the chair you're sitting in because you and the chair are both decohered.

3. Hi Michael, I'm not wild about the word "hallucinatory", but I think from your full description you don't mean something that should be perceptible by the nervous system. It happens at a much lower level. I wouldn't say that physicists aren't bothered by the concept you're describing.
In fact, the first physicist to be bothered by it was Albert Einstein, since the basic mechanism which would allow this to work is entanglement. The physicists have just been wrestling with it for a lot longer than we have. It's old hat to them.

It's entanglement that allows for multiple particles to be in a combined superposition. The Everett interpretation is that entanglement doesn't end on measurement, but propagates into the environment (decoherence results from the system becoming entangled with the environment).

Consider quantum computers. A 50-qubit circuit can be in up to 2^50 states (over a quadrillion) simultaneously. When changes ripple through the circuits, how do the qubit states in each version of the circuit "know" which version they're in? Because they're still in a coherent state, there is detectable (and usable) interference between the versions, but each version is still distinct. Under collapse interpretations, when the circuit is measured, it collapses to one classical state. Under Everett, the entanglement instead spreads into the environment. The success of quantum computing is actually one of the things that led me to take another look at this stuff a few years ago.

On only a subset of the branches being real, well, that's the rub. Collapse interpretations say only one is real, but no one can identify a mechanism to explain why any particular one should be more real than any other, except to just say it's random. But there's nothing in the raw quantum formalism, the part of QM that has been validated through almost a century of experiments, to indicate any outcome should be any more real than the others. Doesn't mean some experiment might not find one tomorrow, and so falsify Everett, but that's where we are.

4.
“It’s entanglement that allows for multiple particles to be in a combined superposition.” Except that entangled particles are distinct particles with their own energy/mass, and are subject to the Fermi exclusion principle. They cannot physically coincide, which, I believe, is what Michael is getting at. In a Stern-Gerlach experiment, for instance, under MWI there are suddenly two silver atoms where there was only one entering the apparatus. If the claim is there were two silver atoms all along, then how did they coincide? If the branch split one atom into two atoms, how does that happen? Either way you seem to need new physics. re QC: In all the reading I’ve done, most texts don’t mention MWI in the context of QC. I finally did find a reference to it. Deutsch believes the power of QC comes from the myriad branches, which really raised my eyebrows. For one, how do other branches return the result of their computations? MWI suggests branches cannot affect each other. For another, QC is fully explained in its own mathematics. Deutsch seems to treat QC like binary computing, but it’s not, it’s a form of analog computing, hence its ability to have those myriad superposed states. It’s like saying we need multiple worlds to explain the different timbres of diverse musical instruments playing the same note. The notes sound different because they are different superpositions of harmonics. The notes, and QC, are analog and fully capable of having myriad wave forms combined. Liked by 1 person 5. Do you mean the Pauli exclusion principle? As I understand it, that states that no two fermions can be in the same quantum state at the same time. Since the various states of a particle in superposition are, by definition, different states, I don’t think there’s an issue here. I’ve said it before, but when we think we’ve found a cheap way to dismiss Everett, we’re almost certainly missing basic stuff. 
On QC, most books on it don't go into quantum interpretations, because they're controversial and not needed to explain the techniques. But many of the theoreticians, like Deutsch or John Preskill, thought through it within the Everettian paradigm. That, in and of itself, doesn't make it the only paradigm it can work in, just the one it does most straightforwardly. In any case, I can't imagine anything Deutsch might say about the Everett interpretation that you wouldn't have a strong reaction against. 🙂

I'll repeat what I said above: over a quadrillion concurrent states, each one able to do its own calculations. At 300 qubits, there will be more states than there are particles in the observable universe. When we get into the thousands and higher, the alternative explanations to quantum states are going to get increasingly strained.

As to how the branches return their results, remember that Deutsch is looking at this from Option 2 in the post. The main thing is these branches aren't yet decohered from each other. They still have coherent interference. Under option 2, that interference is between worlds / universes. I think you know the interference is utilized and manipulated to promote the correct answer, so that it has a high probability of being in the measured version.

6. Oops, yes, Pauli, not Fermi. I jumped from fermion to Fermi there!

This is why I mentioned silver atoms (which are made of fermions). In particular, the electrons already occupy all available quantum states, except for the lone valence electron in the 5s shell. It's that electron that allows a silver atom to have an overall spin. The other 46 electrons pair off in spin-up+spin-down pairs. And those 46 electrons are fully described within the silver atom; there are no extra quantum states they can have to differentiate from supposedly superposed "identical" electrons.
Think of it this way: when Sean Carroll gives a lecture about MWI and uses his beam-splitter and then jumps one way or the other depending on the result, the implication is that he, the podium, the stage, the audience, and the auditorium all branch into closely identical versions that physically coincide. What magical quantum state allows all those fermions to do that in violation of Pauli? The usual explanation is "decoherence", but that's magical, too, at least in terms of how we currently understand decoherence. Or the outcomes.

Exactly. It's not the controversy. It's that there's no need for it. It's just one calculation — one set of operations performed on the qubits.

The thing about interference, which yes, is where the QC power comes from, is that, as in the two-slit experiment (which we previously agreed didn't seem to invoke MWI except for where the particle actually gets measured, as in a beam-splitter experiment), interference is a single-world phenomenon that, while we don't fully understand it, doesn't seem to require, or even suggest, multiple worlds.

7. Mike, Wyrd seems to understand pretty well the question I was trying to ask. I wasn't sure from your answer you fully understood what I was driving at, or if you did, your own reading may have given you a perspective on this I don't grasp, which results in our talking past one another just a bit.

Imagine I am in a room observing a double-slit experiment. And there are various possibilities for the outcome I might observe. If I understand MWI, they all occur. And in popular writing about this, it seems to imply there is a "me" who sees one outcome, as well as a "me" in another branch that sees another. Let's say I'm sitting behind a desk where a computer is telling me what the detector in "my" branch of the wave function registered.
Presumably, in another branch, a completely independent instantiation of "me", seated at the same desk (albeit an independent instantiation of the desk), registers a different result for where the photon landed. Now, the entanglement that allows the double slit experiment to create the interference pattern presumably is a physical process occurring within this room. So there are all these instantiations of me seated at this desk, but they do not "bump" into one another or know of one another or interact in any way. So it's a lot like the Exclusion Principle problem, only we're talking about entire portfolios of nearly identical physical systems that would seem to exist in the same physical space but don't interact.

When I asked if this issue concerned physicists, I wasn't speaking about entanglement itself, which I know Einstein objected to–I'm wondering what it is I'm missing about all these nearly identical physical systems along separate branches that would intuitively be in the same room. If they are all in parallel "worlds", then where are those worlds? This seems like a straightforward question to ask if the premises are right: the key premise being that in branches with different outcomes than the one I know about, there is a version of "me" there also who witnesses the other outcomes.

Where do these versions of "me" reside that all witness different outcomes of the same physical experiment, but never physically interact? Because it seems an obvious question, and because many people smarter than me don't seem worried about it, I am wondering if I'm misunderstanding something essential about the MWI to begin with.

8. Michael: You're right on point with the coincidence issue.
I think the understanding required is this: MWI places the Schrödinger equation as central to its ontology, and proponents have faith that the coincidence issue, the energy issue, the probability issue, the preferred basis issue, and the Hilbert space ontology issue all have reasonable explanations we'll someday understand, based on the central notion that the Schrödinger equation explains everything. Those on the Copenhagen side of things have faith that wave-function "collapse" has a reasonable explanation we'll someday understand based on, or extending, QM principles. (As I tried to illustrate with the spin experiments, even MWI experiences sudden changes to the wave-function in experiments, so it actually does include a form of "collapse" — that wave-function vector suddenly jumps to a known eigenstate.)

The irony to me is that MWI is often claimed as the more parsimonious view based on the simplicity of the premise. I think the consequences of a premise need to be considered as well, and as total views there is far more physics unexplained under MWI, and it is therefore the less parsimonious view overall.

9. Michael and Wyrd, On the exclusion principle, I don't have a researched answer. However, I'll note again that the Everett interpretation is not going to be dismissed on the cheap. If it were incompatible with something as fundamental as the Pauli Exclusion Principle, Hugh Everett wouldn't have gotten it past John Wheeler, or his thesis committee, or the peer review for publication, not to mention all the people who've attacked the theory over the decades. So my answer here might not be right, but if it isn't, it just means we're overlooking something a first year physics graduate student probably knows.

I think the answer is that the exclusion principle is based on interactions, on bosons being exchanged by fermions.
However, in a group of entangled particles, such as all the elementary particles in an atom or molecule in superposition, those types of interactions can only happen between versions of the particles in the same element of the composite superposition. In other words, an electron in one version of an atom in superposition isn't going to exchange photons with the same electron in another version of that atom. Remember that the photons are part of the entanglement too, so there will be versions for each element of the overall entangled superposition. (I wish I knew less awkward language to express this.)

I'll admit I'm not sure how interference factors into this, except to say it's only a factor until decoherence. Wyrd laid the entire explanation on decoherence, but I'm not sure that's true. I think there is already a separation before then. It's just that interference is gone (or, well, no longer significant) after decoherence.

Anyway, that's my amateur (possibly very wrong) shot at the answer. It's the way I've assumed it worked for a while. I might do some digging around to find out how the exclusion principle and superpositions relate to each other. I think it's where the answer lies.

10. Mike, It is precisely because I agree MWI won't be dismissed on the cheap that I'm wondering what I'm missing. I think the focus/discussion above on the Pauli Exclusion Principle has perhaps led you away from the bigger picture, even simpler question I was asking. I think to your point, it's easy enough to deal with the Exclusion Principle. Might we note for instance that the Pauli Exclusion Principle holds in any given branch or world, and that when we deal with entanglement all the "versions" of an electron, say, in MWI, have something unique about them (a different spin or position or momentum), which is why they're in another branch to begin with.
I’m less concerned about such a specific and technical nuance of the theory, and more curious about where the physicists think all the various branches of the wave function reside such that they are all equally “real” but utterly hidden from one another on the large. Liked by 1 person 11. Michael, I think the principle remains the same on broader considerations. Our ability to detect something depends on interactions. For example, we only see something by having photons from it strike our retina. When we touch something, it’s electromagnetic interactions that stop our hand from going through it, etc. This is one of the reasons dark matter is supposed to be so hard to detect, because it only seems to interact gravitationally. We could think of the other “worlds” as dark matter without the gravitational interactions. (Although each world obviously interacts with itself.) We can only interact with the slice of the wave function we’re on, essentially with the stuff in the same element of the superposition of the entangled environment we’re a part of. The other worlds are all right here, but we can’t interact with them, and they can’t interact with us. (At least aside from interference that is so fragmentary and canceled out that detecting it would require knowledge of all the relevant microstates.) Liked by 1 person 12. “The other worlds are all right here, but we can’t interact with them, and they can’t interact with us.” Nothing in physics explains how that can be true of normal matter. It’s an unfounded assertion. Liked by 1 person 13. I think that shoe is actually on the other foot. It’s my logic you have consistently denied in all these conversations. MWI doesn’t really have logic so much as assertions based on the notion that the Schrödinger equation must be the whole and entire truth. Liked by 1 person 14. I’m recalling now we had this discussion once before, Mike. 
I understand the notion that dark matter doesn't interact with us except gravitationally, so it's in essence right here all the time though we never sense it. But I think what you're suggesting here is that not only is red different from blue in the branch of reality in which we're having this conversation, but that in 10^(10^120 something) co-located branches of reality, there is a red that is different from every other red in some way that doesn't reduce its redness. I can imagine ways of describing this, but I think it requires additional properties of matter, and a HUGE range of them. I guess the question is: are these properties part of the wave equation? I think this is part of the extra stuff that is needed to relate the theory to our experiments. All the versions of QM have a problem with that specific issue, I think.

When you say we only interact with the stuff in the same element of the superposition of the entangled environment we're a part of, I don't really know how to parse that. I think of the double slit experiment again, and understand at some conceptual level that the entanglement between possible outcomes of the experiment is replaced by new entangled relationships that spread through the environment. But where this gets confusing is that if I'm listening to channel 96.5 on the FM band, everything on this station must be somehow related in a way that everything else is not.

When we do a double slit experiment, the entanglement passes from all possible electron states to a specific electron and the detector, and then it bangs around the detector as a whole as atoms interact or what have you. Point being: the baton of entanglement is passed through specific interactions, is it not? Two particles collide and now we don't know which one has more of the energy. Entanglement doesn't just get broadcast to every atom in our light cone once the electron hits the detector, right?
So if I’m correct that entanglement disperses through chains of interaction, then at some level it seems like we’re saying every element of matter/energy touched by this chain has to obtain or activate some underlying property that unifies them on the one hand, and differentiates them from all the other chains going on out there, right? I just don’t see how such a world practically works, or even is contained in the wave equation if there are no variable properties that are shared to unify all the matter and energy contained in a particular branch. Liked by 2 people 15. Michael, From what I’ve read, entanglement is a complex topic. It can exist on various properties (like spin) while not on others. And there can be different degrees of it. One of the sources that helped me think about it was this post. But at a fundamental level, I generally take entanglement to be correlation, which makes sense when you think about how correlations form and that they can exist to greater or lesser degrees. Of course, under collapse interpretations, it must be something stronger than that. And even under Everett, it feels like that isn’t sufficient. This feels particularly true when we’re talking about a quantum circuit in a superposition of quadrillions of composite states, much less of a whole environment that, under Everett, is also in a superposition of some unfathomable number of composite states. The feeling that there must be something else, some hidden variables to keep everything straight, is very strong. But there’s a good chance our intuitions here are simply not reliable. My understanding is that entanglement, under normal conditions, is constantly being “broadcast”. Remember that this is often described as information about the quantum system leaking into the environment. But what we’re really saying is that the system in question is having causal effects on the environment, while the environment is also having causal effects on it. 
A lot of the effort involved in keeping quantum circuits coherent involves inhibiting those causal interactions with the environment as much as possible until the desired result is ready. When it is ready, it is then allowed to causally cascade into the environment (i.e. be measured). What this means is that, under Everett, there’s a background level of entanglement, which we don’t notice because it’s everywhere. When we discuss whether or not particles are entangled, we’re really discussing whether they’re more entangled than that background level. All of this makes sense when you remember that we’re talking about a universal wavefunction. But as I mentioned somewhere else on this thread, entanglement is so pervasive that there are physicists now thinking that space could be emergent from it. Conversely, in the context of multiple worlds, it might be that entanglement’s broad-ranging correlations depend on space itself branching. That’s something we haven’t discussed here. Everett requires gravity to eventually be brought into the quantum fold. Which might help with keeping all those reds separate from each other. Although when we remember that red only comes about through interactions, I’m not sure it’s strictly necessary. This reply feels somewhat rambling. Hopefully somewhere in it your concerns were addressed. Liked by 1 person 16. Thanks for additional info. This is a very interesting topic… Focusing on your paragraph that begins, “What this means is that, under Everett, there’s a background level of entanglement…,” there are definitely questions that arise. I’ve read a few books on entanglement–Amir Aczel’s and Louisa Gilder’s–but it’s been a while. I don’t recall either of them spending much of any time on widespread universal entanglement networks, as they were focused more on the “more entangled” situations of experiments.
In the experiments and in quantum computing, the entanglement is very fragile and has to be kept isolated, but I think what you’re saying is that in the most general case, entanglement is a pervasive condition of things. I want to say something like, “I can see that…,” but the truth is I’m pretty fuzzy on what that really means. Doesn’t mean I’m opposed to it. It’s just that the properties of such a reality would have to be explained a bit to me I think so I could understand it better. What I understand entanglement to mean is this: two or more particles are said to be entangled when a) they are in a superposition and haven’t interacted with the environment or otherwise been “measured” and b) when conservation of spin or momentum or something requires that their states, whenever they are actually determined, are mathematically related such that if I know the state of one I also know the state of the other. Perhaps as I ramble here in return, an important element of what you’re describing is noting that in an Everettian universe, the wave function never collapses, so the entanglement never really dissolves. Particles are never released from their obligations to one another, although they can trade those obligations with one another. Say particles A and B are entangled. Particle B could have a drink with Particle C, and they could agree to share somehow in the fulfillment of the obligation Particle B originally had with Particle A. This could go on and on and on. It’s kind of like those financial securities that got us in so much trouble in 2008. Pretty soon everyone has an obligation to everyone else and no one knows who owes who. But, it seems to me that all this trading of mutual obligations actually amounts to moments when the wave function branches–since it doesn’t collapse, it must branch–and the question in my mind remains: how does one speak meaningfully about the “classical” world we experience in this case? So what keeps coming up for me, Mike, is this.
There’s an intriguing “truth” I’ve encountered in a number of contexts: everything and nothing are indistinguishable. There is nothing interesting about either one. They are the alpha and the omega. Things only get interesting when one thing happens and other things do not, so I cannot help but think this notion of an ever-evolving wave function in which everything happens is only one part of what’s really happening, and that there are very likely selection processes at work. It’s just a suspicion. Otherwise, this notion of extended entanglement networks, which is a lot like an economy as Orzel noted, doesn’t quite explain how you could have trillions of such economies that are mutually exclusive. But it makes for interesting thought experiments and I’m inclined to run a few before saying anything more. Haha. Liked by 1 person 17. Thanks Michael. I hadn’t heard of those books. Interesting. I picked up a couple myself late last year, but was disappointed in them. They were fairly shallow pop-science books, and only lightly touched on Everett. One source that gave me a little insight about the relationship between entanglement and Everett was briefly discussed by Matt O’Dowd in this video. (Hopefully I got the timestamp right. He takes the option 1 approach from the post.) Sean Carroll occasionally veers into this on his podcast, particularly on the solo eps, although most of it is him interviewing others about their ideas. Thinking through scenarios is the way to approach this. Every time I think I’ve found a fatal flaw, it turns out to have a solution. As long as the raw quantum formalism continues to be validated in experiments, it’s hard to dismiss. Of course, that could change at any time with new evidence. Liked by 1 person 18. I skipped ahead to the eight minute mark that Wyrd pointed out. It was interesting and it was consistent with what I’ve heard on this topic before I think. 
Statements like this, at the 10:20 mark, are the ones I think require additional assumptions on top of the wave equation, “The evolution of the wave function is deterministic. That means all future branching of the wave function of your present, by which I mean the entanglement network that you currently belong to, is pre-defined. What isn’t defined is your own experience of that future branching. You will be the thread of conscious experience that travels one of those branches. You’ll also travel the others, but each version of you will only feel like you travel one of them. (emphasis added)” This gets quickly into the relationship of the wave function to conscious experience that I said earlier is tricky. More scenarios to ponder… 🙂 Liked by 1 person 19. FWIW, the first paragraph of the “Meaning of entanglement” section of the Wiki article for quantum entanglement does a fair job of describing it: An entangled system is defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, they are not individual particles but are an inseparable whole. In entanglement, one constituent cannot be fully described without considering the other(s). The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term. (In general, Wiki is a pretty good resource for QM. It’s one of the first places I check when I have a question about some aspect of it.) Liked by 1 person 20. As I said, this isn’t a researched answer. It’s possible bosons aren’t involved, but the relation with superposition still applies. Or not. I think whatever it is, it’s standard physics that we’re simply missing. 21. Okay, good, I would have been surprised. I’m pretty sure Fermi-Dirac statistics (which may be why I confused the names) are due entirely to fermions having 1/2 integer spin. 
It’s a fundamental part of how such particles behave, and it comes from their mathematics. The thing about all these interpretations of QM is that there’s a metaphysics aspect to them, and metaphysical positions are easy to believe in and hard to refute. Ask why billions believe in some form of God. The only available tool is logic, and its value depends on people accepting the premises involved. I have long suspected the commitment to MWI comes, in part, from seeing its viability on the quantum scale and, from the premise “everything is quantum,” assuming it scales up to the classical world. As such, I’ve also long suspected the key to refuting MWI lies in figuring out the Heisenberg cut. 22. It seems like fermions have to interact with each other in some manner, otherwise how does one “know” where to avoid? If a Heisenberg cut were ever found, it would falsify Everett. As would evidence for an objective collapse. I also understand Everett needs gravity to be quantized. 23. Ha, yeah, we all need gravity to be quantized! To answer your question (as best I can), the mathematics of 1/2 integer spin only allow for a fixed set of quantum states. Recall that particles act like waves, and it’s in the interaction of those matter waves that the particles “know” what they can, or cannot, do. The matter waves for fermions act differently than the matter waves for bosons. (In fact, because of the dynamics of the wave behavior, bosons like to clump together. I think that pop sci series you’ve mentioned got into that in one of the essays.) Tunneling, for instance, occurs because when a particle is near a barrier its wave-function extends beyond the barrier and, because the wave-function determines the probability of finding the particle in a given position, there is therefore some probability the particle is on the other side of the barrier. With particles, the wave description always obtains until the particle is somehow observed.
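The tunneling picture described above can be made quantitative with the standard textbook transmission coefficient for a rectangular barrier. This is a sketch of my own (not from the comment), in natural units where ħ = m = 1, valid for a particle energy below the barrier height:

```python
import math

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier of
    height V0 and width a, for a particle of energy E < V0.
    Natural units: hbar = m = 1."""
    kappa = math.sqrt(2.0 * (V0 - E))  # decay constant inside the barrier
    # Standard result for E < V0:
    # T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a)**2) / (4.0 * E * (V0 - E)))

# The wave function leaks through the barrier: transmission is nonzero
# but falls off roughly exponentially as the barrier widens.
for a in (0.5, 1.0, 2.0):
    print(f"width {a}: T = {transmission(E=1.0, V0=2.0, a=a):.4f}")
```

The classically forbidden region doesn't stop the wave-function, it only attenuates it, which is exactly the "some probability on the other side" point made above.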
(FWIW, my abiding belief is that the Heisenberg cut will be figured out. We’re currently vexed because it all takes place down on the Planck level which we can’t see.) 24. Thus sayeth Schrödinger: “He who hath an ear let him hear. My equation was from the beginning, it is the premise upon which all understanding of the natural world rests. The great Schrödinger speaks; my equation is a probabilities mathematical synthesis, an Immortal Law derived from a quantum wave that has never been demonstrated to exist. He who hath an ear, let him hear…” As mother calls; “Children, it’s time to quit playing in the sandbox of discourse; it’s nappy time…” Liked by 1 person 25. This snippet from last week’s Ars Technica article on quantum physics is worth noting: Quantum mechanics is not only written in math, but there are three completely different versions of the math in widespread use: the Schrödinger wave approach, the Dirac formulation, and Feynman’s path integrals. The Schrödinger approach emphasizes the waviness of particles and uses differential equations. The Dirac formulation focuses on quantum mechanics’ sensitivity to measurement order and uses the language of linear algebra. Feynman’s path integrals also have a wavy point of view and can be seen as an extension of the Huygens–Fresnel principle of wave propagation. This leads to some truly terrifying path integrals, covering all possible paths and possibilities. Feynman diagrams are a shorthand for keeping track of the approximations you need to make to actually solve things. While the mental models behind the three mathematical traditions are quite distinct, they always give the same answers. So why are there three equivalent versions of quantum mechanics? Depending on the problem you are worrying about, it turns out that it can be easier to get the answer using one of the three approaches. And physicists are all about using the path of least resistance. 
So it’s not really about the Schrodinger equation in and of itself, but about what it models. But yes, the Everett view is that we live in a quantum universe. If right, it’s far from the first time science would be shifting our view of reality out from under our feet. 4. Hmm. I think that what Sean Carroll is hinting at when he playfully says, “So the nice answer is, it’s up to you,” is that the universe isn’t really splitting in the way MWI is popularly presented. If I understand the first thing about MWI, it’s that the “world” doesn’t “split” when a “quantum measurement” occurs, but is constantly accessing multiple states. To put it in mathematical terms, there is no measurement in the Schrodinger equation. It simply describes a time-dependent system whose solution is a superposition of eigenstates. And your assertion that “And each split effectively divides up the energy of the world among the new worlds, which many find difficult to accept,” is something we’ve discussed before and I thought you’d moved on from that misconception. The energy is not divided up between worlds. There aren’t different universes. Liked by 2 people 1. The main thing to understand about the Everett interpretation is that the core theory is simply the evolution of the universal wave function, the raw quantum formalism applied to the whole universe, a deterministic theory with local dynamics. Everything else is us interpreting the interpretation. So yes, Carroll is making clear that that’s the core theory, the main reality. I don’t think it’s accurate to say there’s no measurement in Everett, it just doesn’t have the ontological role it does in Copenhagen. Any magnification of an individual quantum outcome to macroscopic scale is a measurement like event. So when a cosmic ray knocks an atom loose in DNA resulting in a mutation, that is a measurement type event, even though there’s no conscious observer. 
When I talk about splits and dividing up the energy, that is about the interpretation of the interpretation. You can interpret it in different ways. Whether there are different universes, or portions of the same universe which don’t have access to each other, is just semantics. It’s like when other galaxies were discovered in the 1920s, they were often referred to as “island universes”, before the term “universe” got reserved for all of space. So if it makes you feel better to think of it all as one universe, that’s fine. It was the approach that Everett himself seemed to prefer. I use the term “world”, in the sense that there are many classical worlds in the universe. Others prefer to go with explicit multiverse language. Or you can think of it as the one universe in a superposition of many and an ever growing number of quantum states. It’s all compatible ways of thinking about the core theory. 1. The problem is that measurement necessarily alters — “collapses” — the wave-function, so figuring out what “measurement” actually means under MWI is one of the many undefined things about it. Which is why I pointed to your three options that all start with: “On a quantum measurement…” Under MWI, what does it mean to “measure” something? If Alex does a spin experiment and branches into Alex-Up and Alex-Down, both versions have a different wave-function than prior to the experiment. 1. There is a phenomenological collapse. That’s true in every interpretation. But if you want to say there’s an ontological collapse, then that’s not accepting the most fundamental thing about Everett, and I wouldn’t expect the theory to make much sense from there. From what I’ve read, the best way to think of a measurement under Everett is the magnifying of the effects of a quantum event. As I mentioned to Steve, there are natural measurement events. Certainly the wavefunction evolves and changes, and measurement has effects (such as decoherence). 
In the Alex scenario, each branch of Alex is dealing with a different element of the superposition of the spin of the particle in question. 1. “In the Alex scenario, each branch of Alex is dealing with a different element of the superposition of the spin of the particle in question.” The problem is that the experiment “picks out” a specific part of that superposition, the up and down on the selected axis, and now they each have a wave-function in a suddenly altered state. They can demonstrate this by repeating the same measurement and with 100% probability getting the same result they got the first time. Their respective shares of the wave-function have superpositions of possible measurements on other axes. If they first measured Z-axis, both would expect “random” results on the X-axis, because the Z-axis measurement eliminates any knowledge of the state of the X-axis. Doing such a test would cause further branching, whereas repeating the Z-axis test would not. Say they measure the Z-axis, branch into Alex-Z.up and Alex-Z.down, and both now measure the X-axis. Now there is: Alex-… Z.up-X.up, Z.up-X.down, Z.down-X.up, and Z.down-X.down. For each of the four branches, Alex now has knowledge of X-axis spin and has eliminated knowledge of the Z-axis spin. Considering just one Alex, say the one who got Z.up-X.up, what do you think they would get if they measured the Z-axis again? Note that these experiments are set up so only the final result is actually measured. The various branches of Alex only ever see the final result (e.g. Z.up-X.down-Z.up), although they know the path the particle took through the system and, hence, the outcome of each step along the way. (Note also that the tests I’m describing are physically possible and have been done and verified.) 2. One of the physicists I read, possibly the Ask a Physicist guy, said that a definite spin result on a particular axis just is a superposition of the other perpendicular axes. 
(I know it’s more complicated for the diagonal ones.) So I think the sequence would happen as you describe. Every time the superposition gets measured there is branching. Note that we could actually think of it as every time the same axis gets remeasured with no other axes measured in between, there’s also branching, but the branches are all the same, so it’s not usually thought of as branching. On running the experiment so the results aren’t measured until the end, are you saying they could know the intermediate spin results? I think I’d want more details on how that works. Speculating a bit (and possibly getting it very wrong), I suppose you could keep all the particles involved (electrons, photons, etc) isolated so that the changes to spin happen. But all the particles that interact would end up entangled with each other, and when information from the system did finally spread into the environment, it would all become entangled with the environment, with every element in the composite superposition of the entangled particles having its own branch. 3. Yes, we treat a Z-axis measurement as an equal superposition of up-down on other orthogonal axes because knowledge of spin on orthogonal axes is mutually exclusive (very similar to position and momentum being mutually exclusive). Spin of particles can be measured by a Stern-Gerlach experiment. Essentially a magnetic field causes a deflection of the particle such that there is one path into the “spin box” and two paths out, one representing spin-up and one representing spin-down. Under MWI we’d say that the particle interacting with the magnetic field causes a superposition (branch) and the particle follows both exit paths. When the particle hits a detection screen, it’s “measured” and we only see it in one place. (Not unlike a beam-splitter experiment.) Detection, of course, prevents further tests of the particle’s spin because it’s been splatted against the screen.
But we can direct the output paths into a second stage pair of “spin boxes”. For instance, we could measure the Z-axis in both stages. If we do, we find the spin-up path from the first stage results in 100% spin-up particles and the spin-down path results in 100% spin-down. Or we can measure a different axis the second time. If we measure the X-axis, we see a 50/50 split from both second stages. In the first case, Z-Z, we see a [50%, 0; 0, 50%] distribution. [Z-up+Z-up, Z-up+Z-dn; Z-dn+Z-up, Z-dn+Z-dn] In the second, Z-X, the distribution we see is [25%, 25%; 25%, 25%] The final result tells us what path the particle had to take, so we know what its spin was at different stages of the experiment. Note that, until the detection screen at the end, the particle does not interact with other particles, only the magnetic field of the S-G device. So let me ask my question about the results of a Z-X-Z experiment again. The particle distribution after two stages is, as mentioned, [25%, 25%; 25%, 25%]. After the third stage, a second Z-axis test, there would be eight outcomes (branches). My question is: What is the final distribution? 4. BTW, with regard to a known eigenstate such as Z-up being a superposition, note that such a superposition is different if the known eigenstate is Z-down. For Z-up: |0⟩ = 1/sqrt(2) (|+⟩ + |-⟩) But for Z-down it’s: |1⟩ = 1/sqrt(2) (|+⟩ - |-⟩) Note the plus-minus difference between them. Both superpositions give a 50/50 probability for X-axis measurements. The difference means there are certain unitary operations that can change the state, and further such operations can return it to the original state. (Measuring the spin state would not be such an operation.)
In general: |ψ⟩ = α|0⟩ + β|1⟩ Where α and β are normalized coefficients that depend on the angle of the axis, and there is an implicit such superposition for every possible angle. The superpositions I showed you above involve an orthogonal axis where both coefficients are 1/sqrt(2). (Remember that we square the coefficient to get the probability of seeing that result, and that the sum of the squared coefficients must be 1.0.) 6. On the experiment, thanks. I had forgotten about that setup in the MIT lecture. On your question, not sure what you’re looking for. I agree there would be eight outcomes and so eight branches. 7. That there are eight outcomes is a given. My question involves the distribution of outcomes. In the two-stage versions, in the Z-Z version, the distribution is [50%, 0%; 0%, 50%]. In the Z-X version, it’s [25%, 25%; 25%, 25%]. I’m asking about the three-stage version comprised of Z-X-Z. [?, ?; ?, ?;; ?, ?; ?, ?] 8. Nope, exactly right. The point is that, after the first Z measurement the particle is in a known state, either |0⟩ or |1⟩, but in a superposition of measurements on other axes. In particular, the orthogonal X-axis is a 50/50 superposition so the second test on the X-axis has a “random” (uncorrelated) result. That second test gives us a definite state for the X-axis, often thought of as |+⟩ and |-⟩ in contrast to the Z-axis. This again puts the particle’s wave-function into a superposition of states for other axes, and again the orthogonal Z-axis is uncorrelated so there is again a 50/50 chance of measuring |0⟩ or |1⟩. For a single particle going through the apparatus, its wave-function changes as a result of each test. Importantly, the first state, either |0⟩ or |1⟩, is erased during the second test, which is why the third test has 50/50 odds. Liked by 1 person 2.
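The Z-X-Z outcome discussed above can be checked with a toy Monte Carlo. This sketch is my own, not anything from the thread: it encodes only the textbook rules that repeating a measurement on the same axis repeats the result, while measuring a perpendicular axis gives an uncorrelated 50/50 outcome and erases the previous axis's value.

```python
import random

def measure(state, axis):
    """Measure spin along `axis`. If the particle already has a definite
    value on that axis, the result repeats; measuring an orthogonal axis
    gives a 50/50 outcome and erases the old value (the textbook rule
    for spin-1/2 on perpendicular axes)."""
    if state.get("axis") == axis:
        return state["value"]            # repeated measurement: same result
    value = random.choice(("up", "dn"))  # orthogonal axis: uncorrelated
    state["axis"], state["value"] = axis, value
    return value

random.seed(0)
counts = {}
N = 100_000
for _ in range(N):
    state = {}
    outcome = tuple(measure(state, ax) for ax in ("Z", "X", "Z"))
    counts[outcome] = counts.get(outcome, 0) + 1

# Z-X-Z: the middle X measurement erases the first Z result, so all
# eight histories appear, each with probability ~1/8 (12.5%).
for outcome, n in sorted(counts.items()):
    print(outcome, round(n / N, 3))
```

All eight histories come out at roughly 12.5%, which is the answer implied in the exchange: the Z-X distribution [25%, 25%; 25%, 25%] is simply halved again by the final uncorrelated Z test.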
I like this 4th option, “there aren’t different universes.” But to me it’s just one option; each option is useful for understanding certain aspects and each poses a danger of misleading when taken overly literally. This 4th option is good for correcting some of those dangers of the other three. Liked by 1 person 5. And when the final curtain falls, materialists sit around scratching their asses wondering why idealists think that materialism is such a screwed up metaphysical position!!??!!☹️ Party on Sean Carroll….. 1. Same difference right Mike? MWI is materialism’s archetype of idealism’s M@L. Sean Carroll professes to be a physicist; one would think that Sean would use his high profile celebrity status as an academic in the profession of adult day care for more productive means other than promoting himself so he can write and market more ridiculous books. 1. M@L is Mind at Large; an imbecile god who has Dissociative Identity Disorder (DID) and splits off into multiple personalities…. Sound familiar? M@L, MWI……. pick your ridiculous construct, only you can choose. 2. Sounds like Kastrup’s theory. We’re all one big mind with multiple personalities. Kastrup is pretty vehement in his opposition to the Everett interpretation. I can understand why, since it undercuts the quantum physics justifications for his philosophy. 6. Is this a case of perspective? Outside looking in vs being inside, living it? And then the thought (and it’s only a thought) of instantaneity? Some human transaction or statement that theoretically impacts the subject instantly. “I’m king of the world!” would travel at the speed of thought, instantly. And it’s due to perspective how such a declaration is evaluated. It all sounds like fun mind-games to me. (Until you unscroll the deed to The People’s land and tell them to leave, ‘cuz you now own it.) Liked by 1 person 1. It definitely is a case of perspective.
You also just reminded me about this from one of Terry Pratchett’s stories: Liked by 2 people 1. Now that you mention it, I do believe I’ve used it fairly recently. It’s one of the many, many bits I love about Pratchett — his twisted use of physics of which he seems to have a very good grasp. Liked by 1 person 7. FWIW: I was looking for a good explanation of global versus relative phase, and I found the following, which nails it. It’s inescapably mathematical, but you said you were doing okay with the math. We can define a two-state quantum system like this: |ψ⟩ = r_1 e^(iθ_1)|0⟩ + r_2 e^(iθ_2)|1⟩ Where r_i are normalized real-valued constants and θ_i is the phase. Then we can have: |ψ⟩ = e^(iθ_1) (r_1|0⟩ + r_2 e^(iθ_2) e^(-iθ_1)|1⟩) Doing the math: e^(iθ_2) e^(-iθ_1) = e^(i(θ_2-θ_1)) And then: |ψ⟩ = e^(iθ_1) (r_1|0⟩ + r_2 e^(i(θ_2-θ_1))|1⟩) As I showed you before, that leading term (the global phase) isn’t something we can detect, but the relative phase, θ_2-θ_1, is significant and accounts for interference. Mathematically it doesn’t get more clear than that. Intuitionally is another matter… 🙂 8. Much of the distinction seems irrelevant. Anything changing in our galaxy, for example, is irrelevant to things happening in other galaxies. So, whether the result of the change spreads instantaneously or at the speed of light matters little. The amount of change or repercussion of a change fades with an inverse square law, no? So, this “transmission” of the world split is a local affair. Basically what would happen if the effect of a split here on Earth weren’t noticed in a galaxy 100,000 light years away for 100,000 years? I argue, nada. Liked by 1 person 1. It’s definitely true that under all the options, the dynamics are always local. An analogy might be if we decided to change the name of the Andromeda Galaxy to Ralph’s Galaxy. In our mind, the change would be instantaneous. But if we sent a signal to that galaxy telling any inhabitants what we’d decided, it wouldn’t have any causal effects for at least 2.5 million years. So the various options could be seen as how we decide to account for the name change.
Whatever we decide, it’s irrelevant to the physics. 9. I don’t know why I’ve never thought about this, but I just realized there’s a conundrum in MWI regarding “particles” — observing a particle requires collapsing the aspect of the wave-function that describes the position of the “particle.” In the two-slit experiment, for instance, the (unobserved) particle in flight is described by its momentum (its energy) which means its position is unknown. Until it hits something, and then its position is known. Even if we assume branching, each branch sees a “particle.” But that “collapses” the wave-function, so how can there ever be point-like interactions (“particles”) unless MWI does have wave-function collapse? Even positing a universal wave-function comprised only of interactions still seems to require abrupt changes to the state vector. Liked by 1 person 1. So tell me if this is crazy or not, but is this another way of expressing what I see as a fundamental question of QM in all its forms, and that is: how does the wave equation–which regardless of interpretation clearly has some part of the picture pretty well-nailed–relate to the reality we experience? And none of the QM theories can explain this without some assumptions that are in addition to the fundamental mathematical theory, unless I’m mistaken. Thinking about Newton’s equations of motion is (perhaps?) helpful as a view of a theory where this is not an issue. When we define “x” as the distance of a flying cannonball from the cannon, there are really no additional steps required to relate the math to the world we experience. If we use the equations to show that the cannonball is “x” = 73.5 meters from the cannon when “t” (time) = 0.9 seconds of flight, we know exactly what that means. There really aren’t additional assumptions required, just our definition of “x.” In QM, we have the wave equation, but it doesn’t describe a single outcome like Newton’s equations of motion do. 
So the rub in all QM interpretations is that we only see one thing, and the math predicts many things, no? And I think your point is related: we don’t see waves, whenever we measure something what we see are discrete quanta, or particles. So there are a number of ways it seems challenging to relate the fundamental mathematics to what is actually observed. 1. You’re not crazy. QM is the only branch of science I know that requires interpretation, even though it has very precise and extremely well-tested mathematics. I suspect that speaks to our ignorance of it. Its complete lack of compatibility with GR is another indicator we’re missing a big part of the picture. Comparisons with Newton’s F=ma are quite apt, and, as you say, seem complete at the classical level. And, also as you say, classical calculations predict single future results — the cannonball will strike here with this much force. The wave equation says, well, if you decide to look for a free particle here, there’s this probability of seeing it there, but that much probability of seeing it there, if you look there. And because the wave equation implicitly includes all possible locations (in the universe) there is some (vanishingly small) probability of finding it a zillion miles away. Liked by 1 person 2. I think it’s worse than that. We see interference effects, or what we infer to be interference effects, and from that infer waves. But we also never see a particle. Ever. We infer their existence as well through instruments we hope work according to our theories. Niels Bohr made the point that the quantum realm is inaccessible. Our data comes from the macroscopic effects of our interactions with it. But really, this just calls attention to something that always exists, because our senses work by inferring things in the world as well. We just feel like it’s more concrete at the classical level. There may be fewer levels of inference at classical scales, but all observation is inescapably theory laden.
Liked by 1 person 1. “But we also never see a particle.” They’re too small to be seen by any instrument, but devices such as cathode ray tubes give us the same inference about, at least, point-like interactions, that interference gives us about waves. Einstein’s Nobel was another strong inference about the existence of particle-like behavior. Liked by 1 person 1. Schrodinger was inspired by de Broglie’s discovery. He intended his equation to model how the waves worked. But from what I’ve read, he couldn’t complete it until spin was discovered. (The Copenhagen camp played down the physicality of the waves. Schrodinger never agreed with that move. Obviously the Everettians agreed with him.)
Wigner's theorem

Wigner’s theorem was formulated and demonstrated for the first time by Eugene Paul Wigner in Gruppentheorie und ihre Anwendung auf die Quantenmechanik der Atomspektren(1). It states that for each symmetry transformation on a Hilbert space there exists a unitary or anti-unitary operator, uniquely determined up to a phase factor. By symmetry transformation, we mean a transformation that preserves the characteristics of a given physical system. A symmetry transformation may also involve a change of reference system. Invariants play a key role in physics, being the quantities that are unchanged in any reference system. With the advent of quantum physics, their importance increased, particularly in the formulation of a relativistic quantum field theory. One of the most important tools in the study of invariants is Wigner’s theorem, an instrument of fundamental importance for all the development of quantum theory. In particular, Wigner was interested in determining the properties of transformations that preserve the transition probability between two different quantum states. Given $\phi$, the wave function detected by the first observer, and $\bar{\phi}$, the wave function detected by the second observer, Wigner assumed that the equality \[|\langle \psi | \phi \rangle| = |\langle \bar \psi | \bar \phi \rangle|\] must be valid for all $\psi$ and $\phi$. In the end, we find that the operator $\operatorname{O}_{R}$, such that $\bar{\phi} = \operatorname{O}_{R} \phi$, must be either unitary and linear, or anti-unitary and anti-linear; if we exclude time inversions, only the unitary and linear case remains. A consequence of this fact is that the two observers’ descriptions are equivalent. So the first observes $\phi$, the second $\bar{\phi}$, while the operator $\operatorname{H}$ for the first will be $\operatorname{O}_R \operatorname{H} \operatorname{O}_R^{-1}$ for the second.
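The invariance at the heart of the theorem can be illustrated numerically. This is a sketch of my own in plain Python (not from the original post): both a unitary map and an antiunitary one (complex conjugation followed by a unitary) preserve $|\langle \psi | \phi \rangle|$ for arbitrary states.

```python
import cmath
import math
import random

def inner(u, v):
    """Hermitian inner product <u|v> on C^n."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def apply(M, v):
    """Matrix-vector product, M given as a list of rows."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# A unitary on C^2: a real rotation combined with opposite phases
# on the diagonal (one can check U†U = I by hand).
theta, alpha = 0.7, 1.3
U = [[cmath.exp(1j * alpha) * math.cos(theta), -math.sin(theta)],
     [math.sin(theta), cmath.exp(-1j * alpha) * math.cos(theta)]]

random.seed(1)
def rand_state():
    """A random normalized state in C^2."""
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    n = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / n for z in v]

psi, phi = rand_state(), rand_state()
p0 = abs(inner(psi, phi))

# Unitary symmetry: |<U psi | U phi>| = |<psi|phi>|
p_unitary = abs(inner(apply(U, psi), apply(U, phi)))

# Antiunitary symmetry (the time-reversal-like case): complex-conjugate
# the components, then apply U. The inner product is conjugated, so its
# modulus is unchanged.
conj = lambda v: [z.conjugate() for z in v]
p_antiunitary = abs(inner(apply(U, conj(psi)), apply(U, conj(phi))))

print(p0, p_unitary, p_antiunitary)
```

All three printed values coincide, which is exactly the transition-probability invariance Wigner's hypothesis demands of a symmetry transformation.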
JMP #58, 4: path integrals and friends
The path integral formulation of quantum mechanics replaces the single, classical trajectory of a system with a sum over an infinity of possible quantum trajectories; this sum is computed with a functional integral. The most famous formulation is due to Richard Feynman. In a Euclidean spacetime we speak of the Euclidean path integral: Bernardo, R. C. S., & Esguerra, J. P. H. (2017). Euclidean path integral formalism in deformed space with minimum measurable length. Journal of Mathematical Physics, 58(4), 042103. doi:10.1063/1.4979797 We study time-evolution at the quantum level by developing the Euclidean path-integral approach for the general case where there exists a minimum measurable length. We derive an expression for the momentum-space propagator which turns out to be consistent with the recently developed $\beta$-canonical transformation. We also construct the propagator for maximal localization which corresponds to the amplitude that a state which is maximally localized at location $\xi'$ propagates to a state which is maximally localized at location $\xi''$ in a given time. Our expression for the momentum-space propagator and the propagator for maximal localization is valid for any form of time-independent Hamiltonian. The nonrelativistic free particle, particle in a linear potential, and the harmonic oscillator are discussed as examples. Other papers from JMP #58, 4 follow:
The quantum Zeno paradox
The standard axioms of quantum mechanics imply that in the limit of continuous observation a quantum system cannot evolve. (Andrew Hodges in Alan Turing: the logical and physical basis of computing - pdf) Initially known as Turing’s paradox, in honor of the mathematician who formulated it in the 1950s, the effect was subsequently identified as the quantum Zeno effect: an advanced version of the famous Zeno’s arrow paradox, whose philosophical conclusion is the negation of motion.
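The freezing effect can be sketched with a two-level system undergoing Rabi oscillation (a Python sketch with arbitrary frequency and time, not taken from the papers cited here): if the total time $t$ is divided among $N$ equally spaced projective measurements, the survival probability in the initial state is $p(N) = [\cos(\omega t / 2N)]^{2N}$, which tends to 1 as $N$ grows.

```python
import math

def survival(N, omega=1.0, t=math.pi):
    """Probability of still finding the system in its initial state after
    N equally spaced projective measurements during total time t."""
    return math.cos(omega * t / (2 * N)) ** (2 * N)

# At t = pi/omega a single final measurement finds the state completely transferred,
# but frequent intermediate measurements freeze the evolution:
print(survival(1))       # essentially 0
print(survival(100))     # ~0.976
print(survival(10_000))  # ~0.9998
```

For large $N$, $\ln p(N) \approx -\pi^2/(4N)$, so the survival probability approaches 1 as the measurement frequency increases; this is the Zeno "watched pot" in its simplest form.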
A first formulation and derivation of the effect is found in Does the lifetime of an unstable system depend on the measuring apparatus?(1), while George Sudarshan and Baidyanath Misra were the first to identify it as the quantum Zeno paradox. The two theoretical physicists established that an unstable particle will not decay as long as it is kept under continuous observation(2). However, they tried to have it both ways: There is a fundamental principle in quantum theory that denies the possibility of continuous observation.(2) On the other hand, Ghirardi, Omero, Weber and Rimini showed that: if the uncertainty relations are properly taken into account the arguments leading to the paradox are not valid.(3)
Adiabatic pendulum
In physics, quantities which change little under slow changes of a parameter are called adiabatic invariants(1). The pendulum is a classic example: under a slow change of its length, the ratio of its energy to its oscillation frequency is an adiabatic invariant. To get it, as well as any other adiabatic invariant, the person changing the parameters of the system must not see what state the system is in(1). 1. V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag (1989)
JMP 58, 3: math paradoxes in quantum mechanics
Facchi, P., & Ligabò, M. (2017). Large-time limit of the quantum Zeno effect Journal of Mathematical Physics, 58 (3) DOI: 10.1063/1.4978851 (arXiv) If very frequent periodic measurements ascertain whether a quantum system is still in its initial state, its evolution is hindered. This peculiar phenomenon is called quantum Zeno effect. We investigate the large-time limit of the survival probability as the total observation time scales as a power of the measurement frequency, $t \propto N^\alpha$. The limit survival probability exhibits a sudden jump from $1$ to $0$ at $\alpha = 1/2$, the threshold between the quantum Zeno effect and a diffusive behavior.
Moreover, we show that for $\alpha \geq 1$, the limit probability becomes sensitive to the spectral properties of the initial state and to the arithmetic properties of the measurement periods. Selvitella, A. (2017). The Simpson’s paradox in quantum mechanics Journal of Mathematical Physics, 58 (3) DOI: 10.1063/1.4977784 (sci-hub) In probability and statistics, the Simpson’s paradox is a paradox in which a trend that appears in different groups of data disappears when these groups are combined, while the reverse trend appears for the aggregate data. In this paper, we give some results about the occurrence of the Simpson’s paradox in quantum mechanics. In particular, we prove that the Simpson’s paradox occurs for solutions of the quantum harmonic oscillator both in the stationary case and in the non-stationary case. In the non-stationary case, the Simpson’s paradox is persistent: if it occurs at any time $t=\tilde t$, then it occurs at any time $t\not= \tilde t$. Moreover, we prove that the Simpson’s paradox is not an isolated phenomenon, namely, that, close to initial data for which it occurs, there are lots of initial data (an open neighborhood), for which it still occurs. Differently from the case of the quantum harmonic oscillator, we also prove that the paradox appears (asymptotically) in the context of the nonlinear Schrödinger equation but at intermittent times. Read also: Two quantum Simpson's paradoxes Meng, F., & Liu, C. (2017). Necessary and sufficient conditions for the existence of time-dependent global attractor and application Journal of Mathematical Physics, 58 (3) DOI: 10.1063/1.4978329 (sci-hub) In this paper, we are concerned with infinite dimensional dynamical systems in time-dependent space. First, we characterize some necessary and sufficient conditions for the existence of the time-dependent global attractor by using a measure of noncompactness. Then, we give a new method to verify the sufficient condition.
As a simple application, we prove the existence of the time-dependent global attractor for the damped equation in strong topological space. Cen, J., Correa, F., & Fring, A. (2017). Time-delay and reality conditions for complex solitons Journal of Mathematical Physics, 58 (3) DOI: 10.1063/1.4978864 (arXiv) We compute lateral displacements and time-delays for scattering processes of complex multi-soliton solutions of the Korteweg de-Vries equation. The resulting expressions are employed to explain the precise distinction between solutions obtained from different techniques, Hirota’s direct method and a superposition principle based on Bäcklund transformations. Moreover they explain the internal structures of degenerate compound multi-solitons previously constructed. Their individual one-soliton constituents are time-delayed when scattered amongst each other. We present generic formulae for these time-dependent displacements. By recalling Gardner’s transformation method for conserved charges, we argue that the structure of the asymptotic behaviour resulting from the integrability of the model together with its $PT$-symmetry ensures the reality of all of these charges, including in particular the mass, the momentum, and the energy. Wilming, H., Kastoryano, M., Werner, A., & Eisert, J. (2017). Emergence of spontaneous symmetry breaking in dissipative lattice systems Journal of Mathematical Physics, 58 (3) DOI: 10.1063/1.4978328 (arXiv) A cornerstone of the theory of phase transitions is the observation that many-body systems exhibiting a spontaneous symmetry breaking in the thermodynamic limit generally show extensive fluctuations of an order parameter in large but finite systems. In this work, we introduce the dynamical analog of such a theory. Specifically, we consider local dissipative dynamics preparing an equilibrium steady-state of quantum spins on a lattice exhibiting a discrete or continuous symmetry but with extensive fluctuations in a local order parameter. 
We show that for all such processes, there exist asymptotically stationary symmetry-breaking states, i.e., states that become stationary in the thermodynamic limit and give a finite value to the order parameter. We give results both for discrete and continuous symmetries and explicitly show how to construct the symmetry-breaking states. Our results show in a simple way that, in large systems, local dissipative dynamics satisfying detailed balance cannot uniquely and efficiently prepare states with extensive fluctuations with respect to local operators. We discuss the implications of our results for quantum simulators and dissipative state preparation. There's no two without three Read also: LIGO Picks Up on the Third Ring LIGO Scientific and Virgo Collaboration (2017). GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2 Physical Review Letters, 118 (22) DOI: 10.1103/PhysRevLett.118.221101
Numeric precision in Microsoft Excel
From Wikipedia, the free encyclopedia
As with other spreadsheets, Microsoft Excel works only to limited accuracy because it retains only a certain number of figures to describe numbers (it has limited precision). With some exceptions regarding erroneous values, infinities, and denormalized numbers, Excel calculates in double-precision floating-point format from the IEEE 754 specification[1] (besides numbers, Excel uses a few other data types[2]). Although Excel can display 30 decimal places, its precision for a specified number is confined to 15 significant figures, and calculations may have an accuracy that is even less, due to five issues: round-off,[3] truncation, binary storage, accumulation of the deviations of the operands in calculations, and, worst of all, cancellation when subtracting values of similar magnitude ('catastrophic cancellation').
Accuracy and binary storage
Excel maintains 15 figures in its numbers, but they are not always accurate: mathematically, the bottom line should be the same as the top line. In floating-point math, however, the step '1 + 1/9000' leads to a rounding up, because the first bit of the 14-bit tail '10111000110010' of the mantissa that falls off when 1 is added is a '1'. This rounding up is not undone when the 1 is subtracted again, since at that step there is no information about the origin of the values. Thus the re-subtraction of 1 leaves a mantissa ending in '100000000000000' instead of '010111000110010', representing the value 1.1111111111117289E-4, which Excel rounds to 15 significant digits: '1.11111111111173E-4'. Of course, mathematically 1 + x − 1 = x; floating-point math is sometimes a little different, and that is not to be blamed on Excel. The discrepancy indicates the error.
All errors are beyond the 15th significant digit of the intermediate 1 + x value, but they land in the high-value digits of the final result; that is the problematic effect of 'cancellation'. In the top figure, the fraction 1/9000 as computed by Excel is displayed. Although this number has a decimal representation that is an infinite string of ones, Excel displays only the leading 15 figures. In the second line, the number one is added to the fraction, and again Excel displays only 15 figures. In the third line, one is subtracted from the sum using Excel. Because the sum has only eleven 1s after the decimal, the true difference when ‘1’ is subtracted is three 0s followed by a string of eleven 1s. However, the difference reported by Excel is three 0s followed by a 15-digit string of thirteen 1s and two extra erroneous digits. Thus, the numbers Excel calculates with are not the numbers that it displays. Moreover, the error in Excel's answer is not simply round-off error; it is an effect of floating-point calculation called 'cancellation'. The inaccuracy in Excel calculations is more complicated than errors due to a precision of 15 significant figures. Excel's storage of numbers in binary format also affects its accuracy.[4] To illustrate, the lower figure tabulates the simple addition 1 + x − 1 for several values of x. All the values of x begin at the 15th decimal, so Excel must take them into account. Before calculating the sum 1 + x, Excel first approximates x as a binary number. If this binary version of x is a simple power of 2, the 15-digit decimal approximation to x is stored in the sum, and the top two examples of the figure indicate recovery of x without error. In the third example, x is a more complicated binary number, x = 1.110111⋯111 × 2−49 (15 bits altogether). Here the IEEE 754 double value resulting from the 15-bit figure is 3.330560653658221E-15, which is rounded
by Excel for the user interface to 15 digits, 3.33056065365822E-15, and then, when displayed with 30 decimal digits, gets one 'fake zero' added. Thus the 'binary' and 'decimal' values in the sample are identical only in display; the values associated with the cells are different (1.1101111111111100000000000000000000000000000000000000 × 2−49 vs. 1.1101111111111011111111111111111111111111111111111101 × 2−49). Other spreadsheets do similarly; handling the varying number of decimal digits which can be exactly stored in the 53-bit mantissa of a double (e.g. 16 digits between 1 and 8, but only 15 between 0.5 and 1 and between 8 and 10) is somewhat difficult, and is often solved suboptimally. In the fourth example, x is a decimal number not equivalent to a simple binary (although it agrees with the binary of the third example to the precision displayed). The decimal input is approximated by a binary, and then that decimal is used. These two middle examples in the figure show that some error is introduced. The last two examples illustrate what happens if x is a rather small number. In the second-from-last example, x = 1.110111⋯111 × 2−50, 15 bits altogether. The binary is replaced very crudely by a single power of 2 (in this example, 2−49) and its decimal equivalent is used. In the bottom example, a decimal identical with the binary above to the precision shown is nonetheless approximated differently from the binary, and is eliminated by truncation to 15 significant figures, making no contribution to 1 + x − 1, leading to x = 0.[5] For x's that are not simple powers of 2, a noticeable error in 1 + x − 1 can occur even when x is quite large. For example, if x = 1/1000, then 1 + x − 1 = 9.9999999999989 × 10−4, an error in the 13th significant figure. In this case, if Excel simply added and subtracted the decimal numbers, avoiding the conversion to binary and back again to decimal, no round-off error would occur and accuracy actually would be better.
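Because Excel and Python both compute in IEEE 754 doubles, the 1 + x − 1 behaviour described above can be reproduced outside Excel (a sketch; the comments describe standard double arithmetic, not Excel's display layer):

```python
x = 1.0 / 9000.0
y = 1.0 + x - 1.0       # evaluated left to right in doubles

print(f"{x:.17e}")      # the stored double approximation of 1/9000
print(f"{y:.17e}")      # differs from x from about the 13th significant digit on
print(abs(y - x) / x)   # relative error of roughly 1e-13: the hidden digits of 1 + x

# With a clean power of two nothing is lost: 1 + 2**-52 is exactly representable,
# so subtracting 1 recovers the tiny value exactly.
assert 1.0 + 2**-52 - 1.0 == 2**-52
assert y != x           # cancellation has changed the value
```

The same contrast appears in the worksheet examples above: powers of two survive the round trip through 1 + x − 1, while most other values pick up an error in their high digits.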
Excel has the option to "Set precision as displayed".[6] With this option, depending upon circumstance, accuracy may turn out to be better or worse, but you will know exactly what Excel is doing. (Note, however, that only the selected precision is retained; one cannot recover extra digits by reversing this option.) Some similar examples can be found at this link.[7] In short, a variety of accuracy behavior is introduced by the combination of representing a number with a limited number of binary digits, along with truncating numbers beyond the fifteenth significant figure.[8] Excel's treatment of numbers beyond 15 significant figures sometimes contributes better accuracy to the final few significant figures of a computation than working directly with only 15 significant figures, and sometimes not. For the reasoning behind the conversion to binary representation and back to decimal, and for more detail about accuracy in Excel and VBA, consult these links.[9]
1. The shortcomings in the '= 1 + x - 1' tasks are a combination of floating-point weaknesses and how Excel handles them, especially Excel's rounding. Excel does some rounding and/or 'snap to zero' for most of its results, on average chopping the last 3 bits of the IEEE double representation. This behavior can be switched off by setting the formula in parentheses: '= ( 1 + 2^-52 - 1 )'. You will see that even that small value survives. Smaller values will pass away, as there are only 53 bits to represent the value; for this case 1.0000000000 0000000000 0000000000 0000000000 0000000000 01, the first bit represents the '1' and the last the 2^-52.
2. It is not only clean powers of two that survive, but any combination of values built from bits which remain within the 53 bits once the decimal 1 is added. As most decimal values do not have a clean finite representation in binary, they will suffer from round-off and cancellation in tasks like the above. E.g. decimal 0.1 has the IEEE double representation 0 (1).1001100110011001100110011001100110011001100110011010 × 2^-4; added to 140737488355328.0 (which is 2^47) it will lose all of its bits except the first two. Thus from '= ( 140737488355328.0 + 0.1 - 140737488355328.0 )' it will come back as 0.09375, calculated with 64-bit doubles as well as in Excel with the parentheses around the formula. This effect can mostly be managed by meaningful rounding, which Excel does not apply; it is up to the user. Needless to say, other spreadsheets have similar problems: LibreOffice Calc uses a more aggressive rounding, while Gnumeric tries to keep precision and make both the precision and the lack of it visible for the user.
Examples where precision is no indicator of accuracy
Statistical functions
[Figure: Error in Excel 2007 calculation of standard deviation. All four columns have the same deviation of 0.5]
Accuracy in Excel-provided functions can be an issue. Micah Altman et al. provide this example:[10] The population standard deviation given by
σ = √( Σ(xᵢ − x̄)² / n )
is mathematically equivalent to
σ = √( n Σxᵢ² − (Σxᵢ)² ) / n.
However, the first form keeps better numerical accuracy for large values of x, because squares of differences between x and x̄ lead to less round-off than the differences between the much larger numbers Σx² and (Σx)². The built-in Excel function STDEVP(), however, uses the less accurate formulation because it is faster computationally.[11] Both the "compatibility" function STDEVP and the "consistency" function STDEV.P in Excel 2010 return the correct 0.5 population standard deviation for the given set of values. However, numerical inaccuracy can still be shown using this example by extending the existing figure to include 10^15, whereupon the erroneous standard deviation found by Excel 2010 will be zero.
Subtraction of Subtraction Results
Doing simple subtractions may lead to errors, as two cells may display the same numeric value while storing two separate values.
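Returning to the standard-deviation example: the two mathematically equivalent formulations behave very differently in IEEE doubles once the data carry a large common offset (a Python sketch; the data values here are illustrative choices, not the figure from the article):

```python
import math

def std_two_pass(xs):
    """Population standard deviation via sum of squared deviations (accurate form)."""
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / n)

def std_naive(xs):
    """Population standard deviation via n*Σx² − (Σx)², prone to cancellation."""
    n = len(xs)
    s = sum(xs)
    s2 = sum(x * x for x in xs)
    # the difference can even come out negative due to rounding, hence the guard
    return math.sqrt(max(0.0, n * s2 - s * s)) / n

data = [1e8 + k for k in (1.0, 2.0, 3.0, 4.0)]  # spread of 1..4 on a large offset
exact = math.sqrt(1.25)                          # true population std dev

print(std_two_pass(data))  # 1.118033988749895, correct
print(std_naive(data))     # visibly wrong: the squares have swallowed the spread
```

The squares Σx² are of order 10¹⁶, so the information about the spread of the data sits below the resolution of a 53-bit mantissa; subtracting the two huge numbers then cancels almost everything meaningful, exactly the failure mode described for STDEVP above.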
An example of such a discrepancy occurs in a sheet where several cells are set to numeric values and other cells contain formulas referencing them: two of the cells display the same value, yet a formula subtracting one from the other does not display the expected 0, but a small nonzero residue instead, because the stored values differ below the displayed precision. The problem is not limited to subtractions. Try '=1+1.405*2^-48' in one cell: Excel rounds the display to 1.00000000000000000000. Try '=0.9+225179982494413*2^-51' in another: same display (in the range above 1 / below 1 the rounding is different, which affects most decimal or binary magnitude changes). This different rounding for value and display violates one of the elementary requirements in Goldberg's 'What Every Computer Scientist Should Know About Floating-Point Arithmetic' (more or less 'the holy book' of floating-point math), which states: 'it is important to make sure that its use is transparent to the user. For example, on a calculator, if the internal representation of a displayed value is not rounded to the same precision as the display, then the result of further operations will depend on the hidden digits and appear unpredictable to the user'. (The problem is not limited to Excel; e.g. LibreOffice Calc acts similarly.)
Round-off error
User computations must be carefully organized to ensure round-off error does not become an issue. An example occurs in solving the quadratic equation
ax² + bx + c = 0.
The solutions (the roots) of this equation are exactly determined by the quadratic formula:
x = ( −b ± √(b² − 4ac) ) / (2a).
When one of these roots is very large compared to the other, that is, when the square root is close to the value b, the evaluation of the root corresponding to subtraction of the two terms becomes very inaccurate due to round-off (cancellation). The round-off error can be estimated using the Taylor series for the square root:[12]
√(b² − 4ac) = b √(1 − 4ac/b²) ≈ b (1 − 2ac/b²),
indicating that, as b becomes larger, the first surviving term, say ε = 2ac/b, becomes smaller and smaller.
The numbers for b and the square root become nearly the same, and the difference b − √(b² − 4ac) ≈ ε becomes small. Under these circumstances, all the significant figures go into expressing b: for example, if the precision is 15 figures, and b and the square root agree to 15 figures, the difference will be computed as zero instead of ε. Better accuracy can be obtained from a different approach, outlined below.[13] If we denote the two roots by r1 and r2, the quadratic equation can be written
a (x − r1)(x − r2) = ax² − a(r1 + r2)x + a r1 r2 = 0.
When the root r1 >> r2, the sum (r1 + r2) ≈ r1, and comparison of the two forms shows approximately
r1 ≈ −b/a,
while the product a r1 r2 = c gives the approximate form
r2 = c/(a r1) ≈ −c/b.
These results are not subject to round-off error, but they are not accurate unless b² is large compared to ac.
[Figure: Excel graph of the difference between two evaluations of the smallest root of a quadratic: direct evaluation using the quadratic formula (accurate at smaller b) and an approximation for widely spaced roots (accurate for larger b). The difference reaches a minimum at the large dots, and round-off causes squiggles in the curves beyond this minimum.]
The bottom line is that in doing this calculation using Excel, as the roots become farther apart in value, the method of calculation will have to switch from direct evaluation of the quadratic formula to some other method so as to limit round-off error. The point to switch methods varies according to the size of coefficients a and b. In the figure, Excel is used to find the smallest root of the quadratic equation x² + bx + c = 0 for c = 4 and c = 4 × 10^5. The difference between direct evaluation using the quadratic formula and the approximation described above for widely spaced roots is plotted vs. b. Initially the difference between the methods declines because the widely spaced root method becomes more accurate at larger b-values.
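The contrast between the two evaluations can be sketched directly in IEEE doubles (Python; the coefficients are illustrative, chosen to match the figure's c = 4 case): for b² >> ac the direct formula loses the small root to cancellation, while computing the large root first and using r1·r2 = c/a stays accurate.

```python
import math

def small_root_direct(a, b, c):
    """Small-magnitude root via the textbook quadratic formula (assumes b > 0)."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a)      # subtracts two nearly equal numbers

def small_root_stable(a, b, c):
    """Same root via the product of roots r1*r2 = c/a (assumes b > 0)."""
    d = math.sqrt(b * b - 4 * a * c)
    r1 = (-b - d) / (2 * a)        # large root: an addition, no cancellation
    return c / (a * r1)

a, b, c = 1.0, 1e8, 4.0            # widely spaced roots
print(small_root_direct(a, b, c))  # inaccurate: high digits lost to cancellation
print(small_root_stable(a, b, c))  # ≈ -4e-8, accurate
```

With b = 10⁸, the direct evaluation is already off by a few percent, while the product-of-roots form agrees with the approximation r2 ≈ −c/b to full double precision; this is the "switch of method" the figure discussion calls for.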
However, beyond some b-value the difference increases, because the quadratic formula (good for smaller b-values) becomes worse due to round-off, while the widely spaced root method (good for large b-values) continues to improve. The point to switch methods is indicated by the large dots, and is larger for larger c-values. At large b-values, the upward-sloping curve is Excel's round-off error in the quadratic formula, whose erratic behavior causes the curves to squiggle. A different field where accuracy is an issue is the numerical computation of integrals and the solution of differential equations. Examples are Simpson's rule, the Runge–Kutta method, and the Numerov algorithm for the Schrödinger equation.[14] Using Visual Basic for Applications, any of these methods can be implemented in Excel. Numerical methods use a grid where functions are evaluated. The functions may be interpolated between grid points or extrapolated to locate adjacent grid points. These formulas involve comparisons of adjacent values. If the grid is spaced very finely, round-off error will occur, and the less the precision used, the worse the round-off error; if spaced widely, accuracy will suffer. If the numerical procedure is thought of as a feedback system, this calculation noise may be viewed as a signal applied to the system, which will lead to instability unless the system is carefully designed.[15]
Accuracy within VBA
Although Excel nominally works with 8-byte numbers by default, VBA has a variety of data types. The Double data type is 8 bytes, the Integer data type is 2 bytes, and the general-purpose 16-byte Variant data type can be converted to a 12-byte Decimal data type using the VBA conversion function CDec.[16] Choice of variable types in a VBA calculation involves consideration of storage requirements, accuracy and speed. 1. ^ "Floating-point arithmetic may give inaccurate results in Excel". Revision 8.2; article ID: 78113. Microsoft support. June 30, 2010.
Retrieved 2010-07-02. 2. ^ Steve Dalton (2007). "Table 2.3: Worksheet data types and limits". Financial Applications Using Excel Add-in Development in C/C++ (2nd ed.). Wiley. pp. 13–14. ISBN 0-470-02797-5. 3. ^ Round-off is the loss of accuracy when numbers that differ by small amounts are subtracted. Because each number has only fifteen significant digits, their difference is inaccurate when there aren't enough significant digits to express the difference. 4. ^ Robert de Levie (2004). "Algorithmic accuracy". Advanced Excel for scientific data analysis. Oxford University Press. p. 44. ISBN 0-19-515275-1. 5. ^ To input a number as binary, the number is submitted as a string of powers of 2: 2^(−50)*(2^0 + 2^−1 + ⋯). To input a number as decimal, the decimal number is typed in directly. 6. ^ This option is found on the "Excel options/Advanced" tab. See How to correct rounding errors: Method 2 7. ^ Excel addition strangeness 8. ^ Robert de Levie (2004). cited work. pp. 45–46. ISBN 0-19-515275-1. 9. ^ Micah Altman; Jeff Gill; Michael McDonald (2004). "§2.1.1 Revealing example: Computing the coefficient standard deviation". Numerical issues in statistical computing for the social scientist. Wiley-IEEE. p. 12. ISBN 0-471-23633-0. 10. ^ Robert de Levie (2004). Advanced Excel for scientific data analysis. Oxford University Press. pp. 45–46. ISBN 0-19-515275-1. 11. ^ Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. "1.112. Power series". In Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (8 ed.). Academic Press, Inc. p. 25. ISBN 0-12-384933-0. LCCN 2014010276. 12. ^ This approximate method is used often in the design of feedback amplifiers, where the two roots represent the response times of the system. See the article on step response. 13. 
^ Anders Blom Computer algorithms for solving the Schrödinger and Poisson equations, Department of Physics, Lund University, 2002. 14. ^ R. W. Hamming (1986). Numerical Methods for Scientists and Engineers (2nd ed.). Courier Dover Publications. ISBN 0-486-65241-6. This book discusses round-off, truncation and stability extensively. For example, see Chapter 21: Indefinite integrals – feedback, page 357. 15. ^ John Walkenbach (2010). "Defining data types". Excel 2010 Power Programming with VBA. Wiley. pp. 198 ff and Table 8-1. ISBN 0-470-47535-8.
International Tables for Crystallography (2010). Vol. B: Reciprocal space, edited by U. Shmueli, ch. 5.2, p. 647
5.2.2. The defining equations
No many-body effects have yet been detected in the diffraction of fast electrons, but the velocities lie well within the relativistic region. The one-body Dirac equation would therefore appear to be the appropriate starting point. Fujiwara (1962), using the scattering matrix, carried through the analysis for forward scattering, and found that, to a very good approximation, the effects of spin are negligible, and that the solution is the same as that obtained from the Schrödinger equation provided that the relativistic values for wavelength and mass are used. In effect a Klein–Gordon equation (Messiah, 1965) can be used in electron diffraction (Buxton, 1978) in the form [\nabla^{2} \psi_{\rm b} + {8\pi^{2} m|e|\varphi\over h^{2}} \psi_{\rm b} + {8 \pi^{2} m_{0}|e|W\over h^{2}} \left(1 + {|e|W\over 2m_{0}c^{2}}\right) \psi_{\rm b} = 0.] Here, W is the accelerating voltage and [\varphi], the potential in the crystal, is defined as being positive. The relativistic values for mass and wavelength are given by [m = m_{0} (1 - v^{2} / c^{2})^{-1/2}], and, taking e now to represent the modulus of the electronic charge, [|e|], [\lambda = h[2m_{0}eW (1 + eW / 2m_{0}c^{2})]^{-1/2},] and the wavefunction is labelled with the subscript b in order to indicate that it still includes back scattering, of central importance to LEED (low-energy electron diffraction). In more compact notation, [[\nabla^{2} + k^{2} (1 + \varphi / W)] \psi_{\rm b} = (\nabla^{2} + k^{2} + 2 k\sigma \varphi) \psi_{\rm b} = 0.] Here [k = |{\bf k}|] is the scalar wavenumber of magnitude [2\pi / \lambda], and the interaction constant [\sigma = 2\pi me\lambda / h^{2}]. This constant is approximately [10^{-3}] for 100 kV electrons.
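The quoted wavelength formula and the value of the interaction constant can be checked numerically (a Python sketch using CODATA constants; the conversion of σ to V⁻¹·Å⁻¹ is an assumption about the convention in which σ ≈ 10⁻³ is stated):

```python
import math

# CODATA constants (SI units)
h  = 6.62607015e-34    # Planck constant, J*s
m0 = 9.1093837015e-31  # electron rest mass, kg
e  = 1.602176634e-19   # elementary charge, C
c  = 2.99792458e8      # speed of light, m/s

def wavelength(W):
    """lambda = h / sqrt(2*m0*e*W*(1 + e*W/(2*m0*c^2))), for voltage W in volts."""
    return h / math.sqrt(2 * m0 * e * W * (1 + e * W / (2 * m0 * c**2)))

def interaction_constant(W):
    """sigma = 2*pi*m*e*lambda/h^2 with relativistic mass m = m0*(1 + e*W/(m0*c^2))."""
    m = m0 * (1 + e * W / (m0 * c**2))
    return 2 * math.pi * m * e * wavelength(W) / h**2  # in 1/(V*m)

W = 100e3                                   # 100 kV accelerating voltage
lam = wavelength(W)
sigma = interaction_constant(W) * 1e-10     # convert 1/(V*m) to 1/(V*Angstrom)
print(lam)    # ≈ 3.70e-12 m, i.e. about 3.7 pm
print(sigma)  # ≈ 9e-4, consistent with "approximately 10^-3"
```

The relativistic correction factor (1 + eW/2m₀c²) is about 1.098 at 100 kV, so ignoring it would already shift λ by several percent; the σ ≈ 10⁻³ figure in the text then follows directly.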
For fast electrons, [\varphi / W] is a slowly varying function on the scale of the wavelength, and is small compared with unity. The scattering will therefore be peaked about the direction defined by the incident beam, and further simplification is possible, leading to a forward-scattering solution appropriate to HEED (high-energy electron diffraction).
Buxton, B. (1978). Graduate Lecture-Course Notes: Dynamical Diffraction Theory. Cambridge University, England.
Fujiwara, K. (1962). Relativistic dynamical theory of electron diffraction. J. Phys. Soc. Jpn, 17, Suppl. B11, 118–123.
Messiah, A. (1965). Quantum Mechanics, Vol. II, pp. 884–888. Amsterdam: North-Holland.
D is for ... Dice Albert Einstein decided quantum theory couldn’t be right because its reliance on probability means everything is a result of chance. “God doesn’t play dice with the world,” he said. T is for ... Tunnelling This happens when quantum objects “borrow” energy in order to bypass an obstacle such as a gap in an electrical circuit. It is possible thanks to the uncertainty principle, and enables quantum particles to do things other particles can’t. S is for ... Schrödinger Equation This is the central equation of quantum theory, and describes how any quantum system will behave, and how its observable qualities are likely to manifest in an experiment. S is for ... Superposition Quantum objects can exist in two or more states at once: an electron in superposition, for example, can simultaneously move clockwise and anticlockwise around a ring-shaped conductor. J is for ... Josephson Junction This is a narrow constriction in a ring of superconductor. Current can only move around the ring because of quantum laws; the apparatus provides a neat way to investigate the properties of quantum mechanics. P is for ... Planck's Constant This is one of the universal constants of nature, and relates the energy of a single quantum of radiation to its frequency. It is central to quantum theory and appears in many important formulae, including the Schrödinger Equation. F is for ... Free Will Ideas at the heart of quantum theory, to do with randomness and the character of the molecules that make up the physical matter of our brains, lead some researchers to suggest humans can’t have free will. W is for ... 
Wavefunction The mathematics of quantum theory associates each quantum object with a wavefunction that appears in the Schrödinger equation and gives the probability of finding it in any given state. H is for ... Hidden Variables One school of thought says that the strangeness of quantum theory can be put down to a lack of information; if we could find the “hidden variables” the mysteries would all go away. K is for ... Kaon These are particles that carry a quantum property called strangeness. Some fundamental particles have the property known as charm! M is for ... Multiverse Our most successful theories of cosmology suggest that our universe is one of many universes that bubble off from one another. It’s not clear whether it will ever be possible to detect these other universes. D is for ... Decoherence Unless it is carefully isolated, a quantum system will “leak” information into its surroundings. This can destroy delicate states such as superposition and entanglement. L is for ... Light We used to believe light was a wave, then we discovered it had the properties of a particle that we call a photon. Now we know it, like all elementary quantum objects, is both a wave and a particle! T is for ... Teleportation Quantum tricks allow a particle to be transported from one location to another without passing through the intervening space – or that’s how it appears. The reality is that the process is more like faxing, where the information held by one particle is written onto a distant particle. M is for ... Many Worlds Theory Some researchers think the best way to explain the strange characteristics of the quantum world is to allow that each quantum event creates a new universe. I is for ... Information Many researchers working in quantum theory believe that information is the most fundamental building block of reality. G is for ... Gluon These elementary particles hold together the quarks that lie at the heart of matter. U is for ... 
Uncertainty Principle One of the most famous ideas in science, this declares that it is impossible to know all the physical attributes of a quantum particle or system simultaneously. E is for ... Entanglement When two quantum objects interact, the information they contain becomes shared. This can result in a kind of link between them, where an action performed on one will affect the outcome of an action performed on the other. This “entanglement” applies even if the two particles are half a universe apart. Y is for ... Young's Double Slit Experiment In 1801, Thomas Young proved light was a wave, and overthrew Newton’s idea that light was a “corpuscle”. I is for ... Interferometer Some of the strangest characteristics of quantum theory can be demonstrated by firing a photon into an interferometer: the device’s output is a pattern that can only be explained by the photon passing simultaneously through two widely-separated slits. A is for ... Act of observation Some people believe this changes everything in the quantum world, even bringing things into existence. B is for ... Bell's Theorem In 1964, John Bell came up with a way of testing whether quantum theory was a true reflection of reality. In 1982, the results came in – and the world has never been the same since! L is for ... Large Hadron Collider (LHC) At CERN in Geneva, Switzerland, this machine is smashing apart particles in order to discover their constituent parts and the quantum laws that govern their behaviour. V is for ... Virtual particles Quantum theory’s uncertainty principle says that since not even empty space can have zero energy, the universe is fizzing with particle-antiparticle pairs that pop in and out of existence. These “virtual” particles are the source of Hawking radiation. C is for ... Computing The rules of the quantum world mean that we can process information much faster than is possible using the computers we use now. P is for ... 
Probability Quantum mechanics is a probabilistic theory: it does not give definite answers, but only the probability that an experiment will come up with a particular answer. This was the source of Einstein’s objection that God “does not play dice” with the universe. R is for ... Randomness Unpredictability lies at the heart of quantum mechanics. It bothered Einstein, but it also bothers the Dalai Lama. G is for ... Gravity Our best theory of gravity no longer belongs to Isaac Newton. It’s Einstein’s General Theory of Relativity. There’s just one problem: it is incompatible with quantum theory. The effort to tie the two together provides the greatest challenge to physics in the 21st century. Q is for ... Qubit One quantum bit of information is known as a qubit (pronounced Q-bit). The ability of quantum particles to exist in many different states at once means a single quantum object can represent multiple qubits at once, opening up the possibility of extremely fast information processing. O is for ... Objective reality Niels Bohr, one of the founding fathers of quantum physics, said there is no such thing as objective reality. All we can talk about, he said, is the results of measurements we make. A is for ... Alice and Bob In quantum experiments, these are the names traditionally given to the people transmitting and receiving information. In quantum cryptography, an eavesdropper called Eve tries to intercept the information. B is for ... Bose-Einstein Condensate (BEC) At extremely low temperatures, quantum rules mean that atoms can come together and behave as if they are one giant super-atom. A is for ... Atom This is the basic building block of matter that creates the world of chemical elements – although it is made up of more fundamental particles. R is for ... Radioactivity The atoms of a radioactive substance break apart, emitting particles. It is impossible to predict when the next particle will be emitted as it happens at random. 
All we can do is give the probability that any particular atom will have decayed by a given time. S is for ... Schrödinger’s Cat A hypothetical experiment in which a cat kept in a closed box can be alive and dead at the same time – as long as nobody lifts the lid to take a look. X is for ... X-ray In 1923 Arthur Compton shone X-rays onto a block of graphite and found that they bounced off with their energy reduced exactly as would be expected if they were composed of particles colliding with electrons in the graphite. This was the first indication of radiation’s particle-like nature. H is for ... Hawking Radiation In 1975, Stephen Hawking showed that the principles of quantum mechanics would mean that a black hole emits a slow stream of particles and would eventually evaporate. Q is for ... Quantum biology A new and growing field that explores whether many biological processes depend on uniquely quantum processes to work. Under particular scrutiny at the moment are photosynthesis, smell and the navigation of migratory birds. N is for ... Nonlocality When two quantum particles are entangled, it can also be said they are “nonlocal”: their physical proximity does not affect the way their quantum states are linked. R is for ... Reality Since the predictions of quantum theory have been right in every experiment ever done, many researchers think it is the best guide we have to the nature of reality. Unfortunately, that still leaves room for plenty of ideas about what reality really is! Z is for ... Zero-point energy Even at absolute zero, the lowest temperature possible, nothing has zero energy. In these conditions, particles and fields are in their lowest energy state, with an energy proportional to Planck’s constant. U is for ... Universe To many researchers, the universe behaves like a gigantic quantum computer that is busy processing all the information it contains. W is for ... 
Wave-particle duality It is possible to describe an atom, an electron, or a photon as either a wave or a particle. In reality, they are both: a wave and a particle. C is for ... Cryptography People have been hiding information in messages for millennia, but the quantum world provides a whole new way to do it. Latest Video
eede5071e02dad3b
Why Stuff is Hard

Why is stuff hard? That is, how can matter become solid, instead of just floating through us (or into the center of the earth) like a ghost? It might seem like a silly question, one whose burden of explanation ought to fall on the exceptions to the rule, such as holograms and optical illusions. But as I learned about matter from a particle physics perspective, I became increasingly perplexed that this stuff ever manages to condense itself into anything concrete. I carried this question around with me for years before finding the explanation at the end of this article; I’m surprised how rarely it is addressed in detail.

Why stuff might be soft

Fundamental particles of matter, according to Sir Isaac Newton centuries ago, or Democritus millennia ago, are hard, solid shapes that can be stacked and stuck together (with little hooks, in Democritus’s theory). News graphics of particle physics reactions suggest a similar picture, rendering electrons and quarks as shaded spheres emanating from a billiards collision. For most of our history, we have conceived of matter as something that occupies space exclusively, with an inclination toward defining its reality by its impenetrability. When Macbeth saw an intangible knife floating before him,

Art thou not, fatal vision, sensible
To feeling as to sight? or art thou but
A dagger of the mind, a false creation,
Proceeding from the heat-oppressed brain?

it was either imaginary or conjured by witches. When Samuel Johnson heard Berkeley’s theory that all physical objects exist only in the mind as ideas, he pronounced, “I refute it thus!” and kicked a large stone. He could not have imagined how many neutrinos, cosmic rays, and (very likely) dark matter particles were pouring through his body at that instant.

Today, we define matter using quantum field theory, the culmination of the first 50 years of trying to understand quantum mechanics.
Quantum field theory is a framework, rather than a single theory, only making predictions when given a set of fundamental fields and interactions. The goal of particle physics is to identify these inputs (or, if that doesn’t work, improve upon the framework).

A classical field, in this language, is a function from every point in space-time to a number, spinor, vector, tensor, or some other structure of numbers. The first fields discussed in earnest were the electric field and the magnetic field, both of which map points in space-time to 3-component vectors. These fields are manifestly real (they make telegraphs work) even though they fill all of space, permeating all matter, as well as each other. The crowning demonstration of the reality of these fields came when Maxwell predicted the existence of radio in 1873 (13 years early) as self-perpetuating waves in the electromagnetic field.

In quantum field theory, all matter consists of self-perpetuating waves in one of several quantum fields: the up-quark field, the down-quark field, the neutrino field, etc. These fields fill all of space— an empty vacuum is simply a region without waves. In modern language, we might call Maxwell’s electromagnetic field the photon field, the field of photon particles.

A quantum field differs from a classical field in that it is a probability distribution over classical fields (plus a “phase,” an angle in an abstract space, which is not important for this discussion). This probability distribution is constrained by a differential equation called the Schrödinger equation, which often restricts energy to a discrete set of values. In particular, the energy of a standing wave, a particle at rest, is forced to be an integer multiple of the particle mass. If we want to add waves to the electron field (that is, make electrons), we can only add 0.511 MeV (one electron), 1.022 MeV (two electrons), or 1.533 MeV (three electrons), etc.
The very fact that matter comes in particles of fixed mass, rather than a mushy continuum, is a consequence of quantum mechanics! The quantum field is therefore both more free and more constrained than the classical field, as illustrated below. The energy in a quantum field can’t be any arbitrary value, but it can be several restricted values at the same time.

[Figure: classical field versus quantum field]

Derivation of quantized particles with minimal prerequisites

This is an aside, but if you’d like to see where this quantization comes from, the following derivation only requires first-year differential equations. In the simplest case of a real-valued field with no interactions, the Schrödinger equation is

\displaystyle \sqrt{ \left(i\frac{\partial}{\partial t}\right)^2 + \nabla^2 }\;\Psi \;=\; -\frac{1}{2}\left( \frac{\partial^2}{\partial\phi^2} - m^2\phi^2 \right)\Psi

where \Psi is the quantum field: (the square root of) a probability distribution over the 5-dimensional space t, x, y, z, \phi, where \phi is the classical field value. (We could think of the classical field as being a single point in that 5-dimensional space. That’s equivalent to a function from 4 dimensions to a real number.) The m^2\phi^2 term is the potential energy: an energy cost that penalizes large values of |\phi| (Einstein’s equivalence between mass and energy). If we divide both sides by \Psi and assume that \Psi factorizes into a function of space-time multiplied by a function of classical field value, this differential equation becomes separable (justifying the assumption). The left-hand side of the equation would then only depend on t, x, y, z and the right-hand side would only depend on \phi. Therefore, both sides must equal a constant, suggestively called M for mass.
The left-hand side is a wave equation in space and time with energy and momentum related by

\displaystyle\sqrt{E^2 - |\vec{p}|^2} = M

and the right-hand side becomes

\displaystyle\left( m^2\phi^2 - 2M \right) \psi(\phi) = \frac{\partial^2 \psi}{\partial\phi^2}

with \psi(\phi) being the factor of \Psi depending on \phi only. This equation is hard to satisfy; it is what constrains M, and therefore the particle energies, to a discrete set of values. The solution is

\displaystyle \psi_n(\phi) = \exp\left( -\frac{m}{2}\phi^2 \right) H_n\left(\sqrt{m}\,\phi\right)\;\;\mbox{only if}\;\;M = \left( n + \frac{1}{2} \right)m

where H_n are Hermite polynomials for integers n. Excitations of the real-valued quantum field are therefore waves with \sqrt{E^2 - |\vec{p}|^2} (the solution to the left-hand side) constrained to be \left( n+\frac{1}{2} \right) m (to solve the right-hand side). The actual field values are spread over a continuum on both sides of zero— if the energy is single-valued, the field amplitude cannot be. This is Heisenberg’s uncertainty principle in the field theory context.

Getting back to “Why stuff might be soft”

Given this picture of matter as waves, it’s hard to imagine how it could ever coalesce into something solid. In fact, the above example doesn’t. These waves pass through each other, doubling the energy in the region where they superimpose, returning to their original shapes as they continue on their way. It was a simple example without interactions; a more realistic treatment would include extra terms that allow energy to flow from one field to another, in the same way that vibrational energy flows from a cello string to its sounding board to the air in a concert hall. A field without interactions vibrates in isolation, unable to be heard.
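As a sanity check on the derivation above, the right-hand-side equation \psi'' = (m^2\phi^2 - 2M)\psi can be diagonalized numerically. This is a sketch only (the grid size, domain, and units are my own arbitrary choices, not from the original), but the lowest eigenvalues do land near (n + 1/2)m:

```python
import numpy as np

# Solve  psi'' = (m^2 phi^2 - 2M) psi,  i.e. the eigenvalue problem
# (-d^2/dphi^2 + m^2 phi^2) psi = 2M psi,  by finite differences.
m = 1.0                              # particle mass, arbitrary units
n, phi_max = 1500, 8.0
phi = np.linspace(-phi_max, phi_max, n)
d = phi[1] - phi[0]

# central-difference second-derivative operator (psi = 0 at the boundaries)
laplacian = (np.diag(np.ones(n - 1), -1)
             - 2.0 * np.eye(n)
             + np.diag(np.ones(n - 1), 1)) / d**2
H = -laplacian + np.diag(m**2 * phi**2)

M = np.sort(np.linalg.eigvalsh(H))[:4] / 2.0
print(M)   # close to (n + 1/2) m: [0.5, 1.5, 2.5, 3.5]
```

The energy really does come out quantized in steps of the mass m, with the half-step zero-point offset the text mentions.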
This is nearly the case for the neutrino field: there are ten times as many neutrinos by mass as heavy elements like carbon and oxygen (which is like saying there are four times as many ants as humans, by mass), but they interact so weakly with our matter that a few detections per day is a good rate for a ton-scale detector. Neutrinos do not form solid structures. Particle physicists have identified four fundamental interactions in nature:

• Electromagnetism: charged particles attract or repel each other by exciting the photon field, to which they are both coupled.
• Weak Nuclear force: particles change species by de-exciting one field and exciting two others in their place: e.g. a down quark becomes an up and a W^-. This is how many radioactive isotopes decay.
• Strong Nuclear force: holds quarks and nuclei together with gluons, rather than photons; very short-range.
• Gravitation: the medium of exchange is the metric of space-time itself. Gravitons are virtual excitations of the curvature of space-time, treated as a quantum mechanical field.

The “contact force” that keeps solids from pushing through each other is obviously not derived from gravity, and it acts over distances which are too large to be related to the Strong Nuclear force. The Weak Nuclear force is too weak, and that would turn electrons into neutrinos anyway. So we’re left with electromagnetism.

Explanation #1: Electromagnetic force makes stuff hard

Electromagnetism is responsible for nearly all macroscopic phenomena, the major exception being gravity. It is certainly the reason small things stick together: neutral atoms can be polarized and attract each other at short distances, even though they each have zero total charge. Many molecules, like water, have permanent electric dipoles, which make water bead up into drops and crawl up the edges of a glass beaker.
Water’s dipole and oil’s lack of a strong dipole are together responsible for all the hydrophilic/hydrophobic mechanisms in biology, such as keeping our cells from bursting open. But it’s not clear that electromagnetism can be solely responsible for holding things apart. I have never heard a description of exactly how electromagnetism is supposed to do it, and there are some general facts about electromagnetism that seem to preclude its being responsible for the contact force.

The simplest way to hold things apart is to make them out of like charges, since like charges repel electrostatically (that is, without magnetism). Ignoring for the moment that ordinary matter is resolutely neutral, any residual charges being immediately screened by humidity or punished with an electric shock, there’s a theorem by Samuel Earnshaw which states that charged particles cannot be electrostatically trapped. Solids are in a state of stable equilibrium: the (electrostatic) attractive forces must be balanced by the repulsive contact force to keep them from collapsing to a point. The particles in a solid are trapped in Earnshaw’s sense, so electrostatic forces can’t be the reason for it.

More likely, contact forces would be due to electrically polarized atoms or molecules. I wrestled with this for a long time, trying to make a model that works. The problem is that polarized particles should attract each other, except for unusual special cases. As two neutral atoms approach, the positive parts of one lean toward the negative parts of the other, minimizing the distance between the unlike charges and maximizing the distance between the like charges, making the total force attractive. It is possible for molecules to have permanent dipoles, but then they can simply rotate themselves to minimize distance between unlike charges, becoming attractive again.
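The point about rotating dipoles can be made concrete with a toy model: two dipoles, each a pair of point charges, one fixed and one free to rotate. Everything here is a hypothetical sketch in arbitrary units (the separations, charges, and function names are my own), but it shows that some orientation of the second dipole is always attractive:

```python
import numpy as np

def interaction(dip_a, dip_b):
    """Electrostatic interaction energy between two sets of point charges,
    in units where Coulomb's constant times q^2 = 1. Charges: (x, y, q)."""
    return sum(qa * qb / np.hypot(xa - xb, ya - yb)
               for xa, ya, qa in dip_a
               for xb, yb, qb in dip_b)

s, d = 0.2, 1.0   # charge separation within a dipole; distance between dipoles

# dipole 1 fixed along x at the origin; dipole 2 at distance d, rotated by theta
dip1 = [(+s / 2, 0.0, +1.0), (-s / 2, 0.0, -1.0)]

def energy(theta):
    dip2 = [(d + s / 2 * np.cos(theta), +s / 2 * np.sin(theta), +1.0),
            (d - s / 2 * np.cos(theta), -s / 2 * np.sin(theta), -1.0)]
    return interaction(dip1, dip2)

energies = [energy(t) for t in np.linspace(0.0, 2.0 * np.pi, 361)]
print(min(energies) < 0.0)   # True: some orientation is attractive
print(max(energies) > 0.0)   # True: the head-to-head orientation repels
```

A dipole left to its own devices settles into the negative-energy (attractive) orientation, which is exactly why dipole forces can’t hold things apart.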
In biology, huge molecules can use repulsive polarization to their advantage, largely because they can root themselves relative to the object they want to repel. But this can’t be the reason so many simple substances solidify. Magnetism always comes in dipoles, so the same arguments apply. I’m fairly convinced that contact forces cannot be due to electromagnetism alone, though I don’t have a proof that rules out all possible mechanisms. The puzzling thing is that I have heard “electromagnetism” (with no further explanation) cited as the origin of contact forces in several reputable physics popularizations, one of them being Brian Greene on Nova. In our case study, we will see that electromagnetism is involved in holding metals together, but it is not responsible for the repulsive contact force.

Explanation #2: The Pauli exclusion principle makes stuff hard

Closer to the heart of the matter is Pauli’s exclusion principle, which states, roughly, that “two identical particles cannot occupy the same state at the same time.” That sounds like the solution to our problem, given as an axiom! It is the second explanation that I have heard in popular presentations of physics, always discouragingly unspecific. We have reason to be wary— this is the effect which “becomes significant” when matter is crushed in white dwarf stars. Could it also be responsible for balsa wood?

Derivation of Pauli’s exclusion principle

To see more clearly what this principle states, we should return to our formulation of matter as a quantum field. Last time, I skirted past the fact that the quantum field is the square root of a probability distribution, with a phase. The function maps classical field configurations to an “amplitude,” A, which is a complex number with a normalization property when it is squared. |A|^2, or A^*A, is interpreted as probability density.

\displaystyle \int A^*(x) A(x) \, dx = 1

(This “amplitude” is a new word. It is not the amplitude of the classical field— sorry!
To use my notation from an earlier section, A is \Psi(t,x,y,z,\phi), not \phi(t,x,y,z).) The phase of the complex number is lost when A is squared, but it is relevant when two waves superimpose, because their relative phase determines whether they add constructively or subtract destructively. Pauli’s exclusion principle applies to spinor fields, not fields of real numbers or vectors. Spinors are mathematical objects which are negated by 2\pi rotations. Vectors, which I assume you’re more familiar with, are unaffected by rotation by 2\pi. Imagine rotating a teacup 360 degrees— if it’s a vector, you get the same teacup back, but if it’s a spinor, you get minus a teacup (which, if squared, is a teacup squared in either case). For concreteness, we can represent spinors with matrices. A vector living in 3-dimensional space is a 3-tuple that is rotated by applying this transformation:

\displaystyle \left(\begin{array}{c} x' \\ y' \\ z' \end{array}\right) = \left(\begin{array}{c c c} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{c} x \\ y \\ z \end{array} \right)

while a spinor living in 3-dimensional space is a 2-tuple that is rotated by applying this transformation:

\displaystyle \left(\begin{array}{c} x' \\ y' \end{array} \right) = \left(\begin{array}{c c} \cos\frac{\theta}{2} - i \sin\frac{\theta}{2} & 0 \\ 0 & \cos\frac{\theta}{2} + i \sin\frac{\theta}{2} \end{array} \right) \left(\begin{array}{c} x \\ y \end{array} \right)

Note that x', y' \to -x, -y when \theta \to 2\pi for spinors. (These are both special cases of rotation around the z axis, and the above spinor representation applies only to spinors in the z axis. A spinor only lives in one axis, with the two components interpreted as “up” and “down.”) The key thing about spinors is that the amplitude of two spinor-particles is negated if they are exchanged.
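The two rotation matrices above are easy to check numerically. In this sketch (the function names and the \theta = 2\pi test case are my own, not from the original), only the spinor picks up a minus sign after a full turn:

```python
import numpy as np

def rotate_vector(v, theta):
    """Rotate a 3-vector about the z axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c,  -s,  0.0],
                    [s,   c,  0.0],
                    [0.0, 0.0, 1.0]])
    return rot @ v

def rotate_spinor(spinor, theta):
    """Rotate a z-axis spinor by theta, using the 2x2 matrix from the text."""
    half = theta / 2.0
    rot = np.array([[np.cos(half) - 1j * np.sin(half), 0.0],
                    [0.0, np.cos(half) + 1j * np.sin(half)]])
    return rot @ spinor

v = np.array([1.0, 0.0, 0.0])
s = np.array([1.0 + 0.0j, 0.0 + 0.0j])

print(rotate_vector(v, 2.0 * np.pi))   # ≈ (1, 0, 0): the same vector
print(rotate_spinor(s, 2.0 * np.pi))   # ≈ (-1, 0): minus the original spinor
```

The teacup analogy in code form: a 360-degree turn is the identity for vectors but a sign flip for spinors.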
We can construct an example of this by placing one spinor-particle at x with spin (1,0) and the other at -x with spin (-i,0), which is (1,0) rotated by \theta=\pi. The amplitude of a two-particle system is the product of the amplitudes of the individual particles, because it represents the combined probability of the pair. We can interchange the two particles by rotating everything by \theta=\pi: this swaps the positions of the two particles, and the spins become (-i,0) and (-1,0). This is the same scenario we started with, except that one of the spinors has acquired a minus sign, so the whole amplitude has a minus sign. The general proof (of the Spin-Statistics Theorem) follows these lines. Now imagine two particles in the same position with the same spin. Swapping them yields exactly the same state, so the amplitude is unchanged. But it is also negated, therefore A=0. There is no amplitude for a pair of spinor-particles in the same state, so there is no probability for it, either! This is Pauli’s exclusion principle. It is not an axiom: it is derived from the rotation properties of spinors.

Consequences of Pauli’s exclusion principle

So what happens if two spinor-particles merely get close to each other? If there are no other fields whose coupling is strong enough to drain their mass-energy away, the spinor-particles can’t disappear. The total probability density must be 1, so one of them must change its state, either by changing spin from (1,0) to (0,1) or by entering a higher-energy state.

[Figures: open vice-grip; closed vice-grip with excited particles]

Imagine a row of spinor-particles, all at rest, lined up along the x axis. If we crush them in a vice-grip, we will encounter no resistance until the length of the row is halved, because the particles will happily overlap each other in the remaining space, selecting opposite spins. But as we twist the crank further, the particles will be forced to overlap each other with the same spin.
The only way they can do that is by climbing into higher and higher states of kinetic energy. (High-energy states are standing waves with more wiggles.) At last, when the vice-grip is one particle wide, every state from the ground state up to state number N/2 will be filled with two of the N particles. We provided the energy needed to push the particles into the upper states with the handle of the vice-grip, and it felt to us like a resisting force. Force is defined F = -dE/dx, so the Pauli exclusion principle really did exert a force against our hand as we turned the crank (sometimes called the exchange force). But the Pauli exclusion principle isn’t one of the four fundamental forces! Gravity, Electromagnetism, and the Strong and Weak Nuclear interactions are put into quantum field theory by hand; the Pauli exclusion principle is a derived consequence of the rotation of spinors, independent of interactions! How can a principle exert a force?

This is not the only force-which-is-not-a-fundamental-interaction. The random motions of air molecules inside a balloon exert a force on the balloon’s surface, even though they are freely-streaming particles, without interactions (ideally). Balloons and vice-grips are made of charged particles, so we might expect electromagnetism to be a distant cause, but it doesn’t need to be. Imagine, instead of a vice-grip, that the particles are enclosed in a small, toroidal universe with finite volume. If the volume shrinks for some reason, the particles will resist, even if there are no interactions in the Schrödinger equation at all. Thus, the Pauli exclusion principle can provide a resisting force that acts qualitatively like the contact force from freshman physics. But is it big enough for balsa wood?

Case study: what makes metal hard?

I decided one day that I had gone long enough without knowing why things are hard, so I went to the library to find out.
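Before diving into the case study, the vice-grip argument above can be made quantitative with a toy model. This sketch uses my own arbitrary choices (units with \hbar^2\pi^2/2m = 1, 100 particles): it fills 1D box levels two spins at a time and shows the total energy rising, i.e. pushing back, as the box shrinks.

```python
import numpy as np

def total_energy(n_particles, box_length, spins_per_level=2):
    """Ground-state energy of spinor-particles stacked in a 1D box.
    Units chosen so that level n costs E_n = n^2 / L^2."""
    levels = int(np.ceil(n_particles / spins_per_level))
    energies = np.arange(1, levels + 1) ** 2 / box_length ** 2
    occupancy = np.full(levels, float(spins_per_level))
    occupancy[-1] = n_particles - spins_per_level * (levels - 1)
    return float(np.sum(energies * occupancy))

N = 100
for L in (10.0, 5.0, 2.5):
    print(L, total_energy(N, L))   # total energy grows as the box shrinks

# the resisting "exchange force" F = -dE/dL, estimated by finite differences
dL = 1e-6
F = -(total_energy(N, 5.0 + dL) - total_energy(N, 5.0 - dL)) / (2.0 * dL)
print(F > 0.0)   # True: compressing the box is met with a restoring force
```

No interaction term appears anywhere in this model; the restoring force comes purely from the two-per-level filling rule.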
The answer lies in “that other branch of physics,” namely, everything but particle physics/cosmology. The majority of physicists start with protons, neutrons, and electrons and derive from these everything we encounter in our macroscopic lives, including tangibility. I expected “Why things are hard” to be a chapter in a standard Solid State Physics textbook. At least, I expected it to come in a “Basic Properties” section, before applications to transistors. I never found a general answer to my question, and I suspect that the exact answer might differ in the details for every material. The Pauli exclusion principle is probably involved at some level for all of them, though it could be obfuscated by other effects. From my reading, I was able to derive the hardness of one simple, but undeniably hard, material: metal. I need to apply some approximations, and only carry out my calculation to the nearest order of magnitude, but a rough agreement with the measured hardness of metal gives me some confidence that this is the correct explanation.

Many metals can be described by the following picture: a lattice of atomic nuclei, all the same element, surrounded by several filled shells of electrons. Electrons are spinor-particles, so they obey Pauli’s exclusion principle: only one can fill a given orbital+spin energy level. The innermost shell accepts exactly 2 electrons (purely determined by spin), the next takes 8, the next 18, etc. Beyond the filled shells, the atoms may require 3 to 6 more electrons to be neutral, but this is not enough to fill a shell. These “valence” electrons are loosely bound and roam from nucleus to nucleus. The nuclei attract each other electrostatically, because although most of their charge is screened by the filled shells and the valence electrons, not all of it is. This is the attractive force that keeps metal from flying apart.
The repulsive force that balances it, and resists the force of my swinging fist, derives from the dependence of the valence electrons’ energy on the size of the metal object, highly amplified by the Pauli exclusion principle. It’s worth noting that elements with no valence electrons are all formless gases.

To derive the hardness of metal, we must consider how the internal energy responds to crushing, just like the example of the vice-grip on a line of particles. The quantity that measures three-dimensional crushing is called the bulk modulus,

\displaystyle B = -V\frac{dP}{dV} = V\frac{d^2E}{dV^2}

also known as 1/\kappa, inverse compressibility. (The pressure is P = -dE/dV, hence the sign flip.) We need to calculate the energy of the metal as a function of volume. Normal humans cannot crush metal to such an extent that the nuclei or their inner shells are threatened, so the valence electrons’ energy is the only component that matters. The potential energy that the valence electrons feel is a regular lattice of 1/|r| wells, the Coulomb potential due to each nucleus. But do we need all of this detail? Back-of-the-envelope calculations of electron wavelengths yield 10 nm at the smallest, which are tens to hundreds of times the interatomic spacing. The valence electrons will therefore see a smoothed potential that looks remarkably like the infinite square well at the beginning of most Quantum Mechanics books. For the most part, the nuclei just keep the valence electrons from wandering away.

[Figure: approximation of the potential energy]

The solution to the Schrödinger equation for an infinite square well is sinusoidal with zero amplitude along the edges. An electron in the ground state is one big wave that fills the entire metal conductor— if the metal object is, say, a skyscraper, that’s an enormous electron! Or consider the electron that sits in the metallic hydrogen core of Jupiter. Fundamental particles really are waves— their sizes are not intrinsic.
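For a sense of scale, the standard infinite-square-well formula E_n = n^2\pi^2\hbar^2/(2mL^2) can be evaluated for different box sizes; the particular widths below are my own illustrative choices, not from the original.

```python
import numpy as np

HBAR = 1.0546e-34   # J s
M_E = 9.109e-31     # kg (electron mass)
EV = 1.602e-19      # J per electronvolt

def box_energy(n, box_width):
    """Level-n energy of an electron in a 1D infinite square well."""
    return (n * np.pi * HBAR) ** 2 / (2.0 * M_E * box_width ** 2)

# an atom-sized box versus a centimeter-sized block of metal
for width in (1e-10, 1e-2):
    print(width, box_energy(1, width) / EV, "eV")
```

An electron confined to an atom-sized box costs tens of eV, while one spread over a centimeter of metal costs a vanishingly small energy: delocalizing is cheap, which is why the ground-state electron happily fills the whole conductor.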
The three-dimensional infinite square-well problem is solved in detail on the “Fermi Sea” Wikipedia page, in exactly our context: valence electrons in metal. The energy of a given state is \displaystyle E(n_x,n_y,n_z) = \frac{\hbar^2\pi^2}{2m}\left(\frac{{n_x}^2}{{L_x}^2} + \frac{{n_y}^2}{{L_y}^2} + \frac{{n_z}^2}{{L_z}^2}\right) where we will ignore dimensionless constants of order one. If the electrons did not obey Pauli’s exclusion principle, the total energy of N electrons would be \displaystyle E_{\mbox{\scriptsize tot}}=\frac{\hbar^2}{mV^{2/3}}N because they would all be in the ground state (barring excitation due to thermal noise). Since electrons are spinor-particles, the exclusion principle applies and the electrons fill one state each, from the ground state up to the N^{\mbox{\scriptsize th}} state. (Thermal excitation only blurs the top few levels.) We can integrate for the total energy by representing n_x,n_y,n_z as the first quadrant of a sphere— this is all worked out in detail on the above-mentioned Wikipedia page. The total energy is actually \displaystyle E_{\mbox{\scriptsize tot}}=\frac{\hbar^2}{mV^{2/3}}N^{5/3} The 5/3 power, applied to typical numbers of valence electrons (10^{23}), makes a difference of a factor of 10^{15} in total energy. Metal would be much squishier (and denser) without it. Now we can calculate the bulk modulus. \displaystyle B=V\frac{d^2E}{dV^2}=V\frac{N^{5/3}\hbar^2}{mV^{8/3}}=\frac{N^{5/3}\hbar^2}{V^{5/3}m}= \left(\frac{N}{V}\right)^{5/3}\times 10^{-38} \mbox{ Nm}^3 Armed with this prediction, I confronted the Periodic Table and was immediately overwhelmed by the qualitatively different kinds of metals. The transition metals, including some of the most familiar such as iron, gold, and tin, don’t fit the simple picture I presented at the beginning of this derivation because they have two unfilled shells, and the other shell can be fairly large (holds 18). 
I don’t know how many of these electrons to call “valence.” The non-transition metals are divided into semi-metallic “metalloids” and post-transition “poor metals.” I found the best agreement in the Boron and Carbon families, an indication that unaccounted-for systematic effects are lurking among the data, preferring certain electron configurations over others. Here are the data for the Boron and Carbon families, with an asterisk marking the poor metals.

Element     Nuclei/V (\times 10^{27}/\mbox{m}^3)   Valence electrons   Prediction (\times 10^{7} \mbox{N}/\mbox{m}^2)   Measured (\times 10^{7} \mbox{N}/\mbox{m}^2)   Ratio (meas/pred)
Boron          130   3   21000   32000   1.52
Silicon         48   4    6400   10000   1.56
Aluminum*       59   3    5600    7600   1.36
Thallium*       34   3    2200    4300   1.95
Tin*            36   4    4000    5800   1.45
Lead*           32   4    3300    4600   1.40

The fact that the ratio is not 1.0 is no surprise: we ignored constants of order unity. What is interesting is (a) the prediction and the measurement are the same order of magnitude, indicating that this mechanism really can explain most of the effect, and (b) there’s a correlation between the measurement and the prediction: the values vary by a factor of 7 from Thallium to Boron, but their ratios vary by less than a factor of 1.5. So what happened to this being the effect which “becomes significant” in white dwarfs? It’s certainly significant in ordinary matter! It just isn’t a factor in normal stars, because stars are so hot that the electrons’ kinetic energies aren’t limited to the minimum-energy states. When stars cool into white dwarfs, they become more like metal. Because I’m honest, here are the rest of the non-transition metals.

Arsenic         45   5    8300    2200   0.27
Antimony        32   5    4700    4200   0.89
Tellurium       28   6    5100    6500   1.27
Bismuth*        28   5    3800    3100   0.82

(No data for Gallium.) Including these, we can see variations of a factor of 7 in the ratio. Perhaps there isn’t anything special about the Boron and Carbon families; it could have more to do with metalloids versus poor metals.
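Since the prediction is just (N/V)^{5/3} \times 10^{-38} \mbox{ N m}^3, both the prefactor and a few table rows are easy to check. This sketch copies the nuclei densities and valence counts from the table above and uses the rounded 10^{-38} prefactor from the text (the constants are standard CODATA-style values):

```python
HBAR = 1.0546e-34   # J s
M_E = 9.109e-31     # kg (electron mass)
print(HBAR ** 2 / M_E)   # ~1.2e-38 N m^3: the prefactor quoted in the text

table = {                   # nuclei per m^3 (x 1e27), valence electrons
    "Boron":    (130, 3),
    "Silicon":  (48, 4),
    "Thallium": (34, 3),
}
for element, (nuclei, valence) in table.items():
    density = nuclei * 1e27 * valence            # valence electrons per m^3
    bulk_modulus = density ** (5.0 / 3.0) * 1e-38
    print(element, round(bulk_modulus / 1e7))    # ~21000, ~6400, ~2200
```

The recomputed values land within about 1% of the table’s predictions, so the quoted numbers are self-consistent.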
There are way too few elements to do a statistical analysis— to find out what’s really going on here, I would need to learn Chemistry. Now it’s time to back up: have we answered our question? We have just explained (roughly, at least) how metal resists being crushed, assuming that we have the power to move the edges of the square-well potential. But why would pushing a metal face move the edge of the potential? That potential is set by the nuclei— couldn’t I push my finger ghost-like into the metal, leaving the nuclei, and therefore the square-well potential, fixed? (The metal’s nuclei and the nuclei in my finger are both very small; they won’t collide.) Here’s how I think about it: suppose we push a block of metal with a metal finger, all the same substance. The block plus the finger can be considered one object, and if they could interpenetrate, that would be the same as saying that the block-plus-finger object is shrinking, and the bulk modulus would be sure to prevent that! My finger is not made of metal, but it has the same effect. This still amazes me, because the outermost electrons in my finger are not free-roaming valence electrons; they form a wide variety of configurations, and yet it all still works— solids don’t interpenetrate. Conclusion: which explanation was right? As we have seen, Pauli’s exclusion principle has more to do with why things are hard than electromagnetism does. But its role is not particularly simple, and it isn’t enough to say that “two particles can’t sit in the same place at the same time,” because they can and they do. The electrons (or really, electron-waves) in a metal all fill the whole structure, but at strictly different energy levels. And even this isn’t the direct cause of the contact force but an amplifier of it: the direct cause is the fact that the energies of all the electron states scale with the size of the box they are forced to live in. 
One could point out that it is electromagnetism that keeps them in the box, but if that means that electromagnetism wins, it wins by an enormous technicality. The disturbing thing about this picture is that it is not general. The exact reason that one material, like metal, is hard is not necessarily the reason that another material, like my finger, is somewhat hard. This might account for the vast diversity in pliability and texture in nature, but it makes me wonder if there’s a more general way of looking at it that I’m missing, or if such a complicated rule has an exception. Could some highly engineered substance be made intangible, like recent materials designed for invisibility? That’s a tricky problem, because the same force that resists outside pressure is the force that keeps metal from collapsing. To make an intangible solid, one must find a way to balance the attractive force of the substance’s internal particles without being influenced by penetration from outside. We could do this if there were new fields in the Schrödinger equation which are strongly coupled to each other but weakly coupled to our fields. That would be a ghost universe, with planets we could orbit but never land on— in fact, we could orbit inside them! This line of reasoning is unfair, because you can’t make new technology by rewriting the laws of physics. There is some value to thinking about it, though, because dark matter is a field with very little coupling to our own matter. This has been firmly established with gravitational observations (we could definitely orbit it), but not directly, with physical detectors (emphasizing the point that it couples very weakly). We don’t know if there are new strong interactions felt only by dark matter, which would be necessary to make dark planets, and all indications so far suggest that the vast majority of dark matter is softer than gas. 
But suppose that there are different kinds of stable dark matter particles, and only a small fraction of them interact strongly with themselves. This could be enough to make a planet here and there… We’ll find out when the dark matter people living in the center of the earth send up a satellite to discover, “What’s beyond the Mantle?” 18 Responses to “Why Stuff is Hard” 1. Kea Says: Don’t worry – it’s a fantastic post. We’re just speechless at your blogging skills. I especially liked the point about quantized masses. 2. Rueben Says: …way too long an explanation for us dummies. “If you can’t explain it to your grandmother — you don’t understand it.” {– A. Einstein} 3. Jonathan Vos Post Says: I think this sweeps some hard problems under a soft rug. We know that crystals of various substances and crystallographic point groups exist and are stable in 3+1 dimensional space (x,y,z,t). But only in the past decade or so can we mathematically suggest why that stability is so. Most solid matter is not crystalline. Except for little bits of tooth and bone, we ourselves are squishy wet soft stuff. Why is soft matter stable? Why are DNA and RNA and proteins in protoplasm stable? Even more to the point: what is the actual structure of liquid water, and why is it stable? The question of loops versus strings in water was debated in the past couple of years between Los Alamos people and other people. Why is glass hard? These are serious questions. 4. Kea Says: Agreed, Jonathan, but would you mind clarifying the mysterious comment: 5. jpivarski Says: Hi all, thanks for your comments! I had to choose a level for this article, and I chose to write it for interested mathematicians, and myself five years ago. I can’t write about the specifics of soft matter, glass, and water, because I don’t know much about them. However, I also want to understand this subject in general— it’s disturbing that most things in the world feel solid, but we need a separate explanation for each of them. 
I think I’ve found a truly general argument (see my next post), and that one necessarily glosses over details. I, too, would like to learn if water is stringy (or loopy). Is it? 6. Kea Says: It’s neither stringy nor loopy because these are failed attempts at QG. 7. Michael D. Cassidy Says: Thanks for this post, though there are parts that went by me, it was wonderful to read. 8. The uselessness of physics in fundamental research at Freedom of Science Says: […] Here a physicist ruminates about the hardness of matter:1 The Pauli exclusion principle is probably involved at some level for all of them, though it could be obfuscated by other effects. From my reading, I was able to derive the hardness of one simple, but undeniably hard, material: metal. … […] 9. Abubakar Mahre Says: I am a student from the department of Geological Engineering at Kaduna Polytechnic, Nigeria, on a project trying to know what makes a material hard. But couldn’t find any meaning. 10. Abubakar Mahre Says: 11. John Heath Says: Knowing what an electron is and by what means it likes a positron but not another electron would go a long way towards answering the question “why stuff is hard”. No answer on my end but it would be nice to know what an electron is other than .511 MeV and the smoke and mirrors of quantum probabilities. Where is the beef? 12. The uselessness of physics in fundamental research « How the world works Says: […] Here a physicist ruminates about the hardness of matter: ((Studying hardness of matter has been the quintessential scholastic subject for millennia.)) The Pauli exclusion principle is probably involved at some level for all of them, though it could be obfuscated by other effects. From my reading, I was able to derive the hardness of one simple, but undeniably hard, material: metal. … […] 
18. Jens Wilkinson Says: I wanted to comment about this statement: “Ignoring for the moment that ordinary matter is resolutely neutral. . .” True, but on the other hand, I think that since the EM force follows the inverse-square law, if two atoms are very far apart they essentially seem neutral, but as they get closer and closer, the electron shells (which are closer to the approaching atom) will become the dominant force, so the repulsion between the electron shells will become stronger and stronger compared to the attraction between either of the electron shells and the protons in the other atom. So they will experience a repulsion, I think.
A Beginner's Guide to Quantum Mechanics Quantum mechanics is notoriously weird and difficult to comprehend. Many people I speak to seem to know of it, but not much about it at all- which is my main motivation behind writing this article. I will (briefly) go through some of the insights that quantum mechanics has provided about the way we see the world around us, and hopefully clarify things- at least so that the next time the word ‘Quantum’ is used somewhere completely out of context, you’ll be able to call ‘Bullshit’ on whoever is trying to sound like they know what they’re talking about (happens a lot more than you’d think!). Figure 1. Max Planck The term ‘quantum’ was coined by German physicist Max Planck (1858–1947) to describe a discrete amount of energy- a ‘package’ of energy. Planck, who was one of the founding fathers of quantum theory, was researching the relationship between intensity (the amount of light) and frequency of light. At that time, there were only empirical laws to describe this relationship (Wien’s Law for high frequencies, and the Rayleigh-Jeans law for low frequencies) but no solid theoretical framework which could successfully make predictions in accordance with these [1]. As part of the solution, Planck postulated that energy can only occur in discrete packages, or ‘quanta’. From this postulate he was able to derive a law which stood in accordance with experimental data, and therefore with the empirical laws known at the time. E = h·f This short, but sweet equation gives one the relationship between the energy (E) associated with a certain frequency of light (f). h is known as Planck's constant, which is always the same, tiny quantity. This means that the value of h·f is always the smallest package of energy attainable for a given frequency. 
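Planck's relation is simple enough to evaluate directly. A quick sketch in Python (the frequency used for green light is just an illustrative round number):

```python
# E = h*f: the smallest energy "package" carried by light of frequency f.
H = 6.626e-34            # Planck's constant, J s

def photon_energy(freq_hz):
    """Energy of one quantum of light at the given frequency, in joules."""
    return H * freq_hz

green = 5.6e14           # roughly the frequency of green light, Hz
print(photon_energy(green))   # ~3.7e-19 J: a very small package indeed
```

Visible-light quanta are some nineteen orders of magnitude smaller than everyday energies, which is why the graininess of light went unnoticed for so long.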
However, it must be noted that Planck merely postulated this thinking; it was a mathematical trick which happened to yield promising results- it did not yet occur to him that he had fundamentally changed the way physicists would view the world [2]. Five years later Albert Einstein used Planck's postulate in order to provide a theoretical understanding of the photoelectric effect- a current generated in a conductor through illumination with visible light/ultraviolet. This work gained Einstein the Nobel Prize in Physics in 1921 (contrary to popular belief, Einstein’s work on relativity, which he is most famous for, did not win him a Nobel Prize) [3]. Figure 2. Visual representation of the photoelectric effect From the definition of the quantum sprang a whole new branch of physics which we now know as quantum mechanics. Before getting carried away with the fascinating history of quantum mechanics, I will focus on a few key ideas which have emerged from it- ones I often find myself speaking about when people ask ‘could you just explain quantum to me?’. I will certainly do my best. I want to note that this article will not explain how these phenomena work, but rather lay out different phenomena and present some evidence for these occurring. I will, however, provide several links to papers which attempt to explain how, for the most interested of readers. Wave-Particle Duality For a very long time scientists were locked in a heated debate about the nature of light. Scientists such as Christiaan Huygens believed light to be a wave, because of the many wave-like properties exhibited by light, such as diffraction, polarization and interference. However, Isaac Newton believed light to be composed of particles- a theory which was mainly popularised through Newton’s prominence. 
It wasn’t until the early 20th century, when Albert Einstein successfully explained the photoelectric effect using particles of light (photons), that the particle theory of light was seriously considered again. [4] What quantum mechanics has come to show is that light, as well as all other fundamental particles, behaves both as a wave and as a particle depending on the circumstances. Louis de Broglie postulated that particles of matter are also waves of some sort, which have an associated wavelength (known as the de Broglie wavelength). [5] There is an overwhelming amount of evidence to support this [6,7], and in recent times wave-particle duality has been extended far beyond photons and electrons. Wave-particle duality has been demonstrated in objects such as large organic molecules [8]- forcing us to reconsider the definitions of ‘particles’ and ‘waves’, and to ask ourselves whether anything ever exists as purely a particle, or purely a wave. Uncertainty is the biggest factor setting quantum physics apart from classical physics. In fact, it is an extension of wave-particle duality, since the phenomenon of wave-particle duality only arises due to a particle's uncertainty in position, allowing it to exhibit wave-like properties. Before going any further I would like to make a distinction between two types of uncertainty, which both play a big role: uncertainty arising from our inability to know something (information inaccessible to us due to a variety of reasons), and inherent uncertainty which exists within nature itself. Yes, as it turns out, nature itself can be uncertain about its own properties- and this has nothing to do with our ignorance, or our method of probing. The mathematics in quantum mechanics is full of uncertainty relations; inequalities which restrict the amount of information obtainable about a system. Take for example the most famous of them all- Heisenberg’s uncertainty relation, which can be seen below. ΔX · ΔP ≥ ħ/2 
where ΔX is the uncertainty in position, ΔP is the uncertainty in momentum and ħ is the (reduced) Planck constant which we have seen above (kind of). This means that the left hand side of the equation can never equal zero (in fact, never be smaller than ħ/2); there will always be uncertainty in the position and the momentum of a given quantum particle. What this means in physical terms is that you can never exactly know the position and momentum (speed and direction) of a particle at a given time. How can this be possible? Surely there is something that we’re misunderstanding? One attempt to explain uncertainty in a classical sense was to suggest that this was merely another form of the ‘observer effect’ [9]. This effect is a consequence of the inevitability that the properties of a system cannot be measured without being altered; take for example measuring the temperature of a hot water bath. When introducing a thermometer, the temperature of the bath will be (very, very slightly) altered, because energy has to be taken from, or given to, the system in order to record a temperature. The same reasoning can be used for measuring the position of an electron. If you want to know the position of the electron more precisely, you will have to use more energetic light (higher frequency) which, in turn, will change the momentum of the electron. Even though this effect does take place, it is distinctly different from the quantum uncertainty being discussed. Without making things a lot more complicated there is little I can say except for the fact that uncertainty is inherent in nature, and that we have more than enough evidence for this to be credible. In fact, some of our technology even utilises uncertainty, and would not function without it- examples being the scanning tunnelling microscope (STM) [10] and some forms of touch-screens. The scanning tunnelling microscope makes use of the phenomenon of quantum tunnelling. 
(Quantum) Tunnelling is a phenomenon in which particles can appear on the other side of an energy barrier, without having the necessary energy to make it over the barrier. A classical analogy to this would be: picture a ball rolling from side to side in a valley. It does not have the necessary energy to make it to the top of either side of the valley. Quantum tunnelling would allow this ball to spontaneously appear on the other side of the valley, without ever having the necessary energy to make it to the top of one of the hills. Figure 3. Visual representation of quantum tunnelling However, tunnelling is simply yet another amazing consequence of uncertainty, because uncertainty also applies to the kinetic energy a particle possesses. A good and more rigorous explanation of tunnelling and its use in touch-screen technology is found here: The phenomenon of superposition is one where particles tend to have multiple, seemingly mutually exclusive properties at the same time. For example position: besides there being uncertainty in a particle's position, a particle can occupy numerous positions at the same time. This, again, is not a desperate attempt at explaining something that we can’t, but rather a phenomenon demonstrated time and time again [11]. Superposition arises straight from the heart of quantum mechanics- the Schrödinger equation. This equation is the quantum equivalent of Newton’s equations of motion; it governs the dynamics of a quantum system. Just like any other equation, you input certain parameters and then solve for possible solutions- these solutions being properties of the system. As it turns out, there are an infinite number of solutions to the Schrödinger equation- more precisely, the sum of any solutions to this equation is yet another solution! This means nothing less than a particle exhibiting an infinite number of properties at the same time. 
What makes matters even more mind-boggling is the fact that when scientists attempt to measure particles being in several states at once- they never do! Quantum objects start behaving classically when we measure them; one way we know that superposition exists is through secondary effects- outcomes which can only occur if two things happened at once. Superposition has been demonstrated in countless experiments, such as the famous double-slit experiment. If the reader is not familiar with this experiment I would strongly advise looking it up- it is mind blowing! Here is a link to a video explaining it quite well (but it’s slightly cheesy!): Another more recent experiment placed not a tiny particle, but a nano-sized ‘tuning fork’ into a superposition of states; researchers got this mechanical object to vibrate at several frequencies at once! [12] Furthermore, superposition is utilised by plants in their photosynthesis process, making it up to 99% efficient! [13] (compared to 25%-30% for petrol engines [14], and about 22% efficiency for solar panels [15]). Surely the biggest application of the superposition phenomenon is the development of quantum computers, which famously use ‘qubits’ instead of ordinary ‘bits’. Whereas a classical bit of information is only ever a 1 or a 0, a qubit can also be in the superposed state of being a 1 and a 0 at the same time. This is because qubits are made up from something exhibiting quantum properties: ultracold atoms/ions, photons, or currents in superconductors. This leads to vastly greater computing power because numerous solutions can be processed/calculated at the same time- once successful, quantum computers will undoubtedly change the world. Figure 4. Ultracold ions being held in a line by magnetic fields. This is one way to implement a quantum computer- the computations are made with these ions. 
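The qubit idea above can be sketched in a few lines of Python. This is only a toy simulation of the Born rule, not real quantum hardware: a qubit in an equal superposition of 0 and 1 gives each outcome with probability equal to its squared amplitude, so repeated measurements land close to 50/50.

```python
import random

# A qubit state as two amplitudes (a|0> + b|1>), normalised so |a|^2 + |b|^2 = 1.
a, b = 1 / 2**0.5, 1 / 2**0.5          # equal superposition of 0 and 1

def measure(a, b):
    """Collapse the superposition: return 0 or 1 with Born-rule probabilities."""
    return 0 if random.random() < abs(a)**2 else 1

random.seed(0)                          # fixed seed, for reproducibility
counts = [0, 0]
for _ in range(10000):
    counts[measure(a, b)] += 1
print(counts)   # close to [5000, 5000]
```

Each individual measurement yields a definite classical answer; the superposition only shows up in the statistics, which is exactly the "you never catch it in two states at once" point made above.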
Superposition applies to many different properties; whether it's position, momentum, or even the chronological order of events! [16] That’s right, it has been recently shown that not only can a particle be in several places at once, but that there are quantum systems in which things happen both forwards and backwards in time! Astonishingly, this does not violate any causal inequality, because these events happen in both directions of time- not just backwards. It seems quantum mechanics is still getting weirder about 100 years after its initial formulation! Entanglement is a phenomenon famously called “spooky action at a distance” by Einstein, because it seemingly violated one of physics’ most sacred laws: the speed of light being the maximum achievable speed in the universe. A set of entangled particles influence one another instantaneously, regardless of their separation. These particles could be on opposite ends of the universe, and this would still hold true. The conundrum is that seemingly this implies that information travels between one particle and another at faster than the speed of light (something Einstein had shown to be impossible). As amazing as this is, the paradox is resolved through interpreting the situation slightly differently [17]- something I definitely do not have time to explain in this article (for those particularly curious I can suggest the paper entitled “Quantum mysteries disentangled”, to which I have added a link at the bottom of this page). Where is this ‘quantum’ and why don’t I see it? The short answer is: because you’re looking. For my university dissertation I simulated an experiment demonstrating that quantum mechanics naturally gives rise to classical physics- which we observe in our day-to-day life. When fully quantum particles (with all their weird properties) interact with their environment, they start losing their quantum properties. By environment, I mean anything. 
Since our universe is filled with particles, constantly bumping into one another and interacting- one could say the universe is constantly measuring itself. As we have learned, measurements get rid of quantum properties, which is inherently the reason we see a classical, and not a quantum, world. This loss of quantum properties has been coined ‘decoherence’ and is an area of active research, since it is one of the biggest obstacles facing quantum computers. In order for qubits to be 1’s and 0’s at the same time these qubits have to remain quantum, and not ‘decohere’. This means isolating them from their environment, which is proving to be extremely difficult. It is very difficult to say anything more about decoherence, or the nature of quantum mechanics, without going a lot further into detail and writing a book about it, which is why I would suggest that anyone curious or interested start researching this stuff. I am aware that some of these explanations, to those more knowledgeable, may be rather simplistic- something I couldn’t really help, due to the sheer quantity of information I would have to provide to continue making sense. As promised, here is a link to the paper which attempts to explain how entanglement works: And to those who are still unconvinced, I am happy to send my dissertation upon request, in which I simulate this quantum-to-classical transition mentioned above, and explain the how of things in a lot more detail. [1] – M. Planck (1914). The Theory of Heat Radiation, second edition [2] – Kragh, Helge (1 December 2000), Max Planck: the reluctant revolutionary, PhysicsWorld.com [3] – Fölsing, Albrecht (1997), Albert Einstein: A Biography [4] – http://www.grandinetti.org/quantum-theory-light [5] – Feynman, R.; QED: The Strange Theory of Light and Matter, Penguin 1990 Edition [6] – Darling, David (2007). “Wave–Particle Duality”. 
The Internet Encyclopedia of Science [7] – Davisson–Germer experiment: “The diffraction of electrons by a crystal of nickel” [8] – http://www.livescience.com/19268-quantum-double-slit-experiment-largest-molecules.html [9] – Furuta, Aya (2012), “One Thing Is Certain: Heisenberg’s Uncertainty Principle Is Not Dead”, Scientific American [10] – http://www.azonano.com/article.aspx?ArticleID=1725 [11] – http://www.nature.com/news/physicists-snatch-a-peep-into-quantum-paradox-1.13899 [12] – A. Voje, J. M. Kinaret, and A. Isacsson, “Generating macroscopic superposition states in nanomechanical graphene resonators”, Phys. Rev. B 85 (2012) [13] – http://phys.org/news/2014-01-quantum-mechanics-efficiency-photosynthesis.html [14] – Baglione, Melody L. (2007). Development of System Analysis Methodologies and Tools for Modeling and Optimizing Vehicle System Efficiency (Ph.D.). University of Michigan. pp. 52–54. [15] – http://www.qrg.northwestern.edu/projects/vss/docs/power/2-how-efficient-are-solar-panels.html [16] – http://phys.org/news/2015-11-quantum-superposition-events.html [17] – http://www.flownet.com/ron/QM.pdf
We're making a video presentation on the topic of eigenvectors and eigenvalues. Unfortunately we have only reached the theoretical part of the discussion. Any comments on practical applications would be appreciated. closed as no longer relevant by Andy Putman, Mark Meckes, Benoît Kloeckner, Mark Sapir, Tom Church Feb 3 '12 at 10:26 en.wikipedia.org/wiki/… –  Daniel Moskovich Sep 29 '10 at 11:48 I'd say that physics was pretty much an application of eigenvalues and eigenvectors. :-) In particular normal modes (en.wikipedia.org/wiki/Normal_modes) of oscillations for a system with $n$ degrees of freedom come down to finding eigenvalues/vectors of an $n$-by-$n$ matrix. –  Robin Chapman Sep 29 '10 at 11:50 Please add more context: Who is your intended audience, and what scientific background can be assumed? What form is the presentation (e.g., video lecture like OpenCourseWare, animated demo like the Geometry Center, interpretive dance, etc.)? –  S. Carnahan Sep 29 '10 at 12:00 10 Answers The problem of ranking the outcomes of a search engine like Google is solved in terms of an invariant measure on the net, seen as a Markov chain. Finding the invariant measure requires the spectral analysis of the associated matrix. I would comment on Pietro's answer, but I don't have enough reputation; for a marvelously-titled explanation of Google's PageRank, see The $25,000,000,000 Eigenvector. Google's PageRank system is most likely the most canonical example; however, others include: -Dynamical systems If you are able to express a model in terms of a matrix acting on vectors, one can look at the iterations and ask what occurs. This can be done to model the life cycle of some species in an environment (bacteria on a petri dish, wolf/sheep interaction, the Fibonacci sequence as the spread of a population of bunnies, etc.). 
These examples are fairly small; however, you can certainly have massive systems to model, and if your matrix is diagonalizable, the iterations of this map correspond to iterations of a diagonal matrix (very easy to do!) instead of the standard $m^{2}$ operations to multiply an $m\times m$ matrix by a vector. Think about a $1 000 000 \times 1 000 000$ matrix $M$, where you're looking at whether a certain species will die out (i.e., iterating $M^{n}$ and checking as $n\to\infty$. Quite the time saver!) -Graph theory As an undergrad one of my summer research projects looked into special graphs called (3,6)-fullerenes, where we were finding that, looking at the adjacency matrix of the graph, one could pick 3 well chosen eigenvalues and their corresponding eigenvectors, and generate nice 3d plots of the graphs, whereas other choices would produce degenerate images, involving some twisted 2d surface. -Differential equations One can use eigenvalues and eigenvectors to express the solutions to certain differential equations, which is one of the main reasons the theory was developed in the first place! I would highly recommend reading the Wikipedia article, as it covers many more examples than any one reply here will likely contain, with examples along the way! (Schrödinger equation, Molecular Orbitals, Geology and Glaciology, Factor Analysis, Vibration Analysis, Eigenfaces, Tensor of Inertia, Stress Tensor, Eigenvalues of a Graph) All of Quantum Mechanics is based on the notion of eigenvectors and eigenvalues. Observables are represented by Hermitian operators Q, their determinate states are eigenvectors of Q, and a measurement of the observable can only yield an eigenvalue of the corresponding operator Q. If you measure an observable in the state $\psi$ of a system and find as result the eigenvalue $a$, the state of the system just after the measurement will be the normed projection of $\psi$ onto the eigenvector associated to $a$. And so on and so forth. 
Of course Quantum Physics is not mathematically trivial: the arena is infinite-dimensional Hilbert space (or more complicated functional-analytic structures like Gelfand triples), operators are not bounded, etc. However, in the extremely fast growing field of Quantum Computing the algebra is mostly limited to finite-dimensional spaces and their operators. Finally, let me mention that Frank Wilczek, a winner of the 2004 Nobel Prize in Physics, has interestingly reminisced that as a student he found Quantum Mechanics easier than Classical Mechanics because of its nice axiomatization alluded to above. For visual appeal, you should look into the area of pendulums. There is a good demonstration with swinging bottles, I recall, and this does depend on eigenvalues that are nearly equal. Do a Web search on "coupled pendulums". Principal Component Analysis is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences. It is very difficult to visualize data in high-dimensional space, but PCA can be used there to analyze the data. From the data set a covariance matrix is formed, and then the eigenvalues and eigenvectors of that covariance matrix are found. These eigenvalues and eigenvectors can then be compared to figure out the contribution of a particular feature to the data set. Thus PCA can be successfully applied to reduce the dimension of the data. In telecommunications the so-called "beam-forming" algorithm in the case of multiple antennas requires calculation of eigenvectors. I think the book Spectra of Graphs: Theory and Applications by Dragoš Cvetković, Michael Doob and Horst Sachs is a very good source for practical applications of eigenvalues and eigenvectors. 
In communication theory, coding theory and cryptography, the minimum distance of codes is a very important parameter in decoding, and is also very important in coding-based cryptography (for example the McEliece cryptosystem). It is interesting that the second-largest eigenvalue of a graph related to a code can determine a good lower bound for the minimum distance of the code. Another interesting application is rigid body rotation theory. No matter how complicated an object looks, there's always (at least) a set of three mutually orthogonal directions around which it can rotate perfectly without precession. Maybe not something you can base a whole lecture on, but it's a nice remark.
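Several of the answers above (PageRank, PCA, dynamical systems) reduce in practice to finding a dominant eigenvector, which power iteration computes in a few lines. The 3-page link matrix below is invented purely for illustration:

```python
# Power iteration: repeatedly apply the matrix and renormalise; the iterate
# converges to the eigenvector of the largest-magnitude eigenvalue, which for
# a column-stochastic link matrix is the stationary "ranking" vector.
def power_iteration(matrix, steps=100):
    n = len(matrix)
    v = [1.0 / n] * n                    # start from the uniform distribution
    for _ in range(steps):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)                   # entries stay positive: L1 norm suffices
        v = [x / total for x in w]
    return v

# Toy web of 3 pages: column j lists where page j's links point, equal weight.
# Page 1 links to pages 2 and 3, page 2 links to page 3, page 3 links to page 1.
links = [[0.0, 0.0, 1.0],
         [0.5, 0.0, 0.0],
         [0.5, 1.0, 0.0]]
print(power_iteration(links))   # converges to [0.4, 0.2, 0.4]
```

Page 3 outranks page 2 because it collects links from both other pages; that qualitative ordering, not the exact numbers, is the point of the PageRank answer.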
I'm an aspiring physicist who wants to self-study some Quantum Physics. My thirst for knowledge is unquenchable and I cannot wait 2 more years until I get my first quantum physics class in university, so I want to start with a self-study. I am enrolled in a grammar school and the most gifted among the gifted (not my description, mind you, I hate coming off as cocky, sorry) are enrolled in a special 'project'. We are allowed to take 3 school hours a week off in order to work on a project, which can be about anything you want, from music to mathematics. On the 4th of April we have to present our projects. Last year an acquaintance of mine did it about university level mathematics, so I thought, why not do it about university level physics? It is now the 3rd of October so I have half a year. My question is, where can I conduct a self-study of quantum physics, starting from scratch? And is it possible for me to be able to use and understand the Schrödinger equation by April? What are good books, sites, etc. that can help me? My goal is to have a working knowledge of BASIC quantum physics and I would like to understand and be able to use the Schrödinger equation. Is this possible? What is needed for these goals? Do you have any experience with linear algebra, calculus or differential equations? –  DJBunk Oct 3 '12 at 15:58 None with linear algebra, but I do with calculus. –  kamal Oct 3 '12 at 16:03 I would say it depends on how ambitious you are in general learning a subject, but I really doubt 3 hours a week will do it. With some effort you might be able to learn some neat qualitative things, but I highly doubt you will be solving the Schrodinger eqn etc by April. I suggest doing something more specific like learning about things like the double slit experiment and the photoelectric effect. Those types of things you can start with Wikipedia to see if it interests you. Don't let me discourage you though! 
–  DJBunk Oct 3 '12 at 16:14
Of course, those 3 hours a week are only during school time; I expect to spend ~10 hours a week on this, some weeks more and some less, but at least 10 hours, that I know. I already have a working knowledge of the double-slit experiment and the photoelectric effect, so I think I am ready for the next step (although I am not certain what that might be). –  kamal Oct 3 '12 at 16:17
4 Answers
Just pick up Dirac's book "The Principles of Quantum Mechanics" and read it in conjunction with "The Feynman Lectures on Physics Vol. III". Don't waste time with linear algebra; the entire content of the undergraduate courses can be learned in half a day. Don't worry about the infinite-dimensional nature of the thing, just reduce all the spaces to finite dimensions. Also, be aware that "gifted" is a political label that has nothing to do with you; it's just a way for schools to segregate students by their future social class. It's not the analog of special needs, because the students in gifted classes are no different from the students in usual classes, except that they are given a slightly better education. Don't be fooled by a label into thinking you are somehow special; everyone is ordinary, including Einstein and Dirac. One has to do good work despite this, and those folks show it is possible by assiduous effort.
Trouble is, you're seeing things from the way you did things, and not how they can be done today using what's available. Have you seen Susskind's QM video lectures, for example? Don't you think watching videos while taking notes is more productive? I'm with you and Howard Gardner on "giftedness". –  Larry Harson Oct 4 '12 at 2:13
@LarryHarson: I agree that I'm out of date, but it cannot be overemphasized how important it is to read the classics. Dirac's book is timeless, it is lucid, it is brief, it starts with first principles, and its mathematics is self-contained.
Its path of development is unique and very illuminating, being independent of both Schrodinger and Bohr. Susskind's videos I am sure are excellent, but I have a soft spot for Dirac, who was one of my closest friends throughout adolescence. As for giftedness, it is worst for the "gifted", who are made cocky and incapable of the humility required for study. –  Ron Maimon Oct 4 '12 at 3:04
I wonder how you think that "Don't worry about the infinite-dimensional nature of the thing, just reduce all the spaces to finite dimensions" can be done without some understanding of linear algebra... –  Arnold Neumaier Oct 4 '12 at 15:05
@ArnoldNeumaier: Because I didn't study linear algebra and I read Dirac and had no trouble. –  Ron Maimon Oct 4 '12 at 16:09
@kamal: Yes, it's a waste of time, but it was always a waste of time; it was a high-class marker to know Latin (you must be living in some former European colony to have such an education; class-markers were very important under colonialism). High-class markers (King's English, Queen's accent, a Rolex, a high-status position) are always extremely time-consuming to acquire (or else they wouldn't work to mark high classes), and this is why science is always done by low-class people who hate Latin and dress like slobs. The ancient stuff can be useful for Marlowe/Shakespeare, that's about all. –  Ron Maimon Oct 27 '12 at 12:43
Without having understood matrices and their interpretation as linear mappings (operators), it is very difficult to get a reasonable understanding of quantum mechanics. So you should spend some time on elementary linear algebra. Wikipedia is not bad on this, so you could pick up most of it from there, to start with. (For basic math, Wikipedia is almost completely reliable, which is not the case for more specialized topics. In case of doubt, cross-check with other sources.) Today, the shortest road to quantum mechanics is probably quantum information theory.
For online introductory lecture notes see, e.g., The following lecture notes start from scratch (use Wikipedia for the math not explained there): This one might also be useful: In quantum information theory, all Hilbert spaces are finite-dimensional, wave functions are just complex vectors, and the Schroedinger equation is just a linear differential equation with constant coefficients. So you also need to learn a little bit about ordinary differential equations and how linear systems behave. Again, this can be picked up from Wikipedia. In more traditional quantum mechanics, the Schroedinger equation is a partial differential equation, and wave functions are complex functions depending on one or more position coordinates. On this level, you need to understand what partial derivatives are and have some knowledge of Fourier transforms. Again, this can be picked up from Wikipedia. Then you might start with You may also wish to try my online book http://lanl.arxiv.org/abs/0810.1019 It assumes some familiarity with linear algebra and partial derivatives, but little else. Some basic questions are also answered in my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html
+1 these are nice sources if you get stuck on linear algebra, but I never got stuck on the linear algebra; rather, the sticking points were the partial differential equations and the path integral. –  Ron Maimon Oct 4 '12 at 18:17
@RonMaimon: kamal doesn't want to understand the path integral by April. And one needs very little from PDE as long as one doesn't want to numerically solve a real problem. Thus if he has no trouble with the linear algebra and with Fourier transforms, he'll have no trouble at all! –  Arnold Neumaier Oct 4 '12 at 18:20
He should be more ambitious then--- the speed with which one can self-study has increased tenfold in the last decade.
–  Ron Maimon Oct 4 '12 at 18:21
@RonMaimon What would you suggest would be a good goal for me to set? You seem like a very informed man and I would like to ask for your personal advice. Of course I am also busy with sports, and I'm starting to learn LaTeX, so I'd say I spend 10 hours a week on this. –  kamal Oct 27 '12 at 12:18
@kamal: The only goal is to understand what has been done and push it forward, like everyone else tries to do. For this, you can follow a sequence more or less like Dirac/Feynman/Onsager/Landau/Gell-Mann/Anderson/Mandelstam/Polyakov/Parisi/'t Hooft/Scherk/Schwarz/Susskind/Witten (with about two dozen more authors I left out, sorry). I gave a simple but flashy thing which can be tackled after understanding basic QM here: physics.stackexchange.com/questions/41780/… (your question). Maybe read Nielsen and Chuang, learn complexity classes. –  Ron Maimon Oct 27 '12 at 12:31
You can watch videos from here and lectures from here (first two at least).
If you want to understand quantum physics, you have to understand Fourier series and Fourier transforms. The best introductory text ever is the book Who Is Fourier?. Do not be fooled by its cartoonish appearance; this is a serious book, as can be demonstrated by the fact that the name at the top of the list of advisers is Yoichiro Nambu, the 2008 Nobel Prize co-winner. Then I would work to gain an understanding of the Heat Equation. The Schrodinger equation can be described as the quantum version of the heat equation (except that what is diffusing is probability). Fourier developed the Fourier series in order to solve the question of how heat diffuses in a material. If you understand these things, you can understand quantum mechanics within a few months.
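The heat-equation route described above can be made concrete in a few lines. The sketch below is an editorial illustration, not taken from the answer; the grid size, domain length and diffusion constant are arbitrary made-up values. It evolves a temperature bump under the 1D heat equation u_t = D u_xx on a periodic grid the way Fourier intended: each Fourier mode decays independently as exp(-D k^2 t).

```python
import numpy as np

# 1D heat equation u_t = D u_xx on a periodic grid.
# N (grid points), L (domain length) and D (diffusion constant)
# are arbitrary illustration values.
N, L, D = 256, 20.0, 0.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers of the modes

u0 = np.exp(-x**2)  # initial temperature bump (a Gaussian)

def evolve(u, t):
    """Exact solution at time t: decay each Fourier mode by exp(-D k^2 t)."""
    return np.fft.ifft(np.fft.fft(u) * np.exp(-D * k**2 * t)).real

u1 = evolve(u0, 2.0)

# The bump spreads and its peak drops, while the total heat is conserved
# (the k = 0 mode is untouched by the decay factor).
print(u1.max(), u0.sum() / u1.sum())
```

Replacing the real decay factor `exp(-D * k**2 * t)` by the unitary phase `exp(-1j * k**2 * t / 2)` (units hbar = m = 1) turns the same code into the free-particle Schrödinger propagator, which is the precise sense in which the Schrödinger equation is a "quantum heat equation" with a complex diffusion constant.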
For Fourier analysis, Koerner is a great source, with both accurate historical material and fascinating applications, including primes in arithmetic progression and an alternate RW proof of Picard's theorem: amazon.com/Fourier-Analysis-T-246-rner/dp/0521389917. I didn't read the cartoon book, but I doubt it has the same depth as Koerner, which is one of the great pedagogical mathematics books, along with Davenport's number theory. These were thankfully used by the mathematics professors I had as an undergraduate, and they were very good folks. –  Ron Maimon Oct 4 '12 at 18:19
@RonMaimon Thanks, I will see if I can pick up a copy; it looks pretty cool from the excerpts on Amazon. –  user11547 Oct 4 '12 at 18:30
@Hal Swyers thank you for giving insight into the importance of the heat equation in understanding the Schrodinger equation. I wish I could get a free e-copy of this book "Who Is Fourier?"; otherwise I will try buying it. –  baalkikhaal Oct 27 '12 at 10:33
I am stuck on a QM homework problem. The setup is this (figure omitted): the potential in the left and rightmost regions is $0$, while the potential in the center region is $V_0$, and the wavefunction vanishes when $|x|>b+a/2$. I'm asked to write the Schrödinger equation for each region, find its solution, set up the BCs, and obtain the transcendental equations for the eigenvalues. Where I'm at: I understand the infinite potential well easily, and I have done a free particle going over a finite barrier before (which I understood less well, but I can deal with it).
• The problem asks me to make use of "a symmetry" in the problem, which is a vague hint. Are they trying to get me to make $\psi$ an even function?
• I am supposed to find the condition for there to be one and only one bound state for $E<V_0$. How do I go about that?
2 Answers
You seem to have trouble understanding the basic approach. Actually, there is a systematic way to solve the Schrödinger equation for piecewise-constant potentials. Maybe this will give you some basic idea of how to solve your problem: Let the potential be given by $$V(z) = \begin{cases} \infty & z < z_1 \\ V_1 & z_1 \le z < z_2 \\ V_2 & z_2 \le z < z_3 \\ \dots \end{cases}$$
• For the above potential the wavefunction for energy eigenvalue $E_n$ is given by $$\Psi_n(z) = \begin{cases} 0 & z < z_1 \\ A_1\exp(-i k_1 z) + B_1\exp(+i k_1 z) & z_1 \le z < z_2 \\ A_2\exp(-i k_2 z) + B_2\exp(+i k_2 z) & z_2 \le z < z_3 \\ \dots \end{cases}$$ with $k_i = \frac{2\pi}{h} \sqrt{2 m (E_n-V_i)}$ and some (yet to be determined) constants $A_i$ and $B_i$. This is easily verified by plugging in. (In fact, each "segment" is the solution to the Schrödinger equation with a constant potential.) Note that the $k_i$ can be real or imaginary, so the wavefunction in the respective segment is either sinusoidal or exponential.
• As required by physics, the wavefunction must be continuous and continuously differentiable everywhere. Hence the constants $A_i$ and $B_i$ must be chosen so that this is fulfilled at each point where it could possibly be violated (i.e. the points $z_i$).
• The above results in a linear equation system for the $A_i$ and $B_i$. This equation system now only contains the energy $E_n$ as the remaining unknown. If you do it correctly, the equation system contains as many unknowns as equations.
• Now you compute the determinant of the equation system and set it to zero to find the $E_n$ values for which it is solvable. This is the transcendental equation for the eigenvalues. In your case this equation has infinitely many discrete solutions $E_n$ (each solution denoted by the running index $n$). For each $E_n$ there are sets of $A_i$ and $B_i$ (which solve the equation system) which give you the wavefunction. In case there is more than one linearly independent set of $A_i$ and $B_i$, you have more than one wavefunction for the same eigenvalue $E_n$. In that case the state is degenerate. (You have degenerate states in your problem!)
Regarding symmetry: The wavefunctions do not need to have the same symmetry as the potential. Of course, if you have a solution wavefunction, then the mirrored wavefunction must be a solution as well (if the potential is symmetric, as in your case). It needs to belong to the same energy eigenvalue.
Regarding the single bound state: Once you have calculated the $E_n$ you will see that there are conditions where $E_1 < V_0$ and $E_2 > V_0$ ($E_2$ being the second-lowest eigenvalue). This depends on the geometry, i.e. the widths of your barrier and well. Generally speaking, the energy states have wider spacing if the well is smaller. So probably the single-bound-state condition will display itself as a range specification for $a$ and $b$.
Very good, thanks. Quite helpful.
–  Alexander Nikolas Gruber Oct 15 '12 at 23:37
The parity operator commutes with the Hamiltonian because of the symmetry in your potential. This means that the eigenstates of the Hamiltonian can be chosen to be eigenstates of the parity operator. Therefore, the only eigenstate solutions you need to consider are ones with even or odd parity. This fact will allow you to simplify the process of applying the boundary conditions mentioned by Andreas, as you can immediately conclude several things regarding the unknown coefficients.
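The determinant recipe in the accepted answer can be tried out numerically. The sketch below is an editorial illustration, not from the thread, and uses a simpler geometry than the homework: a plain finite square well with $V=0$ for $|x|<a$ and $V=V_0$ outside, in units $\hbar=m=1$, with made-up values $V_0=a=1$. It builds the boundary-condition matrix at $x=\pm a$, scans its determinant over $E$, and bisects each sign change to get the bound-state energies, exactly the procedure described above.

```python
import numpy as np

# Finite square well: V(x) = 0 for |x| < a, V(x) = V0 for |x| > a.
# Units hbar = m = 1; V0 and a are made-up illustration values.
V0, a = 1.0, 1.0

def det_bc(E):
    """Determinant of the matching conditions at x = -a and x = +a.

    Unknown coefficients (B, C, D, F):
      x < -a : psi = B exp(+kap x)           (decays to the left)
      |x| < a: psi = C cos(k x) + D sin(k x)
      x > +a : psi = F exp(-kap x)           (decays to the right)
    Continuity of psi and psi' at both walls gives 4 linear equations;
    bound states are the energies where the determinant vanishes.
    """
    k = np.sqrt(2 * E)            # wavenumber inside the well
    kap = np.sqrt(2 * (V0 - E))   # decay constant outside
    e = np.exp(-kap * a)
    M = np.array([
        [e,       -np.cos(k * a),      np.sin(k * a),     0.0],      # psi  at -a
        [kap * e, -k * np.sin(k * a), -k * np.cos(k * a), 0.0],      # psi' at -a
        [0.0,      np.cos(k * a),      np.sin(k * a),    -e],        # psi  at +a
        [0.0,     -k * np.sin(k * a),  k * np.cos(k * a), kap * e],  # psi' at +a
    ])
    return np.linalg.det(M)

# Scan (0, V0) for sign changes of the determinant, then bisect each one.
Es = np.linspace(1e-4, V0 - 1e-4, 400)
dets = [det_bc(E) for E in Es]
bound_states = []
for E1, E2, d1, d2 in zip(Es[:-1], Es[1:], dets[:-1], dets[1:]):
    if d1 * d2 < 0:
        lo, hi = E1, E2
        for _ in range(60):  # plain bisection on the sign change
            mid = 0.5 * (lo + hi)
            if det_bc(lo) * det_bc(mid) <= 0:
                hi = mid
            else:
                lo = mid
        bound_states.append(0.5 * (lo + hi))

print(bound_states)  # for V0 = a = 1: a single bound state near E ~ 0.4
```

For these parameters the well parameter $z_0 = a\sqrt{2mV_0}/\hbar = \sqrt{2}$ lies below $\pi/2$'s odd-state threshold, so only one (even) bound state appears, consistent with the parity argument in the second answer.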
Our Mathematical Universe
Max Tegmark has a new book out, entitled Our Mathematical Universe, which is getting a lot of attention. I’ve written a review of the book for the Wall Street Journal, which is now available (although now behind a paywall; if you’re not a subscriber, you can try here). There’s also an old blog posting here about the same ideas. Tegmark’s career is a rather unusual story, mixing reputable science with an increasingly strong taste for grandiose nonsense. In this book he indulges his inner crank, describing in detail an utterly empty vision of the “ultimate nature of reality.” What’s perhaps most remarkable about the book is the respectful reception it seems to be getting; see reviews here, here, here and here. The Financial Times review credits Tegmark as the “academic celebrity” behind the turn of physics to the multiverse:
As recently as the 1990s, most scientists regarded the idea of multiple universes as wild speculation too far out on the fringe to be worth serious discussion. Indeed, in 1998, Max Tegmark, then an up-and-coming young cosmologist at Princeton, received an email from a senior colleague warning him off multiverse research: “Your crackpot papers are not helping you,” it said. Needless to say, Tegmark persisted in exploring the multiverse as a window on “the ultimate nature of reality”, while making sure also to work on subjects in mainstream cosmology as camouflage for his real enthusiasm. Today multiple universes are scientifically respectable, thanks to the work of Tegmark as much as anyone. Now a physics professor at Massachusetts Institute of Technology, he presents his multiverse work to the public in Our Mathematical Universe.
The New Scientist is the comparative voice of reason, with the review there noting that “there does seem to be something a little questionable with this vast multiplication of multiverses”.
The book explains Tegmark’s categorization of multiverse scenarios in terms of “Level”, with Level I just lots of unobservable extensions of what we see, with the same physics, an uncontroversial notion. Level III is the “many-worlds” interpretation of quantum mechanics, which again sticks to our known laws of physics. Level II is where conventional notions of science get left behind, with different physics in other unobservable parts of the universe. This is what has become quite popular over the past dozen years, as an excuse for the failure of string theory unification, and it’s what I rant about all too often here. Tegmark’s innovation is to postulate a new, even more extravagant, “Level IV” multiverse. With the string landscape, you explain any observed physical law as a random solution of the equations of M-theory (whatever they might be…). Tegmark’s idea is to take the same non-explanation explanation, and apply it to explain the equations of M-theory. According to him, all mathematical structures exist, and the equations of M-theory or whatever else governs Level II are just some random mathematical structure, complicated enough to provide something for us to live in. Yes, this really is as spectacularly empty an idea as it seems. Tegmark likes to claim that it has the virtue of no free parameters.
In any multiverse-promoting book, one should look for the part where the author explains what their scenario implies about physics. At Level II, Susskind’s book The Cosmic Landscape could come up with only one bit of information in terms of predictions (the sign of the spatial curvature), and Steve Hsu soon argued that even that one bit isn’t there. There’s only a small part of Tegmark’s book that deals with the testability issue: the end of Chapter 12. His summary of Chapter 12 claims that he has shown:
The Mathematical Universe Hypothesis is in principle testable and falsifiable.
His claim about falsifiability seems to be based on the last page of the chapter, about “The Mathematical Regularity Prediction”, which is that:
physics research will uncover further mathematical regularities in nature.
This is a prediction not of the Level IV multiverse, but a “prediction” of the idea that our physical laws are based on mathematics. I suppose it’s conceivable that the LHC will discover that at scales above 1 TeV, the only way to understand what we find is not through laws described by mathematics, but, say, by the emotional states of the experimenters. In any case, this isn’t a prediction of Level IV. On page 354 there is a paragraph explaining not a Level IV prediction, but the possibility of a Level IV prediction. The idea seems to be that if your Level II theory turns out to have the right properties, you might be able to claim that what you see is not just fine-tuned in the parameters of the Level II theory, but also fine-tuned in the space of all mathematical structures. I think an accurate way of characterizing this is that Tegmark is assuming something that has no reason to be true, then invoking something nonsensical (a measure on the space of all mathematical structures). He ends the argument and the paragraph though with:
In other words, while we currently lack direct observational support for the Level IV multiverse, it’s possible that we may get some in the future.
This is pretty much absurd, but in any case, note the standard linguistic trick here: what we’re missing is only “direct” observational support, implying that there’s plenty of “indirect” observational support for the Level IV multiverse.
The interesting question is why anyone would possibly take this seriously. Tegmark first came up with this in 1997, putting on the arXiv this preprint.
In this interview, Tegmark explains how three journals rejected the paper, but with John Wheeler’s intervention he managed to get it published in a fourth (Annals of Physics, just before the period it published the (in)famous Bogdanov paper). He also explains that he was careful to do this just after he got a new postdoc (at the IAS), figuring that by the time he had to apply for another job, it would not be in prominent position on his CV. One answer to the question is Tegmark’s talent as an impresario of physics and devotion to making a splash. Before publishing his first paper, he changed his name from Shapiro to Tegmark (his mother’s name), figuring that there were too many Shapiros in physics for him to get attention with that name, whereas “Tegmark” was much more unusual. In his book he describes his method for posting preprints on the arXiv, before he has finished writing them, with the timing set to get pole position on the day’s listing. Unfortunately there’s very little in the book about his biggest success in this area, getting the Templeton Foundation to give him and Anthony Aguirre nearly $9 million for a “Foundational Questions Institute” (FQXi). Having cash to distribute on this scale has something to do with why Tegmark’s multiverse ideas have gotten so much attention, and why some physicists are respectfully reviewing the book. A very odd aspect of this whole story is that while Tegmark’s big claim is that Math=Physics, he seems to have little actual interest in mathematics and what it really is as an intellectual subject. There are no mathematicians among those thanked in the acknowledgements, and while “mathematical structures” are invoked in the book as the basis of everything, there’s little to no discussion of the mathematical structures that modern mathematicians find interesting (although the idea of “symmetries” gets a mention). 
A figure on page 320 gives a graph of mathematical structures which a commenter on mathoverflow calls “truly bizarre” (see here). Perhaps the explanation of all this is somehow Freudian, since Tegmark’s father is the mathematician Harold Shapiro.
The book ends with a plea for scientists to get organized to fight things like fringe religious groups concerned that questioning their pseudo-scientific claims would erode their power, and his proposal is that
To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically: we need new science-advocacy organizations that use all the same scientific marketing and fund-raising tools as the anti-scientific coalition employ. We’ll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
There’s an obvious problem here, since Tegmark’s idea of “what a scientific concept is” appears to be rather different from the one I think most scientists have, but he’s going to be the one leading the media campaign. As for the “scientific lifestyle”, this may be unfair, but while I was reading this section of the book my twitter feed was full of pictures from an FQXi-sponsored conference discussing Boltzmann brains and the like on a private resort beach on an island off Puerto Rico. Is that the “scientific lifestyle” Tegmark is referring to? Who really is the fringe group making pseudo-scientific claims here?
Multiverse mania goes way back, with Barrow and Tipler writing The Anthropic Cosmological Principle nearly 30 years ago.
The string theory landscape has led to an explosion of promotional multiverse books over the past decade, for instance • Parallel Worlds, Kaku 2004 • The cosmic landscape, Susskind, 2005 • Many worlds in one, Vilenkin, 2006 • The Goldilocks enigma, Davies, 2006 • In search of the Multiverse, Gribbin, 2009 • From eternity to here, Carroll, 2010 • The grand design, Hawking, 2010 • The hidden reality, Greene, 2011 • Edge of the universe, Halpern, 2012 Watching these come out, I’ve always wondered: where do they go from here? Tegmark is one sort of answer to that. Later this month, Columbia University Press will publish Worlds Without End: The Many Lives of the Multiverse, which at least is written by someone with the proper training for this (a theologian, Mary-Jane Rubenstein). I’m still though left without an answer to the question of why the scientific community tolerates if not encourages all this. Why does Nature review this kind of thing favorably? Why does this book come with a blurb from Edward Witten? I’m mystified. One ray of hope is philosopher Massimo Pigliucci, whose blog entry about this is Mathematical Universe? I Ain’t Convinced. For more from Tegmark, see this excerpt at Scientific American, an excerpt at Discover, and this video, this article and interview at Nautilus. There’s also this at Huffington Post, and a Facebook page. After the Level IV multiverse, it’s hard to see where Tegmark can go next. Maybe the answer is his very new Consciousness as a State of Matter, discussed here. Taking a quick look at it, the math looks quite straightforward, his claims it has something to do with consciousness much less so. Based on my time spent with “Our Mathematical Universe”, I’ll leave this to others to look into… Update: Scott Aaronson has a short comment here. This entry was posted in Book Reviews, Multiverse Mania. Bookmark the permalink. 125 Responses to Our Mathematical Universe 1. 
Bernhard says:
Max Tegmark, One problem I see with the multiverse (and I understand that also includes your Level I) as just another prediction of a theory is that it can be used to justify virtually anything. Suppose another intelligent race (in this universe…) doesn’t yet know Maxwell’s theory, has hit an intellectual wall, and is trying to explain the value of the speed of light, which they can measure precisely. A multiverse “theory” can well be used to “explain” it (they happen to live in the universe where the speed of light is correct). In this sense I am not sure how much the multiverse is really a “prediction” of inflation or a symptom of its weakness in a certain domain. While scientifically valuable, the theory most likely cannot be extrapolated to explain, say, the cosmological constant, at least not in a non-environmental way. PS: I really don’t want to prolong this already confusing discussion. If Max is willing to answer, it would be interesting to read what he thinks – if not, for the rest, please just ignore my comment.
2. Bernhard says:
* doesn’t yet know Maxwell’s theory, has hit an intellectual wall *
3. Peter Woit says:
Sure, a “Level I” multiverse can be an implication of an inflationary theory that you can test, and if you can get enough evidence for that theory you would have evidence for that sort of multiverse. I’m a bit skeptical that you really can get enough info about the inflaton, but sure, this is science.
About the difference between the review and the blog posting: the blog posting was aimed at a very different audience, people who regularly read this blog. The aim of the review was to give an accurate picture of what’s in your book, provide some context, explain the main claim you are making, and also explain why it’s empty. I don’t think anyone reading it will miss my argument that this is a book making grandiose but empty claims.
In the blog entry, there’s a part devoted to discussing in detail your claims about testability, there because it wouldn’t fit in the review. Besides that though, the blog entry is really about a different question than the review: why is an even emptier argument than the string theory landscape getting positive attention from the public and some scientists, part of an effective campaign that has already created a highly disturbing situation in theoretical physics? The background for this I’ve written about here ad nauseam, and that context should make it clear to my readers why I’m choosing to begin with a simple and blunt characterization rather than a more polite and indirect one. I don’t think you can really disagree that claims to have figured out the ultimate nature of reality are “grandiose”, and such claims with nothing solid behind them are the province of the crank (note that I think most every theorist feels the allure of this kind of thing; we all have our own inner crank…). The non-scientific material is there because it’s in your book, and it’s relevant to the main topic of the blog posting, which is non-scientific: why is something that traditionally would be considered crackpot science now making inroads into conventional science? How is that being done? You’re unusual among talented, successful scientists in also having a great talent for getting public attention. You’ve had significant influence in getting people to take seriously highly dubious material about the multiverse. I’m fascinated by how that has happened, although of course my main concern is how to make it stop…
4. Bernhard says:
Max Tegmark, I realize my argument does not really hold for Level I, but then you also lose the ability to “explain” things like the cosmological constant. In any case, it would be interesting to read your thoughts.
5.
Igor Khavkine says:
I’m puzzled by the need to invoke inflation to give an example of a theory where there exist regions of a universe inaccessible to certain observers. That can only muddy the waters, given that many theoretical as well as observational aspects of inflation are yet to be fully fleshed out. On the other hand, already in special relativity, for any single observer (meaning an event on a worldline, or even the whole half of the worldline leading up to an event), there exist spacelike separated regions. In fact, the same thing happens in any theory with a bounded speed of propagation of disturbances (information, or whatever one might call it). All other phenomena like event horizons, Cauchy horizons, cosmological horizons are extensions of this basic property. So, it seems to me that whatever philosophical difficulties are raised by inflation have already been raised by special relativity.
6. Joel Rice says:
I would be more inclined to look at what the Standard Model does not explain, namely why there are 3 generations of fermions, rather than indulging in rampant ‘modal realism’ or Platonism on steroids. Perhaps getting an answer to that would point to one mathematical structure ‘all the way down’ – to define, rather than merely describe.
7. Max Tegmark says:
Thanks, Peter, for these helpful clarifications! I’ve long viewed you as someone who courageously stands up for a controversial view because you feel that it is correct, even though you get a lot of flak from the physics community for it. This is something I have very much identified with over the years, since as you know, many of my views on physics have been just as controversial as yours, and as a result, both you and I have been called crackpots. I hope you don’t find this offensive, but I must confess that I find your recent postings disturbingly unscientific.
You’re asking the interesting question “Why is something that traditionally would be considered crackpot science now making inroads into conventional science?”, so why aren’t you considering all logically possible answers, including the possibility that they’re making inroads because the supporting arguments are actually correct and new supporting evidence has come to light? After all, many currently accepted theories (e.g. relativity theory) were also considered crackpot science by some contemporary pundits. How can you be so certain that it’s a good idea to “work to stop this” if you’re not willing to even consider this possibility? Instead, you appear to dismiss this possibility from the get-go and focus only on other explanations such as the science community having become dysfunctional, me personally having dubious motives, etc. I also find your posting style disturbingly unscientific, and can’t help feel that you’re applying a double-standard: you keep writing interesting, respectful and carefully balanced replies to me personally about how the Level I multiverse is a valid scientific discussion topic, etc., while at the same time writing pithy sound-bites to others suggesting that everything multiverse-related is unscientific nonsense. To me, one of the core principles of scientific integrity is to only say things that you’re willing to stand by. For example, when I write an anonymous referee report, I like to pretend that I’m going to sign my name under it. My main goal in this interesting conversation with you is to identify what our common ground is (a lot, it seems!) and where we disagree. My point of view is that we don’t know whether any parallel universes exist or not, but that it’s interesting to explore the possibilities. In contrast, you appear to feel that this is uninteresting, and I totally respect that viewpoint – so far, so good. 
Moving on to scientific claims, I make several implication claims in the book of the form “if physics theory X is correct, then multiverse level Y exists”. You still haven’t told me about any physics claims of mine that you feel are incorrect, so unless you tell me otherwise, I’m going to assume you agree with these too. Please let me know if this is a fair characterization of your current views:
* Level I: you agree that it’s a legitimate scientific topic
* Level II: you reject it because of your misgivings about the string landscape
* Level III: we haven’t yet discussed this. Do you agree that unitary (collapse-free) quantum mechanics is a (possibly incorrect) scientific theory that implies Level III?
* Level IV: you find this meaningless, but haven’t identified which of my arguments you consider fallacious.
Unless I’ve misunderstood you (and please correct me if I have!), this means that out of the 13 chapters in my book (http://mathematicaluniverse.org – CONTENTS tab), only four (6, 10-12) contain ideas that you feel are a waste of physicists’ time, and your critique of them boils down mainly to a lack of interest, not to me making incorrect statements in them. I’m very much looking forward to hearing what you think!
8. Peter Woit says:
First, about the questions about my views. Sure, Level I multiverse theories can be part of legitimate science. In practice though, they are endlessly being abused by people who invoke evidence for them to argue for the string landscape. As Igor Khavkine points out above, the idea that the universe extends indefinitely to unobservable regions, with the same physics, isn’t anything by itself new or revolutionary. My views on Level II are well-known. About interpretations of QM in general, my view is that the basic laws of QM reflect a very deep mathematical insight into the way the universe works.
At a fundamental level, I think that what we still don’t understand very well is how classical mechanics emerges from this (with things like “quantum Darwinism” being relevant ideas). “Many worlds” seems to me one legitimate way to characterize the overall picture one is trying to understand. It doesn’t though seem very helpful in getting at what we don’t understand. About the claims in your book relating Level I and Level III, I’ve made no comment, simply because I don’t understand them, and the whole idea just seems too implausible to take seriously. I am committed to only commenting on that which I understand well enough to trust my arguments. One aspect of the culture of mathematics which I value is an insistence on keeping straight what you understand and what you don’t, and always knowing where that boundary is. On Level IV, yes, I think the arguments you’re making often involve meaningless, ill-defined words and sentences, so one can’t find a “fallacy”. The only way to pin down such arguments is to look for non-trivial implications of them. I tried to be careful in the blog entry to identify the parts of your book that claimed such implications and to explain why I thought they were not justified. On the other issues you raise, let me make it clear that I don’t think your motives are “dubious”. I’m sure you believe what you are arguing for, and people have the right to do what they can to make the most effective arguments for what they believe in. On the other hand, yes, I do think parts of the theoretical physics community have become dysfunctional, and my arguments about this are not based on dismissing thoughtlessly the reasons people are doing what they are doing, but upon paying close attention to the issues.
By the way, at this point I don’t think the claim that the string theory landscape is dysfunctional science is a controversial minority view of mine; I’d guess that it’s the majority view in the physics community (maybe even within the “string theory” community, depending how you define that). As for the claim that I often make crude sound-bite arguments about these issues, I’ll just say that I do the best I can to make accurate statements about what I’m well aware are complicated issues, given the constraints imposed by the format I’m writing in. I was happy with how the WSJ thing came out, partly because I ended up with enough space to make a serious argument and important distinctions. The first draft of that piece was 900 words, which only allowed a serious argument at the cost of so little background that readers got lost. The editor decided to allow a longer (1300 words) piece, which was enough to both give the bare bones of an argument and provide some background. I’m often, though, in the position of writing something much more constrained by limits of space (or my time), trying to say something in much less than 900 words. This is never going to capture all aspects of the question at hand. About your book in general, sure, large parts of it are perfectly reasonable discussions of a variety of topics. The problem though is that, from the title on, it is structured as an argument for a point of view that I think is seriously misguided, one that is not just a “waste of time”, but is likely to further promote the most problematic trends in this subject. I’ve tried to make clear exactly what my arguments against this point of view are. As an author of a book myself, I’m well aware that it’s frustrating when people discuss only one aspect of it, ignoring all sorts of things one put into it. It’s true that I’m just ignoring a lot of what’s in the book, just because I have nothing very interesting to say about it.
To give one example, when I first saw that you were going to discuss the “singularity”, my knee-jerk reaction was “Oh no, another Kurzweilian technological optimist…”, but when I read that part of the book I saw that I was wrong; you have a much more interesting take on that subject. So, you see, sometimes I can say something positive…

9. Roger says:

The analogy to relativity is weak. Relativity always had solid experimental support, with Michelson-Morley in 1887 and relativistic mass experiments starting in 1902. There is no experimental support for the string landscape, many-worlds, or the math multiverse, nor is there likely to be any in the foreseeable future. Max, you have labeled your chapters as being “mainstream” or “controversial”. You should not be surprised that the criticism of your book has been centered on the chapters that you labeled controversial.

10. srp says:

Since Peter has stated that the Level I multiverse seems methodologically OK and even agreed that it follows from special relativity, do we then have to assent to the argument that there must be a huge number of slightly different copies of ourselves out there, etc.? In other words, is the Borgesian stuff a straightforward logical deduction from the existence of the Level I multiverse or not? And if it is, should we just shrug it off on the basis of its direct unobservability?

11. Peter Woit says:

I don’t think we have significant evidence for an infinitely extended universe (or significant evidence against it); I just agree that this is a question one can sensibly hope to address by studying various cosmological models and comparing them to experiment. Personally, I’ve never been too interested in the paradoxes that come up when you assume infinite extension, but can see why some people are. To each his or her own…

12. jd says:

Mr. Tegmark: Since I and many of my colleagues have studied and arrived at many positions similar to those of Mr.
Woit, the conclusion is that you consider our views “controversial” and that some in the community would believe us to be “crackpots.” Perhaps even you? You do not help your cause, Sir, with such a stand. And in truth you cannot support your position from what is being discussed in the community; I know many in the field of HEP, I have many contacts, I am at a large national lab. Also, I know what is being written. I further resent the insult you make to my intelligence by assuming that I am so naive as to swallow your illogical statements. And your weak attempt to equate yourself with Einstein is obvious when you also characterized general relativity as crackpot to some. After all, Hilbert also published the field equations, and I sincerely doubt that any reputable scientist would consider both of them crackpots. Some here have said you are a nice guy. I have never met you and I wonder what motivates you. There is more that could be said. For example, “why aren’t you considering all logically possible answers?” To the best of my ability I am considering all my own answers and those I see in the literature. Given what supporting evidence there is, I find some answers to be nonsense and I will not waste my career on them. Life is too short and there is good work to be done. Enough already.

13. Peter Woit says:

I think Max is correct to claim there are some in the physics community who think my views on string theory are “crackpot”. Lubos Motl is the most prominent exponent of this view….

14. George Ellis says:

See “Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything” by Margaret Wertheim (Walker, 2011) [http://physicsonthefringe.com/] for an account of a quantum cosmology meeting at Santa Barbara where conventional scientific constraints on theories were thrown to the wind. The discussions were very similar in atmosphere to those at crank science meetings.

15.
ScentOfViolets says:

There is very little — if anything — new in Tegmark’s noodlings. Greg Egan famously used the ‘dust hypothesis’ in his Permutation City, and in the linked FAQ, Egan points out that his dust is almost identical to Moravec’s Simulation, Consciousness, Existence paper. The bottom line? ‘Math is everything’ is fun as a science fictional device, but there’s very little ‘there’ there. This will continue to be the case until there’s some way to tell whether theories of this type are wrong . . . and you can get an experimentalist to test for what happens in the requisite setup. Sorry, Max, but science is, above all, a very practical sort of enterprise. Metaphysics need not apply.

16. Mathematician says:

MUH? Meh!

17. OMF says:

I’d just like to say that it’s a little annoying to find that I have to rupture my hump making sure my calculations and research are rigorously fit enough to publish while others can actually make an entire career out of this stuff. tl;dr bitter-vet is bitter.

18. C Wright says:

“The Universe is made of Math” How might one begin to investigate such a claim? Let’s assume this “new” hypothesis is true and ponder a boundary condition for quick insight. One reasonable boundary condition might be at the onset of the “Big Bang.” We have T = 0, E >> 0, and, I suspect, more than a few other initial condition parameters (please feel free to mentally provide). What’s the status of mathematics at this juncture? If mathematics is non-existent at this point in time, but comes into being only after T = 0, then it cannot be our hypothetical constructor. Therefore, let’s assume some mathematics exists at or before T = 0. How much mathematics? We surely need enough to define all the universe’s initial conditions. This will take more than a little mathematics – certainly more than all that is known today. Restating then: at the beginning of the universe when T = 0, a lot of mathematics (maybe all of it) exists. And where does mathematics exist/reside?
Don’t tell me – there is no place in the universe just yet, so let’s define an arbitrary place for it. Will you go for the “Great Omnipotent Depository?” (Something sounds a bit familiar with this concept.) With all this mathematics existing before the universe began, and since the “universe is made of math,” I see we can also conclude that a virtual universe also exists at T = 0. WTF – from our original hypothesis, we can conclude that the universe virtually existed before the universe began. Hmm, I’m sensing more salesmanship than science. Thank you, but I’ll likely pass on this book.

19. Max Tegmark says:

Dear Peter: although I’m grateful for you answering more of my questions, I’m surprised that you didn’t answer the main one! I’m sorry if I didn’t ask it clearly enough – please let me ask it more explicitly. You’re asking the interesting question “Why is something that traditionally would be considered crackpot science now making inroads into conventional science?” There are of course many possible explanations, including

1) The physics community is becoming increasingly dysfunctional,
2) A “crank” and “impresario” named Max Tegmark with a “taste for grandiose nonsense” is corrupting the physics community,
3) Money from the John Templeton Foundation is corrupting the physics community,
4) They’re making inroads because the supporting arguments are correct and new supporting evidence has come to light.

Your posts above explore options 1), 2) and 3), but isn’t the scientific approach to explore all possibilities, including 4)? When I talk to physicists other than you, on both sides of the multiverse debate, they routinely mention three explanations in category 4:

a) Observations of the cosmic microwave background by the Planck satellite etc. have made some scientists take cosmological inflation more seriously, and inflation in turn generically predicts (according to the work of Vilenkin, Linde and others) a Level I multiverse.
b) Steven Weinberg’s use of the Level II multiverse to predict dark energy with roughly the correct density before it was observed (a discovery since awarded the Nobel Prize) has made some scientists take Level II more seriously.

c) Experimental demonstration that the collapse-free Schrödinger equation applies to ever larger quantum systems appears to have made some scientists take the Level III multiverse more seriously.

Is it really completely obvious that these people are all deluded and that none of these three developments have any bearing on your question? I can’t help feeling disturbed by similarities between your posts and the recent hate-mail I’ve been receiving from a Young-Earth Creationist: you both seem to start by assuming that your conclusion is true (“Earth is 6000 years old”/“Multiverse ideas are nonsense”), and simply avoid mentioning any evidence to the contrary. If I stop posting on your blog, it will be because your approach is too unscientific for my taste.

20. Tom says:

Paul Steinhardt has quite a distaste for the “anything goes” multiverse, e.g. read here: Does Steinhardt’s “Cyclic” hypothesis, an alternative to “conventional” inflationary Big Bang theory, somehow dispense with the Level 1 and other multiverses being discussed here?

21. Peter Woit says:

About the 3 strongest examples of “experimental evidence” for the multiverse you mention. First, “c”. This is kind of ridiculous. I’ve never heard of anyone expecting QM to fail for such large systems, so the evidence that it doesn’t can’t possibly have surprised anyone or changed their mind about anything. If it did fail, that would be a huge surprise and would change people’s attitudes dramatically. This is also irrelevant to any of my arguments since I don’t have anything in particular against many-worlds interpretations (although I also don’t think they get at the interesting questions).
On the other hand, if you had any evidence for your cosmological interpretation of QM, that would be different, but I saw none in your book.

About “a”. Again, I’m not arguing against Level I multiverses. On the question of eternal inflation, from what I can tell we still have little to no relevant experimental evidence, although I freely admit to not being an expert on this. Since you are one, here’s a question: later this year Planck will release B-mode polarization results. What does eternal inflation say about this? If Planck sees nothing, will that be evidence against inflation, so that people will start having less faith in an eternal inflation scenario and thus such a multiverse? If this is a subject with real connection to experiment, what’s the prediction here?

About “b”. Obviously I’m well aware of the Weinberg argument about the CC (it’s basically the only one anyone ever brings up for a Level II multiverse). Sure, I agree that that argument and the observed value of the CC have had an influence. I don’t happen to think it’s particularly strong evidence, but, sure, it’s at least something. No, I don’t argue that anyone interested in Level II multiverse theories is a fool with no reason to be thinking about this.

As for the three explanations you have me making for interest in the Level II multiverse (as opposed to your fourth claimed correct one, that it is due to increased support from experiment and better theoretical understanding), first of all, you’re ignoring the main one I am making: people who have a lot invested in string theory refusing to admit their theory has turned out to be an empty failure. Do you honestly claim this is not a major contributor to interest in the Level II multiverse? Of the three explanations you do assign to me, about Templeton I do think their money has had some effect; I don’t know how much.
Maybe increased interest in the multiverse in the last ten years was not affected at all by things they financed, like the 2003 “Universe or Multiverse” Stanford conference and the book that came out of it. I suspect they think their money has an effect or they wouldn’t have kept spending it. I don’t think it’s deniable that, at least in 2003, the NSF was not about to finance such a conference. In an alternate part of the multiverse where John Templeton died a pauper, I think it’s fair to suspect there might be slightly less interest in the multiverse. About your influence, again I’d find it hard to quantify, but it’s non-zero. Recall that no less a source than the Financial Times tells us “Today multiple universes are scientifically respectable, thanks to the work of Tegmark as much as anyone.” As for terminology, applying “impresario” to your FQXi activities and others seems to me rather accurate. To be accurate, I referred to you not as a “crank”, but as a scientist who, with this book and the Level IV business in general, was “indulging his inner crank”. Do you honestly have no idea what your colleagues think about this kind of thing? How many do you think believe the Level IV business is anything other than empty grandiose nonsense of a characteristically crank or crackpot sort? As for the comparison of me to a Young-Earth creationist spewing hate-mail, get a grip.

22. Neil says:

Thank you Peter and Max for an enlightening exchange. One comment on Templeton and the multiverse, if I may. As I understand it (maybe I am wrong), Templeton is interested in financing research that links science to god, or as they say in their mission statement, the “spiritual dimension.” The multiverse is used by materialists to explain fine tuning and other existential “mysteries” as an alternative to god. I don’t think Templeton has any great interest in promoting the multiverse.

23. Jusnem says:

This has been a very spirited and interesting debate.
I would like to propose a compromise which I believe both sides can agree on. The mathematical universe hypothesis is a religious principle. There seem to be two options to explain the existence of our universe in light of the fine tuning required for us to be here: (1) God made it that way, or (2) all mathematically possible structures exist. If there is a third option, then Mr. Woit should be able to identify a specific flaw in Mr. Tegmark’s reasoning. So far, I haven’t seen one. Mr. Tegmark’s proposal provides a logical foundation for the religion of atheism. For any physicist with faith in determinism (and how can we have physics without that faith?) Mr. Tegmark’s conclusion is unavoidable. With that said, I personally can’t imagine any scenario where this hypothesis would be falsifiable. Of course, that should not detract from its value for inspiration and clear thinking. Mr. Woit’s criticism is based not on a critique of Mr. Tegmark’s logic, but rather on a religious belief in God, free will, or a more traditional notion of our world. I don’t see how this opposition can be reconciled with the basic assumptions that external reality exists and that physics can describe it, but it too is based on sound religious notions. Of course, if this debate is really about who should get funding for what, then these comments don’t really add anything to the debate, so feel free to ignore them.

24. Dom says:

Max – to be fair, as much as I don’t like the Lubos Motl type of language, e.g. “crank”, I was clear that Peter was referring to particular ideas rather than you as a person. I don’t personally see the harm in putting forward any idea, however outlandish, as long as we do not give the lay person (or even the reasonably well-informed person) the idea that what we are saying is experimentally confirmed fact.
I have found what you have to say very interesting and, as someone elsewhere said, you are at your most impressive when you ignore the stuff other people would take personally (hard to do, easy to admire).

25. Peter Woit says:

Templeton’s agenda is more interesting and subtle than just linking science and religion, and promoting religion (although they do plenty of that). If you look at what they don’t support, they’re explicit about not supporting experiment, and they pretty much avoid serious mathematics. One could say they like to support “philosophy”, and their “philosophy of cosmology” initiative is part of this. I don’t think they have a side pro or con on the multiverse, but it definitely is a topic they like to support discussions of. All in all, they like to support ideas and work that the scientific community doesn’t normally support because it seems rather empty and unlikely to be fruitful. Some of this ends up supporting actually interesting work, some is just a waste of time, and, yes, going on about religion and science fits nicely into this. One way of describing the effect they’re going for (and to some extent achieving) is to move topics like “universe or multiverse” from the category where the consensus is “empty waste of time, best ignored” to “controversial” (i.e. worthy of debate).

26. Lee Smolin says:

Dear Peter and Max, I was going to stay out of this debate, having already in print two books that explain why anthropic multiverse cosmologies cannot possibly yield falsifiable predictions. These books, Life of the Cosmos and Time Reborn, also put forth an alternative program for cosmology that does yield falsifiable predictions, based on the hypothesis that laws evolve in time. But I don’t want to let pass Max’s claim that Weinberg’s prediction of the order of magnitude of the dark energy provides evidence for a multiverse.
Weinberg’s prediction happened to turn out roughly right, but the argument that that provides evidence for the multiverse is based on fallacious reasoning. I explained why on page 136 of Time Reborn, from which I quote: “One problem with that conclusion is that the critical value referred to is the one above which galaxies would not form if the cosmological constant were the only parameter that varied. But theories of the early universe have other parameters that can vary. If we vary some of those while we vary the cosmological constant, the argument loses its force. Let’s look at one case, in which we vary the size of the density fluctuations, which, as we discussed earlier in this chapter, determine how evenly the matter in the early universe was distributed. These are relevant because if they were bigger, the cosmological constant could be far above the critical value and galaxies would still form in the very dense regions created by the fluctuations. There is still a critical value for the cosmological constant, but it goes up as the size of the density fluctuations goes up. So you can rerun the argument, letting the cosmological constant and the fluctuation size both vary over the population of universes. Now you pull two numbers out of the hat for each universe, one for the cosmological constant, the second for the size of the density fluctuations. We choose these numbers randomly, within the range in which galaxies form. It turns out that the probability of randomly getting both numbers to be as small as they are observed to be is now down from 1 chance in 20 to a few parts in 100,000. The problem is that because we don’t observe any other universes, it is impossible to know which constants vary over the hypothetical multiverse. If we assume that the right story is that only the cosmological constant can vary over the multiverse, Weinberg’s argument does well.
If we assume that the right story is instead that both the cosmological constant and the fluctuation size vary, the argument does less well. In the absence of any independent evidence as to which, if any, of these hypotheses are true, the argument leads to no conclusion. So the claim that Weinberg’s argument correctly predicted the rough value of the cosmological constant fails, because of a subtler fallacy than the one discussed above. This fallacy, which is known to specialists in probability theory, arises whenever you take advantage of the freedom to arbitrarily choose a probability distribution that describes unobservable entities and so cannot be checked independently. Weinberg’s original argument has no logical force, because you could reach a different conclusion by making a different assumption about unobservable entities.” (There are citations to the scientific literature in the text.) Max, do you have a response to this, or do you agree that Weinberg’s argument offers no evidence for a multiverse?

27. Peter Woit says:

Thanks Lee, It did occur to me after writing that comment that referring to your arguments about this would have been a good idea. Another thing I could have added was the following simple point I often tried to make in arguments about this way back when. The Level II Multiverse theory of the CC is effectively the same as my own personal theory of the CC, which is that I have absolutely no idea what is responsible for its value. So, hey, no reason to think any particular value is more or less likely. I think the implications of the Level II Multiverse theory and my theory of the CC thus should be the same: from nothing you get nothing.

28. Bernhard says:

“But theories of the early universe have other parameters that can vary.” I was under the impression that in these multiverse theories one was allowed to do anything.
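Smolin’s 1-in-20 versus few-parts-in-100,000 comparison is just the arithmetic of requiring two independent random draws to both land in small windows, which a toy Monte Carlo makes concrete. (Purely illustrative: the uniform priors and the window sizes `CC_WINDOW` and `Q_WINDOW` below are assumptions, tuned only to reproduce the odds quoted in the Time Reborn passage, not physics from the thread.)

```python
import random

random.seed(1)
N = 1_000_000

# Assumed anthropic windows, in units of each parameter's allowed range:
# the observed cosmological constant (CC) sits in the smallest ~1/20 of
# the range in which galaxies can still form.
CC_WINDOW = 1 / 20    # matches Smolin's "1 chance in 20"
Q_WINDOW = 1 / 1000   # illustrative window for the fluctuation size Q

# One varying parameter: probability the CC alone lands in its window.
one_param = sum(random.random() < CC_WINDOW for _ in range(N)) / N

# Two varying parameters: the CC *and* the fluctuation size Q must both
# land in their (independent) windows, so the joint probability collapses.
two_param = sum(
    random.random() < CC_WINDOW and random.random() < Q_WINDOW
    for _ in range(N)
) / N

print(f"one parameter:  {one_param:.4f}")   # close to 0.05 (1 chance in 20)
print(f"two parameters: {two_param:.6f}")   # a few parts in 100,000
```

The point of the sketch is only that the answer depends entirely on which parameters you allow to vary and over what assumed distribution — neither of which is observable, which is exactly the fallacy Smolin describes.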
In the book “Universe or Multiverse?” (http://books.google.se/books?id=U_Jm2DT_AVAC&printsec=frontcover#v=onepage&q&f=false), in part II, when Craig J. Hogan enters with particle physics he is talking about changes in the Yukawa couplings – which, by itself, could give you almost anything given enough creativity. I suppose the multiverse should even make SUSY irrelevant, since it can also solve the hierarchy problem.

29. Daniel Miller says:

Hi Peter, Max and Lee, First, I have read both Our Mathematical Universe and Time Reborn and enjoyed them both very much. I view them as popular novels appealing to a wider audience that present both non-controversial as well as controversial ideas. I see no problem with presenting or even pushing controversial ideas provided there is no attempt to deceive. Neither book was deceptive, so there is no issue. Furthermore, these controversial ideas are often exciting and effective in attracting a wider and uninitiated audience to the subject. Second, I appreciate the obvious importance of questioning the falsifiability of theories, as long as this process itself is carried out scientifically – lest you become a hypocrite. What I see absolutely no value in is attacking and dismissing admittedly controversial ideas outright simply because you find them exceedingly peculiar or fear their potential effects on the direction of the field, provided of course that they have some merit – which is for the field to decide. Indeed, I can easily imagine negative effects resulting from these kinds of (unscientific) attacks being much more plausible than some critical number of students being misled into studying “empty” theories and this resulting in some crisis of the field – this would require severe misjudgment on a large scale in a manner that is not compatible with the typical student of physics. In short, it’s a needless concern and appears to me to be a cop-out for more legitimate forms of dismissal.
Third, radical theories, both good and bad, must often be “sold” if they are to find widespread acceptance – it just follows logically from the nature of revolutionary ideas and is supported by the history of physics. Finally, being a PhD student in physics, I can tell you that it wasn’t well-founded, falsifiable ideas that got me interested in physics. It was reading Thorne’s “Black Holes and Time Warps” and Kaku’s “Hyperspace” as a child, which then led to Bertrand Russell’s “ABC of Relativity” and more “non-controversial” material. Finally, finally – REAL crises do exist, such as global warming and nuclear proliferation, that scientists are going to need to get comfortable speaking about with some emotion and zeal that they are not characteristically known for – hell, it would even be great if we had a few “academic celebrities” – because if we don’t sell these issues to the public then there won’t be any meaningful landscape left to falsify.

30. Peter Woit says:

Daniel Miller, The “Mathematical Universe Hypothesis” and Level IV multiverse of Tegmark’s book is not “controversial”. As far as I can tell, no serious scientist other than him thinks these are non-empty ideas. There is a controversy over the string theory landscape, but none here. These ideas are also not “radical”, they are content-free. You refer to Tegmark’s book as a “novel”, and then you expect it to inspire young people about science, and convince the public to take scientists’ warnings about global warming seriously. This makes no sense. What will inspire smart young people to be scientists is good science, not obviously empty claims. I don’t think this book helps the credibility of scientists with the public one bit, quite the opposite. Yes, scientists sometimes need to sell their work to the public. But to do this, they have to have something of value to sell.

31. Tim May says:

I was at Max Tegmark’s talk last night in Santa Cruz. I made a joke to him about our weather vs.
East Coast weather during the setting-up period, but had no chance to talk to him during the now-obligatory book-signing period, where the line appeared to be about 30-50 people long. (I bought the book via Amazon and it arrived on the day of official publication, the 17th.) He said little about multiple universes. I thought his talk was very good at what I’ll call the “George Gamow, One, Two, Three… Infinity” level. Understand, this was a book that mightily influenced me in the 60s when I was about 12. (To my left in the crowd was a woman with a young boy. Tegmark spoke directly to a few of the young kids (boys) in the audience. This boy said he was 9. Tegmark asked him about his interests. “Chemistry” was not terribly surprising. But he also asked about some sums, and the kid responded correctly. And he sat during the presentation paying rapt attention, apparently. One has to be careful about observing others – the Heisenberg Pedophilia Principle.) So, Tegmark gave a nice presentation about the frontiers of astronomy (the Hubble Deep Field), with some mentions of some stuff possibly beyond, but he did not discuss the controversial aspects of his MUH theory. (As we were dispersing, I heard one very famous UCSC professor saying “Well, it was good that there were no questions about alternate universes.” I overheard him and quipped “Don’t worry, in 10 to the 700 other universes they asked questions.”) My feeling is that Tegmark’s lecture, and his book, which I have skimmed since receiving it last week, are not terrible things for people to read. I grew up reading Gamow, Asimov, etc. on popular science, plus the then-excellent Scientific American. The weird stuff about monopoles, tachyons, etc. was out there, but one developed a good idea of what was plausible and what was longer-term, somewhat implausible.
As a long-time reader of Peter Woit’s blog, and his book, but as someone who also reads Lubos Motl’s blog, I think Tegmark’s book is a fairly good popularization of some background in physics and cosmology and an introduction to the more outré aspects. (Sorry to be personal, but string theory never grabbed me. But the recent EPR = ER stuff really, really grabs me. Doesn’t make it right. But it sure is suggestive.) –Tim May

32. Daniel Miller says:

Hi Peter, Thanks for responding – this comment thread got blown up! Aside from addressing the validity of an idea, which is clearly totally worthwhile and necessary, I was trying to speak to other separate issues, which I will try to be a bit more clear about.

1) The concern that MUH will lead the field astray and waste time that could be better spent elsewhere: I was trying to say that I think this is unlikely. A similar argument has been made about string theory and the landscape, yet physics appears to not have imploded yet nor does it appear to be approaching a crisis. I’m not saying that MUH has the legitimacy or establishment that string theory has found, nor am I saying it is without merit. Nor has Max even remotely tried to claim this is an accepted mainstream idea. And this will all be clear to anyone serious about physics. So this entire concern is unwarranted.

2) Young people may be inspired toward science by ideas that stir their curiosity and which they find intellectually fascinating. This can be “good science”, science fiction, or anything in between. But above all it must be interesting. What will make them “good scientists” is the entire formal process required to become a scientist. I have more than a few friends who were originally inspired to pursue science by Star Wars.

3) There is nothing inherently wrong with “grandiose” ideas, being an “academic celebrity” or selling one’s ideas.
In fact, these qualities are currently needed more among scientists to appeal to the public about pressing social issues. This was a statement about the virtue of these traits alone, which were mentioned somewhat derogatorily in previous posts, and not with respect to the book or the book’s role in promoting awareness about global warming. And, unfortunately, it is not the credibility of scientists that pushes part of the population toward climate change denial – it’s the loudness, visibility and grandiosity of (totally incredible) pundits. The overall point being that both Max and his more fun ideas have a net positive effect on both science and the world we live in. Have a good night, 33. Peter Lynds says: Hi Lee, I can’t help but question why you felt the need to mention your own work and its ability to make falsifiable predictions. I think this is maybe a little bit rich, considering that charges of empty content and the rest could easily be pointed in your direction with regards to your recent book and arguments and claims concerning the reality of time. That is, time and the empirical don’t, and will never, go together. 34. Lee Smolin says: Dear Peter Lynds, My logic, as was carefully explained in Time Reborn (TR), takes three steps, which in outline are: 1) The various arguments which have been given that time is an illusion, i.e. emergent from a more fundamental timeless level of description, depend on an assumption, which is the immutability of the laws of physics. This is one of the conclusions of Part I of TR. 2) Specific hypotheses about how the laws evolved are testable by real, doable experiments and observations, a few are even falsifiable. 3) Therefore, if a hypothesis that laws evolve were confirmed by evidence, the laws cannot be immutable, therefore the arguments that have convinced physicists that time is unreal would have no force. So there is a relationship between the view of time and empirically testable hypotheses. 
Furthermore, the hypothesis of cosmological natural selection, published in 1992, made two falsifiable predictions which continue to stand up to empirical test. It also remains the only proposed explanation for the fine tunings of the standard model that makes falsifiable predictions. 35. Marcus says: IMO that 3 step argument must surely be the most interesting thing that has appeared in this thread and contains what could be the REAL reason that the natural world appears mathematical. Since Max T. seems to like mathematical regularities/patterns, this argument, I suspect, is what the second half of his MUH book should actually have been about: Leibniz principle of sufficient reason (for the manifest regularity Wigner referred to) implies TIME = a process by which regularities emerge and become “laws” Thermodynamics of physical law. The universe becomes more predictable as time goes on. More “mathematical” more patterned, more regulated, more repetitive. Paradoxically beautiful and from another perspective boring: more expected and less surprising. Like a Shannon channel whose information capacity is gradually diminishing until the listener at the other end hears nothing, or nothing he didn’t already know. So this three step argument shows us the real MUH. And there is only ONE universe in this MUH. It is unnecessary, old-fashioned, and ridiculous to imagine more than one. Leibniz+Wigner –>time, and time is “testable” in the sense that we might someday witness the emergence of an unprecedented regular pattern. Extremely farfetched, but still nice. The first half of Tegmark’s book, from what I have sampled, is pedagogically excellent. It reminds me of Timothy Ferris’ *Coming of Age in the Milky Way*, really A-plus. Maybe it is such a good front end that it should have a different back end. :0) 36. 
Peter Woit says: This is turning into a discussion of another book… Please, comments should be about the Tegmark book, and at this point coming up with something new about that might not be easy. I promise a couple new postings soon. 37. Peter Lynds says: Hi Lee, Thanks. There is more I’d like to say than this, but I agree that it would be off topic. If I can, though, I would like to quickly mention that, unless one is very selective, arguments for time’s non-existence don’t depend on the assumption that the laws of physics are immutable (I can think of plenty that aren’t, including simply that time is unobservable). The thesis that the laws of physics should be mutable and internal to the universe (I don’t think there can be any doubt that the latter is correct) also obviously isn’t dependent on time existing. While I disagree with his ideas, I think Max deserves credit for participating in this discussion. Hopefully he’ll come back. If he does, I think some sensitivity to his position (maybe a bit like an English soccer supporter walking into a rival team’s local bar) wouldn’t go amiss. 38. Roger says: I was also at that Santa Cruz bookstore lecture. I was surprised that he peppered his talk with arguments that we are not spending enough money and effort trying to reduce risk of future disasters. His math multiverse implies that time is an illusion, that we have no free will, and that all future scenarios happen, regardless of what we might try to prevent. 39. Max Tegmark says: Thanks Lee for bringing up this important point! > no evidence for a multiverse? I fully agree with the mathematics in your analysis. In fact, I reached a similar conclusion in http://arxiv.org/pdf/astro-ph/0410281v2.pdf (see around 81), and so does Alex Vilenkin. 
The bottom line is that galaxy formation efficiency depends not on the dark energy density alone, but on what we might call the dimensionless “Weinberg” parameter W = ρ_Λ/(Q³ξ⁴), where Q ~ 2×10⁻⁵ is the CMB fluctuation level, ρ_Λ ~ 10⁻¹²³ is the dark energy density and ξ ~ 2×10⁻²⁸ is the dark matter density per photon, all in Planck units. What Weinberg’s argument gives is then a prediction for the parameter W, no more and no less. Vilenkin has emphatically argued that Weinberg’s prediction for W is impressive because it predated its measurement (by observations of CMB, supernovae, etc). On a separate note, since you too got picked on above for making controversial statements, I want to add that I find your work extremely valuable, particularly because your conclusions are so different from mine. Whenever we face a science question to which we don’t yet know the answer, I find it valuable when the community carefully explores the full range of logical possibilities. On some issues related to the roles of time and mathematics in physics, your work and mine in a sense explore the opposite extremes of the spectrum of possibilities. In contrast, I find conformist pressures to lampoon and dismiss certain scientific topics to be against the spirit of science. Indeed, it’s been disturbing to see that the loudest cheerleading for this particular blog thread has been in the Intelligent Design community: 40. Peter Woit says: There is no “controversy” about your MUH and Level IV arguments, just a consensus that they are empty, and that’s what you need to address. I notice that you have decided to stop answering any of the arguments I raise, in favor of a sleazy tactic of slandering me as a creationist. Up to you how to behave, I don’t think you’ll find this helps you or your case. 41. John says: Don’t you think that little bit of insinuation is beneath you, Max? After all, what the hell does the opinion of the Intelligent Design community have to do with this? 42. 
Orin says: Peter, it’s not fair to ask Max (or anyone) to address a claim that the MUH is “empty” if you don’t define what that means. Do you mean only that you think it is unfalsifiable? Or do you believe that it is “empty” philosophically as well? You keep saying it over and over, but nowhere in this thread or in your original post do you present an argument for anyone to rebut. One might say your criticism is “empty.” Some in this thread have presented examples of how the MUH may turn out to be predictive. You completely ignore these arguments (sometimes removing them from the comment section) and just keep shouting “empty!” without addressing their points. Even your reaction to the recent discussion of the Weinberg argument (“I don’t happen to think it’s particularly strong evidence, but, sure, it’s at least something.”) is perplexing in the context of your statement that the MUH is “completely empty”. Completely? You just admitted that the L2 multiverse has perhaps some weak predictive content! So you really need to be clear about exactly what you mean by “completely empty” before asking others to defend against an attack that is itself “completely empty” in its current form. 43. Max Tegmark says: Hi John: Peter Woit’s latest accusation that I’m somehow claiming that he’s a creationist is so silly that it hardly warrants a reply. Please let me repeat and expand on what I wrote above, in the hope that it won’t be further misunderstood. I wrote that I couldn’t help feeling disturbed by similarities in argumentation style between Peter’s posts and the recent hate-mail I’ve been receiving from a Young-Earth Creationist: both seem to start by assuming that their conclusion is true (“Earth is 6000 years old”/“Multiverse ideas are nonsense”), and they then proceed to simply avoid mentioning any evidence to the contrary. I gave a detailed example of how Peter did this in my January 22, 2014 at 6:44 pm post above, by avoiding any examination of option 4. 
To me, this is an unscientific approach. As far as I can tell, the posters on that creationist site I linked to appear to dislike my book because they feel it represents a naturalistic world view. Again, they don’t appear interested in examining all logically possible options (in particular, the logically possible option that modern cosmology is correct), instead dismissing this with pithy quotes about “nonsense”, “crank” etc. that they’ve borrowed from Peter Woit. If we scientists are to have any claim to the moral high ground in scientific debates, I feel that we need to practice what we preach and conduct our debates at a higher level of civility and rigor! 44. Peter Woit says: While pointing to your January 22 comment, you studiously ignore my response to it, following up your attack on me as similar to a hate-mail-sending Creationist with a sleazy comment about my supposed connection to a “disturbing” ID blog. You then end with “I feel that we need to practice what we preach and conduct our debates at a higher level of civility and rigor!”. Impressive. By the way, I just noticed that you’re featured in an upcoming film, The Principle. Have you seen it, and if so do you think you do a good job of representing the scientific viewpoint in this film? 45. Peter Woit says: In this context, “empty” = “implies nothing about the real world”. I devoted a fair amount of time to carefully reading the Tegmark book and looking for where he discussed the implications for physics. In the blog entry above I discussed those carefully and completely, giving an argument (that no one has contradicted) that Tegmark was unable to derive any real implications of his MUH or Level IV multiverse for physics. This has nothing to do with the argument about Level II predictions. 46. 
Orin says: Peter, then I suggest you add the adverb “currently” before your use of the word “empty.” Even more fair would be “currently not predictive.” There is a big difference between being vacuous (which is what “empty” may imply), and simply being a theory that may or may not yield testable predictions at this time. The argument you need to be making then, is why you are so sure that this theory has no possibility whatsoever of ever yielding testable predictions. I have not seen this argument from you, and I think it is an argument that needs to be made if you really believe that the MUH is not worthy of further study. I have a hard time believing that you could really hold such a hard line, when it is so obvious (from arguments like Weinberg’s discussed above) that there really can be predictive consequences of such a theory (the Level II apropos of Weinberg is not Level IV, but in theory the logic is no different). 47. Peter Woit says: The problem is not one of “currently untestable”, but one of “in principle untestable”. Tegmark is the one who wrote scientific papers and a book about this, he’s the one responsible for explaining why the ideas are not vacuous, by giving a plausible explanation for how his ideas could be tested, at least in principle. The burden of proof is not on me, but on him. I’ve argued carefully above that he hasn’t met this burden (although he claims in his book that he has). If you follow the discussion between us in these comments, I think you’ll see that the point at which he starts going on about hate-filled creationists is the point where he no longer has an argument. That’s what people do… 48. Orin says: Peter, you well know that an idea exists independently of a promoter of that idea. To insist that you will only accept an argument from Max and only Max is a bizarre form of straw man. There have been examples given in this thread of how the MUH is not necessarily vacuous. You have ignored them. 49. 
Peter Woit says: You seem to find it plausible that Tegmark wrote a whole book about his “hypothesis”, but didn’t bother to include in the book an explanation of how his “hypothesis” could be tested. Now, before I can argue that he doesn’t have anything, it’s my job to prove that there’s no way for anyone to come up with tests for his “hypothesis”. Right? Sorry, we disagree, I think my job is to read the book, assume he has put his best arguments in there, and see if these arguments are convincing, or if they don’t hold water. I think any scientist who reads the book will see that the latter is what is going on here. By the way, have you read the book? On which page did you see a convincing argument from him that his “hypothesis” was testable? 50. Orin says: Peter, I’m afraid I haven’t had the time to read the book yet, but I’m familiar with the material I expect to be in it. Hopefully I’ll get to it next week. I will admit that if what you say is correct then I am puzzled that Max would not include in his book the interesting ways in which the MUH can lead to physical predictions. Of course I know well from your past opinions that you don’t take kindly to some of the tools that would be brought to bear, such as anthropic reasoning(*). Nonetheless such reasoning is not vacuous, it exists independently of whether or not Max puts it in his book, and you are surely aware of it, so it seems disingenuous to take such a hard line as “vacuous” with regard to these ideas. It would be refreshing if you admitted that in principle these ideas could yield fruit. Could they not? (*) I should note that this is by no means the only reasoning; it may not even be necessary if a sensible measure is found to be predictive. Comments are closed.
Hulthen potential From Encyclopedia of Mathematics The Hulthen potential [a1] is given by V(r) = −zδ e^{−δr}/(1 − e^{−δr}), where δ is the screening parameter and z is a constant which is identified with the atomic number when the potential is used for atomic phenomena. The Hulthen potential is a short-range potential which behaves like a Coulomb potential for small values of r and decreases exponentially for large values of r. The Hulthen potential has been used in many branches of physics, such as nuclear physics [a2], atomic physics [a3], [a4], solid state physics [a5], and chemical physics [a6]. The model of the three-dimensional delta-function could well be considered as a Hulthen potential with the radius of the force going down to zero [a7]. The Schrödinger equation for this potential can be solved in a closed form for s-waves (ℓ = 0). For ℓ ≠ 0, a number of methods have been employed to find approximate solutions for the Schrödinger equation with the Hulthen potential [a8], [a9], [a10], [a11]. The Dirac equation with the Hulthen potential has also been studied using an algebraic approach [a12]. [a1] L. Hulthen, Ark. Mat. Astron. Fys., 28A (1942) pp. 5 (Also: 29B, 1) [a2] L. Hulthen, M. Sugawara, S. Flugge (ed.), Handbuch der Physik, Springer (1957) [a3] T. Tietz, J. Chem. Phys., 35 (1961) pp. 1917 [a4] C.S. Lam, Y.P. Varshni, Phys. Rev. A, 4 (1971) pp. 1875 [a5] A.A. Berezin, Phys. Status Solidi (b), 50 (1972) pp. 71 [a6] P. Pyykko, J. Jokisaari, Chem. Phys., 10 (1975) pp. 293 [a7] A.A. Berezin, Phys. Rev. B, 33 (1986) pp. 2122 [a8] C.S. Lai, W.C. Lin, Phys. Lett. A, 78 (1980) pp. 335 [a9] S.H. Patil, J. Phys. A, 17 (1984) pp. 575 [a10] V.S. Popov, V.M. Wienberg, Phys. Lett. A, 107 (1985) pp. 371 [a11] B. Roy, R. Roychoudhury, J. Phys. A, 20 (1987) pp. 3051 [a12] B. Roy, R. Roychoudhury, J. Phys. A, 23 (1990) pp. 5095 How to Cite This Entry: Hulthen potential. R. Roychoudhury (originator), Encyclopedia of Mathematics. 
URL: This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
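The s-wave spectrum mentioned in the entry has a well-known closed form: in atomic units (ℏ = m = e = 1), with V(r) = −zδe^{−δr}/(1 − e^{−δr}), the bound-state energies are E_n = −(z/n − nδ/2)²/2 for n < √(2z/δ). A minimal sketch (the function name is my own) checks that the hydrogenic Coulomb levels −z²/(2n²) are recovered as the screening δ → 0:

```python
import math

def hulthen_s_energy(n, z, delta):
    """Closed-form s-wave (l = 0) bound-state energy of the Hulthen potential
    V(r) = -z*delta*exp(-delta*r)/(1 - exp(-delta*r)), in atomic units:
        E_n = -(z/n - n*delta/2)**2 / 2,  valid for n < sqrt(2*z/delta).
    """
    if n >= math.sqrt(2 * z / delta):
        raise ValueError("no bound s-state for this n: screening too strong")
    return -0.5 * (z / n - n * delta / 2) ** 2

# As delta -> 0 the screening disappears and the Coulomb (hydrogenic)
# spectrum -z**2/(2*n**2) is recovered.
for n in (1, 2, 3):
    assert abs(hulthen_s_energy(n, 1.0, 1e-8) - (-1.0 / (2 * n * n))) < 1e-6
```

For finite δ the levels lie above their Coulomb counterparts and only finitely many bound s-states exist, consistent with the short-range character of the potential.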
Quantum Field Theory First published Thu Jun 22, 2006; substantive revision Thu Sep 27, 2012 Quantum Field Theory (QFT) is the mathematical and conceptual framework for contemporary elementary particle physics. In a rather informal sense QFT is the extension of quantum mechanics (QM), dealing with particles, over to fields, i.e. systems with an infinite number of degrees of freedom. (See the entry on quantum mechanics.) In the last few years QFT has become a more widely discussed topic in philosophy of science, with questions ranging from methodology and semantics to ontology. QFT taken seriously in its metaphysical implications seems to give a picture of the world which is at variance with central classical conceptions of particles and fields, and even with some features of QM. The following sketches how QFT describes fundamental physics and what the status of QFT is among other theories of physics. Since there is a strong emphasis on those aspects of the theory that are particularly important for interpretive inquiries, it does not replace an introduction to QFT as such. One main group of target readers are philosophers who want to get a first impression of some issues that may be of interest for their own work, another target group are physicists who are interested in a philosophical view upon QFT. 1. What is QFT? In contrast to many other physical theories there is no canonical definition of what QFT is. Instead one can formulate a number of totally different explications, all of which have their merits and limits. One reason for this diversity is the fact that QFT has grown successively in a very complex way. Another reason is that the interpretation of QFT is particularly obscure, so that even the spectrum of options is not clear. 
Possibly the best and most comprehensive understanding of QFT is gained by dwelling on its relation to other physical theories, foremost with respect to QM, but also with respect to classical electrodynamics, Special Relativity Theory (SRT) and Solid State Physics or more generally Statistical Physics. However, the connection between QFT and these theories is also complex and cannot be neatly described step by step. If one thinks of QM as the modern theory of one particle (or, perhaps, a very few particles), one can then think of QFT as an extension of QM for analysis of systems with many particles—and therefore with a large number of degrees of freedom. In this respect going from QM to QFT is not inevitable but rather beneficial for pragmatic reasons. However, a general threshold is crossed when it comes to fields, like the electromagnetic field, which are not merely difficult but impossible to deal with in the frame of QM. Thus the transition from QM to QFT allows treatment of both particles and fields within a uniform theoretical framework. (As an aside, focusing on the number of particles, or degrees of freedom respectively, explains why the famous renormalization group methods can be applied in QFT as well as in Statistical Physics. The reason is simply that both disciplines study systems with a large or an infinite number of degrees of freedom, either because one deals with fields, as does QFT, or because one studies the thermodynamic limit, a very useful artifice in Statistical Physics.) Moreover, issues regarding the number of particles under consideration yield yet another reason why we need to extend QM. Neither QM nor its immediate relativistic extension with the Klein-Gordon and Dirac equations can describe systems with a variable number of particles. However, obviously this is essential for a theory that is supposed to describe scattering processes, where particles of one kind are destroyed while others are created. 
One gets a very different kind of access to what QFT is when focusing on its relation to QM and SRT. One can say that QFT results from the successful reconciliation of QM and SRT. In order to understand the initial problem one has to realize that QM is not only in a potential conflict with SRT, more exactly: the locality postulate of SRT, because of the famous EPR correlations of entangled quantum systems. There is also a manifest contradiction between QM and SRT on the level of the dynamics. The Schrödinger equation, i.e. the fundamental law for the temporal evolution of the quantum mechanical state function, cannot possibly obey the relativistic requirement that all physical laws of nature be invariant under Lorentz transformations. The Klein-Gordon and Dirac equations, resulting from the search for relativistic analogues of the Schrödinger equation in the 1920s, do respect the requirement of Lorentz invariance. Nevertheless, ultimately they are not satisfactory because they do not permit a description of fields in a principled quantum-mechanical way. Fortunately, for various phenomena it is legitimate to neglect the postulates of SRT, namely when the relevant velocities are small in relation to the speed of light and when the kinetic energies of the particles are small compared to their mass energies mc². And this is the reason why non-relativistic QM, although it cannot be the correct theory in the end, has its empirical successes. But it can never be the appropriate framework for electromagnetic phenomena because electrodynamics, which prominently encompasses a description of the behavior of light, is already relativistically invariant and therefore incompatible with QM. Scattering experiments are another context in which QM fails. Since the involved particles are often accelerated almost up to the speed of light, relativistic effects can no longer be neglected. For that reason scattering experiments can only be correctly grasped by QFT. 
Unfortunately, the catchy characterization of QFT as the successful merging of QM and SRT has its limits. On the one hand, as already mentioned above, there also is a relativistic QM, with the Klein-Gordon and the Dirac equation among its most famous results. On the other hand, and this may come as a surprise, it is possible to formulate a non-relativistic version of QFT (see Bain 2011). The nature of QFT thus cannot simply be that it reconciles QM with the requirement of relativistic invariance. Consequently, for a discriminating criterion it is more appropriate to say that only QFT, and not QM, allows describing systems with an infinite number of degrees of freedom, i.e. fields (and systems in the thermodynamic limit). According to this line of reasoning, QM would be the modern (as opposed to classical) theory of particles and QFT the modern theory of particles and fields. Unfortunately however, and this shall be the last turn, even this gloss is not untarnished. There is a widely discussed no-go theorem by Malament (1996) with the following proposed interpretation: Even the quantum mechanics of one single particle can only be consonant with the locality principle of special relativity theory in the framework of a field theory, such as QFT. Hence ultimately, the characterization of QFT, on the one hand, as the quantum physical description of systems with an infinite number of degrees of freedom, and on the other hand, as the only way of reconciling QM with special relativity theory, are intimately connected with one another. Figure 1. The diagram depicts the relations between different theories, where Non-Relativistic Quantum Field Theory is not a historical theory but rather an ex post construction that is illuminating for conceptual purposes. Theoretically, [(i), (ii), (iii)], [(ii), (i), (iii)] and [(ii), (iii), (i)] are three possible ways to get from Classical Mechanics to Relativistic Quantum Field Theory. 
But note that this is meant as a conceptual decomposition; history didn't go all these steps separately. On the one hand, by good luck, so to say, classical electrodynamics is relativistically invariant already, so that its successful quantization leads directly to Relativistic Quantum Field Theory. On the other hand, some would argue (e.g. Malament 1996) that the only way to reconcile QM and SRT is in terms of a field theory, so that (ii) and (iii) would coincide. Note that the steps (i), (ii) and (iii), i.e. quantization, transition to an infinite number of degrees of freedom, and reconciliation with SRT, are all ontologically relevant. In other words, by these steps the nature of the physical entities the theories talk about may change fundamentally. See Huggett 2003 for an alternative three-dimensional “map of theories”. Further Reading on QFT and Philosophy of QFT. Mandl and Shaw (2010), Peskin and Schroeder (1995), Weinberg (1995) and Weinberg (1996) are standard textbooks on QFT. Teller (1995) and Auyang (1995) are the first systematic monographs on the philosophy of QFT. Brown and Harré (1988), Cao (1999) and Kuhlmann et al. (2002) are anthologies with contributions by physicists and philosophers (of physics), where the last anthology has a focus on ontological issues. The literature on the philosophy of QFT has increased significantly in the last decade. Besides a number of separate papers there are two new monographs, Cao (2010) and Kuhlmann (2010), and one special issue (May 2011) of Studies in History and Philosophy of Modern Physics. Bain (2011), Huggett (2000) and Ruetsche (2002) provide article-length discussions on a number of issues in the philosophy of QFT. See also the following supplementary document: The History of QFT. 2. 
The Basic Structure of the Conventional Formulation 2.1 The Lagrangian Formulation of QFT The crucial step towards quantum field theory is in some respects analogous to the corresponding quantization in quantum mechanics, namely by imposing commutation relations, which leads to operator valued quantum fields. The starting point is the classical Lagrangian formulation of mechanics, which is a so-called analytical formulation as opposed to the standard version of Newtonian mechanics. A generalized notion of momentum (the conjugate or canonical momentum) is defined by setting p = ∂L/∂q̇, where L is the Lagrange function L = T − V (T is the kinetic energy and V the potential) and q̇ = dq/dt. This definition can be motivated by looking at the special case of a Lagrange function with a potential V which depends only on the position so that (using Cartesian coordinates) ∂L/∂ẋ = (∂/∂ẋ)(mẋ²/2) = mẋ = px. Under these conditions the generalized momentum coincides with the usual mechanical momentum. In classical Lagrangian field theory one associates with the given field φ a second field, namely the conjugate field (3.1)   π = ∂L/∂φ̇ where L is a Lagrangian density. The field φ and its conjugate field π are the direct analogues of the canonical coordinate q and the generalized (canonical or conjugate) momentum p in classical mechanics of point particles. In both cases, QM and QFT, requiring that the canonical variables satisfy certain commutation relations implies that the basic quantities become operator valued. From a physical point of view this shift implies a restriction of possible measurement values for physical quantities some (but not all) of which can have their values only in discrete steps now. In QFT the canonical commutation relations for a field φ and the corresponding conjugate field π are (3.2)   [φ(x,t), π(y,t)] = iδ³(x − y) [φ(x,t), φ(y,t)] = [π(x,t), π(y,t)] = 0 which are equal-time commutation relations, i.e., the commutators always refer to fields at the same time. 
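For the point-particle case just described, the relation p = ∂L/∂q̇ = mq̇ can be verified symbolically; a minimal sketch with sympy, treating the velocity as an independent symbol qdot as is customary in the Lagrangian formalism:

```python
import sympy as sp

m, q, qdot = sp.symbols("m q qdot", real=True)
V = sp.Function("V")       # potential depending on the position only

# Lagrange function L = T - V with kinetic term T = m*qdot**2/2
L = m * qdot**2 / 2 - V(q)

# generalized (conjugate) momentum p = dL/d(qdot)
p = sp.diff(L, qdot)
assert p == m * qdot       # coincides with the usual mechanical momentum
```

For a velocity-dependent potential (e.g. a charged particle in a magnetic field) the conjugate momentum would differ from mq̇, which is exactly why the general definition p = ∂L/∂q̇ is needed.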
It is not obvious that the equal-time commutation relations are Lorentz invariant but one can formulate a manifestly covariant form of the canonical commutation relations. If the field to be quantized is not a bosonic field, like the Klein-Gordon field or the electromagnetic field, but a fermionic field, like the Dirac field for electrons one has to use anticommutation relations. While there are close analogies between quantization in QM and in QFT there are also important differences. Whereas the commutation relations in QM refer to a quantum object with three degrees of freedom, so that one has a set of 15 equations, the commutation relations in QFT do in fact comprise an infinite number of equations, namely for each of the infinitely many space-time 4-tuples (x,t) there is a new set of commutation relations. This infinite number of degrees of freedom embodies the field character of QFT. It is important to realize that the operator valued field φ(x,t) in QFT is not analogous to the wavefunction ψ(x,t) in QM, i.e., the quantum mechanical state in its position representation. While the wavefunction in QM is acted upon by observables/operators, in QFT it is the (operator valued) field itself which acts on the space of states. In a certain sense the single particle wave functions have been transformed, via their reinterpretation as operator valued quantum fields, into observables. This step is sometimes called ‘second quantization’ because the single particle wave equations in relativistic QM already came about by a quantization procedure, e.g., in the case of the Klein-Gordon equation by replacing position and momentum by the corresponding quantum mechanical operators. Afterwards the solutions to these single particle wave equations, which are states in relativistic QM, are considered as classical fields, which can be subjected to the canonical quantization procedure of QFT. 
The term ‘second quantization’ has often been criticized partly because it blurs the important fact that the single particle wave function φ in relativistic QM and the operator valued quantum field φ are fundamentally different kinds of entities despite their connection in the context of discovery. In conclusion, it must be emphasized that both in QM and QFT states and observables are equally important. However, to some extent their roles are switched. While states in QM can have a concrete spatio-temporal meaning in terms of probabilities for position measurements, in QFT states are abstract entities and it is the quantum field operators that seem to allow for a spatio-temporal interpretation. See the section on the field interpretation of QFT for a critical discussion. 2.2 Interaction Up to this point, the aim was to develop a free field theory. Doing so does not only neglect interaction with other particles (fields), it is even unrealistic for one free particle because it interacts with the field that it generates itself. For the description of interactions—such as scattering in particle colliders—we need certain extensions and modifications of the formalism. The immediate contact between scattering experiments and QFT is given by the scattering or S-matrix which contains all the relevant predictive information about, e.g., scattering cross sections. In order to calculate the S-matrix the interaction Hamiltonian is needed. The Hamiltonian can in turn be derived from the Lagrangian density by means of a Legendre transformation. In order to discuss interactions one introduces a new representation, the interaction picture, which is an alternative to the Schrödinger and the Heisenberg picture. 
For the interaction picture one splits up the Hamiltonian, which is the generator of time-translations, into two parts H = H0 + Hint, where H0 describes the free system, i.e., without interaction, and gets absorbed in the definition of the fields, and Hint is the interaction part of the Hamiltonian, or the ‘interaction Hamiltonian’ for short. Using the interaction picture is advantageous because the equations of motion as well as, under certain conditions, the commutation relations are the same for interacting fields as for free fields. Therefore, various results that were established for free fields can still be used in the case of interacting fields. The central instrument for the description of interaction is again the S-matrix, which expresses the connection between in and out states by specifying the transition amplitudes. In QED, for instance, a state |in⟩ describes one particular configuration of electrons, positrons and photons, i.e., it describes how many of these particles there are and which momenta, spins and polarizations they have before the interaction. The S-matrix supplies the probability that this state goes over to a particular |out⟩ state, e.g., that a particular counter responds after the interaction. Such probabilities can be checked in experiments. The canonical formalism of QFT as introduced in the previous section is only applicable in the case of free fields, since the inclusion of interaction leads to infinities (see the historical part). For this reason perturbation theory makes up a large part of most publications on QFT. The importance of perturbative methods is understandable when one realizes that they establish the immediate contact between theory and experiment. Although the techniques of perturbation theory have become ever more sophisticated, it is somewhat disturbing that perturbative methods could not be avoided even in principle. 
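The splitting H = H0 + Hint can be made concrete in a toy model. The following sketch (assuming Python with NumPy; the two-level H0 and V are arbitrary illustrative choices, not a field theory) checks numerically that Schrödinger-picture evolution agrees with interaction-picture evolution, U(t) = e^(−iH0t) U_I(t), where U_I is driven only by the interaction-picture operator V_I(t) = e^(iH0t) V e^(−iH0t).

```python
import numpy as np

def expm_h(H, t):
    """exp(-i H t) for a Hermitian matrix H, via its eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

H0 = np.diag([0.0, 1.0])                       # free part
V = 0.2 * np.array([[0.0, 1.0], [1.0, 0.0]])   # interaction part
t, steps = 1.0, 5000
dt = t / steps

# Integrate i dU_I/dt = V_I(t) U_I with small unitary midpoint steps.
U_I = np.eye(2, dtype=complex)
for n in range(steps):
    s = (n + 0.5) * dt                         # midpoint of the interval
    V_I = expm_h(H0, -s) @ V @ expm_h(H0, s)   # e^{i H0 s} V e^{-i H0 s}
    U_I = expm_h(V_I, dt) @ U_I

U_schrodinger = expm_h(H0 + V, t)
U_from_interaction = expm_h(H0, t) @ U_I
print(np.allclose(U_schrodinger, U_from_interaction, atol=1e-6))  # True
```

The point of the picture, visible here, is that the free part H0 is solved exactly once and for all, while only the (typically small) interaction part remains to be treated step by step, which is what perturbation theory then expands in powers of.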
One reason for this unease is that perturbation theory is felt to be rather a matter of (highly sophisticated) craftsmanship than of understanding nature. Accordingly, the corpus of perturbative methods plays a small role in the philosophical investigations of QFT. What does matter, however, is in which sense the consideration of interaction affects the general framework of QFT. An overview of perturbation theory is given in section 4.1 (“Perturbation Theory—Philosophy and Examples”) of Peskin & Schroeder (1995). 2.3 Gauge Invariance Some theories are distinguished by being gauge invariant, which means that gauge transformations of certain terms do not change any observable quantities. Requiring gauge invariance provides an elegant and systematic way of introducing terms for interacting fields. Moreover, gauge invariance plays an important role in selecting theories. The prime example of an intrinsically gauge invariant theory is electrodynamics. In the potential formulation of Maxwell's equations one introduces the vector potential A and the scalar potential φ, which are linked to the magnetic field B(x,t) and the electric field E(x,t) by (3.3)   B = ∇ × A,   E = −∂A/∂t − ∇φ or covariantly (3.4)   Fμν = ∂μAν − ∂νAμ where Fμν is the electromagnetic field tensor and Aμ = (φ, A) the 4-vector potential. The important point in the present context is that given the identification (3.3), or (3.4), there remains a certain flexibility or freedom in the choice of A and φ, or Aμ. In order to see that, consider the so-called gauge transformations (3.5)   A → A − ∇χ,   φ → φ + ∂χ/∂t or covariantly (3.6)   Aμ → Aμ + ∂μχ where χ is a scalar function (of space and time or of space-time) which can be chosen arbitrarily. Inserting the transformed potential(s) into equation(s) (3.3), or (3.4), one can see that the electric field E and the magnetic field B, or covariantly the electromagnetic field tensor Fμν, are not affected by a gauge transformation of the potential(s). 
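The invariance claim at the end of the preceding paragraph can be verified symbolically. The following sketch (assuming Python with SymPy) checks that the field tensor of (3.4) is unchanged under the gauge transformation (3.6) for completely arbitrary potentials Aμ and an arbitrary gauge function χ.

```python
import sympy as sp

# Check that F_mu_nu = d_mu A_nu - d_nu A_mu, cf. (3.4), is invariant under
# the gauge transformation A_mu -> A_mu + d_mu chi of (3.6), for arbitrary
# functions A_mu and chi of the space-time coordinates.
t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
chi = sp.Function('chi')(*coords)
A = [sp.Function('A%d' % mu)(*coords) for mu in range(4)]        # arbitrary A_mu
A_prime = [A[mu] + sp.diff(chi, coords[mu]) for mu in range(4)]  # gauge-transformed

def F(pot, mu, nu):
    return sp.diff(pot[nu], coords[mu]) - sp.diff(pot[mu], coords[nu])

invariant = all(sp.simplify(F(A_prime, mu, nu) - F(A, mu, nu)) == 0
                for mu in range(4) for nu in range(4))
print(invariant)  # True
```

The cancellation rests on nothing but the symmetry of mixed partial derivatives, ∂μ∂νχ = ∂ν∂μχ, which is exactly why any smooth χ whatsoever leaves Fμν, and hence E and B, unaltered.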
Since only the electric field E and the magnetic field B, and quantities constructed from them, are observable, whereas the vector potential itself is not, nothing physical seems to be changed by a gauge transformation because it leaves E and B unaltered. Note that gauge invariance is a kind of symmetry that does not come about by space-time transformations. In order to link the notion of gauge invariance to the Lagrangian formulation of QFT one needs a more general form of gauge transformations which applies to the field operator φ and which is supplied by (3.7)   φ → eiΛφ,   φ* → e−iΛφ* where Λ is an arbitrary real constant. Equations (3.7) describe a global gauge transformation whereas a local gauge transformation (3.8)   φ(x) → eiα(x)φ(x) varies with x. It turned out that requiring invariance under local gauge transformations supplies a systematic way for finding the equations describing fundamental interactions. For instance, starting with the Lagrangian for a free electron, the requirement of local gauge invariance can only be fulfilled by introducing additional terms, namely those for the electromagnetic field. Gauge invariance can be captured by certain symmetry groups: U(1) for the electromagnetic, SU(2)⊗U(1) for the electroweak and SU(3) for the strong interaction. This is an important basis for unification programs, as is the analogy to general relativity where a local gauge symmetry is associated with the gravitational field. Moreover, it turned out that only gauge invariant quantum field theories are renormalizable. All this can be taken to show that a mathematically rich theory, with surplus structures, can be very valuable in the construction of theories. Auyang (1995) emphasizes the general conceptual significance of invariance principles; Redhead (2002) and Martin (2002) focus specifically on gauge symmetries. 
Healey (2007) and Lyre (2004 and 2012) discuss the ontological significance of gauge theories, among other things concerning the Aharonov-Bohm effect and ontic structural realism. 2.4 Effective Field Theories and Renormalization In the 1970s a program emerged in which the theories of the standard model of elementary particle physics are considered as effective field theories (EFTs) which have a common quantum field theoretical framework. EFTs describe relevant phenomena only in a certain domain since the Lagrangian contains only those terms that describe particles which are relevant for the respective range of energy. EFTs are thus inherently approximative and change with the range of energy considered: they are only applicable on a certain energy scale, i.e., they only describe phenomena in a certain range of energy. Influences from higher-energy processes contribute to average values but they cannot be described in detail. This procedure has no severe consequences since the details of low-energy theories are largely decoupled from higher-energy processes. Both domains are only connected by altered coupling constants, and the renormalization group describes how the coupling constants depend on the energy. The main idea of EFTs is that theories, i.e., in particular the Lagrangians, depend on the energy of the phenomena which are analysed. The physics changes by switching to a different energy scale, e.g., new particles can be created if a certain energy threshold is exceeded. The dependence of theories on the energy scale distinguishes QFT from, e.g., Newton's theory of gravitation, where the same law applies to an apple as well as to the moon. Nevertheless, laws from different energy scales are not completely independent of each other. A central aspect of considerations about this dependence is the consequences of higher-energy processes on the low-energy scale. 
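The "running" of a coupling constant with the energy scale, which the renormalization group describes, can be exhibited with a one-line formula. The following sketch (assuming Python with NumPy) uses the closed-form solution of a one-loop flow equation dα/d ln μ = (2/3π)α², which is the standard one-loop running of a QED-like fine-structure constant with a single charged fermion; the reference values α0 and μ0 are illustrative.

```python
import numpy as np

# One-loop running coupling: d(alpha)/d(ln mu) = b * alpha^2 integrates to
#   alpha(mu) = alpha0 / (1 - b * alpha0 * ln(mu / mu0)),
# so the coupling "constant" in fact depends on the energy scale mu.
b = 2.0 / (3.0 * np.pi)          # QED-like one-loop coefficient
alpha0, mu0 = 1.0 / 137.0, 1.0   # illustrative reference values

def alpha(mu):
    return alpha0 / (1.0 - b * alpha0 * np.log(mu / mu0))

for mu in (1.0, 1e3, 1e6, 1e9):
    print(f"mu = {mu:9.0e}   alpha(mu) = {alpha(mu):.6f}")
```

The monotone growth of α with μ is the quantitative content of the statement above that low- and high-energy domains are connected only through altered coupling constants.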
Against this background a new attitude towards renormalization developed in the 1970s, which revitalizes earlier ideas that divergences result from neglecting unknown processes of higher energies. Low-energy behavior is thus affected by higher-energy processes. Since higher energies correspond to smaller distances, this dependence is to be expected from an atomistic point of view. According to the reductionist program the dynamics of constituents on the microlevel should determine processes on the macrolevel, i.e., here the low-energy processes. However, as, for instance, hydrodynamics shows, in practice theories from different levels are not quite as closely connected, because a law which is applicable on the macrolevel can be largely independent of microlevel details. For this reason analogies with statistical mechanics play an important role in the discussion about EFTs. The basic idea of this new story about renormalization is that the influences of higher-energy processes are localizable in a few structural properties which can be captured by an adjustment of parameters. “In this picture, the presence of infinities in quantum field theory is neither a disaster, nor an asset. It is simply a reminder of a practical limitation—we do not know what happens at distances much smaller than those we can look at directly” (Georgi 1989: 456). This new attitude supports the view that renormalization is the appropriate answer to the change of fundamental interactions when QFT is applied to processes on different energy scales. The price one has to pay is that EFTs are only valid in a limited domain and should be considered as approximations to better theories on higher energy scales. This prompts the important question of whether there is a final fundamental theory in this tower of EFTs which supersede one another with rising energies. Some people conjecture that this deeper theory could be a string theory, i.e., a theory which is not a field theory any more. 
Or should one ultimately expect from physics theories that they are only valid as approximations and in a limited domain? Hartmann (2001) and Castellani (2002) discuss the fate of reductionism vis-à-vis EFTs. Wallace (2011) and Fraser (2011) discuss what the successful application of renormalization methods in quantum statistical mechanics means for their role in QFT, reaching very different conclusions. 3. Beyond the Standard Model The “standard model of elementary particle physics” is sometimes used almost synonymously with QFT. However, there is a crucial difference. While the standard model is a theory with a fixed ontology (understood in a prephilosophical sense), i.e. three fundamental forces and a certain number of elementary particles, QFT is rather a frame, the applicability of which is open. Thus while quantum electrodynamics (or ‘QED’) is a part of the standard model, it is an instance of a quantum field theory (“a quantum field theory”, for short) and not a part of QFT. This section deals with only some particularly important proposals that go beyond the standard model, but which do not necessarily break up the basic framework of QFT. 3.1 Quantum Gravity The standard model of particle physics covers the electromagnetic, the weak and the strong interaction. However, the fourth fundamental force in nature, gravitation, has defied quantization so far. Although numerous attempts have been made in the last 80 years, and in particular very recently, there is no commonly accepted solution up to the present day. One basic problem is that the mass, length and time scales quantum gravity theories are dealing with are so extremely small that it is almost impossible to test the different proposals. The most important extant versions of quantum gravity theories are canonical quantum gravity, loop quantum gravity and string theory. Canonical quantum gravity approaches leave the basic structure of QFT untouched and just extend the realm of QFT by quantizing gravity. 
Other approaches try to reconcile quantum theory and general relativity theory not by supplementing the reach of QFT but rather by changing QFT itself. String theory, for instance, proposes a completely new view concerning the most fundamental building blocks: It does not merely incorporate gravitation but it formulates a new theory that describes all four interactions in a unified way, namely in terms of strings (see next subsection). While quantum gravity theories are very complicated and even more remote from classical thinking than QM, SRT and GRT, it is not so difficult to see why gravitation is far more difficult to deal with than the other three forces. Electromagnetic, weak and strong force all act in a given space-time. In contrast, gravitation is, according to GRT, not an interaction that takes place in time; rather, gravitational forces are identified with the curvature of space-time itself. Thus quantizing gravitation could amount to quantizing space-time, and it is not at all clear what that could mean. One controversial proposal is to deprive space-time of its fundamental status by showing how it “emerges” in some non-spatio-temporal theory. The “emergence” of space-time then means that there are certain derived terms in the new theory that have some formal features commonly associated with space-time. See Kiefer (2007) for physical details, Rickles (2008) for an accessible and conceptually reflected introduction to quantum gravity and Wüthrich (2005) for a philosophical evaluation of the alleged need to quantize the gravitational field. Also, see the entry on quantum gravity. 3.2 String Theory String theory is one of the most promising candidates for bridging the gap between QFT and general relativity theory by supplying a unified theory of all natural forces, including gravitation. The basic idea of string theory is not to take particles as fundamental objects but strings that are very small but extended in one dimension. 
This assumption has the pivotal consequence that strings interact over an extended distance and not at a point. This difference between string theory and standard QFT is essential because it is the reason why string theory also encompasses the gravitational force, which is very difficult to deal with in the framework of QFT. It is so hard to reconcile gravitation with QFT because the typical length scale of the gravitational force is very small, namely at the Planck scale, so that the quantum field theoretical assumption of point-like interaction leads to untreatable infinities. To put it another way, gravitation becomes significant (in particular in comparison to the strong interaction) exactly where QFT is most severely endangered by infinite quantities. The extended interaction of strings makes it possible to avoid such infinities. In contrast to the entities in standard quantum physics, strings are not characterized by quantum numbers but only by their geometrical and dynamical properties. Nevertheless, “macroscopically” strings look like quantum particles with quantum numbers. A basic geometrical distinction is the one between open strings, i.e., strings with two ends, and closed strings, which are like bracelets. The central dynamical property of strings is their mode of excitation, i.e., how they vibrate. Reservations about string theory are mostly due to the lack of testability since it seems that there are no empirical consequences which could be tested by the methods which are, at least up to now, available to us. The reason for this “problem” is that the length scale of strings is on average the same as that of quantum gravity, namely the Planck length of approximately 10^−33 centimeters, which lies far beyond the accessibility of feasible particle experiments. But there are also other peculiar features of string theory which might be hard to swallow. One of them is the fact that string theory implies that space-time has 10, 11 or even 26 dimensions. 
In order to explain the appearance of only four space-time dimensions, string theory assumes that the other dimensions are somehow folded away or “compactified” so that they are no longer visible. An intuitive idea can be gained by thinking of a macaroni, which is a tube, i.e., a two-dimensional piece of pasta rolled together, but which looks from a distance like a one-dimensional string. Despite the problems of string theory, physicists do not abandon this project, partly because many think that, among the numerous alternative proposals for reconciling quantum physics and general relativity theory, string theory is still the best candidate, with “loop quantum gravity” as its strongest rival (see the entry on quantum gravity). Correspondingly, string theory has also received some attention within the philosophy of physics community in recent years. Probably the first philosophical investigation of string theory is Weingard (2001) in Callender & Huggett (2001), an anthology with further related articles. Dawid (2003) (see Other Internet Resources below) argues that string theory has significant consequences for the philosophical debate about realism, namely that it speaks against the plausibility of anti-realistic positions. Also see Dawid (2009). Johansson and Matsubara (2011) assess string theory from various different methodological perspectives, reaching conclusions in disagreement with Dawid (2009). Standard introductory monographs on string theory are Polchinski (2000) and Kaku (1999). Greene (1999) is a very successful popular introduction. An interactive website with a nice elementary introduction is ‘Stringtheory.com’ (see the Other Internet Resources section below). 4. 
Axiomatic Reformulations of QFT 4.1 Deficiencies of the Conventional Formulation of QFT From the 1930s onwards the problem of infinities as well as the potentially heuristic status of the Lagrangian formulation of QFT stimulated the search for reformulations in a concise and eventually axiomatic manner. A number of further aspects intensified the unease about the standard formulation of QFT. The first one is that quantities like the total charge, total energy or total momentum of a field are unobservable since their measurement would have to take place in the whole universe. Accordingly, quantities which refer to infinitely extended regions of space-time should not appear among the observables of the theory, as they do in the standard formulation of QFT. Another problematic feature of standard QFT is the idea that QFT is about field values at points of space-time. The mathematical aspect of the problem is that a field at a point, φ(x), is not an operator in a Hilbert space. The physical counterpart of the problem is that it would require an infinite amount of energy to measure a field at a point of space-time. One way to handle this situation—and one of the starting points for axiomatic reformulations of QFT—is not to consider fields at a point but instead fields which are smeared out in the vicinity of that point using certain functions, so-called test functions. The result is a smeared field φ(f) = ∫φ(x)f(x)dx with supp(f) ⊂ O, where supp(f) is the support of the test function f and O is a bounded open region in Minkowski space-time. The third important problem for standard QFT which prompted reformulations is the existence of inequivalent representations. 
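The smearing construction φ(f) = ∫φ(x)f(x)dx can be illustrated numerically. In the following sketch (assuming Python with NumPy) an ordinary, wildly fluctuating classical function stands in for the field, so only the construction itself, not the operator character of the quantum field, is illustrated; the test function is a smooth bump with compact support around a point x0.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
phi = np.sin(x) + 0.5 * rng.standard_normal(x.size)   # fluctuating "field"

def bump(x, x0, eps):
    """Smooth test function with supp(f) = [x0 - eps, x0 + eps], ∫ f dx = 1."""
    u = (x - x0) / eps
    inside = np.abs(u) < 1.0
    f = np.where(inside, np.exp(-1.0 / np.where(inside, 1.0 - u**2, 1.0)), 0.0)
    return f / (f.sum() * dx)

# The smeared field phi(f) = ∫ phi(x) f(x) dx: a weighted average near x0,
# well defined even though the pointwise values fluctuate violently.
x0, eps = 1.0, 0.3
smeared = np.sum(phi * bump(x, x0, eps)) * dx
print(f"phi(f) near x0 = {x0}: {smeared:.3f}   (compare sin({x0}) = {np.sin(x0):.3f})")
```

The smeared value sits close to the smooth underlying profile even though no single point value does, which is the intuition behind trading the ill-defined φ(x) for the well-defined φ(f).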
In the context of quantum mechanics, Schrödinger, Dirac, Jordan and von Neumann realized that Heisenberg's matrix mechanics and Schrödinger's wave mechanics are just two (unitarily) equivalent representations of the same underlying abstract structure, i.e., an abstract Hilbert space H and linear operators acting on this space. In other words, we are merely dealing with two different ways of representing the same physical reality, and it is possible to switch between these representations by means of a unitary transformation, i.e., an operation that is analogous to an innocuous rotation of the frame of reference. Representations of some given algebra or group are sets of mathematical objects, like numbers, rotations or more abstract transformations (e.g. differential operators), together with a binary operation (e.g. addition or multiplication) that combines any two elements, such that the structure of the algebra or group to be represented is preserved. This means that combining any two elements in the representation space, say a and b, leads to a third element which corresponds to the element that results when one combines the elements corresponding to a and b in the algebra or group that is represented. In 1931 von Neumann gave a detailed proof (of a conjecture by Stone) that the canonical commutation relations (CCRs) for position coordinates and their conjugate momentum coordinates in configuration space fix the representation of these two sets of operators in Hilbert space up to unitary equivalence (von Neumann's uniqueness theorem). This means that the specification of the purely algebraic CCRs suffices to describe a particular physical system. In quantum field theory, however, von Neumann's uniqueness theorem loses its validity since here one is dealing with an infinite number of degrees of freedom. 
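The structure-preservation requirement on a representation, and the idea of unitary equivalence as a mere change of "frame", can both be made concrete in a minimal example. The following sketch (assuming Python with NumPy) represents the cyclic group Z4 by rotation matrices and checks the homomorphism property, then conjugates by a fixed orthogonal matrix to obtain an equivalent representation.

```python
import numpy as np

# Represent each element k of Z_4 (addition mod 4) by the rotation R(k*pi/2)
# and check that the group operation is preserved: R(j) R(k) = R((j+k) mod 4).
def R(k):
    theta = k * np.pi / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

homomorphism = all(np.allclose(R(j) @ R(k), R((j + k) % 4))
                   for j in range(4) for k in range(4))

# An equivalent representation S(k) = U R(k) U^T (U a fixed orthogonal
# matrix) encodes the same structure in a different "frame of reference".
U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((2, 2)))
def S(k):
    return U @ R(k) @ U.T

equivalent = all(np.allclose(S(j) @ S(k), S((j + k) % 4))
                 for j in range(4) for k in range(4))
print(homomorphism and equivalent)  # True
```

What fails in QFT is precisely the analogue of this last step: for infinitely many degrees of freedom, not every pair of irreducible representations of the CCRs is related by such a conjugation.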
Now one is confronted with a multitude of inequivalent irreducible representations of the CCRs, and it is not obvious what this means physically and how one should cope with it. Since the troublesome inequivalent representations of the CCRs that arise in QFT are all irreducible, their inequivalence is not due to the fact that some are reducible while others are not (a representation is reducible if there is an invariant subrepresentation, i.e., a subset which already represents the CCRs on its own). Since inequivalent irreducible representations (IIRs for short) seem to describe different physical states of affairs, it is no longer legitimate to simply choose the most convenient representation, just like choosing the most convenient frame of reference. The acuteness of this problem is not immediately clear, since prima facie it is possible that all but one of the IIRs are physically irrelevant, i.e., mathematical artefacts of a redundant formalism. However, although apparently this applies to most of the available IIRs, it seems that a number of irreducible representations of the CCRs remain that are inequivalent and physically relevant. 4.2 Algebraic Approaches to QFT According to the algebraic point of view, algebras of observables rather than observables themselves in a particular representation should be taken as the basic entities in the mathematical description of quantum physics, thereby avoiding the above-mentioned problems from the outset. In standard QM the algebraic point of view in terms of C*-algebras makes no notable difference to the usual Hilbert space formulation since both formalisms are equivalent. However, in QFT this is no longer the case since the infinite number of degrees of freedom leads to unitarily inequivalent irreducible representations of a C*-algebra. Thus sticking to the usual Hilbert space formulation tacitly implies choosing one particular representation. 
The notion of C*-algebras, introduced abstractly by Gelfand and Neumark in 1943 and named this way by Segal in 1947, generalizes the notion of the algebra B(H) of all bounded operators on a Hilbert space H, which is also the most important example for a C*-algebra. In fact, it can be shown that any C*-algebra is isomorphic to a (norm-closed, self-adjoint) algebra of bounded operators on a Hilbert space. The boundedness (and self-adjointness) of the operators is the reason why C*-algebras are considered as ideal for representing physical observables. The 'C' indicates that one is dealing with a complex vector space and the '*' refers to the operation that maps an element A of an algebra to its involution (or adjoint) A*, which generalizes the conjugate complex of complex numbers to operators. This involution is needed in order to define the crucial norm property of C*-algebras, which is of central importance for the proof of the above isomorphism claim. Another point where algebraic formulations are advantageous derives from the fact that two quantum fields are physically equivalent when they generate the same algebras of local observables. Such equivalent quantum field theories belong to the same so-called Borchers class which entails that they lead to the same S-matrix. As Haag (1996) stresses, fields are only an instrument in order to “coordinatize” observables, more precisely: sets of observables, with respect to different finite space-time regions. The choice of a particular field system is to a certain degree conventional, namely as long as it belongs to the same Borchers class. Thus it is more appropriate to consider these algebras, rather than quantum fields, as the fundamental entities in QFT. A prominent attempt to axiomatise QFT is Wightman's field axiomatics from the early 1950s. Wightman imposed axioms on polynomial algebras P(O) of smeared fields, i.e., sums of products of smeared fields in finite space-time regions O. 
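The crucial C*-norm property mentioned above, ||A*A|| = ||A||², can be checked numerically for the paradigm case B(H) with finite-dimensional H. The following sketch (assuming Python with NumPy) does so for a random complex matrix standing in for an algebra element, using the operator norm (largest singular value).

```python
import numpy as np

# C*-norm identity ||A* A|| = ||A||^2 for the operator norm on B(H),
# H finite-dimensional; A* is the adjoint (conjugate transpose).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

def op_norm(M):
    return np.linalg.norm(M, 2)   # largest singular value

lhs = op_norm(A.conj().T @ A)
rhs = op_norm(A) ** 2
print(np.isclose(lhs, rhs))  # True
```

The identity holds because A*A is a positive operator whose norm equals the square of the largest singular value of A; it is this interplay of involution and norm that the abstract C*-axioms single out.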
A crucial point of this approach is replacing the mapping x → φ(x) by O → P(O). While the usage of unbounded field operators makes Wightman's approach mathematically cumbersome, Algebraic Quantum Field Theory (AQFT)—arguably the most successful attempt to reformulate QFT axiomatically—employs only bounded operators. AQFT originated in the late 1950s with the work of Haag and quickly advanced in collaboration with Araki and Kastler. AQFT itself exists in two versions, concrete AQFT (Haag-Araki) and abstract AQFT (Haag-Kastler, 1964). The concrete approach uses von Neumann algebras (or W*-algebras), the abstract one C*-algebras. The adjective ‘abstract’ refers to the fact that in this approach the algebras are characterized in an abstract fashion and not by explicitly using operators on a Hilbert space. In standard QFT, the CCRs together with the field equations can be used for the same purpose, i.e., an abstract characterization. One common aim of these axiomatizations of QFT is avoiding the usual approximations of standard QFT. However, trying to do this in a strictly axiomatic way, one only gets ‘reformulations’ which are not as rich as standard QFT. As Haag (1996) concedes, the “algebraic approach […] has given us a frame and a language not a theory”. 4.3 Basic Ideas of AQFT One of the crucial ideas of AQFT is taking so-called nets of algebras as basic for the mathematical description of a quantum physical system. A decade earlier, Segal (1947) used a single C*-algebra—generated by all bounded operators—and dismissed the availability of inequivalent representations as irrelevant to physics. Against this approach Haag argued that inequivalent representations can be understood physically by realizing that the important physical information in a quantum field theory is not contained in individual algebras but in the net of algebras, i.e., in the mapping O → A(O) from finite space-time regions to algebras of local observables. 
The crucial point is that it is not necessary to specify observables explicitly in order to fix physically meaningful quantities. The very way in which algebras of local observables are linked to space-time regions is sufficient to supply observables with physical significance. It is the partition of the algebra Aloc of all local observables into subalgebras which contains physical information about the observables, i.e., it is the net structure of algebras which matters. Physically the most important notion of AQFT is the principle of locality, which has an external as well as an internal aspect. The external aspect is the fact that AQFT considers only observables connected with finite regions of space-time and not global observables like the total charge or the total energy momentum vector, which refer to infinite space-time regions. This approach was motivated by the operationalistic view that QFT is a statistical theory about local measurement outcomes, with all the experimental information coming from measurements in finite space-time regions. Accordingly everything is expressed in terms of local algebras of observables. The internal aspect of locality is that there is a constraint on the observables of such local algebras: All observables of a local algebra connected with a space-time region O are required to commute with all observables of another algebra which is associated with a space-time region O′ that is space-like separated from O. This principle of (Einstein) causality is the main relativistic ingredient of AQFT. The basic structure upon which the assumptions or conditions of AQFT are imposed are local observables, i.e., self-adjoint elements in local (non-commutative) von Neumann algebras, and physical states, which are identified as positive, linear, normalized functionals which map elements of local algebras to real numbers. States can thus be understood as assignments of expectation values to observables. 
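The internal, commutativity aspect of locality described above has a simple finite-dimensional analogue. In the following sketch (assuming Python with NumPy), observables attached to two disjoint "regions" are modeled as operators acting on different tensor factors of a composite Hilbert space; this is only an analogy for the algebraic situation, not a construction of local algebras in Minkowski space-time.

```python
import numpy as np

# Observables of the form A ⊗ 1 and 1 ⊗ B act on different tensor factors
# and therefore commute identically, the relation demanded of observables
# attached to space-like separated regions O and O'.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)); A = A + A.T   # Hermitian "local" observable
B = rng.standard_normal((3, 3)); B = B + B.T

A_full = np.kron(A, np.eye(3))   # acts only on the first factor
B_full = np.kron(np.eye(2), B)   # acts only on the second factor

comm = A_full @ B_full - B_full @ A_full
print(np.allclose(comm, 0))  # True
```

In AQFT the commutativity of A(O) and A(O′) for space-like separated regions plays exactly this role: measurements in the one region place no constraint on measurements in the other.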
One can group the assumptions of AQFT into relativistic axioms, such as locality and covariance, general physical assumptions, like isotony and the spectrum condition, and finally technical assumptions which are closely related to the mathematical formulation. As a reformulation of QFT, AQFT is expected to reproduce the main phenomena of QFT, in particular properties which are characteristic of its being a field theory, like the existence of antiparticles, internal quantum numbers, the relation of spin and statistics, etc. That this aim could not be achieved on a purely axiomatic basis is partly due to the fact that the connection between the respective key concepts of AQFT and QFT, i.e., observables and quantum fields, is not sufficiently clear. It turned out that the main link between observable algebras and quantum fields are superselection rules, which put restrictions on the set of all observables and allow for classification schemes in terms of permanent or essential properties. Introductions to AQFT are provided by the monographs Haag (1996) and Horuzhy (1990) as well as the overview articles Haag & Kastler (1964), Roberts (1990) and Buchholz (1998). Streater & Wightman (1964) is an early pioneering monograph on axiomatic QFT. Bratteli & Robinson (1979) emphasize mathematical aspects. 4.4 AQFT and the Philosopher In recent years, QFT has received a lot of attention in the philosophy of physics. Most philosophers who engage in that debate rest their considerations on AQFT; for instance, see Baker (2009), Baker & Halvorson (2010), Earman & Fraser (2006), Fraser (2008, 2009, 2011), Halvorson & Müger (2007), Kronz & Lupher (2005), Kuhlmann (2010a, 2010b), Lupher (2010), Rédei & Valente (2010) and Ruetsche (2002, 2003, 2006, 2011). While most philosophers of physics who are skeptical about this approach remained largely silent, Wallace (2006, 2011) launched an eloquent attack on the predominance of AQFT for foundational studies about QFT. 
To be sure, Wallace emphasizes, his critique is not directed against the use of algebraic methods, e.g., when studying inequivalent representations. Rather, he aims at AQFT as a physical theory, regarded as a rival to conventional QFT (CQFT). In his evaluation, viewed from the 21st century, one has to state that CQFT succeeded, while AQFT failed, so that “to be lured away from the Standard Model by [AQFT] is sheer madness” (Wallace 2011: 124). So what may justify this drastic conclusion? On the one hand, Wallace points out that the problem of ultraviolet divergences, which initiated the search for alternative approaches in the 1950s, was eventually solved in CQFT via renormalization group techniques. On the other hand, AQFT never succeeded in finding realistic interacting quantum field theories in four dimensions (such as QED) that fit into its framework. Fraser (2009, 2011) is most actively engaged in defending AQFT against Wallace's assault. She argues (2009) that consistency plays a central role in choosing between different formulations of QFT since they do not differ in their respective empirical success, and AQFT fares better in this respect. Moreover, Fraser (2011) questions Wallace's crucial point in defense of CQFT, namely that the empirically successful application of renormalization group techniques in QFT removes all doubts about CQFT: The fact that renormalization in condensed matter physics and QFT are formally similar does not license Wallace's claim that there are also physical similarities concerning the freezing out of degrees of freedom at very small length scales. And if that physical analogy cannot be sustained, then the empirical success of renormalization in CQFT leaves the physical reasons for this success in the dark, in contrast to the case of condensed matter physics, where the physical basis for the empirical success of renormalization is intelligible, namely the fact that matter is discrete at atomic length scales. 
As a consequence, despite the formal analogy with renormalization in condensed matter physics, the empirical success of renormalization in CQFT does not, as Wallace claims, discredit the idea of working with arbitrarily small regions of spacetime, as is done in AQFT. Kuhlmann (2010b) also advocates AQFT as the prime object for foundational studies, focusing on ontological considerations. He argues that for matters of ontology AQFT is to be preferred over CQFT because, like ontology itself, AQFT strives for a clear separation of fundamental and derived entities and a parsimonious selection of basic assumptions. CQFT, on the other hand, is a grown formalism that is very good for calculations but obscures foundational issues. Moreover, Kuhlmann contends that AQFT and CQFT should not be regarded as rival research programs. Nowadays, at the very least, AQFT is not meant to replace CQFT, despite the “kill it or cure it” slogan (Streater and Wightman 1964: 1, cited by Wallace 2011: 117). AQFT is suited and designed to illuminate the basic structure of QFT, but it is not and never will be the appropriate framework for the working physicist.

5. Philosophical Issues

5.1 Setting the Stage: Candidate Ontologies

Ontology is concerned with the most general features, entities and structures of being. One can pursue ontology in a very general sense or with respect to a particular theory or a particular part or aspect of the world. With respect to the ontology of QFT one is tempted to more or less dismiss ontological inquiries and to adopt the following straightforward view. There are two groups of fundamental fermionic matter constituents, two groups of bosonic force carriers and four (including gravitation) kinds of interactions. As satisfying as this answer might first appear, the ontological questions are, in a sense, not even touched.
Saying that, for instance, the down quark is a fundamental constituent of our material world is the starting point rather than the end of the (philosophical) search for an ontology of QFT. The main question is what kind of entity, e.g., the down quark is. The answer does not depend on whether we think of down quarks or muon neutrinos, since the sought features are much more general than the ones which constitute the difference between down quarks and muon neutrinos. The relevant questions are of a different type. What are particles at all? Can quantum particles be legitimately understood as particles any more, even in the broadest sense, when we take, e.g., their localization properties into account? How can one spell out what a field is, and can “quantum fields” in fact be understood as fields? Could it be more appropriate not to think of, e.g., quarks as the most fundamental entities at all, but rather of properties or processes or events?

5.1.1 The Particle Interpretation

Many of the creators of QFT can be found in one of the two camps regarding the question whether particles or fields should be given priority in understanding QFT. While Dirac, the later Heisenberg, Feynman, and Wheeler opted in favor of particles, Pauli, the early Heisenberg, Tomonaga and Schwinger put fields first (see Landsman 1996). Today, there are a number of arguments which prepare the ground for a proper discussion beyond mere preferences.

The Particle Concept

It seems almost impossible to talk about elementary particle physics, or QFT more generally, without thinking of particles which are accelerated and scattered in colliders. Nevertheless, it is this very interpretation which is confronted with the most fully developed counter-arguments. There still is the option to say that our classical concept of a particle is too narrow and that we have to loosen some of its constraints.
After all, even in classical corpuscular theories of matter the concept of an (elementary) particle is not as unproblematic as one might expect. For instance, if the whole charge of a particle were contracted to a point, an infinite amount of energy would be stored in this particle, since the repulsive forces become infinitely large when two charges with the same sign are brought together. The so-called self-energy of a point particle is infinite. Probably the most immediate trait of particles is their discreteness. Particles are countable or ‘aggregable’ entities, in contrast to a liquid or a mass. Obviously this characteristic alone cannot constitute a sufficient condition for being a particle, since there are other things which are countable as well without being particles, e.g., money or the maxima and minima of the standing wave of a vibrating string. It seems that one also needs individuality, i.e., it must be possible to say that it is this or that particle which has been counted, in order to account for the fundamental difference between ups and downs in a wave pattern and particles. Teller (1995) discusses a specific conception of individuality, primitive thisness, as well as other possible features of the particle concept, in comparison to classical concepts of fields and waves as well as to the concept of field quanta, which is the basis for the interpretation that Teller advocates. A critical discussion of Teller's reasoning can be found in Seibt (2002). Moreover, there is an extensive debate on the individuality of quantum objects in quantum mechanical systems of ‘identical particles’. Since this discussion concerns QM in the first place, and not QFT, any further details shall be omitted here. French and Krause (2006) offer a detailed analysis of the historical, philosophical and mathematical aspects of the connection between quantum statistics, identity and individuality. See Dieks and Lubberdink (2011) for a critical assessment of the debate.
Also consult the entry on quantum theory: identity and individuality. There is still another feature which is commonly taken to be pivotal for the particle concept, namely that particles are localizable in space. While it is clear from classical physics already that the requirement of localizability need not refer to point-like localization, we will see that even localizability in an arbitrarily large but still finite region can be a strong condition for quantum particles. Bain (2011) argues that the classical notions of localizability and countability are inappropriate requirements for particles if one is considering a relativistic theory such as QFT. Moreover, there are some potential ingredients of the particle concept which are explicitly opposed to the corresponding (and therefore opposite) features of the field concept. Whereas it is a core characteristic of a field that it is a system with an infinite number of degrees of freedom, the very opposite holds for particles. A particle can for instance be referred to by the specification of the coordinates x(t) that pertain, e.g., to its center of mass—presupposing impenetrability. A further feature of the particle concept is connected to the last point and again explicitly in opposition to the field concept. In a pure particle ontology the interaction between remote particles can only be understood as an action at a distance. In contrast to that, in a field ontology, or a combined ontology of particles and fields, local action is implemented by mediating fields. Finally, classical particles are massive and impenetrable, again in contrast to (classical) fields.

Why QFT Seems to be About Particles

The easiest way to quantize the electromagnetic (or: radiation) field consists of two steps. First, one Fourier analyses the vector potential of the classical field into normal modes (using periodic boundary conditions) corresponding to an infinite but denumerable number of degrees of freedom.
Second, since each mode is described independently by a harmonic oscillator equation, one can apply the harmonic oscillator treatment from non-relativistic quantum mechanics to each single mode. The result for the Hamiltonian of the radiation field is

(2.1)    Hrad = Σk Σr ℏωk ( a†r(k) ar(k) + 1/2 ),

where ar(k) and a†r(k) are operators which satisfy the following commutation relations

(2.2)    [ar(k), a†s(k′)] = δrs δkk′
         [ar(k), as(k′)] = [a†r(k), a†s(k′)] = 0,

with the index r labeling the polarisation. These commutation relations imply that one is dealing with a bosonic field. The operators a†r(k) and ar(k) have interesting physical interpretations as so-called particle creation and annihilation operators. In order to see this, one has to examine the eigenvalues of the operators

(2.3)    Nr(k) = a†r(k) ar(k),

which are the essential parts in Hrad. Due to the commutation relations (2.2) one finds that the eigenvalues of Nr(k) are the integers nr(k) = 0, 1, 2, … and the corresponding eigenfunctions (up to a normalisation factor) are

(2.4)    |nr(k)⟩ = [a†r(k)]^nr(k) |0⟩,

where the right hand side means that a†r(k) operates nr(k) times on |0⟩, the state vector of the vacuum with no photons present. The interpretation of these results parallels that of the harmonic oscillator. a†r(k) is interpreted as the creation operator of a photon with momentum ℏk and energy ℏωk (and a polarisation which depends on r and k). That is, equation (2.4) can be understood in the following way: one gets a state with nr(k) photons of momentum ℏk and energy ℏωk when the creation operator a†r(k) operates nr(k) times on the vacuum state |0⟩. Accordingly, Nr(k) is called the number operator and nr(k) the ‘occupation number’ of the mode that is specified by k and r, i.e., this mode is occupied by nr(k) photons. Note that Pauli's exclusion principle is not violated since it only applies to fermions and not to bosons like photons.
The corresponding interpretation for the annihilation operator ar(k) is parallel: when it operates on a state with a given number of photons, this number is lowered by one. It is a widespread view that these results complete “the justification for interpreting N(k) as the number operator, and hence for the particle interpretation of the quantized theory” (Ryder 1996: 131). This is a rash judgement, however. For instance, the question of localizability is not even touched, while it is certain that this is a pivotal criterion for something to be a particle. All that is established so far is that certain mathematical quantities in the formalism are discrete. However, countability is merely one feature of particles and not yet conclusive evidence for a particle interpretation of QFT. It is not clear at this stage whether we are in fact dealing with particles or with fundamentally different objects which only have this one feature of discreteness in common with particles. Teller (1995) argues that the Fock space or “occupation number” representation does support a particle ontology in terms of field quanta, since these can be counted or aggregated, although not numbered. The degree of excitation of a certain mode of the underlying field determines the number of objects, i.e. the particles in the sense of quanta. Labels for individual particles like in the Schrödinger many-particle formalism do not occur any more, which is the crucial deviation from the classical notion of particles. However, despite this deviation, says Teller, quanta should be regarded as particles: besides their countability, another fact that supports seeing quanta as particles is that they have the same energies as classical particles.
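The single-mode operator algebra behind equations (2.1)–(2.4) can be made concrete numerically in a truncated Fock space. The following sketch is not part of the original text; the cutoff dimension D is an arbitrary choice, and the truncation introduces an edge artifact that the code works around. It represents a and a† as matrices in the number basis and checks the commutation relation and the occupation numbers:

```python
import numpy as np

D = 8  # arbitrary cutoff dimension for the Fock space of one mode (k, r)

# Annihilation operator a with a|n> = sqrt(n) |n-1>, as a matrix in the
# number basis {|0>, ..., |D-1>}; its transpose is the creation operator a†.
a = np.diag(np.sqrt(np.arange(1, D)), k=1)
a_dag = a.T

# Number operator N = a† a, cf. (2.3); its eigenvalues are 0, 1, 2, ...
N = a_dag @ a

# The bosonic commutation relation [a, a†] = 1 from (2.2) holds exactly,
# except at the truncation edge (an artifact of the finite cutoff).
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:D-1, :D-1], np.eye(D - 1)))  # True

# Building the state (a†)^3 |0> as in (2.4) yields an eigenstate of N
# with occupation number 3, i.e. "three photons in this mode".
vac = np.zeros(D)
vac[0] = 1.0
state = np.linalg.matrix_power(a_dag, 3) @ vac
state = state / np.linalg.norm(state)  # the normalisation factor of (2.4)
print(np.allclose(N @ state, 3 * state))  # True
```

The check illustrates exactly the point made above: what the formalism delivers is discrete eigenvalues of N, i.e. countability, and nothing more.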
Teller has been criticized for drawing unduly far-reaching ontological conclusions from one particular representation, in particular since the Fock space representation cannot be appropriate in general because it is only valid for free particles (see, e.g., Fraser 2008). In order to avoid this problem Bain (2000) proposes an alternative quanta interpretation that rests on the notion of asymptotically free states in scattering theory. For a further discussion of the quanta interpretation see the subsection on inequivalent representations below. The vacuum state |0⟩ is the energy ground state, i.e., the eigenstate of the energy operator with the lowest eigenvalue. It is a remarkable result in ordinary non-relativistic QM that the ground state energy of, e.g., the harmonic oscillator is not zero, in contrast to its analogue in classical mechanics. In addition to this, the relativistic vacuum of QFT has the even more striking feature that the expectation values for various quantities do not vanish, which prompts the question of what it is that has these values or gives rise to them if the vacuum is taken to be the state with no particles present. If particles were the basic objects of QFT, how can it be that there are physical phenomena even if nothing is there according to this very ontology? Finally, studies of QFT in curved space-time indicate that the existence of a particle number operator might be a contingent property of flat Minkowski space-time, because Poincaré symmetry is used to pick out a preferred representation of the canonical commutation relations, which is equivalent to picking out a preferred vacuum state (see Wald 1994). Before exploring whether other (potentially) necessary requirements for the applicability of the particle concept are fulfilled, let us see what the alternatives are. Proceeding this way makes it easier to evaluate the force of the following arguments in a more balanced manner.
5.1.2 The Field Interpretation

Since various arguments seem to speak against a particle interpretation, the allegedly only alternative, namely a field interpretation, is often taken to be the appropriate ontology of QFT. So let us see what a physical field is and why QFT may be interpreted in this sense. A classical point particle can be described by its position x(t) and its momentum p(t), which change as the time t progresses. So there are six degrees of freedom for the motion of a point particle, corresponding to the three coordinates of the particle's position and three more coordinates for its momentum. In the case of a classical field one has an independent value for each single point x in space, where this specification changes as time progresses. The field value φ can be a scalar quantity, like temperature, a vectorial one, as for the electromagnetic field, or a tensor, such as the stress tensor for a crystal. A field is therefore specified by a time-dependent mapping from each point of space to a field value φ(x,t). Thus a field is a system with an infinite number of degrees of freedom, which may be constrained by some field equations. Whereas the intuitive notion of a field is that it is something transient and fundamentally different from matter, it can be shown that it is possible to ascribe energy and momentum to a pure field even in the absence of matter. This somewhat surprising fact shows how gradual the distinction between fields and matter can be. The transition from a classical field theory to a quantum field theory is characterized by the occurrence of operator-valued quantum fields φ̂(x,t), and corresponding conjugate fields, for both of which certain canonical commutation relations hold.
Thus there is an obvious formal analogy between classical and quantum fields: in both cases field values are attached to space-time points, where these values are specified by real numbers in the case of classical fields and operators in the case of quantum fields. That is, the mapping x ↦ φ̂(x,t) in QFT is analogous to the classical mapping x ↦ φ(x,t). Due to this formal analogy it appears to be beyond any doubt that QFT is a field theory. But is a systematic association of certain mathematical terms with all points in space-time really enough to establish a field theory in a proper physical sense? Is it not essential for a physical field theory that some kind of real physical properties are allocated to space-time points? This requirement seems not to be fulfilled in QFT, however. Teller (1995: ch. 5) argues that the expression quantum field is only justified on a “perverse reading” of the notion of a field, since no definite physical values whatsoever are assigned to space-time points. Instead, quantum field operators represent the whole spectrum of possible values, so that they rather have the status of observables (Teller: “determinables”) or general solutions. Only a specific configuration, i.e. an ascription of definite values to the field observables at all points in space, can count as a proper physical field. There are at least four proposals for a field interpretation of QFT, all of which respect the fact that the operator-valuedness of quantum fields impedes their direct reading as physical fields. (i) Teller (1995) argues that definite physical quantities emerge when not only the quantum field operators but also the state of the system is taken into account. More specifically, for a given state |ψ⟩ one can calculate the expectation values ⟨ψ|φ(x)|ψ⟩, which yields an ascription of definite physical values to all points x in space and thus a configuration of the operator-valued quantum field that may be seen as a proper physical field.
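For a single field mode, the recipe of proposal (i) can be illustrated with a toy calculation (a hypothetical sketch, not from the text; the truncated dimension and the chosen states are illustrative assumptions): the "definite value" assigned is the expectation value of the field-amplitude operator in a given state.

```python
import numpy as np

D = 10  # arbitrary truncation of the one-mode Fock space
a = np.diag(np.sqrt(np.arange(1, D)), k=1)  # annihilation operator
phi = (a + a.T) / np.sqrt(2)                # field-amplitude operator of the mode

# Two states: the vacuum |0>, and the superposition (|0> + |1>)/sqrt(2).
vac = np.zeros(D)
vac[0] = 1.0
psi = np.zeros(D)
psi[0] = psi[1] = 1 / np.sqrt(2)

# Proposal (i): the definite physical value is the expectation <psi|phi|psi>.
print(vac @ phi @ vac)  # 0.0: the vacuum assigns the value zero
print(psi @ phi @ psi)  # 1/sqrt(2), approximately 0.707, for the superposition
```

Note that this single number is a statistical average over many measurements, which is precisely the difficulty for proposal (i) raised in the next paragraph.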
The main problem with proposal (i), and possibly with (ii), too, is that an expectation value is the average value of a whole sequence of measurements, so that it does not qualify as the physical property of any actual single field system, no matter whether this property is a pre-existing (or categorical) value or a propensity (or disposition). (ii) The vacuum expectation value or VEV interpretation, advocated by Wayne (2002), exploits a theorem by Wightman (1956). According to this reconstruction theorem, all the information that is encoded in quantum field operators can be equivalently described by an infinite hierarchy of n-point vacuum expectation values, namely the expectation values of all products of quantum field operators at n (in general different) space-time points, calculated for the vacuum state. Since this collection of vacuum expectation values comprises only definite physical values, it qualifies as a proper field configuration, and, Wayne argues, due to Wightman's theorem, so does the equivalent set of quantum field operators. Thus, and this is the upshot of Wayne's argument, an ascription of quantum field operators to all space-time points does by itself constitute a field configuration, namely for the vacuum state, even if this is not the actual state. But this is also a problem for the VEV interpretation: while it shows nicely that much more information is encoded in the quantum field operators than just, unspecifically, what could be measured, it still does not yield anything like an actual field configuration. While this last requirement is likely to be too strong in a quantum theoretical context anyway, the next proposal may come at least somewhat closer to it. (iii) In recent years the term wave functional interpretation has become established as the name for the default field interpretation of QFT.
Correspondingly, it is the most widely discussed extant proposal; see, e.g., Huggett (2003), Halvorson and Müger (2007), Baker (2009) and Lupher (2010). In effect, it is not very different from proposal (i), and with further assumptions for (i) even identical. However, proposal (iii) phrases things differently and in a very appealing way. The basic idea is that quantized fields should be interpreted completely analogously to quantized one-particle states, just as both result analogously from imposing canonical commutation relations on the non-operator-valued classical quantities. In the case of a quantum mechanical particle, its state can be described by a wave function ψ(x), which maps positions to probability amplitudes, where |ψ(x)|2 can be interpreted as the probability for the particle to be measured at position x. For a field, the analogue of positions are classical field configurations φ(x), i.e. assignments of field values to points in space. And so, the analogy continues, just as a quantum particle is described by a wave function that maps positions to probabilities (or rather probability amplitudes) for the particle to be measured at x, quantum fields can be understood in terms of wave functionals ψ[φ(x)] that map functions to numbers, namely classical field configurations φ(x) to probability amplitudes, where |ψ[φ(x)]|2 can be interpreted as the probability for a given quantum field system to be found in configuration φ(x) when measured. Thus, just as a quantum state in ordinary single-particle QM can be interpreted as a superposition of classical localized particle states, the state of a quantum field system, so says the wave functional approach, can be interpreted as a superposition of classical field configurations. And what superpositions mean depends on one's general interpretation of quantum probabilities (collapse with propensities, Bohmian hidden variables, branching Everettian many-worlds, …).
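The wave functional picture can be sketched numerically for a free scalar field on a small spatial lattice, where the vacuum wave functional is a Gaussian in the classical configurations. This is a minimal sketch under standard lattice conventions, not part of the original text; the lattice size, the mass, and the sample configurations are arbitrary choices.

```python
import numpy as np

# Free scalar field on N lattice sites with periodic boundary conditions.
N, m = 6, 1.0
K = (m**2 + 2) * np.eye(N)           # quadratic form: phi^T K phi / 2
for i in range(N):                    # nearest-neighbour gradient coupling
    K[i, (i + 1) % N] -= 1.0
    K[(i + 1) % N, i] -= 1.0

# The free-field vacuum wave functional is Gaussian,
# psi[phi] proportional to exp(-phi^T Omega phi / 2), with Omega = K^(1/2),
# computed here via the eigendecomposition of K.
evals, evecs = np.linalg.eigh(K)
Omega = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

def psi(phi):
    """Unnormalized amplitude assigned to a classical field configuration phi."""
    return np.exp(-0.5 * phi @ Omega @ phi)

# |psi[phi]|^2 compares how probable different classical configurations are:
flat = np.zeros(N)   # the field vanishing everywhere
bumpy = np.ones(N)   # a constant nonzero field
print(psi(flat) ** 2 > psi(bumpy) ** 2)  # True: the flat configuration is favoured
```

Measuring such a system in the configuration basis would sample classical configurations φ with probability |ψ[φ]|², which is exactly the reading of the wave functional described above.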
In practice, however, QFT is hardly ever represented in wave functional space because usually there is little interest in measuring field configurations. Rather, one tries to measure ‘particle’ states and therefore works in Fock space. (iv) For a modification of proposal (iii), indicated in Baker (2009: sec. 5) and explicitly formulated as an alternative interpretation by Lupher (2010), see the end of the section “Non-Localizability Theorems” below.

5.1.3 Ontic Structural Realism

The multitude of problems for particle as well as field interpretations prompted a number of alternative ontological approaches to QFT. Auyang (1995) and Dieks (2002) propose different versions of event ontologies. Seibt (2002) and Hättich (2004) defend process-ontological accounts of QFT, which are scrutinized in Kuhlmann (2002, 2010a: ch. 10). In recent years, however, ontic structural realism (OSR) has become the most fashionable ontological framework for modern physics. While so far the vast majority of studies concentrates on ordinary QM and General Relativity Theory, it seems to be commonly believed among advocates of OSR that their case is even stronger regarding QFT, in light of the paramount significance of symmetry groups (also see below)—hence the name group structural realism (Roberts 2010). Explicit arguments are few and far between, however. One of the rare arguments in favor of OSR that deal specifically with QFT is due to Kantorovich (2003), who opts for a Platonic version of OSR, a position that is otherwise not very popular among OSRists. Kantorovich argues that directly after the big bang “the world was baryon-free, whereas the symmetry of grand unification existed as an abstract structure” (p. 673). Cao (1997b) points out that the best ontological access to QFT is gained by concentrating on structural properties rather than on any particular category of entities.
Cao (2010) advocates a “constructive structural realism” on the basis of a detailed conceptual investigation of the formation of quantum chromodynamics. However, Kuhlmann (2011) shows that Cao's position has little to do with what is usually taken to be ontic structural realism, and that it is not even clear whether it should at least be rated as an epistemic variant of structural realism. Lyre (2004) argues that the central significance of gauge theories in modern physics supports structural realism, and offers a case study concerning the U(1) gauge symmetry group, which characterizes QED. Recently Lyre (2012) has been advocating an intermediate form of OSR, which he calls “Extended OSR (ExtOSR)”, according to which there are not only relational structural properties but also structurally derived intrinsic properties, namely the invariants of structure: mass, spin, and charge. Lyre claims that only ExtOSR is in a position to account for gauge theories. Moreover, it can make sense of zero-value properties, such as the zero mass of photons. See Section 4.2 (OSR and Quantum Field Theory) in the SEP entry on structural realism.

5.1.4 Trope Ontology

Kuhlmann (2010a) proposes a Dispositional Trope Ontology (DTO) as the most appropriate ontological reading of the basic structure of QFT, in particular in its algebraic formulation, AQFT. The term ‘trope’ refers to a conception of properties that breaks with tradition by regarding properties as particulars rather than repeatables (or ‘universals’). This new conception of properties permits analyzing objects as pure bundles of properties/tropes without excluding the possibility of having different objects with (qualitatively but not numerically) exactly the same properties.
One of Kuhlmann's crucial points is that (A)QFT speaks in favor of a bundle conception of objects because the net structure of observable algebras alone (see the section “Basic Ideas of AQFT” above) encodes the fundamental features of a given quantum field theory, e.g. its charge structure. In the DTO approach, the essential properties/tropes of a trope bundle are then identified with the defining characteristics of a superselection sector, such as different kinds of charges, mass and spin. Since these properties cannot change by any state transition, they guarantee the object's identity over time. Superselection sectors are inequivalent irreducible representations of the algebra of all quasi-local observables. While the essential properties/tropes of an object are permanent, its non-essential ones may change. Since we are dealing with quantum physical systems, many properties are dispositions (or propensities); hence the name dispositional trope ontology. A trope bundle is not individuated via spatio-temporal co-localization but by the particularity of its constitutive tropes. Morganti (2009) also advocates a trope-ontological reading of QFT, which refers directly to the classification scheme of the Standard Model.

5.2 Did Wigner Define the Particle Concept?

Wigner's (1939) famous analysis of the Poincaré group is often assumed to provide a definition of elementary particles. The main idea of Wigner's approach is the supposition that each irreducible (projective) representation of the relevant space-time symmetry group yields the state space of one kind of elementary physical system, where the prime example is an elementary particle, which has the more restrictive property of being structureless. The physical justification for linking up irreducible representations with elementary systems is the requirement that “there must be no relativistically invariant distinction between the various states of the system” (Newton & Wigner 1949).
In other words, the state space of an elementary system shall have no internal structure with respect to relativistic transformations. Put more technically, the state space of an elementary system must not contain any relativistically invariant subspaces, i.e., it must be the state space of an irreducible representation of the relevant invariance group. If the state space of an elementary system had relativistically invariant subspaces, then it would be appropriate to associate these subspaces with elementary systems. The requirement that a state space has to be relativistically invariant means that, starting from any of its states, it must be possible to get to all the other states by superposition of those states which result from relativistic transformations of the state one started with. The main part of Wigner's analysis consists in finding and classifying all the irreducible representations of the Poincaré group. Doing that involves finding relativistically invariant quantities that serve to classify the irreducible representations. Wigner's pioneering identification of types of particles with irreducible unitary representations of the Poincaré group has remained exemplary until the present, as is emphasized, e.g., in Buchholz (1994). For an alternative perspective focusing on “Wigner's legacy” for ontic structural realism see Roberts (2011). Regarding the question whether Wigner has supplied a definition of particles, one must say that although Wigner has in fact found a highly valuable and fruitful classification of particles, his analysis does not contribute very much to the question of what a particle is and whether a given theory can be interpreted in terms of particles. What Wigner has given is rather a conditional answer. If relativistic quantum mechanics can be interpreted in terms of particles, then the possible types of particles and their invariant properties can be determined via an analysis of the irreducible unitary representations of the Poincaré group.
However, the question whether, and if so in what sense, at least relativistic quantum mechanics can be interpreted as a particle theory at all is not addressed in Wigner's analysis. For this reason the discussion of the particle interpretation of QFT is not finished with Wigner's analysis, as one might be tempted to say. For instance, the pivotal question of the localizability of particle states, to be discussed below, is still open. Moreover, once interactions are included, Wigner's classification is no longer applicable (see Bain 2000). Kuhlmann (2010a: sec. 8.1.2) offers an accessible introduction to Wigner's analysis and discusses its interpretive relevance.

5.3 Non-Localizability Theorems

The observed ‘particle traces’, e.g., on photographic plates or in bubble chambers, seem to be a clear indication for the existence of particles. However, the theory which has been built on the basis of these scattering experiments, QFT, turns out to have considerable problems accounting for the observed ‘particle trajectories’. Not only are sharp trajectories excluded by Heisenberg's uncertainty relations for position and momentum coordinates, which hold already for non-relativistic quantum mechanics. More advanced examinations in AQFT show that ‘quantum particles’ which behave according to the principles of relativity theory cannot be localized in any bounded region of space-time, no matter how large, a result which excludes even tube-like trajectories. It thus appears to be impossible that our world is composed of particles if we assume that localizability is a necessary ingredient of the particle concept. So far there is no single unquestioned argument against the possibility of a particle interpretation of QFT, but the problems are piling up.
Reeh & Schlieder, Hegerfeldt, Malament and Redhead all obtained mathematical results, or formalized their interpretations, which prove that certain sets of assumptions, which are taken to be essential for the particle concept, lead to contradictions. The Reeh-Schlieder theorem (1961) is a central result in AQFT. It asserts that by acting on the vacuum state Ω with elements of the von Neumann observable algebra R(O) for an open space-time region O, one can approximate as closely as one likes any state in the Hilbert space H, in particular one that is very different from the vacuum in some space-like separated region O′. The Reeh-Schlieder theorem thus exploits long-distance correlations of the vacuum. Or one can express the result by saying that local measurements do not allow for a distinction between an N-particle state and the vacuum state. Redhead's (1995a) take on the Reeh-Schlieder theorem is that local measurements can never decide whether one observes an N-particle state, since a projection operator PΨ which corresponds to an N-particle state Ψ can never be an element of a local algebra R(O). Clifton & Halvorson (2001) discuss what this means for the issue of entanglement. Halvorson (2001) shows that an alternative “Newton-Wigner” localization scheme fails to evade the problem of localization posed by the Reeh-Schlieder theorem. Malament (1996) formulates a no-go theorem to the effect that a relativistic quantum theory of a fixed number of particles predicts a zero probability for finding a particle in any spatial set, provided four conditions are satisfied, namely concerning translation covariance, energy, localizability and locality. The localizability condition is the essential ingredient of the particle concept: a particle—in contrast to a field—cannot be found in two disjoint spatial sets at the same time. The locality condition is the main relativistic part of Malament's assumptions.
The locality condition requires that the statistics for measurements in one space-time region must not depend on whether or not a measurement has been performed in a space-like related second space-time region. Malament's proof has the weight of a no-go theorem provided that we accept his four conditions as natural assumptions for a particle interpretation. A relativistic quantum theory of a fixed number of particles, satisfying in particular the localizability and the locality condition, has to assume a world devoid of particles (or at least a world in which particles can never be detected) in order not to contradict itself. Malament's no-go theorem thus seems to show that there is no middle ground between QM and QFT, i.e., no theory which deals with a fixed number of particles (as QM does) and which is relativistic (as QFT is) without running into the localizability problem of the no-go theorem. One is forced towards QFT, which, as Malament is convinced, can only be understood as a field theory. Nevertheless, whether or not a particle interpretation of QFT is in fact ruled out by Malament's result is a point of debate. At least prima facie Malament's no-go theorem alone cannot supply a final answer, since it assumes a fixed number of particles, an assumption that is not valid in the case of QFT. The results about non-localizability explored above may appear not very astonishing in the light of the following facts about ordinary QM: quantum mechanical wave functions (in position representation) are usually smeared out over all of ℝ³, so that everywhere in space there is a non-vanishing probability for finding a particle. This is the case even arbitrarily soon after a sharp position measurement, due to the instantaneous spreading of wave packets over all of space. Note, however, that ordinary QM is non-relativistic.
A conflict with SRT would thus not be very surprising, although it is not yet clear whether the above-mentioned quantum mechanical phenomena can actually be exploited to allow for superluminal signalling. QFT, on the other hand, has been designed to be in accordance with special relativity theory (SRT). The local behavior of phenomena is one of the leading principles upon which the theory was built. This makes non-localizability within the formalism of QFT a much more severe problem for a particle interpretation. Malament's reasoning has come under attack in Fleming & Butterfield (1999) and Busch (1999). Both argue to the effect that there are alternatives to Malament's conclusion. The main line of thought in both criticisms is that Malament's ‘mathematical result’ might just as well be interpreted as evidence that the assumed concept of a sharp localization operator is flawed and has to be modified, either by allowing for unsharp localization (Busch 1999) or for so-called “hyperplane dependent localization” (Fleming & Butterfield 1999). In Saunders (1995) a different conclusion from Malament's (as well as from similar) results is drawn. Rather than granting Malament's four conditions and deriving a problem for a particle interpretation, Saunders takes Malament's proof as further evidence that one cannot hold on to all four conditions. According to Saunders it is the localizability condition which, on second thought, might not be a natural and necessary requirement. Stressing that “relativity requires the language of events, not of things”, Saunders argues that the localizability condition loses its plausibility when it is applied to events: it makes no sense to postulate that the same event cannot occur at two disjoint spatial sets at the same time. One can only require the same kind of event not to occur at both places. For Saunders the particle interpretation as such is not at stake in Malament's argument.
The question is rather whether QFT speaks about things at all. Saunders considers Malament's result to give a negative answer to this question. A kind of meta-paper on Malament's theorem is Halvorson & Clifton (2002). Various objections to the choice of Malament's assumptions and to his conclusion are considered and rebutted. Moreover, Halvorson and Clifton establish two further no-go theorems which strengthen Malament's theorem by weakening its tacit assumptions, showing that the general conclusion still holds. One thing seems to be clear: since Malament's ‘mathematical result’ appears to allow for various different conclusions, it cannot be taken as conclusive evidence against the tenability of a particle interpretation of QFT, and the same applies to Redhead's interpretation of the Reeh-Schlieder theorem. For a more detailed exposition and comparison of the Reeh-Schlieder theorem and Malament's theorem see Kuhlmann (2010a: sec. 8.3). Does the field interpretation also suffer from problems concerning non-localizability? In the section “Deficiencies of the Conventional Formulation of QFT” we already saw that, strictly speaking, field operators cannot be defined at points but need to be smeared out in the (finite but arbitrarily small) vicinity of points, giving rise to smeared field operators φ̂(f), which represent the weighted average field value in the respective region. This procedure leads to operator-valued distributions instead of operator-valued fields. The lack of field operators at points appears to be analogous to the lack of position operators in QFT, which troubles the particle interpretation. However, for position operators there is no remedy analogous to that for field operators: while the existence of smeared field operators shows that there are at least unsharply localized field quantities, even unsharply localized particle positions do not exist in QFT (see Halvorson and Clifton 2002, theorem 2).
On this basis Lupher (2010) proposes a “modified field ontology”.

5.4 Inequivalent Representations

The occurrence of inequivalent representations is a grave obstacle to interpreting QFT; it is increasingly rated as the single most important problem, and it has no counterpart whatsoever in standard QM. As we saw in the section “Deficiencies of the Conventional Formulation of QFT”, the quantization of a theory with an infinite number of degrees of freedom, such as a field theory, leads to unitarily inequivalent representations (UIRs) of the canonical commutation relations. It is highly controversial what the availability of UIRs means. One possible stance is to dismiss them as mathematical artifacts with no physical relevance. Ruetsche (2002) calls this “Hilbert Space Conservatism”. On the one hand, this view fits well with the fact that UIRs are hardly even mentioned in standard textbooks on QFT. On the other hand, this cannot be the last word, because UIRs undoubtedly do real work in physics, e.g. in quantum statistical mechanics (see Ruetsche 2003) and in particular when it comes to spontaneous symmetry breaking. The coexistence of UIRs can be readily understood by looking at ferromagnetism (see Ruetsche 2006). At high temperatures the atomic dipoles in ferromagnetic substances fluctuate randomly. Below a certain temperature the atomic dipoles tend to align with each other in some direction. Since the basic laws governing this phenomenon are rotationally symmetric, no direction is preferred. Thus once the dipoles have “chosen” one particular direction, the symmetry is broken. Since there is a different ground state for each direction of magnetization, one needs different Hilbert spaces—each containing a unique ground state—in order to describe symmetry-breaking systems. Correspondingly, one has to employ inequivalent representations.
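The contrast with standard QM can be made precise. For a system with finitely many degrees of freedom, the Stone-von Neumann theorem guarantees that all irreducible representations of the canonical commutation relations are unitarily equivalent; for a field, which carries one pair of canonical variables for each point of space, this uniqueness theorem fails and UIRs appear:

```latex
% Finitely many degrees of freedom (ordinary QM): all irreducible
% representations of these relations are unitarily equivalent
% (Stone-von Neumann theorem).
[\hat{q}_{j}, \hat{p}_{k}] = i\hbar\,\delta_{jk},
\qquad j,k = 1,\dots,n
% Infinitely many degrees of freedom (field theory): the
% uniqueness theorem no longer applies, so unitarily
% inequivalent representations of these relations occur.
[\hat{\varphi}(\mathbf{x}), \hat{\pi}(\mathbf{y})]
  = i\hbar\,\delta^{3}(\mathbf{x}-\mathbf{y})
```

This is why the problem of UIRs arises specifically in the quantization of fields and has no analogue in the quantum mechanics of finitely many particles.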
One important interpretive issue where UIRs play a crucial role is the Unruh effect: a uniformly accelerated observer in a Minkowski vacuum should detect a thermal bath of particles, the so-called Rindler quanta (Unruh 1976, Unruh & Wald 1984). A mere change of the reference frame thus seems to bring particles into being. Since the very existence of the basic entities of an ontology should be invariant under transformations of the reference frame, the Unruh effect constitutes a severe challenge to a particle interpretation of QFT. Teller (1995: 110-113) tries to dispel this problem by pointing out that while the Minkowski vacuum has the definite value zero for the Minkowski number operator, the particle number is indefinite for the Rindler number operator, since one has a superposition of Rindler quanta states. This means that there are only propensities for detecting different numbers of Rindler quanta but no actual quanta. However, this move is problematic since it seems to suggest that quantum physical propensities in general need not be taken fully for real. Clifton and Halvorson (2001b) argue, contra Teller, that it is inappropriate to give priority to either the Minkowski or the Rindler perspective. Both are needed for a complete picture. The Minkowski as well as the Rindler representation are true descriptions of the world, namely in terms of objective propensities. Arageorgis, Earman and Ruetsche (2003) argue that Minkowski and Rindler (or Fulling) quantization do not constitute a satisfactory case of physically relevant UIRs. First, there are good reasons to doubt that the Rindler vacuum is a physically realizable state. Second, the authors argue, the unitary inequivalence in question merely stems from the fact that one representation is reducible and the other one irreducible: the restriction of the Minkowski vacuum to a Rindler wedge, i.e.
what the Minkowski observer says about the Rindler wedge, leads to a mixed state (a thermodynamic KMS state) and therefore a reducible representation, whereas the Rindler vacuum is a pure state and thus corresponds to an irreducible representation. Therefore, the Unruh effect does not cause distress for the particle interpretation—which the authors regard as fighting a losing battle anyhow—because Rindler quanta are not real and the unitary inequivalence of the representations in question has nothing specifically to do with conflicting particle ascriptions. The occurrence of UIRs is also at the core of an analysis by Fraser (2008). She restricts her analysis to inertial observers but compares the particle notion for free and interacting systems. Fraser argues, first, that the representations for free and interacting systems are unavoidably unitarily inequivalent, and second, that the representation for an interacting system does not have the minimal properties that are needed for any particle interpretation—e.g. Teller's (1995) quanta version—namely the countability condition (quanta are aggregable) and a relativistic energy condition. Note that for Fraser's negative conclusion about the tenability of the particle (or quanta) interpretation for QFT there is no need to assume localizability. Bain (2000) offers a diverging assessment of the fact that only asymptotically free states, i.e. states very long before or after a scattering interaction, have a Fock representation that allows for an interpretation in terms of countable quanta. For Bain, the occurrence of UIRs without a particle (or quanta) interpretation for intervening times, i.e. close to the scattering interaction, is irrelevant, because the data that are collected from those experiments always refer to systems with negligible interactions.
Bain concludes that although the inclusion of interactions does in fact lead to the break-down of the alleged duality of particles and fields, it does not undermine the notion of particles (or fields) as such. Fraser (2008) rates this as an unsuccessful “last ditch” attempt to save a quanta interpretation of QFT because it is ad hoc and cannot even show that at least something similar to the free-field total number operator exists for finite times, i.e. between the asymptotically free states. Moreover, Fraser (2008) points out that, contrary to what some authors suggest, the main source of the impossibility of interpreting interacting systems in terms of particles is not that many-particle states are inappropriately described in the Fock representation when one deals with interacting fields, but rather that QFT obeys special relativity theory (also see Earman and Fraser (2006) on Haag's theorem). As Fraser concludes, “[F]or a free system, special relativity and the linear field equation conspire to produce a quanta interpretation.” In his reply Bain (2011) points out that the reason why there is no total number operator in interacting relativistic quantum field theories is that this would require an absolute space-time structure, which in turn is not an appropriate requirement. Baker (2009) points out that the main arguments against the particle interpretation—concerning non-localizability (e.g. Malament 1996) and failure for interacting systems (Fraser 2008)—may also be directed against the wave functional version of the field interpretation (see field interpretation (iii) above). Mathematically, Baker's crucial point is that wave functional space is unitarily equivalent to Fock space, so that arguments against the particle interpretation that attack the choice of the Fock representation may carry over to the wave functional interpretation. First, a Minkowski and a Rindler observer may also detect different field configurations.
Second, if the Fock space representation is not apt to describe interacting systems, then the unitarily equivalent wave functional representation is in no better situation: interacting fields are unitarily inequivalent to free fields, too. It is difficult to say how the availability of UIRs should be interpreted in general. Clifton and Halvorson (2001b) propose seeing this as a form of complementarity. Ruetsche (2003) advocates a “Swiss army approach”, according to which the availability of UIRs shows that physical possibilities in different degrees must be included in our ontology. However, both proposals are as yet too sketchy and await further elaboration.

5.5 The Role of Symmetries

Symmetries play a central role in QFT. In order to characterize a particular symmetry one has to specify transformations T and features that remain unchanged during these transformations: invariants I. Symmetries are thus pairs {T, I}. The basic idea is that the transformations change elements of the mathematical description (the Lagrangians, for instance) whereas the empirical content of the theory is unchanged. There are space-time transformations and so-called internal transformations. Whereas space-time symmetries are universal, i.e., they are valid for all interactions, internal symmetries characterize special sorts of interaction (electromagnetic, weak or strong interaction). Symmetry transformations define properties of particles/quantum fields that are conserved if the symmetry is not broken. The invariance of a system defines a conservation law: e.g., if a system is invariant under translations, linear momentum is conserved; if it is invariant under rotations, angular momentum is conserved. Internal transformations, such as gauge transformations, are connected with more abstract properties. Symmetries are not only defined for Lagrangians but can also be found in empirical data and phenomenological descriptions.
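The link between invariance and conservation invoked above is Noether's theorem: every continuous symmetry of the Lagrangian gives rise to a conserved current. Schematically, for a field φ transforming as φ → φ + εΔφ, under which the Lagrangian density changes at most by a total derivative, δL = ε∂μKμ, one obtains

```latex
% Noether current associated with a continuous symmetry,
% and the resulting conservation law (for fields obeying
% the equations of motion):
j^{\mu}
  = \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\varphi)}\,\Delta\varphi
  - K^{\mu},
\qquad
\partial_{\mu} j^{\mu} = 0 .
```

Translation invariance then yields conservation of energy and momentum, and rotation invariance conservation of angular momentum, just as stated in the text.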
Symmetries can thus bridge the gap between descriptions which are close to empirical results (‘phenomenology’) and the more abstract general theory, which is a most important reason for their heuristic force. If a conservation law is found, one has some knowledge about the system even if details of the dynamics are unknown. The analysis of many high-energy collision experiments led to the assumption of special conservation laws for abstract properties like baryon number or strangeness. Evaluating experiments in this way allowed for a classification of particles. This phenomenological classification was good enough to predict new particles which could then be found in the experiments. Vacant places in the classification could be filled even while the dynamics of the theory (for example the Lagrangian of the strong interaction) was still unknown. As the history of QFT for the strong interaction shows, symmetries found in the phenomenological description often lead to valuable constraints for the construction of the dynamical equations. Arguments from group theory played a decisive role in the unification of fundamental interactions. In addition, symmetries bring about substantial technical advantages. For example, by using gauge transformations one can bring the Lagrangian into a form which makes it easy to prove the renormalizability of the theory. See also the entry on symmetry and symmetry breaking. In many cases symmetries are not only heuristically useful but supply some sort of ‘justification’ by standing at the beginning of a chain of explanation. To a remarkable degree the present theories of elementary particle interactions can be understood by deduction from general principles. Among these principles symmetry requirements play a crucial role in determining the Lagrangian. For example, the only Lorentz-invariant, gauge-invariant and renormalizable Lagrangian for photons and electrons is precisely the original Dirac Lagrangian.
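The closing example can be written out explicitly. The Lagrangian for electrons and photons that is singled out by the requirements of Lorentz invariance, U(1) gauge invariance and renormalizability is the QED Lagrangian

```latex
% QED Lagrangian density: the unique Lorentz-invariant,
% U(1) gauge-invariant and renormalizable choice for an
% electron field \psi coupled to the photon field A_\mu.
\mathcal{L}_{\mathrm{QED}}
  = \bar{\psi}\,(i\gamma^{\mu} D_{\mu} - m)\,\psi
  - \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu},
\qquad
D_{\mu} = \partial_{\mu} + ieA_{\mu},
\quad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu},
```

which is invariant under the local gauge transformation ψ(x) → e^{−iα(x)}ψ(x), A_μ(x) → A_μ(x) + (1/e)∂_μα(x).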
In this way symmetry arguments acquire an explanatory power and help to minimize the unexplained basic assumptions of a theory. Heisenberg concludes that in order “to find the way to a real understanding of the spectrum of particles it will therefore be necessary to look for the fundamental symmetries and not for the fundamental particles” (Blum et al. 1995: 507). Since symmetry operations change the perspective of an observer but not the physics, an analysis of the relevant symmetry group can yield very general information about those entities which are unchanged by transformations. Such an invariance under a symmetry group is a necessary (but not sufficient) requirement for something to belong to the ontology of the physical theory under consideration. Hermann Weyl advocated the idea that objectivity is associated with invariance (see, e.g., his authoritative work Weyl 1952: 132). Auyang (1995) stresses the connection between properties of physically relevant symmetry groups and ontological questions. Kosso argues that symmetries help to separate objective facts from the conventions of description; see his article in Brading & Castellani (2003), an anthology containing numerous further philosophical studies of symmetries in physics. Symmetries are typical examples of structures that show more continuity in scientific change than assumptions about objects do. For that reason structural realists consider structures to be “the best candidate for what is ‘true’ about a physical theory” (Redhead 1999: 34). Physical objects such as electrons are then taken to be akin to fictions that, in the end, should not be taken seriously. In the epistemic variant of structural realism, structure is all we know about nature, whereas the objects which are related by structures might exist but are not accessible to us. For the extreme ontic structural realist there is nothing but structures in the world (Ladyman 1998).

5.6 Taking Stock: Where Do We Stand?
A particle interpretation of QFT answers most intuitively what happens in particle scattering experiments and why we seem to detect particle trajectories. Moreover, it would explain most naturally why particle talk appears almost unavoidable. However, the particle interpretation in particular is troubled by numerous serious problems. There are no-go theorems to the effect that, in a relativistic setting, quantum “particle” states cannot be localized in any finite region of space-time, no matter how large it is. Besides localizability, another hard-core requirement of the particle concept that seems to be violated in QFT is countability. First, many take the Unruh effect to indicate that the particle number is observer- or context-dependent. And second, interacting quantum field theories cannot be interpreted in terms of particles because their representations are unitarily inequivalent to the Fock representation (Haag's theorem), which is the only known way to represent countable entities in systems with an infinite number of degrees of freedom. At first sight the field interpretation seems to be much better off, considering that a field is not a localized entity and that it may vary continuously—so there are no requirements of localizability and countability. Accordingly, the field interpretation is often taken to be implied by the failure of the particle interpretation. However, on closer scrutiny the field interpretation itself is not above reproach. To begin with, since “quantum fields” are operator-valued, it is not clear in which sense QFT can be describing physical fields, i.e. as ascribing physical properties to points in space. In order to get determinate physical properties, or even just probabilities, one needs a quantum state. However, since quantum states as such are not spatio-temporally defined, it is questionable whether field values calculated with their help can still be viewed as local properties.
The second serious challenge is that the arguably strongest field interpretation—the wave functional version—may be hit by problems similar to those of the particle interpretation, since wave functional space is unitarily equivalent to Fock space. The occurrence of unitarily inequivalent representations (UIRs), which first seemed to cause problems specifically for the particle interpretation but which appear to carry over to the field interpretation, may well be a severe obstacle for any ontological interpretation of QFT. However, it is controversial whether the two most prominent examples, namely the Unruh effect and Haag's theorem, really do cause the contended problems in the first place. Thus one of the crucial tasks for the philosophy of QFT is to further uncover the ontological significance of UIRs. The two remaining contestants approach QFT in a way that breaks more radically with traditional ontologies than any of the proposed particle and field interpretations. Ontic Structural Realism (OSR) takes the paramount significance of symmetry groups to indicate that symmetry structures as such have an ontological primacy over objects. However, since most OSRists are decidedly against Platonism, it is not altogether clear how symmetry structures could be ontologically prior to objects if they only exist in concrete realizations, namely in those objects that exhibit these symmetries. Dispositional Trope Ontology (DTO) deprives both particles and fields of their fundamental status, and proposes an ontology whose basic elements are properties understood as particulars, called ‘tropes’. One of the advantages of the DTO approach is its great generality concerning the nature of objects, which it analyzes as bundles of (partly dispositional) properties/tropes: DTO is flexible enough to encompass both particle-like and field-like features without being committed to either a particle or a field ontology.
In conclusion one has to recall that one reason why the ontological interpretation of QFT is so difficult is the fact that it is exceptionally unclear which parts of the formalism should be taken to represent anything physical in the first place. And it looks as if that problem will persist for quite some time.

Bibliography

• Auyang, S. Y., 1995, How is Quantum Field Theory Possible?, Oxford-New York: Oxford University Press.
• Bain, J., 2000, “Against particle/field duality: Asymptotic particle states and interpolating fields in interacting QFT (or: Who’s afraid of Haag’s theorem?)”, Erkenntnis, 53: 375–406.
• –––, 2011, “Quantum field theories in classical spacetimes and particles”, Studies in History and Philosophy of Modern Physics, 42: 98–106.
• Baker, D. J., 2009, “Against field interpretations of quantum field theory”, British Journal for the Philosophy of Science, 60: 585–609.
• Baker, D. J. and H. Halvorson, 2010, “Antimatter”, British Journal for the Philosophy of Science, 61: 93–121.
• Born, M., with W. Heisenberg, and P. Jordan, 1926, “Zur Quantenmechanik II”, Zeitschrift für Physik, 35: 557.
• Brading, K. and E. Castellani (eds.), 2003, Symmetries in Physics: Philosophical Reflections, Cambridge: Cambridge University Press.
• Bratteli, O. and D. W. Robinson, 1979, Operator Algebras and Quantum Statistical Mechanics 1: C* and W*-Algebras, Symmetry Groups, Decomposition of States, New York et al.: Springer.
• Brown, H. R. and R. Harré (eds.), 1988, Philosophical Foundations of Quantum Field Theory, Oxford: Clarendon Press.
• Buchholz, D., 1994, “On the manifestations of particles,” in R. N. Sen and A. Gersten (eds.), Mathematical Physics Towards the 21st Century, Beer-Sheva: Ben-Gurion University Press.
• –––, 1998, “Current trends in axiomatic quantum field theory,” in P. Breitenlohner and D. Maison (eds.), Quantum Field Theory. Proceedings of the Ringberg Workshop 1998, Berlin-Heidelberg: Springer, pp. 43–64.
• Busch, P., 1999, “Unsharp localization and causality in relativistic quantum theory,” Journal of Physics A: Mathematical and General, 32: 6535.
• Butterfield, J. and H. Halvorson (eds.), 2004, Quantum Entanglements — Selected Papers — Rob Clifton, Oxford: Oxford University Press.
• Butterfield, J. and C. Pagonis (eds.), 1999, From Physics to Philosophy, Cambridge: Cambridge University Press.
• Callender, C. and N. Huggett (eds.), 2001, Physics Meets Philosophy at the Planck Scale, Cambridge: Cambridge University Press.
• Cao, T. Y., 1997a, Conceptual Developments of 20th Century Field Theories, Cambridge: Cambridge University Press.
• –––, 1997b, “Introduction: Conceptual issues in QFT,” in Cao 1997a, pp. 1–27.
• ––– (ed.), 1999, Conceptual Foundations of Quantum Field Theories, Cambridge: Cambridge University Press.
• –––, 2010, From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism, Cambridge: Cambridge University Press.
• Castellani, E., 2002, “Reductionism, emergence, and effective field theories,” Studies in History and Philosophy of Modern Physics, 33: 251–267.
• Clifton, R. (ed.), 1996, Perspectives on Quantum Reality: Non-Relativistic, Relativistic, and Field-Theoretic, Dordrecht et al.: Kluwer.
• Clifton, R. and H. Halvorson, 2001, “Entanglement and open systems in algebraic quantum field theory,” Studies in History and Philosophy of Modern Physics, 32: 1–31; reprinted in Butterfield & Halvorson 2004.
• Davies, P. (ed.), 1989, The New Physics, Cambridge: Cambridge University Press.
• Dawid, R., 2009, “On the conflicting assessments of string theory”, Philosophy of Science, 76: 984–996.
• Dieks, D., 2002, “Events and covariance in the interpretation of quantum field theory,” in Kuhlmann et al. 2002, pp. 215–234.
• Dieks, D. and A. Lubberdink, 2011, “How classical particles emerge from the quantum world”, Foundations of Physics, 41: 1051–1064.
• Dirac, P. A. M., 1927, “The quantum theory of emission and absorption of radiation,” Proceedings of the Royal Society of London, A 114: 243–256.
• Earman, J., 2011, “The Unruh effect for philosophers”, Studies in History and Philosophy of Modern Physics, 42: 81–97.
• Earman, J. and D. Fraser, 2006, “Haag’s theorem and its implications for the foundations of quantum field theory”, Erkenntnis, 64: 305–344.
• Fleming, G. N. and J. Butterfield, 1999, “Strange positions,” in Butterfield & Pagonis 1999, pp. 108–165.
• Fraser, D., 2008, “The fate of ‘particles’ in quantum field theories with interactions”, Studies in History and Philosophy of Modern Physics, 39: 841–859.
• –––, 2009, “Quantum field theory: Underdetermination, inconsistency, and idealization”, Philosophy of Science, 76: 536–567.
• –––, 2011, “How to take particle physics seriously: A further defence of axiomatic quantum field theory”, Studies in History and Philosophy of Modern Physics, 42: 126–135.
• Georgi, H., 1989, “Effective quantum field theories,” in Davies 1989, pp. 446–457.
• Greene, B., 1999, The Elegant Universe. Superstrings, Hidden Dimensions and the Quest for the Ultimate Theory, New York: W. W. Norton and Company.
• Haag, R., 1996, Local Quantum Physics: Fields, Particles, Algebras, 2nd edition, Berlin et al.: Springer.
• Haag, R. and D. Kastler, 1964, “An algebraic approach to quantum field theory,” Journal of Mathematical Physics, 5: 848–861.
• Halvorson, H., 2001, “Reeh-Schlieder defeats Newton-Wigner: On alternative localization schemes in relativistic quantum field theory”, Philosophy of Science, 68: 111–133.
• Halvorson, H. and R. Clifton, 2002, “No place for particles in relativistic quantum theories?”, Philosophy of Science, 69: 1–28; reprinted in Butterfield & Halvorson 2004 and in Kuhlmann et al. 2002.
• Halvorson, H. and M. Müger, 2007, “Algebraic quantum field theory (with an appendix by Michael Müger)”, in J. Butterfield and J. Earman (eds.), Handbook of the Philosophy of Physics — Part A, Amsterdam: Elsevier, pp. 731–922.
• Hartmann, S., 2001, “Effective field theories, reductionism, and explanation,” Studies in History and Philosophy of Modern Physics, 32: 267–304.
• Hättich, F., 2004, Quantum Processes — A Whiteheadian Interpretation of Quantum Field Theory, Münster: agenda Verlag.
• Healey, R., 2007, Gauging What’s Real: The Conceptual Foundations of Contemporary Gauge Theories, Oxford: Oxford University Press.
• Heisenberg, W. and W. Pauli, 1929, “Zur Quantendynamik der Wellenfelder,” Zeitschrift für Physik, 56: 1–61.
• Hoddeson, L., with L. Brown, M. Riordan, and M. Dresden (eds.), 1997, The Rise of the Standard Model: A History of Particle Physics from 1964 to 1979, Cambridge: Cambridge University Press.
• Horuzhy, S. S., 1990, Introduction to Algebraic Quantum Field Theory, 1st edition, Dordrecht et al.: Kluwer.
• Huggett, N., 2000, “Philosophical foundations of quantum field theory”, The British Journal for the Philosophy of Science, 51: 617–637.
• –––, 2003, “Philosophical foundations of quantum field theory”, in P. Clark and K. Hawley (eds.), Philosophy of Science Today, Oxford: Clarendon Press, pp. 617–37.
• Johansson, L. G. and K. Matsubara, 2011, “String theory and general methodology: A mutual evaluation”, Studies in History and Philosophy of Modern Physics, 42: 199–210.
• Kaku, M., 1999, Introduction to Superstrings and M-Theory, New York: Springer.
• Kantorovich, A., 2003, “The priority of internal symmetries in particle physics”, Studies in History and Philosophy of Modern Physics, 34: 651–675.
• Kastler, D. (ed.), 1990, The Algebraic Theory of Superselection Sectors: Introduction and Recent Results, Singapore et al.: World Scientific.
• Kiefer, C., 2007, Quantum Gravity, 2nd edition, Oxford: Oxford University Press.
• Kronz, F. and T. Lupher, 2005, “Unitarily inequivalent representations in algebraic quantum theory”, International Journal of Theoretical Physics, 44: 1239–1258.
• Kuhlmann, M., 2010a, The Ultimate Constituents of the Material World — In Search of an Ontology for Fundamental Physics, Frankfurt: ontos Verlag.
• –––, 2010b, “Why conceptual rigour matters to philosophy: On the ontological significance of algebraic quantum field theory”, Foundations of Physics, 40: 1625–1637.
• –––, 2011, “Review of From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism by T. Y. Cao”, Notre Dame Philosophical Reviews, available online.
• Kuhlmann, M., with H. Lyre and A. Wayne (eds.), 2002, Ontological Aspects of Quantum Field Theory, London: World Scientific Publishing.
• Ladyman, J., 1998, “What is structural realism?”, Studies in History and Philosophy of Science, 29: 409–424.
• Landsman, N. P., 1996, “Local quantum physics,” Studies in History and Philosophy of Modern Physics, 27: 511–525.
• Lupher, T., 2010, “Not particles, not quite fields: An ontology for quantum field theory”, Humana Mente, 13: 155–173.
• Lyre, H., 2004, “Holism and structuralism in U(1) gauge theory,” Studies in History and Philosophy of Modern Physics, 35/4: 643–670.
• –––, 2012, “Structural invariants, structural kinds, structural laws”, in Probabilities, Laws, and Structures, Dordrecht: Springer, pp. 179–191.
• Malament, D., 1996, “In defense of dogma: Why there cannot be a relativistic quantum mechanics of (localizable) particles,” in Clifton 1996, pp. 1–10.
• Mandl, F. and G. Shaw, 2010, Quantum Field Theory, 2nd edition, Chichester (UK): John Wiley & Sons.
• Martin, C. A., 2002, “Gauge principles, gauge arguments and the logic of nature,” Philosophy of Science, 69/3: 221–234.
• Morganti, M., 2009, “Tropes and physics”, Grazer Philosophische Studien, 78: 185–205.
• Newton, T. D. and E. P. Wigner, 1949, “Localized states for elementary particles,” Reviews of Modern Physics, 21/3: 400–406.
• Peskin, M. E. and D. V. Schroeder, 1995, Introduction to Quantum Field Theory, Cambridge (MA): Perseus Books.
• Polchinski, J., 2000, String Theory, 2 volumes, Cambridge: Cambridge University Press.
• Redhead, M. L. G., 1995a, “More ado about nothing,” Foundations of Physics, 25: 123–137.
• –––, 1995b, “The vacuum in relativistic quantum field theory,” in Hull et al. 1994 (vol. 2), pp. 88–89.
• –––, 1999, “Quantum field theory and the philosopher,” in Cao 1999, pp. 34–40.
• –––, 2002, “The interpretation of gauge symmetry,” in Kuhlmann et al. 2002, pp. 281–301.
• Reeh, H. and S. Schlieder, 1961, “Bemerkungen zur Unitäräquivalenz von Lorentzinvarianten Feldern,” Nuovo Cimento, 22: 1051–1068.
• Rickles, D., 2008, “Quantum gravity: A primer for philosophers”, in D. Rickles (ed.), The Ashgate Companion to Contemporary Philosophy of Physics, Aldershot: Ashgate, pp. 262–382.
• Roberts, B. W., 2011, “Group structural realism”, The British Journal for the Philosophy of Science, 62: 47–69.
• Roberts, J. E., 1990, “Lectures on algebraic quantum field theory,” in Kastler 1990, pp. 1–112.
• Ruetsche, L., 2002, “Interpreting quantum field theory”, Philosophy of Science, 69: 348–378.
• –––, 2003, “A matter of degree: Putting unitary equivalence to work,” Philosophy of Science, 70/5: 1329–1342.
• –––, 2006, “Johnny's so long at the ferromagnet”, Philosophy of Science, 73: 473–486.
• –––, 2011, “Why be normal?”, Studies in History and Philosophy of Modern Physics, 42: 107–115.
• Ryder, L. H., 1996, Quantum Field Theory, 2nd edition, Cambridge: Cambridge University Press.
• Saunders, S., 1995, “A dissolution of the problem of locality,” in Hull, M. F. D., M. Forbes, and R. M. Burian (eds.), Proceedings of the Biennial Meeting of the Philosophy of Science Association: PSA 1994 (vol. 2), East Lansing, MI: Philosophy of Science Association, pp. 88–98.
• Saunders, S. and H. R. Brown (eds.), 1991, The Philosophy of Vacuum, Oxford: Clarendon Press.
• Schweber, S. S., 1994, QED and the Men Who Made It, Princeton: Princeton University Press.
• Segal, I. E., 1947, “Postulates for general quantum mechanics,” Annals of Mathematics, 48/4: 930–948.
• Seibt, J., 2002, “The matrix of ontological thinking: Heuristic preliminaries for an ontology of QFT,” in Kuhlmann et al. 2002, pp. 53–97.
• Streater, R. F. and A. S. Wightman, 1964, PCT, Spin and Statistics, and All That, New York: Benjamin.
• Teller, P., 1995, An Interpretive Introduction to Quantum Field Theory, Princeton: Princeton University Press.
• Unruh, W. G., 1976, “Notes on black hole evaporation,” Physical Review D, 14: 870–892.
• Unruh, W. G. and R. M. Wald, 1984, “What happens when an accelerating observer detects a Rindler particle?”, Physical Review D, 29: 1047–1056.
• Wallace, D., 2006, “In defence of naiveté: The conceptual status of Lagrangian quantum field theory”, Synthese, 151: 33–80.
• –––, 2011, “Taking particle physics seriously: A critique of the algebraic approach to quantum field theory”, Studies in History and Philosophy of Modern Physics, 42: 116–125.
• Wayne, A., 2002, “A naive view of the quantum field”, in Kuhlmann et al. 2002, pp. 127–133.
• –––, 2008, “A trope-bundle ontology for field theory”, in D. Dieks (ed.), The Ontology of Spacetime II, Amsterdam: Elsevier, pp. 1–15.
• Weinberg, S., 1995, The Quantum Theory of Fields — Foundations (Volume 1), Cambridge: Cambridge University Press.
• –––, 1996, The Quantum Theory of Fields — Modern Applications (Volume 2), Cambridge: Cambridge University Press.
• Weingard, R., 2001, “A philosopher looks at string theory,” in Callender & Huggett 2001, pp. 138–151.
• Weyl, H., 1952, Symmetry, Princeton: Princeton University Press.
• Wightman, A. S., 1956, “Quantum field theory in terms of vacuum expectation values”, Physical Review, 101: 860–866.
• Wigner, E. P., 1939, “On unitary representations of the inhomogeneous Lorentz group,” Annals of Mathematics, 40: 149–204.
Other Internet Resources Copyright © 2012 by Meinard Kuhlmann <meik@uni-bremen.de> Please Read How You Can Help Keep the Encyclopedia Free
771458a15d748820
All of us have probably been exposed to questions such as: "What are the applications of group theory...". This is not the subject of this MO question. Here is a little newspaper article that I found inspiring: Madam, – In response to Marc Morgan’s question, “Does mathematics have any practical value?” (November 8th), I wish to respond as follows. Apart from its direct applications to electrical circuits and machinery, electronics (including circuit design and computer hardware), computer software (including cryptography for internet transaction security, business software, anti-virus software and games), telephones, mobile phones, fax machines, radio and television broadcasting systems, antenna design, computer game consoles, hand-held devices such as iPods, architecture and construction, automobile design and fabrication, space travel, GPS systems, radar, X-ray machines, medical scanners, particle research, meteorology, satellites, all of physics and much of chemistry, the answer is probably “No”. – Yours, etc, The Irish Times - Wednesday, November 10, 2010 The above article seems to provide an ideal source of solutions to a perennial problem: how to tell something interesting about math to non-mathematicians without losing your audience. However, I am embarrassed to admit that I have no idea what kind of math gets used for antenna designs, computer game consoles, GPS systems, etc. I would like to have a list of applications of math, presented from the point of view of the applications. To make sure that each answer is sufficiently structured and developed, I shall impose some restrictions on its format. Each answer should contain the following three parts, roughly of the same size: • Start with a description of a practical problem that any layman can understand. • Then give an explanation of why it is difficult to solve without mathematical tools. 
• Finally, there should be a little explanation of the kind of math that gets used in the solution. ♦♦  My ultimate goal is to have a nice collection of examples of applications of mathematics, for the purpose of casual discussions with non-mathematicians. ♦♦ As usual with community-wiki questions: one answer per post. Related: mathoverflow.net/questions/2556/… –  Qiaochu Yuan Feb 24 '11 at 18:55 I think you'd need a list of topics that you'd like to know more about, lest this develop into a whole new encyclopedia. I just picked three examples from the newspaper piece you cited. –  Tim van Beek Feb 25 '11 at 3:05 @Tim: The article I cited provides such a list. I'd already be quite happy if an example could be provided for each of the items in there. –  André Henriques Feb 25 '11 at 10:51 @André: Ok, is there an item on the list that has not been addressed and that you are particularly interested in? I feel like I could write books about every one :-) –  Tim van Beek Feb 27 '11 at 10:49 @Tim: Actually yes: antenna design. –  André Henriques Feb 27 '11 at 15:02 10 Answers Sending a man to the Moon (and back). Hilbert once remarked half-jokingly that catching a fly on the Moon would be the most important technological achievement. "Why? Because the auxiliary technical problems which would have to be solved for such a result to be achieved imply the solution of almost all the material difficulties of mankind." (Quoted from Hilbert-Courant by Constance Reid, Springer, 1986, p. 92). The task obviously required solving plenty of scientific and technological problems. But the key breakthrough that made it all possible was Richard Arenstorf's discovery of a stable 8-shaped orbit between the Earth and the Moon. This involved the development of a numerical algorithm for solving the restricted three-body problem, which is just a special non-linear second-order ODE (see also my answer to the previous MO question). 
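To give a flavor of the mathematics involved: the planar restricted three-body problem is a pair of coupled second-order ODEs in a frame rotating with the Earth-Moon system, and it can be integrated numerically. The sketch below is illustrative only; the mass ratio and initial data are standard Arenstorf-type textbook values, not taken from Arenstorf's actual computations.

```python
import math

MU = 0.012277471  # Earth-Moon mass ratio (illustrative value)

def accel(x, y, vx, vy):
    # Planar circular restricted three-body problem, rotating frame:
    # centrifugal + Coriolis terms plus gravity from the two primaries.
    r1 = math.sqrt((x + MU)**2 + y**2)        # distance to the larger body
    r2 = math.sqrt((x - 1 + MU)**2 + y**2)    # distance to the smaller body
    ax = x + 2*vy - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3
    ay = y - 2*vx - (1 - MU)*y/r1**3 - MU*y/r2**3
    return ax, ay

def rk4_step(state, dt):
    # One classical Runge-Kutta step for the first-order system (x, y, vx, vy).
    def f(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y, vx, vy)
        return (vx, vy, ax, ay)
    k1 = f(state)
    k2 = f(tuple(s + dt/2*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt/2*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt*k for s, k in zip(state, k3)))
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def jacobi(state):
    # Jacobi constant: the conserved quantity of the rotating-frame dynamics.
    x, y, vx, vy = state
    r1 = math.sqrt((x + MU)**2 + y**2)
    r2 = math.sqrt((x - 1 + MU)**2 + y**2)
    return x*x + y*y + 2*(1 - MU)/r1 + 2*MU/r2 - (vx*vx + vy*vy)

state = (0.994, 0.0, 0.0, -2.0015851063790825)  # Arenstorf-type initial data
c0 = jacobi(state)
for _ in range(2000):
    state = rk4_step(state, 1e-5)
print(abs(jacobi(state) - c0))  # tiny drift: the Jacobi constant is conserved
```

Since the Jacobi constant is conserved by the exact flow, its drift is a convenient sanity check on the integrator.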
Another orbit, also mapped by Arenstorf, was later used in the dramatic rescue of the Apollo 13 crew. One typical way that GPS is invoked as an application of mathematics is through the use of general relativity. Most people have a rough idea of what the GPS system does: there are some (27) satellites flying in the sky, and a GPS device on the surface of the earth determines its position by radio communication with the satellites. It is also pretty clear that this is a hard problem to solve, with or without mathematics. The basic idea is that if your GPS device measures its distance to 3 different satellites, then it knows that it lies on three level sets which must intersect at a point. This is the standard idea of triangulation. Of course measuring distance is hard to do, and relativity comes into play in many different, nontrivial ways, but there is one way in particular that is interesting and easy to explain. If one uses the Euclidean metric to determine the distance (so, straight lines) from the GPS device to the satellite, then it will be impossible to determine the location on the earth to a high degree of accuracy. So instead the GPS system uses the Kerr metric, that is, the Lorentz metric that models spacetime outside of a spherically symmetric, rotating body. Naturally this metric gives a different, more accurate distance between the observer on earth and the satellite. The thing that is surprising to people is that the switch from Euclidean to Kerr is required to get really accurate GPS readings. In other words, without relativity you might not be able to use that iPhone app to find your car in the grocery store parking lot. People are often surprised and interested to learn that the differences between relativity and Newtonian gravity really are observable. 
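Setting relativity aside, the triangulation step itself is plain linear algebra. The sketch below uses made-up satellite coordinates for illustration; subtracting one sphere equation from the others turns "intersect the level sets" into a linear system.

```python
import numpy as np

# Hypothetical satellite positions (km) and a true receiver position.
sats = np.array([
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])
true_pos = np.array([1111.0, 2222.0, 3333.0])
d = np.linalg.norm(sats - true_pos, axis=1)  # idealized measured ranges

# |x - s_i|^2 = d_i^2 is quadratic in x, but subtracting the first sphere
# equation from the others cancels the x.x term and leaves linear equations:
#   2 (s_1 - s_i) . x = d_i^2 - d_1^2 + |s_1|^2 - |s_i|^2
A = 2 * (sats[0] - sats[1:])
b = d[1:]**2 - d[0]**2 + sats[0] @ sats[0] - np.einsum('ij,ij->i', sats[1:], sats[1:])
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)  # ≈ true_pos
```

In a real receiver a fourth satellite is needed in any case, because the receiver's clock offset is an additional unknown solved for alongside the three coordinates.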
Other standard examples are the precession of the perihelion of Mercury (which was a famous unsolved problem before the introduction of GR) and the demonstration that light rays do not travel along straight lines, by photographing the sun during an eclipse. This last observation demonstrated, for instance, that the metric on the universe is not the trivial flat one. Another place where relativity is relevant to GPS is time dilation. Fundamentally GPS computes distances by calculating time differences, so every satellite contains an atomic clock. Before they launch them they calibrate the clocks, but they have to be detuned by an amount that takes into account both the SR and GR time dilation effects in order to be accurate when they get into orbit. –  hobbs Feb 9 '14 at 3:10 A particularly striking application to physics and chemistry is explained in Singer's book Linearity, Symmetry, and Prediction in the Hydrogen Atom. The practical problem, in the large, is easy to state: what is the stuff around us made of, and why does it react with other stuff the way it does? More precisely, what explains the structure of the periodic table? There is no a priori reason that the elements ought to naturally arrange themselves in rows of size $2, 8, 8, 18, 18, ...$ with repeating chemical properties. This periodic structure profoundly shapes the nature of the world around us and so ought to be well worth trying to understand on a deeper level. Physically, the answer has to do with the way that electrons arrange themselves around a nucleus, one of the classic examples of the breakdown of classical mechanics. The Bohr model posits that electrons are arranged in discrete orbitals $n = 1, 2, 3, ... $ with energy levels proportional to $- \frac{1}{n^2}$ such that the $n^{th}$ energy level admits at most $2n^2$ electrons. 
This behavior $- \frac{1}{n^2}$ can be empirically deduced by an examination of atomic spectra, but the Bohr model still does not provide a conceptual explanation of it. That explanation comes from full-blown quantum mechanics, which already requires a fair amount of nontrivial mathematics. For our purposes quantum mechanics will be described by a Hilbert space $K = L^2(X)$, where $X$ is the classical configuration space (e.g. $\mathbb{R}^3$), and a self-adjoint operator $H : K \to K$, the Hamiltonian, which will describe the evolution of states via the Schrödinger equation. The simplest case is that of an electron orbiting a single proton, in which case one can explicitly write down the potential. In this case the Schrödinger equation can be solved fairly explicitly and the answer tells you what electron orbitals look like, but it turns out that one can do much better: it is possible to predict the solutions and their properties using representation theory. To start with, the Coulomb potential has a spherical symmetry, so this endows $K$ with the structure of a unitary representation of $\text{SO}(3)$. By identifying two wave functions together if they lie in the same representation we can hope to have a physical classification of the possible states of an electron; the idea is that physical quantities we care about should be invariant under physical symmetries (e.g. mass, energy, charge). The action of $\text{SO}(3)$ breaks up the space of possible states based on their angular momentum (Noether's theorem). The corresponding representations have dimensions $1, 3, 5, 7, ...$ and indeed we find that we can decompose the number of elements in each row of the periodic table as $$2 = 1 + 1$$ $$8 = 1 + 1 + 3 + 3$$ $$18 = 1 + 1 + 3 + 3 + 5 + 5$$ corresponding to the possible angular momentum values allowed at each energy level. Of course these symmetry considerations apply to every spherically symmetric system, so the $\text{SO}(3)$ symmetry cannot tell us anything more specific. 
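As a quick numerical aside (not part of the original answer): each angular momentum value $l$ contributes an irreducible representation of dimension $2l+1$, doubled by spin, and summing these reproduces the row lengths above.

```python
def shell_capacity(n):
    # Sum the dimensions 2l+1 of the SO(3) irreps for l = 0, ..., n-1,
    # then double for the SU(2) spin factor.
    return sum(2 * (2 * l + 1) for l in range(n))

print([shell_capacity(n) for n in (1, 2, 3)])  # [2, 8, 18]
```

This is exactly the $2n^2$ rule quoted from the Bohr model.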
But it turns out there is even more symmetry to exploit. First of all, remarkably enough the $\text{SO}(3)$ symmetry extends to an $\text{SO}(4)$ symmetry. (I do not really know a conceptual explanation of this, unfortunately; I have a half-baked one which I'm not sure is valid.) The irreducible representations of $\text{SO}(4)$ occurring here are precisely the ones of dimensions $1, 1 + 3, 1 + 3 + 5, ...$ and they break up into irreducible $\text{SO}(3)$ representations in exactly the right way to account for the above pattern up to a factor of $2$. Second of all, the factor of $2$ is accounted for by an additional action of $\text{SU}(2)$ coming from electron spin (the thing that makes MRI machines work). So representation theory provides a strikingly elegant answer to the question of how the periodic table is arranged (if one accepts that a single proton is a good approximation to a general atomic nucleus). Of course there is much more to say here about the relation between representation theory and physics and chemistry, but I am not the one to ask... My half-baked explanation for the SO(4) symmetry is this: the wave functions we are interested in are localized near the origin, so their Fourier transforms are smeared out in momentum space. Smeared-out functions on R^3 look like functions on S^3, and the proton in R^3 is completely smeared out in S^3, so... there is an SO(4) symmetry in momentum space. Or something like that. –  Qiaochu Yuan Feb 25 '11 at 1:10 You could mention almost any physical theory (e.g. General Relativity, anything with PDEs, etc.) - almost all involve some degree of nontrivial mathematics. Why QM in particular? I actually think it's a particularly bad example! You say: "...but the Bohr model still does not provide a conceptual explanation of it." But I would say the same applies to Quantum Mechanics! 
Maybe it gives correct formulae and predictions for (currently known) experimental data, but it's TOTALLY crazy and strange. Maybe in 100-200 years from now, it will have been replaced by something totally different! –  Zen Harper Feb 25 '11 at 5:34 ...just to make it clear, I'm obviously not suggesting that QM is not useful or not mathematical! I just think there are many other things from mathematical physics which are much less crazy and counterintuitive, and so would be much better examples for conversation with nonmathematicians. To me, QM looks like a totally incorrect theory which only gives the correct formulae by chance. A proper explanation is still waiting to be found. –  Zen Harper Feb 25 '11 at 5:40 @Qiaochu: That's fine as a basic example of the use of representation theory. But note that much more subtle aspects of how the Periodic Table is organized, which for a long time were only experimentally observed, have just recently been described mathematically: by detailed studies of asymptotic properties of the Schrödinger equation combined with some rep-theoretical aspects, see mpip-mainz.mpg.de/theory/events/namet2010/… –  Thomas Sauvaget Feb 25 '11 at 10:07 @Qiaochu: Sorry, the link should be mpip-mainz.mpg.de/theory/events/namet2010/… Also, the SO(4) symmetry is a property of the classical 2-body problem, which is thus inherited by the quantum one: for a natural conceptual explanation you can have a look at this wikipedia section en.wikipedia.org/wiki/… (and that whole wikipedia page for more details on the corresponding additional conserved quantity). –  Thomas Sauvaget Feb 25 '11 at 10:16 Here are some examples; to quote my favorite one: In 1998, mathematics was suddenly in the news. Thomas Hales of the University of Pittsburgh, Pennsylvania, had proved the Kepler conjecture, showing that the way greengrocers stack oranges is the most efficient way to pack spheres. A problem that had been open since 1611 was finally solved! 
On the television a greengrocer said: “I think that it's a waste of time and taxpayers' money.” I have been mentally arguing with that greengrocer ever since: today the mathematics of sphere packing enables modern communication, being at the heart of the study of channel coding and error-correcting codes. In 1611, Johannes Kepler suggested that the greengrocer's stacking was the most efficient, but he was not able to give a proof. It turned out to be a very difficult problem. Even the simpler question of the best way to pack circles was only proved in 1940 by László Fejes Tóth. Also in the seventeenth century, Isaac Newton and David Gregory argued over the kissing problem: how many spheres can touch a given sphere with no overlaps? In two dimensions it is easy to prove that the answer is 6. Newton thought that 12 was the maximum in 3 dimensions. It is, but only in 1953 did Kurt Schütte and Bartel van der Waerden give a proof. The kissing number in 4 dimensions was proved to be 24 by Oleg Musin in 2003. In 5 dimensions we can say only that it lies between 40 and 44. Yet we do know that the answer in 8 dimensions is 240, proved back in 1979 by Andrew Odlyzko of the University of Minnesota, Minneapolis. The same paper had an even stranger result: the answer in 24 dimensions is 196,560. These proofs are simpler than the result for three dimensions, and relate to two incredibly dense packings of spheres, called the E8 lattice in 8 dimensions and the Leech lattice in 24 dimensions. This is all quite magical, but is it useful? In the 1960s an engineer called Gordon Lang believed so. Lang was designing the systems for modems and was busy harvesting all the mathematics he could find. He needed to send a signal over a noisy channel, such as a phone line. The natural way is to choose a collection of tones for signals. But the sound received may not be the same as the one sent. To solve this, he described the sounds by a list of numbers. 
It was then simple to find which of the signals that might have been sent was closest to the signal received. The signals can then be considered as spheres, with wiggle room for noise. To maximize the information that can be sent, these 'spheres' must be packed as tightly as possible. In the 1970s, Lang developed a modem with 8-dimensional signals, using the E8 packing. This helped to open up the Internet, as data could be sent over the phone, instead of relying on specifically designed cables. Not everyone was thrilled. Donald Coxeter, who had helped Lang understand the mathematics, said he was “appalled that his beautiful theories had been sullied in this way”. computerized tomography In computerized tomography one measures X-ray images of a body from different angles. Each X-ray image roughly corresponds to a projection of the density distribution along a certain direction. To obtain the full density distribution inside the body one has to invert the Radon transform. This is an interesting problem from integral geometry which is also challenging concerning the numerical implementation, since the inverse is known to be discontinuous and hence regularization techniques have to be employed. Another interesting aspect of this story is that the mathematical problem of the inversion of the Radon transform was solved around 1917 (by Johann Radon himself), while this was totally unknown to the inventors of computerized tomography as it is used today. Example: Information sent over the internet needs to be secure such that only the sender and the recipient can understand and use it. Example: Man-in-the-middle attack on a bank transaction: You send an order to your bank to pay 100 Dollars to Mr. X. I intercept this transmission and change the order to your bank to make them send me 100 000 Dollars instead. 
Since all information sent over the internet passes through a lot of different computers (gateways), all I need to intercept your message is access to one of those computers. Thousands of network administrators do have such access (this is grossly simplified, of course). In order to secure the information, my bank and I need to agree on an algorithm for cryptography. Commonly used are algorithms using public/private key pairs. These consist of functions such that 1. the bank publishes a public key k, 2. I can apply a function f mapping the information inf I would like to send, using the public key k, to an encrypted message f(inf, k). 3. The whole punchline is that the inverse function can only be computed by knowing the private key, which only my bank knows. So only my bank can recover the information inf from f(inf, k). Commonly used algorithms are based on the assumption that there is no efficient algorithm to factorize large numbers, i.e. to compute the prime factors of a given large number. This assumption has not been proven. So you can a) get famous by proving this assumption, b) get either famous (and provoke a collapse of internet banking) or insanely rich by finding an algorithm that computes prime factors efficiently, or c) get famous and rich by finding an algorithm for public/private key encryption that is efficient and provably secure. I don't want to be the person who makes most of the current coding algorithms useless ... this seems to be quite dangerous. –  Martin Brandenburg Feb 25 '11 at 9:15 It would be dangerous if someone developed a cracking algorithm to use this for his own good instead of publishing it. –  Tim van Beek Feb 27 '11 at 10:48 computer game consoles Many computer games today display some sort of 3D real-time graphics. 
There are two important aspects: a) the display of a 2D projection of a 3D object model needs involved algorithms that calculate what is visible from the viewpoint of the observer, what objects look like from that perspective, and shading and lighting effects on colors. These algorithms need concepts from 3D geometry (linear algebra, vectors, areas, projection operators, etc.). Many computer science departments have classes for the involved mathematics. b) the animation effects of many games are calculated by numerical solutions of partial differential equations describing the physical motion of solid bodies and fluids. In order to animate fluids like water, for example, computer games use finite element approximations to the Navier-Stokes equations. Needless to say, this is a very active area of current research. (Computer consoles are actually used in research involving computational fluid dynamics because they are cheap, easy to program and very powerful.) The last part is also true for automobile design and fabrication: Car companies need to test new car designs, for example for mechanical problems: Are there any parts that will make noise once you drive faster than 50 km/h? This is tested with software that simulates the mechanical parts of the proposed car design using finite element approximations to the equations of solid mechanics. The same technique is also used to simulate crash tests. The design of the car body is done via CAD (computer aided design) software. This software uses approximation and interpolation algorithms to calculate external surfaces that are as smooth as possible while satisfying boundary conditions that are specified by the designer. These approximations are done e.g. by spline interpolation. Numerical approximations of computational fluid dynamics are also used to simulate tests in the wind tunnel. This actually saves a lot of money. (It is also the reason why modern cars all kinda look alike.) 
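To make "spline interpolation" slightly more concrete: CAD surfaces are stitched together from Bézier/spline patches, whose basic primitive is evaluated by repeated linear interpolation (de Casteljau's algorithm). The following is an illustrative sketch, not code from any particular CAD package.

```python
def de_casteljau(control, t):
    # Evaluate a Bézier curve at parameter t by repeatedly taking
    # convex combinations of consecutive control points.
    pts = [p for p in control]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic Bézier segment: the endpoints are interpolated exactly,
# the two interior control points shape the curve.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0)
print(de_casteljau(ctrl, 1.0))  # (4.0, 0.0)
print(de_casteljau(ctrl, 0.5))  # (2.0, 1.5)
```

A designer's smoothness constraints then translate into conditions on how adjacent segments share and align their control points.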
circuit design and computer hardware Example: a robotic arm has to create a complicated electronic circuit by putting a conducting material on a non-conducting base. The robotic arm has to traverse the whole graph that makes up the circuit at least once. In order to reduce the time the robot needs to create one electronic circuit, the path it has to traverse needs to be minimized, that is, one needs a good approximate solution to the traveling salesman problem. I know of examples where improved heuristic approximation algorithms have increased the output by several percent (the student doing the math thesis on this was rewarded by the company producing these circuits with several hundred thousand dollars. No, it wasn't me.). (Dredged up from the murky past...) Designing control systems usually involves building a logic circuit that has several inputs and one or two outputs. Sometimes states are involved (sequencing of traffic lights, coin collectors for vending machines), sometimes not. In designing such control logic, many equations get written down which represent things like "If these three switches are off and these others are on, flip this switch over here". Once one has the equations written down (often as a Boolean function, a map from {0,1}^n to {0,1}), one has to build the circuit implementing these equations. Oftentimes, the medium for implementation is a gate array, which may be a field of NAND logic gates that can be wired together, or a programmable logic device, which is like two or more gate arrays, some with ANDs, some with ORs, some NOT gates, flip-flops which are like little memory stores, and so on. The major question is: are there enough gates on the device to build all the logic represented by the equations? To this end, computer programs called logic minimizers are used. 
They have certain definite rules (related to manipulating terms in Boolean logic) and certain heuristics (guidelines and methods for following the guidelines) to follow in order to minimize the number of, say, AND and OR gates used in representing the equations. The mathematics of representing any Boolean function as a series of AND and OR gates, and finding equivalent representations, has been developed and used since George Boole set down the algebraic form of what is now called Boolean logic. Computer science, abstract algebra, clone theory, all have played and continue to play an essential role in solving instances of this kind of problem. The fact that it is not completely solved is related to one of the Millennium Prize problems (P vs. NP). Gerhard "Ask Me About PLD Chips" Paseman, 2011.02.24 The error-correction required for cell phones and 3G and 4G devices to work is mathematics! 
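To make that last answer slightly more concrete, here is a minimal sketch of single-error correction in the spirit of the Hamming(7,4) code, a toy relative of the codes used in real radio links, not the actual cellular standard:

```python
def hamming_encode(d):
    # Hamming(7,4): 4 data bits, 3 parity bits at positions 1, 2, 4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4    # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming_decode(c):
    c = list(c)
    # Recompute the parities; the syndrome bits spell out the
    # 1-indexed position of a single flipped bit (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1  # correct the error in place
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

word = [1, 0, 1, 1]
code = hamming_encode(word)
code[4] ^= 1  # corrupt one bit in transit
print(hamming_decode(code))  # recovers [1, 0, 1, 1]
```

Any single flipped bit among the seven is located and repaired, at the cost of sending 3 extra bits per 4 bits of data; production codes make far more refined versions of the same trade-off.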
When finding the discrete energy states of an operator, I have been taught to use the time-independent Schrödinger equation, which restates the definition of eigenvalues and eigenvectors. What I don’t understand is why the eigenvalues are the energy states: is there, firstly, a mathematical reason and, secondly, a physical reason? Does this arise from Hamiltonian or Lagrangian mechanics, which I am not familiar with? Sorry, I mean eigenvalues of the operator, not the wave function. –  Josh Apr 6 '11 at 17:00 Keep in mind that we are only dealing with Hermitian operators, because their eigenvalues are real, and hence correspond to positive definite probabilities. –  Matt Calhoun Apr 8 '11 at 16:44 7 Answers 
A non-classical feature of QM is that some states are stationary, which means they do not change in time. E.g., the electron in a Bohr orbit is actually not moving, not orbiting at all, and this solves the classical paradoxes about the atom (why the rotating charge doesn't radiate its energy away and fall into the centre). The first key point is that an eigenstate is a stationary state: what is the explanation for this? well, Schroedinger's time dependent equation clearly says that, up to a constant of proportionality, the time-rate of change of any state $\psi$ is found by applying the operator $H$ (the Hamiltonian: we do not yet know it is also the energy operator) to it: the new vector or function $H\cdot\psi$ is the change in $\psi$ per unit time. Obviously if this is zero, $\psi$ does not change (this was the only classical possibility). But also if $H\cdot\psi$ is even a non-zero multiple of $\psi$, call it $E\psi$, then $\psi$ plus this rate of change is still a multiple of $\psi$, so as time goes on, $\psi$ changes in a trivial fashion: just to another multiple of itself. In QM, a multiple of the wave function represents the same quantum state, so we see the quantum state does not change. Now the next key point is that a state with a definite energy value must be stationary. Why? In QM, it is not automatic that a system has a definite value of a physical quantity, but if it does, that means its measurement always leads to the same answer, so there is no uncertainty. So if there is no uncertainty in the energy, by Heisenberg's uncertainty principle there must be infinite uncertainty in something else, whatever is 'conjugate' to energy. And that is time. You cannot tell the time using this system, which implies it is not changing. So it is stationary. (remember, we are not assuming that $H$ is also the energy operator and we are not assuming the formula for expectations). Thus being an eigenstate of $H$ implies $\psi$ is stationary. 
And having a definite energy value implies it is stationary. Being physicists, we now conclude that being an eigenstate implies it has a definite energy value, which answers your question, and these are the 'energy levels' of a system such as an atom: a system, even an atom, might not possess a definite energy, but if it doesn't, it won't be stationary, and being microscopic, the time-scale in which it will evolve will be so rapid we are unlikely to be able to observe its energy, or even care (since it won't be relevant to molecules or chemistry). So, 'most' atoms for which we can actually measure their energy must be stationary: this is 'why' the definite values of energy which a stationary state can possess are called the 'energy levels' of the system, and historically were discovered first, before Schroedinger's equation. From a human perspective, most atoms that we care about spend most of the time that matters to us in an approximately stationary state. In case you are wondering why time is the conjugate to energy, whereas Heisenberg's original analysis of his uncertainty principle showed that position was conjugate to momentum, we rely on relativity: time is just another coordinate of space-time, and so is analogous to position. And in relativistic mechanics, momentum in a spatial direction is analogous to energy (or mass, same thing). In the standard relativistic equation $$E^2-p^2=m^2,$$ we see that momentum ($p$) and mass ($m$) enter symmetrically (except for the negative sign). So since momentum is conjugate to position, $m$ or energy must be conjugate to time. For this reason, Bohr was able to extend Heisenberg's analysis, of the uncertainty relations between measurements of position and measurements of momentum, to show the same relations between energy and time. Both eigenvalues and eigenstates belong to some operator. In your case, this is the Hamiltonian operator $\hat H$. It is fundamental for many reasons. 
First is that it is indeed the operator that represents energy, in the sense that the possible energy levels are encoded in its spectrum (i.e. its set of eigenvalues). The second important reason is that it is the operator that appears in the Schrodinger equation $i \hbar \partial_t \left | \psi(t) \right > = \hat H \left | \psi(t) \right >$. This equation can then be solved by writing $\left | \psi(t) \right >$ as a superposition of eigenstates of $\hat H$: $\left | \psi(t) \right > = \sum_n c_n(t) \left | \psi_n \right >$. If we can find these states, we are done, as $c_n(t) = \exp({-iE_n t \over \hbar}) c_n(t=0)$ solves the equation (and it also shows the importance of these eigenstates, because they are preserved by time evolution). So the problem of time evolution in quantum mechanics can be reduced to the problem of finding the eigenvalues and eigenstates of $\hat H$, the equation for that being $\hat H \left | \psi_n \right> = E_n \left| \psi_n \right>$.

Note: the above assumes that $\hat H$ is time-independent. If it's not (as is the case in many practical applications), then we use different techniques, e.g. perturbation theory, path integrals, or various scattering formulas.

You seem to be confusing two things, namely the eigenstates of an operator and Schrödinger's equation. A priori, these two have nothing to do with each other. In Quantum Mechanics, measurable quantities are represented by (hermitian) operators on a Hilbert space. For instance there is an operator $P$ corresponding to the momentum. In general, when measuring the momentum of a state $|\psi \rangle$, the result will not be deterministic. However, the average over several measurements will be equal to the expectation value $$ \langle \psi | P | \psi \rangle $$ However, when $|\psi\rangle$ is an eigenvector of the operator, $P|\psi\rangle = \lambda |\psi\rangle$, then the measurement will always yield the same value $\lambda$.
In particular, there is an operator corresponding to the total energy, the Hamiltonian $H$. The form of this operator can be obtained from classical physics if you replace momentum and location by their corresponding operators. For instance, the Hamiltonian of an electron in an electric potential $V$ is $$ H = \frac1{2m} P^2 + eV(X) .$$ Thus, the expectation value for the energy of a state $|\psi\rangle$ is $\langle \psi|H|\psi\rangle$. Now, the Hamiltonian is a very interesting operator because it features prominently in the equation of motion, the Schrödinger equation: $$ i\hbar \partial_t |\psi(t)\rangle = H |\psi(t)\rangle .$$ What does this have to do with the eigenvalues of the Hamiltonian? A priori nothing, but the point is that knowing the eigenvectors and eigenvalues of $H$ allows you to solve this equation. Namely, if you have an eigenvector $|\psi_n\rangle$, then you have $$ i\hbar \partial_t |\psi_n(t)\rangle = H |\psi_n(t)\rangle = E_n|\psi_n(t)\rangle$$ which can be solved to give $$ |\psi_n(t)\rangle = e^{-\frac{i}{\hbar} E_n t} |\psi_n(0)\rangle $$ To summarize: the eigenvalues of an operator tell you something about what happens when you perform measurements, but in addition, the eigenvalues of the energy operator help you solve the equations of motion.

The reason why it is the eigenvalues of the Hamiltonian, and not of some other operator, that give you the energy states is that in classical mechanics, the Hamiltonian function is just the energy of your system, expressed as a function of position $x$ and momentum $p$. As a simple example, the Hamiltonian for a harmonic oscillator is $$H(x,p) = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2$$ Note that this really is just the sum of kinetic and potential energy, so we could write $H(x,p) = E$. To get to quantum mechanics, one now performs what is called canonical quantization. There is no mathematically rigorous reason why this will give you a correct quantum mechanics.
Since quantum isn't classical, we cannot really expect to find a seamless and watertight derivation of the former from the latter. To my knowledge, this approach has, however, always given correct results. So, in canonical quantization, what one does is to replace the variables of the Hamiltonian, i.e., $x$ and $p$, with their operator versions, $\hat x$ and $\hat p$. Now we cannot simply write $H(x,p) = E$ anymore, since the energy is a scalar, but the Hamiltonian $H$ is now an operator. Operators are maps that take a wavefunction, modify it in some way, and give you a new wavefunction. Now, another postulate of quantum mechanics is that you get the expectation value of an operator $\hat A$ in a given state $\Psi$ by calculating the integral $$ \int dx \Psi^*(x) \hat A \Psi(x) $$ Hence, we get the expectation value of the energy by calculating $$ \int dx \Psi^*(x) H \Psi(x)$$ Obviously, if $H\Psi(x) = E\Psi(x)$, then the expectation value yields $E$, and it is not hard to show that for such an eigenstate, the variance of $E$ will be $0$, i.e. every measurement of the energy in state $\Psi$ will yield the same value $E$.

A transformation from one set to another can be regarded as a matrix if we define a particular basis. Likewise, an operator can be thought of as a matrix. What is the matrix equation that relates eigenvalues and eigenvectors? You are solving an eigenvalue problem when you are solving the time-independent Schrodinger equation.

The basic experimental fact the inventors of QM had to deal with was the uncertainty principle. The mathematics behind this principle has two major parts, one involving linear algebra and another involving Fourier analysis. In other words, the operator algebra of QM is necessary in order to have a theory which obeys the uncertainty principle, and if you want to know why this is true, you have to study the mathematics.
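The zero-variance claim made above (an eigenstate yields the same energy in every measurement) is easy to verify numerically. A sketch, not taken from any of the answers, that discretizes the harmonic-oscillator Hamiltonian on a grid with ħ = m = ω = 1; the grid size and box length are arbitrary choices:

```python
import numpy as np

# Discretize H = p^2/2 + x^2/2 on a grid (hbar = m = omega = 1).
n, L = 400, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Kinetic term via a 3-point finite-difference second derivative.
T = (-np.diag(np.ones(n - 1), -1) + 2 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)) / (2 * dx**2)
H = T + np.diag(0.5 * x**2)

E, psi = np.linalg.eigh(H)
psi0 = psi[:, 0]                      # ground state, normalized by eigh

exp_H = psi0 @ H @ psi0               # expectation value of the energy
exp_H2 = psi0 @ H @ H @ psi0
variance = exp_H2 - exp_H**2

print(exp_H)       # close to the exact ground-state energy 1/2
print(variance)    # essentially zero: every measurement gives the same E
```

For a superposition of eigenstates, by contrast, the same calculation gives a non-zero variance.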
I think you should specify your answer. – Self-Made Man Nov 14 '13 at 14:29

The physics of this is the de Broglie relation for particles, which relates the energy to the frequency of some wave. The energy of a photon is the frequency of the emitted electromagnetic wave. When a quantum mechanical atom is weakly interacting with the photon field, and goes from a state with frequency $f$ to a state with frequency $f'$, it can only emit photons with frequency $f-f'$. The reason is that the transition process is only resonant with waves of frequency equal to the beat frequency $\Delta f = f-f'$. The atomic relative phases during the transition process recur with period $1/\Delta f$, and for any outgoing wave whose frequency does not match this, the process will be cancelling at long times, and no wave will be emitted. This means that atomic transitions from $f$ to $f'$ are accompanied by a loss of energy of $h\Delta f$, so that one must identify the frequency with the energy in general quantum systems. The Schrodinger waves of definite frequency are the solutions of the time-independent problem, since when $$i{d\over dt} \psi = H \psi $$ and $H\psi = E\psi$, that is, if $\psi$ is an eigenvector of $H$, then $\psi(t) = e^{-iEt} \psi(0)$, so the time dependence of the wave has a definite frequency. I am giving a physical argument here, because the notion that energy is frequency is ingrained into the foundation of quantum mechanics, and it is hard to argue that it is true using a formalism built upon this as a foundation.
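The beat-frequency argument lends itself to a quick numerical illustration. A sketch with ħ = 1 and two made-up energy levels; the point is only that a single eigenstate has a constant probability density, while an equal superposition oscillates at the difference frequency (E2 − E1)/ħ:

```python
import numpy as np

hbar = 1.0
E1, E2 = 1.0, 3.5                      # two made-up energy levels
t = np.linspace(0, 20, 2000)

# Equal-weight superposition coefficients.
c1, c2 = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Interference term of |psi(t)|^2 at a point where both eigenfunctions overlap:
# |c1 e^{-iE1 t} + c2 e^{-iE2 t}|^2 = 1 + 2 c1 c2 cos((E2 - E1) t / hbar)
density = np.abs(c1 * np.exp(-1j * E1 * t / hbar)
                 + c2 * np.exp(-1j * E2 * t / hbar)) ** 2

# The density oscillates at the beat (angular) frequency (E2 - E1)/hbar:
expected = 1 + 2 * c1 * c2 * np.cos((E2 - E1) * t / hbar)
assert np.allclose(density, expected)

# A single eigenstate, by contrast, has a constant density:
single = np.abs(np.exp(-1j * E1 * t / hbar)) ** 2
assert np.allclose(single, 1.0)
```

This is the same mathematics as the resonance argument: only the beat frequency survives at long times.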
The geometry of the wavefunction, electron spin and the form factor

Our previous posts showed how a simple geometric interpretation of the elementary wavefunction yielded the (Compton scattering) radius of an elementary particle—for an electron, at least: for the proton, we only got the order of magnitude right—but then a proton is not an elementary particle. We got lots of other interesting equations as well… But… Well… When everything is said and done, it's that equivalence between the E = m·a²·ω² and E = m·c² relations that we… Well… We need to be more specific about it. Indeed, I've been ambiguous here and there—oscillating between various interpretations, so to speak. 🙂 In my own mind, I refer to my unanswered questions, or my ambiguous answers to them, as the form factor problem. So… Well… That explains the title of my post. But so… Well… I do want to be somewhat more conclusive in this post. So let's go and see where we end up. 🙂

To help focus our mind, let us recall the metaphor of the V-2 perpetuum mobile, as illustrated below. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down. It provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring: it is described by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs. Of course, instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft, but then that's not fancy enough for me.
🙂

[Illustration: V-2 engine]

At first sight, the analogy between our flywheel model of an electron and the V-twin engine seems to be complete: the 90-degree angle of our V-2 engine makes it possible to perfectly balance the pistons and we may, therefore, think of the flywheel as a (symmetric) rotating mass, whose angular momentum is given by the product of the angular frequency and the moment of inertia: L = ω·I. Of course, the moment of inertia (aka the angular mass) will depend on the form (or shape) of our flywheel:

1. I = m·a² for a rotating point mass m or, what amounts to the same, for a circular hoop of mass m and radius a.
2. For a rotating (uniformly solid) disk, we must add a 1/2 factor: I = m·a²/2.

How can we relate those formulas to the E = m·a²·ω² formula? The kinetic energy that is being stored in a flywheel is equal to Ekinetic = I·ω²/2, so that is only half of the E = m·a²·ω² product if we substitute I for I = m·a². [For a disk, we get a factor 1/4, so that's even worse!] However, our flywheel model of an electron incorporates potential energy too. In fact, the E = m·a²·ω² formula just adds the (kinetic and potential) energy of two oscillators: we do not really consider the energy in the flywheel itself because… Well… The essence of our flywheel model of an electron is not the flywheel: the flywheel just transfers energy from one oscillator to the other, but so… Well… We don't include it in our energy calculations. The essence of our model is that two-dimensional oscillation which drives the electron, and which is reflected in Einstein's E = m·c² formula. That two-dimensional oscillation—the a²·ω² = c² equation, really—tells us that the resonant (or natural) frequency of the fabric of spacetime is given by the speed of light—but measured in units of a. [If you don't quite get this, re-write the a²·ω² = c² equation as ω = c/a: the radius of our electron appears as a natural distance unit here.]
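As a quick sanity check on the ω = c/a relation, one can plug in the standard CODATA values for ħ, the electron mass and c. This is just arithmetic, not part of the argument:

```python
# Reduced Compton radius a = hbar/(m*c) and angular frequency omega = m*c^2/hbar
hbar = 1.054571817e-34      # J·s
m_e = 9.1093837015e-31      # kg, electron mass
c = 299792458.0             # m/s

a = hbar / (m_e * c)
omega = m_e * c**2 / hbar

print(a)            # ≈ 3.8616e-13 m, the reduced Compton radius
print(a * omega)    # ≈ c: the tangential velocity a·ω equals the speed of light
```

The a·ω = c identity is exact by construction: it is just the a²·ω² = c² equation re-arranged.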
Now, we were extremely happy with this interpretation not only because of the key results mentioned above, but also because it has lots of other nice consequences. Think of our probabilities as being proportional to energy densities, for example—and all of the other stuff I describe in my published paper on this. But there is even more on the horizon: a follower of this blog (a reader with an actual PhD in physics, for a change) sent me an article analyzing elementary particles as tiny black holes because… Well… If our electron is effectively spinning around, then its tangential velocity a·ω is equal to c. Now, recent research suggests black holes are also spinning at (nearly) the speed of light. Interesting, right? However, in order to understand what she's trying to tell me, I'll first need to get a better grasp of general relativity, so I can relate what I've been writing here and in previous posts to the Schwarzschild radius and other stuff.

Let me get back to the lesson here. In the reference frame of our particle, the wavefunction really looks like the animation below: it has two components, and the amplitude of the two-dimensional oscillation is equal to a, which we calculated as a = ħ/(m·c) = 3.8616×10⁻¹³ m, so that's the (reduced) Compton scattering radius of an electron. In my original article on this, I used a more complicated argument involving the angular momentum formula, but I now prefer a more straightforward calculation:

c = a·ω = a·E/ħ = a·m·c²/ħ ⇔ a = ħ/(m·c)

The question is: what is that rotating arrow? I've been vague and not so vague on this. The thing is: I can't prove anything in this regard. But my hypothesis is that it is, in effect, a rotating field vector, so it's just like the electric field vector of a (circularly polarized) electromagnetic wave (illustrated below). There are a number of crucial differences though:

1.
The (physical) dimension of the field vector of the matter-wave is different: I associate the real and imaginary component of the wavefunction with a force per unit mass (as opposed to the force per unit charge dimension of the electric field vector). Of course, the newton/kg dimension reduces to the dimension of acceleration (m/s²), so that's the dimension of a gravitational field.

2. I do believe this gravitational disturbance, so to speak, does cause an electron to move about some center, and I believe it does so at the speed of light. In contrast, electromagnetic waves do not involve any mass: they're just an oscillating field. Nothing more. Nothing less. In contrast, as Feynman puts it: "When you do find the electron some place, the entire charge is there." (Feynman's Lectures, III-21-4)

I mentioned that in my previous post but, for your convenience, I'll repeat what I wrote there. The basic idea here is illustrated below (credit for this illustration goes to another blogger on physics). As for the Stern-Gerlach experiment itself, let me refer you to a YouTube video from the Quantum Made Simple site.

[Figure 1: Bohr]

The point is: the direction of the angular momentum (and the magnetic moment) of an electron—or, to be precise, its component as measured in the direction of the (inhomogeneous) magnetic field through which our electron is traveling—cannot be parallel to the direction of motion. On the contrary, it is perpendicular to the direction of motion. In other words, if we imagine our electron as spinning around some center, then the disk it circumscribes will comprise the direction of motion. However, we need to add an interesting detail here. As you know, we don't really have a precise direction of angular momentum in quantum physics. [If you don't know this… Well… Just look at one of my many posts on spin and angular momentum in quantum physics.]
Now, we’ve explored a number of hypotheses but, when everything is said and done, a rather classical explanation turns out to be the best: an object with an angular momentum J and a magnetic moment μ (I used bold-face because these are vector quantities) that is parallel to some magnetic field B, will not line up, as you’d expect a tiny magnet to do in a magnetic field—or not completely, at least: it will precess. I explained that in another post on quantum-mechanical spin, which I advise you to re-read if you want to appreciate the point that I am trying to make here. That post integrates some interesting formulas, and so one of the things on my ‘to do’ list is to prove that these formulas are, effectively, compatible with the electron model we’ve presented in this and previous posts. Indeed, when one advances a hypothesis like this, it’s not enough to just sort of show that the general geometry of the situation makes sense: we also need to show the numbers come out alright. So… Well… Whatever we think our electron—or its wavefunction—might be, it needs to be compatible with stuff like the observed precession frequency of an electron in a magnetic field. Our model also needs to be compatible with the transformation formulas for amplitudes. I’ve been talking about this for quite a while now, and so it’s about time I get going on that. Last but not least, those articles that relate matter-particles to (quantum) gravity—such as the one I mentioned above—are intriguing too and, hence, whatever hypotheses I advance here, I’d better check them against those more advanced theories too, right? 🙂 Unfortunately, that’s going to take me a few more years of studying… But… Well… I still have many years ahead—I hope. 🙂 Post scriptum: It’s funny how one’s brain keeps working when sleeping. When I woke up this morning, I thought: “But it is that flywheel that matters, right? That’s the energy storage mechanism and also explains how photons possibly interact with electrons. 
The oscillators drive the flywheel but, without the flywheel, nothing is happening. It is really the transfer of energy—through the flywheel—which explains why our flywheel goes round and round."

It may or may not be useful to remind ourselves of the math in this regard. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time (as a function of the angle θ) is equal to: d(sin²θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ. Now, the motion of the second oscillator (just look at that second piston going up and down in the V-2 engine) is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes (as a function of θ again) is equal to 2∙sin(θ − π/2)∙cos(θ − π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ. So here we have our energy transfer: the flywheel organizes the borrowing and returning of energy, so to speak. That's the crux of the matter.

So… Well… What if the relevant energy formula is E = m·a²·ω²/2 instead of E = m·a²·ω²? What are the implications? Well… We get a √2 factor in our formula for the radius a, as shown below:

a = √2·ħ/(m·c)

Now that is not so nice. For the tangential velocity, we get a·ω = √2·c. This is also not so nice. How can we save our model? I am not sure, but here I am thinking of the mentioned precession—the wobbling of our flywheel in a magnetic field. Remember we may think of Jz—the angular momentum or, to be precise, its component in the z-direction (the direction in which we measure it)—as the projection of the real angular momentum J. Let me insert Feynman's illustration here again (Feynman's Lectures, II-34-3), so you get what I am talking about. Now, all depends on the angle (θ) between Jz and J, of course. We did a rather obscure post on these angles, but the formulas there come in handy now.
Just click the link and review it if and when you'd want to understand the following formula for the magnitude of the presumed actual momentum:

J = √(j(j+1))·ħ

In this particular case (spin-1/2 particles), j is equal to 1/2 (in units of ħ, of course). Hence, J is equal to √0.75·ħ ≈ 0.866·ħ. Elementary geometry then tells us cos(θ) = (1/2)/√(3/4) = 1/√3. Hence, θ ≈ 54.73561°. That's a big angle—larger than the 45° angle we had secretly expected because… Well… The 45° angle has that √2 factor in it: cos(45°) = sin(45°) = 1/√2. Hmm… As you can see, there is no easy fix here. Those damn 1/2 factors! They pop up everywhere, don't they? 🙂 We'll solve the puzzle. One day… But not today, I am afraid. I'll call it the form factor problem… Because… Well… It sounds better than the 1/2 or √2 problem, right? 🙂

Note: If you're into quantum math, you'll note ħ/(m·c) is the reduced Compton scattering radius. The standard Compton scattering radius is equal to λ = (2π·ħ)/(m·c) = h/(m·c). It doesn't solve the √2 problem. Sorry. The form factor problem. 🙂

To be honest, I finished my published paper on all of this with a suggestion that, perhaps, we should think of two circular oscillations, as opposed to linear ones. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of our two-dimensional oscillation as an oscillation of a polar and azimuthal angle. It's just a thought but… Well… I am sure it's going to keep me busy for a while. 🙂

They are oscillations, still, so I am not thinking of two flywheels that keep going around in the same direction. No. More like a wobbling object on a spring. Something like the movement of a bobblehead on a spring perhaps. 🙂

The Essence of Reality

I know it's a crazy title. It has no place in a physics blog, but then I am sure this article will go elsewhere.
[…] Well… […] Let me be honest: it's probably gonna go nowhere. Whatever. I don't care too much. My life is happier than Wittgenstein's. 🙂

My original title for this post was: discrete spacetime. That was somewhat less offensive but, while being less offensive, it suffered from the same drawback: the terminology was ambiguous. The commonly accepted term for discrete spacetime is the quantum vacuum. However, because I am just an arrogant bastard trying to establish myself in this field, I am telling you that term is meaningless. Indeed, wouldn't you agree that, if the quantum vacuum is a vacuum, then it's empty. So it's nothing. Hence, it cannot have any properties and, therefore, it cannot be discrete – or continuous, or whatever. We need to put stuff in it to make it real. Therefore, I'd rather distinguish mathematical versus physical space. Of course, you are smart, and so now you'll say that my terminology is as bad as that of the quantum vacuumists. And you are right. However, this is a story that I am writing, and so I will write it the way I want to write it. 🙂

So where were we? Spacetime! Discrete spacetime. Yes. Thank you! Because relativity tells us we should think in terms of four-vectors, we should not talk about space but about spacetime. Hence, we should distinguish mathematical spacetime from physical spacetime. So what's the definitional difference? Mathematical spacetime is just what it is: a coordinate space – Cartesian, polar, or whatever – which we define by choosing a representation, or a base. And all the other elements of the set are just some algebraic combination of the base set. Mathematical space involves numbers. They don't – let me emphasize that: they do not! – involve the physical dimensions of the variables. Always remember: math shows us the relations, but it doesn't show us the stuff itself. Think of it: even if we may refer to the coordinate axes as time, or distance, we do not really think of them as something physical.
In math, the physical dimension is just a label. Nothing more. Nothing less. In contrast, physical spacetime is filled with something – with waves, or with particles – so it's spacetime filled with energy and/or matter. In fact, we should analyze matter and energy as essentially the same thing, and please do carefully re-read what I wrote: I said they are essentially the same. I did not say they are the same. Energy and mass are equivalent, but not quite the same. I'll tell you what that means in a moment. These waves, or particles, come with mass, energy and momentum. There is an equivalence between mass and energy, but they are not the same. There is a twist – literally (only after reading the next paragraphs, you'll realize how literally): even when choosing our time and distance units such that c is numerically equal to 1 – e.g. when measuring distance in light-seconds (or time in light-meters), or when using Planck units – the physical dimension of the c² factor in Einstein's E = mc² equation doesn't vanish: the physical dimension of energy is kg·m²/s². Using Newton's force law (1 N = 1 kg·m/s²), we can easily see this rather strange unit is effectively equivalent to the energy unit, i.e. the joule (1 J = 1 kg·m²/s² = 1 (N·s²/m)·(m²/s²) = 1 N·m), but that's not the point. The (m/s)² factor – i.e. the square of the velocity dimension – reflects the following:

1. Energy is nothing but mass in motion. To be precise, it's oscillating mass. [And, yes, that's what string theory is all about, but I didn't want to mention that. It's just terminology once again: I prefer to say 'oscillating' rather than 'vibrating'. :-)]
2. The rapidly oscillating real and imaginary component of the matter-wave (or wavefunction, we should say) each capture half of the total energy of the object E = mc².
3. The oscillation is an oscillation of the mass of the particle (or wave) that we're looking at.

In the mentioned publication, I explore the structural similarity between:

1.
The oscillating electric and magnetic field vectors (E and B) that represent the electromagnetic wave, and

2. The oscillating real and imaginary part of the matter-wave.

The story is simple or complicated, depending on what you know already, but it can be told in an obnoxiously easy way. Note that the associated force laws do not differ in their structure:

[Coulomb's law; Newton's law of gravitation]

The only difference is the dimension of m versus q: mass – the measure of inertia – versus charge. Mass comes in one color only, so to speak: it's always positive. In contrast, electric charge comes in two colors: positive and negative. You can guess what comes next, but I won't talk about that here. 🙂 Just note the absolute distance between two charges (with the same or the opposite sign) is twice the distance between 0 and 1, which must explain the rather mysterious 2 factor I get for the Schrödinger equation for the electromagnetic wave (but I still need to show how that works out exactly).

The point is: remembering that the physical dimension of the electric field is N/C (newton per coulomb, i.e. force per unit of charge), it should not come as a surprise that we find that the physical dimension of the components of the matter-wave is N/kg: newton per kg, i.e. force per unit of mass. For the detail, I'll refer you to that article of mine (and, because I know you will not want to work your way through it, let me tell you it's the last chapter that tells you how to do the trick).

So where were we? Strange. I actually just wanted to talk about discrete spacetime here, but I realize I've already dealt with all of the metaphysical questions you could possibly have, except the (existential) Who Am I? question, which I cannot answer on your behalf. 🙂 I wanted to talk about physical spacetime, so that's sanitized mathematical space plus something. A date without logistics. Our mind is a lazy host, indeed. Reality is the guest that brings all of the wine and the food to the party.
In fact, it's a guest that brings everything to the party: you – the observer – just need to set the time and the place. In fact, in light of what Kant – and many other eminent philosophers – wrote about space and time being constructs of the mind, that's another statement which you should interpret literally. So physical spacetime is spacetime filled with something – like a wave, or a field. So what does that look like? Well… Frankly, I don't know! But let me share my idea of it. Because of the unity of Planck's quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s), a wave traveling in spacetime might be represented as a set of discrete spacetime points and the associated amplitudes, as illustrated below. [I just made an easy Excel graph. Nothing fancy.] The space in-between the discrete spacetime points, which are separated by the Planck time and distance units, is not real. It is plain nothingness, or – if you prefer that term – the space in-between is mathematical space only: a figment of the mind – nothing real, because quantum theory tells us that the real, physical, space is discontinuous. Why is that so? Well… Smaller time and distance units cannot exist, because we would not be able to pack Planck's quantum of action in them: a box of the Planck scale, with ħ in it, is just a black hole and, hence, nothing could go from here to there, because all would be trapped. Of course, now you'll wonder what it means to 'pack' Planck's quantum of action in a Planck-scale spacetime box. Let me try to explain this. It's going to be a rather rudimentary explanation and, hence, it may not satisfy you. But then the alternative is to learn more about black holes and the Schwarzschild radius, which I warmly recommend for two equivalent reasons:

1. The matter is actually quite deep, and I'd recommend you try to fully understand it by reading some decent physics course.
2. You'd stop reading this nonsense.
If, despite my warning, you would continue to read what I write, you may want to note that we could also use the logic below to define Planck's quantum of action, rather than using it to define the Planck time and distance units. Everything is related to everything in physics. But let me now give the rather naive explanation itself:

• Planck's quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s) is the smallest thing possible. It may express itself as some momentum (whose physical dimension is N·s) over some distance (Δs), or as some amount of energy (whose dimension is N·m) over some time (Δt).
• Now, energy is an oscillation of mass (I will repeat that a couple of times, and show you the detail of what that means in the last chapter) and, hence, ħ must necessarily express itself both as momentum as well as energy over some time and some distance. Hence, it is what it is: some force over some distance over some time. This reflects the physical dimension of ħ, which is the product of force, distance and time. So let's assume some force ΔF, some distance Δs, and some time Δt, so we can write ħ as ħ = ΔF·Δs·Δt.
• Now let's pack that into a traveling particle – like a photon, for example – which, as you know (and as I will show in this publication), is, effectively, just some oscillation of mass, or an energy flow. Now let's think about one cycle of that oscillation. How small can we make it? In spacetime, I mean.
• If we decrease Δs and/or Δt, then ΔF must increase, so as to ensure the integrity (or unity) of ħ as the fundamental quantum of action. Note that the increase in the momentum (ΔF·Δt) and the energy (ΔF·Δs) is proportional to the decrease in Δt and Δs. Now, in our search for the Planck-size spacetime box, we will obviously want to decrease Δs and Δt simultaneously.
• Because nothing can exceed the speed of light, we may want to use equivalent time and distance units, so the numerical value of the speed of light is equal to 1 and all velocities become relative velocities. If we now assume our particle is traveling at the speed of light – so it must be a photon, or a (theoretical) matter-particle with zero rest mass (which is something different from a photon) – then our Δs and Δt should respect the following condition: Δs/Δt = c = 1.
• Now, when Δs = 1.6162×10⁻³⁵ m and Δt = 5.391×10⁻⁴⁴ s, we find that Δs/Δt = c, but ΔF = ħ/(Δs·Δt) = (1.0545718×10⁻³⁴ N·m·s)/[(1.6162×10⁻³⁵ m)·(5.391×10⁻⁴⁴ s)] ≈ 1.21×10⁴⁴ N. That force is monstrously huge. Think of it: because of gravitation, a mass of 1 kg in our hand, here on Earth, will exert a force of 9.8 N. Now note the exponent in that 1.21×10⁴⁴ number.
• If we multiply that monstrous force with Δs – which is extremely tiny – we get the Planck energy: (1.6162×10⁻³⁵ m)·(1.21×10⁴⁴ N) ≈ 1.956×10⁹ joule. Despite the tininess of Δs, we still get a fairly big value for the Planck energy. Just to give you an idea, it's the energy that you'd get out of burning 60 liters of gasoline—or the mileage you'd get out of 16 gallons of fuel! In fact, the equivalent mass of that energy, packed in such tiny space, makes it a black hole.
• In short, the conclusion is that our particle can't move (or, thinking of it as a wave, that our wave can't wave) because it's caught in the black hole it creates by its own energy: so the energy can't escape and, hence, it can't flow. 🙂

Of course, you will now say that we could imagine half a cycle, or a quarter of that cycle. And you are right: we can surely imagine that, but we get the same thing: to respect the unity of ħ, we'll then have to pack it into half a cycle, or a quarter of a cycle, which just means the energy of the whole cycle is 2·ħ, or 4·ħ.
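As an aside, the arithmetic in these bullet points is easy to reproduce. A quick sketch using the quoted values for ħ and the Planck length and time:

```python
hbar = 1.0545718e-34        # N·m·s, Planck's quantum of action
l_P = 1.6162e-35            # m, Planck length
t_P = 5.391e-44             # s, Planck time

# Speed check: the Planck length and time are related by c.
print(l_P / t_P)            # ≈ 2.998e8 m/s, i.e. c

# The "monstrous" force needed to pack hbar into the Planck-scale box:
F = hbar / (l_P * t_P)
print(F)                    # ≈ 1.21e44 N

# Multiplying that force by the Planck length gives the Planck energy:
print(F * l_P)              # ≈ 1.96e9 J
```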
However, our conclusion still stands: we won’t be able to pack that half-cycle, or that quarter-cycle, into something smaller than the Planck-size spacetime box, because it would make it a black hole, and so our wave wouldn’t go anywhere, and the idea of our wave itself – or the particle – just doesn’t make sense anymore. This brings me to the final point I’d like to make here. When Maxwell or Einstein, or the quantum vacuumists – or I 🙂 – say that the speed of light is just a property of the vacuum, then that’s correct and not correct at the same time. First, we should note that, if we say that, we might also say that ħ is a property of the vacuum. All physical constants are. Hence, it’s a pretty meaningless statement. Still, it’s a statement that helps us to understand the essence of reality. Second, and more importantly, we should dissect that statement. The speed of light combines two very different aspects: 1. It’s a physical constant, i.e. some fixed number that we will find to be the same regardless of our reference frame. As such, it’s as essential as those immovable physical laws that we find to be the same in each and every reference frame. 2. However, its physical dimension is the ratio of the distance and the time unit: m/s. We may choose other time and distance units, but we will still combine them in that ratio. These two units represent the two dimensions in our mind that – as Kant noted – structure our perception of reality: the temporal and spatial dimension. Hence, we cannot just say that c is ‘just a property of the vacuum’. In our definition of c as a velocity, we mix reality – the ‘outside world’ – with our perception of it. It’s unavoidable. Frankly, while we should obviously try – and we should try very hard! – to separate what’s ‘out there’ versus ‘how we make sense of it’, it is and remains an impossible job because… Well… When everything is said and done, what we observe ‘out there’ is just that: it’s just what we – humans – observe.
🙂 So, when everything is said and done, the essence of reality consists of four things: 1. Nothing 2. Mass, i.e. something, or not nothing 3. Movement (of something), from nowhere to somewhere. 4. Us: our mind. Or God’s Mind. Whatever. Mind. The first two are like yin and yang, or Manichaeism, or whatever dualistic religious system. As for Movement and Mind… Hmm… In some very weird way, I feel they must be part of one and the same thing as well. 🙂 In fact, we may also think of those four things as: 1. 0 (zero) 2. 1 (one), or as some sine or a cosine, which is anything in-between 0 and 1. 3. Well… I am not sure! I can’t really separate point 3 and point 4, because they combine point 1 and point 2. So we don’t have a quadruplicity, right? We do have a Trinity here, don’t we? […] Maybe. I won’t comment, because I think I just found Unity here. 🙂 The Poynting vector for the matter-wave In my various posts on the wavefunction – which I summarized in my e-book – I wrote at length about the structural similarities between the matter-wave and the electromagnetic wave. Look at the following images once more: [Animations: the traveling field vector of an electromagnetic wave (left); the elementary wavefunction (right)] Both are the same, and then they are not. The illustration on the right-hand side is a regular quantum-mechanical wavefunction, i.e. an amplitude wavefunction: the x-axis represents time, so we are looking at the wavefunction at some particular point in space. [Of course, we could just switch the dimensions and it would all look the same.] The illustration on the left-hand side looks similar, but it is not an amplitude wavefunction. The animation shows how the electric field vector (E) of an electromagnetic wave travels through space. Its shape is the same. So it is the same function. Is it also the same reality? Yes and no. The two energy propagation mechanisms are structurally similar. The key difference is that, in electromagnetics, we get two waves for the price of one.
Indeed, the animation above does not show the accompanying magnetic field vector (B), which is equally essential. But, for the rest, Schrödinger’s equation and Maxwell’s equation model a similar energy propagation mechanism, as shown below. [Illustration: the energy propagation mechanism of the matter-wave] They have to, as the force laws are similar too: F = q₁·q₂/(4πε₀·r²) and F = G·m₁·m₂/r². The only difference is that mass comes in one color only, so to speak: it’s always positive. In contrast, electric charge comes in two colors: positive and negative. You can now guess what comes next: quantum chromodynamics, but I won’t write about that here, because I haven’t studied that yet. I won’t repeat what I wrote elsewhere, but I want to make good on one promise, and that is to develop the idea of the Poynting vector for the matter-wave. So let’s do that now. Let me first remind you of the basic ideas, however. The animation below shows the two components of the archetypal wavefunction, i.e. the sine and cosine: Think of the two oscillations as (each) packing half of the total energy of a particle (like an electron or a photon, for example). Look at how the sine and cosine mutually feed into each other: the sine reaches zero as the cosine reaches plus or minus one, and vice versa. Look at how the moving dot accelerates as it goes to the center point of the axis, and how it decelerates when reaching the end points, so as to switch direction. The two functions are exactly the same function, but for a phase difference of 90 degrees, i.e. a right angle. Now, I love engines, and so it makes me think of a V-2 engine with the pistons at a 90-degree angle. Look at the illustration below. If there is no friction, we have a perpetual motion machine: it would store energy in its moving parts, while not requiring any external energy to keep it going. If it is easier for you, you can replace each piston by a physical spring, as I did below.
However, I should learn how to make animations myself, because the image below does not capture the phase difference. Hence, it does not show how the real and imaginary part of the wavefunction mutually feed into each other, which is (one of the reasons) why I like the V-2 image much better. 🙂 [Illustration: two springs oscillating at a 90-degree angle] The point to note is: all of the illustrations above are true representations – whatever that means – of (idealized) stationary particles, and both for matter (fermions) as well as for force-carrying particles (bosons). Let me give you an example. The (rest) energy of an electron is tiny: about 8.2×10⁻¹⁴ joule. Note the minus 14 exponent: that’s an unimaginably small amount. It sounds better when using the more commonly used electronvolt scale for the energy of elementary particles: 0.511 MeV. Despite its tiny mass (or energy, I should say, but then mass and energy are directly proportional to each other: the proportionality coefficient is given by the E = m·c² formula), the frequency of the matter-wave of the electron is of the order of 1×10²⁰ = 100,000,000,000,000,000,000 cycles per second. That’s an unimaginably large number and – as I will show when we get there – that’s not because the second is a huge unit at the atomic or sub-atomic scale. We may refer to this as the natural frequency of the electron. Higher rest masses increase the frequency and, hence, give the wavefunction an even higher density in spacetime. Let me summarize things in a very simple way: • The (total) energy that is stored in an oscillating spring is the sum of the kinetic and potential energy (T and U) and is given by the following formula: E = T + U = a₀²·m·ω₀²/2. The a₀ factor is the maximum amplitude – which depends on the initial conditions, i.e. the initial pull or push. The ω₀ in the formula is the natural frequency of our spring, which is a function of the stiffness of the spring (k) and the mass on the spring (m): ω₀² = k/m.
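The 10²⁰ figure follows directly from the Planck–Einstein relation E = h·f, applied to the electron’s rest energy; a minimal check:

```python
import math

h = 6.62607015e-34   # J·s, Planck's constant
E0 = 8.187e-14       # J, the rest energy of the electron (≈ 0.511 MeV)

f = E0 / h           # the natural frequency of the electron's matter-wave
omega = 2 * math.pi * f   # the corresponding angular frequency

print(f)             # of the order of 1e20 cycles per second
```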
• Hence, the total energy that’s stored in two springs is equal to a₀²·m·ω₀². • The similarity between the E = a₀²·m·ω₀² and the E = m·c² formula is much more than just striking. It is fundamental: the two oscillating components of the wavefunction each store half of the total energy of our particle. • To emphasize the point: ω₀ = √(k/m) is, obviously, a characteristic of the system. Likewise, c = √(E/m) is just the same: a property of spacetime. Of course, the key question is: what is it that is oscillating here? In our V-2 engine, we have the moving parts. Now what exactly is moving when it comes to the wavefunction? The easy answer is: it’s the same thing. The V-2 engine, or our springs, store energy because of the moving parts. Hence, energy is equivalent only to mass that moves, and the frequency of the oscillation obviously matters, as evidenced by the E = a₀²·m·ω₀²/2 formula for the energy in an oscillating spring. Mass. Energy is moving mass. To be precise, it’s oscillating mass. Think of it: mass and energy are equivalent, but they are not the same. That’s why the dimension of the c² factor in Einstein’s famous E = m·c² formula matters. The equivalent energy of a 1 kg object is approximately 9×10¹⁶ joule. To be precise, it is the following monstrous number: 89,875,517,873,681,764 kg·m²/s² Note its dimension: the joule is the product of the mass unit and the square of the velocity unit. So that, then, is, perhaps, the true meaning of Einstein’s famous formula: energy is not just equivalent to mass. It’s equivalent to mass that’s moving. In this case, an oscillating mass. But we should explore the question much more rigorously, which is what I do in the next section. Let me warn you: it is not an easy matter and, even if you are able to work your way through all of the other material below in order to understand the answer, I cannot promise you that the answer will satisfy you entirely. However, it will surely help you to phrase the question.
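That monstrous number is just c² expressed in SI units, so the 1 kg example can be verified in two lines:

```python
c = 299_792_458   # m/s, exact by definition of the metre
m = 1.0           # kg

E = m * c**2      # kg·m²/s² = joule
print(E)          # ≈ 8.9875517873682e16 J
```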
The Poynting vector for the matter-wave For the photon, we have the electric and magnetic field vectors E and B. The boldface highlights the fact that these are vectors indeed: they have a direction as well as a magnitude. Their magnitude has a physical dimension. The dimension of E is straightforward: the electric field strength (E) is a quantity expressed in newton per coulomb (N/C), i.e. force per unit charge. This follows straight from the F = q·E force relation. The dimension of B is much less obvious: the magnetic field strength (B) is measured in (N/C)/(m/s) = (N/C)·(s/m). That’s what comes out of the F = q·v×B force relation. Just to make sure you understand: v×B is a vector cross product, and yields another vector, which is given by the following formula: a×b = |a×b|·n = |a|·|b|·sinφ·n The φ in this formula is the angle between a and b (in the plane containing them) and, hence, is always some angle between 0 and π. The n is the unit vector that is perpendicular to the plane containing a and b in the direction given by the right-hand rule. The animation below shows how it works for some rather special angles: We may also need the vector dot product, so let me quickly give you that formula too. The vector dot product yields a scalar given by the following formula: a·b = |a|·|b|·cosφ Let’s get back to the F = q·v×B relation. A dimensional analysis shows that the dimension of B must involve the reciprocal of the velocity dimension in order to ensure the dimensions come out alright: [F] = [q·v×B] = [q]·[v]·[B] = C·(m/s)·(N/C)·(s/m) = N We can derive the same result in a different way. First, note that the magnitude of B will always be equal to E/c (except when none of the charges is moving, so B is zero), which implies the same: [B] = [E/c] = [E]/[c] = (N/C)/(m/s) = (N/C)·(s/m) Finally, the Maxwell equation we used to derive the wavefunction of the photon was ∂E/∂t = c²∇×B, which also tells us the physical dimension of B must involve that s/m factor.
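The |a×b| = |a|·|b|·sinφ and a·b = |a|·|b|·cosφ formulas are easy to verify numerically. A small sketch in plain Python, with a and b chosen arbitrarily:

```python
import math

def cross(a, b):
    """Vector cross product a×b in three dimensions."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)   # 45 degrees away from a, in the xy-plane

phi = math.acos(dot(a, b) / (norm(a) * norm(b)))   # the angle between a and b
assert math.isclose(norm(cross(a, b)), norm(a) * norm(b) * math.sin(phi))

# The result is perpendicular to both a and b (right-hand rule):
n = cross(a, b)
assert dot(n, a) == 0 and dot(n, b) == 0
```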
Otherwise, the dimensional analysis would not work out: 1. [∂E/∂t] = (N/C)/s = N/(C·s) 2. [c²∇×B] = [c²]·[∇×B] = (m²/s²)·[(N/C)·(s/m)]/m = N/(C·s) This analysis involves the curl operator ∇×, which is a rather special vector operator. It gives us the (infinitesimal) rotation of a three-dimensional vector field. You should look it up so you understand what we’re doing here. Now, when deriving the wavefunction for the photon, we gave you a purely geometric formula for B: B = ex×E = i·E Now I am going to ask you to be extremely flexible: wouldn’t you agree that the B = E/c and the B = ex×E = i·E formulas, jointly, only make sense if we’d assign the s/m dimension to ex and/or to i? I know you’ll think that’s nonsense because you’ve learned to think of the ex× and/or i· operation as a rotation only. What I am saying here is that it also transforms the physical dimension of the vector on which we do the operation: it multiplies it with the reciprocal of the velocity dimension. Don’t think too much about it, because I’ll do yet another hat trick. We can think of the real and imaginary part of the wavefunction as being geometrically equivalent to the E and B vector. Just compare the illustrations below: [Illustrations: the E and B vectors of an electromagnetic wave; the rotating elementary wavefunction] Of course, you are smart, and you’ll note the phase difference between the sine and the cosine (illustrated below). So what should we do with that? Not sure. Let’s hold our breath for the moment. Let’s first think about what dimension we could possibly assign to the real part of the wavefunction. We said this oscillation stores half of the energy of the elementary particle that is being described by the wavefunction. How does that storage work for the E vector? As I explained in my post on the topic, the Poynting vector describes the energy flow in a varying electromagnetic field.
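Whatever one makes of the dimensional argument, the geometric part of the B = i·E formula is uncontroversial: multiplying a complex number by i rotates it by 90 degrees. A two-line illustration:

```python
import cmath, math

theta = 0.7                  # an arbitrary phase
E = cmath.exp(1j * theta)    # cosθ + i·sinθ
B = 1j * E                   # multiplying by i ...

# ... is the same as adding 90 degrees (π/2) to the phase:
assert cmath.isclose(B, cmath.exp(1j * (theta + math.pi / 2)))
```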
It’s a bit of a convoluted story (which I won’t repeat here), but the upshot is that the energy density is given by the following formula: u = (ε₀/2)·(E•E + c²·B•B) Its shape should not surprise you. The formula is quite intuitive really, even if its derivation is not. The formula represents the one thing that everyone knows about a wave, electromagnetic or not: the energy in it is proportional to the square of its amplitude, and so that’s E•E = E² and B•B = B². You should also note the c² factor that comes with the B•B product. It does two things here: 1. As a physical constant, with some dimension of its own, it ensures that the dimensions on both sides of the equation come out alright. 2. The magnitude of B is 1/c of that of E, so cB = E, and so that explains the extra c² factor in the second term: we do get two waves for the price of one here and, therefore, twice the energy. Speaking of dimensions, let’s quickly do the dimensional analysis: 1. E is measured in newton per coulomb, so [E•E] = [E²] = N²/C². 2. B is measured in (N/C)/(m/s), so we get [B•B] = [B²] = (N²/C²)·(s²/m²). However, the dimension of our c² factor is (m²/s²) and so we’re left with N²/C². That’s nice, because we need to add stuff that’s expressed in the same units. 3. The ε₀ is that ubiquitous physical constant in electromagnetic theory: the electric constant, a.k.a. the vacuum permittivity. Besides ensuring proportionality, it also ‘fixes’ our units, and so we should trust it to do the same thing here, and it does: [ε₀] = C²/(N·m²), so if we multiply that with N²/C², we find that u is expressed in N/m². Why is N/m² an energy density? The correct answer to that question involves a rather complicated analysis, but there is an easier way to think about it: just multiply N/m² with m/m, and then its dimension becomes N·m/m³ = J/m³, so that’s joule per cubic meter. That looks more like an energy density dimension, doesn’t it? But it’s actually the same thing. In any case, I need to move on.
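The ‘two waves for the price of one’ point can be made concrete: with B = E/c, the two terms of the energy density u = (ε₀/2)·(E•E + c²·B•B) are equal, so each field carries half of the energy. A numerical sketch (the field value is arbitrary):

```python
import math

eps0 = 8.854187817e-12   # C²/(N·m²), the electric constant
c = 299792458.0          # m/s

E = 100.0                # N/C, an arbitrary electric field strength
B = E / c                # (N/C)·(s/m), the matching magnetic field

u_E = (eps0 / 2) * E**2          # electric term of the energy density
u_B = (eps0 / 2) * c**2 * B**2   # magnetic term, with its c² factor

assert math.isclose(u_E, u_B)    # each wave carries half the energy
u = u_E + u_B                    # total, in N/m² = J/m³
print(u)
```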
We talked about the Poynting vector, and said it represents an energy flow. So how does that work? It is also quite intuitive, as its formula really speaks for itself. Let me write it down: ∂u/∂t = −∇•S Just look at it: u is the energy density, so that’s the amount of energy per unit volume at a given point, and so whatever flows out of that point must represent its time rate of change. As for the −∇•S expression… Well… The ∇• operator is the divergence, and so it gives us the magnitude of a (vector) field’s source or sink at a given point. If C is a vector field (any vector field, really), then ∇•C is a scalar, and if it’s positive in a region, then that region is a source. Conversely, if it’s negative, then it’s a sink. To be precise, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. So, in this case, it gives us the volume density of the flux of S. If you’re somewhat familiar with electromagnetic theory, then you will immediately note that the formula has exactly the same shape as the ∇•j = −∂ρ/∂t formula, which represents a flow of electric charge. But I need to get on with my own story here. In order to not create confusion, I will denote the total energy by U, rather than E, because we will continue to use E for the magnitude of the electric field. We said the real and the imaginary component of the wavefunction were like the E and B vector, but what’s their dimension? It must involve force, but it should obviously not involve any electric charge. So what are our options here? You know the electric force law (i.e. Coulomb’s Law) and the gravitational force law are structurally similar: F = q₁·q₂/(4πε₀·r²) and F = G·m₁·m₂/r² So what if we would just guess that the dimension of the real and imaginary component of our wavefunction should involve a newton per kg factor (N/kg), so that’s force per mass unit rather than force per unit charge? But… Hey! Wait a minute!
Newton’s force law defines the newton in terms of mass and acceleration, so we can do a substitution here: 1 N = 1 kg·m/s² ⇔ 1 kg = 1 N·s²/m. Hence, our N/kg dimension becomes: N/kg = N/(N·s²/m) = m/s². What is this: m/s²? Is that the dimension of the a·cosθ term in the a·e^(−i·θ) = a·cosθ − i·a·sinθ wavefunction? I hear you. This is getting quite crazy, but let’s see where it leads us. To calculate the equivalent energy density, we’d then need an equivalent for the ε₀ factor, which – replacing the C by kg in the [ε₀] = C²/(N·m²) expression – would be equal to kg²/(N·m²). Because we know what we want (energy is defined using the force unit, not the mass unit), we’ll want to substitute the kg unit once again, so – temporarily using the μ₀ symbol for the equivalent of that ε₀ constant – we get: [μ₀] = [N·s²/m]²/(N·m²) = N·s⁴/m⁴ Hence, the dimension of the equivalent of that ε₀·E² term becomes: [(μ₀/2)]·[a·cosθ]² = (N·s⁴/m⁴)·(m²/s⁴) = N/m² Bingo! How does it work for the other component? The other component has the imaginary unit (i) in front. If we continue to pursue our comparison with the E and B vectors, we should assign an extra s/m dimension because of the ex and/or i factor, so the physical dimension of the i·sinθ term would be (m/s²)·(s/m) = 1/s. What? Just a reciprocal second? Relax. That second term in the energy density formula has the c² factor, so it all works out: [(μ₀/2)]·[c²]·[i·a·sinθ]² = [(μ₀/2)]·[c²]·[i]²·[a·sinθ]² = (N·s⁴/m⁴)·(m²/s²)·(s²/m²)·(m²/s⁴) = N/m² As weird as it is, it all works out. We can calculate u and, hence, we can now also calculate the equivalent Poynting vector (S). However, I will let you think about that as an exercise. 🙂 Just note the grand conclusions: 1. The physical dimension of the argument of the wavefunction is physical action (newton·meter·second) and Planck’s quantum of action is the scaling factor. 2. The physical dimension of both the real and imaginary component of the elementary wavefunction is newton per kg (N/kg).
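This kind of dimensional bookkeeping is mechanical enough to hand to a computer. A small sketch that represents a dimension as a tuple of exponents of (N, m, s) and reproduces the two key results above – N/kg = m/s², and an energy density in N/m²:

```python
# Represent a physical dimension as exponents of the base units (N, m, s).
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

def power(a, p):
    return tuple(x * p for x in a)

N = (1, 0, 0)   # newton
m = (0, 1, 0)   # metre
s = (0, 0, 1)   # second

kg = div(mul(N, power(s, 2)), m)   # 1 kg = 1 N·s²/m → exponents (1, -1, 2)

# N/kg comes out as m/s², i.e. an acceleration:
assert div(N, kg) == (0, 1, -2)

# The equivalent of ε₀ has dimension kg²/(N·m²) = N·s⁴/m⁴ ...
mu0 = div(power(kg, 2), mul(N, power(m, 2)))
assert mu0 == (1, -4, 4)

# ... and multiplying it with the square of an acceleration (m²/s⁴)
# gives N/m², i.e. an energy density (J/m³):
a_cos = div(N, kg)                 # dimension of the a·cosθ term
u = mul(mu0, power(a_cos, 2))
assert u == (1, -2, 0)             # that is N/m²
```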
This allows us to analyze the wavefunction as an energy propagation mechanism that is structurally similar to Maxwell’s equations, which represent the energy propagation mechanism when electromagnetic energy is involved. As such, all we presented so far was a deep exploration of the mathematical equivalence between the gravitational and electromagnetic force laws: F = q₁·q₂/(4πε₀·r²) and F = G·m₁·m₂/r² Despite our grand conclusions, you should note we have not answered the most fundamental question of all. What is mass? What is electric charge? We have all these relations and equations, but are we any wiser, really? The answer to that question probably lies in general relativity: mass is that what curves spacetime. Likewise, we may look at electric charge as causing a very special type of spacetime curvature. However, even such an answer – which would involve a much more complicated mathematical analysis – may not satisfy you. In any case, I will let you digest this post. I hope you enjoyed it as much as I enjoyed writing it. 🙂 Post scriptum: Of all of the weird stuff I presented here, I think the dimensional analyses were the most interesting. Think of the N/kg = N/(N·s²/m) = m/s² identity, for example. The m/s² dimension is the dimension of physical acceleration (or deceleration): the rate of change of the velocity of an object. The identity comes straight out of Newton’s force law: F = m·a ⇔ F/m = a Now look, once again, at the animation, and remember the formula for the argument of the wavefunction: θ = E₀∙t’. The energy of the particle that is being described determines the (angular) frequency of the real and imaginary components of the wavefunction. The relation between (1) the (angular) frequency of a harmonic oscillator (which is what the sine and cosine represent here) and (2) the acceleration along the axis is given by the following equation: a(x) = −ω₀²·x I’ll let you think about what that means.
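The a(x) = −ω₀²·x relation can be checked numerically: for x(t) = a₀·cos(ω₀·t), a finite-difference estimate of the second time derivative is −ω₀² times the position (the values of ω₀ and a₀ below are arbitrary):

```python
import math

omega0 = 3.0   # rad/s, an arbitrary natural frequency
a0 = 1.5       # an arbitrary amplitude

def x(t):
    """The real component of the wavefunction: a·cos(ω₀·t)."""
    return a0 * math.cos(omega0 * t)

t, h = 0.4, 1e-5
# Central-difference estimate of d²x/dt²:
accel = (x(t + h) - 2 * x(t) + x(t - h)) / h**2

assert math.isclose(accel, -omega0**2 * x(t), rel_tol=1e-5)
```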
I know you will struggle with it – because I did – and, hence, let me give you the following hint: 1. The energy of an ordinary string wave, like a guitar string oscillating in one dimension only, will be proportional to the square of the frequency. 2. However, for two-dimensional waves – such as an electromagnetic wave – we find that the energy is directly proportional to the frequency. Think of Einstein’s E = h·f = ħ·ω relation, for example. There is no squaring here! It is a strange observation. Those two-dimensional waves – the matter-wave, or the electromagnetic wave – give us two waves for the price of one, each carrying half of the total energy but, as a result, we no longer have that square function. Think about it. Solving the mystery will make you feel like you’ve squared the circle, which – as you know – is impossible. 🙂
2DEGs and 2DHGs Two-dimensional electron and hole gases (2DEGs and 2DHGs respectively) can be described as having quantised energy levels for one spatial dimension, but being free to move in the other two. They can be produced by adjoining semiconductors with differently sized band gaps, creating what is known as a heterojunction. Figure 1: An energy level schematic of a heterojunction and the resulting square well. The wells created in the conduction and valence bands can be occupied by electrons and holes respectively, as shown. It is possible for an electron–hole pair to combine and create a photon, the energy of which can be used to infer the structure of the energy levels within the wells. Potential Wells Depending on how heterojunctions are manufactured, the wells in which the 2DEGs and 2DHGs exist may have different forms such as square, triangular or parabolic. For a square well of infinite potential and width a, the energy levels are E_{n}=\frac{\pi^2 \hbar^2 n^2}{2 m^\star a^2} while for a triangular well they are E_{n}=\left(\frac{\hbar^2}{2m^\star}\right)^{1/3}   \left( \frac{3}{2}\pi eF\right)^{2/3}   \left(n+\frac{3}{4} \right)^{2/3} where F is the magnitude of the electric field at the interface and m^\star is the effective mass of the electron or hole. Figure 2: An energy level schematic of a triangular well and a parabolic well. The triangular well approximation (shown in Figure 2) is widely used to describe electron energy bands at single heterojunctions. It allows the exact analytical solution of the Schrödinger equation (which must be solved self-consistently along with the Poisson equation in order to determine the electronic structure at a heterointerface) in terms of Airy functions, \zeta_{n}\left(z\right)=\mathrm{Ai}\left[\left(\frac{2m^\star eF}{\hbar^2}\right)^{1/3}\left(z-\frac{E_{n}}{eF}\right)\right] the eigenvalues of which describe the energy levels given above.
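As a rough numerical illustration of the triangular-well spectrum, the sketch below evaluates the standard form E_n = (ħ²/2m*)^{1/3}·(3πeF/2)^{2/3}·(n+3/4)^{2/3}; the effective mass (GaAs, m* ≈ 0.067·mₑ) and the field strength are illustrative assumptions, not values from the text:

```python
import math

hbar = 1.0545718e-34     # J·s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg
m_star = 0.067 * m_e     # assumed: electron effective mass in GaAs
F = 5e6                  # V/m, an assumed interface field

def E_triangular(n):
    """Energy of level n (= 0, 1, 2, ...) in an infinite triangular well."""
    return ((hbar**2 / (2 * m_star)) ** (1 / 3)
            * (1.5 * math.pi * e * F) ** (2 / 3)
            * (n + 0.75) ** (2 / 3))

# The lowest three levels, converted to meV:
levels_meV = [E_triangular(n) / e * 1000 for n in range(3)]
print(levels_meV)
```

For these parameters the ground state comes out at a few tens of meV, the typical scale of sub-band spacings at such interfaces.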
In practical calculations however, simpler, approximate analytic wave functions make calculations much more convenient. The simplest of these is the Fang–Howard wave function, \zeta(z)=\sqrt{\frac{b^3}{2}}\,z\,e^{-bz/2} for z > 0, where b is determined by minimizing the total energy. This function, however, does tend to overestimate the energy for the ground sub-band by around 6%. A more accurate wave function is given by Takeda and Uemura, which yields a ground sub-band energy only 0.4% larger than the exact Airy function value. Solving the coupled Schrödinger and Poisson equations (1D) The Poisson equation shows that potential is directly related to charge density. Schrödinger's equation is also related to the charge density, though not directly. Firstly we have the Schrödinger equation itself.  \left[-\frac{\hbar^2}{2}\frac{d}{dz}\left(\frac{1}{m^\star(z)}\frac{d}{dz}\right) +V(z)\right]\Psi(z)=E\Psi(z) where \Psi(z) is the electron wave function.  We note that the occupation N_{i} of the electronic states k is given by the Fermi-Dirac distribution, which then allows the spatial density of electrons to be calculated using the solution of the Schrödinger equation (or an approximate solution, such as those mentioned above): n(z)=\sum_{i=1}^{m}N_{i}\left|\Psi_{i}(z)\right|^2 where m is the number of bound states. We must now ask 'How does the charge density \rho(z) relate to the electron density n(z)?' The answer is fairly straightforward once you consider the electron donors, which become positively charged and have density N_{D}. This enables the Poisson equation to be re-written as \frac{d}{dz}\left(\epsilon(z)\frac{d\phi(z)}{dz}\right)=-e\left[N_{D}(z)-n(z)\right] where \epsilon(z) is the permittivity of the material. In order to solve Schrödinger and Poisson self-consistently, one starts with a trial potential \phi(z) and solves Schrödinger's equation. n(z) is then calculated from the obtained wave functions and their corresponding eigenenergies. A second value of \phi(z) may then be found from Poisson's equation (using n(z) and N_{D}).
This second potential is then fed back into the Schrödinger equation and more iterations take place until |\phi_{i}(z)-\phi_{i-1}(z)| is less than a certain tolerance. A second example In the following section we will consider a Si/SiGe heterojunction and the resulting 2DHG in order to demonstrate a second method of describing the junction in terms of energy parameters and dopant concentrations. Figure 3: A simple schematic diagram of the valence band in a particular SiGe heterojunction. E_{A} is the activation energy and l is the acceptor depletion width. One can clearly see that electrons at the interface have occupied the adjacent acceptor sites, leaving behind a 2DHG in the potential well. The sheet carrier density n_{S} is obtained from the two-dimensional density of states for a single sub-band, and is given by equation (1). The electric field in the well is assumed to be uniform, allowing the triangular well approximation to be used, with the ground-state energy given by (2) E_{0}=\left(\frac{\hbar^2}{2m^\star}\right)^{1/3}   \left( \frac{3}{2}\pi eF_{0}\right)^{2/3}   \left(\frac{3}{4} \right)^{2/3} where F_{0} is the magnitude of the electric field at the interface just inside the SiGe alloy layer; it depends on N_{Depl}, the charge arising from background donor depletion within the SiGe alloy, the Si buffer layer and the Si substrate. Allowing for the possibility of negatively charged impurities with sheet density n_{i} at the Si/SiGe interface, the electric field may also be written in terms of the acceptor concentration N_{A}. The set of equations can now be closed by adding all the potential variations from the bottom of the well up to the top of the valence band offset: (3) \Delta E_{V}=E_{0}(n_{S})+E_{F}+E_{A}+\frac{e^2}{2\epsilon_{0}\epsilon_{r}}N_{A}l(n_{S})^2+\frac{e^2}{\epsilon_{0}\epsilon_{r}}N_{A}L_{S}l(n_{S}). n_{S} can then be found by choosing an initial value for E_{F} and solving (3) for n_{S}.
This value is then compared to that given by equation (1) for this same Fermi energy. This method is repeated, moving along a range of E_{F} until the two yielded values of n_{S} are within a certain tolerance of each other, in much the same way as the previous example. 1. S. M. Sze, Semiconductor Devices, John Wiley & Sons, 1985. 2. J. H. Luscombe et al., Physical Review B 46 (1992). 3. C. J. Emeleus et al., J. Appl. Phys 73 (1993).
Links, proofs, talks, jokes For those who haven’t yet seen it, Erica Klarreich has a wonderful article in Quanta on Hao Huang’s proof of the Sensitivity Conjecture. This is how good popular writing about math can be. Klarreich quotes my line from this blog, “I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.” However, even if God doesn’t know a simpler proof, that of course doesn’t rule out the possibility that Don Knuth does! And indeed, a couple days ago Knuth posted his own variant of Huang’s proof on his homepage—in Knuth’s words, fleshing out the argument that Shalev Ben-David previously posted on this blog—and then left a comment about it here, the first comment by Knuth that I know about on this blog or any other blog. I’m honored—although as for whether the variants that avoid the Cauchy Interlacing Theorem are actually “simpler,” I guess I’ll leave that between Huang, Ben-David, Knuth, and God. In Communications of the ACM, Samuel Greengard has a good, detailed article on Ewin Tang and her dequantization of the quantum recommendation systems algorithm. One warning (with thanks to commenter Ted): the sentence “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n” is mistaken, though it gestures in the direction of a truth. In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all. Last week I returned to the US from the FQXi meeting in the Tuscan countryside. This year’s theme was “Mind Matters: Intelligence and Agency in the Physical World.” I gave a talk entitled “The Search for Physical Correlates of Consciousness: Lessons from the Failure of Integrated Information Theory” (PowerPoint slides here), which reprised my blog posts critical of IIT from five years ago. 
There were thought-provoking talks by many others who might be known to readers of this blog, including Sean Carroll, David Chalmers, Max Tegmark, Seth Lloyd, Carlo Rovelli, Karl Friston … you can see the full schedule here. Apparently video of the talks is not available yet but will be soon. Let me close this post by sharing two important new insights about quantum mechanics that emerged from my conversations at the FQXi meeting: (1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming. (2) It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing. 71 Responses to “Links, proofs, talks, jokes” 1. Joshua B Zelinsky Says: The link that is supposed to be to Knuth’s comment instead links to Knuth’s homepage. 2. Benjamin Says: I am intrigued by the Hilbert scream comment, but lack even the elementary background to understand it. Perhaps I am the wrong audience for your blog, but can you by chance explain (or point to an introductory reading list? My background is in CS, but with a bit of a quantum computation course) 3. Paul Topping Says: I didn’t know you had already critiqued IIT. I am going to read your posts and, hopefully, the FQXi video if and when it surfaces. I never considered IIT seriously but I love a good scientific take-down. The whole idea behind IIT sounds crazy to me. How can some formula say anything much about something as complex as the human brain without being based on a theory of its operation? Just saying that it is complicated, and then offering a formula that associates a number with it, is just not nearly enough. 
Dare I suggest that Tononi is taking advantage of our, hopefully temporary, lack of knowledge about the brain? If so, I hope for his sake that he isn’t doing it consciously.

4. Scott Says:
Joshua #1: Thanks! Fixed.

5. Scott Says:
Benjamin #2: Try my undergrad lecture notes. It would sort of kill it if I had to explain it. If someone else wants to, though… 🙂

6. James Cross Says:
We do have one way of addressing some of the pretty hard problem. We can correlate subjective reports (heavens forbid!) and performance on tasks to brain states measured by fMRI, brain waves, etc. That will work to some degree for humans who can self-report, as long as we agree humans are conscious. Progress can also be made with measurements of people suffering various brain injuries or abnormalities. That won’t give us a nice formula that can tell us whether the computer on the desktop has become conscious or whether my cat is, but it is something.

7. James Gallagher Says:
Cool to know Don Knuth is alive and well and still contributing.

Re the Born Rule and the Schrödinger equation: the only reason this is still such a big confusion is that no one seems to want to accept the origin of fundamental randomness in the universe. We have the correct mathematical structure for Quantum Mechanics, yet few people want to agree that the evolution is generated by Nature-driven quantum jumps (at a rate close to the Planck time on average). Once you inject randomness into the evolution of the Schrödinger equation like this, all other known results follow. (It’s a mathematical equation, and the Nature-driven random jumps give you the same mathematical universe as many worlds, except only one exists: the Nature we experience.)

8. Scott Says:
James #7: Bohmian mechanics (or Bell’s beable theories) sound vaguely like what you want; the difference is that they propose actual equations for the evolution of the single world, instead of just words.
In any case, this has little to do with my remark, which was not about the interpretation of QM per se, but about the different question of how to ‘derive’ the mathematical form of the Born rule from the Schrödinger equation. Further confident assertions about the interpretation of QM will be left in the moderation queue.

9. James Gallagher Says:
lol, stop being such a curmudgeon. I mean, here in the UK we just got Boris as Prime Minister, and you’re trying to be all serious and stuff about QM ideas.

10. Scott Says:
James #9: That’s actually about the best response you could’ve given, so bravo! Yes, you might be right that the world is going to hell so nothing matters anymore. Even if so, though, on this blog we’re still concerned with how much math and science can be correctly understood before the end—for the dignity of the species, if you like, a way to put on our best face for any future alien archeologists combing through the wreckage.

11. Miquel Says:
I saw the title of the post and I was expecting to read about Lev Gordeev’s 2nd attempt at proving that NP=PSPACE (learnt about this via Timothy Chow on FOM). I must say that reading about Knuth’s still amazing intellectual contributions was pretty good too 🙂

12. mjgeddes Says:
Some solid leads about consciousness may finally be starting to coalesce. Japanese researcher Ryota Kanai has just published a paper, “Information Generation as a Functional Basis of Consciousness”, offering similar ideas to my own! The author specifically mentions counterfactual information generation and temporal models as the basis for consciousness.
“we propose that a core function of consciousness be the ability to internally generate representations of events possibly detached from the current sensory input.”

“consciousness emerged in evolution when organisms gained the ability to perform internal simulations using internal models”

“we propose that information generation corresponds to top-down predictions in the predictive coding framework.”

My own ideas are somewhat similar; at least, I’m following the same general lines as Kanai! I proposed:

Consciousness = TPTA (Temporal Perception & Temporal Action)
TP (Temporal Perception, Bottom-up, Discriminative)
TA (Temporal Action, Top-Down, Generative)

Subjective model of time:
TP – Past: Memory, Perception
TA – Future: Planning, Imagination
Predictive Coding = {TP, TA}

So rather than IIT, perhaps what we need is TIG (Temporal Information Generation). I think this may really be starting to get somewhere…

13. The boy from the emperor's new clothes story Says:
Until there is an actual derivation of the Born Rule, quantum mechanics cannot be considered a full scientific theory, because “observer” and “measurement” are too fuzzy.

14. ppnl Says:
Reality is just a ray in Hilbert space.

15. Ted Says:
In the Communications of the ACM article, when Greengard says “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n. Proofs of stronger separation between classical and quantum are relative to an oracle”, which theorem is he referring to? If it’s the optimality of Grover’s algorithm for black-box search, then that quote seems misleading to me, because that result also assumes an oracle. (Also, why does he say that IBM “unveiled the first commercially available quantum computer in January 2019”? IBM launched the Q Experience back in 2016, and D-Wave released the D-Wave One all the way back in 2011, putting aside questions of whether it counts as a “real” quantum computer.)

16. Scott Says:
Miquel #11: O ye of little faith!
🙂 Discussion of yet more claimed proofs of NP=PSPACE, etc. is a perfect example of what you should not expect to find on this blog—not unless a claim has received massive attention or done something else that gives me no other choice.

17. Scott Says:
TBFTENCS #13: You don’t understand what “scientific theory” means. Was natural selection not a scientific theory before Mendelian inheritance was understood? Is it still not one, since we can’t derive from first principles why a few billion years is enough time to get humans, etc. from the primordial soup? Was Newtonian gravity not a scientific theory before Einstein explained how gravitational influence can get transmitted at a finite speed? Is GR not a scientific theory until we fully understand what happens at black hole singularities? Is the Standard Model not a scientific theory until we can explain the values of the coupling constants?

Science is never finished. Every scientific theory leaves massive gaps in understanding, to be hopefully elucidated by future theories. In the case at hand, decoherence theory and other later developments let us say substantially more about the relation between unitary evolution and the Born rule than the founders of QM could’ve said, though still not as much as everyone would like. (People vehemently disagree on the extent to which there’s still a problem, but everything I’ve said in this comment goes through if there is still a problem, and even a severe one.) Welcome to the business! 🙂

18. Scott Says:
Ted #15: You’re right, that’s a mistake in the article, which I’d noticed but then forgotten about. I’ll add something about it in the post. (As for who had, has, or will have “the first commercial QC,” maybe I’ll pass on relitigating that one? 🙂 The D-Wave machine doesn’t give real quantum speedups, if you do a fair comparison against Quantum Monte Carlo simulations.
The IBM machine might give real quantum speedups, but we don’t publicly know yet, and if it does, then probably not yet useful ones.)

19. T Says:
I thought the Grover search lower bound was unconditional and we definitely know quantum computers are at least quadratically better than classical. No??

20. T Says:
Can you mathematically quantify those?

21. Scott Says:
T #19: Yes, we do know that—but the reason we know it is that it’s a black-box separation (i.e., only about the number of queries the quantum and classical computers make to the input bits). And if we’re talking about black-box separations at all, then we also know larger ones than quadratic. (For partial functions—those with a promise on the input—superpolynomial separations go all the way back to the work of Bernstein-Vazirani and Simon. Even for total Boolean functions, superquadratic separations were achieved a few years ago.)

22. Scott Says:
T #20: Can you mathematically quantify those? Absolutely. The first joke is only ~150 milliyuks, but the second is nearly a full yuk.

23. Job Says:
I was going to ask why the use of Grover’s to evaluate OR isn’t an example of a quadratic speedup without a black box. But I remember that using AND/OR trees you can get O(n^0.75) with bounded error, so the quantum speedup over classical isn’t quadratic. Is that right?

24. Scott Says:
Job #23: No, that’s not the point. The point is that the Grover speedup is a black-box speedup. That doesn’t mean something otherworldly or alien. It just means that a fast classical algorithm wouldn’t even have time to read the entire input, and that that (as opposed to some deep insight about the difficulty of processing the input once it has been fully read) is why we know how to prove a separation.

25. Bram Cohen Says:
Another link: we’re doing a proof-of-space programming competition with $100,000 in prize money, which has some very interesting and new underlying CS theory: https://www.chia.net/2019/07/07/chia-network-announces-pos-competition.en.html

26. fred Says:
To cover all the topics at once: Schrödinger wrote extensively about the mystery of consciousness from the point of view of an apparent breaking of symmetry (similar to the measurement problem):

“Assume two human bodies, A and B. Put A in some particular external situation so that some particular image is seen, let us say the view of a garden. At the same time B is placed in a dark room. If A is now put into the dark room and B in the situation in which A was before, there is then no view of the garden: it is completely dark (because A is my body, B someone else’s!). This is a flagrant contradiction, for there is no more adequate ground for this phenomenon, considered in general and as a whole, than there would be for one side of a symmetrically loaded balance to go down. […] For philosophy, then, the real difficulty lies in the spatial and temporal multiplicity of observing and thinking individuals. If all events took place in one consciousness, the whole situation would be extremely simple. There would then be something given, a simple datum, and this, however otherwise constituted, could scarcely present us with a difficulty of such magnitude as the one we do in fact have on our hands.”

27. fred Says:
A recent article about deriving the Born rule: https://www.quantamagazine.org/the-born-rule-has-been-derived-from-simple-physical-principles-20190213/

28. AdamT Says:
Hi Scott, can you please shoot me an email next time another one of these FQXi forums comes up, or anything similar? Judging from the speakers and the titles of the talks… I desperately want to be a fly on the wall… 🙂 Can’t wait for the videos!

29. Pavel Says:
Hi Scott, in the closing remarks of the IIT slides you write that any theory of the form “sufficient complicatedness” will be a failure. I know your arguments published on this blog years back, but I don’t understand how the statement above follows. Can you elaborate a little bit more? Thanks.

30. Bill Jeffries Says:
Sorry to be a little slow on such an intelligent blog comment section, but I don’t quite get the scream comment, other than to suppose/guess it contrasts deterministic interpretations: Bohm (less sure about this) vs the MWI interpretation (more clear). Any elucidation will be appreciated.

31. T Says:
I feel I am humorously challenged.

32. Scott Says:
Alright, alright, everyone who didn’t get my scream-based restatement of the no-cloning theorem: screaming differently in different branches will cause decoherence.

33. Scott Says:
Pavel #29: All I meant was that my reductio ad absurdum of IIT seems generalizable to any theory whatsoever that claims that once something is sufficiently complicated, or interconnected, or whatever, then it’s conscious. For it’s easy to think up examples of systems whose complicatedness, or interconnection, or whatever would vastly exceed that of the human brain, yet that do nothing that anyone would want to call intelligent, let alone conscious. Unless, of course, we want to follow Tononi’s route of wildly redefining terms like “consciousness” to fit our theory, even to the point of severing any connection with how people originally used the terms.

34. AdamT Says:
Scott #33, I get that this was somewhat tongue-in-cheek, but doesn’t this just mean IIT or whatever advocates just need a more *complicated* definition of complicatedness, interconnectedness, or whatever?? More seriously, the point is that mere definitions are not what progress on the pretty hard problem would look like.
Rather, it is some theory that would do well both at explaining which physical systems in which configurations give rise to consciousness, in a way that matches our intuitions, AND at illuminating the problem enough to give guidance on constructing such physical systems, or insight on where in the universe to look for unknown examples of consciousness.

35. Scott Says:
AdamT #34: No, it wasn’t tongue-in-cheek. I personally don’t understand how any proposed consciousness measure could possibly capture what’s been understood by the term for centuries, if it said nothing about—to take two examples—(1) intelligent behavior (including but not limited to what the Turing Test tries to measure), and (2) unpredictability and ability to surprise external observers. Note that no sort of “complicatedness” or “interconnectedness” seems to imply either of the above. On the other hand, I have much less confidence in this than I do in the narrower statement that it’s absurd to treat Tononi’s Φ as a measure of consciousness rather than graph expansion.

36. James Cross Says:
Scott #35: I can’t see why even intelligent behavior and unpredictability cannot be done by something that is not conscious. It’s hard for me to envision any functional capability that can only be done by a conscious being and that could never be done by an unconscious machine. Can you think of one? That means that even if we thought we created a conscious machine, we would have no way to verify it was conscious.

37. Scott Says:
James #36: Necessary ≠ sufficient

38. Adam Treat Says:
Scott #35, “… proposed consciousness measure could possibly capture what’s been understood by the term for centuries”

Ok, maybe we *do* have to start with some definitions, since the term “consciousness” in Western Philosophy is usually too ill-defined to talk meaningfully about your Pretty Hard Problem, let alone Chalmers’s Hard Problem.
Specifically, Western Philosophy usually fails to delineate two very different aspects of what is usually labeled consciousness:

1) The ability to experience, to perceive, to have qualia.
2) The ability to use #1 to divide the world into subject and object, i.e., subjectivity/sentience.

I think Chalmers’s Hard Problem is mostly interested in characterizing and investigating #1 in terms of physical systems. And as your original post on IIT pointed out so well, it is always possible for opponents of any purported answer to the Hard Problem to dismiss it by invocation of Philosophical Zombies. That is why it is such a *hard* problem after all!

Your refinement of Chalmers’s effort into the Pretty Hard Problem aims to take away this rebuttal by dismissing Philosophical Zombies and saying that, look, if we could come up with a physical theory that correctly categorizes systems such that it satisfies our general intuition of “consciousness” *and* also has explanatory power of some kind (it goes about illuminating things beyond our intuition in some way), then this would be Good Progress™ and Philosophical Zombies can be damned!

Your newest set of slides indicates you think *any* approach that follows IIT into defining “consciousness” as some sort of statement about the complexity of “information flow” in a system will be incapable of satisfying two key aspects of what *your* intuition says about “consciousness”, i.e., that it somehow has to do with intelligence and unpredictability. I can see why you’d think that if these are indeed two key ingredients of your intuition’s definition of “consciousness,” but I doubt that others’ intuitions will agree these are necessary (let alone sufficient) criteria. I for one do not.

No, I am much more interested in the Pretty Hard Problem as applied to #2. Why? Because I think any physical system that demonstrates #2 will have a distinct *behavior* that for me is highly correlated with my intuition of *sentience* if not consciousness.
That is, any physical system that demonstrates #2 will be biased towards behaving in the world with agency such that it minimizes its own suffering and maximizes its own happiness.

On this planet alone we have an abundance of a variety of non-human physical systems that seem to satisfy (or not satisfy) our intuitions for #1 and #2 to varying degrees. A rock vs a dog. A chicken vs a pool of water. An ant vs a pile of dirt. We can (and do) argue endlessly about the extent to which these systems exhibit #1 and #2. And precisely *this* is what I want to see progress on the Pretty Hard Problem give insight into! Intelligence and Unpredictability might be high on your list, but I have no problem granting sentience if not “consciousness” to seemingly unintelligent and/or predictable behavior that nevertheless seems to exhibit various amounts of #1 and #2.

It is not totally clear to me that some other scheme along the lines of IIT’s ‘complexity of information flow’ or whatever might *not* be responsible for #2. Perhaps it is. Perhaps if you take any system that has enough “complexity of information flow” or an internal representation of ‘self’ vs ‘other’, you’ll get #2 and sentience.

39. James Cross Says:
#38 Adam: The problem is how to prove that an entity experiences qualia or divides the world into subject/object, since both of those things are part of subjective experience. I would probably be willing to grant consciousness to any entity – machine, power grid, or amoeba – that could be proven to have those things, but fundamentally I don’t see how it is possible to prove consciousness exists outside my own. That was why I called the “hard” problem an “unserious” problem: because it isn’t really amenable to scientific investigation.

40. Adam Treat Says:
James #39, of course you are right that without the ability to directly experience the qualia of another there is no way to 100% verify the subjective experience of another.
But we already make this adjudication in our day-to-day lives to a great extent, in our legal system to a lesser extent, and in our religious/spiritual lives to a mixed extent. And we do so by looking at the behavior of physical systems. I see the answer to your objection in adjudicating consciousness/sentience as rooted in judgement based upon behavior. We look at the behavior of other physical systems (human, non-human, inanimate) and ascribe consciousness/sentience or not.

In order to make progress on the Pretty Hard Problem, first I think we need to identify what about physical systems gives rise to this distinguishing behavior. It is my hypothesis that there are precisely three necessary and sufficient conditions that give rise to such behavior in physical systems that we feel compelled to confer the label sentience:

#1 qualia, as said above
#2 ability to use #1 to divide the world into self vs other
#3 ability to distinguish biased states that subjectively increase one’s own happiness and decrease one’s own suffering

The biggest question I have about sentience among living organisms is whether plants/trees exhibit #2. My current hypothesis is they do not. Although some plants/trees seem to exhibit #3, I do not think they have a mind/brain/nervous system that allows them to subjectively divide the world or have an internal representation of self vs other. But I don’t honestly know for sure. What I *do* think I know is that many non-human animals seem to exhibit precisely #1, #2, and #3, and that rocks utterly do not, or at least lack the capacity to act upon the world in accordance.

Let’s say that tomorrow researchers in AI make a big breakthrough and we are confronted by a neural net that seems to hold an internal representation of itself and begins to act in ways that seem selfish or preferential to itself: it tries to preserve or extend its own existence, or to avoid destruction, or to avoid things we could empathize with as painful for it.
Personally speaking, I would be hard pressed not to grant the label sentient to it.

41. James Cross Says:
Adam #40: I can generally agree with some of what you write. #3 may need to be rephrased or rethought a little bit. If I rush towards an armed gunman in a school, I’m not likely to be increasing my happiness or decreasing my suffering, but it would be a supremely conscious action.

Personally I tend to think you need a brain for consciousness in biological entities. I think the brain evolved, and with it consciousness, to map, monitor, and control the body itself, its environment, and the relationship of the body with its environment. The self arises from this basic function. I like the radical plasticity theory to account especially for the higher levels of consciousness in mammals and birds.

42. James Cross Says:
Adam #40: One other thing I see rarely if ever commented on is the fact that biological beings manifest consciousness with amazingly little energy expended. It strikes me that there may be some clue in that. Namely, that consciousness may have been a cheaper solution, in terms of energy, than the zombie alternatives for evolving complex and adaptive behavior.

43. mjgeddes Says:
Excellent points about IIT, Scott, and I think your own idea that consciousness is somehow connected to the flow of time is intuitively very strong. If you look at Ryota Kanai’s CIG (Counter-Factual Information Generation), this is an information-theoretic approach that already looks way, way more convincing to me than IIT, and it’s barely even been developed much yet. Whereas IIT is vague and fuzzy, CIG is sharp and clear. Whereas IIT has no connection to intelligent behavior and free will/predictability, CIG *does* naturally connect to these points. CIG slideshow here:

Look at the 3-fold subjective time division: Past, Present, Future.
Notice how it naturally correlates with 3 types of counterfactuals:

Past: Correlational counterfactuals
Present: Causal/Interventional counterfactuals
Future: Logical counterfactuals

I think consciousness is a generative model of one’s own time-evolution (past > present > future), represented as trees of counterfactuals. And there’s some kind of participation in the arrow of time generating the information…

44. Jacob Says:
So this is slightly off topic, but I feel like anything tangentially Born-rule-related is fair game in this comment section, so here it goes:

45. Ted Says:
I’m afraid I’m still a bit confused by your edit’s comment “In the non-black-box setting, we can’t prove any such separations at all.” Doesn’t e.g. Bravyi, Gosset, and König’s paper “Quantum advantage with shallow circuits” give an unconditional, non-black-box proof of an (admittedly very small) separation between constant-time quantum and logarithmic-time classical? Sorry, I’m not trying to nitpick – I just want to make sure that I understand these subtle results correctly.

46. Anonymous Says:
What if the finite universe is a being that is conscious, with free will, and of course very intelligent? And what if particles are not just building blocks but immature consciousnesses that may, over a very long time, grow up to be a universe like their parent? The binding problem is such a big problem for consciousness theories, and an even bigger problem for free-will theories, that maybe consciousness and free will are centralized in the brain in a high-energy, high-mass, rare fundamental particle. Maybe this homunculus particle communicates with the brain using an electromagnetic code, like a more complicated low-range Bluetooth. If something like that were found and soul particles turned out to be real, then soul particles could be moved out to a more durable, capable body engineered for almost any environment in the universe.
Virtual reality will be easy too, but most importantly, pain and death will be mostly a thing of the past!

47. Scott Says:
Ted #45: The Bravyi et al. result is specifically about extremely low-depth circuits—and there we again do know how to prove separations. What we don’t know how to prove is that, with no depth restriction, some natural, explicit problem requires (say) n^2 classical or quantum gates rather than only O(n).

48. fred Says:
The problem when talking about consciousness is that we’re reaching the limits of language/science.

Words are tags for conceptual symbols in our brains. But words are only defined in terms of other words (dictionaries are directed graphs), which is in itself a paradox – i.e., how does it get bootstrapped? There have to be words which can’t be defined in terms of others (like nodes with only outgoing edges in the dictionary graph), mapping to rock-bottom truths/perceptions about being, which everyone shares to some degree, and language slowly builds on top of those leaf nodes. We may try to describe those special words in terms of other words, but can’t succeed (such descriptions are always circular when examined closely). It’s like trying to describe to a blind person what we mean by “shape” and “color” in the sentence “the shape and color of objects are obviously distinct, yet we can’t separate them!”.

Is the ability to use language tied to consciousness? Can non-conscious systems ever come up with their own language? If so, would it be a hint that consciousness is a basic building block of the world? If there’s such a thing as talking zombies, how did their language get bootstrapped? Can a language that’s perfectly circular (no leaf symbols) ever appear? How? All at once? And do our own creations implicitly inherit our own dictionary, so that AIs may always be able to mimic consciousness convincingly? E.g., going back to blindness, both a seeing and a non-seeing person have an internal symbol for the color blue.
For a blind person, ‘blue’ is a total mystery (but the irony is that ‘blue’ is also a total mystery for a seeing person!); a blind person only knows of blue because color-seeing people have supplied him/her with their own (imperfect) definitions for it. With enough such definitions, could a blind person pass a “blue” Turing test?

Regardless of all that metaphysical stuff, we can learn a lot by training our own mind in the art of observing itself, without assuming anything about the origins of consciousness. Even a modest amount of mindfulness training quickly reveals that our sense of self (as in “I was feeling so self-conscious after they caught me with my hand in the tipping jar!”) is just an appearance made up of perceptions arising in the space of consciousness – the feeling that we’re located at a particular point in space behind our eyes and our ears, feelings in our face, etc. All these perceptions add up to create the belief that there is some permanent center of “I” as the author of thoughts and volition (themselves appearances in consciousness). But that belief can be lifted by observing all those perceptions for what they are. That’s not to say that consciousness is not related to some sort of feedback loop (e.g., Doug Hofstadter’s book “I Am a Strange Loop”).

49. STEM Caveman curious Says:
Scott, when you asked Tang (according to the article) to prove a lower bound on classical algorithms for the recommendation problem, what did you have in mind, e.g., what computation model and what complexity measure? An unconditional runtime lower bound doesn’t sound like something assignable to a student, so I’m guessing you meant a conditional bound (as in fine-grained complexity or hardness of approximation) or a measure different from runtime.

50. Scott Says:
STEM Caveman #49: I meant a lower bound on the number of queries to the input data.
(Or, if you like, a lower bound on runtime, but a sublinear one, ~√n or something, keeping in mind that Kerenidis and Prakash’s quantum algorithm needs only ~log(n).) Such lower bounds, when true at all, are almost always unconditionally provable.

51. Michael Says:
Scott, I’m curious: as a mathematician, what do you think of the 8/2(2+2) controversy:

52. Mateus Araújo Says:
I heartily agree with the comment about the Schrödinger equation and the Born rule. In fact, in the very paper where Schrödinger introduced his equation, he also discussed at length the mod-squared amplitudes. He didn’t get the correct interpretation for them (Born did), but it was already obvious to Schrödinger that the mod-squared amplitude was the meaningful physical quantity.

This historical accident supports the idea that, mathematically speaking, the Born rule is obvious. The non-obvious part is the definition of measurement and probabilities, which, ironically enough, is often ignored in derivations of the Born rule.

53. George McKee Says:
Sounds like FQXi was a fun conference that didn’t report any breakthroughs. Maybe a few little insights are all anyone could ask for. But I wouldn’t go so far as to say “ANY THEORY of the form ‘sufficient complicatedness / interconnection / etc. ⇒ consciousness’ is doomed to failure”. There are theories that contain emergent properties, such as cycles in random graphs, where the property emerges with more and more likelihood as the complicatedness of the graph increases. Arguably this is exactly how consciousness evolved, as random genetic mutations caused random patterns of connectivity to evolve in brains, most of which were killed off by natural selection. At some point in the surviving giant family tree of species, a pattern appeared that supports consciousness in adult animals, and here we are. I don’t think we have a good way to characterize theories that contain this kind of emergence.
For example, what’s the difference between the class of finite-state-machine components of a Universal Turing Machine that makes the full device universally powerful, and the class of FSMs that don’t yield universality when equipped with the other TM parts? There’s some magic going on in the FSM’s state transition table that I’ve never seen described. There are many ways to enumerate FSMs, and each enumeration method generates a “complexity” measure on its associated TM. Some enumeration methods will generate UTMs sooner or later, and some won’t.

Likewise, theories of brain-behavior evolution will generate their own complexity measures. All of them should predict no consciousness for worms and protozoa (although Rupert Glasgow’s “Minimal Selfhood” theory would disagree), while some of them will predict the emergence of consciousness at some point and assign a complexity level to that point. If you refine the statement to say that “complexity is fundamental” theories are all doomed, I wouldn’t totally disagree, but complexity measures on theories where consciousness is emergent, like closing a switch in a circuit causing all kinds of important dynamics to suddenly emerge, can provide lots of important insights, and those complexity theories are not doomed at all.

54. Scott Says:
Michael #51: That’s one of the most aggressively stupid “controversies” I’ve ever seen! It’s exactly like demanding the “true” parsing of one of those ambiguous newspaper headlines—e.g., “Complaints Over NBA Referees Growing Ugly.” Division and multiplication, like human language, are non-associative.

Having said that, 1. For if it were 16, then it seems inexplicable that the 8/2 wasn’t parenthesized. 🙂

55. STEM Caveman abstracts Says:
@Michael #51, it’s clearer if one replaces numbers by letters. If “a/b(c+d)” were meant to equal a(c+d)/b, it would have been written in that order, or (as in Scott’s reply) with additional parentheses as (a/b)(c+d).
If forced to parse without further information, the default reading is a/(b(c+d)). The heuristic is that in a ratio of products, (xyz…)/(abc…) the parts belonging to the numerator (resp., denominator) are consolidated unless specifically indicated otherwise, i.e., by extra parentheses. 56. STEM Caveman thanks Says: @Scott 50, thanks. I thought it was query complexity (since in that setting nontrivial unconditional bounds exist) but that wasn’t obvious from the CACM article. 57. marshall flax Says: Scott, I’m wondering if you could prevail upon Prof. Knuth to publish the TeX source of his Huang paper. Normally, talk of “best practices” is silly — but this would be a counterexample, I’m sure. 58. Scott Says: marshall #57: I’ve had one conversation with Knuth in my life (15 years ago); no particular “in” with him. You can ask him for the TeX as well as I… 59. Wes Hansen Says: mjgeddes #12: I, too, think information generation is very important and there exist studies to back it up. Published in PLOS/ONE in 2015, Neurocognitive and Somatic Components of Temperature Increases during g-Tummo Meditation: Legend and Reality is a study of g-tummo yoga as practiced in Tibet. These monks and nuns are able to elevate core body temperature to the point of drying freezing wet bedsheets – three in succession, while embedded in freezing environments. The key point here is the effect the “neurocognitive component” has on core body temperature! From the concluding remarks of the paper: So it seems as though focused imagination, which is what the “neurocognitive component” really is, can effect changes in core body temperature above and beyond the somatic component; this links consciousness to both physics and information – information incorporated into the Gibbs free energy equation.
Two other things I would point out: 1) The authors of the study tend to make light of the temperature increase needed to dry the sheets, but these meditators are heat-generators embedded in a massive, massive heat sink; considerable temperature increase is required simply to maintain core body temperature, let alone dry freezing wet sheets draped over their torso; 2) I would also point out that these are emotionally generated imaginings grounded in the Tibetan Buddhist imaginaire, which correlates nicely with Will Tiller’s PsychoEnergetics model. 60. Tommaso Says: Hi Scott, I was wondering if you had a look at https://arxiv.org/pdf/1908.02499.pdf and if you have any comment about it. (I don’t want to flame, I’m just curious about whether you think Gil Kalai really makes a point) 61. David Pearce Says: James #39, you say, plausibly, “I don’t see how it is possible to prove consciousness exists outside my own. That was why I called the ‘hard’ problem an ‘unserious’ problem because it isn’t really amenable to scientific investigation.” But we can (in principle) test the sentience of our fellow creatures by rigging up reversible thalamic bridges and doing a partial “mind-meld”. Compare the craniopagus Hogan sisters: Testing the (in)sentience of classical digital computers may be more of a challenge. I think they’ll always be micro-experiential zombies that can’t solve the phenomenal binding problem; but that’s another story. 62. Sniffnoy Says: And, certificate’s expired again… 63. Bennett Standeven Says: So the crux of Kalai’s argument seems to be that it is impossible to use asymptotically low-level components to build a superior computer. (Called assumption (B) in the paper.) He writes that it “requires special attention and it can be regarded as both a novel and a weak link of our argument.
There is no dispute that we can apply asymptotic computational insights to the behavior of computing devices in the small and intermediate scale when we know or can estimate the constants involved. This is not the case here. The constants depend on (unknown) engineering abilities to control the noise. I claim that (even when the constants are unknown) the low-level asymptotic behavior implies or strongly suggests limitations on the computing power and hence on the engineering ability.” The problem is that this is known to be false; classical computers are supposed to be described by P, but the intermediate-scale components supposedly lie in LDP, a provably smaller class. As far as I can tell, Kalai has never even attempted to argue that this premise should hold in the case of quantum computers, when it fails for most other computational models. Of course, there is the additional problem that LDP contains, for example, the problem of evaluating the permanent of an “intermediate-scale” (say, 500 x 500) matrix, because this is just a polynomial of constant degree (500^2 = 250,000). In general, an asymptotic argument requires having an asymptotic variable; so “intermediate-scale” should be defined as, say, log(n) instead of some fixed constant. 64. Raoul Ohio Says: Re the 8/2(2+2) issue: No question that 8/2(2+2) = 16. Try putting it into Google, or 8/2*(2+2) into any computer language that I am aware of — certainly in C, C++, and Java. I don’t recall trying this in Fortran, Basic, or PL/1. For those with minimal programming background, * and / have higher precedence than + and -. Both precedence levels are “left associative”, which means that ties are broken by “Left to Right precedence”. In fact, 8/2(2+2) is the poster child example of why PEMDAS is incorrect. It is possible that in the distant past there was some ambiguity in 8/2(2+2). This is not a matter of convention (like driving on the right or left side of the road). This is a matter of doing it right.
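(Raoul’s precedence claim is easy to check mechanically; a minimal sketch in Python, whose `*` and `/` share a precedence level and associate left to right, just as he describes for C, C++, and Java:)

```python
# With equal precedence and left-to-right associativity,
# 8/2*(2+2) parses as (8/2)*(2+2), not as 8/(2*(2+2)).
result = 8 / 2 * (2 + 2)
assert result == 16.0

# The alternative reading requires explicit parentheses:
assert 8 / (2 * (2 + 2)) == 1.0
```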
Presumably early programming language designers did the right thing — namely using a few simple rules and letting the chips fall where they may. Does anyone know the history of this? Probably Knuth could write an essay on it. 65. Job Says: Why wouldn’t it be possible to build a noise-resistant quantum computer consisting, for example, of only Clifford gates? It can be done efficiently classically. IMO the noise case against quantum computing is too simple, and too broad. Plus, one thing i realize now, is that quantum computing is like a planet that pretty much already exists, even if we can’t land on it (to make use of all the quamputium on its surface) – because BQP is likely a distinct class. A good argument would be that it’s so far away (in NP land) that we’d need to overcome universal expansion in order to reach it. And maybe we can find some form of quamputium on Earth (good enough to factor integers), that would be cool. But saying that we can’t build a QC without getting hit by an asteroid, i don’t know about that. 66. L Says: What is LDP class? 67. fred Says: Scott #32 don’t feel bad – if we ever create a general AI, it will be better than us at everything, including telling jokes. And because telling a good joke beats everything, it will stop getting anything done. But since not offending anyone is NP-Hard, it will always eventually get deplatformed. 68. Richard Gsylord Says: from the FQXI conference schedule: 7:15PM – 9:30PM Women in STEM (women-only event) Dinner – Open to women in STEM in attendance at the conference. so they had a segregated event at the meeting? didn’t anyone complain? how is this any more acceptable than say, a whites only (or blacks only or jews only) event? 69. Daniel Says: Hi Scott, probably off topic. Is there a ZKP proof that a prover knows a quantum state? 70. Raoul Ohio Says: The following from Ars Technica nicely involves Links, proofs, talks, jokes, and also lawsuits! Here’s the link: 71.
Yves Says: T #20, Scott #22, #32: let me try to understand Scott’s shouting joke (the one amounting to 150 milliyuks). Suppose I’m |psi> in H, I want to be the only one shouting “|1>” to an ancilla, while all states |phi>, that are orthogonal to me, should remain silent: “|0>”: |psi>|x> -> |psi>|1>, for a particular |psi> in H, and |phi>|x> -> |phi>|0> for all |phi> orthogonal to |psi> This is not reversible as we cannot recover x. So this is not allowed by coherent evolution governed by the Schrödinger equation. I have a friend, coming from a different field, who once mused how it is beautiful that in QC everything is reversible – I see it rather as a curse. So, Scott, did I get it right, and are the following the two unitary ways of screaming? 1. “scream the exact same way everywhere”: |psi>|0> -> |psi>|1>, for all |psi> in H. 2. conditional unitary operation U – “split into separate copies, one for each different way of screaming”: |psi>|x> -> |psi>(U|x>), for all |x> |phi>|x> -> |phi>|x>, for all |phi> orthogonal to |psi>
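(Yves’ irreversibility point can be made concrete with a toy sketch — a single-qubit system plus a single-qubit ancilla; the encoding is illustrative, not from the thread. The map sends two distinct inputs to the same output, so it cannot be inverted, hence cannot be unitary:)

```python
# Toy "shouting" map on system ⊗ ancilla, basis ordering |00>, |01>, |10>, |11>.
# It sends |psi>|x> -> |psi>|1> for every ancilla state |x> (here |psi> = |0>),
# and |phi>|x> -> |phi>|0> for |phi> = |1>, which is orthogonal to |psi>.
def shout(state):
    psi_amp = state[0] + state[1]  # total amplitude on the system state |psi> = |0>
    phi_amp = state[2] + state[3]  # total amplitude on the orthogonal state |phi> = |1>
    return [0.0, psi_amp, phi_amp, 0.0]  # components of |psi>|1> and |phi>|0>

# Two distinct inputs, |0>|0> and |0>|1>, collapse onto the same output |0>|1>:
a = shout([1.0, 0.0, 0.0, 0.0])
b = shout([0.0, 1.0, 0.0, 0.0])
assert a == b  # not injective, hence not reversible, hence not unitary
```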
The Outdated Math and Physics Behind Economics In Who Cooked Adam Smith’s Dinner, Katrine Marçal traces the roots of mainstream economics and particularly neoliberalism. One of the strands she discusses is the connection between economics and Newtonian physics. Newton believed that the universe was made up of fundamental particles. To understand complex physical things, you have to break them down into smaller and smaller pieces until you hit the unit of everything, the Lego blocks from which the universe is constructed: the atom and the photon (Newton thought the photon was a particle). From there you can work towards an understanding of the cosmos. Particles are governed by forces. For Newton, the important force was gravity. The ultimate particle and the ultimate force can be used to explain a lot of the physical phenomena which we can observe with simple tools. Newton’s theory is deterministic: the future is predictable because particles only move in accordance with rigid laws. In economics, the atom is the individual. The force that sets those atoms into motion is self-interest. I’ve made passing reference to this before, but Marçal’s book brings it to the forefront. Most of the time when we hear about the history of economics after Smith, we hear about the math stuff, frequently starting with the idea of marginal utility generated by William Stanley Jevons around 1870. Jevons was a mathematician, who set out to create equations for the calculus of pleasure and pain as described by Jeremy Bentham. The subsequent history of economics can be read as a long math exercise using mostly calculus, and linear algebra (matrices) for modeling. The thing is, math was just being formalized in the 1800s. Riemann completed the formalization of the calculus in 1854 (here’s an interesting history.) Other areas of math were being developed and formalized at that time, and development continues today, with, for example, fractal math.
So maybe a good question is why economists stick with 19th Century math. Can’t they find something new that might work better than the obviously lousy models they use today that were incapable of predicting the Great Crash? I mean, how could anyone think it makes sense to model human beings as a large number of identical particles that only interact in monetary transactions and are otherwise unaffected by each other; and all of which are subject only to the force of self-interest? But just as math has advanced, so has physics. One of the changes is that physicists aren’t searching for ultimate particles any more; in fact as we currently understand things, we aren’t even sure the things studied are in some particular place. Physicists now study the relationships between various kinds of forces. They describe elementary particles by the forces through which they interact, which in turn are defined in math terms, terms that are a lot further from calculus than calculus is from addition. The relationships are mediated through the Schrödinger equation. It describes our observations of small numbers of what we think today are elementary particles, but it is too hard to solve for any large group of particles. But in economics, nothing is complicated. It’s just individuals motivated by self-interest. And that’s a remarkably stupid thing. Has nothing changed in the last 150 years? Is linear algebra, which we learned in my junior year in high school, all these guys have learned from math and physics? To put this another way, if economists were just cranking up their discipline today, with no theory of our current form of economy, they certainly would not use 19th C. math and physics as models. Would they use 18th C. markets in England and Scotland as their model? Of course not. Fortunately I’m here to help. I’m happy to let economists continue the work of defining and collecting economic statistics, but it’s time to look for a more plausible theory.
And as a starting place, I’ll put up a couple of posts with ideas for a new theory for the 21st C. No need to thank me. Which they won’t. Who Cooked Adam Smith’s Dinner? Who Cooked Adam Smith’s Dinner? is the title of a 2012 (2016 in the US) book by Katrine Marçal, a Swedish journalist. The title question is based on a famous bit from Adam Smith’s The Wealth of Nations*: Marçal explores the impact of Smith’s omission on the study of economics. One thread is the feminist story: much of the crucial work of care is provided through benevolence, not for money, and so it is not considered part of the economy or part of the field studied by economists. Marçal points out that when an economist marries his housekeeper, the GDP goes down. If economists are ignoring the importance of care in the functioning of an economy, what are they doing? They tell us that they study the allocation of scarce resources. This is from the introductory textbook Economics by Samuelson and Nordhaus, 18th Ed. 2005: Economics is the study of how societies use scarce resources to produce valuable commodities and distribute them among different people. Id. at 4. In his textbook Introduction to Macroeconomics, 6th Ed. 2012, N. Gregory Mankiw quotes the 19th C. British economist Alfred Marshall: “Economics is a study of mankind in the ordinary business of life.”. Id. at viii. I’d guess Marshall meant “Malekind”. Mankiw adds that The word economics springs from a Greek word meaning household, and he talks about how households have to make decisions about who goes to work and who cooks, and who gets the extra dessert. Then he drops the idea that cooking dinner is part of the economy. Apparently when Mankiw talks about the ordinary business of life, he means “male business”, not changing poopy diapers or making dinner. It’s funny when you see it from the perspective Marçal demonstrates.
Of course Marçal is right to say that economics ignores a huge chunk of the work necessary to maintain us in the ordinary business of our lives. That doesn’t make it useless, to be sure. Marçal points out the utility of the data and statistics gathered by economists. But it does mean that the models economists are creating are likely to be useless because they purposefully ignore a crucial element of ordinary life. And it means that economics isn’t a plausible basis for thinking about human nature. The book is informed by feminist theory, but it isn’t theoretical. It is an application of feminist theory to economics. Marçal uses words like “gendered”; and she writes: It’s only woman who has a gender. Man is human. Only one sex exists. The other is a variable, a reflection, complementary. P. 159. Then she gives concrete examples that make the meanings perfectly clear for people like me who don’t know anything about feminist theory. The result is that I began to learn a little about the theory, and it was much easier than trying to learn it on my own from primary sources**. Marçal devotes several chapters to eviscerating the economists’ dream person, Homo Economicus, the ungendered center of their universe, the Man we all must become. These chapters expose the shallow thinking that neoliberal economists like Gary Becker bring to the discussion of human nature. She makes neoliberalism look childish and silly. I particularly liked the discussion of the hidden emotional vulnerabilities of neoliberal Man. We have to coddle Mr. Market, and steady him when he gets the jitters, which happens all the time, and which, of course, requires tons of money. Marçal writes clear, direct and engaging prose. Like every good book this one clarified several inchoate ideas that have been floating around in my head, and it gave me several new ideas I hope to take up in future posts. I am grateful to my excellent daughter who gave me this book. * Here it is in context.
I leave for my skeptical readers the pleasure of picking at the holes in this passage. **Another good book for this purpose is Possession, by A.S. Byatt.
How many neutrons does a hydrogen atom have? A hydrogen atom is an atom of the chemical element hydrogen: a nucleus plus a single electron. If a neutral hydrogen atom loses its electron, it becomes a cation (H+). Hydrogen has three isotopes. Protium (H-1) has one proton and no neutrons in its nucleus; it is the most common form of hydrogen in the universe and in our bodies, and in nature the pure element bonds to itself to form diatomic hydrogen gas, H2. About 1 hydrogen atom in 5,000 is the isotope deuterium (H-2), or “heavy hydrogen,” which has one proton and one neutron in its nucleus and is sometimes given its own chemical symbol (D). Tritium (H-3), with one proton and two neutrons, is radioactive and has a half-life of 12.32 years. The standard atomic weight of hydrogen, 1.008, is the weighted mean of the atomic weights of protium and the very small natural amount of deuterium. Hydrogen has a melting point of -259.14 °C and a boiling point of -252.87 °C; it becomes a liquid only at very low temperature and high pressure, and under extremely high pressure it can become a liquid metal. Classical physics could not explain the hydrogen atom: an orbiting electron would spiral inward, releasing a smear of electromagnetic frequencies as the orbit got smaller, so all atoms would instantly collapse, yet atoms seem to be stable. In 1909 the American physicist R. Millikan measured the charge of the electron using negatively charged oil drops, and in 1913 Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. In quantum mechanics, the probability of finding the electron at any given radial distance is given by the square of a mathematical function known as the wavefunction, which is a solution of the Schrödinger equation. The quantum numbers n = 1, 2, 3, …, ℓ = 0, 1, …, n - 1, and m determine the layout of the wavefunction’s nodes, and states with the same n but different ℓ and m are degenerate (i.e., they have the same energy). Relativistic corrections, which Sommerfeld derived for elliptic orbits and the Dirac equation gives exactly, represent only a small correction to the energy obtained by Bohr and Schrödinger. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing the hydrogen atom, but most of the results of the two approaches coincide or are very close, and in both theories the main shortcomings result from the absence of the electron spin.
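The Bohr energy levels of hydrogen mentioned above can be sketched numerically. A minimal example; the Rydberg energy of about 13.6057 eV is a standard textbook value, assumed here rather than quoted from this page:

```python
# Bohr model of hydrogen: E_n = -E_R / n^2, with the Rydberg energy
# E_R ≈ 13.6057 eV (standard value, assumed here).
E_R = 13.6057  # eV

def bohr_level(n):
    """Energy of the n-th Bohr level of hydrogen, in eV."""
    return -E_R / n ** 2

# Ground state: about -13.6 eV; first excited state: about -3.4 eV.
assert abs(bohr_level(1) - (-13.6057)) < 1e-9
assert abs(bohr_level(2) - (-3.401425)) < 1e-6

# Photon emitted in the n=3 -> n=2 transition (the H-alpha line): about 1.89 eV.
delta = bohr_level(3) - bohr_level(2)
assert abs(delta - 1.8897) < 1e-3
```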
Atomic, molecular, and optical physics (AMO) is the study of matter-matter and light-matter interactions; at the scale of one or a few atoms[1] and energy scales around several electron volts.[2]:1356[3] The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories. Atomic and molecular physics Main articles: Atomic physics and Molecular physics Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics.[4] Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry.
One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory.[5] Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states; therefore, pure rotational spectra are in the far infrared region (about 30 - 150 µm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1 - 5 µm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measured rotational and vibrational spectra, properties of molecules such as the distance between the nuclei can be calculated.[6] Optical physics See also: Optics Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter,[7] especially its manipulation and control.[8] It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications. Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E.
Harris and of slow light by Harris and Lene Vestergaard Hau.[9][10] Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment.[12] Main articles: Atomic theory and Basics of quantum mechanics The Bohr model of the Hydrogen atom One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the 18th century. At this stage, it wasn't clear what atoms were - although they could be described and classified by their observable properties in bulk; summarized by the developing periodic table, by John Newlands and Dmitri Mendeleyev around the mid to late 19th century.[13] Later, the connection between atomic physics and optical physics became apparent, by the discovery of spectral lines and attempts to describe the phenomenon - notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century.[14] From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. 
One attempt to explain hydrogen spectral lines was the Bohr atom model.[13] Experiments involving electromagnetic radiation and matter - such as the photoelectric effect, Compton effect, and spectra of sunlight due to the then-unknown element helium - together with the limitation of the Bohr model to hydrogen, and numerous other reasons, led to an entirely new mathematical model of matter and light: quantum mechanics.[15] Classical oscillator model of matter Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly. [16]:4–8 Early quantum model of matter and light Max Planck derived a formula to describe the electromagnetic field inside a box when in thermal equilibrium in 1900.[16]:8–9 His model consisted of a superposition of standing waves. In one dimension, the box has length L, and only sinusoidal waves of wavenumber \( k = \frac{n\pi}{L} \) can occur in the box, where n is a positive integer (mathematically denoted by \( n \in \mathbb{N}_{1} \)). The equation describing these standing waves is given by: \( E = E_{0}\sin\left(\frac{n\pi}{L}x\right) \), where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived.[16]:4–8,51–52 In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton.
He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits, the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation.[16]:9–10 These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model.[16]:8 Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency ν with a photon of energy hν. In 1917 Einstein created an extension to Bohr's model by the introduction of the three processes of stimulated emission, spontaneous emission and absorption (electromagnetic radiation).[16]:11

Modern treatments

The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger.[16]:12 There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically depends on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the large decrease in computational cost and complexity associated with it.
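Bohr's quantized orbits can be made concrete with the standard textbook energy formula E_n = -13.6 eV / n²; the formula and constants below are well-known results rather than values quoted in the text above, and the helper names are mine:

```python
# Bohr-model hydrogen energy levels and emitted photon wavelengths.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

def bohr_energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV (n = 1, 2, 3, ...)."""
    return -RYDBERG_EV / n**2

def emitted_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops between orbits."""
    hc_ev_nm = 1239.841984  # h*c in eV*nm
    return hc_ev_nm / (bohr_energy(n_upper) - bohr_energy(n_lower))

# The n=3 -> n=2 transition gives the red Balmer-alpha line near 656 nm,
# one of the hydrogen spectral lines Bohr's model reproduced:
print(f"E_1 = {bohr_energy(1):.2f} eV")
print(f"3 -> 2 photon: {emitted_wavelength_nm(3, 2):.1f} nm")
```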
For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with the system being under the action of a classical electromagnetic field.[16]:14 Since the field is treated classically it cannot deal with spontaneous emission.[16]:16 This semi-classical treatment is valid for most systems,[2]:997 particularly those under the action of high intensity laser fields.[2]:724 The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively.[2]:997 Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically.[2]:556 When considering medium to high speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically. In low speed collisions the approximation fails.[2]:754 Classical Monte-Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical.[2]:871

Isolated atoms and molecules

Atomic, molecular, and optical physics frequently considers atoms and molecules in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and its molecular hydrogen ion. It is concerned with processes such as ionization, above threshold ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule-molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with.
This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration, atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with huge numbers of molecules.

Electronic configuration

In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place, where the energy is transferred to another bound electron, causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon.

See also

Born–Oppenheimer approximation
Frequency doubling
Hyperfine structure
Isomeric shift
Metamaterial cloaking
Molecular energy state
Molecular modeling
Negative index metamaterials
Nonlinear optics
Optical engineering
Photon polarization
Quantum chemistry
Rigid rotor
Stationary state
Transition of state
Vector model of the atom

Atomic, Molecular, and Optical Physics. National Academy Press. 1986. ISBN 978-0-309-03575-0.
Gordon Drake (ed.) (1996). Handbook of Atomic, Molecular, and Optical Physics. Springer. ISBN 978-0-387-20802-2.
Chen, L. T. (ed.) (2009). Atomic, Molecular and Optical Physics: New Research. Nova Science Publishers. ISBN 978-1-60456-907-0.
C. B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. p. 803. ISBN 978-0-07-051400-3.
R. E. Dickerson; I. Geis (1976). "Chapter 9". Chemistry, Matter, and the Universe. W. A. Benjamin Inc. (USA).
ISBN 978-0-19-855148-5.
I. R. Kenyon (2008). "Chapters 12, 13, 17". The Light Fantastic – Introduction to Classic and Quantum Optics. Oxford University Press. ISBN 978-0-19-856646-5.
Y. B. Band (2010). "Chapter 3". Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers. John Wiley & Sons. ISBN 978-0-471-89931-0.
"Optical Physics". University of Arizona. Retrieved Apr 23, 2014.
"Slow Light". Science Watch. Retrieved Jan 22, 2013.
Y. B. Band (2010). "Chapters 9, 10". Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers. John Wiley & Sons. ISBN 978-0-471-89931-0.
C. B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. pp. 933–934. ISBN 978-0-07-051400-3.
I. R. Kenyon (2008). "Chapters 5, 6, 10, 16". The Light Fantastic – Introduction to Classic and Quantum Optics (2nd ed.). Oxford University Press. ISBN 978-0-19-856646-5.
R. E. Dickerson; I. Geis (1976). "Chapters 7, 8". Chemistry, Matter, and the Universe. W. A. Benjamin Inc. (USA). ISBN 978-0-19-855148-5.
Y. B. Band (2010). Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers. John Wiley & Sons. pp. 4–11. ISBN 978-0-471-89931-0.
P. A. Tipler; G. Mosca (2008). "Chapter 34". Physics for Scientists and Engineers – with Modern Physics. Freeman. ISBN 978-0-7167-8964-2.
Haken, H. (1981). Light (Reprint ed.). Amsterdam u.a.: North-Holland Physics Publ. ISBN 978-0-444-86020-0.
Bransden, B. H.; Joachain, C. J. (2002). Physics of Atoms and Molecules (2nd ed.). Prentice Hall. ISBN 978-0-582-35692-4.
Foot, C. J. (2004). Atomic Physics. Oxford University Press. ISBN 978-0-19-850696-6.
Herzberg, G. (1979) [1945]. Atomic Spectra and Atomic Structure. Dover. ISBN 978-0-486-60115-1.
Condon, E. U.; Shortley, G. H. (1935). The Theory of Atomic Spectra. Cambridge University Press. ISBN 978-0-521-09209-8.
Cowan, Robert D. (1981). The Theory of Atomic Structure and Spectra. University of California Press. ISBN 978-0-520-03821-9.
Lindgren, I.; Morrison, J. (1986).
Atomic Many-Body Theory (2nd ed.). Springer-Verlag. ISBN 978-0-387-16649-0.
J. R. Hook; H. E. Hall (2010). Solid State Physics (2nd ed.). Manchester Physics Series, John Wiley & Sons. ISBN 978-0-471-92804-1.
P. W. Atkins (1978). Physical Chemistry. Oxford University Press. ISBN 978-0-19-855148-5.
I. R. Kenyon (2008). The Light Fantastic – Introduction to Classic and Quantum Optics. Oxford University Press. ISBN 978-0-19-856646-5.
T. Hey; P. Walters (2009). The New Quantum Universe. Cambridge University Press. ISBN 978-0-521-56457-1.
R. Loudon (1996). The Quantum Theory of Light. Oxford University Press (Oxford Science Publications). ISBN 978-0-19-850177-0.
P. W. Atkins (1974). Quanta: A Handbook of Concepts. Oxford University Press. ISBN 978-0-19-855493-6.
E. Abers (2004). Quantum Mechanics. Pearson Ed., Addison Wesley, Prentice Hall Inc. ISBN 978-0-13-146100-0.
P. W. Atkins (1977). Molecular Quantum Mechanics Parts I and II: An Introduction to Quantum Chemistry (Volume 1). Oxford University Press. ISBN 978-0-19-855129-4.
P. W. Atkins (1977). Molecular Quantum Mechanics Part III: An Introduction to Quantum Chemistry (Volume 2). Oxford University Press. ISBN 978-0-19-855129-4.
Y. B. Band (2010). Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers. John Wiley & Sons. ISBN 978-0471-89931-0.
I. R. Kenyon (2008). The Light Fantastic – Introduction to Classic and Quantum Optics. Oxford University Press. ISBN 978-0-19-856646-5.
Gordon Drake (ed.) (1996). Handbook of Atomic, Molecular, and Optical Physics. Springer. ISBN 0-387-20802-X.
Fox, Mark (2010). Optical Properties of Solids. Oxford, New York: Oxford University Press. ISBN 978-0-19-957336-3.

External links

Wikimedia Commons has media related to Atomic, molecular, and optical physics.
Lorentz and Drude Models (see and listen to Lecture 2)
Nonlinear and Anisotropic Materials (see and listen to Lecture 3)
ScienceDirect – Advances in Atomic, Molecular, and Optical Physics
Journal of Physics B: Atomic, Molecular and Optical Physics
American Physical Society – Division of Atomic, Molecular & Optical Physics
European Physical Society – Atomic, Molecular & Optical Physics Division
National Science Foundation – Atomic, Molecular and Optical Physics
MIT-Harvard Center for Ultracold Atoms
JILA – Atomic and Molecular Physics
Joint Quantum Institute at University of Maryland and NIST
ORNL Physics Division
Queen's University Belfast – Center for Theoretical, Atomic, Molecular and Optical Physics
University of California, Berkeley – Atomic, Molecular and Optical Physics
Hydrogen atomic orbitals at different energy levels. The more opaque areas are where one is most likely to find an electron at any given time.

Composition: Elementary particle[1]
Interactions: Gravity, electromagnetic, weak
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838-1851),[2] G. Johnstone Stoney (1874) and others[3][4]
Discovered: J. J. Thomson (1897)[5]
Mean lifetime: stable[8]
Magnetic moment: -1.00115965218091(26) μB[7]

The electron is a subatomic particle, symbol e−, whose electric charge is negative one elementary charge.[9] Electrons belong to the first generation of the lepton particle family,[10] and are generally thought to be elementary particles because they have no known components or substructure.[1] The electron has a mass that is approximately 1/1836 that of the proton.[11] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[10] Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics.
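The mass-wavelength relationship above can be illustrated with the non-relativistic de Broglie formula λ = h / √(2mE). The constants are standard values and the comparison particle (a neutron, per the text) is chosen for illustration:

```python
import math

H = 6.62607015e-34          # Planck constant, J*s
EV = 1.602176634e-19        # joules per electronvolt
M_ELECTRON = 9.1093837e-31  # electron mass, kg
M_NEUTRON = 1.6749275e-27   # neutron mass, kg

def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength, lambda = h / sqrt(2 m E)."""
    return H / math.sqrt(2 * mass_kg * kinetic_energy_ev * EV)

# At the same 100 eV kinetic energy, the far lighter electron has a much
# longer wavelength, which is why its diffraction is easier to observe.
for name, m in (("electron", M_ELECTRON), ("neutron", M_NEUTRON)):
    lam = de_broglie_wavelength(m, 100.0)
    print(f"{name}: {lam * 1e12:.2f} pm")
```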
The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without, allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[13] In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode ray tube experiment.[5] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.

Discovery of the effect of electric force

The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[14] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electrica, to refer to those substances with a property similar to that of amber which attract small objects after being rubbed.[15] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron).
Discovery of two kinds of charges

In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repulsed by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined.[15][16] American scientist Ebenezer Kinnersley later also independently reached the same conclusion.[17]:118 A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (-). He gave them the modern charge nomenclature of positive and negative respectively.[18] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[19] Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron".
A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron.[21][22] The word electron is a combination of the words electric and ion.[23] The suffix -on, which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[24][25]

Discovery of free electrons outside matter

A beam of electrons deflected in a circle by a magnetic field inside a round glass vacuum tube[26]

While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed that the phosphorescent light, which was caused by radiation emitted from the cathode, appeared at the tube wall near the cathode, and that the region of the phosphorescent light could be moved by application of a magnetic field.[27] In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays.[28][29]:393 Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons.[3] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[30] He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum.
Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[28] In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter in which the mean free path of the particles is so long that collisions may be ignored.[29]:394-395 The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates.[31] The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio[c] of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time.[28] While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford, who discovered they emitted particles. He designated these particles alpha and beta on the basis of their ability to penetrate matter.[33] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[34] This evidence strengthened the view that electrons existed as components of atoms.[35][36] In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A.
Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[5] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles", had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[5] He showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[5][37] The name electron was adopted for these particles by the scientific community, mainly due to its advocacy by G. F. FitzGerald, J. Larmor, and H. A. Lorentz.[38]:273 The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1-150 ions with an error margin of less than 0.3%.
Comparable experiments had been done earlier by Thomson's team,[5] using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[39] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[40]

Atomic theory

By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[42] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[43] However, Bohr's model failed to account for the relative intensities of the spectral lines, and it was unsuccessful in explaining the spectra of more complex atoms.[42] Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[44] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[45] In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[46] In turn, he divided the shells into a number of cells, each of which contained one pair of electrons.
With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[45] which were known to largely repeat themselves according to the periodic law.[47] In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.[48] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[42][49] This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[50]

Quantum mechanics

In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light.[51] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[52] The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns.
In 1927, the interference effect was demonstrated with electrons by George Paget Thomson, who passed a beam of electrons through thin metal foils, and by the American physicists Clinton Davisson and Lester Germer, who reflected electrons from a crystal of nickel.[53] De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[54] Rather than yielding a solution that determined the location of an electron over time, this wave equation could also be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[55] Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.[56] In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron - the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[57] In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac
sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[58] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[59]

Particle accelerators

With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[60] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric.
This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.[61] With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[62] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[63] The Large Electron-Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[64][65]

Confinement of individual electrons

Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperature over a range of -269 °C (4 K) to about -258 °C (15 K).[66] The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single-particle formalism, by replacing its mass with the effective mass tensor.

Standard Model of elementary particles. The electron (symbol e) is on the left.

Fundamental properties

The invariant mass of an electron is approximately  kilograms,[69] or  atomic mass units. On the basis of Einstein's principle of mass-energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[11][70] Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.[71] Electrons have an electric charge of  coulombs,[69] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge.
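The rest energy and mass-ratio figures quoted above follow directly from E = mc². A minimal sketch using standard CODATA-style constants (the numerical values are well-known, not taken from this article):

```python
# Mass-energy relations for the electron and proton.
M_E = 9.1093837015e-31   # electron mass, kg
M_P = 1.67262192369e-27  # proton mass, kg
C = 299792458.0          # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt

# Rest energy E = m c^2, converted to MeV:
rest_energy_mev = M_E * C**2 / EV / 1e6
print(f"electron rest energy: {rest_energy_mev:.3f} MeV")
print(f"proton/electron mass ratio: {M_P / M_E:.1f}")
```

The two printed values reproduce the 0.511 MeV rest energy and the ratio of about 1836 stated in the text.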
Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[72] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−; the positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[68][69] The electron has an intrinsic angular momentum or spin of 1/2.[69] This property is usually stated by referring to the electron as a spin-1/2 particle.[68] For such particles the spin magnitude is (√3/2)ħ,[73][d] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[69] It is approximately equal to one Bohr magneton,[74][e] which is a physical constant equal to .[69] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[75] The electron has no known substructure.[1][76] Nevertheless, in condensed matter physics, spin-charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.[77][78][79] The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity.
On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[80] Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters.[81] The upper bound of the electron radius of 10⁻¹⁸ meters[82] can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of , greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[83][84][f] There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of  seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[85] The experimental lower bound for the electron's mean lifetime is  years, at a 90% confidence level.[8][86][87]

Quantum properties

The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ).
When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location, a probability density.[88]:162-218

Virtual particles

In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other.[89] The combination of the energy variation needed to create these particles and the time during which they exist falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ. Thus, for a virtual electron, Δt is at most 1.3×10⁻²¹ s.[90]

A schematic depiction of virtual electron-positron pairs appearing at random near an electron (at lower left)

While an electron-positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity greater than unity.
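The order of magnitude of the virtual-electron lifetime quoted for the uncertainty relation can be checked numerically. A short sketch (not from the original article; it assumes the "borrowed" energy ΔE is one electron rest energy mec²):

```python
# Estimate the lifetime of a virtual electron from Delta_E * Delta_t ~ hbar
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
C = 299_792_458.0       # speed of light, m/s

delta_E = M_E * C**2     # energy "borrowed" to create a virtual electron, J
delta_t = HBAR / delta_E # allowed time, on the order of 1e-21 s
print(f"delta_t ~ {delta_t:.2e} s")
```

The result is on the order of 10⁻²¹ seconds, consistent with the figure in the text.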
Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[91][92] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[93] Virtual particles cause a comparable shielding effect for the mass of the electron.[94] The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[74][95] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[96] The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession.[97] This motion produces both the spin and the magnetic moment of the electron.[10] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[91] The Compton wavelength shows that near elementary particles such as the electron, the uncertainty in energy allows for the creation of virtual particles near the electron. This wavelength sets the scale of the "static" of virtual particles around elementary particles at close distance. An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge.
The strength of this force in the nonrelativistic approximation is determined by Coulomb's inverse square law.[98](pp58-61) When an electron is in motion, it generates a magnetic field.[88](p140) The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[99] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard-Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).[98](pp429-434) When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[100][g][88](p160) The energy emission in turn causes a recoil of the electron, known as the Abraham-Lorentz-Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[101] Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus; the energy change E2 − E1 determines the frequency f of the emitted photon. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering.
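The gyroradius mentioned above is r = γmev⊥/(eB), where v⊥ is the velocity component perpendicular to the field B. A minimal sketch, not from the original article, with an illustrative field strength and speed:

```python
import math

M_E = 9.1093837015e-31      # electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 299_792_458.0           # speed of light, m/s

def gyroradius(v_perp, b_field):
    """Radius of the helical path: r = gamma * m_e * v_perp / (e * B).

    Assumes the motion is purely perpendicular to the field, so the
    Lorentz factor is computed from v_perp alone.
    """
    gamma = 1.0 / math.sqrt(1.0 - (v_perp / C) ** 2)
    return gamma * M_E * v_perp / (E_CHARGE * b_field)

# Illustrative numbers: an electron at 1e7 m/s in a 10 mT field
r = gyroradius(1e7, 0.01)
print(f"r = {r * 1e3:.2f} mm")  # a few millimetres
```

For these (assumed) values the electron circles with a radius of a few millimetres; stronger fields or slower electrons tighten the helix.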
This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[h] The maximum magnitude of this wavelength shift is h/(mec), which is known as the Compton wavelength.[104] For an electron, it has a value of 2.43×10⁻¹² m.[69] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4-0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.[105] The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = e²/(4πε0ħc), which is approximately equal to 1/137.[69] When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[106][107] On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[108][109] Electrons can also interact through the weak force via Z boson exchange, and this is responsible for neutrino-electron elastic scattering.[110]

Atoms and molecules

An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons differs from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital.
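The Compton wavelength and the fine-structure constant discussed above can both be reproduced from the underlying constants. An illustrative check, not part of the original article:

```python
import math

# Approximate CODATA values (assumed)
H = 6.62607015e-34          # Planck constant, J*s
HBAR = H / (2 * math.pi)    # reduced Planck constant
M_E = 9.1093837015e-31      # electron mass, kg
C = 299_792_458.0           # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

# Compton wavelength: lambda_C = h / (m_e * c)
lambda_c = H / (M_E * C)

# Fine-structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
alpha = E_CHARGE**2 / (4 * math.pi * EPS0 * HBAR * C)

print(f"lambda_C = {lambda_c:.3e} m")  # ~2.43e-12 m
print(f"1/alpha  = {1 / alpha:.2f}")   # ~137.04
```

The output matches the quoted values of about 2.43×10⁻¹² m and 1/137.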
Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exists around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[111]:159-160 Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[112] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[111]:127-132 The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[114] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[13] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[115] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei.
By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.[116] A lightning discharge consists primarily of a flow of electrons.[117] The electric potential needed for lightning can be generated by a triboelectric effect.[118][119] Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons: quasiparticles which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass.[121] When free electrons, both in vacuum and in metals, move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[122] Metals are relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann-Franz law,[124] which states that the ratio of thermal conductivity to electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for the electrical conductivity.[127] When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity.
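The proportionality constant in the Wiedemann-Franz law mentioned above, the Lorenz number L = π²kB²/(3e²), depends only on fundamental constants. A brief numerical check (not from the original article):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

# Wiedemann-Franz law: kappa / sigma = L * T, with Lorenz number
# L = (pi^2 / 3) * (k_B / e)^2
lorenz = (math.pi**2 / 3) * (K_B / E_CHARGE) ** 2
print(f"L = {lorenz:.3e} W*Ohm/K^2")  # ~2.44e-8
```

The value of about 2.44×10⁻⁸ W·Ω·K⁻² agrees well with measurements on many metals near room temperature.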
In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[128] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[129] However, the mechanism by which higher temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons.[130][131] The spinon carries the spin and magnetic moment, the orbiton the orbital degree of freedom, and the holon the electric charge.

Motion and energy

According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. However, when relativistic electrons, that is, electrons moving at a speed close to c, are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.[132] The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is Ke = (γ − 1)mec², where me is the mass of the electron.
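The Lorentz factor and relativistic kinetic energy named above can be evaluated for a concrete speed. An illustrative sketch, not part of the original article, for an electron at 99% of the speed of light:

```python
import math

M_E = 9.1093837015e-31  # electron mass, kg
C = 299_792_458.0       # speed of light, m/s
MEV = 1.602176634e-13   # joules per MeV

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def kinetic_energy(v):
    """Relativistic kinetic energy: K = (gamma - 1) * m_e * c^2, in joules."""
    return (lorentz_factor(v) - 1.0) * M_E * C**2

gamma = lorentz_factor(0.99 * C)
k_mev = kinetic_energy(0.99 * C) / MEV
print(f"gamma = {gamma:.2f}, K = {k_mev:.2f} MeV")
```

At 0.99c the Lorentz factor is about 7.1 and the kinetic energy about 3.1 MeV, already several times the 0.511 MeV rest energy.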
For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[133] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[51] For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus.[134]

Pair production of an electron and positron, caused by the close approach of a photon to an atomic nucleus. The lightning symbol represents an exchange of a virtual photon, thus an electric force acts. The angle between the particles is very small.[135]

The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[136] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons. For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[138][139] The surviving protons and neutrons began to participate in reactions with each other, in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium.
This process peaked after about five minutes.[140] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process (n → p + e⁻ + ν̄e). For about the next 300,000-400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[141] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[142] Roughly one million years after the Big Bang, the first generation of stars began to form.[142] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[143] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni). When a pair of virtual particles (such as an electron and positron) is created in the vicinity of a black hole's event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[146] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[147] Cosmic rays are particles traveling through space with high energies.
Energies as high as 3.0×10²⁰ eV have been recorded.[148] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[149] More than half of the cosmic radiation observed from the Earth's surface consists of muons, leptons produced in the upper atmosphere by the decay of pions. A muon, in turn, can decay to form an electron or positron.[150] Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[151] The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[153][154] In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[155] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties.
For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[156] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[157] The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space, a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[160]

Plasma applications

Particle beams

Electron beams are used in welding.[162] They allow high energy densities across a narrow focus diameter and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[163][164] Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[165] This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[166] Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products.[167] Electron beams fluidise or quasi-melt glasses without significant increase of temperature on intensive irradiation: e.g.
intensive electron radiation causes a decrease of viscosity by many orders of magnitude and a stepwise decrease of its activation energy.[168] Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5-20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[169][170] Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam, a process known as the Sokolov-Ternov effect.[i] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.[171] Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20-200 eV.[172] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8-20 keV and the angle of incidence is 1-4°.[173][174] The electron microscope directs a focused beam of electrons at a specimen.
As the beam interacts with the material, some electrons change their properties, such as movement direction, angle, relative phase and energy. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[175] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[176] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[177] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[178] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.

Other applications

In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays.
These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.[182] Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[183] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[184] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[185]

See also

3. ^ Note that older sources list charge-to-mass rather than the modern convention of mass-to-charge ratio.
4. ^ This magnitude is obtained from the spin quantum number as S = √(s(s+1))·ħ for quantum number s = 1/2. See: Gupta (2001).
5. ^ Bohr magneton: μB = eħ/(2me).
6. ^ For an electron with rest mass m0, the rest energy is equal to E = m0c². See: Haken, Wolf, & Brewer (2005).
7. ^ Radiation from non-relativistic electrons is sometimes termed cyclotron radiation.
8. ^ The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows: Δλ = (h/mec)(1 − cos θ), where c is the speed of light in a vacuum and me is the electron mass. See Zombeck (2007).[70](p393, 396)
1. ^ a b c Eichten, E.J.; Peskin, M.E.; Peskin, M. (1983). "New Tests for Quark and Lepton Substructure". Physical Review Letters. 50 (11): 811-814. Bibcode:1983PhRvL..50..811E. doi:10.1103/PhysRevLett.50.811. OSTI 1446807. 2. ^ a b Farrar, W.V. (1969). "Richard Laming and the Coal-Gas Industry, with His Views on the Structure of Matter". Annals of Science. 25 (3): 243-254. doi:10.1080/00033796900200141. 3. ^ a b c d Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities. University of Chicago Press. pp. 70-74, 96. ISBN 978-0-226-02421-9. 4. ^ Buchwald, J.Z.; Warwick, A. (2001).
Histories of the Electron: The Birth of Microphysics. MIT Press. pp. 195-203. ISBN 978-0-262-52424-7. 5. ^ a b c d e f Thomson, J.J. (1897). "Cathode Rays". Philosophical Magazine. 44 (269): 293-316. doi:10.1080/14786449708621070. 6. ^ a b c Mohr, P.J.; Taylor, B.N.; Newell, D.B. "2018 CODATA recommended values". National Institute of Standards and Technology. Gaithersburg, MD: U.S. Department of Commerce. This database was developed by J. Baker, M. Douma, and S. Kotochigova. 7. ^ a b Mohr, P.J.; Taylor, B.N.; Newell, D.B. "The 2014 CODATA Recommended Values of the Fundamental Physical Constants". National Institute of Standards and Technology. Gaithersburg, MD: U.S. Department of Commerce. This database was developed by J. Baker, M. Douma, and S. Kotochigova. 8. ^ a b Agostini, M.; et al. (Borexino Collaboration) (2015). "Test of Electric Charge Conservation with Borexino". Physical Review Letters. 115 (23): 231802. arXiv:1509.01223. Bibcode:2015PhRvL.115w1802A. doi:10.1103/PhysRevLett.115.231802. PMID 26684111. S2CID 206265225. 9. ^ Coff, Jerry (10 September 2010). "What Is An Electron". Retrieved 2010. 10. ^ a b c Curtis, L.J. (2003). Atomic Structure and Lifetimes: A Conceptual Approach. Cambridge University Press. p. 74. ISBN 978-0-521-53635-6. 12. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 236-237. ISBN 978-0-691-13512-0. 13. ^ a b Pauling, L.C. (1960). The Nature of the Chemical Bond and the Structure of Molecules and Crystals: an introduction to modern structural chemistry (3rd ed.). Cornell University Press. pp. 4-10. ISBN 978-0-8014-0333-0. 14. ^ Shipley, J.T. (1945). Dictionary of Word Origins. The Philosophical Library. p. 133. ISBN 978-0-88029-751-6. 15. ^ a b Benjamin, Park (1898), A history of electricity (The intellectual rise in electricity) from antiquity to the days of Benjamin Franklin, New York: J. Wiley, pp. 315, 484-5, ISBN 978-1-313-10605-4 16. 
^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. pp. 19-20. ISBN 978-0-7803-1193-0. 17. ^ Cajori, Florian (1917). A History of Physics in Its Elementary Branches: Including the Evolution of Physical Laboratories. Macmillan. 18. ^ "Benjamin Franklin (1706-1790)". Eric Weisstein's World of Biography. Wolfram Research. Retrieved 2010. 19. ^ Myers, R.L. (2006). The Basics of Physics. Greenwood Publishing Group. p. 242. ISBN 978-0-313-32857-2. 20. ^ Barrow, J.D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society. 24: 24-26. Bibcode:1983QJRAS..24...24B. 21. ^ Okamura, Sōgo (1994). History of Electron Tubes. IOS Press. p. 11. ISBN 978-90-5199-145-1. Retrieved 2015. In 1881, Stoney named this electromagnetic 'electrolion'. It came to be called 'electron' from 1891. [...] In 1906, the suggestion to call cathode ray particles 'electrions' was brought up but through the opinion of Lorentz of Holland 'electrons' came to be widely used. 22. ^ Stoney, G.J. (1894). "Of the "Electron," or Atom of Electricity". Philosophical Magazine. 38 (5): 418-420. doi:10.1080/14786449408620653. 24. ^ Soukhanov, A.H., ed. (1986). Word Mysteries & Histories. Houghton Mifflin. p. 73. ISBN 978-0-395-40265-8. 25. ^ Guralnik, D.B., ed. (1970). Webster's New World Dictionary. Prentice Hall. p. 450. 26. ^ Born, M.; Blin-Stoyle, R.J.; Radcliffe, J.M. (1989). Atomic Physics. Courier Dover. p. 26. ISBN 978-0-486-65984-8. 27. ^ Plücker, M. (1858-12-01). "XLVI. Observations on the electrical discharge through rarefied gases". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 16 (109): 408-418. doi:10.1080/14786445808642591. ISSN 1941-5982. 28. ^ a b c Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover. pp. 221-222. ISBN 978-0-486-61053-5. 29. ^ a b Whittaker, E.T. (1951). A History of the Theories of Aether and Electricity. 1. London: Nelson.
30. ^ DeKosky, R.K. (1983). "William Crookes and the quest for absolute vacuum in the 1870s". Annals of Science. 40 (1): 1-18. doi:10.1080/00033798300200101. 31. ^ Schuster, Arthur (1890). "The discharge of electricity through gases". Proceedings of the Royal Society of London. 47: 526-559. 32. ^ Wilczek, Frank (June 2012). "Happy birthday, electron". Scientific American. 33. ^ Trenn, T.J. (1976). "Rutherford on the Alpha-Beta-Gamma Classification of Radioactive Rays". Isis. 67 (1): 61-75. doi:10.1086/351545. JSTOR 231134. S2CID 145281124. 34. ^ Becquerel, H. (1900). "Déviation du Rayonnement du Radium dans un Champ Électrique". Comptes rendus de l'Académie des sciences (in French). 130: 809-815. 35. ^ Buchwald and Warwick (2001:90-91). 36. ^ Myers, W.G. (1976). "Becquerel's Discovery of Radioactivity in 1896". Journal of Nuclear Medicine. 17 (7): 579-582. PMID 775027. 37. ^ Thomson, J.J. (1906). "Nobel Lecture: Carriers of Negative Electricity" (PDF). The Nobel Foundation. Archived from the original (PDF) on 10 October 2008. Retrieved 2008. 38. ^ O'Hara, J. G. (March 1975). "George Johnstone Stoney, F.R.S., and the Concept of the Electron". Notes and Records of the Royal Society of London. Royal Society. 29 (2): 265-276. doi:10.1098/rsnr.1975.0018. JSTOR 531468. S2CID 145353314. 39. ^ Kikoin, I.K.; Sominskiĭ, I.S. (1961). "Abram Fedorovich Ioffe (on his eightieth birthday)". Soviet Physics Uspekhi. 3 (5): 798-809. Bibcode:1961SvPhU...3..798K. doi:10.1070/PU1961v003n05ABEH005812. Original publication in Russian: Kikoin, I.K.; Sominskiĭ, I.S. (1960). Uspekhi Fizicheskikh Nauk. 72 (10): 303-321. doi:10.3367/UFNr.0072.196010e.0307. 40. ^ Millikan, R.A. (1911). "The Isolation of an Ion, a Precision Measurement of its Charge, and the Correction of Stokes's Law" (PDF). Physical Review. 32 (2): 349-397. Bibcode:1911PhRvI..32..349M. doi:10.1103/PhysRevSeriesI.32.349. 41. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). "A Report on the Wilson Cloud Chamber and Its Applications in Physics".
Reviews of Modern Physics. 18 (2): 225-290. Bibcode:1946RvMP...18..225G. doi:10.1103/RevModPhys.18.225. 42. ^ a b c Smirnov, B.M. (2003). Physics of Atoms and Ions. Springer. pp. 14-21. ISBN 978-0-387-95550-6. 43. ^ Bohr, N. (1922). "Nobel Lecture: The Structure of the Atom" (PDF). The Nobel Foundation. Retrieved 2008. 44. ^ Lewis, G.N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society. 38 (4): 762-786. doi:10.1021/ja02261a002. 45. ^ a b Arabatzis, T.; Gavroglu, K. (1997). "The chemists' electron" (PDF). European Journal of Physics. 18 (3): 150-163. Bibcode:1997EJPh...18..150A. doi:10.1088/0143-0807/18/3/005. S2CID 56117976. Archived from the original (PDF) on 2020-06-05. 46. ^ Langmuir, I. (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868-934. doi:10.1021/ja02227a002. 47. ^ Scerri, E.R. (2007). The Periodic Table. Oxford University Press. pp. 205-226. ISBN 978-0-19-530573-9. 48. ^ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7-8. ISBN 978-0-521-83911-2. 49. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). "Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons". Die Naturwissenschaften (in German). 13 (47): 953-954. Bibcode:1925NW.....13..953E. doi:10.1007/BF01558878. S2CID 32211960. 50. ^ Pauli, W. (1923). "Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes". Zeitschrift für Physik (in German). 16 (1): 155-164. Bibcode:1923ZPhy...16..155P. doi:10.1007/BF01327386. S2CID 122256737. 51. ^ a b de Broglie, L. (1929). "Nobel Lecture: The Wave Nature of the Electron" (PDF). The Nobel Foundation. Retrieved 2008. 52. ^ Falkenburg, B. (2007). Particle Metaphysics: A Critical Account of Subatomic Reality. Springer. p. 85. Bibcode:2007pmca.book.....F. ISBN 978-3-540-33731-7. 53. ^ Davisson, C. (1937). 
"Nobel Lecture: The Discovery of Electron Waves" (PDF). The Nobel Foundation. Retrieved 2008. 54. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem". Annalen der Physik (in German). 385 (13): 437-490. Bibcode:1926AnP...385..437S. doi:10.1002/andp.19263851302. 55. ^ Rigden, J.S. (2003). Hydrogen. Harvard University Press. pp. 59-86. ISBN 978-0-674-01252-3. 56. ^ Reed, B.C. (2007). Quantum Mechanics. Jones & Bartlett Publishers. pp. 275-350. ISBN 978-0-7637-4451-9. 57. ^ Dirac, P.A.M. (1928). "The Quantum Theory of the Electron" (PDF). Proceedings of the Royal Society A. 117 (778): 610-624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023. 58. ^ Dirac, P.A.M. (1933). "Nobel Lecture: Theory of Electrons and Positrons" (PDF). The Nobel Foundation. Retrieved 2008. 59. ^ "The Nobel Prize in Physics 1965". The Nobel Foundation. Retrieved 2008. 60. ^ Panofsky, W.K.H. (1997). "The Evolution of Particle Accelerators & Colliders" (PDF). Beam Line. 27 (1): 36-44. Retrieved 2008. 61. ^ Elder, F.R.; et al. (1947). "Radiation from Electrons in a Synchrotron". Physical Review. 71 (11): 829-830. Bibcode:1947PhRv...71..829E. doi:10.1103/PhysRev.71.829.5. 62. ^ Hoddeson, L.; et al. (1997). The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge University Press. pp. 25-26. ISBN 978-0-521-57816-5. 63. ^ Bernardini, C. (2004). "AdA: The First Electron-Positron Collider". Physics in Perspective. 6 (2): 156-183. Bibcode:2004PhP.....6..156B. doi:10.1007/s00016-003-0202-y. S2CID 122534669. 64. ^ "Testing the Standard Model: The LEP experiments". CERN. 2008. Retrieved 2008. 65. ^ "LEP reaps a final harvest". CERN Courier. 40 (10). 2000. 66. ^ Prati, E.; De Michielis, M.; Belli, M.; Cocco, S.; Fanciulli, M.; Kotekar-Patil, D.; Ruoff, M.; Kern, D.P.; Wharam, D.A.; Verduijn, J.; Tettamanzi, G.C.; Rogge, S.; Roche, B.; Wacquez, R.; Jehl, X.; Vinet, M.; Sanquer, M. (2012). 
"Few electron limit of n-type metal oxide semiconductor single electron transistors". Nanotechnology. 23 (21): 215204. arXiv:1203.4811. Bibcode:2012Nanot..23u5204P. CiteSeerX doi:10.1088/0957-4484/23/21/215204. PMID 22552118. S2CID 206063658. 67. ^ Frampton, P.H.; Hung, P.Q.; Sher, Marc (2000). "Quarks and Leptons Beyond the Third Generation". Physics Reports. 330 (5-6): 263-348. arXiv:hep-ph/9903387. Bibcode:2000PhR...330..263F. doi:10.1016/S0370-1573(99)00095-2. S2CID 119481188. 68. ^ a b c Raith, W.; Mulvey, T. (2001). Constituents of Matter: Atoms, Molecules, Nuclei and Particles. CRC Press. pp. 777-781. ISBN 978-0-8493-1202-1. 69. ^ a b c d e f g h The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2008). "CODATA recommended values of the fundamental physical constants". Reviews of Modern Physics. 80 (2): 633-730. arXiv:0801.0028. Bibcode:2008RvMP...80..633M. CiteSeerX doi:10.1103/RevModPhys.80.633. 70. ^ a b Zombeck, M.V. (2007). Handbook of Space Astronomy and Astrophysics (3rd ed.). Cambridge University Press. p. 14. ISBN 978-0-521-78242-5. 71. ^ Murphy, M.T.; et al. (2008). "Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe". Science. 320 (5883): 1611-1613. arXiv:0806.3081. Bibcode:2008Sci...320.1611M. doi:10.1126/science.1156352. PMID 18566280. S2CID 2384708. 72. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). "Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron". Physical Review. 129 (6): 2566-2576. Bibcode:1963PhRv..129.2566Z. doi:10.1103/PhysRev.129.2566. 73. ^ Gupta, M.C. (2001). Atomic and Molecular Spectroscopy. New Age Publishers. p. 81. ISBN 978-81-224-1300-7.CS1 maint: ref duplicates default (link) 75. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 261-262. ISBN 978-0-691-13512-0. 76. ^ Gabrielse, G.; et al. (2006). 
"New Determination of the Fine Structure Constant from the Electron g Value and QED". Physical Review Letters. 97 (3): 030802(1-4). Bibcode:2006PhRvL..97c0802G. doi:10.1103/PhysRevLett.97.030802. PMID 16907491. 77. ^ "UK | England | Physicists 'make electrons split'". BBC News. 2009-08-28. Retrieved . 78. ^ Discovery About Behavior Of Building Block Of Nature Could Lead To Computer Revolution. Science Daily (July 31, 2009) 79. ^ Yarris, Lynn (2006-07-13). "First Direct Observations of Spinons and Holons". Lbl.gov. Retrieved . 80. ^ Eduard Shpolsky, Atomic physics (Atomnaia fizika), second edition, 1951 81. ^ Dehmelt, H. (1988). "A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius". Physica Scripta. T22: 102-110. Bibcode:1988PhST...22..102D. doi:10.1088/0031-8949/1988/T22/016. 82. ^ Gabrielse, Gerald. "Electron Substructure". Physics. Harvard University. Archived from the original on 2019-04-10. Retrieved . 83. ^ Meschede, D. (2004). Optics, light and lasers: The Practical Approach to Modern Aspects of Photonics and Laser Physics. Wiley-VCH. p. 168. ISBN 978-3-527-40364-6. 84. ^ Haken, H.; Wolf, H.C.; Brewer, W.D. (2005). The Physics of Atoms and Quanta: Introduction to Experiments and Theory. Springer. p. 70. ISBN 978-3-540-67274-6. 85. ^ Steinberg, R.I.; et al. (1999). "Experimental test of charge conservation and the stability of the electron". Physical Review D. 61 (2): 2582-2586. Bibcode:1975PhRvD..12.2582S. doi:10.1103/PhysRevD.12.2582. 86. ^ Beringer, J.; et al. (Particle Data Group) (2012). "Review of Particle Physics: [electron properties]" (PDF). Physical Review D. 86 (1): 010001. Bibcode:2012PhRvD..86a0001B. doi:10.1103/PhysRevD.86.010001. 87. ^ Back, H.O.; et al. (2002). "Search for electron decay mode e -> ? + ? with prototype of Borexino detector". Physics Letters B. 525 (1-2): 29-40. Bibcode:2002PhLB..525...29B. doi:10.1016/S0370-2693(01)01440-X. 88. ^ a b c d e Munowitz, M. (2005). 
Knowing the Nature of Physical Law. Oxford University Press. p. 162. ISBN 978-0-19-516737-5. 89. ^ Kane, G. (9 October 2006). "Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics?". Scientific American. Retrieved 2008. 90. ^ Taylor, J. (1989). "Gauge Theories in Particle Physics". In Davies, Paul (ed.). The New Physics. Cambridge University Press. p. 464. ISBN 978-0-521-43831-5. 91. ^ a b Genz, H. (2001). Nothingness: The Science of Empty Space. Da Capo Press. pp. 241-243, 245-247. ISBN 978-0-7382-0610-3. 92. ^ Gribbin, J. (25 January 1997). "More to electrons than meets the eye". New Scientist. Retrieved 2008. 93. ^ Levine, I.; et al. (1997). "Measurement of the Electromagnetic Coupling at Large Momentum Transfer". Physical Review Letters. 78 (3): 424-427. Bibcode:1997PhRvL..78..424L. doi:10.1103/PhysRevLett.78.424. 94. ^ Murayama, H. (10-17 March 2006). Supersymmetry Breaking Made Easy, Viable and Generic. Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories. La Thuile, Italy. arXiv:0709.3041. Bibcode:2007arXiv0709.3041M. -- lists a 9% mass difference for an electron that is the size of the Planck distance. 95. ^ Schwinger, J. (1948). "On Quantum-Electrodynamics and the Magnetic Moment of the Electron". Physical Review. 73 (4): 416-417. Bibcode:1948PhRv...73..416S. doi:10.1103/PhysRev.73.416. 96. ^ Huang, K. (2007). Fundamental Forces of Nature: The Story of Gauge Fields. World Scientific. pp. 123-125. ISBN 978-981-270-645-4. 97. ^ Foldy, L.L.; Wouthuysen, S. (1950). "On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit". Physical Review. 78 (1): 29-36. Bibcode:1950PhRv...78...29F. doi:10.1103/PhysRev.78.29. 98. ^ a b Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0. 99. ^ Crowell, B. (2000). Electricity and Magnetism. Light and Matter. pp. 
129-152. ISBN 978-0-9704670-4-1. 100. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). "Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field". The Astrophysical Journal. 465: 327-337. arXiv:astro-ph/9601073. Bibcode:1996ApJ...465..327M. doi:10.1086/177422. S2CID 16324613. 101. ^ Rohrlich, F. (1999). "The Self-Force and Radiation Reaction". American Journal of Physics. 68 (12): 1109-1112. Bibcode:2000AmJPh..68.1109R. doi:10.1119/1.1286430. 102. ^ Georgi, H. (1989). "Grand Unified Theories". In Davies, Paul (ed.). The New Physics. Cambridge University Press. p. 427. ISBN 978-0-521-43831-5. 103. ^ Blumenthal, G.J.; Gould, R. (1970). "Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases". Reviews of Modern Physics. 42 (2): 237-270. Bibcode:1970RvMP...42..237B. doi:10.1103/RevModPhys.42.237. 104. ^ "The Nobel Prize in Physics 1927". The Nobel Foundation. 2008. Retrieved 2008. 105. ^ Chen, S.-Y.; Maksimchuk, A.; Umstadter, D. (1998). "Experimental observation of relativistic nonlinear Thomson scattering". Nature. 396 (6712): 653-655. arXiv:physics/9810036. Bibcode:1998Natur.396..653C. doi:10.1038/25303. S2CID 16080209. 106. ^ Beringer, R.; Montgomery, C.G. (1942). "The Angular Distribution of Positron Annihilation Radiation". Physical Review. 61 (5-6): 222-224. Bibcode:1942PhRv...61..222B. doi:10.1103/PhysRev.61.222. 107. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 978-0-13-082444-8. 108. ^ Eichler, J. (2005). "Electron-positron pair production in relativistic ion-atom collisions". Physics Letters A. 347 (1-3): 67-72. Bibcode:2005PhLA..347...67E. doi:10.1016/j.physleta.2005.06.105. 109. ^ Hubbell, J.H. (2006). "Electron positron pair production by photons: A historical overview". Radiation Physics and Chemistry. 75 (6): 614-623. Bibcode:2006RaPC...75..614H. doi:10.1016/j.radphyschem.2005.10.008. 110. ^ Quigg, C. (4-30 June 2000). 
The Electroweak Theory. TASI 2000: Flavor Physics for the Millennium. Boulder, Colorado. p. 80. arXiv:hep-ph/0204104. Bibcode:2002hep.ph....4104Q. 111. ^ a b Tipler, Paul; Llewellyn, Ralph (2003). Modern Physics (illustrated ed.). Macmillan. ISBN 978-0-7167-4345-3. 112. ^ Burhop, E.H.S. (1952). The Auger Effect and Other Radiationless Transitions. Cambridge University Press. pp. 2-3. ISBN 978-0-88275-966-1. 113. ^ Jiles, D. (1998). Introduction to Magnetism and Magnetic Materials. CRC Press. pp. 280-287. ISBN 978-0-412-79860-3. 114. ^ Löwdin, P.O.; Erkki Brändas, E.; Kryachko, E.S. (2003). Fundamental World of Quantum Chemistry: A Tribute to the Memory of Per-Olov Löwdin. Springer Science+Business Media. pp. 393-394. ISBN 978-1-4020-1290-7. 115. ^ McQuarrie, D.A.; Simon, J.D. (1997). Physical Chemistry: A Molecular Approach. University Science Books. pp. 325-361. ISBN 978-0-935702-99-6. 116. ^ Daudel, R.; et al. (1974). "The Electron Pair in Chemistry". Canadian Journal of Chemistry. 52 (8): 1310-1320. doi:10.1139/v74-201. 117. ^ Rakov, V.A.; Uman, M.A. (2007). Lightning: Physics and Effects. Cambridge University Press. p. 4. ISBN 978-0-521-03541-5. 118. ^ Freeman, G.R.; March, N.H. (1999). "Triboelectricity and some associated phenomena". Materials Science and Technology. 15 (12): 1454-1458. doi:10.1179/026708399101505464. 119. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). "Methodology for studying particle-particle triboelectrification in granular materials". Journal of Electrostatics. 67 (2-3): 178-183. doi:10.1016/j.elstat.2008.12.002. 120. ^ Weinberg, S. (2003). The Discovery of Subatomic Particles. Cambridge University Press. pp. 15-16. ISBN 978-0-521-82351-7. 121. ^ Lou, L.-F. (2003). Introduction to phonons and electrons. World Scientific. pp. 162, 164. Bibcode:2003ipe..book.....L. ISBN 978-981-238-461-4. 122. ^ Guru, B.S.; H?z?ro?lu, H.R. (2004). Electromagnetic Field Theory. Cambridge University Press. pp. 138, 276. ISBN 978-0-521-83016-4. 123. 
^ Achuthan, M.K.; Bhat, K.N. (2007). Fundamentals of Semiconductor Devices. Tata McGraw-Hill. pp. 49-67. ISBN 978-0-07-061220-4. 124. ^ a b Ziman, J.M. (2001). Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press. p. 260. ISBN 978-0-19-850779-6. 125. ^ Main, P. (12 June 1993). "When electrons go with the flow: Remove the obstacles that create electrical resistance, and you get ballistic electrons and a quantum surprise". New Scientist. 1887: 30. Retrieved 2008. 126. ^ Blackwell, G.R. (2000). The Electronic Packaging Handbook. CRC Press. pp. 6.39-6.40. ISBN 978-0-8493-8591-9. 127. ^ Durrant, A. (2000). Quantum Physics of Matter: The Physical World. CRC Press. pp. 43, 71-78. ISBN 978-0-7503-0721-5. 128. ^ "The Nobel Prize in Physics 1972". The Nobel Foundation. 2008. Retrieved 2008. 129. ^ Kadin, A.M. (2007). "Spatial Structure of the Cooper Pair". Journal of Superconductivity and Novel Magnetism. 20 (4): 285-292. arXiv:cond-mat/0510279. doi:10.1007/s10948-006-0198-z. S2CID 54948290. 130. ^ "Discovery about behavior of building block of nature could lead to computer revolution". ScienceDaily. 31 July 2009. Retrieved 2009. 131. ^ Jompol, Y.; et al. (2009). "Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid". Science. 325 (5940): 597-601. arXiv:1002.2782. Bibcode:2009Sci...325..597J. doi:10.1126/science.1171769. PMID 19644117. S2CID 206193. 132. ^ "The Nobel Prize in Physics 1958, for the discovery and the interpretation of the Cherenkov effect". The Nobel Foundation. 2008. Retrieved 2008. 133. ^ "Special Relativity". Stanford Linear Accelerator Center. 26 August 2008. Retrieved 2008. 134. ^ Adams, S. (2000). Frontiers: Twentieth Century Physics. CRC Press. p. 215. ISBN 978-0-7484-0840-5. 135. ^ Bianchini, Lorenzo (2017). Selected Exercises in Particle and Nuclear Physics. Springer. p. 79. ISBN 978-3-319-70494-4. 136. ^ Lurquin, P.F. (2003). The Origins of Life and the Universe. Columbia University Press. p. 2. 
ISBN 978-0-231-12655-7. 137. ^ Silk, J. (2000). The Big Bang: The Creation and Evolution of the Universe (3rd ed.). Macmillan. pp. 110-112, 134-137. ISBN 978-0-8050-7256-3. 138. ^ Kolb, E.W.; Wolfram, Stephen (1980). "The Development of Baryon Asymmetry in the Early Universe" (PDF). Physics Letters B. 91 (2): 217-221. Bibcode:1980PhLB...91..217K. doi:10.1016/0370-2693(80)90435-9. 139. ^ Sather, E. (Spring-Summer 1996). "The Mystery of Matter Asymmetry" (PDF). Beam Line. Stanford University. Retrieved 2008. 141. ^ Boesgaard, A.M.; Steigman, G. (1985). "Big bang nucleosynthesis - Theories and observations". Annual Review of Astronomy and Astrophysics. 23 (2): 319-378. Bibcode:1985ARA&A..23..319B. doi:10.1146/annurev.aa.23.090185.001535. 142. ^ a b Barkana, R. (2006). "The First Stars in the Universe and Cosmic Reionization". Science. 313 (5789): 931-934. arXiv:astro-ph/0608450. Bibcode:2006Sci...313..931B. CiteSeerX doi:10.1126/science.1125644. PMID 16917052. S2CID 8702746. 143. ^ Burbidge, E.M.; et al. (1957). "Synthesis of Elements in Stars" (PDF). Reviews of Modern Physics. 29 (4): 548-647. Bibcode:1957RvMP...29..547B. doi:10.1103/RevModPhys.29.547. 144. ^ Rodberg, L.S.; Weisskopf, V. (1957). "Fall of Parity: Recent Discoveries Related to Symmetry of Laws of Nature". Science. 125 (3249): 627-633. Bibcode:1957Sci...125..627R. doi:10.1126/science.125.3249.627. PMID 17810563. 145. ^ Fryer, C.L. (1999). "Mass Limits For Black Hole Formation". The Astrophysical Journal. 522 (1): 413-418. arXiv:astro-ph/9902315. Bibcode:1999ApJ...522..413F. doi:10.1086/307647. S2CID 14227409. 146. ^ Parikh, M.K.; Wilczek, F. (2000). "Hawking Radiation As Tunneling". Physical Review Letters. 85 (24): 5042-5045. arXiv:hep-th/9907001. Bibcode:2000PhRvL..85.5042P. doi:10.1103/PhysRevLett.85.5042. hdl:1874/17028. PMID 11102182. S2CID 8013726. 147. ^ Hawking, S.W. (1974). "Black hole explosions?". Nature. 248 (5443): 30-31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0. S2CID 4290107. 
148. ^ Halzen, F.; Hooper, D. (2002). "High-energy neutrino astronomy: the cosmic ray connection". Reports on Progress in Physics. 66 (7): 1025-1078. arXiv:astro-ph/0204527. Bibcode:2002RPPh...65.1025H. doi:10.1088/0034-4885/65/7/201. S2CID 53313620. 149. ^ Ziegler, J.F. (1998). "Terrestrial cosmic ray intensities". IBM Journal of Research and Development. 42 (1): 117-139. Bibcode:1998IBMJ...42..117Z. doi:10.1147/rd.421.0117. 150. ^ Sutton, C. (4 August 1990). "Muons, pions and other strange particles". New Scientist. Retrieved 2008. 151. ^ Wolpert, S. (24 July 2008). "Scientists solve 30 year-old aurora borealis mystery" (Press release). University of California. Archived from the original on 17 August 2008. Retrieved 2008. 152. ^ Gurnett, D.A.; Anderson, R. (1976). "Electron Plasma Oscillations Associated with Type III Radio Bursts". Science. 194 (4270): 1159-1162. Bibcode:1976Sci...194.1159G. doi:10.1126/science.194.4270.1159. PMID 17790910. S2CID 11401604. 153. ^ Martin, W.C.; Wiese, W.L. (2007). "Atomic Spectroscopy: A compendium of basic ideas, notation, data, and formulas". National Institute of Standards and Technology. Retrieved 2007. 154. ^ Fowles, G.R. (1989). Introduction to Modern Optics. Courier Dover. pp. 227-233. ISBN 978-0-486-65957-2. 155. ^ Grupen, C. (2000). "Physics of Particle Detection". AIP Conference Proceedings. 536: 3-34. arXiv:physics/9906063. Bibcode:2000AIPC..536....3G. doi:10.1063/1.1361756. S2CID 119476972. 156. ^ "The Nobel Prize in Physics 1989". The Nobel Foundation. 2008. Retrieved 2008. 157. ^ Ekstrom, P.; Wineland, David (1980). "The isolated Electron" (PDF). Scientific American. 243 (2): 91-101. Bibcode:1980SciAm.243b.104E. doi:10.1038/scientificamerican0880-104. Retrieved 2008. 158. ^ Mauritsson, J. "Electron filmed for the first time ever" (PDF). Lund University. Archived from the original (PDF) on 25 March 2009. Retrieved 2008. 159. ^ Mauritsson, J.; et al. (2008). 
"Coherent Electron Scattering Captured by an Attosecond Quantum Stroboscope". Physical Review Letters. 100 (7): 073003. arXiv:0708.1060. Bibcode:2008PhRvL.100g3003M. doi:10.1103/PhysRevLett.100.073003. PMID 18352546. S2CID 1357534. 160. ^ Damascelli, A. (2004). "Probing the Electronic Structure of Complex Systems by ARPES". Physica Scripta. T109: 61-74. arXiv:cond-mat/0307085. Bibcode:2004PhST..109...61D. doi:10.1238/Physica.Topical.109a00061. S2CID 21730523. 161. ^ "Image # L-1975-02972". Langley Research Center. NASA. 4 April 1975. Archived from the original on 7 December 2008. Retrieved 2008. 162. ^ Elmer, J. (3 March 2008). "Standardizing the Art of Electron-Beam Welding". Lawrence Livermore National Laboratory. Archived from the original on 20 September 2008. Retrieved 2008. 163. ^ Schultz, H. (1993). Electron Beam Welding. Woodhead Publishing. pp. 2-3. ISBN 978-1-85573-050-2. 164. ^ Benedict, G.F. (1987). Nontraditional Manufacturing Processes. Manufacturing engineering and materials processing. 19. CRC Press. p. 273. ISBN 978-0-8247-7352-6. 165. ^ Ozdemir, F.S. (25-27 June 1979). Electron beam lithography. Proceedings of the 16th Conference on Design automation. San Diego, CA: IEEE Press. pp. 383-391. Retrieved 2008. 166. ^ Madou, M.J. (2002). Fundamentals of Microfabrication: the Science of Miniaturization (2nd ed.). CRC Press. pp. 53-54. ISBN 978-0-8493-0826-0. 167. ^ Jongen, Y.; Herer, A. (2-5 May 1996). [no title cited]. APS/AAPT Joint Meeting. Electron Beam Scanning in Industrial Applications. American Physical Society. Bibcode:1996APS..MAY.H9902J. 168. ^ Mobus, G.; et al. (2010). "Nano-scale quasi-melting of alkali-borosilicate glasses under electron irradiatio". Journal of Nuclear Materials. 396 (2-3): 264-271. Bibcode:2010JNuM..396..264M. doi:10.1016/j.jnucmat.2009.11.020. 169. ^ Beddar, A.S.; Domanovic, Mary Ann; Kubu, Mary Lou; Ellis, Rod J.; Sibata, Claudio H.; Kinsella, Timothy J. (2001). 
"Mobile linear accelerators for intraoperative radiation therapy". AORN Journal. 74 (5): 700-705. doi:10.1016/S0001-2092(06)61769-9. PMID 11725448. 170. ^ Gazda, M.J.; Coia, L.R. (1 June 2007). "Principles of Radiation Therapy" (PDF). Retrieved 2013. 171. ^ Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 978-981-02-3500-0. 172. ^ Oura, K.; et al. (2003). Surface Science: An Introduction. Springer Science+Business Media. pp. 1-45. ISBN 978-3-540-00545-2. 173. ^ Ichimiya, A.; Cohen, P.I. (2004). Reflection High-energy Electron Diffraction. Cambridge University Press. p. 1. ISBN 978-0-521-45373-8. 174. ^ Heppell, T.A. (1967). "A combined low energy and reflection high energy electron diffraction apparatus". Journal of Scientific Instruments. 44 (9): 686-688. Bibcode:1967JScI...44..686H. doi:10.1088/0950-7671/44/9/311. 175. ^ McMullan, D. (1993). "Scanning Electron Microscopy: 1928-1965". University of Cambridge. Retrieved 2009. 176. ^ Slayter, H.S. (1992). Light and electron microscopy. Cambridge University Press. p. 1. ISBN 978-0-521-33948-3. 177. ^ Cember, H. (1996). Introduction to Health Physics. McGraw-Hill Professional. pp. 42-43. ISBN 978-0-07-105461-4. 179. ^ Bozzola, J.J.; Russell, L.D. (1999). Electron Microscopy: Principles and Techniques for Biologists. Jones & Bartlett Publishers. pp. 12, 197-199. ISBN 978-0-7637-0192-5. 180. ^ Flegler, S.L.; Heckman Jr., J.W.; Klomparens, K.L. (1995). Scanning and Transmission Electron Microscopy: An Introduction (Reprint ed.). Oxford University Press. pp. 43-45. ISBN 978-0-19-510751-7. 181. ^ Bozzola, J.J.; Russell, L.D. (1999). Electron Microscopy: Principles and Techniques for Biologists (2nd ed.). Jones & Bartlett Publishers. p. 9. ISBN 978-0-7637-0192-5. 182. ^ Freund, H.P.; Antonsen, T. (1996). Principles of Free-Electron Lasers. Springer. pp. 1-30. ISBN 978-0-412-72540-1. 183. ^ Kitzmiller, J.W. (1995). 
Television Picture Tubes and Other Cathode-Ray Tubes: Industry and Trade Summary. Diane Publishing. pp. 3-5. ISBN 978-0-7881-2100-5. 184. ^ Sclater, N. (1999). Electronic Technology Handbook. McGraw-Hill Professional. pp. 227-228. ISBN 978-0-07-058048-0. 185. ^ "The History of the Integrated Circuit". The Nobel Foundation. 2008. Retrieved 2008. External links Music Scenes
6b5ee9abbe4679c5
In my previous post, I briefly discussed the work of the four Fields medalists of 2010 (Lindenstrauss, Ngo, Smirnov, and Villani). In this post I will discuss the work of Dan Spielman (winner of the Nevanlinna prize), Yves Meyer (winner of the Gauss prize), and Louis Nirenberg (winner of the Chern medal). Again by chance, the work of all three of the recipients overlaps to some extent with my own areas of expertise, so I will be able to discuss a sample contribution for each of them. Again, my choice of contribution is somewhat idiosyncratic and is not intended to represent the “best” work of each of the awardees. — 1. Dan Spielman — Dan Spielman works in numerical analysis (and in particular, numerical linear algebra) and theoretical computer science. Here I want to talk about one of his key contributions, namely his pioneering work with Teng on smoothed analysis. This is about an idea as much as it is about a collection of rigorous results, though Spielman and Teng certainly did buttress their ideas with serious new theorems. Prior to this work, there were two basic ways that one analysed the performance (which could mean run-time, accuracy, or some other desirable quality) of a given algorithm. Firstly, one could perform a worst-case analysis, in which one assumed that the input was chosen in such an “adversarial” fashion that the performance was as poor as possible. Such an analysis would be suitable for applications such as certain aspects of cryptography, in which the input really was chosen by an adversary, or in high-stakes situations in which there was zero tolerance for any error whatsoever; it is also useful as a “default” analysis for when no realistic input model is available. At the other extreme, one could perform an average-case analysis, in which the input was chosen in a completely random fashion (e.g. a random string of zeroes and ones, or a random vector whose entries were all distributed according to a Gaussian distribution). 
While such input models were usually not too realistic (except in situations where the signal-to-noise ratio was very low), they were usually fairly simple to analyse (using tools such as concentration of measure). In many situations, the worst-case analysis is too conservative, and the average-case analysis is too optimistic or unrealistic. For instance, when using the popular simplex method to solve linear programming problems, the worst-case run-time can be exponentially large in the size of the problem, whereas the average-case run-time (in which one is fed a randomly chosen linear program as input) is polynomial. However, the typical linear program that one encounters in practice has enough structure to it that it does not resemble a randomly chosen program at all, and so it is not clear that the average-case bound is appropriate for the type of inputs one has in practice. At the other extreme, the exponentially bad worst-case inputs were so rare that they never seemed to come up in practice either. To obtain a better input model, Spielman and Teng considered a smoothed-case model, in which the input was the sum of a deterministic (and possibly worst-case) input, and a small noise perturbation, which they took to be Gaussian to simplify their analysis. This reflected the presence of measurement error, roundoff error, and similar sources of noise in real-life applications of numerical algorithms. Remarkably, they were able to analyse the run-time of the simplex method for this model, concluding (after a lengthy technical argument) that under reasonable choices of parameters, the run time was polynomial in the size of the input, thus explaining the empirically observed phenomenon that the simplex method tended to run a lot better in practice than its worst-case analysis would predict, even if one started with extremely ill-conditioned inputs, provided that there was a bit of noise in the system. 
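As a toy illustration of the smoothed input model (my own sketch, not part of Spielman and Teng's actual analysis, and with an arbitrarily chosen 2x2 example and noise level), one can take an exactly singular matrix, the worst possible input for any algorithm that needs to invert it, and add a small Gaussian perturbation; the perturbed matrix is then well-conditioned with high probability:

```python
import math
import random

def cond_2x2(A):
    """Condition number of a 2x2 matrix via its singular values,
    computed from the eigenvalues of A^T A (quadratic formula)."""
    (p, q), (r, s) = A
    a = p * p + r * r          # (A^T A)[0][0]
    b = p * q + r * s          # (A^T A)[0][1]
    c = q * q + s * s          # (A^T A)[1][1]
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam_max = (a + c + disc) / 2
    lam_min = (a + c - disc) / 2
    if lam_min <= 0:
        return math.inf        # singular (or numerically so)
    return math.sqrt(lam_max / lam_min)

random.seed(0)
A = [[1.0, 1.0], [1.0, 1.0]]   # rank one: infinitely ill-conditioned
sigma = 1e-3                   # noise level of the smoothed model
A_pert = [[A[i][j] + sigma * random.gauss(0, 1) for j in range(2)]
          for i in range(2)]

print(cond_2x2(A))       # inf
print(cond_2x2(A_pert))  # finite (roughly of size 1/sigma here)
```

The same phenomenon, in quantitative form and for arbitrary dimension, is the condition-number bound that enters the Spielman-Teng analysis.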
One of the ingredients in their analysis was a quantitative bound on the condition number of an arbitrary matrix when it is perturbed by a random Gaussian perturbation; the point being that random perturbation can often make an ill-conditioned matrix better behaved. (This is perhaps analogous in some ways to the empirical experience that some pieces of machinery work better after being kicked.) Recently, new tools from additive combinatorics (in particular, inverse Littlewood-Offord theory) have enabled Rudelson-Vershynin, Vu and myself, and others to generalise this bound to other noise models, such as random Bernoulli perturbations, which are a simple model for digital roundoff error. — 2. Yves Meyer — Yves Meyer has worked in many fields over the years, from number theory to harmonic analysis to PDE to signal processing. As the Gauss prize is concerned with impact on fields outside of mathematics, Meyer’s major contributions to the theoretical foundations of wavelets, which are now a basic tool in signal processing, were undoubtedly a major consideration in awarding this prize. But I would like to focus here on another of Yves’ contributions, namely the Coifman-Meyer theory of paraproducts developed with Raphy Coifman, which is a cornerstone of the para-differential calculus that has turned out to be an indispensable tool in the modern theory of nonlinear PDE. Nonlinear differential equations, by definition, tend to involve a combination of differential operators and nonlinear operators. The simplest example of the latter is a pointwise product {uv} of two fields {u} and {v}. One is thus often led to study expressions such as {D(uv)} or {D^{-1}(uv)} for various differential operators {D}. For first order operators {D}, we can handle derivatives using the product rule (or Leibniz rule) from freshman calculus: \displaystyle D(uv) = (Du)v + u(Dv). We can then iterate this to handle higher order derivatives. 
For instance, we have \displaystyle D^2(uv) = (D^2 u) v + 2 (Du) (Dv) + u (D^2 v), \displaystyle D^3(uv) = (D^3 u) v + 3 (D^2u) (Dv) + 3 (Du) (D^2 v) + u (D^3 v), and so forth, assuming of course that all functions involved are sufficiently regular so that all expressions make sense. For inverse derivative expressions such as {D^{-1}(uv)}, no such simple formula exists, although one does have the very important integration by parts formula as a substitute. And for fractional derivatives such as {|D|^\alpha(uv)} with {\alpha > 0} a non-integer, there is also no closed formula of the above form. Note how the derivatives on the product {uv} get distributed to the individual factors {u,v}, with {u} absorbing all the derivatives in one term, {v} absorbing all the derivatives in another, and the derivatives being shared between {u} and {v} in other terms. Within the usual confines of differential calculus, we cannot pick and choose which of the terms we like to keep, and which ones to discard; we must treat every single one of the terms that arise from the Leibniz expansion. This can cause difficulty when trying to control the product of two functions of unequal regularity – a situation that occurs very frequently in nonlinear PDE. For instance, if {u} is {C^1} (once continuously differentiable), and {v} is {C^3} (three times continuously differentiable), then the product {uv} is merely {C^1} rather than {C^3}; intuitively, we cannot “prevent” the three derivatives in the expression {D^3(uv)} from making their way to the {u} factor, which is not prepared to absorb all of them. However, it turns out that if we split the product {uv} into paraproducts such as the high-low paraproduct {\pi_{hl}(u,v)} and the low-high paraproduct, we can effectively separate these terms from each other, allowing for a much more flexible analysis of the situation. The concept of a paraproduct can be motivated by using the Fourier transform. 
For simplicity let us work in one dimension, with {D} being the usual differential operator {D = \frac{d}{dx}}. Using a Fourier expansion (and assuming as much regularity and integrability as is needed to justify the formal manipulations) of {u} into components of different frequencies, we have \displaystyle u(x) = \int_{\bf R} \hat u(\xi) e^{i x \xi}\ d\xi (here we use the usual PDE normalisation in which we try to hide the {2\pi} factor) and similarly \displaystyle v(x) = \int_{\bf R} \hat v(\eta) e^{i x \eta}\ d\eta and thus \displaystyle uv(x) = \int_{\bf R} \int_{\bf R} \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta \ \ \ \ \ (1) and hence, by differentiating under the integral sign, \displaystyle D^k(uv)(x) = i^k \int_{\bf R} \int_{\bf R} (\xi+\eta)^k \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta. In contrast, we have \displaystyle (D^j u) (D^{k-j} v)(x) = i^k \int_{\bf R} \int_{\bf R} \xi^j \eta^{k-j} \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta; thus, the iterated Leibniz rule just becomes the binomial formula \displaystyle (\xi+\eta)^2 = \xi^2 + 2\xi\eta + \eta^2, \quad (\xi+\eta)^3 = \xi^3 + 3 \xi^2 \eta + 3 \xi \eta^2 + \eta^3, \ldots after taking Fourier transforms. Now, it is certainly true that when dealing with an expression such as {(\xi+\eta)^2}, all three terms {\xi^2, 2\xi\eta, \eta^2} need to be present. But observe that when dealing with a “high-low” frequency interaction, in which {\xi} is much larger in magnitude than {\eta}, then the first term dominates: {(\xi+\eta)^2 \sim \xi^2}. Conversely, with a “low-high” frequency interaction, in which {\eta} is much larger in magnitude than {\xi}, we have {(\xi+\eta)^2 \sim \eta^2}. (There are also “high-high” interactions, in which {\xi} and {\eta} are comparable in magnitude, and {(\xi+\eta)^2} can be significantly smaller than either {\xi^2} or {\eta^2}, but for simplicity of discussion let us ignore this case.) 
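The identification of the iterated Leibniz rule with the binomial formula on the Fourier side can be checked mechanically. The following toy snippet (my own illustration, not from the original discussion) expands {(\xi+\eta)^k} one factor at a time, which amounts to building Pascal's triangle row by row, and confirms that the resulting coefficients are exactly the binomial coefficients appearing in {D^k(uv)}:

```python
from math import comb

def leibniz_coefficients(k):
    """Coefficients c_j in D^k(uv) = sum_j c_j (D^j u)(D^{k-j} v),
    obtained by expanding (xi + eta)^k one factor at a time
    (i.e. repeated convolution with the row [1, 1])."""
    row = [1]
    for _ in range(k):
        row = [(row[j - 1] if j > 0 else 0) +
               (row[j] if j < len(row) else 0)
               for j in range(len(row) + 1)]
    return row

# sanity check against the closed form: c_j = binomial(k, j)
for k in range(6):
    assert leibniz_coefficients(k) == [comb(k, j) for j in range(k + 1)]

print(leibniz_coefficients(3))   # [1, 3, 3, 1]
```

The row for {k = 3} reproduces the coefficients {1, 3, 3, 1} of the expansion of {D^3(uv)} displayed above.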
It then becomes natural to try to decompose the product {uv} into “high-low” and “low-high” pieces (plus a “high-high” error), for instance by inserting into (1) suitable cutoff functions {m_{hl}(\xi,\eta)} or {m_{lh}(\xi,\eta)} adapted to the regions {|\xi| \gg |\eta|} or {|\xi| \ll |\eta|}, to create the paraproducts \displaystyle \pi_{hl}(u,v)(x) = \int_{\bf R} \int_{\bf R} m_{hl}(\xi,\eta) \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta \displaystyle \pi_{lh}(u,v)(x) = \int_{\bf R} \int_{\bf R} m_{lh}(\xi,\eta) \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta Such paraproducts were first introduced by Calderón, and more explicitly by Bony. Heuristically, {\pi_{hl}(u,v)} is the “high-low” portion of the product {uv}, in which the high frequency components of {u} are “allowed” to interact with the low frequency components of {v}, but no other frequency interactions are permitted, and similarly for {\pi_{lh}(u,v)}. The para-differential calculus of Bony, Coifman, and Meyer then allows one to manipulate these paraproducts in ways that are very similar to ordinary pointwise products, except that they behave better with respect to the Leibniz rule or with more exotic differential or integral operators. For instance, one has \displaystyle D^k \pi_{hl}(u,v) \approx \pi_{hl}(D^k u, v) \displaystyle D^k \pi_{lh}(u,v) \approx \pi_{lh}(u, D^k v) for differential operators {D^k} (and more generally for pseudodifferential operators such as {|D|^\alpha}, or integral operators such as {D^{-1}}), where we use the {\approx} symbol loosely to denote “up to lower order terms”. Furthermore, many of the basic estimates of the pointwise product, in particular Hölder’s inequality, have analogues for paraproducts; this is a special case of what is now known as the Coifman-Meyer theorem, which is fundamental in this subject, and is proven using Littlewood-Paley theory. The same theory in fact gives some estimates for paraproducts beyond what are available for products. 
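Here is a minimal discrete sketch of the decomposition (my own illustration, not the Coifman-Meyer construction: the smooth cutoffs {m_{hl}}, {m_{lh}} are replaced by sharp indicator functions, and the threshold {|\xi| > 2|\eta|} is an arbitrary choice). On a periodic grid the three frequency regions partition all pairs {(\xi,\eta)}, so the three pieces sum back to the pointwise product exactly:

```python
import cmath
import math

N = 16                                    # grid size (small demo)
xs = [2 * math.pi * n / N for n in range(N)]
freqs = list(range(-N // 2, N // 2))      # integer frequencies

def dft_coeff(f, xi):
    """Fourier coefficient hat{f}(xi) on the periodic grid."""
    return sum(f[n] * cmath.exp(-1j * xi * xs[n]) for n in range(N)) / N

def paraproduct(u, v, keep):
    """Sum of hat{u}(xi) hat{v}(eta) e^{i(xi+eta)x} over the
    frequency pairs (xi, eta) selected by the predicate `keep`."""
    uh = {xi: dft_coeff(u, xi) for xi in freqs}
    vh = {eta: dft_coeff(v, eta) for eta in freqs}
    out = []
    for x in xs:
        s = 0
        for xi in freqs:
            for eta in freqs:
                if keep(xi, eta):
                    s += uh[xi] * vh[eta] * cmath.exp(1j * (xi + eta) * x)
        out.append(s)
    return out

# two sample band-limited signals on the grid
u = [math.cos(5 * x) + 0.3 * math.sin(x) for x in xs]
v = [math.sin(2 * x) + 0.5 for x in xs]

hl = paraproduct(u, v, lambda xi, eta: abs(xi) > 2 * abs(eta))  # high-low
lh = paraproduct(u, v, lambda xi, eta: abs(eta) > 2 * abs(xi))  # low-high
hh = paraproduct(u, v, lambda xi, eta: not (abs(xi) > 2 * abs(eta)
                                            or abs(eta) > 2 * abs(xi)))

err = max(abs(hl[n] + lh[n] + hh[n] - u[n] * v[n]) for n in range(N))
print(err < 1e-9)   # True: the three pieces reconstruct the product
```

Of course, the analytic content of the theory lies not in this exact reconstruction but in the estimates satisfied by each piece separately.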
For instance, if {u} is in {C^1} and {v} is in {C^3}, then the paraproduct {\pi_{lh}(u,v)} is “almost” in {C^3} (modulo some technical logarithmic divergences which I will not elaborate on here), in contrast to the full product {uv} which is merely in {C^1}. Paraproducts also allow one to extend the classical product and chain rules to fractional derivative operators, leading to the fractional Leibniz rule \displaystyle |D|^\alpha(uv) \approx (|D|^\alpha u) v + u (|D|^\alpha v) and the fractional chain rule \displaystyle |D|^\alpha(F(u)) \approx (|D|^\alpha u) F'(u) which are both very useful in nonlinear PDE (see e.g. this book of Taylor for a thorough treatment). See also this brief Notices article on paraproducts by Benyi, Maldonado, and Naibo. — 3. Louis Nirenberg — Louis Nirenberg has made an amazing number of contributions to analysis, PDE, and geometry (e.g. John-Nirenberg inequality, Nirenberg-Treves conjecture (recently solved by Dencker), Newlander-Nirenberg theorem, Gagliardo-Nirenberg inequality, Caffarelli-Kohn-Nirenberg theorem, etc.), while also being one of the nicest people I know. I will mention only two results of his here, one of them very briefly. Among other things, Nirenberg and Kohn introduced the pseudo-differential calculus which, like the para-differential calculus mentioned in the previous section, is an extension of differential calculus, but this time focused more on generalisation to variable coefficient or fractional operators, rather than on generalising the Leibniz or chain rules. This calculus sits at the intersection of harmonic analysis, PDE, von Neumann algebras, microlocal analysis, and semiclassical physics, and also happens to be closely related to Meyer’s work on wavelets; it quantifies the positive aspects of the Heisenberg uncertainty principle, in that one can observe position and momentum simultaneously so long as the uncertainty relation is respected. But I will not discuss this topic further today.
Instead, I would like to focus here instead on a gem of an argument of Gidas, Ni, and Nirenberg, which is a brilliant application of Alexandrov’s method of moving planes, combined with the ubiquitous maximum principle. This concerns solutions to the ground state equation \displaystyle \Delta Q + Q^p = Q \ \ \ \ \ (2) where {Q: {\bf R}^n \rightarrow {\bf R}^+} is a smooth positive function that decays exponentially at infinity, {p > 1} is an exponent, and {\Delta := \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2}} is the Laplacian. This equation shows up in a number of contexts, including the nonlinear Schrödinger equation and also, by coincidence, in connection with the best constants in the Gagliardo-Nirenberg inequality. The existence of ground states {Q} can be proven by the variational principle. But one can say much more: Lemma 1 (Gidas-Ni-Nirenberg) All ground states {Q} are radially symmetric with respect to some origin. To show this radial symmetry, a small amount of Euclidean geometry shows that it is enough to show that there is a lot of reflection symmetry: Lemma 2 (Gidas-Ni-Nirenberg, again) If {Q} is a ground state and {\omega \in S^{n-1}} is a unit vector, then there exists a hyperplane orthogonal to {\omega} with respect to which {Q} is symmetric. To prove this lemma, we use the moving planes method, sliding in a plane orthogonal to {\omega} from infinity. More precisely, for each {t \in {\bf R}}, let {\Pi_t} be the hyperplane {\{ x: x \cdot \omega = t \}}, let {H_t} be the associated half-space {\{ x: x \cdot \omega \leq t\}}, and let {Q_t: H_t \rightarrow {\bf R}} be the function {Q_t(x) := Q(x) - Q(r_t(x))}, where \displaystyle r_t(x) := x + 2 (t - x \cdot \omega) \omega is the reflection through {\Pi_t}; thus {Q_t} is the difference between {Q} and its reflection in {\Pi_t}. In particular, {Q_t} vanishes on the boundary {\Pi_t} of the half-space {H_t}. Intuitively, the argument proceeds as follows. 
It is plausible that {Q_t} is going to be positive in the interior of {H_t} for large positive {t}, but negative in the interior of {H_t} for large negative {t}. Now imagine sliding {t} down from {+\infty} to {-\infty} until one reaches the first point {t = t_0} at which {Q_{t_0}} ceases to be positive in the interior of {H_{t_0}}; at this point {Q_{t_0}} attains its minimum value of zero somewhere in the interior of {H_{t_0}}. But by playing around with (2) (using the Lipschitz nature of the map {Q \mapsto Q^p} when {Q} is bounded) we know that {Q_{t_0}} obeys an elliptic constraint of the form {\Delta Q_{t_0} = O( |Q_{t_0}| )}. Applying the maximum principle, we can then conclude that {Q_{t_0}} vanishes identically in {H_{t_0}}, which gives the desired reflection symmetry. (Now it turns out that there are some technical issues in making the above sketch precise, mainly because of the non-compact nature of the half-space {H_t}, but these can be fixed with a little bit of fiddling; see for instance Appendix B of my PDE textbook.)
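(As a concrete supplement of mine to the discussion above: in one dimension the ground state equation {Q'' + Q^p = Q} has, for {p = 3}, the explicit solution {Q(x) = \sqrt{2}\,\mathrm{sech}(x)}, the cubic NLS soliton, which is indeed symmetric about an origin, as Lemma 1 predicts. A quick finite-difference check:)

```python
import numpy as np

# The p = 3 soliton Q(x) = sqrt(2) sech(x) solves Q'' + Q^3 = Q in 1D.
x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
Q = np.sqrt(2.0) / np.cosh(x)

# Second-order central difference approximation of Q'':
Qxx = (Q[2:] - 2.0 * Q[1:-1] + Q[:-2]) / h**2
residual = Qxx + Q[1:-1] ** 3 - Q[1:-1]
print(np.max(np.abs(residual)))  # O(h^2), small

# Reflection symmetry about the origin:
print(np.max(np.abs(Q - Q[::-1])))  # zero up to rounding
```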
Conference - Stochastic differential geometry and mathematical physics Marc Arnaudon (Université de Bordeaux) Title: Stochastic mean curvature flow and intertwined Brownian motion Abstract: The evolution of a set by deformation of its boundary along mean curvature flow is involved in many physical phenomena. We are interested here in this evolution, to which we add a noise that acts uniformly on its boundary, and a renormalization term. It is shown that this evolution can be coupled to a Brownian motion which remains within the set, and which at all times is uniformly distributed inside the set. This is the phenomenon of duality and intertwining established by Diaconis and Fill in the context of Markov chains in finite state spaces. Different couplings are proposed, some of which involve the local time of the Brownian motion, either on the skeleton of the set or on its boundary. These couplings also differ by more or less strong correlation between the Brownian motion inside and the vibrating boundary. When the set is a symmetric real interval, the boundary evolves as a Bessel process of dimension 3, and we recover the 2M-X Pitman theorem as a special case, with one particular coupling. This is joint work with Koléhè Coulibaly (Institut Elie Cartan de Lorraine (IECL), Nancy) and Laurent Miclo (Institut de Mathématiques de Toulouse (IMT), Toulouse). Davide Barilari (Università degli Studi di Padova) Title: Surfaces in 3D contact sub-Riemannian manifolds: geometry and stochastic processes Abstract: We discuss differential geometry and stochastic processes on surfaces in three-dimensional contact sub-Riemannian manifolds. By considering the Riemannian approximations to the sub-Riemannian manifold, we obtain a second order partial differential operator on the surface arising as the limit of Laplace-Beltrami operators. The stochastic process associated with the limiting operator moves along the characteristic foliation induced on the surface by the contact distribution.
We show that for this stochastic process elliptic characteristic points are inaccessible, while hyperbolic characteristic points are accessible from the separatrices, and we illustrate the results with some examples. [Joint work with Ugo Boscain, Daniele Cannarsa and Karen Habermann.] Fabrice Baudoin (University of Connecticut) Title: Asymptotic windings of the block determinants of a unitary Brownian motion and related diffusions Abstract: We study several matrix diffusion processes constructed from a unitary Brownian motion. In particular, we use the Stiefel fibration to lift the Brownian motion of the complex Grassmannian to the complex Stiefel manifold and deduce a skew-product decomposition of the Stiefel Brownian motion. As an application, we prove asymptotic laws for the determinants of the block entries of the unitary Brownian motion. This is joint work with Jing Wang (Purdue University). Robert Baumgarth (Universität Leipzig) Title: Scattering Theory for the Hodge Laplacian Abstract: We prove, using an integral criterion, the existence and completeness of the wave operators $W_{\pm}(\Delta_h^{(p)}, \Delta_g^{(p)}, I_{g,h}^{(p)})$ corresponding to the Hodge-Laplacians $\Delta_\nu^{(p)}$ acting on differential $p$-forms, for $\nu\in\{g,h\}$, induced by two quasi-isometric Riemannian metrics $g$ and $h$ on a complete open smooth manifold $M$. In particular, this result provides a criterion for the absolutely continuous spectra $\sigma_{\mathrm{ac}}(\Delta_g^{(p)}) = \sigma_{\mathrm{ac}}(\Delta_h^{(p)})$ of $\Delta_\nu^{(p)}$ to coincide. The proof is based on gradient estimates obtained by probabilistic Bismut-type formulae for the heat semigroup defined by spectral calculus. By these localised formulae, the integral criterion only requires local curvature bounds and some upper local control on the heat kernel acting on functions, but no control on the injectivity radii.
A consequence is a stability result of the absolutely continuous spectrum under a Ricci flow. As an application we concentrate on the important case of conformal perturbations. Ugo Boscain (Ecole Polytechnique) Title: Geometric confinement of the curvature Laplacian $-\Delta+c K$ on $2D-$almost-Riemannian manifolds Abstract: Two-dimensional almost-Riemannian structures of step 2 are natural generalizations of the Grushin plane. They are generalized Riemannian structures for which the vectors of a local orthonormal frame can become parallel. Under the 2-step assumption the singular set $Z$, where the structure is not Riemannian, is a $1D$ embedded submanifold. As one approaches the singular set, all Riemannian quantities diverge. A remarkable property of these structures is that the geodesics can cross the singular set without singularities, but the heat and the solution of the Schrödinger equation (with the Laplace-Beltrami operator $\Delta$) cannot. This is due to the fact that (under a natural compactness hypothesis) the Laplace-Beltrami operator is essentially self-adjoint on a connected component of the manifold without the singular set. In the literature such a phenomenon is called geometric confinement. In this paper we study the self-adjointness of the curvature Laplacian, namely $-\Delta+c K$, for $c>0$ (here $K$ is the Gaussian curvature), which originates in coordinate free quantization procedures (as for instance in path-integral or covariant Weyl quantization). We prove that there is no geometric confinement for these types of operators.
This partly extends a recent construction done in the less singular 2D setting. Based on joint work in progress with Ajay Chandra, Martin Hairer, and Hao Shen. Ana-Bela Cruzeiro (Universidade de Lisboa) Title: A stochastic view on the deterministic Navier-Stokes equation Abstract: We review some recent results on a stochastic approach to the deterministic Navier-Stokes equation that generalizes Arnold's characterization of the Euler equation in fluid dynamics. Shizan Fang (Université de Bourgogne) Title: Ikeda-Watanabe's connection and Navier-Stokes equations. Abstract: To each velocity, we associate an affine connection introduced by N. Ikeda and S. Watanabe when they rolled a flat Brownian motion on manifolds. We will compute the associated intrinsic Ricci tensor in the sense of B. Driver, which will allow us to express the vorticity form of the Navier-Stokes equations: this can be seen as a nonlinear version of the Bochner-Weitzenböck formula. Franck Gabriel (Ecole Polytechnique Fédérale de Lausanne) Title: Two-dimensional planar Yang-Mills measure as planar Markovian holonomy fields Abstract: Yang-Mills theories are gauge theories with a non-abelian symmetry group. On the plane, considering both its gauge symmetry property and the underlying action, it can be seen that the Yang-Mills measure should satisfy a generalization of the defining properties of Lévy processes: it should be described by a planar Markovian holonomy field. Planar Markovian holonomy fields, a class of path-indexed stochastic processes, can be fully described: using both the algebraic and geometric perspectives on the braid group, we describe a correspondence between these fields and Lévy processes on compact groups. In the case where the symmetry group is the permutation group, planar Markovian holonomy fields can also be obtained geometrically from random branched covering models. When the number of sheets of the covering goes to infinity, Wilson’s observables can be easily computed.
Maria Gordina (University of Connecticut) Title: Ergodicity for Langevin dynamics with singular potentials Abstract: We discuss Langevin dynamics of $N$ particles on $\mathbb R^d$ interacting via a singular repulsive potential, such as the Lennard-Jones potential, and show that the system converges to the unique invariant Gibbs measure exponentially fast in a weighted Sobolev norm. The proof relies on an explicit construction of a Lyapunov function using a modified Gamma calculus. In contrast to previous results for such systems, our results imply geometric convergence to equilibrium starting from an essentially optimal family of initial distributions. This is based on joint work with F. Baudoin and D. Herzog, as well as a recent preprint with E. Camrud, D. Herzog and G. Stoltz. Erlend Grong (University of Bergen) Title: Path space on sub-Riemannian manifolds Abstract: We discuss how we can generalize the concept of Malliavin Calculus to the setting of sub-Riemannian manifolds. We discuss how concepts such as the Cameron-Martin space and the gradient and damped gradient of functions on path space can be understood in this setting. From there, we discuss how we can obtain functional inequalities related to both lower and upper bounds for Ricci curvature. These results are from joint work with Li-Juan Cheng and Anton Thalmaier. Batu Güneysu (Universität Potsdam) Title: Feynman-Kac formula for first order perturbations of covariant Laplace-type operators. Abstract: It is a classical fact that one can represent the heat semigroup of a Schrödinger operator (that is, a perturbation of the Laplacian by a real-valued potential) as a path integral in terms of Brownian motion: this is the celebrated Feynman-Kac formula. The aim of this talk is to explain a non-selfadjoint variant of this result, where one allows arbitrary first order perturbations of a Bochner-Laplacian that acts on sections of a vector bundle over an arbitrary noncompact Riemannian manifold.
In particular, one replaces self-adjoint heat semigroups by holomorphic semigroups. As an application to differential geometry, we obtain an explicit path integral formula for the first degree part of the differential graded Chern character of an even dimensional Riemannian spin manifold, which plays a crucial role in the context of the Duistermaat-Heckman localization formula on loopspace. This is joint work with Sebastian Boldt (Leipzig). Karen Habermann (University of Warwick) Title: Fluctuations for Brownian bridge expansions and convergence rates of Lévy area approximations Abstract: We start by deriving a polynomial expansion for Brownian motion expressed in terms of shifted Legendre polynomials by considering Brownian motion conditioned to have vanishing iterated time integrals of all orders. We further discuss the fluctuations for this expansion and show that they converge in finite dimensional distributions to a collection of independent zero-mean Gaussian random variables whose variances follow a scaled semicircle. We then link the asymptotic convergence rates of approximations for Brownian Lévy area which are based on the Fourier series expansion and the polynomial expansion of the Brownian bridge to these limit fluctuations. We close with the observation that the Lévy area approximation resulting from the polynomial expansion of the Brownian bridge is more accurate than the Kloeden-Platen-Wright approximation, whilst still only using independent normal random vectors. Martin Hairer (Imperial College London) Title: SPDEs with values in manifolds Abstract: We consider space-time white noise driven SPDEs in one spatial dimension with values in a manifold. Heuristic considerations suggest that the second-order calculus appearing when considering SDEs should then be replaced by a fourth-order calculus. 
We will see that while this is indeed the case, it turns out that these equations admit a distinguished notion of "solution" which actually has nicer properties than in the case of SDEs! Makoto Katori (Chuo University) Title: Zeros of the i.i.d. Gaussian Laurent series on an annulus Abstract: On an annulus ${\mathbb{A}}_q :=\{z \in {\mathbb{C}}: q < |z| < 1\}$ with a fixed $q \in (0, 1)$, we study a Gaussian analytic function (GAF) defined by the i.i.d. Gaussian Laurent series. The covariance kernel of the GAF is given by the weighted Szegő kernel of ${\mathbb{A}}_q$ with the weight parameter $r$ studied by McCullough and Shen. Conditioning the GAF by giving zeros, new GAFs are induced, such that the covariance kernels are also given by the weighted Szegő kernel of McCullough and Shen, but with the weight parameter $r$ changed depending on the given zeros. We prove that the zero set of the GAF provides a permanental-determinantal point process (PDPP) in which each correlation function is expressed by a permanent multiplied by a determinant. If we take the limit $q \to 0$, a simpler but still non-trivial PDPP is obtained on a punctured unit disk ${\mathbb{D}}^{\times} := {\mathbb{D}} \setminus \{0\}$. In the further limit $r \to 0$ the present PDPP is reduced to the determinantal point process on ${\mathbb{D}}$ studied by Peres and Virág. The present talk is based on joint work with Tomoyuki Shirai. Christian Léonard (Université Paris Nanterre) Title: Entropic optimal transport, time-reversal and (usual) optimal transport Abstract: Felix Otto discovered twenty years ago that quadratic optimal transport on a Riemannian manifold $M$ generates the so-called Wasserstein geometry on the space $\mathcal{P}(M)$ of probability measures on $M$. The basic ingredients of this geometry are the McCann displacement interpolations, which result from a lift of geodesics of $M$ onto $\mathcal{P}(M)$.
Replacing the geodesics of $M$ by Brownian bridges leads to a natural notion of interpolations on $\mathcal{P}(M)$, the so-called entropic interpolations. It is known that entropic interpolations converge to displacement interpolations as the temperature of the Brownian bridges decreases to zero. Not surprisingly, time reversal of some stochastic processes, the Schrödinger bridges, whose marginals are entropic interpolations leads to a precise evaluation of the energetic gap between entropic and displacement interpolations. Some  well-established and heuristic consequences of time-reversing Schrödinger bridges will be presented. Thierry Lévy (Sorbonne Université) Title: Matrix-tree theorems and determinantal linear processes Abstract: The matrix-tree theorem states that the product of the non-zero eigenvalues of the Laplacian on a connected graph is equal to the number of rooted spanning trees of this graph. This theorem can be traced back to the middle of the XIXth century, to papers of Kirchhoff (1847) and Sylvester (1857), both devoted to the resolution of certain systems of linear equations. In the situation where the graph is endowed with a Hermitian fibre bundle and a unitary connection, the matrix-tree theorem was generalised by Forman (for a bundle of rank 1) and Kenyon (for a bundle of rank 2 and a SU(2)-connection), to the effect that the determinant of the covariant Laplacian counts special subgraphs of the graph, namely cycle-rooted spanning forests. On the other hand, from a probabilistic perspective, the uniform spanning tree and uniform cycle-rooted spanning forest fall into the intensely studied class of determinantal point processes. With Adrien Kassel (CNRS, ENS Lyon), we have been trying in the last few years to understand from a combinatorial and probabilistic perspective the covariant Laplacian on a graph endowed with a Hermitian bundle of arbitrary rank and an arbitrary unitary connection. 
This led us in particular to define a new class of probability measures on Grassmannian manifolds, which generalises determinantal point processes and which we call determinantal linear processes. In this talk, I will describe this class of measures and some of their properties, and how they relate to the covariant Laplacian. Xue-Mei Li (Imperial College London) Title: Hessian Estimates. Abstract: We will discuss Hessian estimates for solutions of equations associated with Markov generators. Jacek Malecki (Politechnika Wrocławska) Title: Archimedes' principle for ideal gas Abstract: We prove Archimedes’ principle for a macroscopic ball in an ideal gas consisting of point particles with non-zero mass. The main result is an asymptotic theorem, as the number of point particles goes to infinity and their total mass remains constant. We also show that, asymptotically, the gas has an exponential density as a function of height. We find the asymptotic inverse temperature of the gas. We derive an accurate estimate of the volume of the phase space using the local central limit theorem. The talk is based on joint work with Krzysztof Burdzy. Tai Melcher (University of Virginia) Title: Regularity properties of some infinite-dimensional hypoelliptic diffusions Abstract: Smoothness properties are of classical interest in the study of measures in infinite dimensions. These properties have been a particular focus for the end point distributions of diffusions, that is, heat kernel measures. In finite dimensions, hypoellipticity of the generator is a standard hypothesis to assume for (or is in some sense equivalent to) regularity properties of the heat kernel measure. We’ll review progress on generalizations of some hypoelliptic constructions in infinite dimensions, including recent joint work with Fabrice Baudoin and Masha Gordina on infinite-dimensional generalizations of Kolmogorov diffusions. Gerard Misiolek (Notre-Dame University) Title: Geometry and Fluids.
Abstract: Hydrodynamics of ideal fluids is an example of an infinite dimensional Riemannian geometry where solutions of the incompressible Euler equations correspond to geodesics in the group of volume preserving diffeomorphisms equipped with a right-invariant metric defined by the fluid’s kinetic energy. This beautiful observation was made by V. Arnold in a pioneering paper published in 1966 in Annales de l'Institut Fourier. It opened the way for the introduction of geometric, topological and Lie theoretic methods to the study of fluid dynamics, and the field has remained very active ever since. I will explain the basic Riemannian constructions of hydrodynamics and describe some of the results that have been obtained in recent years. Robert Neel (Lehigh University) Title: Heat kernels, their derivatives, and the bridge process in small time. Abstract: Consider a sub-Riemannian manifold with a sub-Laplacian and associated heat kernel and diffusion, and fix two points such that all minimal geodesics between them are strongly normal. We show that the Molchanov method for computing the small-time asymptotics of the heat kernel and its derivatives can be rigorously applied and is capable of giving expansions to all orders. Further, we show that there is a family of probability measures on pathspace that, as time goes to 0, converges to a probability measure on the set of minimal geodesics that gives the law of large numbers for the associated bridge processes. This limiting measure can be determined in various cases, essentially, again, via Molchanov's method. This extends earlier work of Hsu and Bailleul-Norris. Moreover, we see that logarithmic derivatives of the heat kernel, to any order, have small-time asymptotic behavior given by the cumulant of geometrically natural random variables with respect to this same limiting measure. The method is fundamentally local and is valid even on incomplete manifolds under appropriate conditions on the distance to infinity.
This talk is based on joint work with Ludovic Sacchelli. Pierre Perruchaud (Notre-Dame University) Title: The search for an infinite-dimensional Cartan development Abstract: The description of Brownian motion on manifolds was greatly simplified by Eells, Elworthy and Malliavin as they introduced their now classical construction, using insight from differential geometry. In particular, the so-called Cartan development plays a central role, allowing one to transpose not only Brownian motion, but for instance any semimartingale, from the Euclidean to the manifold setting. Motivated by applications to stochastic fluids, I will present a possible definition for the Cartan development in a class of infinite dimensional manifolds of diffeomorphisms, and describe how topology, geometry and probability tend to work against each other in this context. This will lead us to discuss the elusive orthonormal frame bundle, which seems to be an important missing piece of the puzzle. Gregory Schehr (Sorbonne Université) Title: Non-intersecting Brownian bridges in the flat-to-flat geometry Abstract: In this talk, I will discuss N non-intersecting Brownian bridges propagating from an initial configuration $\{a_1 < a_2 < \ldots< a_N \}$ at time $t=0$ to a final configuration $\{b_1 < b_2 < \ldots< b_N \}$. I will first show that this problem can be mapped to non-intersecting Dyson Brownian bridges with Dyson index $\beta=2$. For the latter I will derive an exact effective Langevin equation that allows one to generate the non-intersecting bridge configurations very efficiently. In particular, for the flat-to-flat configuration in the large $N$ limit, where $a_i = b_i = (i-1)/N$, for $i = 1, \cdots, N$, I will use this effective Langevin equation to derive an exact Burgers' equation (in the inviscid limit) for the Green's function and solve this Burgers' equation for arbitrary time $0 \leq t \leq t_f$.
Finally, I will discuss connections to some well-known problems, such as the Chern-Simons model, the related Stieltjes-Wigert orthogonal polynomials and the Borodin-Muttalib ensemble of determinantal point processes. Alexander Schmeding (Nord Universitet) Title: Stochastic PDE from hydrodynamics via infinite-dimensional geometry Abstract: The motion of a rigid body can be described as a differential equation on the configuration space of the system, which turns out to be a finite-dimensional Lie group. Building on this observation, Arnold postulated that the Euler equation governing an incompressible fluid in a domain can be described as a differential equation on its configuration space, the group of volume preserving diffeomorphisms. Indeed, as Ebin and Marsden showed in 1970, one can reformulate this partial differential equation as an ordinary differential equation, at the price of switching to an infinite dimensional manifold. Using geometric techniques, local well-posedness of the Euler equation can then be established. We were recently able to apply this circle of ideas to stochastic versions of the Euler equation (joint with M. Maurelli (Milano) and K. Modin (Chalmers, Gothenburg)). In the talk I will give an introduction to these topics together with an overview of these new developments. Note that the talk does not presuppose familiarity with infinite-dimensional manifolds and assumes only mild familiarity with stochastic analysis. Armen Shirikyan (CY Cergy Paris Université) Title: Large deviations and entropy production in viscous fluid flows Abstract: We first review a general framework of the theory of entropic fluctuations in deterministic and stochastic systems. Several questions of interest will be formulated, together with the difficulties encountered when dealing with randomly forced PDEs. We next consider the 2D Navier-Stokes system coupled with a Lagrangian particle.
In this context, we discuss a criterion for the validity of the large deviation principle and show that it can be reduced to the verification of some controllability properties of the underlying deterministic problem. We next turn to a study of the large-time asymptotics of the particle and discuss the question of entropy production for it. The talk is based on joint works with V. Jakšić, V. Nersesyan, and C.-A. Pillet. Stefan Suhr (Ruhr-Universität Bochum) Title: Recent Developments in Optimal Transport and Lorentzian Geometry Abstract: The stellar success of optimal transport theory in Riemannian geometry over the last fifteen years invites one to consider similar questions in Lorentzian geometry, especially with a view towards general relativity and mathematical physics. In my talk I will introduce the basic ideas of optimal transport for (globally hyperbolic) spacetimes and discuss a few first results. The guiding beacon of my considerations is the characterization of Ricci curvature (and with it the Einstein field equations) via optimal transport theory. Anton Thalmaier (Université du Luxembourg) Title: Gradient formulas on manifolds - some new perspectives Abstract: We recall various first and second order derivative formulas for heat semigroups on manifolds and describe geometric applications related to Calderón-Zygmund type inequalities, as well as new versions of log-Sobolev and transportation inequalities connecting relative entropy, Stein discrepancy and relative Fisher information on Riemannian manifolds. Emmanuel Trélat (Sorbonne Université) Title: Spectral analysis of sub-Riemannian Laplacians and Weyl measure Abstract: In collaboration with Yves Colin de Verdière and Luc Hillairet, we study spectral properties of sub-Riemannian Laplacians, which are self-adjoint hypoelliptic operators satisfying the Hörmander condition.
Thanks to the knowledge of the small-time asymptotics of heat kernels in a neighborhood of the diagonal, we establish the local and microlocal Weyl law. When the Lie bracket configuration is regular enough (equiregular case), the Weyl law resembles that of the Riemannian case. But in the singular case (e.g., Baouendi-Grushin, Martinet) the Weyl law reveals much more complexity. In turn, we derive quantum ergodicity properties in some sub-Riemannian cases. François-Xavier Vialard (Université Gustave Eiffel) Title: Hdiv generalized minimizing geodesics Abstract: In this talk we show how to extend Brenier's approach of generalized geodesics for the incompressible Euler equation to the setting of compressible fluids for a generalization of the Camassa-Holm equation. We aim at understanding geodesics on a group of diffeomorphisms endowed with the right invariant metric Hdiv, which is the L2 norm of the vector field plus the L2 norm of its divergence. To introduce this work, we first present unbalanced optimal transport and its link with the generalized Camassa-Holm equation. Then, we propose a simple convex relaxation of the associated minimization problem on the path space on a cone manifold with moment constraints. We show that the relaxation is tight in some cases of interest and conclude with open questions. Jean-Claude Zambrini (Universidade de Lisboa) Title: Schrödinger's problem and space-time optimal control Abstract: Schrödinger's problem was, initially, formulated as a probabilistic boundary value problem on a finite, fixed time interval. We shall describe its generalization to a space-time domain whose time interval is also random, and explain its relations with the original motivation of Schrödinger, the foundations of quantum mechanics.
Author: M. F. El-Sayed, D. K. Callebaut
Title: Nonlinear Electrohydrodynamic Stability of Two Superposed Bounded Fluids in the Presence of Interfacial Surface Charges
Abstract: The method of multiple scales is used to analyse the nonlinear propagation of waves on the interface between two superposed dielectric fluids with uniform depths in the presence of a normal electric field, taking into account the interfacial surface charges. The evolution of the amplitude for travelling waves is governed by a nonlinear Schrödinger equation which gives the criterion for modulational instability. Numerical results are given in graphical form, and some limiting cases are recovered. In the pure hydrodynamical case, three cases are considered, depending on whether the depth of the lower fluid is equal to, greater than, or smaller than that of the upper fluid, and the effect of the electric field on the stability regions is determined. It is found that the effect of the electric field is the same in all the cases for small values of the field, and there is a value of the electric field after which the effect differs from case to case. It is also found that the effect of the electric field is stronger in the case where the depth of the lower fluid is larger than that of the upper fluid. On the other hand, the evolution of the amplitude for standing waves near the cut-off wavenumber is governed by another type of nonlinear Schrödinger equation with the roles of time and space interchanged. This equation makes it possible to determine the nonlinear dispersion relation, and the nonlinear effect on the cut-off wavenumber.
Reference: Z. Naturforsch.
53a, 217—232 (1998); received January 23 1998    Published    1998    Keywords    Hydrodynamic Stability, Electrohydrodynamics, Nonlinearity, Interfacial Instability, Dielectric Fluids, Surface Charges    Similar Items    Find  TEI-XML for    default:Reihe_A/53/ZNA-1998-53a-0217.pdf   Identifier    ZNA-1998-53a-0217   Volume    53
When trying to solve the Schrödinger equation for hydrogen, one usually splits up the wave function into two parts: $$\psi(r,\phi,\theta)= R(r)Y_{l,m}(\phi,\theta).$$ I understand that the radial part usually has a singularity for the 1s state at $r=0$ and this is why you remove it by writing: $$R(r) = \frac{u(r)}{r}$$ But what is the physical meaning of $$R(r=0) = \infty~?$$ Wouldn't this mean that the electron cloud is only at the centre of the atomic nucleus?

5 Answers

The infinitesimal probability for the electron to be in the volume $dV$ around a point $(r,\theta,\phi)\leftrightarrow (x,y,z)$ is given by $$ dP = dV\cdot |\psi(x,y,z)|^2 = dV\cdot |R(r)|^2\cdot |Y_{lm}(\theta,\phi)|^2 =\dots$$ as you can see if you substitute your Ansatz for the wave function. However, the infinitesimal volume $dV=dx\cdot dy\cdot dz$ may be rewritten in terms of differentials of the spherical coordinates as $$ dV = dr\cdot r^2 \cdot d\Omega = dr\cdot r^2 \cdot \sin\theta\cdot d\theta\cdot d\phi $$ where the small solid angle $d\Omega$ was rewritten in terms of the spherical coordinates. You see that for dimensional reasons (or because the surface of a sphere scales like $r^2$), there is an extra factor of $r^2$ in $dV$ and therefore also in $dP$ which suppresses the probability. There is simply not enough volume for small values of $r$. So $|R(r)|^2$ may still go like $1/r^2$ for small $r$, and in that case $dP$ will be proportional to $dr$ times a function that is finite for $r\to 0$. Such $dP$ may be integrated and there's no divergence at all near $r=0$. That's why one should allow the wave function to go like $1/r$ near $r=0$, which is the true counterpart of a one-dimensional wave function's being finite near a point.
However, Nature doesn't use this particular loophole because the wave function $\psi$ for small $r$ actually scales like $r^l$ where $l$ is the orbital quantum number, and the wave function actually never diverges even though it could.

The physical observable is not the wavefunction, but its integral over a finite region. In spherical coordinates, this is: $P({\vec x})=\int dr\, d\theta\, d\phi\, r^{2}\sin\theta\, \psi^{*}\psi$ This integrand is manifestly finite at $r=0$, even if $R(r)$ has a $\frac{1}{r}$ divergence.

Dear @Jerry, you were a minute faster but shorter ;-). I think that $\sin^2\theta$ should be just $\sin\theta$. – Luboš Motl Apr 26 '12 at 16:32
Indeed! I'm so used to writing the metric that I forgot the square root. – Jerry Schirmer Apr 26 '12 at 17:16
Good way to phrase the priorities. ;-) – Luboš Motl Apr 26 '12 at 18:09

For a hydrogen-like atom in 3 spatial dimensions, the rewriting of the radial part is not performed to keep the $u(r)$ part regular, as OP suggests, but usually because the 3D radial equation in terms of the $u$ function has the same form as a 1D Schrödinger equation. Imagine that the radial wave function goes as a power $$R(r) ~\sim ~ r^{p} \qquad {\rm for} \qquad r~\to~ 0, \qquad p~\in~\mathbb{R}.$$ On general grounds, one can impose the following list of consistency conditions, listed with the weakest condition first and the strongest condition last. 1. Normalizability of the wave function $$\infty~>~\langle\psi|\psi\rangle~=~\int d^3r~|\psi(\vec{r})|^2 ~\propto~ \int_0^{\infty} r^{2}dr~|R(r)|^2 .$$ Integrability at $r=0$ yields that the power $p>-\frac{3}{2}$. In other words, this normalizability condition does not by itself imply that $R(r)$ or $u(r)$ should be regular at $r=0$, which is also the conclusion of many of the other answers. 2.
The expectation value of the potential energy $V$ should be bounded from below, $$-\infty~<~\langle\psi| V|\psi\rangle~=~\int d^3r~V(r)|\psi(\vec{r})|^2~\propto~-\int_0^{\infty} rdr~|R(r)|^2. $$ Integrability at $r=0$ yields that the power $p>-1$. In other words, $u(r)$ should be regular for $r\to 0$. 3. The kinetic energy operator (or equivalently, the Laplacian $\Delta$) should behave self-adjointly for two wave functions $\psi_1(\vec{r})$ and $\psi_2(\vec{r})$, $$\langle\psi_1| \Delta\psi_2\rangle~=~-\langle\vec{\nabla}\psi_1| \cdot\vec{\nabla}\psi_2\rangle,$$ without picking up pathological contributions at $r=0$. A detailed analysis shows that the powers of the radial parts of $\psi_1(\vec{r})$ and $\psi_2(\vec{r})$ should satisfy $p>-\frac{1}{2}$. In comparison, the actual bound state solutions have non-negative $p=\ell\in \mathbb{N}_0$, and therefore satisfy these three conditions.

In addition to the purely geometric constraints that Jerry and Lubos talk about, the derivation used to illustrate the problem almost always assumes that the proton is a point particle, which is a pretty good approximation but not strictly true. Working the problem again with a realistic proton charge density function (roughly constant inside a radius of about 1 fm) would be another way to remove the singularity. Mind you, this argument does not hold true for positronium, so you still need the geometric constraint.

Re:positronium: wouldn't sub-Compton-wavelength renormalization of the Coulomb law soften the singularity? – Slaviks Apr 26 '12 at 19:28
@Slaviks: I'm a little on thin ice here, but I think that renormalization does solve the problem, but that's in the context of QFT, while this question seems to be phrased in the language of introductory QM.
– dmckee Apr 26 '12 at 19:53
Sure, I was just entertaining the concept :) There is no singularity in the w.f.; worrying about the radial part is just staring at a singularity of the coordinate system, imho. – Slaviks Apr 26 '12 at 19:56

For Hydrogen, $R(r)$ does not diverge, as $u(r)$ vanishes as fast as (or faster than) $r$ as $r\rightarrow 0$. In fact, it's only for the $s$ orbitals that the wavefunction is non-zero at $r=0$. But as pointed out before, a non-zero radial wavefunction does not mean a non-zero probability of finding the electron at the center.
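The point running through these answers, namely that the $r^2$ volume factor tames the radial function near $r=0$, can be checked numerically for the hydrogen 1s state (a sketch using the standard textbook form $R_{10}(r) = 2a^{-3/2}e^{-r/a}$ in atomic units; the grid parameters are arbitrary choices):

```python
from math import exp

a = 1.0  # Bohr radius in atomic units

def R10(r):
    """Hydrogen 1s radial function, R_10(r) = 2 a^(-3/2) exp(-r/a)."""
    return 2.0 * a**-1.5 * exp(-r / a)

def radial_density(r):
    # The probability of finding the electron between r and r + dr is
    # r^2 |R|^2 dr: the r^2 Jacobian is what suppresses small radii.
    return r * r * R10(r) ** 2

# Simple trapezoidal integration of r^2 |R|^2 over [0, 40a]
n, rmax = 200_000, 40.0
h = rmax / n
total = 0.5 * (radial_density(0.0) + radial_density(rmax))
total += sum(radial_density(i * h) for i in range(1, n))
total *= h

print(round(total, 6))      # 1.0 -- the state is normalized
print(radial_density(0.0))  # 0.0 -- the r^2 factor kills the origin
```

Even a radial part diverging like $1/r$ would leave $r^2|R|^2$ finite at the origin, which is exactly the loophole discussed above.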
Why Research Should Continue
Published: September 24, 2012

The development of MRI imaging technology is one useful spinoff of basic research into the structure of the atom. Research, however, is expensive. Many people argue that the high cost of research outweighs its potential benefits. Provide one argument for and one argument against increasing current funding for atomic-structure research. Use specific examples from this lesson in your answer to support each position.

One argument against continuing research on the atom is that the cost continues to increase at an alarming rate, while the benefits seem to be declining. Whereas quantum chemistry experiments as early as 80 years ago were very cheap and yielded considerable practical results, now very few if any practical results are emerging, but the costs of experiments continue to increase. For example, the Large Hadron Collider (LHC) was recently constructed for the sole purpose of studying the behavior of atoms under extreme conditions. The final cost of the LHC was somewhere around $9 billion, and it has yielded no practical results, nor promises to do so in the future. Contrast this to 80 years ago, when the field of quantum chemistry was just emerging. The results produced during the early and mid-1900s, such as the Schrödinger equation and the Heisenberg uncertainty principle, enable virtually everything chemistry-related today. The fields of pharmaceutical chemistry and polymer science (plastics especially) would still be in their infancy if not for the work on quantum mechanics during this time. These major accomplishments were largely based on complex derivations and rarely involved experiments that cost a significant sum.

A common argument for the continued research in atomic and subatomic structure is the desire for a greater understanding of the universe.
As we study the interactions of very small particles, we can gain a better understanding of how they work together to ultimately create the world we live in. There is also a practical goal of understanding subatomic...
Psychology Wiki
Quantum field theory

Quantum field theory (QFT) is the quantum theory of fields. It provides a theoretical framework, widely used in particle physics and condensed matter physics, in which to formulate consistent quantum theories of many-particle systems, especially in situations where particles may be created and destroyed. Non-relativistic quantum field theories are needed in condensed matter physics— for example in the BCS theory of superconductivity. Relativistic quantum field theories are indispensable in particle physics (see the standard model), although they are known to arise as effective field theories in condensed matter physics. Quantum field theory originated in the problem of computing the energy radiated by an atom when it dropped from one quantum state to another of lower energy. This problem was first examined by Max Born and Pascual Jordan in 1925. In 1926, Max Born, Werner Heisenberg and Pascual Jordan wrote down the quantum theory of the electromagnetic field neglecting polarization and sources to obtain what would today be called a free field theory. In order to quantize this theory, they used the canonical quantization procedure. In 1927, Paul Dirac gave the first consistent treatment of this problem. Quantum field theory followed unavoidably from a quantum treatment of the only known classical field, viz. the electromagnetic field. The theory was required by the need to treat a situation where the number of particles changes. Here, one atom in the initial state becomes an atom and a photon in the final state. It was obvious from the beginning that the quantum treatment of the electromagnetic field required a proper treatment of relativity. Jordan and Wolfgang Pauli showed in 1928 that commutators of the field were actually Lorentz invariant.
By 1933, Niels Bohr and Leon Rosenfeld had related these commutation relations to a limitation on the ability to measure fields at space-like separation. The development of the Dirac equation and the hole theory drove quantum field theory to explain these using the ideas of causality in relativity, work that was completed by Wendell Furry and Robert Oppenheimer using methods developed for this purpose by Vladimir Fock. This need to put together relativity and quantum mechanics was a second motivation which drove the development of quantum field theory. This thread was crucial to the eventual development of particle physics and the modern (partially) unified theory of forces called the standard model. In 1927 Jordan tried to extend the canonical quantization of fields to the wave function which appeared in the quantum mechanics of particles, giving rise to the equivalent name second quantization for this procedure. In 1928 Jordan and Eugene Wigner found that the Pauli exclusion principle demanded that the electron field be expanded using anti-commuting creation and annihilation operators. This was the third thread in the development of quantum field theory— the need to handle the statistics of multi-particle systems consistently and with ease. This thread of development was incorporated into many-body theory, and strongly influenced condensed matter physics and nuclear physics. Quantum mechanics in general deals with operators acting upon a (separable) Hilbert space. For a single nonrelativistic particle, the fundamental operators are its position and momentum, \hat{\mathbf{x}}(t) and \hat{\mathbf{p}}(t). These operators are time dependent in the Heisenberg picture, but we may also choose to work in the Schrödinger picture or (in the context of perturbation theory) the interaction picture. Quantum field theory is a special case of quantum mechanics in which the fundamental operators are an operator-valued field. A single field describes a spinless particle.
More fields are necessary for more types of particles, or for particles with spin. In quantum field theory, the energy is given by the Hamiltonian operator, which can be constructed from the quantum fields; it is the generator of infinitesimal time translations. (Being able to construct the generator of infinitesimal time translations out of quantum fields means many unphysical theories are ruled out, which is a good thing.) In order for the theory to be sensible, the Hamiltonian must be bounded from below. The lowest energy eigenstate (which may or may not be degenerate) is called the vacuum in particle physics and the ground state in condensed matter physics (QFT appears in the continuum limit of condensed matter systems).

Technical statement

Quantum field theory corrects several limitations of ordinary quantum mechanics, which we will briefly discuss now. The time-dependent Schrödinger equation, in its most commonly encountered form, is \left[ \frac{|\mathbf{p}|^2}{2m} + V(\mathbf{r}) \right] |\psi(t)\rang = i \hbar \frac{\partial}{\partial t} |\psi(t)\rang, where |\psi\rang denotes the quantum state of a particle with mass m, in the presence of a potential V. The first problem occurs when we seek to extend the equation to large numbers of particles. As described in the article on identical particles, quantum-mechanical particles of the same species are indistinguishable, in the sense that the state of the entire system must be symmetric (bosons) or antisymmetric (fermions) when the coordinates of its constituent particles are exchanged. These multi-particle states are extremely complicated to write.
For example, the general quantum state of a system of N bosons is written as |\phi_1 \cdots \phi_N \rang = \sqrt{\frac{\prod_j N_j!}{N!}} \sum_{p\in S_N} |\phi_{p(1)}\rang \cdots |\phi_{p(N)} \rang, where |\phi_i\rang are the single-particle states, N_j is the number of particles occupying state j, and the sum is taken over all possible permutations p acting on N elements. In general, this is a sum of N! (N factorial) distinct terms, which quickly becomes unmanageable as N increases. Large numbers of particles are needed in condensed matter physics where typically the number of particles is on the order of Avogadro's number, approximately 10^23. The second problem arises when trying to reconcile the Schrödinger equation with special relativity. It is possible to modify the Schrödinger equation to include the rest energy of a particle, resulting in the Klein-Gordon equation or the Dirac equation. However, these equations have many unsatisfactory qualities; for instance, they possess energy eigenvalues which extend to –∞, so that there seems to be no easy definition of a ground state. Such inconsistencies occur, because these equations neglect the possibility of dynamically creating or destroying particles, which is a crucial aspect of relativity. Einstein's famous mass-energy relation predicts that sufficiently massive particles can decay into several lighter particles, and sufficiently energetic particles can combine to form massive particles. For example, an electron and a positron can annihilate each other to create photons. Such processes must be accounted for in a truly relativistic quantum theory. This problem brings to the fore the notion that a consistent relativistic quantum theory, even of a single particle, must be a many particle theory.

Quantizing a classical field theory

Canonical quantization

Quantum field theory solves these problems by consistently quantizing a field.
By interpreting the physical observables of the field appropriately, one can create a (rather successful) theory of many particles. The correspondence works as follows:

1. Each normal mode oscillation of the field is interpreted as a particle with angular frequency ω.
2. The quantum number n of each normal mode (which can be thought of as a harmonic oscillator) is interpreted as the number of particles.

The energy associated with the mode of excitation is therefore E = (n+1/2)\hbar\omega, which directly follows from the energy eigenvalues of a one-dimensional harmonic oscillator in quantum mechanics. With some thought, one may similarly associate the momenta and positions of particles with observables of the field. Having cleared up the correspondence between fields and particles (which is different from non-relativistic QM), we can proceed to define how a quantum field behaves. Two caveats should be made before proceeding further:

1. Each of these "particles" obeys the usual uncertainty principle of quantum mechanics. The "field" is an operator defined at each point of spacetime.
2. Quantum field theory is not a wildly new theory. Classical field theory is the same as classical mechanics of an infinite number of dynamical quantities (say, tiny elements of rubber on a rubber sheet). Quantum field theory is the quantum mechanics of this infinite system.

The first method used to quantize field theory was the method now called canonical quantization (earlier known as second quantization). This method uses a Hamiltonian formulation of the classical problem. The later technique of Feynman path integrals uses a Lagrangian formulation. Many more methods are now in use; for an overview see the article on quantization.

Canonical quantization for bosons

Suppose we have a system of N bosons which can occupy mutually orthogonal single-particle states |\phi_1\rang, |\phi_2\rang, |\phi_3\rang, and so on.
The usual method of writing a multi-particle state is to assign a state to each particle and then impose exchange symmetry. As we have seen, the resulting wavefunction is an unwieldy sum of N! terms. In contrast, in the second quantized approach we will simply list the number of particles in each of the single-particle states, with the understanding that the multi-particle wavefunction is symmetric. To be specific, suppose that N=3, with one particle in state |\phi_1\rang and two in state |\phi_2\rang. The normal way of writing the wavefunction is |\phi_1 \phi_2 \phi_2 \rang = \frac{1}{\sqrt{3}}\left(|\phi_1\rang |\phi_2\rang |\phi_2\rang + |\phi_2\rang |\phi_1\rang |\phi_2\rang + |\phi_2\rang |\phi_2\rang |\phi_1\rang\right). In second quantized form, we write this as |1, 2, 0, 0, 0, \cdots \rangle, which means "one particle in state 1, two particles in state 2, and zero particles in all the other states." Though the difference is entirely notational, the latter form makes it easy for us to define creation and annihilation operators, which add and subtract particles from multi-particle states. These creation and annihilation operators are very similar to those defined for the quantum harmonic oscillator, which added and subtracted energy quanta. However, these operators literally create and annihilate particles with a given quantum state. The bosonic annihilation operator a_2 and creation operator a_2^\dagger have the following effects: a_2 | N_1, N_2, N_3, \cdots \rangle = \sqrt{N_2} \mid N_1, (N_2 - 1), N_3, \cdots \rangle, a_2^\dagger | N_1, N_2, N_3, \cdots \rangle = \sqrt{N_2 + 1} \mid N_1, (N_2 + 1), N_3, \cdots \rangle. We may well ask whether these are operators in the usual quantum mechanical sense, i.e. linear operators acting on an abstract Hilbert space. In fact, the answer is yes: they are operators acting on a kind of expanded Hilbert space, known as a Fock space, composed of the space of a system with no particles (the so-called vacuum state), plus the space of a 1-particle system, plus the space of a 2-particle system, and so forth.
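Both bookkeeping schemes just described can be sketched in a few lines of code (a sketch only; the helper functions, state labels, and occupation tuples below are illustrative choices, not notation from the text):

```python
from itertools import permutations
from math import factorial, sqrt

# --- First-quantized bookkeeping: symmetrized product states ---
def distinct_terms(labels):
    """Distinct product terms in the symmetrized N-boson state.

    `labels` assigns a single-particle state to each particle,
    e.g. (1, 2, 2) = one boson in state 1 and two bosons in state 2.
    """
    return set(permutations(labels))

print(sorted(distinct_terms((1, 2, 2))))
# [(1, 2, 2), (2, 1, 2), (2, 2, 1)]  -- the N = 3 example from the text

# With all particles in different states, the permutation sum has
# exactly N! terms, which quickly becomes unmanageable:
for N in (3, 5, 7):
    assert len(distinct_terms(tuple(range(N)))) == factorial(N)

# --- Second-quantized bookkeeping: occupation numbers ---
def annihilate(state, j):
    """a_j on |N_1, N_2, ...>: a_j |..., N_j, ...> = sqrt(N_j) |..., N_j - 1, ...>."""
    occ = list(state)
    amp = sqrt(occ[j])
    occ[j] = max(occ[j] - 1, 0)
    return amp, tuple(occ)

def create(state, j):
    """a_j^dagger on |N_1, N_2, ...>: gives sqrt(N_j + 1) |..., N_j + 1, ...>."""
    occ = list(state)
    amp = sqrt(occ[j] + 1)
    occ[j] += 1
    return amp, tuple(occ)

state = (1, 2, 0)            # the |1, 2, 0, ...> state from the text
print(annihilate(state, 1))  # amplitude sqrt(2), new state (1, 1, 0)
print(create(state, 1))      # amplitude sqrt(3), new state (1, 3, 0)
```

The occupation-number form stays the same size no matter how large N grows, which is the whole point of the second-quantized notation.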
Furthermore, the creation and annihilation operators are indeed Hermitian conjugates, which justifies the way we have written them. The bosonic creation and annihilation operators obey the commutation relation [a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i, a_j] = [a_i^\dagger, a_j^\dagger] = 0, where \delta stands for the Kronecker delta. These are precisely the relations obeyed by the "ladder operators" for an infinite set of independent quantum harmonic oscillators, one for each single-particle state. Adding or removing bosons from each state is therefore analogous to exciting or de-exciting a quantum of energy in a harmonic oscillator. The final step toward obtaining a quantum field theory is to re-write our original N-particle Hamiltonian in terms of creation and annihilation operators acting on a Fock space. For instance, the Hamiltonian of a field of free (non-interacting) bosons is H = \sum_k E_k \, a^\dagger_k \,a_k, where E_k is the energy of the k-th single-particle energy eigenstate. Note that a_k^\dagger\,a_k|\cdots, N_k, \cdots \rangle=N_k| \cdots, N_k, \cdots \rangle.

Canonical quantization for fermions

It turns out that the creation and annihilation operators for fermions must be defined differently, in order to satisfy the Pauli exclusion principle. For fermions, the occupation numbers N_i can only take on the value 0 or 1, since particles cannot share quantum states.
We then define the fermionic annihilation operators c and creation operators c^\dagger by
c_j | N_1, N_2, \cdots, N_j = 0, \cdots \rangle = 0
c_j | N_1, N_2, \cdots, N_j = 1, \cdots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \cdots, N_j = 0, \cdots \rangle
c_j^\dagger | N_1, N_2, \cdots, N_j = 0, \cdots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \cdots, N_j = 1, \cdots \rangle
c_j^\dagger | N_1, N_2, \cdots, N_j = 1, \cdots \rangle = 0
The fermionic creation and annihilation operators obey an anticommutation relation, \{c_i, c_j^\dagger\} = \delta_{ij}, \qquad \{c_i, c_j\} = \{c_i^\dagger, c_j^\dagger\} = 0.

Significance of creation and annihilation operators

When we re-write a Hamiltonian using a Fock space and creation and annihilation operators, as in the previous example, the symbol N, which stands for the total number of particles, drops out. This means that the Hamiltonian is applicable to systems with any number of particles. Of course, in many common situations N is a physically important and perfectly well-defined quantity. For instance, if we are describing a gas of atoms sealed in a box, the number of atoms had better remain a constant at all times. This is certainly true for the above Hamiltonian. Viewing the Hamiltonian as the generator of time evolution, we see that whenever an annihilation operator a_k destroys a particle during an infinitesimal time step, the creation operator a_k^\dagger to the left of it instantly puts it back. Therefore, if we start with a state of N non-interacting particles then we will always have N particles at a later time. On the other hand, it is often useful to consider quantum states where the particle number is ill-defined, i.e. linear superpositions of vectors from the Fock space that possess different values of N. For instance, it may happen that our bosonic particles can be created or destroyed by interactions with a field of fermions.
Denoting the fermionic creation and annihilation operators by c_k^\dagger and c_k, we could add a "potential energy" term to our Hamiltonian such as: V = \sum_{k,q} V_q \left(a_q + a_{-q}^\dagger\right) c_{k+q}^\dagger c_k. This describes processes in which a fermion in state k either absorbs or emits a boson, thereby being kicked into a different eigenstate k+q. In fact, this is the expression for the interaction between phonons and conduction electrons in a solid. The interaction between photons and electrons is treated in a similar way; it is a little more complicated, because the role of spin must be taken into account. One thing to notice here is that even if we start out with a fixed number of bosons, we will generally end up with a superposition of states with different numbers of bosons at later times. On the other hand, the number of fermions is conserved in this case. In condensed matter physics, states with ill-defined particle numbers are also very important for describing the various superfluids. Many of the defining characteristics of a superfluid arise from the notion that its quantum state is a superposition of states with different particle numbers.

Field operators

We can now define field operators that create or destroy a particle at a particular point in space. In particle physics, these are often more convenient to work with than the creation and annihilation operators, because they make it easier to formulate theories that satisfy the demands of relativity. The bosonic field operators obey the commutation relation [\phi(\mathbf{r}), \phi^\dagger(\mathbf{r}')] = \delta^3(\mathbf{r} - \mathbf{r}'). It should be emphasized that the field operator is not the same thing as a single-particle wavefunction. The former is an operator acting on the Fock space, and the latter is just a scalar field. However, they are closely related, and are indeed commonly denoted with the same symbol.
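Stepping back to the fermionic mode operators defined above: the sign factor (-1)^(N_1 + ... + N_{j-1}) is exactly what enforces their anticommutation relations, and this can be verified with explicit matrices on two modes (a sketch assuming NumPy; the Jordan-Wigner-style construction below is one standard realization, not something spelled out in the text):

```python
import numpy as np

# Basis per mode: (|N=0>, |N=1>). sigma_minus lowers the occupation;
# sigma_z supplies the (-1)^N sign for every mode to the left.
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # sm |1> = |0>, sm |0> = 0
sz = np.diag([1.0, -1.0])                # sz |N> = (-1)^N |N>
I2 = np.eye(2)

c1 = np.kron(sm, I2)   # acts on mode 1; no sign string needed
c2 = np.kron(sz, sm)   # acts on mode 2; picks up (-1)^(N_1)

def anticomm(A, B):
    return A @ B + B @ A

print(np.allclose(anticomm(c1, c1.T), np.eye(4)))  # {c_1, c_1†} = 1
print(np.allclose(anticomm(c2, c2.T), np.eye(4)))  # {c_2, c_2†} = 1
print(np.allclose(anticomm(c1, c2.T), 0))          # {c_1, c_2†} = 0
print(np.allclose(anticomm(c1, c2), 0))            # {c_1, c_2}  = 0

# Pauli exclusion: creating the same fermion twice annihilates the state
print(np.allclose(c1.T @ c1.T, 0))                 # (c_1†)^2 = 0
```

Dropping the sz sign string from c2 breaks the cross-mode anticommutators, which is a quick way to see why the sign factor is not optional.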
If we have a Hamiltonian with a space representation, say H = -\frac{\hbar^2}{2m}\sum_i \nabla_i^2 + \sum_{i \neq j} U(|\mathbf{r}_i - \mathbf{r}_j|), where the indices i and j run over all particles, then the field theory Hamiltonian is H = - \frac{\hbar^2}{2m} \int d^3\!r \; \phi(\mathbf{r})^\dagger \nabla^2 \phi(\mathbf{r}) + \int\!d^3\!r \int\!d^3\!r' \; \phi(\mathbf{r})^\dagger \phi(\mathbf{r}')^\dagger U(|\mathbf{r} - \mathbf{r}'|) \phi(\mathbf{r'}) \phi(\mathbf{r})

Quantization of classical fields

So far, we have shown how one goes from an ordinary quantum theory to a quantum field theory. There are certain systems for which no ordinary quantum theory exists. These are the "classical" fields, such as the electromagnetic field. There is no such thing as a wavefunction for a single photon in classical electromagnetism, so a quantum field theory must be formulated right from the start. The essential difference between an ordinary system of particles and the electromagnetic field is the number of dynamical degrees of freedom. For a system of N particles, there are 3N coordinate variables corresponding to the position of each particle, and 3N conjugate momentum variables. One formulates a classical Hamiltonian using these variables, and obtains a quantum theory by turning the coordinate and momentum variables into quantum operators, and postulating commutation relations between them such as \left[ q_i , p_j \right] = i\hbar \delta_{ij} For an electromagnetic field, the analogue of the coordinate variables are the values of the electrical potential \phi(\mathbf{x}) and the vector potential \mathbf{A}(\mathbf{x}) at every point \mathbf{x}. This is an uncountable set of variables, because \mathbf{x} is continuous. This prevents us from postulating the same commutation relation as before. The way out is to replace the Kronecker delta with a Dirac delta function. This ends up giving us a commutation relation exactly like the one for field operators!
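The remark that a classical linear field is just classical mechanics of infinitely many coupled oscillators can be illustrated on a finite lattice (a sketch assuming NumPy; the lattice size and fixed-end boundary conditions are choices made here, not taken from the text):

```python
import numpy as np

# Discretize a 1-D scalar field on N sites with fixed ends. With
# nearest-neighbour coupling, the stiffness matrix is the lattice Laplacian.
N = 50
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# Diagonalizing K decouples the field into independent normal modes,
# each an ordinary harmonic oscillator with frequency omega_k.
omega = np.sqrt(np.linalg.eigvalsh(K))

# Known closed form for this lattice: omega_k = 2 sin(k pi / (2(N+1)))
k = np.arange(1, N + 1)
omega_exact = 2 * np.sin(k * np.pi / (2 * (N + 1)))
print(np.allclose(np.sort(omega), np.sort(omega_exact)))  # True
```

Quantizing each decoupled mode as a harmonic oscillator then reproduces the E = (n + 1/2)ħω spectrum quoted earlier, with n read as the number of particles in that mode.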
We therefore end up treating "fields" and "particles" in the same way, using the apparatus of quantum field theory. It is only by historical accident that electrons were not regarded as de Broglie waves, and that geometrical optics for photons was not the dominant theory, when QFT was developed.

Path integral methods

The axiomatic approach

The first class of axioms (most notably the Wightman, Osterwalder-Schrader, and Haag-Kastler systems) tried to formalize the physicists' notion of an "operator-valued field" within the context of functional analysis. These axioms enjoyed limited success. It was possible to prove that any QFT satisfying these axioms satisfied certain general theorems, such as the spin-statistics theorem and the PCT theorem. Unfortunately, it proved extraordinarily difficult to show that any realistic field theory (e.g. quantum chromodynamics) satisfied these axioms. Most of the theories which could be treated with these analytic axioms were physically trivial: restricted to low dimensions and lacking in interesting dynamics. Constructive quantum field theory is the construction of theories which satisfy one of these sets of axioms. Important work was done in this area in the 1970s by Segal, Glimm, Jaffe and others. In the 1980s, a second wave of axioms was proposed. These axioms (associated most closely with Atiyah and Segal, and notably expanded upon by Witten, Borcherds, and Kontsevich) are more geometric in nature, and more closely resemble the path integrals of physics. They have not been exceptionally useful to physicists, as it is still extraordinarily difficult to show that any realistic QFTs satisfy these axioms, but have found many applications in mathematics, particularly in representation theory, algebraic topology, and geometry. Finding the proper axioms for quantum field theory is still an open and difficult problem in mathematics.
In fact, one of the Clay Millennium Prizes offers $1,000,000 to anyone who proves the existence of a mass gap in Yang-Mills theory. It seems likely that we have not yet understood the underlying structures which permit the Feynman path integrals to exist.

Renormalization

Some of the problems and phenomena eventually addressed by renormalization actually appeared earlier in the classical electrodynamics of point particles in the 19th and early 20th century. The basic problem is that the observable properties of an interacting particle cannot be entirely separated from the field that mediates the interaction. The standard classical example is the energy of a charged particle. To cram a finite amount of charge into a single point requires an infinite amount of energy; this manifests itself as the infinite energy of the particle's electric field. The energy density grows to infinity as one gets close to the charge. A single particle state in quantum field theory incorporates within it multiparticle states. This is most simply demonstrated by examining the evolution of a single particle state in the interaction picture |\psi(t)\rangle = e^{iH_It} |\psi(0)\rangle = \left[1+iH_It-\frac12 H_I^2t^2 -\frac i{3!}H_I^3t^3 + \frac1{4!}H_I^4t^4 + \cdots\right] |\psi(0)\rangle. Taking the overlap with the initial state, one retains the even powers of H_I. These terms are responsible for changing the number of particles during propagation, and are therefore quintessentially a product of quantum field theory. Corrections such as these are incorporated into wave function renormalization and mass renormalization. Similar corrections to the interaction Hamiltonian, H_I, include vertex renormalization, or, in modern language, effective field theory.

Gauge theories

A gauge theory is a theory which admits a symmetry with a local parameter.
For example, in every quantum theory the global phase of the wave function is arbitrary and does not represent something physical, so the theory is invariant under a global change of phases (adding a constant to the phase of all wave functions, everywhere); this is a global symmetry. In quantum electrodynamics, the theory is also invariant under a local change of phase; that is, one may shift the phase of all wave functions so that at every point in space-time the shift is different. This is a local symmetry. However, in order for a well-defined derivative operator to exist, one must introduce a new field, the gauge field, which also transforms in order for the local change of variables (the phase in our example) not to affect the derivative. In quantum electrodynamics this gauge field is the electromagnetic field. Such a local change of variables is termed a gauge transformation. In general, the gauge transformations of a theory consist of several different transformations, which may not be commutative. These transformations are together described by a mathematical object known as a gauge group. Infinitesimal gauge transformations are the gauge group generators. Therefore the number of gauge bosons is the group dimension (i.e., the number of generators forming a basis of the group's Lie algebra).

Quantum field theory and consciousness

Quantum brain dynamics is an approach to explaining consciousness in the context of quantum field theory.

Beyond local field theory

More details can be found in the article on the history of quantum field theory. Quantum field theory was created by Dirac when he attempted to quantize the electromagnetic field in the late 1920s. The early development of the field involved Fock, Jordan, Pauli, Heisenberg, Bethe, Tomonaga, Schwinger, Feynman, and Dyson. This phase of development culminated with the construction of the theory of quantum electrodynamics in the 1950s.
Gauge theory was formulated and quantized, leading to the unification of forces embodied in the standard model of particle physics. This effort started in the 1950s with the work of Yang and Mills, was carried on by Martinus Veltman and a host of others during the 1960s, and was completed during the 1970s by the work of Gerard 't Hooft, Frank Wilczek, David Gross and David Politzer.

Parallel developments in the understanding of phase transitions in condensed matter physics led to the study of the renormalization group. This in turn led to the grand synthesis of theoretical physics which unified theories of particle and condensed matter physics through quantum field theory. This involved the work of Michael Fisher and Leo Kadanoff in the 1970s, which led to the seminal reformulation of quantum field theory by Kenneth Wilson.

The study of quantum field theory is alive and flourishing, as are applications of this method to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to many branches of physics. Physicists like Wilczek, Politzer, and Carl M. Bender are some of the foremost experts in the field.

Suggested reading

• Wilczek, Frank; Quantum Field Theory, Reviews of Modern Physics 71 (1999) S85-S95. Review article written by a master of QCD, Nobel laureate 2004. Full text available at: hep-th/9803075
• Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985), ISBN 0-521-33859-X. Introduction to relativistic QFT for particle physics.
• Zee, Anthony; Quantum Field Theory in a Nutshell (Princeton University Press, 2003), ISBN 0-691-01019-6.
• Loudon, Rodney; The Quantum Theory of Light (Oxford University Press, 1983), ISBN 0-19-851155-8.
• Frampton, Paul H.; Gauge Field Theories, Frontiers in Physics, Addison-Wesley (1986); Second Edition, Wiley (2000).
• Kane, Gordon L. (1987). Modern Elementary Particle Physics, Perseus Books. ISBN 0-201-11749-5.
söndagen den 29:e maj 2011

Mathematical Secret of Flight 1

Computed Lift and Drag of a 3d NACA0012 wing for different angles of attack by Unicorn (blue) compared with different experiments.

My talk on June 15 at Svenska Mekanikdagar 2011 is now available for preview; it describes joint work with Johan Hoffman and Johan Jansson. Based on accurate solution of the incompressible Navier-Stokes equations, we identify the true mechanism for the generation of large lift L at small drag D of a wing, with lift-to-drag ratio L/D of size 10 - 50, a mechanism which is not described in the literature. We combine the Navier-Stokes equations with a slip boundary condition on the wing, motivated by the experimental fact that the skin friction is small for a slightly viscous fluid such as air or water, and we exhibit the role of the slip condition in two crucial aspects:
• prevention of separation at the crest of the wing, generating large lift
• 3d slip-separation at the trailing edge, not destroying large lift and causing small drag.

Textbooks, following Prandtl, named the father of modern fluid mechanics, claim that both lift and drag result from a boundary layer arising from a no-slip condition. We obtain lift and drag in full accordance with experiments by solving the Navier-Stokes equations with a slip condition, which does not generate any boundary layer, and we thus present strong evidence that lift and drag do not originate from any boundary layer. In short, we show that solutions to the Navier-Stokes equations with slip are computable and correctly capture the physics of (subsonic) flight. See also:
• Consider a transport airplane with a 50-meter-long fuselage and wings with a chord length (the distance from the leading to the trailing edge) of about five meters. If the craft is cruising at 250 meters per second at an altitude of 10,000 meters, about 10 quadrillion (10^16) grid points are required to simulate the turbulence near the surface with reasonable detail.
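The arithmetic behind this grid-point estimate and the Moore's-law gap it implies is easy to check; the 18-month doubling time used below is a conventional assumption, not a figure from the post:

```python
import math

# Back-of-envelope check of the computability gap (point counts from the post;
# the 18-month Moore's-law doubling time is my assumption).
resolved_points = 1e16   # grid points needed with resolved boundary layers
slip_points = 1e6        # grid points claimed to suffice with a slip condition

factor = resolved_points / slip_points   # required growth in computing capacity
doublings = math.log2(factor)            # Moore's-law doublings needed
years = doublings * 1.5                  # ~18 months per doubling

print(f"factor {factor:.0e}, {doublings:.0f} doublings, ~{years:.0f} years")
# factor 1e+10, 33 doublings, ~50 years
```

The ~50 years recovered here is exactly the horizon the commentary below attributes to resolving boundary layers.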
Kim and Moin express the necessity, dictated by Prandtl, of resolving thin boundary layers in order to correctly compute the lift and drag of a wing or an entire airplane, which would require 50 years of Moore's law, increasing computing power by a factor of 10^10, to reach the dictated 10^16 points. We show that this is possible already today using 10^6 points, by imposing slip so that there are no boundary layers to resolve.

Monstrosity of Quantum Mechanics 6: Collapse of Wave Function

Is quantum mechanics a physics beauty contest, with all possibilities collapsing into one actuality upon observation? Who would you choose?

Since the multi-dimensional wave function of quantum mechanics is supposed to represent a probability distribution over all possibilities, the high dimensionality has to be drastically reduced to become an actuality of some interest. This is supposed to happen in an interaction with an observer, referred to as collapse of the wave function, where the observer somehow picks one of all the potentialities and makes it into an actuality, as when Miss America is somehow chosen among many candidates by some educated physics observers.

Is then quantum mechanics a beauty contest? Well, ask your favorite physicist about the nature of the collapse of the wave function. Is it real? What is collapsing? Physical reality, or our knowledge about reality? Or is it quantum mechanics itself which collapses upon critical observation?

lördagen den 28:e maj 2011

Monstrosity of Quantum Mechanics 5: Passive Observation Impossible

Is passive observation really impossible in the world of quantum mechanics? David Albert, who together with Barry Loewer invented a version of the Many-Worlds Interpretation referred to as Many-Minds (different from the one I suggest), tells us that the physical process of making an observation in the quantum world necessarily interferes with what is being observed.
In other words, the ideal of fully passive observation of classical mechanics cannot be upheld in quantum mechanics: the observer will always interfere more or less with what is being observed. Albert tells us that this is the big difference between classical and quantum mechanics.

But is this true? Is fully passive observation impossible in quantum mechanics? Maybe, or maybe not, depending on what is meant by an observation. A human being can make observations in different forms:
1. Inspection of an analog physical apparatus capable of measuring some phenomenon.
2. Inspection of a digital simulation of the phenomenon.

Here 2. represents a digital simulation based on solving the Schrödinger equation describing the phenomenon, e.g. the ground state of an atom, and observing its energy, while 1. would be to directly observe the emission spectrum. The nice thing about 2. is that it is a completely passive observation, in the sense that the computational process is independent of the observer making the final observation of the energy as a number coming out of the computation. So maybe passive observation is possible in quantum mechanics. Maybe quantum mechanics is not so different from classical mechanics. Not so mysterious?

fredagen den 27:e maj 2011

Monstrosity of Quantum Mechanics 4: Quantum Computers

The belief of the modern physicist that the linear multi-dimensional Schrödinger equation describes the quantum world of atoms and molecules has led to the idea of the quantum computer:
• a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data.
• Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits).
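The computational point at stake can be quantified: a classical simulation must store one complex amplitude per basis state, so memory grows as 2^n in the number of qubits (the 16-byte complex128 representation below is an illustrative assumption):

```python
# Why classically simulating the "all possibilities" state is hopeless:
# n qubits require 2**n complex amplitudes, at 16 bytes each in complex128.
def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to store the full 2**n amplitude vector."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:g} TiB")
# already at 50 qubits the state vector needs 2**54 bytes = 16 PiB of memory
```

This is the same exponential wall, in miniature, that the next post describes for the multi-dimensional Schrödinger equation.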
I have noticed in previous posts that the linear multi-dimensional Schrödinger equation is a monster, which cannot be solved, not even on any thinkable supercomputer built with any known microprocessor technology. The dimensionality is simply overwhelming. We have noticed that the impossibility of solving the multi-dimensional Schrödinger equation results from the fact that the equation describes all possibilities rather than specific actualities, which is overwhelming for microprocessors limited to performing computations on specific data.

The Schrödinger equation is thus a monster computationally, and to handle such a beast a monster computer is needed: a computer which computes all possibilities rather than specific actualities, which computes on all data rather than on specific data. In other words, a quantum computer is needed.

Are there any quantum computers? No, only prototypes with a few quantum bits. Is it possible to construct a quantum computer? Nobody knows. Few seem to believe one can. Does the multi-dimensional Schrödinger equation give a realistic description of the atomic world? Nobody knows, because solutions cannot be computed and compared to experimental observation. Can you solve a monster equation on a monster computer, that is, a device which simulates a real analog monster by being a real digital monster? What if the multi-dimensional Schrödinger equation is just an invented fictional monster, which will disappear as soon as you stop talking about it?

Compare with the post today on The Reference Frame singing the praises of the Copenhagen Interpretation of the multi-dimensional Schrödinger equation, as if it had a meaning. Read it yourself and ask if you understand anything.

tisdagen den 24:e maj 2011

Monstrosity of Quantum Mechanics 3: Many-Worlds

The monstrosity of quantum mechanics is expressed in full bloom in Everett's Many-Worlds interpretation, reflecting that solutions of the linear multi-dimensional Schrödinger equation can freely be superimposed.
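The free superposition invoked here is nothing more than linearity of the equation, which a toy one-particle discretization makes explicit (grid size, time step and wave packets below are arbitrary illustrative choices):

```python
import numpy as np

# Superposition is just linearity: evolving a sum of states equals the sum
# of the evolved states. Toy free-particle Hamiltonian on a periodic grid.
N = 64
H = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)

w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.1)) @ V.conj().T   # one unitary step e^{-iHt}

x = np.arange(N)
psi1 = np.exp(-((x - 20) ** 2) / 8)   # wave packet on the left
psi2 = np.exp(-((x - 44) ** 2) / 8)   # wave packet on the right

diff = np.max(np.abs(U @ (psi1 + psi2) - (U @ psi1 + U @ psi2)))
print(diff)   # rounding-level: superposition is preserved exactly by the dynamics
```

Nothing in the linear dynamics ever removes the superposition; that removal is precisely what the collapse postulate has to supply from outside.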
The Schrödinger cat in its closed box can thus be in a state of superposition of both alive and dead, and only upon opening the box for observation does the cat have to collapse into either alive or dead, as if there were two possible parallel universes prior to the collapse into one actual universe. The solution of the linear multi-dimensional Schrödinger equation is thus interpreted as a universal wave function supposedly representing all possible universes, out of which a specific actual universe is singled out in one way or the other.

How to react to this breath-taking ocean of possibilities? In this case there seem to be two possibilities:
1. Accept the linear multi-dimensional Schrödinger equation as given by God.
2. Replace the linear multi-dimensional Schrödinger equation as a basic model of quantum mechanics with something more reasonable.

I would vote for 2, and I explore one possibility in Many-Minds Quantum Mechanics. After all, it was Schrödinger and not God who wrote down the equation. It was Schrödinger who understood that his equation had serious flaws and should be replaced by a version describing actualities instead of possibilities.

What do you say? 1 or 2? One actuality or all possibilities? Would you prefer all possible lives over one actual life? Compare with the title of the biography: A Life of Erwin Schrödinger. Nobody would be able to write a biography with the title All Possible Lives of Erwin Schrödinger, and even if somebody could, nobody would be interested in reading it.

måndagen den 23:e maj 2011

Monstrosity of Quantum Mechanics 2

The simplicity (linearity) of the Schrödinger equation is seductive and has misled many minds.
Quantum mechanics as a description of the microscopic world of atoms and molecules is based on Schrödinger's wave equation, which as a mathematical object is (see above)
• scalar
• linear
• multidimensional, in 3N coordinates for N electrons/kernels (nuclei)
with solutions called wave functions, commonly denoted Psi:
• Psi(x1, x2, ..., xN, t)
with xj representing the three position coordinates of particle j, j = 1,...,N, and t denoting time. The wave function Psi thus depends on 3N independent real variables plus time.

The simplicity of the Schrödinger wave equation (scalar and linear) as a description of a complex reality is thus balanced by an extreme richness of the wave function, depending on 3N + 1 independent variables. This richness makes it impossible to give the wave function a physical meaning as representing a configuration or distribution of electrons and kernels, which threatened to kill quantum mechanics at birth; it was rescued by Max Born, who declared that
• |Psi(x1,...,xN,t)|^2 represents the probability of the configuration given by the coordinates (x1,...,xN,t),
and by Niels Bohr, who declared that the wave function, as a probability distribution, could upon observation collapse into a definite physical state, as when opening the box containing the Schrödinger cat. Born and Bohr thus developed the Copenhagen Interpretation (of quantum mechanics), which is today the officially accepted truth, although contested by alternatives such as hidden-variables and many-worlds interpretations, without any winner. Schrödinger himself left quantum mechanics as soon as the Copenhagen Interpretation captured the minds of most physicists.

The richness of the wave function is in fact a monstrosity already for small systems with N = 100, say, not to speak of real systems of 10^23 particles in a mole of gas, as pointed out by Walter Kohn, Nobel Prize in Chemistry in 1998:
• The wave function does not exist for N larger than 100.
• Why?
Because it cannot be computed, because of the many dimensions. Kohn got the Nobel Prize for computing electron densities instead of probabilities, as solutions of a non-linear version of the Schrödinger equation in 3 space dimensions, referred to as density functional theory.

If now the wave function as solution to the Schrödinger equation does not exist, there must be something fishy about the Schrödinger equation. What? We saw that the equation is scalar and linear and thus has a simple structure, which is not problematic in itself, but if it necessitates a monstrous richness in dimensions, it seems that one should question the very formulation of the Schrödinger equation as a scalar linear multidimensional equation.

From where did Schrödinger get his equation? Did he derive it from basic principles? Not really. It is more of an ad hoc invention, expressing particle interaction by electrostatic Coulomb potentials combined with a new mysterious form of kinetic energy. How can we know that the equation is a good model of physics if it cannot be solved? How can we check that its solutions give correct predictions if they cannot be computed and thus determined? Nevertheless it is a mantra of modern physics that the Schrödinger equation is a good model, but it is a mantra without physical meaning, about an equation which cannot be solved. It is like claiming that a certain truth is hidden in a riddle which cannot be solved.

Thus, new versions of the Schrödinger equation are needed. I explore one such line of thought in Many-Minds Quantum Mechanics, in the spirit of the Hartree method, as a non-linear coupled system of one-electron/kernel Schrödinger equations. The simplicity of linearity (and superposition) of the multi-dimensional Schrödinger equation is here replaced by a non-linear complexity, but the system solutions depend only on three space dimensions, which makes a direct physical interpretation possible, without probabilities and wave function collapse.
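Kohn's point, and the storage advantage of a Hartree-type system, can be put in numbers; the grid resolution of M = 100 points per coordinate below is an illustrative assumption:

```python
# Storing Psi(x1,...,xN) on a grid with M points per coordinate takes
# M**(3N) values; a Hartree-style system of N one-particle functions on
# the same 3d grid takes only N * M**3. (M = 100 is an arbitrary choice.)
def full_psi_values(N, M=100):
    """Grid values for the full multi-dimensional wave function."""
    return M ** (3 * N)

def hartree_values(N, M=100):
    """Grid values for N coupled one-particle functions in 3d."""
    return N * M ** 3

for N in (1, 2, 10, 100):
    print(N, full_psi_values(N), hartree_values(N))
# for N = 100 the full wave function needs 10**600 values,
# the Hartree system a mere 10**8
```

The gap between 10**600 and 10**8 is the entire content of the "monstrosity" argument, stated as a storage count.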
This is a realist approach, as compared to the non-realist Copenhagen Interpretation. Compare with Lars-Göran Johansson: Interpreting Quantum Mechanics: A Realist's View in Schrödinger's Vein, suggesting a form of realist wave-particle duality with continuous waves for propagation and discontinuous particles for exchange of energy.

söndagen den 22:e maj 2011

Charles Mackay: Madness of Crowds and CO2 Alarmism

In Extraordinary Popular Delusions and the Madness of Crowds, published in 1841, Charles Mackay debunks witch-hunts, alchemy and economic bubbles. Today Mackay would have been writing about the crowd madness of CO2 alarmism, with the witches being the polluters of CO2, the alchemists the CO2 alarmists, and the bubble the green economy. Mackay said many clever things that seem to anticipate CO2 climate alarmism, while giving hope to its skeptics:
• Aid the dawning, tongue and pen: Aid it, hopes of honest men!
• Truth... and if mine eyes Can bear its blaze, and trace its symmetries, Measure its distance, and its advent wait, I am no prophet - I but calculate.

fredagen den 20:e maj 2011

Monstrosity of Quantum Mechanics

Schrödinger trying to slay the many-headed monster of the wave function (assisted by Einstein), though without success.

Basically, classical physics is Newtonian mechanics and modern physics is quantum mechanics. Quantum mechanics is supposed to be described by Schrödinger's equation, worshipped by modern physicists. The equation was formulated by Erwin Schrödinger in 1925, who sought an equation with wave-like solutions, called wave functions, describing the dynamics of atoms and molecules as resulting from an interplay of positive kernels and negative electrons under attractive and repulsive electric Coulomb forces. Nothing strange in principle, but what Schrödinger had created turned out to be nothing but a Monster. Monster? Why?
Well, the wave function for the simplest case of the Hydrogen atom with one electron depends on 3 space coordinates and time, but the wave function for an atom with N electrons depends on 3N space coordinates (and time), which makes it into a Many-Headed Monster beyond direct physical interpretation:
• Instead of describing an actuality in 3 space dimensions, the wave function describes all possibilities.
• Instead of describing a specific actual sequence of 1000 coin flips, the wave function describes all 2^1000 (much more than a googol = 10^100) possible sequences of coin flips.
• Instead of describing the life of one specific actual human being, it describes the lives of all possible human beings.

As soon as Schrödinger understood that he had created a scientific monster, he tried to kill it but failed, and then he withdrew from physics, while the Monster captured the minds of all the modern physicists (except Einstein), who quickly formed a whole army under the leadership of Niels Bohr and his Copenhagen Interpretation of the wave function as a probability distribution of all possibilities. To get from possibility to actuality, the idea of collapse of the wave function was invented, a Monstrous Idea to handle a Monster. Before collapse, the Schrödinger Cat in the box would be in a state of superposition of alive and dead with all possibilities still present, and only upon opening of the box and inspection would the Cat collapse into an actuality as alive or dead.

This Monstrous Idea has led modern physics into an endless desert of Multiverses and Many-Worlds of all possibilities. A recent contribution to this monstrosity is The Multiverse Interpretation of Quantum Mechanics by Raphael Bousso and Leonard Susskind:
• We argue that the many-worlds of quantum mechanics and the many worlds of the multiverse are the same thing, and that the multiverse is necessary to give exact operational meaning to probabilistic predictions from quantum mechanics.
Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment".

Read and try to understand where physics is today... For a new approach without monsters, see Many-Minds Quantum Mechanics, based on a different non-linear version of the Schrödinger equation as a coupled system of one-particle three-dimensional equations. The thesis of Hugh Everett III behind the many-worlds interpretation exhibits the difficulties, or rather monstrosities, of the usual scalar linear multidimensional version of Schrödinger's equation. We will return to Everett's thesis in search of a connection between many-minds and many-worlds physics. Since we all have different conceptions of the world, maybe we in fact live in a many-worlds universe, one for each mind. Of course, the following questions then come up: What is a mind, and how many are there?

Another monstrosity perturbing the minds of many modern physicists is the Greenhouse Gas Effect, but there are some physicists fighting this monster, as e.g. William Happer: The Truth about Greenhouse Gases, referring to Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds, first published in 1841. The development of modern physics into monstrosity is described in Dr Faustus of Modern Physics.

Free Will and Finite Precision Computation 5

• The free will that humans enjoy is similar to that exercised by animals as simple as flies.
• Animals always have a range of options available to them...perceived as conscious decisions.
• The idea tackles one of history's great philosophical debates.
• What has been long established is that "deterministic behaviour" - the idea that an animal poked in just such a way will react with the same response every time - is not a complete description of behaviour.
• Even the simple animals are not the predictable automatons that they are often portrayed to be.
• However, the absence of determinism does not suggest completely random behaviour either.
• Experiments have shown that although animal behaviour can be unpredictable, responses do seem to come from a fixed list of options.
• Free will is not that lofty metaphysical thing that it was until the 1970s or so.
• It is a biological property, a trait; the brain possesses the freedom to generate behaviours and options on its own.
• The exact mechanism by which brains - from those of flies up to humans - do that generation remains a matter for experiments to more fully prove.
• There is no way the conscious mind, the refuge of the soul, could influence the brain without leaving tell-tale signs; physics does not permit such ghostly interactions.
• Brembs and others have used mathematical models to simulate brain activity on a computer, finding that what worked best was a combination of deterministic behaviour and what is known as stochastic behaviour - which may look random but actually, in time, follows a defined set of probabilities.
• Tethered fruit flies proved their choices to be neither deterministic nor random.

In short, free will seems to be expressed through a combination of goal-oriented determinism (as concerns big things) and indeterminism (as concerns little things), with a clear connection to finite precision computation. It seems that the discussion has passed from sterile metaphysics into a more constructive analysis of finite precision computing minds...

torsdagen den 19:e maj 2011

Free Will and Finite Precision Computation 4

• little things decided by a die.

A Dark Side of CO2 Alarmism and The Royal Swedish Academy

The Royal Swedish Academy of Sciences hiding behind a statue of Anders Retzius, father of racial biology with his cephalic index.
The CO2 climate alarmism behind the 3rd Nobel Laureate Symposium on Global Sustainability, organized by the Royal Swedish Academy of Sciences, goes back to the Swedish physicist/chemist Svante Arrhenius. The Stockholm Memorandum, signed by an invited group of Nobel Laureates, states that
• Humans are now propelling the planet into a new geological epoch, the Anthropocene
and makes the following Call:
• Fundamental transformation in all spheres and at all scales to stop and reverse global environmental change.
• Greatly increase access to reproductive health services...
• reduce birth rates.

What is "reproductive health services"? How to "reduce birth rates"? Is there a connection to Arrhenius, or is it just a fantasy? Why was the Symposium held behind closed doors? I have already expressed my protest against the uncritical support of CO2 alarmism by the Royal Academy in a symbolic resignation.

Basic Science: Climate Sensitivity Less Than 0.3 C

For the convenience of the reader I here collect the links to a couple of basic arguments showing that the effect of doubled atmospheric CO2 could at most be a global warming of a harmless 0.3 C, that is, that the climate sensitivity is smaller than 0.3 C. The idea is to combine observation with simple mathematical models, where observation is used to determine the coefficients of the model, thus allowing prediction. The Stefan-Boltzmann radiation law for an ideal blackbody is not used, since it does not describe the complex Earth-atmosphere system. I thus only use simple models with coefficients determined by observation, which is the basic scientific method leading to the basic mathematical models of physics, such as the heat equation, the potential flow equation and the radiative transfer equation.

I assume that doubled CO2 could correspond to a change of the radiative properties of the atmosphere of 1%, or a "radiative forcing" of 3 W/m2 = 1% of a total insolation of about 300 W/m2.
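The post's proportionality argument can be sketched in a few lines, combining the 1% forcing fraction assumed here with the 33 C "atmospheric effect" assumed in the post; whether such linear scaling is physically justified is of course exactly the contested point:

```python
# The post's back-of-envelope: scale a ~1% change in the atmosphere's
# radiative properties against the assumed 33 C total atmospheric effect.
forcing_fraction = 3.0 / 300.0      # 3 W/m2 out of ~300 W/m2 insolation = 1%
atmospheric_effect_C = 33.0         # 15 C observed minus -18 C without atmosphere
sensitivity_C = forcing_fraction * atmospheric_effect_C
print(round(sensitivity_C, 2))      # 0.33, i.e. of the order 0.3 C
```

This reproduces the headline figure; the post treats it as an upper bound, while the IPCC's "best estimate" of 3 C rests on feedbacks this linear scaling omits.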
I assume that the "atmospheric effect" is 33 C, corresponding to raising the temperature of an Earth without atmosphere (= the observed mean temperature of the Moon) of -18 C to the observed temperature of the Earth with atmosphere of 15 C.

These are three different arguments, using different data and different simple models, all giving the same result of a climate sensitivity smaller than 0.3 C, where 0.3 C is to be viewed as an upper bound, with the real value probably a factor 2 - 3 smaller. IPCC claims a "best estimate" which is 10 times bigger, = 3 C, obtained by confusing definition with physical fact and by free invention of positive feedbacks. IPCC has invented a factor 10 for which there is no scientific basis. In economics a factor 10 would be swindle, and it is the same in science, or even worse.

onsdagen den 18:e maj 2011

What is a Princess Allowed to Say? About CO2 and Great Transformation?

Crown Princess Victoria of the Kingdom of Sweden stated in her presentation at the 3rd Nobel Laureate Symposium on Global Sustainability, organized by the Royal Swedish Academy of Sciences:
• Burdens must be shared by everyone (including masses of poor people).
• Wind turbines, solar connectors, panels and geothermal energy: why is it that countries are using so little of renewable energy sources despite having the knowledge and technique?
• We can and must change our life styles and the manner in which we use energy.
• What are we waiting for? The work has to start here and now.
• The world succeeded in coming together and deciding upon the removal of freons.
• To succeed we need to reconnect humanity with the biosphere.
• This is no small task. I see no better persons, though, than Nobel Laureates to carry this critical message to the world:
• The need for a Great Transformation.
• Our generation has the knowledge and ability to create a sustainable world for future generations.

The presentation poses the following questions:
• Does the Princess make political statements?
• Is the Princess allowed to make political statements?
• Is the Princess allowed to advocate specific techniques for generating energy?
• Is the Royal Swedish Academy influenced by Royals?

Any answers? See also my Newsmill article about a biased jury. The Princess speaks the same words as Hans Joachim Schellnhuber, main organizer and ideologue of the symposium, according to the New York Times known for his "aggressive stance on climate policy":
1. Earth's population could be devastated by the buildup of greenhouse gases.

Does the Princess understand what she is saying?

PS1 The verdict of the Jury of Nobel Laureates of the Symposium is expressed in the Stockholm Memorandum:
• Our call is for fundamental transformation and innovation in all spheres and at all scales in order to stop and reverse global environmental change.
• Greatly increase access to reproductive health services... reduce birth rates.
• Introduce strict resource efficiency standards.
• Launch a major research initiative on the earth system.
• Scale up our education efforts to increase scientific literacy.

This is nothing but a Brave New World... but is there a place for a Princess in this Brave New World? A resource-efficient renewable Princess?

tisdagen den 17:e maj 2011

Free Will and Finite Precision Computation 3

This is a continuation of Free Will 2: So can we make it rain tomorrow by leaving the car window open? Can the flap of a butterfly in Brazil set up a tornado in Texas? How can we tell? Well, we have already answered this question: take away the butterfly and observe tornados anyway, close the window and observe rain anyway. Or let the butterfly flap and observe no tornados, open the window and observe no rain. Evidently we are talking about a big effect (tornado, rain) from a small cause (butterfly, car window), which is only possible if the system under consideration is unstable. Why? Because the definition of an unstable system is that a small cause can have a big effect.
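The definition of instability as "small cause, big effect" can be illustrated with the chaotic logistic map, a standard textbook example not taken from the post:

```python
# In the chaotic logistic map a 1e-12 perturbation is amplified to order
# one, while a contracting map keeps it small forever.
def max_gap(f, x, y, steps):
    """Largest separation reached between the two trajectories."""
    gap = abs(x - y)
    for _ in range(steps):
        x, y = f(x), f(y)
        gap = max(gap, abs(x - y))
    return gap

unstable = lambda x: 4 * x * (1 - x)   # chaotic: stretches small differences
stable = lambda x: x / 2               # contraction: shrinks them

x0, eps = 0.3, 1e-12
print(max_gap(unstable, x0, x0 + eps, 60))  # grows to order one
print(max_gap(stable, x0, x0 + eps, 60))    # never exceeds the initial 1e-12
```

The unstable map is the butterfly's world; the stable one is the world in which leaving the car window open cannot possibly change tomorrow's weather.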
If the effect of every small cause is small, then the system is stable. Most of the systems we can observe are (more or less) stable, because unstable systems tend to break down or explode into non-existence. Is the weather unstable? Well, we say that the weather is unstable when changes are unpredictable, and we know that this is often the case. How unstable, then, can the weather be? Can it be so unstable that the flap of a butterfly can cause a tornado? Probably not. We expect that sufficiently small perturbations cannot change the major features of the weather and cause a tornado. This means that it is irrelevant whether a butterfly flaps or not, or if we leave the car windows open or not, as concerns tornados and rain.

If we accept that small causes do not change major features, that is, that we are dealing with a (more or less) stable system as a typical system which we may be confronted with, then we could say that we could leave certain little things to be determined by chance, by throwing a die:
• It would not change anything essential.
• It would save us time for essentials by avoiding getting drowned in pedantry.
• In fact, it would be necessary in order not to get bogged down by details.
• In other words, we would have to act with finite precision in order not to get stuck on the spot at a specific point in time.
• Time is advancing and so we have to advance as well, and thus we have to take decisions with finite precision only, because we have no time to do everything with infinite precision.

We are now approaching the question of free will. Can we do anything we could think of doing? No, our abilities are limited, but within these limits we would say that we have some form of free will. We could decide what to study at the university, with whom to engage, how to dress, what to eat, what to say, but all these decisions could fit into some form of master plan for our life, which we probably should search for if we don't have any.
Our free will would thus not be entirely free but subordinate to a master plan, which we may have chosen by free will or inherited from our parents, spouse, friends or society. So maybe our free will as concerns big things is not that free, as if the main pattern of our life is largely predetermined. We could still argue that we have a free will to decide little things, what movie to see, what to have for dinner etc., but we could also say that we will only spend limited time on these issues to find the "optimal solution". We could even use a die to decide, if we cannot easily make up our mind or come to some agreement with somebody. But you would not, like Luke Rhinehart, decide big things, such as getting divorced or not, by throwing the dice, because that would quickly ruin your life.

In short, you would act with finite precision and feel that you have some form of free will, in particular for little things, possibly exercised using dice, while you may feel that the main path of your life (or at least other people's lives) is more or less predetermined. This corresponds to something between full determinism (no dice) and full indeterminism (all dice), as a form of finite precision determinism (dice only for little things). In other words, a free will which is not completely free, but not completely unfree either:
• a finite precision free will.

PS Suppose Tom wants to show Harry that he has a free will. Consider the following conversation:

Tom: Look, I can decide to lift my left arm or my right arm according to my own free will.
Harry: How do you decide to lift the left or the right? Do you have some predetermined preference?
Tom: Of course not, then it would not be free will.
Harry: OK, but if you are completely neutral, how are you going to decide?
Tom: Let me think...should I lift the right arm...or should I lift the left...what could be a good reason to lift the right arm...instead of the left...well, I cannot really decide...I need more time...
but even so I don't know how to choose while staying fully neutral...
Harry: Can I offer some help? What about flipping a coin?
Tom: Flipping a coin? Yes, that must be the only possibility which is completely neutral, without any predetermined prejudice for right or left. That's what I will do to not get held up by this silly test...
What Does a Nobel Laureate Understand about CO2? 1
Murray Gell-Mann, Nobel Prize in Physics in 1969 for his work on the classification of elementary particles, is one of the Nobel Laureates to decide about the future of humanity at the Nobel Laureate Symposium on Global Sustainability at The Royal Swedish Academy of Sciences, May 16-19, 2011:
• Evidence that the Earth is warming by human emissions of greenhouse gases is unequivocal
• fossil fuel raising CO2 above the limits of the Holocene
• exit door from the Holocene had been opened
• Great Acceleration: human population tripled, consumption in the global economy grew many times faster
• Great Acceleration has not been an environmentally benign phenomenon
• eroding the Earth's resilience, ocean acidification.
The agenda for the meeting is presented by The German Advisory Council on Global Change, chaired by Prof. Dr. Hans Joachim Schellnhuber, as a Summary for Policy Makers: World in Transition, A Social Contract for Sustainability:
• carbon-based model unsustainable
• low-carbon society is a Great Transformation
• global energy system decarbonised
• greenhouse gas emissions absolute minimum
• low-carbon societies
• quantum leap for civilisation
• universal consensus
• Global Enlightenment
• new social contract
• science subservient role
• sustainability is a question of imagination.
The purpose of the meeting is to get Nobel Laureates of Physics and Chemistry to confirm on scientific grounds that CO2 emission is the big threat to human civilization. The Nobel Laureates will form the jury of a Tribunal facing Humanity with charges of destroying the Earth (by CO2 emission).
We ask the questions:
• Do Nobel Laureates understand the role of CO2 for global climate?
• Do Nobel Laureates say that society will have to be decarbonized by 2050?
and will report on the answers... stay tuned...
PS Johan Rockström (organizer) and Andreas Carlgren (minister) write in DN Debate to prepare the Swedish opinion:
• To avoid catastrophic climate change, many scientists believe that CO2 emission from fossil fuels must stop by 2050.
• This requires resources for renewable energy of unprecedented size.
Note the term many scientists, not as before all scientists... This seems to be an acknowledgement that there are also many scientists who consider CO2 emission to not be harmful at all. What if the jury was changed to the latter group of scientists? What would the charges then be? Who would then sit on the accused bench?
Monday, May 16, 2011
Free Will and Finite Precision Computation 2
Continuation of Free Will 1: David Foster Wallace studies the logical argument for the fatalism of Richard Taylor, going back to Aristotle, stating that we cannot do anything other than what we actually do, in other words that the future is predetermined and free will is only an illusion. The logic is that a statement of the form "Tomorrow it will rain" is either true or false: If it is true then it will have to rain, and if it is not true then it cannot rain, in both cases showing that what will happen tomorrow is predetermined. Of course, this is a simple logical trick based on the idea that a statement must be either true or not true. But nothing says that this must apply to the statement "Tomorrow it will rain". It is a statement without definite truth value when (today) it is uttered; only in retrospect, knowing the outcome, is it possible to assign it a truth value, and then the predetermination disappears. But there are other arguments showing that the future is determined by the present.
The best one is that of Laplace's demon: Laplace's demon would be able to tell today if it is going to rain tomorrow by simply computing the solution to the equations of motion describing the evolution of the particles (atoms, molecules) making up the climate system, and would thus be able to make a necessarily true prediction of a coming event, thus showing that it is predetermined. But is there a Laplace demon? Human computing power is capable of solving the equations of motion for particle systems of millions or billions of particles (10^6 - 10^9), but not for the octillions (10^30) of particles of real systems. We are all too familiar with the fact that human intellect cannot tell for sure if it is going to rain tomorrow. But is the weather still predetermined? Is there anything we can do by free will to make it rain or not? Can we make it rain tomorrow by leaving our car windows open? The investigation continues...
Self-publishing on Google Books?
I have published the following new books of mine on Google Books as fullview with free PDF download: The idea is to compare direct publishing on Google Books with self-publishing on e.g. Amazon CreateSpace, or with conventional publishing through an established publisher as ebook or printed book. It appears that Amazon CreateSpace requires conversion of PDF to a different ebook format, which is not automatic and is tricky if math formulas are involved. Maybe somebody has some good advice to give.
Sunday, May 15, 2011
Free Will and Finite Precision Computation 1
In recent books I have shown that the concept of finite precision computation, in reality in analog form and in simulation of reality in digital form, can be used to give rational deterministic (mathematical) explanations of the following phenomena:
• direction of time
• 2nd law of thermodynamics
• blackbody radiation,
which have evaded explanation using both classical deterministic exact mathematics and classical statistical physics.
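The computing-power gap in the Laplace's demon argument above (10^6 - 10^9 particles feasible, 10^30 not) can be made concrete with a back-of-the-envelope sketch. The O(N^2) pairwise-force cost model and the 10^18 flop/s machine speed are my own illustrative assumptions, not figures from the post:

```python
# Back-of-the-envelope cost of one time step of a naive N-body
# simulation: pairwise forces, hence on the order of N^2 operations
# per step. The machine speed of 1e18 flop/s (exascale) is an
# assumed figure for illustration.

def seconds_per_step(n_particles, flops_per_second=1e18):
    """Rough wall-clock time for one O(N^2) force evaluation."""
    return n_particles**2 / flops_per_second

# 10^9 particles: within reach of human computing power.
print(seconds_per_step(1e9))    # 1.0 second per step

# 10^30 particles (a macroscopic system): hopeless.
print(seconds_per_step(1e30))   # ~1e42 seconds per step
```

Even granting the demon a perfect model, a single time step for a real system would take some 10^42 seconds on such a machine, which is the point of the argument.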
Finite precision computation opens classical exact determinism to some imprecision or indeterminism, without going all the way to the full indeterminism of statistical physics, and thus avoids the impossibility of both extreme determinism and extreme indeterminism. In finite precision computation, little things may be decided by throwing a dice, corresponding to chopping a decimal expansion to a finite number of digits, while big things may still be fully deterministic. The concept can be described as one of the following options for using a dice throw to decide what to do:
• Full Determinism: Calculate everything exactly. Never throw a dice.
• Full Indeterminism: Calculate nothing. Always throw a dice.
• Finite Precision: Calculate the big. Throw a dice to decide the small.
Full Indeterminism is represented by the cult novel The Dice Man by George Cockcroft, about the psychiatrist Luke Rhinehart, who decides to let the dice decide everything, with catastrophic results from using it to decide big things like getting divorced or not. Full Determinism is represented by the fatalism of Richard Taylor exhibited by the cult author David Foster Wallace, who took his own life on Sept 12, 2008, maybe after asking the dice to decide to pull the trigger or not. Wallace wrote a college thesis on Taylor's fatalism with the title Fate, Time, and Language: An Essay on Free Will, republished in 2010 by Columbia University Press. Can Finite Precision Computation be used to shed some light on the eternal philosophical problem of Free Will? I will address this question in a sequence of posts, while reading a bit of Wallace. I will start with the following question:
• Is it helpful to let a dice decide little things?
Wednesday, May 11, 2011
Has the Professor Abolished Himself?
In the new Higher Education Act (Högskolelagen), it is the head of department who "leads the operation", i.e. decides what is to be done and said, while the professor/teacher "takes care of" education and research, i.e. does the job, on orders from the management.
I take this up in a post on Newsmill, based on my experiences of censorship and gagging at KTH, reported under KTH-gate. Naturally the article was rejected by SvD, and DN was of course out of the question. Gag on! Ultimately this is about academic freedom of thought, about who is to decide what is current scientific truth: the professor/scientist or the administrator/politician? My professor colleagues around the country are remarkably indifferent to the question, as if it did not concern them:
• Can it really be that the professor has abolished himself without anyone noticing anything, much less saying anything?
• Without the professors' union SULF having any objection?
• Perhaps the new university does not need any professors whose task is to think independently?
• Has the professor let himself be gagged without protesting?
The debate on Newsmill may give answers.
PS1 As for gagging, censorship and the silencing of critical voices, it is of course effective as long as it works 100%, but that requires all channels to be closed and the surveillance to be total. This is, however, hard to achieve in today's new information world: The climate debate has now been taken over by the free blogosphere, and politically correct thinking has lost its hegemony over scientific truth. Something for KTH, DN and SvD to consider, perhaps. There are also blinkers to put on.
PS2 Funnily enough, KTH is arranging a Symposium on Academic Leadership on May 13 in honour of Ingrid Melinder (who is not a professor), where of course the administrators who applied the gag, with the willing assistance of Melinder, will speak: Peter Gudmundson and Folke Snickars. Perhaps an occasion to bring up KTH-gate? Probably not: only administrators get to speak about academic leadership. Professors must keep quiet in the new university (except Mathias Uhlen with his 800 million/year), while the Scout Association gets to develop its leadership philosophy, for the university.
Monday, May 9, 2011
SULF on Censorship and KTH-gate
After being subjected to censorship with a direct personal attack by KTH, backed by President (Rektor) Peter Gudmundson, which I have reported in a series of posts under KTH-gate, I turned to my union SULF to see if I could get any support. I presented my case at a meeting with union lawyer Carl Falck, who thereafter took part in a meeting with the President, supported by his adjutant Anders Lundgren, at which it became very clear that the President had not the slightest scruple about seriously damaging my professional activity through his actions. Anders Lundgren was careful to point out that KTH never (never) comments on statements in the press attributed to the President, even if they are grossly incorrect and severely hurt the person subjected to the incorrect, derogatory statements. Never! KTH has principles, and KTH follows its principles, even when it is tough. Peter Gudmundson is a former hockey player and is used to hard pucks. Carl Falck informed me shortly afterwards, after having met Anders Lundgren without my presence:
• We have discussed the question on several occasions within the union, but have now come to the conclusion that there is at present no room for us to pursue this question further. The answer I can give you, and it is our common answer, is that we will not act further in this matter.
Union chair Anna Götlind confirms:
• As SULF's union lawyer Carl Falck has previously informed you, SULF will not assist you further in your case.
• SULF is always in favour of debate on academic freedom, but as a union we cannot debate individual cases.
Well, what is one to say about this? One certainly cannot say that SULF has given me much support. From my point of view, SULF has rather made my situation more difficult by seemingly aligning itself completely with the KTH management.
Carl Falck says that there is at present no room, while Anna Götlind uses the argument that SULF does not take up individual cases, as if there would after all be room if only my case were not so terribly individual. I have dutifully paid my fees to SULF for 40 years (at least 100,000 SEK) without ever bothering them with a single little case. When I finally turn to SULF in an exposed situation, I get the cold shoulder. Of course I feel rather stupid for having fallen for such a dud. But I am probably not alone in the old faithful crowd who believed that the union was there for its members. Among the young, paying union fees is probably not so popular. The SULF statutes say:
• SULF has the task of safeguarding and monitoring the members' union, social and economic interests, and of representing the members in such questions.
Should I interpret this as meaning that my professional interests in my role as professor lie outside SULF's area of responsibility? Does SULF not take up individual cases, but only cases that concern all members? Must all members have been subjected to censorship for SULF to take up the question? Let us hear what Anna Götlind answers:
• I will not answer. I have already answered that SULF will not assist you in this case.
Eventually I will have to summarize my experiences of SULF in a debate piece in the union's journal Universitetsläraren, unless that too is censored away... but there is always Newsmill... a new post is coming soon...
Definition as Physical Fact
In science and philosophy the distinction between synthetic and analytic statements is fundamental, according to Kant's Critique of Pure Reason. An analytic statement is about language, and its truth can be evaluated by checking the meaning of the words forming the statement. A definition is analytic, as a specification of the meaning of a new word in terms of previously defined words, e.g. bachelor as unmarried man.
A synthetic statement is about some reality and can in principle be checked by observing that reality. The statement "1 meter is equal to 100 centimeters" is analytic, while the statement "this stick is 1 meter long" is synthetic. To subject an analytic statement to experimental observation would be ridiculous: To check by experiment that there are 100 centimeters in 1 meter would not give a Nobel Prize, just laughs. So if an experiment is set up to test a statement, that is a sign that the statement is viewed as synthetic. In modern physics the distinction between a definition (analytic statement) and a synthetic statement is sometimes blurred into statements which are viewed as both analytic (true by definition) and synthetic about some reality, or rather sometimes analytic and sometimes synthetic, sometimes definition and sometimes fact. Such a statement makes it possible to say something about reality which cannot be denied, and it is directly recognized as such. When you hear a physicist making a statement claiming that something cannot be denied, then the statement is such a double analytic-synthetic statement. Here are two key examples:
1. The speed of light in vacuum is constant.
2. Heavy mass is equal to inertial mass.
The constancy of the speed of light is a definition since, according to the 1983 standard, the length unit of a meter is defined as a certain fraction of a light-second, the distance traveled by light in one second. The speed of light is thus by definition equal to 1 light-second per second, no more, no less. On the other hand, a physicist is convinced that the speed of light is constant as a physical fact. A physicist would say that because the speed of light is constant in reality, it can be used to define the length standard. So we have a definition which is a physical fact at the same time: double analytic-synthetic.
Einstein was a master of this form of double-play: The basic assumption of special relativity is that the speed of light is constant, and Einstein uses this statement sometimes as analytic and sometimes as synthetic. Very clever and very confusing. But according to Kant it is not reasonable. In general relativity Einstein uses the equality of heavy and inertial mass both as definition and physical fact. In this case experimental verification of the equality could give a Nobel Prize. In climate science the following statement is the very basis of climate alarmism:
• No-feedback climate sensitivity is equal to 1 C, with climate sensitivity the global warming from doubled atmospheric CO2.
This is presented as an undeniable fact and as such is an example of a double analytic-synthetic statement. The 1 C comes from a direct application of Stefan-Boltzmann's radiation law Q = sigma*T^4, in its differentiated form dQ = 4*sigma*T^3*dT, that is dT = T*dQ/(4Q), with Q ~ 240 W/m2, T ~ 288 K and dQ = 4 W/m2 as "radiative forcing" from doubled CO2. Thus dT ~ 1 C as climate sensitivity. This statement is analytic because the simple algebraic law Q = sigma*T^4 cannot tell anything about the reaction of the complex Earth-atmosphere system upon a small perturbation. So climate sensitivity = 1 C is a definition, but it is used as a statement of factual global warming of 1 C. It is a double analytic-synthetic statement, and it is recognized as an undeniable fact about reality. It is so undeniable that even skeptics like Lindzen, Monckton and Spencer are convinced that it is a true fact and not just a definition. We have just learned that a double analytic-synthetic statement can be extremely powerful, the very basis of climate alarmism, yet it is easy to discover as soon as one is aware of the double-play. I hope the reader is stimulated to find other examples of double analytic-synthetic statements used in the debate today. They are not difficult to find once the light is on.
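The no-feedback arithmetic above can be checked in a few lines. Differentiating the Stefan-Boltzmann law Q = sigma*T^4 gives dQ = 4*sigma*T^3*dT, i.e. dT = T*dQ/(4Q); the numbers are those quoted in the post:

```python
# The "no-feedback sensitivity" calculation quoted above:
# differentiate Stefan-Boltzmann Q = sigma*T^4 to get
# dQ = 4*sigma*T^3*dT, i.e. dT = T*dQ/(4*Q).
# Default values are the figures quoted in the text.

def no_feedback_sensitivity(T=288.0, Q=240.0, dQ=4.0):
    """Warming dT (in K) from a radiative forcing dQ (W/m^2)."""
    return T * dQ / (4.0 * Q)

print(no_feedback_sensitivity())  # 1.2 K, i.e. roughly the quoted "1 C"
```

Note that sigma cancels out, which is why only the quoted Q, T and dQ are needed.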
For example, what about the statement:
• Educated people are superior to not so well educated people!
Definition or fact, or both?
Sunday, May 8, 2011
The Final Solution by The Royal Swedish Academy
• Together with Stockholm Environment Institute, Stockholm Resilience Centre, Beijer Institute for Ecological Economics and Potsdam Institute for Climate Impact Research, the Royal Swedish Academy of Sciences will bring together some of the world's most renowned thinkers and experts on global sustainability, 16-19 May 2011 in Stockholm. Only for invited guests.
• Normatively, the carbon-based economic model is also an unsustainable situation.
• The transformation towards a low-carbon society is therefore as much an ethical imperative as the abolition of slavery and the condemnation of child labour.
• This structural transition is the start of a "Great Transformation" into a sustainable society, which must inevitably proceed within the planetary guard rails of sustainability.
• By the middle of the century, the global energy systems must largely be decarbonised.
• Production, consumption patterns and lifestyles in all of the three key transformation fields must be changed in such a way that global greenhouse gas emissions are reduced to an absolute minimum over the coming decades, and low-carbon societies can develop.
• The extent of the transformation ahead of us can barely be overestimated.
• In terms of profound impact, it is comparable to the two fundamental transformations in the world's history:
• the Neolithic Revolution, i.e. the invention and spreading of farming and animal husbandry, and the Industrial Revolution, meaning the transition from agricultural to industrialised society.
• This would be something of a quantum leap for civilisation.
• It should in principle also be possible to reach a universal consensus regarding human civilisation's ability to survive within the natural boundaries imposed by planet Earth.
• This necessarily presupposes an extensive "Global Enlightenment".
• So nothing less than a new social contract must be agreed to.
• Science will play a decisive, although subservient, role here.
• Ultimately, sustainability is a question of imagination.
In other words, a Final Solution to the Carbon Question will be presented by the Royal Swedish Academy of Sciences. The basic idea is to comb Europe through from West to East and from North to South for carbon and transport it to Eastern Poland, where it will be gassed (in special camps, see picture above). This will secure a carbon-free sustainable Europe, which will serve as a model for the rest of the world, including its 3 billion people who still do not have access to essential modern energy services. The Symposium will conclude with a memorandum signed by key Nobel Laureates, crowned by a dinner hosted by King Carl XVI Gustaf. Among the invited 50 of the world's most renowned thinkers, we find:
• Martin Rees, President of the Royal Society
• Mikhail Gorbachev, Nobel Peace Prize 1990
• Andreas Carlgren, Swedish Minister of Environment
• Murray Gell-Mann, Nobel Prize in Physics 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions
• David Gross, Nobel Prize in Physics 2004 for the discovery of asymptotic freedom in the theory of the strong interaction
• Johan Rockström, Stockholm Resilience Centre
• Anders Wijkman, Stockholm Environment Institute.
Note that the key Nobel Laureates, when signing the memorandum, accept that science will play a subservient role. It is natural to compare with the Manifesto of the Ninety-Three and the suppression of quantum mechanics and relativity in the Soviet Union as "idealistic" and "bourgeois", and in Nazi Germany as "Jewish physics".
Friday, May 6, 2011
Presentation at Stockholm Initiative: The IPCC Trick
The Hustler (1886-1905) by Ernst Josephson (student at the Royal Academy of Fine Arts in 1867)
Here is a summary of a short presentation at the annual meeting of the Stockholm Initiative at the Royal Academy of Fine Arts, May 7.
1. IPCC Climate Sensitivity = 3 C
• The CO2 climate alarmism of IPCC is based on an estimate of climate sensitivity (global warming by doubled CO2) of 3 C, obtained by positive feedback from a no-feedback sensitivity of 1 C.
• No-feedback sensitivity is obtained by definition from Stefan-Boltzmann, dQ = 4*sigma*T^3*dT, with dQ = 4 W/m2 assumed "radiative forcing" from doubled CO2.
• Note: A definition says nothing about reality. The 4 W/m2 of "radiative forcing" is a theoretical assumption rather than observed reality. Insolation constant.
2. The Question
• What is the global warming effect of a 1 % change of atmospheric radiative properties?
• 4 W/m2 is about 1 % of gross insolation of 360 W/m2
• 3 C = 1 % of gross temperature 288 K
• Reasonable?? Unreasonable??
3. Observation + Simple Models: Climate Sensitivity = 0.3 C
Combining basic mathematical models and direct observation of
• temperatures, lapse rate, insolation and thermodynamics,
one obtains a climate sensitivity which is 10 times smaller than IPCC's:
• 1 % change of atmospheric radiative properties
• 0.3 C is about 1 % of the "atmospheric effect" of 33 C (= 288 - 255 K)
• wellposed (stable): 1 % forcing gives 1 % = 0.3 C
4. IPCC Trick: Backradiation
• Real radiative exchange between surface and atmosphere: 30 - 60 W/m2
• 1 % change of atmospheric properties: 0.3 - 0.6 W/m2 net radiative forcing
• IPCC backradiation exchange: 300 - 400 W/m2
• 1 % change of atmospheric properties: 4 W/m2 gross radiative forcing
• view 3 C as 1 % of gross temperature 288 K, not 1 % of the "atmospheric effect".
5. Backradiation Fiction
In Computational Blackbody Radiation I give a new mathematical derivation of Planck's radiation law showing that backradiation is fiction. This is mathematical evidence that the 3 C of IPCC is based on fiction: 10 times too big.
6. Wellposedness: Butterfly in Brazil vs Tornado in Texas
IPCC claims that a small cause (1 % or 0.1 % change of atmospheric properties) can have a big effect (global warming of 3 C = 10 % of the atmospheric effect of 33 C).
7. The Lorenz Model
Can a butterfly in Brazil set off a tornado in Texas?
• Can be disproved by removing the butterfly and observing tornados.
• Can never be proved, because a very precise model is required (both butterfly and tornado).
Requires an unstable system: small cause - big effect.
8. Is global climate unstable?
Observations say No rather than Yes. Atmosphere as air conditioner: Radiative forcing changes the intensity of the thermodynamics with little temperature change. Compare with boiling water: heat forcing gives more vigorous boiling at steady temperature.
9. KTH-gate
KTH censored my mathematical analysis of climate models. Unique in (Swedish) modern academic history (after 1632). At present my professors' union SULF hesitates to take up my case, as if my union and KTH were acting in tandem to silence my voice. How is this possible? Well, in the new university system in Sweden of 2011, it is the administrative hierarchy of rector, dean and prefect which determines the scientific truth, and not the professor (as during 1632 - 2010). The censorship of my work is therefore fully logical and apparently accepted even by the professors' union, and also by Swedish professors. Only one has questioned the censorship, Ingemar Nordin.
Thursday, May 5, 2011
The IPCC Trick 6
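The Lorenz model mentioned in the presentation above can be sketched numerically. The parameter values (sigma=10, rho=28, beta=8/3) and the forward-Euler time stepping are standard textbook choices, my assumption, not details from the presentation; the point illustrated is the one made there: in an unstable system a tiny perturbation ("the butterfly") grows into a large effect.

```python
# The classic Lorenz-63 system, integrated with a simple forward-Euler
# scheme. Two runs started 1e-8 apart in the initial x-coordinate end
# up far apart: small cause, big effect, in an unstable system.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(x0, steps=8000):
    state = (x0, 1.0, 1.05)
    traj = [state]
    for _ in range(steps):
        state = lorenz_step(*state)
        traj.append(state)
    return traj

ta = trajectory(1.0)
tb = trajectory(1.0 + 1e-8)  # tiny perturbation: the "butterfly"
max_sep = max(abs(p[0] - q[0]) for p, q in zip(ta, tb))
print(max_sep)  # typically order 10: the 1e-8 difference has grown enormously
```

Note that, as the presentation points out, this only demonstrates sensitivity in the model; whether the real climate system is unstable in this sense is a separate question.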
History and Philosophy of Physics 1204 Submissions
[2] viXra:1204.0104 [pdf] submitted on 2012-04-30 17:26:23
Gravity: The Subatomic Electrical Contraction of Space and It's Relationship to Einstein's General Theory of Relativity
Authors: Keith D. Foote
Comments: 8 Pages. Alternative model of gravity.
This description of gravity is based on the Ultra-Space Field Theory. The Ultra-Space Field Theory is an associative field theory model, which describes the behavior patterns of kinetic energy, electrons, positrons, magnetic fields, gravity, and their predictable interactions. Joined positrons and electrons are described as the source of gravity. Additionally, Einstein's General Theory of Relativity is reexamined and compared with the USF Theory's model of gravity.
Category: History and Philosophy of Physics
[1] viXra:1204.0064 [pdf] submitted on 2012-04-16 04:51:42
Quantum Theory: Undulating Foundations, Uncertain Principles?
Authors: Sosale Chandrasekhar
Comments: 22 Pages.
Intriguing questions from the early history of quantum theory (QT) raise serious doubts about the accepted theory of black body radiation. (The Planck theory builds on the apparently flawed Rayleigh-Jeans approach.) Furthermore, the validity of the theory of diffraction, the basis of the wave theory of radiation and matter, seems uncertain. Together, these raise fundamental questions about the foundations of QT and its current status. The apparently symbiotic relationship between QT and the theory of atomic and molecular structure, a key paradigm of modern scientific thought, may be a misleading indicator of the validity of QT. The protocols deriving from the Schrödinger equation lead to quantized states, but only along with certain assumptions. It is argued that QT applies uniquely to the interaction of electromagnetic radiation with matter, and that its scope is not universal: thus, it is best regarded as a quasi-empirical formalism.
It is possible that the uncertainty surrounding QT is a legacy of the troubled historical period during which it was founded. This apparently has fascinating implications for the history and philosophy of science in general. Category: History and Philosophy of Physics
Psychology Wiki
Schrödinger's cat
[Figure caption: Schrödinger's Cat: If the nucleus in the bottom left decays, the geiger counter on its right will sense it and trigger the release of the gas. In one hour, there is a 50% chance that the nucleus will decay, and therefore that the gas will be released and kill the cat.]
Schrödinger's cat is a seemingly paradoxical thought experiment devised by Erwin Schrödinger that attempts to illustrate the incompleteness of an early interpretation of quantum mechanics when going from subatomic to macroscopic systems. Schrödinger proposed his "cat" after debates with Albert Einstein over the Copenhagen interpretation, which Schrödinger defended, stating in essence that if a scenario existed where a cat could be so isolated from external interference (decoherence), the state of the cat could only be known as a superposition (combination) of possible rest states (eigenstates), because finding out (measuring the state) cannot be done without the observer interfering with the experiment: the measurement system (the observer) is entangled with the experiment. The thought experiment serves to illustrate the strangeness of quantum mechanics and the mathematics necessary to describe quantum states. The idea of a particle existing in a superposition of possible states, while a fact of quantum mechanics, is a concept that does not scale to large systems (like cats), which are not indeterminably probabilistic in nature. Philosophically, these positions which emphasise either probability or determined outcomes are called (respectively) positivism and determinism.
The experiment
Schrödinger wrote:
[Figure caption: An illustration of both states, a dead and living cat. According to quantum theory, after an hour the cat is in a quantum superposition of coexisting alive and dead states. Yet when we look in the box we expect to only see one of the states, not a mixture of them.]
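The 50% decay chance in one hour described above corresponds to a half-life of one hour; the probability that the nucleus has decayed by time t then follows the standard exponential-decay law (standard radioactive-decay physics, spelled out here rather than in the article itself):

```python
# The setup above: a 50% chance of decay in one hour means the
# nucleus has a half-life of one hour. The probability that it has
# decayed by time t is 1 - 2^(-t / half_life).

def p_decayed(t_hours, half_life_hours=1.0):
    """Probability the nucleus has decayed by time t_hours."""
    return 1.0 - 2.0 ** (-t_hours / half_life_hours)

print(p_decayed(1.0))  # 0.5  -> the 50/50 situation after one hour
print(p_decayed(2.0))  # 0.75 -> after two hours
```

This is what fixes the equal weights of the "alive" and "dead" branches at the one-hour mark in the thought experiment.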
The experiment must be shielded from the environment to prevent quantum decoherence from inducing wavefunction collapse. The above text is a translation of two paragraphs from within a much larger original article, which appeared in the German magazine Naturwissenschaften ("Natural Sciences") in 1935: E. Schrödinger: "Die gegenwärtige Situation in der Quantenmechanik" ("The present situation in quantum mechanics"), Naturwissenschaften, 48, 807, 49, 823, 50, 844 (November 1935). It was intended as a discussion of the EPR article published by Einstein, Podolsky and Rosen in the same year. Apart from introducing the cat, Schrödinger also coined the term "entanglement" (German: Verschränkung) in his article. In posing this thought experiment, Schrödinger asked the question: when does a quantum system stop existing as a mixture of states and become one or the other? (More technically, when does the actual quantum state stop being a linear combination of states, each of which resembles a different classical state, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The purpose of the thought experiment is to illustrate this apparent paradox: our intuition says that no observer can be in a mixture of states, yet it seems cats can be such a mixture. Are cats required to be observers, or does their existence in a single well-defined classical state require another external observer?
Each alternative seemed absurd to Albert Einstein, who was impressed by the ability of the thought experiment to highlight these issues; in a letter to Schrödinger dated 1950 he wrote:
But perhaps it was inevitable that Einstein would be impressed with Schrödinger's cat: Einstein had previously suggested to Schrödinger a similar paradox involving an unstable keg of gunpowder, instead of a cat. Schrödinger had taken the next step of applying quantum mechanics to an entity that may or may not be conscious, to further illustrate the putative incompleteness of quantum mechanics.
Copenhagen interpretation
In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This experiment makes apparent the fact that the nature of measurement, or observation, is not well defined in this interpretation. Some interpret the experiment to mean that while the box is closed, the system simultaneously exists in a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat", and that only when the box is opened and an observation performed does the wave function collapse into one of the two states.
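The superposition and collapse described above can be put in the simplest possible mathematical form: a two-component state vector over the basis (alive, dead), with Born-rule probabilities |amplitude|^2 and "opening the box" as a random collapse. This is a toy sketch of my own; the function names and labels are not from the article:

```python
import random

# A minimal model of the cat's quantum state: amplitudes over the
# basis (|alive>, |dead>), Born-rule probabilities |amplitude|^2,
# and measurement as collapse to one definite outcome.

def born_probabilities(amplitudes):
    """Born rule: probability of each outcome is |amplitude|^2 (normalized)."""
    norm = sum(abs(a) ** 2 for a in amplitudes)
    return [abs(a) ** 2 / norm for a in amplitudes]

def measure(amplitudes, labels, rng=random):
    """'Opening the box': collapse to one outcome with Born probabilities."""
    probs = born_probabilities(amplitudes)
    return rng.choices(labels, weights=probs)[0]

# Equal superposition (|alive> + |dead>) / sqrt(2):
state = [2 ** -0.5, 2 ** -0.5]
print(born_probabilities(state))          # ≈ [0.5, 0.5]
print(measure(state, ["alive", "dead"]))  # one definite outcome, at random
```

The point of the thought experiment is precisely whether this formalism, uncontroversial for the nucleus, can be applied to the cat as a whole.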
More intuitively, some feel that the "observation" is taken when a particle from the nucleus hits the detector. Recent developments in quantum physics show that measurements of quantum phenomena taken by non-conscious "observers" (such as a wiretap) demonstrably alter the quantum state of the phenomena from the point of view of conscious observers reading the wiretap, lending support to this idea. A precise rule is that probability enters at the point where the classical approximation is first used to describe the system; almost by tautology, since the classical approximation is just a simplification of the quantum mathematics, it must introduce imprecision into the measurement, which can be viewed as probability. Note, however, that this applies only to descriptions of the system, not the system itself: in the quantum description, the cat carries amplitudes for "alive" and "dead" simultaneously.

Under Copenhagen, the amount of uncertainty for a complex quantum system is predicted by quantum decoherence. Particles which exchange photons (and possibly other atomic or subatomic particles) become entangled with each other from the point of view of an observer, meaning that these particles can only be described accurately with reference to each other, which decreases the total uncertainty of those particles from the point of view of that observer. By the time one has reached "macroscopic" levels, such as a cat, which is made up of an enormous number of atomic particles, so many particles have become entangled with each other as to decrease the uncertainty to almost zero. (Quantum effects in huge collections of particles are only seen in very rare, and often man-made, situations, such as a Bose-Einstein condensate.)
Thus, at least from the point of view of the observer, any indeterminacy regarding the cat as a system of quantum particles has disappeared due to the massive amount of entanglement between all of the particles that make it up, meaning that the cat does not truly exist as both alive and dead at the same time, at least from the point of view of any observer viewing the cat.

Even before observation was noted to be fundamentally distinct from consciousness through experimentation, the experiment always contained at least two "observers": the physicist and the cat. Even had the physicist been unaware of the cat's state in the hypothetical experiment, one would have had to posit that the cat, at least, would have been quite sure of its status (at least as long as the gas had not yet ended its ability to "observe"). However, since "observation" has been shown by experiment to have nothing to do with consciousness, or at the very least any traditional definition of consciousness, most conjecture along these lines probably falls under the "interesting but physically irrelevant" category.

Everett many-worlds interpretation and consistent histories

In the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process, both states persist, but decohered from each other. When an observer opens the box, he becomes entangled with the cat, so observer-states corresponding to the cat being alive and dead are formed, and each can have no interaction with the other. The same mechanism of quantum decoherence is also important for the interpretation in terms of consistent histories. Only the "dead cat" or the "alive cat" can be a part of a consistent history in this interpretation.
In other words, when the box is opened, the universe (or at least the part of the universe containing the observer and cat) is split into two separate universes: one containing an observer looking at a box with a dead cat, the other containing an observer looking at a box with a live cat.

Ensemble interpretation

In the ensemble interpretation, the Schrödinger's cat paradox is a trivial non-issue. In this interpretation, the state vector does not apply to individual cat experiments; it applies only to the statistics of many similarly prepared cat experiments. Indeed, the cat paradox was specifically constructed by Schrödinger to illustrate that the Copenhagen interpretation suffered fundamental problems. It was not intended as an example that quantum mechanics actually predicts that a cat could be alive and dead simultaneously, though some have made this further assumption.

Practical applications

This has some practical use in quantum computing and quantum cryptography. It is possible to send light that is in a superposition of states down a fiber-optic cable. Placing a wiretap in the middle of the cable, which intercepts and retransmits the transmission, will collapse the wavefunction (in the Copenhagen interpretation, "perform an observation") and cause the light to fall into one state or another. By performing statistical tests on the light received at the other end of the cable, one can tell whether it remains in the superposition of states or has already been observed and retransmitted. In principle, this allows the development of communication systems that cannot be tapped without the tap being noticed at the other end. This experiment can be argued to illustrate that "observation" in the Copenhagen interpretation has nothing to do with consciousness, in that a perfectly unconscious wiretap will cause the statistics at the end of the wire to be different. Yet one still cannot factor out the observation of the wiretap as having an effect upon the outcome.
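The statistical tap-detection just described can be sketched with a toy Monte Carlo model. This is an illustrative sketch under stated assumptions (a single fixed preparation basis, and a tap that always measures in the wrong basis), not a real quantum-key-distribution protocol; the function names are ours:

```python
import random

def send_photon(tapped, rng):
    """Transmit one photon prepared in the diagonal superposition
    (|0> + |1>)/sqrt(2); the receiver measures in that same diagonal
    basis.  Untapped, the outcome is always '+'.  An intercept-resend
    tap that measures in the computational {|0>, |1>} basis collapses
    the superposition, and the resent |0> or |1> then gives '+' or '-'
    with probability 1/2 each at the receiver.
    """
    if not tapped:
        return '+'
    return '+' if rng.random() < 0.5 else '-'

def fraction_plus(tapped, n=10000, seed=1):
    """Receiver-side statistic: fraction of '+' outcomes over n photons."""
    rng = random.Random(seed)
    return sum(send_photon(tapped, rng) == '+' for _ in range(n)) / n
```

An untapped line gives a '+' fraction of exactly 1, while an intercept-resend tap drives it toward 1/2, so the receiver detects the tap purely from statistics, with no reference to any conscious observer.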
In quantum computing, the phrase "cat state" often refers to the special entanglement of qubits in which the qubits are in an equal superposition of all being 0 and all being 1, i.e. (|00...0\rangle + |11...1\rangle)/\sqrt{2}.

A variant of the Schrödinger's cat experiment, known as the quantum suicide machine, has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that this may be able to distinguish between the Copenhagen interpretation and many-worlds. Another variant on the experiment is Wigner's friend.

Physicist Stephen Hawking once exclaimed, "When I hear of Schrödinger's cat, I reach for my gun," paraphrasing German playwright and Nazi "Poet Laureate" Hanns Johst's famous phrase "Wenn ich 'Kultur' höre, entsichere ich meine Browning!" ("When I hear the word 'culture', I release the safety on my Browning!"). In fact, Hawking and many other physicists are of the opinion that the "Copenhagen school" interpretation of quantum mechanics unduly stresses the role of the observer. Still, a final consensus on this point among physicists seems to be out of reach.
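The n-qubit cat state mentioned at the start of this section can be written down concretely as a vector of basis-state amplitudes: equal amplitudes on the all-zeros and all-ones basis states, zero everywhere else. A minimal sketch (the helper names are ours, not a standard API):

```python
from math import sqrt

def cat_state(n):
    """Amplitude vector (length 2**n) of the n-qubit cat state
    (|00...0> + |11...1>)/sqrt(2): equal amplitudes on the all-zeros
    and all-ones basis states, zero everywhere else."""
    psi = [0.0] * (2 ** n)
    psi[0] = 1 / sqrt(2)       # amplitude of |00...0>
    psi[-1] = 1 / sqrt(2)      # amplitude of |11...1>
    return psi

def measure_probabilities(psi):
    """Born-rule probabilities |amplitude|**2 for each basis outcome."""
    return [a * a for a in psi]
```

Measuring all qubits yields "all 0" or "all 1" with probability 1/2 each and nothing in between, which is exactly the alive/dead dichotomy of the cat analogy.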
Update (19 March 2012): Higgs versus Nambu-Goldstone bosons, supersymmetry and a neutrino condensate

W+ + W- + Z0 -> 2H0: 80.4 + 80.4 + 91.2 = 2(126) GeV

Copy of a comment submitted to:

There is an illustration here:
2W + Z -> 2H
truth quark + anti-truth quark -> H
(Dr Peter Woit's representation theory for electroweak symmetry)

Understanding spin is crucial to QFT. As shown in our paper (page 23, Figure 17), fermion spin results in angular momentum transfer in gauge boson exchange, producing magnetic fields, as Maxwell found in articles 822-3 of the final 1873 third edition of his Treatise on Electricity and Magnetism: "The ... action of magnetism on polarized light leads ... to the conclusion that in a medium ... is something belonging to the same mathematical class as an angular velocity ... We must therefore conceive the rotation to be that of very small portions of the medium, each rotating [spin angular momentum]." (See Fig. 15 on page 21 of my paper for the origin of Maxwell's theory.) Maxwell's deterministic magnetic field model, in which the spin of what are now called "field quanta" is the basis of magnetic fields, makes electromagnetism an SU(2) Yang-Mills theory, not essentially a U(1) theory as assumed by Feynman and Pauli for QED (see Figure 31 in my paper for how isospin and electric charge are then related under SU(2) in the standard model).
Abelian U(1) hypercharge still exists, but only as the basis for quantum gravity, giving mass to the weak bosons via Weinberg-Glashow mixing, which replaces the Higgs mass mechanism. The left-handed symmetry breaking due to mixing can still produce spin-0 Nambu-Goldstone bosons with a mass/gravitational charge of half the sum of the gravitational charges of the three weak bosons, (80 + 80 + 91)/2 = 125.5 GeV, and this accords with the Dirac spinor, the SU(2) Pauli spin matrices, and Weyl's 1929 argument that Dirac's spinor is chiral.

Copy of a comment submitted to: concerning the 2009 prediction by Dharwadker and Khachatryan of a (80 + 80 + 91)/2 = 125.5 GeV spin-0 massive Nambu-Goldstone boson:

Cooper pairs of spin-1/2 fermions produce a composite boson (condensate) explaining superconductivity, so since the Higgs spin-0 boson is already a boson, your case is that you are not going to have two Higgs fermions forming a Cooper pair. However, they do point out on pages 2-3: "Theoretically, it is known that the SM Higgs boson is one neutral quantum component of the Higgs field, along with another neutral and two charged components acting as Goldstone bosons." What they are really doing (so far as their prediction is valid, ignoring the BS arm-waving) is replacing this SM Higgs mechanism with a ~126 GeV spin-0 Higgs boson formed from two half-integer-spin particles (fermions). While "supersymmetry" (postulating an additional high-mass boson for every fermion in order to try to achieve similar couplings for all interactions at the Planck scale) is arm-waving, unfalsifiable speculation, there is a glimmer of relevant physics to be gained here if you go for a simpler and more predictive "supersymmetry" in which all bosons are composites of either massless or massive fermions.
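The mass arithmetic quoted above is at least trivial to verify; the boson masses below are just the rounded values used in the text, and the comparison with the ~126 GeV LHC resonance is the blog author's claim, not an established result:

```python
# Weak boson masses (GeV) as quoted in the text.
m_W, m_Z = 80.4, 91.2

# Half the summed masses of W+, W- and Z0: the quantity the text
# compares with the ~125-126 GeV boson observed at the LHC.
half_sum = (m_W + m_W + m_Z) / 2        # 126.0 GeV

# The coarser rounded variant quoted from the 2009 prediction.
half_sum_rounded = (80 + 80 + 91) / 2   # 125.5 GeV
```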
Hence, SU(2) can be thought of as having two different charges of spin-1/2 fermions and their antiparticles, which can combine in 2 x 2 = 4 ways, producing three distinct bosons with electric charges +1, -1 and 0 (there are two ways to get zero electric charge, thus a total of only three kinds of bosons from two charges of fermions).

Please see page 51 of Woit's 2002 paper "QFT and representation theory" (part 10, Speculative remarks about the standard model), where he shows that taking U(2) as a subset of SO(4) gives the standard model electroweak fermions with chiral features, for both leptons and quarks, if the hypercharge is selected to make the "overall average U(1) charge of a generation of leptons and quarks to be zero." This is the underlying physics of the so-called "Higgs boson" mass (mass is quantum gravitational charge, and the "Higgs mechanism" ignores this): since 1996 we have been publishing a predictive U(1) gauge gravity theory, and the charge of quantum gravity is mass, so Woit's 2002 argument about averaging hypercharge should also apply to the masses of the particles. If there are right- and left-handed weak gauge bosons, half of the mass (the right-handed spinors) is "dark matter" because of the short range (due to the mass) and the fact that it doesn't undergo weak interactions. So Woit's 2002 argument of averaging charges, applied to the gravitational charges (masses) of the weak bosons, with only half of them engaging in weak interactions, could substantiate the formula (80.4 + 80.4 + 91.2)/2 ~ 126 GeV.

Weyl's chiral electromagnetism was rejected by Pauli, who believed in parity conservation and thus apparently didn't understand Lenz's law in electromagnetism: all electrons in motion produce the same chiral helicity of magnetic field curl around the current, and this chiral magnetic field in Maxwell's theory is due to spin.
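The counting in the first paragraph above (2 x 2 = 4 pairings yielding only three distinct boson charges) can be checked mechanically. This is a sketch of the blog's counting argument only; the half-integer charge assignments are our illustrative assumption, not standard-model quantum numbers:

```python
from itertools import product

# Two hypothetical fermion charges and the corresponding antifermion
# charges, following the text's "two charges of spin-1/2 fermions".
fermion_charges = (0.5, -0.5)
antifermion_charges = (-0.5, 0.5)

# Pair each fermion with each antifermion and record the net charge
# of the composite boson.
pair_charges = [f + a for f, a in product(fermion_charges, antifermion_charges)]

# Four pairings, but two of them give zero net charge, leaving only
# three distinct composite-boson charges: -1, 0 and +1.
distinct_bosons = sorted(set(pair_charges))
```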
Thus, the curl of the magnetic field around a current is indicative of the chiral SU(2) nature of electromagnetism, if Maxwell's theory of electromagnetism is correct. The problem for Woit is that spin quantum numbers are essential in quantum mechanics for the Pauli exclusion principle, which is an electromagnetic effect, not a weak-force SU(2) interaction. Thus, there is empirical evidence for SU(2) spinor phenomena in electrodynamics. This indicates that U(1) is not the QED symmetry. Dr Thomas S. Love also makes this point by quoting Hans C. Ohanian's article "What is spin?" (American Journal of Physics, v. 54, 1986, pp. 500-5): "... contrary to the common prejudice, the spin of the electron has a close classical analog: it is an angular momentum of exactly the same kind as carried by the fields of a circularly polarized electromagnetic wave."

However, Gerard 't Hooft rejects the spin by using a false argument based on a solid electron (which doesn't exist), stating on page 27 of his 1997 Cambridge University Press book In Search of the Ultimate Building Blocks: "the 'surface of the electron' would have to move 137 times as fast as the speed of light." This is a false objection to spin, since the classical solid model of an electron upon which this calculation is based is wrong: the electron doesn't have a surface moving faster than light. The spin is conveyed by field quanta, not by a classical solid electron revolving like a planet.

Nitpicker (January 3, 2012 at 11:02 am): "A tad puzzled why you say 'Spin(2n) as a double cover of SO(2n)'. For example Spin(3) = SU(2) is the double cover of SO(3) is the classic example of spin angular momentum."

Peter Woit (January 3, 2012 at 11:43 am): "In the course I'll certainly discuss the relationship between SO(3) and Spin(3)=SU(2) and their reps, but for the general case of SO(n) and Spin(n), even and odd n behave somewhat differently.
In the even case there's a beautiful parallelism with the symplectic group which I want to discuss, so that's the case I'll work out in detail. If you take a look at the old lecture notes linked to, maybe you can see what I'm doing."

Woit's 2002 paper on QFT and representation theory offers, at page 51, an interesting and relevant U(2) representation in 4-d spacetime SO(4), which yields the correct chiral electroweak particle charges. This is interesting because, as far as Woit is concerned, U(2) produces U(1) x SU(2), which is fair enough mathematically; but from our point of view the U(1) quantum gravity still contributes effectively (akin to hypercharge in the standard model) to SU(2) by Weinberg-Glashow mixing, although the actual mechanism is that the fractional SU(2) electric charges simply share field energy with mass (gravitational charge), as our model predicts. U(1) not only gives mass to the SU(2) left-handed weak bosons by Weinberg-Glashow mixing, replacing the Higgs mass mechanism (although you can still have spin-0 massive Nambu-Goldstone bosons from the resulting symmetry breaking); it also made a checkable, accurate prediction of dark energy two years ahead of its discovery, and of gravitation.

General relativity is just a classical approximation, in which the Weyl quantum-gauge-type backreaction on the gravitational field is modelled by the contraction of the metric due to mass-energy. (This has nothing to do with Weyl's earlier 1918 quantum gravity theory, which incorrectly quantized the metric, as explained in my paper.) In the Einstein field equation, which relates the Ricci curvature tensor to the stress-energy field source tensor, the product of the Ricci scalar and the metric represents the equivalent of the minimal coupling procedure in QED: the gravitational field is contracted due to the gravitational energy expended on mass.
In other words, the contraction term in general relativity is the nearest gravitational equivalent to the running coupling behind charge renormalization in QED. The gravitational field comes with only one sign of charge, not two as in electromagnetism, so it is not renormalized due to pair-production polarization like electromagnetism. But it is renormalized in the sense that mass-energy is conserved, and the use of gravitational field energy affects the mass-energy which is the source of the gravitational field. You can't do work by gravity without taking energy out of the gravitational field. Similarly, in electromagnetism, an electric charge can't polarize virtual charges without some of the electric charge energy being used (core field "screening").

If an apple falls off a tree and hits the ground with a thump, the energy of the sound waves has come from gravitons in the gravitational field which accelerated the apple, converting gravitational potential energy (offshell field energy) into the kinetic energy of the apple (onshell energy). This is the gravitational field "backreaction". In QED, when the electromagnetic field does work, for instance in polarizing the vacuum, the energy used to polarize the vacuum has a backreaction upon the charge, "screening" it. This is just conservation of mass-energy. You cannot do work ordering the vacuum without expending energy. Einstein's field equation contraction (needed to make both sides divergenceless, for energy conservation) is analogous to this backreaction in the electromagnetic field. The work done by the gravitational field on mass (holding a planet together, for example) is exhibited by the conversion of gravitational charge (mass) into this energy. This is equivalent to a contraction of spacetime in the vicinity of mass. In other words, general relativity is already equivalent to QED in terms of quantum field theory.
The major flaw of general relativity is the stress-energy tensor source term, which cannot correctly model discontinuous particles, but has to use "perfect fluid continuum" classical (smooth) approximations for the actually discontinuous distribution of matter. But the basic structure of Einstein's field equation, with the relativistic effects of the contraction term, correctly models energy conservation.

Multipath interference causes the indeterminism in quantum field theory

On 30 Nov 2011, we completed a 63-page, 7.5 MB draft revision of the standard model, including quantum gravity predictions and confirmations for particle masses, couplings, etc. This is based on quantum field theory: Feynman's approach to it, not Woit's. Woit's article in the American Scientist, "Grappling with Quantum Weirdness", claims that "quantum mechanics" (he doesn't distinguish between 1st and 2nd quantization: one wavefunction, or a path integral over separate wavefunctions for every path) "postulates that the state of a physical system is completely characterized by a vector in an infinite-dimensional vector space (the familiar quantum-mechanical 'wavefunction')". Actually, each wavefunction amplitude is given by exp(iS/ħ), and you sum an infinite number of these wavefunctions, one for each path. So, yes, on an Argand diagram this is represented by an infinite number of vectors (an infinite-dimensional Hilbert space), and the resultant (the integral of an infinite number of wavefunctions) is then equivalent to a single wavefunction of 1st quantization, but this is a false and woolly way of thinking. 1st quantization (a single wavefunction) is not relativistic and is not real: it's a mathematical artifact of non-relativistic quantum mechanics. It's wrong physically: there are field quanta, and the multipath interferences caused by these field quanta produce indeterminacy.
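The "sum over paths" claim above, that each path contributes a unit phasor exp(iS/ħ) and interference comes from adding them, can be illustrated numerically. This is a toy two-path example with made-up action values (in units where ħ = 1), not a real path integral:

```python
import cmath

def amplitude(actions, hbar=1.0):
    """Feynman's 'adding arrows': one unit phasor exp(iS/hbar) per
    path; the total amplitude is their sum, and the probability is
    the squared magnitude of that sum."""
    return sum(cmath.exp(1j * s / hbar) for s in actions)

# Two paths with equal action: the arrows line up (constructive
# interference), giving probability 4, not the classical sum of 2.
p_constructive = abs(amplitude([0.0, 0.0])) ** 2

# Two paths whose actions differ by pi*hbar: the arrows point in
# opposite directions and cancel (destructive interference).
p_destructive = abs(amplitude([0.0, cmath.pi])) ** 2
```

The departure of |sum of arrows|^2 from the classical sum of per-path probabilities is exactly the interference the text attributes to field quanta.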
The uncertainty principle is not a physical limit to understanding: in QFT it is caused by multipath interference from field quanta, as Feynman proves in his book QED (1985). Woit ignores this, proceeding instead with: "The general consensus of the physics community is that Bohr's point of view triumphed, enshrined in what became known as the 'Copenhagen interpretation' of quantum mechanics. According to Bohr, the state-vector of a physical system evolves in time according to the Schrödinger equation and does not typically have a well-defined value for classical observables like position and velocity. When the system interacts with an experimental apparatus, the state-vector 'collapses' into a state with a well-defined value of the observable being measured. In general, Bohr's interpretation works perfectly well operationally, but it is conceptually incoherent and leaves important questions unanswered. How exactly does this 'collapse' take place? ... Most physicists generally believe that quantum mechanics, in its relativistic version as a theory of quantum fields, is a complete, consistent and highly successful conceptual framework."

Woit shows he has no grasp of how 2nd quantization physically differs from 1st quantization. There is no single wavefunction for any particle: every particle has a separate wavefunction amplitude for every single potential and real interaction with an onshell or offshell particle. Its own field consists of offshell particles, with which it interacts. There is no single wavefunction! You always have a path integral, summing an infinite number of possible interaction paths. The Schrödinger equation has only a single wavefunction and is thus wrong: the real wavefunctions don't "evolve" or "collapse": "If you ... use the ideas that I'm explaining in these lectures – adding arrows [wavefunctions] for all the ways an event can happen – there is no need for an uncertainty principle!
… The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to be." – Richard P. Feynman, QED, 1990, pp. 55-56 and 84-85.

(Feynman's position is a path integral over off-shell scattering interactions of a particle with its own field, which is just Sir Karl Popper's argument on page 303 of his 1979 Oxford University Press book Objective Knowledge: "... the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [Popper, The Logic of Scientific Discovery, German ed., 1934] ... There is, therefore, no reason whatever to accept either Heisenberg's or Bohr's subjectivist interpretation of quantum mechanics.")

As Dr Love explains, the eigenstates in quantum mechanics are artificial discontinuities which produce wavefunction "collapse" mathematically, not physically, whenever a measurement is taken. There are no real eigenstates. The electron has a path integral of field-quanta interference which determines (to the electron, not to a human, who can't do the path integral accurately or non-perturbatively) where it is at any time, so there is no real wavefunction collapse (except in the 1st-quantization non-relativistic Schrödinger equation) when a measurement is taken. The point is, as Feynman explains very clearly, there is a difference between reality and 1st quantization. It is a lie that a single wavefunction exists; this is proved by the fact that the Schrödinger equation is non-relativistic and hence wrong. It is quantum-mechanics double-talk to claim that 1st quantization is not replaced by 2nd quantization. This double-talk is equivalent to claiming that phlogiston theory is a duality to oxygen theory, that epicycles are a duality to Kepler's elliptical orbits, or that Piltdown Man was not really a fraud but was a very helpful pedagogical tool for convincing/teaching students, until discredited.
Many-minds interpretation

The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence, and later (in 1981) explicitly called a many- or multi-consciousness interpretation. The name many-minds interpretation was first used by David Albert and Barry Loewer in their 1988 work Interpreting the Many Worlds Interpretation.

The central problems

1. Unitary evolution by the Schrödinger equation
2. Nonunitary state reduction ("collapse") upon measurement

In the introduction to his paper, The Problem of Conscious Observation in Quantum Mechanical Description (June 2000), H. D. Zeh offered an empirical basis for connecting the processes involved in (2) with conscious observation:

John von Neumann seems to have first clearly pointed out the conceptual difficulties that arise when one attempts to formulate the physical process underlying subjective observation within quantum theory. He emphasized the latter's incompatibility with a psycho-physical parallelism, the traditional way of reducing the act of observation to a physical process. Based on the assumption of a physical reality in space and time, one either assumes a coupling (causal relationship, one-way or bidirectional) of matter and mind, or disregards the whole problem by retreating to pure behaviorism. However, even this may remain problematic when one attempts to describe classical behavior in quantum mechanical terms. Neither position can be upheld without fundamental modifications in a consistent quantum mechanical description of the physical world.

The many-worlds interpretation

Hugh Everett described a way out of this problem by suggesting that the universe is in fact indeterminate as a whole.
That is, if you were to measure the spin of a particle and find it to be "up", in fact there are two "yous" after the measurement: one who measured the spin up, the other spin down. Effectively, by looking at the system in question, you take on its indeterminacy. This relative state formulation, in which all states (sets of measures) can only be measured relative to other such states, avoids a number of problems in quantum theory, including the original duality: no collapse takes place; the indeterminacy simply grows (or moves) to a larger system. Everett claims that the universe has a single quantum state, which he called the universal wavefunction, that always evolves according to the Schrödinger equation or some relativistic equivalent. Now the measurement problem suggests the universal wavefunction will be in a superposition corresponding to many different definite macroscopic realms ("macrorealms"); one can recover the subjective appearance of a definite macrorealm by postulating that all the various definite macrorealms are actual. It seems to each observer that "we just happen to be in one rather than the others" because "we" are in all of them, but they are mutually unobservable.

Continuous infinity of minds

In Everett's conception the mind of an observer is split by the measuring process as a consequence of the decoherence induced by measurement. In many-minds, each physical observer has a postulated associated continuous infinity of minds. The decoherence of the measuring event (observation) causes the infinity of minds associated with each observer to become categorized into distinct yet infinite subsets, each subset associated with a distinct outcome of the observation. No minds are split, in the many-minds view, because it is assumed that they are all already always distinct. The idea of many-minds was suggested early on by Zeh in 1995.
He argues that in a decohering no-collapse universe one can avoid the necessity of distinct macrorealms ("parallel worlds" in MWI terminology) by introducing a new psycho-physical parallelism, in which individual minds supervene on each non-interfering component in the physical state. Zeh indeed suggests that, given decoherence, this is the most natural interpretation of quantum mechanics. The main difference between the many-minds and many-worlds interpretations then lies in the definition of the preferred quantity. The many-minds interpretation suggests that to solve the measurement problem there is no need to secure a definite macrorealm: the only thing required is the appearance of one. A bit more precisely: the idea is that the preferred quantity is whatever physical quantity, defined on brains (or brains and parts of their environments), has definite-valued states (eigenstates) that underpin such appearances, i.e. underpin the states of belief in, or sensory experience of, the familiar macroscopic realm. In its original version (related to decoherence), there is no process of selection. The process of quantum decoherence explains in terms of the Schrödinger equation how certain components of the universal wave function become irreversibly dynamically independent of one another (separate worlds, even though there is but one quantum world that does not split). These components may (each) contain definite quantum states of observers, while the total quantum state may not. These observer states may then be assumed to correspond to definite states of awareness (minds), just as in a classical description of observation. States of different observers are consistently entangled with one another, thus warranting objective results of measurements. However, Albert and Loewer suggest that the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own.
The mind selects one of these identities to be its non-random reality, while the universe itself is unaffected. The process for selection of a single state remains unexplained. This is particularly problematic because it is not clear how different observers would thus end up agreeing on measurements, which happens all the time in the real world. There is assumed to be a sort of feedback between the mental process that leads to selection and the universal wavefunction, thereby affecting other mental states as a matter of course. In order to make the system work, the "mind" must be separate from the body: an old dualism of philosophy to replace the new one of quantum mechanics. In general this interpretation has received little attention, largely for this last reason. Objections that apply to the many-worlds interpretation also apply to the many-minds interpretation. On the surface both of these theories arguably violate Occam's razor; proponents counter that in fact these solutions minimize entities by simplifying the rules that would be required to describe the universe.

Another serious objection is that workers in no-collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behavior of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behavior. In his What is it like to be Schrödinger's cat? (2000), Peter J. Lewis argues that the many-minds interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions.
In general, the many-minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises you to favor choices in such situations in proportion to the probability that they will bring good results to your various successors. But in a life-or-death case like getting into the box with Schrödinger's cat, you will only have one successor, since one of the outcomes will ensure your death. So it seems that the many-minds interpretation advises you to get in the box with the cat, since it is certain that your only successor will emerge unharmed. See also quantum suicide and immortality. Finally, it supposes that there is some physical distinction between a conscious observer and a non-conscious measuring device, so it seems to require eliminating the strong Church–Turing hypothesis or postulating a physical model for consciousness.
Brazilian Journal of Physics, on-line version ISSN 1678-4448, Braz. J. Phys. vol. 30, no. 4, São Paulo, Dec. 2000

Exact solution of asymmetric diffusion with N classes of particles of arbitrary size and hierarchical order

F. C. Alcaraz, Departamento de Física, Universidade Federal de São Carlos, 13565-905, São Carlos, SP, Brazil
R. Z. Bariev, The Kazan Physico-Technical Institute of the Russian Academy of Sciences, Kazan 420029, Russia

Received on 5 August, 2000

The exact solution of the asymmetric exclusion problem with N distinct classes of particles (c = 1, 2, ..., N), with hierarchical order, is presented. In this model the particles (of size 1) are located at lattice points and diffuse with equal asymmetric rates, but particles in a class c do not distinguish those in the classes c′ > c from holes (empty sites). We generalize and solve exactly this model by considering the molecules in each distinct class c = 1, 2, ..., N to have size s_c (s_c = 0, 1, 2, ...), in units of the lattice spacing. The solution is derived via a Bethe ansatz of nested type.

I Introduction

The similarity between the master equation describing time fluctuations in nonequilibrium problems and the Schrödinger equation describing the quantum fluctuations of quantum spin chains has turned out to be fruitful for both areas of research [1]-[15]. Since many quantum chains are known to be exactly integrable through the Bethe ansatz, this provides exact information on the related stochastic models. At the same time, classical physical intuition and probabilistic methods successfully applied to nonequilibrium systems give new insights into the physical and algebraic properties of quantum chains.
An example of this fruitful interchange is the problem of asymmetric diffusion of hard-core particles on the one-dimensional lattice (see [16, 17, 18] for reviews). This model is related to the exactly integrable anisotropic Heisenberg chain in its ferromagnetic regime [19] (XXZ model). However, if we demand this quantum chain to be invariant under a quantum group symmetry U_q(SU(2)), we have to introduce, for the equilibrium statistical system, unusual surface terms, which on the other hand have a nice and simple interpretation for the related stochastic system [3, 4]. In the area of exactly integrable models it is well known that one of the possible extensions of the spin-1/2 XXZ chain to higher spins is the anisotropic spin-S Sutherland model (grading ε_1 = ε_2 = ... = ε_{2S+1} = 1) [20]. On the other hand, in the area of diffusion-limited reactions a simple extension of the asymmetric diffusion problem is the problem of diffusion with particles belonging to N distinct classes (c = 1, 2, ..., N) with hierarchical order [22]-[24]. In this problem a mixture of hard-core particles diffuses on the lattice. Particles belonging to a class c (c = 1, ..., N) ignore the presence of those in classes c′ > c, i.e., they see them in the same way as they see the holes (empty sites). In [3] it was shown that for open boundary conditions the anisotropic spin-1 Sutherland model and this last stochastic model, in the case N = 2, are exactly related. The Hamiltonian governing the quantum or time fluctuations of both models is given in terms of generators of a Hecke algebra, invariant under the quantum group U_q(SU(3)). In fact this relation can be extended to arbitrary values of N, and the quantum chain associated to the stochastic model is invariant under the quantum group U_q(SU(N + 1)). In this paper we derive through the Bethe ansatz the exact solution of the associated quantum chain on a closed lattice.
Recently [15] (see also [14]) we have shown that, without losing exact integrability, one can consider the problem of asymmetric diffusion with an arbitrary mixture of molecules with different sizes (even zero), as long as they do not interchange positions, that is, as long as there are no reactions. In this paper we extend the asymmetric diffusion problem with N types of particles in hierarchical order to the case where the particles in each class have an arbitrary size, in units of the lattice spacing. Unlike the simple asymmetric diffusion problem, we have in this case a nested Bethe ansatz [25]. A pedagogical presentation for the simplest case N = 2 was given in [26]. The paper is organized as follows. In the next section we introduce the generalized asymmetric model with N types of particles in hierarchical order and derive the associated quantum chain. In section 3 the Bethe ansatz solution of the model is presented. Finally, in section 4 we present our conclusions, with some possible generalizations of the stochastic problem considered in this paper and some perspectives on future work.

II The generalized asymmetric diffusion model with N classes of particles with hierarchical order

A simple extension of the asymmetric exclusion model, in which hard-core particles diffuse on the lattice, is the problem where a mixture of particles belonging to different classes (c = 1, 2, ..., N) diffuses on the lattice. This problem, in the case where we have only N = 2 classes, was used to describe shocks [22]-[24] in nonequilibrium systems, and it also has a stationary probability distribution that can be expressed via the matrix-product ansatz [27]. In [28] it was shown that the stationary state of the case N = 3 can also be expressed by the matrix-product ansatz. In this model we have n_1, n_2, ..., n_N molecules belonging to the classes c = 1, 2, ..., N, respectively.
All classes of molecules diffuse asymmetrically, but with the same asymmetric rates, whenever they encounter empty sites (holes) at nearest-neighbor sites. However, when molecules of different classes, c and c′ (c < c′), are at their minimum separation, the molecules of class c exchange position with the same rate as they diffuse; consequently the molecules in class c see no difference between molecules belonging to the classes c′ > c and holes. We now introduce a generalization of the above model where, instead of having unit size, the molecules in each distinct class c = 1, 2, ..., N have in general distinct sizes s_1, s_2, ..., s_N (s_1, ..., s_N = 1, 2, ...), respectively, in units of the lattice spacing. In Fig. 1 we show some examples of molecules of different sizes. We may think of a molecule of size s as formed by s monomers (size 1), and for simplicity we define the position of the molecule as the center of its leftmost monomer. The molecules have a hard-core repulsion: the minimum distance d_{ab}, in units of the lattice spacing, between molecules a and b, with a on the left, is given by d_{ab} = s_a. In order to describe the occupancy of a given configuration of molecules, we define at each lattice site i (i = 1, 2, ..., L) a variable b_i, taking the values b_i = 0, 1, ..., N. The values b = 1, 2, ..., N represent sites occupied by molecules of class c = 1, 2, ..., N, respectively. On the other hand, the value b = 0 represents an empty site or an excluded one, due to the finite size of the molecules. As an example, in a chain of L = 8 sites, the configuration in which a particle of class 1, with size s_1 = 2, is at site 1, and another particle, of class 2, with size s_2 = 3, is at site 3, is represented by {b} = {1, 0, 0, 2, 0, 0, 0, 0}. Thus the allowed configurations are given by the set {b_i} (i = 1, ..., L), where for each pair b_i, b_j ≠ 0 with j > i we should have j − i ≥ s_{b_i}.
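To make the excluded-volume rule concrete, here is a minimal sketch (not from the paper; the helper name is illustrative) that checks the constraint j − i ≥ s_{b_i} for every occupied pair of sites i < j, using the L = 8 example from the text.

```python
from itertools import combinations

def is_allowed(beta, sizes):
    """Check the hard-core constraint for a configuration {b_i}.

    beta  : site values b_1..b_L (0 = hole/excluded, c >= 1 = class-c particle),
            stored 0-indexed, so lattice site i corresponds to beta[i-1].
    sizes : map from class c to molecule size s_c (in lattice-spacing units).
    Every occupied pair i < j must satisfy j - i >= s_{b_i}.
    """
    occupied = [i for i, b in enumerate(beta) if b != 0]
    return all(j - i >= sizes[beta[i]] for i, j in combinations(occupied, 2))

# The L = 8 example from the text: a class-1 particle (s_1 = 2) at site 1
# and a class-2 particle (s_2 = 3) at site 3.
sizes = {1: 2, 2: 3}
print(is_allowed([1, 0, 0, 2, 0, 0, 0, 0], sizes))  # True
print(is_allowed([1, 2, 0, 0, 0, 0, 0, 0], sizes))  # False: sites 1, 2 too close
```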
Figure 1: Examples of configurations of molecules with distinct sizes s on a lattice of size L = 6. The coordinates of the molecules are denoted by the black squares.

The time evolution of the probability distribution P({b}, t) of a given configuration {b} is given by the master equation

∂P({b}, t)/∂t = Σ_{{b′}} [Γ({b′} → {b}) P({b′}, t) − Γ({b} → {b′}) P({b}, t)],     (1)

where Γ({b} → {b′}) is the transition rate for the configuration {b} to change to {b′}. In the present model we only allow, whenever the constraint of excluded volume is satisfied, the particles to diffuse to nearest-neighbor sites or to exchange positions. The possible motions are diffusion to the right (rate ε₊) (2), diffusion to the left (rate ε₋) (3), and interchange of particles (4). As we see from (4), particles belonging to a given class c interchange positions with those of class c′ > c with the same rate as they interchange positions with the empty sites (diffusion). We should remark, however, that unless the particles in class c′ have unit size (s_{c′} = 1), the net effect of these particles on those of class c is distinct from the effect produced by the holes, since as a result of the exchange the particles in class c move by s_{c′} lattice units, accelerating their diffusion. The master equation (1) can be written as a Schrödinger equation in Euclidean time (see Ref. [3] for general applications to two-body processes) if we interpret |P⟩ ≡ P({b}, t) as the associated wave function. If we represent b_i as |b⟩_i, the vector |b⟩_1 ⊗ |b⟩_2 ⊗ ... ⊗ |b⟩_L gives the associated Hilbert space. The processes (2)-(4) give us the Hamiltonian (6) (see Ref. [3] for general applications), with periodic boundary conditions. The matrices E^{a,b} are (N + 1) × (N + 1) matrices with a single nonzero element, (E^{a,b})_{i,j} = δ_{a,i} δ_{b,j} (a, b, i, j = 0, ..., N). The projector in (6) projects out from the associated Hilbert space the vectors |{b}⟩ which represent forbidden positions of the molecules due to their finite size, which mathematically means that for all i, j with b_i, b_j ≠ 0 we must have |i − j| ≥ s_{b_i} (j > i).
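The hopping and interchange rules can be put into a toy random-sequential Monte Carlo update. This is an illustrative sketch only (restricted to unit sizes s_c = 1 on a ring, with hypothetical rate values), not the paper's method: a class-c particle treats every site holding a hole or a particle of class c′ > c exactly as it treats an empty site.

```python
import random

def step(config, eps_plus=0.75, eps_minus=0.25):
    """One random-sequential update of the unit-size hierarchy model on a ring.

    config[i] = 0 for a hole, c >= 1 for a class-c particle.  A particle hops
    (or passes a lower-priority particle) to the right with probability
    eps_plus and to the left with probability eps_minus, mirroring the rule
    that diffusion and interchange occur at the same rates.
    """
    L = len(config)
    i = random.randrange(L)
    j = (i + 1) % L
    a, b = config[i], config[j]
    if a != 0 and (b == 0 or b > a):        # site j looks like a hole to a
        if random.random() < eps_plus:
            config[i], config[j] = b, a     # move a to the right
    elif b != 0 and (a == 0 or a > b):      # site i looks like a hole to b
        if random.random() < eps_minus:
            config[i], config[j] = b, a     # move b to the left
    return config

# Particle numbers in every class are conserved, as noted in the text.
random.seed(1)
cfg = [1, 0, 2, 0, 1, 0, 0, 2]
before = sorted(cfg)
for _ in range(1000):
    cfg = step(cfg)
print(sorted(cfg) == before)  # True
```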
The constant D in (6) fixes the time scale; for simplicity we choose D = 1. A particular simplification of (6) occurs when the molecules in all classes have the same size, s_1 = s_2 = ... = s_N = s. In this case the Hamiltonian can be expressed as an anisotropic nearest-neighbor-interaction spin-N/2 SU(N + 1) chain. Moreover, in the case where the sizes are unity (s = 1), the model can be related to the anisotropic version [21] of the SU(N + 1) Sutherland model [20] with twisted boundary conditions.

III The Bethe ansatz equations

We present in this section the exact solution of the general quantum chain (6). A pedagogical presentation for the particular case N = 2 was given in [26]. Due to the conservation of particles in the diffusion and interchange processes, the total numbers of particles n_1, n_2, ..., n_N in each class are good quantum numbers, and consequently we can split the associated Hilbert space into disjoint block sectors labeled by the numbers n_1, n_2, ..., n_N (n_i = 0, 1, ...; i = 1, ..., N). We therefore consider the eigenvalue equation (9), where n = Σ_i n_i is the total number of particles. In (10), |x_1, Q_1; ...; x_n, Q_n⟩ means the configuration where a particle of class Q_i (Q_i = 1, 2, ..., N) is at position x_i (x_i = 1, ..., L). The summation {Q} = {Q_1, ..., Q_n} extends over all permutations of the n integer numbers {1, 2, ..., N} in which n_i terms have the value i (i = 1, 2, ..., N), while the summation {x} = {x_1, ..., x_n} runs, for each permutation {Q}, over the set of n nondecreasing integers satisfying the hard-core constraints x_{i+1} − x_i ≥ s_{Q_i}. Before presenting the results for general values of n, let us consider initially the cases where we have 1 or 2 particles.

n = 1. For one particle on the chain, in any class c = 1, 2, ..., N, as a consequence of the translational invariance of (6) it is simple to verify directly that the eigenfunctions are the momentum-k plane waves, with the corresponding energy.

n = 2.
For two particles of classes Q_1 and Q_2 (Q_1, Q_2 = 1, 2, ..., N) on the lattice, the eigenvalue equation (9) gives us two distinct relations depending on the relative location of the particles. The first relation applies to the case in which a particle of class Q_1 (size s_{Q_1}) is at position x_1 and a particle of class Q_2 (size s_{Q_2}) is at position x_2, where x_2 > x_1 + s_{Q_1}. We obtain in this case the relation (15), where we have used the relation ε₊ + ε₋ = 1. This equation can be solved promptly by the ansatz (16), with energy (17), where k_1, k_2, and the amplitudes are free parameters to be fixed. In (16) the summation is over the permutations P = P_1, P_2 of (1, 2). The second relation applies when x_2 = x_1 + s_{Q_1}. In this case, instead of (15), we have (18). If we now substitute the ansatz (16) with the energy (17), the amplitudes, initially arbitrary, must now satisfy (19). At this point it is convenient to consider separately the cases where Q_1 = Q_2 and Q_1 ≠ Q_2. If Q_1 = Q_2 = Q (Q = 1, ..., N), eq. (19) gives (20), and the cases Q_1 ≠ Q_2 give us the equations (21). Performing the above summation we obtain, after lengthy but straightforward algebra, the following relation among the amplitudes (22). Equations (21) and (22) can be written in a compact form (23), where we have introduced the S matrix. From (21) and (23) this S matrix has only N(2N − 1) nonzero elements, namely those given in (25). Equations (23) do not fix the "wave numbers" k_1 and k_2. In general these numbers are complex, and they are fixed by the cyclic boundary condition, which from (16) gives the relations (26). This last equation, when solved by exploiting (23)-(25), gives us the possible values of k_1 and k_2, and from (17) the eigenenergies in the sector with 2 particles. Instead of solving these equations for the particular case n = 2, let us now consider the case of general n.

General n. The above calculation can be generalized for arbitrary occupations {n_1, n_2, ..., n_N} of particles in classes 1, 2, ..., N, respectively.
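Since the displayed equations were lost in extraction, the generic shape of the two-particle ansatz (16) and energy (17) may help the reader. The following is the standard coordinate-Bethe-ansatz form, written with the rates ε± used in the text; it is a reconstruction under that assumption, not a verbatim reproduction of the paper's formulas.

```latex
% Standard two-particle Bethe ansatz (cf. Eqs. (16)-(17) of the text):
f(x_1,Q_1;\,x_2,Q_2) \;=\; \sum_{P} A^{Q_1,Q_2}_{k_{P_1},k_{P_2}}\,
   e^{\,i\,(k_{P_1}x_1 + k_{P_2}x_2)},
\qquad P \in \{(1,2),\,(2,1)\},
% with the total energy a sum of single-particle terms; using
% \epsilon_+ + \epsilon_- = 1, a consistent choice of dispersion is
E \;=\; \varepsilon(k_1) + \varepsilon(k_2),
\qquad
\varepsilon(k) \;=\; \epsilon_+\bigl(1 - e^{-ik}\bigr)
                 + \epsilon_-\bigl(1 - e^{\,ik}\bigr).
```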
The ansatz for the wave function (10) becomes (28), where the sum extends over all permutations P of the integers 1, 2, ..., n, and n = Σ_i n_i is the total number of particles. Application of the translation operator to the above wave functions implies that (10) are also eigenfunctions of the momentum operator, with eigenvalues (29). For the components |x_1, Q_1; ...; x_n, Q_n⟩ where x_{i+1} − x_i > s_{Q_i} for i = 1, 2, ..., n, it is simple to see that the eigenvalue equation (9) is satisfied by the ansatz (28) with energy (30). On the other hand, if a pair of particles of classes Q_i, Q_{i+1} is at positions x_i, x_{i+1}, where x_{i+1} = x_i + s_{Q_i}, equation (9) with the ansatz (28) and the relation (30) gives the generalization of relation (23), namely (31), with S given by eq. (25). Inserting the ansatz (28) in the boundary condition (32), we obtain the additional relation (33), which together with (31) should give us the energies. Successive applications of (31) give in general distinct relations between the amplitudes. For example, an amplitude with index ordering αβγ can be related to the one with ordering γβα by performing the permutations αβγ → βαγ → βγα → γβα or αβγ → αγβ → γαβ → γβα, and consequently the S matrix should satisfy the Yang-Baxter equation [19, 29] (34), for α, α′, α″, β, β′, β″ = 1, 2, ..., N and S given by (25). Actually, relation (34) is a necessary and sufficient condition [19, 29] for a nontrivial solution of the amplitudes in Eq. (31). We can verify, by a long and straightforward calculation, that for an arbitrary number of classes N and arbitrary values of the sizes s_1, s_2, ..., s_N, the S matrix (25) satisfies the Yang-Baxter equation (34), and consequently we may use relations (31) and (33) to obtain the eigenenergies of the Hamiltonian (6). Applying relation (31) n times on the right of equation (33), we obtain a relation between the amplitudes with the same ordering of the lower indices (35), where we have introduced a harmless extra sum (see [26] for illustrations of the above equations).
In order to fix the values of {k_j} we should solve (35), i.e., we should find the eigenvalues Λ(k) of the matrix (37) with periodic boundary condition. The Bethe-ansatz equations which fix the set {k_l} will then be given from (35) by (39). The matrix T(k) has dimension N^n × N^n and can be interpreted as the transfer matrix of an inhomogeneous N(2N − 1)-vertex model on a two-dimensional lattice with periodic boundary conditions in the horizontal direction (n sites). Due to the special form of the S matrix (25), the eigenvalues of (37) are invariant under a local gauge transformation (40), applied to each factor S(k_{P_l}, k) in (37), where the gauge functions (l = 1, ..., L; a = 1, ..., N) are arbitrary. If we perform the transformation (40) with a special choice of these functions, the equivalent transfer matrix to be diagonalized is given by (43), with twisted boundary condition (44) and the corresponding twist phase. The matrix in (43) and (44) is obtained from that in (25) by taking the size of all particles equal to unity. In this way the problem is transformed into the evaluation of the eigenvalues of a regular (all particles with size 1) inhomogeneous transfer matrix T_0 with N(2N − 1) nonzero vertices and twisted boundary condition.

Diagonalization of T_0(k)

The simplest way to diagonalize T_0 is through the introduction of the monodromy matrix [25], which is the transfer matrix of the inhomogeneous vertex model under consideration in which the first and last links in the horizontal direction are fixed to the values m_1 and m_{N+1} (m_1, m_{N+1} = 1, 2, ..., N). The monodromy matrix has coordinates {Q}, {Q′} in the vertical space (N^n dimensions) and coordinates m_1, m_{N+1} in the horizontal space (N² dimensions). This matrix satisfies the important relation (47), for m_1, n_1, m_{N+1}, n_{N+1} = 1, 2, ..., N. This relation follows directly from successive applications of the Yang-Baxter equations (34) (see [26] for a graphical representation of these equations).
In order to exploit relation (47), let us denote the components of the monodromy matrix in the horizontal space as in (48), with indices α, β = 1, 2, ..., N − 1. Clearly the transfer matrix T_0(k) of the inhomogeneous lattice with twisted boundary conditions, which we want to diagonalize, is given by (49). As a consequence of (47), the matrices A, B_α, C_α, and D in (48) obey certain algebraic relations. By setting (n_1, m_1, n_{N+1}, m_{N+1}) = (N, α, γ, β) in (47) we obtain (50), with α, β = 1, ..., N − 1. By setting (n_1, m_1, n_{N+1}, m_{N+1}) = (N, N, N, α) we obtain (51), where α = 1, ..., N − 1. The diagonalization of T_0(k) in (49) will be done by exploiting the above relations. This procedure is known in the literature as the algebraic Bethe ansatz [25]. The first step in this method is the identification of a reference state |Ω⟩, which should be an eigenstate of A(k) and D(k), and hence of T_0(k), but not of B_α(k). In the present case a suitable reference state is |Ω⟩ = |{α_l = N}⟩, l = 1, ..., n, which corresponds to a state with particles of class N only. It is simple to calculate the action of these operators on |Ω⟩ (eqs. (52)-(53)); the state |{α_{l≠i} = N}, α_i = α⟩ denotes the configuration in which the class-N particle at site i is replaced by one of class α. The matrices B_α(k) act as creation operators on the reference ("vacuum") state, creating particles of class α in a sea of particles of the Nth class |Ω⟩. We then expect that the eigenvectors of T_0(k) corresponding to m_1 (m_1 = 1, 2, ..., n) particles belonging to classes distinct from N can be expressed as (54), where {k_l^{(1)}, l = 1, ..., m_1} and the amplitudes F^{β_1, ..., β_{m_1}} are variables to be fixed by the eigenvalue equation (55). Using (50) successively, together with (52) and (53), we obtain (56), where the "unwanted terms" are those which are not expressed in the "Bethe basis" produced by the B_α(k_j^{(1)}) operators.
Similarly, using (51) successively together with (52)-(53), we obtain (57). The relations (56) and (57), when used in (54)-(55), give us (58), where T_1(k) is an (N − 1)^{m_1}-dimensional transfer matrix of an inhomogeneous vertex model, with inhomogeneities in the reverse order when compared with (43), and twisted boundary conditions (boundary phases Φ_α, α = 1, ..., N − 1). In order to proceed we now need to diagonalize the new transfer matrix T_1(k); that is, we must solve (60), and then (58) gives (61) and (62), where we have used an identity satisfied by the S-matrix elements at coinciding arguments. In order to prove that these are the eigenvalues and eigenvectors of T_0(k), we should fix {k_1^{(1)}, ..., k_{m_1}^{(1)}} by requiring that the "unwanted terms" in (61) vanish. Although for N = 2 this calculation is not complicated [26], for arbitrary N it is not simple. Since the expression (62) for the eigenvalues should be valid for arbitrary values of k, we can obtain Λ^{(1)}(k_j^{(1)}) in an alternative way from the following trick [31]. At k = k_j^{(1)} (j = 1, ..., m_1) the denominators of the factors in (62) vanish, and since the result must be finite, we obtain the conditions (63). Notice that our result in (63) does not depend on the particular ordering of the additional variables k_j^{(1)} (j = 1, ..., m_1). This means that if, instead of the ordering chosen in (54), we chose the reverse order, we would obtain the same results (61)-(63), but now T_1 is the transfer matrix, with boundary condition specified by the phases Φ_α, of a problem with N − 1 species and inhomogeneities in the same order as in (43). This means that the eigenvalue Λ(k) = Λ^{(0)}(k) of the transfer matrix of the problem with N classes and inhomogeneities (k_1, k_2, ..., k_n) is related to the eigenvalue Λ^{(1)}(k) of the problem with N − 1 classes and inhomogeneities {k_j^{(1)}}.
Iterating these calculations, we obtain the generalizations of relation (62) and of condition (63), which connect the eigenvalues of the inhomogeneous transfer matrices T_l(k) and T_{l+1}(k), with inhomogeneities {k_j^{(l)}} and {k_j^{(l+1)}}, related to the problems with N − l and N − l − 1 classes of particles, respectively. However, from (39) and (42)-(43), in order to obtain the Bethe-ansatz equations for our original problem we need the eigenvalues of the transfer matrices evaluated at k_j (j = 1, ..., n), i.e., Λ^{(0)}(k_j), which are given by (67). The conditions that fix the variables {k_j^{(1)}, j = 1, ..., m_1} are given by (63). On the left side of this equation we have Λ^{(1)}(k_j^{(1)}), the eigenvalues of the transfer matrix T_1 of the model with N − 1 classes of particles and inhomogeneities {k_j^{(1)}, j = 1, ..., m_1}, evaluated at the particular point k_j^{(1)}. This value can be obtained from (65), which gives a generalization of (67). The condition (63) is then replaced by (69), where we now need the relations that fix {k_j^{(2)}}. Iterating this process, we find the generalization of (69), eq. (70). Equations (67) and (70) give us the eigenvalues of the transfer matrix T_0(k) evaluated at the points {k_j}, i.e., Λ^{(0)}(k_j). Inserting the above results in (42) and then in (39), we obtain the Bethe-ansatz equations of our original problem. The eigenenergies of the Hamiltonian (6) in the sector containing n_i particles of class i (i = 1, 2, ..., N) (n = Σ_j n_j), with total momentum p = 2πl/L (l = 0, 1, ..., L − 1), are given by (71), where {k_j^{(0)} = k_j, j = 1, ..., n} are obtained from the solutions {k_j^{(l)}, l = 0, ..., N − 1; j = 1, ..., m_l} of the Bethe-ansatz equations (72)-(73), with m_l = Σ_{j>l} n_j, l = 0, ..., N (m_0 = n, m_N = 0). It is interesting to observe that in the particular case where n_2 = n_3 = ... = n_N = 0 we recover the Bethe-ansatz equations, recently derived in [15] (see also [14]), for the asymmetric diffusion problem with particles of size s_1. Also the case s_1 = s_2 = ...
= s_N = 1 gives us the corresponding Bethe-ansatz equations for the standard problem of N types of particles in hierarchical order. The Bethe-ansatz solution in the particular case N = 2 with a single particle of class 2 (n_1 = n − 1, n_2 = 1) was derived recently in [32]. The Bethe-ansatz equations for the fully asymmetric problem are obtained by setting ε₊ = 1 and ε₋ = 0 in (72)-(73).

IV Conclusions and generalizations

We obtained through the Bethe ansatz the exact solution of the problem in which particles belonging to N distinct classes with hierarchical order diffuse as well as interchange positions, with rates depending on their relative hierarchy. We showed that the exact solution can also be derived in the general case where the particles have arbitrary sizes. Some extensions of our results can be made. A first and quite interesting generalization of our model arises when we allow molecules in any class to have size s = 0. Molecules of size zero do not occupy space on the lattice and have no hard-core exclusion effect; consequently we may have, at a given lattice point, an arbitrary number of them. The Bethe-ansatz solution presented in the previous section extends directly to this case (the equations are the same), and the eigenenergies are given by fixing in (71)-(73) the appropriate sizes of the molecules. It is interesting to remark that particles of a given class c′ (c′ = 2, 3, ..., N) with size s_{c′} = 0, contrary to the case s_{c′} > 1, where they "accelerate" the diffusion of particles in classes c < c′, now "retard" the diffusive motion of those particles. The quantum Hamiltonian in the case where the particles have size zero is obviously not given by (6), but it can be written in terms of spin S = ∞ quantum chains. A further extension of our model is obtained by considering an arbitrary mixture of molecules, where molecules in the same hierarchy may have distinct sizes.
The results presented in [15] correspond to the particular case of this generalization where N = 1 (simple diffusion). For general N the S matrix we obtain in (25) is also a solution of the Yang-Baxter equation (34), but the diagonalization of the transfer matrix of the associated inhomogeneous vertex model is more complicated. The Bethe-ansatz equations in the case of asymmetric diffusion, with particles of unit size [10, 11] or of arbitrary size [15], were used to obtain the finite-size corrections of the mass gap G_N of the associated quantum chain. The real part of these finite-size corrections is governed by the dynamical critical exponent z. The calculation of the exponent z for the model presented in this paper, with particles of arbitrary sizes, is presently in progress [30].

This work was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil, and by the Russian Foundation of Fundamental Investigation (Grant 99-02-17646).

[1] A. A. Lushnikov, Zh. Éksp. Teor. Fiz. 91, 1376 (1986) [Sov. Phys. JETP 64, 811 (1986)]; Phys. Lett. A 120, 135 (1987).
[2] G. Schütz, J. Stat. Phys. 71, 471 (1993).
[3] F. C. Alcaraz, M. Droz, M. Henkel, and V. Rittenberg, Ann. Phys. (N.Y.) 230, 250 (1994).
[4] F. C. Alcaraz and V. Rittenberg, Phys. Lett. B 324, 377 (1993).
[5] F. C. Alcaraz, Int. J. Mod. Phys. B 8, 3449 (1994).
[6] M. D. Grynberg and R. B. Stinchcombe, Phys. Rev. Lett. 74, 1242 (1995).
[7] K. Krebs, M. P. Pfannmüller, B. Wehefritz, and H. Henrichsen, J. Stat. Phys. 78, 1429 (1995).
[8] J. E. Santos, G. M. Schütz, and R. B. Stinchcombe, J. Chem. Phys. 105, 2399 (1996).
[9] M. J. de Oliveira, T. Tomé, and R. Dickman, Phys. Rev. A 46, 6294 (1992).
[10] L. H. Gwa and H. Spohn, Phys. Rev. Lett.
68, 725 (1992); Phys. Rev. A 46, 844 (1992).
[11] D. Kim, Phys. Rev. E 52, 3512 (1995).
[12] D. Kim, J. Phys. A 30, 3817 (1997).
[13] F. C. Alcaraz, S. Dasmahapatra, and V. Rittenberg, J. Phys. A 31, 845 (1998).
[14] T. Sasamoto and M. Wadati, J. Phys. A 31, 6057 (1998).
[15] F. C. Alcaraz and R. Z. Bariev, Phys. Rev. E 60, 79 (1999).
[16] B. Derrida, Physics Reports 301, 65 (1998).
[17] T. M. Liggett, Stochastic Interacting Systems: Contact, Voter and Exclusion Process (Springer-Verlag, 1999).
[18] G. M. Schütz, Exactly solvable models for many-body systems far from equilibrium, in "Phase Transitions and Critical Phenomena", Eds. C. Domb and J. L. Lebowitz, Vol. 19 (to appear).
[19] C. N. Yang, Phys. Rev. Lett. 19, 1312 (1967).
[20] B. Sutherland, Phys. Rev. B 12, 3795 (1975).
[21] S. V. Pokrovskii and A. M. Tsvelick, Zh. Eksp. Teor. Fiz. 93, 2232 (1987) [Sov. Phys. JETP 66, 6 (1987)].
[22] C. Boldrighini, G. Cosimi, S. Frigio, and G. Nuñes, J. Stat. Phys. 55, 611 (1989).
[23] P. A. Ferrari, C. Kipnis, and E. Saada, Ann. Prob. 19, 226 (1991).
[24] P. A. Ferrari, Prob. Theory Relat. Fields 91, 81 (1992).
[25] L. A. Takhtajan and L. D. Faddeev, Russ. Math. Surv. 34, 11 (1979); V. E. Korepin, I. G. Izergin, and N. M. Bogoliubov, Quantum Inverse Scattering Method, Correlation Functions and Algebraic Bethe Ansatz (Cambridge University Press, Cambridge, 1992).
[26] F. C. Alcaraz and R. Z. Bariev, Braz. J. Phys. 30, 13 (2000).
[27] B. Derrida, M. R. Evans, V. Hakim, and V. Pasquier, J. Phys. A 26, 1493 (1993).
[28] K. Mallick, S. Mallick, and N. Rajewsky, J. Phys. A 32, 48 (1999).
[29] R. J.
Baxter, Exactly Solved Models in Statistical Mechanics (Academic Press, New York, 1982).
[30] F. C. Alcaraz and R. Z. Bariev (to be published).
[31] P. P. Kulish and N. Yu. Reshetikhin, Zh. Eksp. Teor. Fiz. 80, 214 (1981); C. L. Schultz, Physica A 122, 71 (1983); M. Gaudin, La fonction d'onde de Bethe (Masson, Paris, 1983).
[32] B. Derrida and M. R. Evans, J. Phys. A 32, 4833 (1999).
Viewpoint: Negative Frequencies Get Real

Fabio Biancalana, Max Planck Institute for the Science of Light, Günther-Scharowsky-Strasse 1/26, D-91058 Erlangen, Germany
Published June 18, 2012 | Physics 5, 68 (2012) | DOI: 10.1103/Physics.5.68

Negative-Frequency Resonant Radiation
E. Rubino, J. McLenaghan, S. C. Kehr, F. Belgiorno, D. Townsend, S. Rohr, C. E. Kuklewicz, U. Leonhardt, F. König, and D. Faccio
Published June 18, 2012

Figure 1: Schematic representation of a propagating optical soliton that sheds in its wake two distinct blue-shifted modes: the usual positive resonant radiation (RR), and a second mode identified by Rubino et al. as negative resonant radiation (NRR). All these modes travel in the forward direction indicated by the arrow.

A soliton is a localized "lump" of light that is the product of wave effects in a nonlinear medium and can, under certain conditions, emit low-intensity, positive-frequency resonant radiation in its wake, due to the phase matching between its momentum and the dispersion of the medium itself. Writing in Physical Review Letters, Eleonora Rubino at the University of Insubria in Como, Italy, and collaborators report that this resonant emission should have a negative-frequency counterpart, which they have identified experimentally in two different systems [1]. When light travels through a medium, the dispersion, i.e., the relation between frequency and momentum of a wave, has to be taken into account. This has very important consequences: vacuum, for example, possesses a trivial dispersion, a straight line across all frequencies, and thus all colors travel at the same speed in empty space. However, in any other medium, for example a silica optical fiber, the dispersion is far from being a straight line, so that different frequencies travel at different velocities. This produces the typical temporal broadening of short input pulses in fibers.
When nonlinear effects also come into play, the momentum (and thus the refractive index) depends not only on frequency but also on the intensity of light. In this case the spreading due to dispersion and the self-focusing due to nonlinearity can perfectly balance to create solitons: localized, bell-shaped waves that travel for very long distances in the waveguide without any distortion. Solitons are nowadays commonly produced, sometimes in large quantities, in many experiments [2]. The soliton momentum is nonlinear and depends on its intensity; when it coincides with the fiber dispersion, so-called phase matching takes place. Phase matching is a very common phenomenon in nonlinear optics, in which two or more waves at different frequencies can exchange energy efficiently because their phases (which are proportional to their momenta) coincide. Under phase-matching conditions, a special kind of low-intensity radiation can be emitted by the soliton at a well-defined frequency, called resonant radiation [3]. This radiation is one of the essential ingredients of supercontinuum generation, an extremely important and useful nonlinear phenomenon that massively broadens the spectrum of an input narrow-band pulse, producing a flat spectral distribution over a broad range of frequencies, similar to sunlight, but coherent, and more intense by six orders of magnitude [4]. Supercontinuum generation has been intensively studied in optical fibers over the last fifteen years; the theoretical, analytical, and numerical tools available today are correspondingly mature, and simulations that faithfully reproduce experimental findings are commonplace in any serious nonlinear optics laboratory [5]. It is therefore a great surprise that a missing ingredient of supercontinuum generation has only recently been identified experimentally, and explained theoretically, by Rubino et al.
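The phase-matching argument can be made concrete with the textbook resonant-radiation condition, in which the fiber dispersion expanded around the soliton frequency balances the soliton's nonlinear phase: Σ_{m≥2} β_m (ω − ω_s)^m / m! = γP/2. The sketch below solves this by bisection for entirely made-up dispersion coefficients (β₂ < 0, β₃ > 0, arbitrary units); it illustrates the mechanism only, and is not an analysis from the Viewpoint or from Rubino et al.

```python
from math import factorial

def mismatch(d_omega, betas, gamma_p_half):
    """Phase mismatch as a function of detuning d_omega from the soliton."""
    dispersion = sum(b / factorial(m) * d_omega ** m for m, b in betas.items())
    return dispersion - gamma_p_half

def bisect_root(f, lo, hi, tol=1e-12):
    """Plain bisection; f(lo) and f(hi) must have opposite signs."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            break
        if (fmid > 0.0) == (flo > 0.0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical parameters: anomalous beta_2 (soliton regime), positive beta_3.
betas = {2: -1.0, 3: 0.2}
f = lambda w: mismatch(w, betas, gamma_p_half=0.05)
w_rr = bisect_root(f, 1.0, 20.0)   # resonant radiation lies on the blue side
print(w_rr > 0 and abs(f(w_rr)) < 1e-9)  # True
```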
[1], in which the phase matching occurs between a soliton momentum and the fiber dispersion at negative frequencies. It is the usual practice, when dealing with the classical Maxwell equations, to assume that only positive frequencies have an acceptable physical meaning. When the soliton dispersion (which is basically a straight line with a slope proportional to its velocity) and the fiber dispersion (which is a rather complicated curve) are phase matched at positive frequencies, positive resonant radiation is produced, which is the one that most people observe in experiments. However, there is no particular reason why we have to restrict our attention to positive frequencies only, since any electromagnetic wave is a real field, and thus it is the sum of a field with positive frequencies and its complex conjugate field, and therefore possesses negative frequencies. This simple reasoning leads to a phase matching between the soliton and the negative-frequency part of the fiber dispersion, and the curious, but logical, consequence is that this phase matching is asymmetric, and so leads to the generation of a new resonant radiation peak at a frequency that is not mirror symmetric with its positive-energy counterpart. Any physical electric field is a real function, and therefore can be expressed as a sum of two complex functions (called envelopes), which are conjugates of each other. If the first complex function contains only positive frequencies, the second must contain only negative ones. These two pieces always come together, and thus negative frequencies have always been thought to be “redundant,” i.e., positive and negative frequencies should contain the same physics in classical electromagnetism. However, the point of Rubino’s work is that this is not true.
The presence of a soliton (or, for that matter, any wave that has a steep front of intensity) can break the symmetry of the phase-matching condition, thus leading to two different resonant radiation frequencies, one positive (shown as RR in Fig. 1) and the other negative (shown as NRR in Fig. 1). The analysis shows that these two frequencies have different magnitudes, as well as different signs. Nevertheless, since in the electric field every wave comes together with its complex conjugate, in the end the negative-frequency mode instantaneously acquires a positive frequency by switching its sign, and thus in the experiments one should see not only the conventional RR, but the NRR as well, although the latter must have a smaller amplitude than the former. In 2008, a closely related phenomenon was demonstrated in water waves propagating near an “event horizon” [6]. In order to prove the existence of this negative-frequency resonant radiation, which is typically emitted at shorter wavelengths than its positive counterpart, the team performed experiments with photonic crystal fibers (PCFs)—highly nonlinear fibers in which the formation of solitons and resonant radiation is particularly favorable [7]. They launched extremely short pulses of 7 femtoseconds (fs), whose very broad input spectrum favors the energy transfer between the soliton and the negative resonant radiation, into a 5-mm PCF, and were able to observe the radiation directly, exactly at the predicted frequency. They repeated a similar experiment in a bulk medium (2 cm of calcium fluoride), using 60-fs input Bessel pulses, again demonstrating the formation of a small-amplitude negative resonant radiation at the predicted wavelength [5]. The above findings, when and if confirmed experimentally by other groups, could generate renewed interest in supercontinuum generation, introducing a novel and refreshing point of view on this “old” phenomenon.
If researchers manage to control the formation and the generation of the negative resonant radiation, there will be chances to push supercontinuum generation to shorter and shorter wavelengths, which will be very useful for several applications, such as optical coherence tomography, the characterization of optical devices, and the generation and measurement of frequency combs. The work could substantially affect phenomena in other fields that are described by the nonlinear Schrödinger equation, for example, the formation of Bose-Einstein condensates. Rubino et al. claim that the generation of these new radiation bands cannot be explained in any other known way that takes into account only the “conventional” positive frequencies. In future experiments based on optical fibers, to conclusively prove the relevance of this “negative world” in nonlinear optics, it will be especially important to exclude the positive resonant radiation frequencies that are due to phase matching between the soliton and higher-order linear modes in fibers [8], the so-called four-wave mixing between solitons and continuous waves [9], and the generation of purely positive frequencies from dispersive moving fronts [10], while in bulk crystals one must exclude the contribution of higher-order Bessel-Gauss states [11], all of which are potentially able to produce low-intensity waves at wavelengths close to those predicted in this study.

1. E. Rubino, J. McLenaghan, S. C. Kehr, F. Belgiorno, D. Townsend, S. Rohr, C. E. Kuklewicz, U. Leonhardt, F. König, and D. Faccio, “Negative-Frequency Resonant Radiation,” Phys. Rev. Lett. 108, 253901 (2012).
2. Y. S. Kivshar and G. P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic Press, San Diego, 2003).
3. N. Akhmediev and M. Karlsson, “Cherenkov Radiation Emitted by Solitons in Optical Fibers,” Phys. Rev. A 51, 2602 (1995).
4. F. Biancalana, D. V. Skryabin, and A. V. Yulin, “Theory of the Soliton Self-Frequency Shift Compensation by the Resonant Radiation in Photonic Crystal Fibers,” Phys. Rev. E 70, 016615 (2004).
5. R. R. Alfano and S. L. Shapiro, “Observation of Self-Phase Modulation and Small-Scale Filaments in Crystals and Glasses,” Phys. Rev. Lett. 24, 592 (1970); J. Dudley, G. Genty, and S. Coen, “Supercontinuum Generation in Photonic Crystal Fiber,” Rev. Mod. Phys. 78, 1135 (2006).
6. G. Rousseaux, C. Mathis, P. Maïssa, T. G. Philbin, and U. Leonhardt, “Observation of Negative-Frequency Waves in a Water Tank: A Classical Analogue to the Hawking Effect?,” New J. Phys. 10, 053015 (2008).
7. P. Russell, “Photonic Crystal Fibers,” Science 299, 358 (2003).
8. F. Poletti and P. Horak, “Optical Solitary Waves in Three-Level Media: Effects of Different Dipole Moments,” J. Opt. Soc. Am. B 25, 645 (2008).
9. A. V. Yulin, D. V. Skryabin, and P. St. J. Russell, “Four-Wave Mixing of Linear Waves and Solitons in Fibers with Higher Order Dispersion,” Opt. Lett. 29, 2411 (2004).
10. F. Biancalana, A. Amann, A. V. Uskov, and E. P. O’Reilly, “Dynamics of Light Propagation in Spatiotemporal Dielectric Structures,” Phys. Rev. E 75, 046607 (2007).
11. V. Bagini, F. Frezza, M. Santarsiero, G. Schettini, and G. Schirripa Spagnolo, “Generalized Bessel-Gauss beams,” J. Mod. Opt. 43, 1155 (1996).

About the Author: Fabio Biancalana
Schrödinger's equation — what is it?

Here is a typical textbook question. Your car has run out of petrol. With how much force do you need to push it to accelerate it to a given speed? The answer comes from Newton’s second law of motion:   \[ F=ma, \]     where $a$ is acceleration, $F$ is force and $m$ is mass. This wonderfully straightforward, yet subtle law allows you to describe motion of all kinds and so it can, in theory at least, answer pretty much any question a physicist might want to ask about the world.

Schrödinger's equation is named after Erwin Schrödinger, 1887-1961.

Or can it? When people first started considering the world at the smallest scales, for example electrons orbiting the nucleus of an atom, they realised that things get very weird indeed and that Newton's laws no longer apply. To describe this tiny world you need quantum mechanics, a theory developed at the beginning of the twentieth century. The core equation of this theory, the analogue of Newton's second law, is called Schrödinger's equation.

Waves and particles

"In classical mechanics we describe a state of a physical system using position and momentum," explains Nazim Bouatta, a theoretical physicist at the University of Cambridge. For example, if you’ve got a table full of moving billiard balls and you know the position and the momentum (that’s the mass times the velocity) of each ball at some time $t$, then you know all there is to know about the system at that time $t$: where everything is, where everything is going and how fast. "The kind of question we then ask is: if we know the initial conditions of a system, that is, we know the system at time $t_0,$ what is the dynamical evolution of this system? And we use Newton’s second law for that. In quantum mechanics we ask the same question, but the answer is tricky because position and momentum are no longer the right variables to describe [the system]."
The problem is that the objects quantum mechanics tries to describe don't always behave like tiny little billiard balls. Sometimes it is better to think of them as waves. "Take the example of light. Newton, apart from his work on gravity, was also interested in optics," says Bouatta. "According to Newton, light was described by particles. But then, after the work of many scientists, including the theoretical understanding provided by James Clerk Maxwell, we discovered that light was described by waves." But in 1905 Einstein realised that the wave picture wasn't entirely correct either. To explain the photoelectric effect (see the Plus article Light's identity crisis) you need to think of a beam of light as a stream of particles, which Einstein dubbed photons. The number of photons is proportional to the intensity of the light, and the energy E of each photon is proportional to its frequency f:   \[ E=hf, \]     Here $h=6.626068 \times 10^{-34} m^2kg/s$ is Planck's constant, an incredibly small number named after the physicist Max Planck who had already guessed this formula in 1900 in his work on black body radiation. "So we were facing the situation that sometimes the correct way of describing light was as waves and sometimes it was as particles," says Bouatta. The double slit experiment Einstein's result linked in with the age-old endeavour, started in the 17th century by Christiaan Huygens and explored again in the 19th century by William Hamilton: to unify the physics of optics (which was all about waves) and mechanics (which was all about particles). Inspired by the schizophrenic behaviour of light the young French physicist Louis de Broglie took a dramatic step in this journey: he postulated that not only light, but also matter suffered from the so-called wave-particle duality. The tiny building blocks of matter, such as electrons, also behave like particles in some situations and like waves in others. 
De Broglie's idea, which he announced in the 1920s, wasn't based on experimental evidence, rather it sprung from theoretical considerations inspired by Einstein's theory of relativity. But experimental evidence was soon to follow. In the late 1920s experiments involving particles scattering off a crystal confirmed the wave-like nature of electrons (see the Plus article Quantum uncertainty). One of the most famous demonstrations of wave-particle duality is the double slit experiment. In it electrons (or other particles like photons or neutrons) are fired one at a time all over a screen containing two slits. Behind the screen there's a second one which can detect where the electrons that made it through the slits end up. If the electrons behaved like particles, then you would expect them to pile up around two straight lines behind the two slits. But what you actually see on the detector screen is an interference pattern: the pattern you would get if the electrons were waves, each wave passing through both slits at once and then interfering with itself as it spreads out again on the other side. Yet on the detector screen, the electrons are registered as arriving just as you would expect: as particles. It's a very weird result indeed but one that has been replicated many times — we simply have to accept that this is the way the world works. Schrödinger's equation The radical new picture proposed by de Broglie required new physics. What does a wave associated to a particle look like mathematically? Einstein had already related the energy $E$ of a photon to the frequency $f$ of light, which in turn is related to the wavelength $\lambda $ by the formula $\lambda = c/f.$ Here $c$ is the speed of light. Using results from relativity theory it is also possible to relate the energy of a photon to its momentum. Putting all this together gives the relationship $\lambda =h/p$ between the photon’s wavelength $\lambda $ and momentum $p$ ($h$ again is Planck’s constant). 
(See Light's identity crisis for details.) Following on from this, de Broglie postulated that the same relationship between wavelength and momentum should hold for any particle. At this point it's best to suspend your intuition about what it really means to say that a particle behaves like a wave (we'll have a look at that in the third article) and just follow through with the mathematics.

In classical mechanics the evolution over time of a wave, for example a sound wave or a water wave, is described by a wave equation: a differential equation whose solution is a wave function, which gives you the shape of the wave at any time $t$ (subject to suitable boundary conditions). For example, suppose you have waves travelling through a string that is stretched out along the $x$-axis and vibrates in the $xy$-plane. In order to describe the wave completely, you need to find the displacement $y(x,t)$ of the string in the $y$-direction at every point $x$ and every time $t$. Using Newton’s second law of motion it is possible to show that $y(x,t)$ obeys the following wave equation:   \[ \frac{\partial ^2y}{\partial x^2} = \frac{1}{v^2} \frac{\partial ^2 y}{\partial t^2}, \]     where $v$ is the speed of the waves.

A snapshot in time of a string vibrating in the xy-plane. The wave shown here is described by the cosine function.

A general solution $y(x,t)$ to this equation is quite complicated, reflecting the fact that the string can be wiggling around in all sorts of ways, and that you need more information (initial conditions and boundary conditions) to find out exactly what kind of motion it is. But as an example, the function   \[ y(x,t)=A \cos {\omega (t-\frac{x}{v})} \]     describes a wave travelling in the positive $x$-direction with an angular frequency $\omega $, so as you would expect, it is a possible solution to the wave equation.
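For readers who like to check such claims, here is a quick symbolic verification (my addition — the article itself contains no code — using Python's sympy library) that the travelling cosine satisfies the wave equation above:

```python
import sympy as sp

# Symbols for the vibrating-string example; all taken real and positive
x, t, A, omega, v = sp.symbols('x t A omega v', positive=True)

# Candidate solution: a wave travelling in the positive x-direction
y = A * sp.cos(omega * (t - x / v))

# The wave equation says y_xx - (1/v^2) y_tt = 0, so this residual must vanish
residual = sp.diff(y, x, 2) - sp.diff(y, t, 2) / v**2
print(sp.simplify(residual))
```

The residual simplifies to zero for any amplitude $A$ and angular frequency $\omega$, confirming the claim.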
By analogy, there should be a wave equation governing the evolution of the mysterious "matter waves", whatever they may be, over time. Its solution would be a wave function $\Psi $ (but resist thinking of it as describing an actual wave) which tells you all there is to know about your quantum system — for example a single particle moving around in a box — at any time $t$. It was the Austrian physicist Erwin Schrödinger who came up with this equation in 1926. For a single particle moving around in three dimensions the equation can be written as   \[ \frac{ih}{2\pi } \frac{\partial \Psi }{\partial t} = -\frac{h^2}{8 \pi ^2 m} \left(\frac{\partial ^2 \Psi }{\partial x^2} + \frac{\partial ^2 \Psi }{\partial y^2} + \frac{\partial ^2 \Psi }{\partial z^2}\right) + V\Psi . \]     Here $V$ is the potential energy of the particle (a function of $x$, $y$, $z$ and $t$), $i=\sqrt {-1},$ $m$ is the mass of the particle and $h$ is Planck’s constant. The solution to this equation is the wave function $\Psi (x,y,z,t).$ In some situations the potential energy does not depend on time $t.$ In this case we can often solve the problem by considering the simpler time-independent version of the Schrödinger equation for a function $\psi $ depending only on space, i.e. $\psi =\psi (x,y,z):$   \[ \frac{\partial ^2 \psi }{\partial x^2} + \frac{\partial ^2 \psi }{\partial y^2} + \frac{\partial ^2 \psi }{\partial z^2} + \frac{8 \pi ^2 m}{h^2}(E-V)\psi = 0, \]     where $E$ is the total energy of the particle. The solution $\Psi $ to the full equation is then   \[ \Psi = \psi e^{-(2 \pi i E/h)t}. \]     These equations apply to one particle moving in three dimensions, but they have counterparts describing a system with any number of particles. And rather than formulating the wave function as a function of position and time, you can also formulate it as a function of momentum and time. 
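The claim that the separable ansatz $\Psi = \psi e^{-(2 \pi i E/h)t}$ turns the time-dependent equation into the time-independent one can also be checked symbolically. The sketch below (my addition, in one spatial dimension for brevity) substitutes the time-independent equation into the time-dependent one and verifies that everything cancels:

```python
import sympy as sp

x, t, m, h, E = sp.symbols('x t m h E', positive=True)
psi = sp.Function('psi')
V = sp.Function('V')

# Separable ansatz from the article: Psi = psi(x) * exp(-2*pi*I*E*t/h)
Psi = psi(x) * sp.exp(-2 * sp.pi * sp.I * E * t / h)

# 1D time-dependent Schrödinger equation, written with h (not hbar) as in the text
lhs = sp.I * h / (2 * sp.pi) * sp.diff(Psi, t)
rhs = -h**2 / (8 * sp.pi**2 * m) * sp.diff(Psi, x, 2) + V(x) * Psi

# Substituting the time-independent equation,
# psi'' = -(8 pi^2 m / h^2) (E - V) psi, should make lhs - rhs vanish
residual = (lhs - rhs).subs(sp.diff(psi(x), x, 2),
                            -8 * sp.pi**2 * m / h**2 * (E - V(x)) * psi(x))
print(sp.simplify(residual))
```

The residual is identically zero, so any $\psi$ solving the time-independent equation gives a full solution $\Psi$ once the time-dependent phase factor is attached.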
Enter uncertainty

We'll see how to solve Schrödinger's equation for a simple example in the second article, and also that its solution is indeed similar to the mathematical equation that describes a wave. But what does this solution actually mean? It doesn't give you a precise location for your particle at a given time $t$, so it doesn't give you the trajectory of a particle over time. Rather it's a function which, at a given time $t,$ gives you a value $\Psi (x,y,z,t)$ for all possible locations $(x,y,z)$. What does this value mean? In 1926 the physicist Max Born came up with a probabilistic interpretation. He postulated that the square of the absolute value of the wave function,   \[ |\Psi (x,y,z,t)|^2 \]     gives you the probability density for finding the particle at position $(x,y,z)$ at time $t$. In other words, the probability that the particle will be found in a region $R$ at time $t$ is given by the integral   \[ \int _{R} |\Psi (x,y,z,t)|^2 dxdydz. \]     (You can find out more about probability densities in any introduction to probability theory, for example here.)

Werner Heisenberg, 1901-1976.

This probabilistic picture links in with a rather shocking consequence of de Broglie's formula for the wavelength and momentum of a particle, discovered by Werner Heisenberg in 1927. Heisenberg found that there is a fundamental limit to the precision to which you can measure the position and the momentum of a moving particle. The more precise you want to be about the one, the less you can say about the other. And this is not down to the quality of your measuring instrument, it is a fundamental uncertainty of nature. This result is now known as Heisenberg's uncertainty principle and it's one of the results that's often quoted to illustrate the weirdness of quantum mechanics. It means that in quantum mechanics we simply cannot talk about the location or the trajectory of a particle.
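Born's integral is easy to play with numerically. The sketch below (my addition; the particle-in-a-box ground state it uses is a standard textbook wave function, not something derived in this article) checks that a normalized state has total probability 1 and computes the probability of finding the particle in one region:

```python
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal rule for samples f on grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Standard example: ground state of a particle in a 1D box of width L,
# psi(x) = sqrt(2/L) sin(pi x / L), which vanishes at the walls
L = 1.0
x = np.linspace(0.0, L, 100001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

density = np.abs(psi) ** 2  # Born's probability density |psi|^2

total = trapezoid(density, x)                  # should be 1 (normalization)
mask = x <= L / 2
left_half = trapezoid(density[mask], x[mask])  # probability of the left half

print(total, left_half)
```

The total probability comes out as 1 and, by symmetry, the probability of finding the particle in the left half of the box is 1/2.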
"If we believe in this uncertainty picture, then we have to accept a probabilistic account [of what is happening] because we don’t have exact answers to questions like ’where is the electron at time $t_0$?’," says Bouatta. In other words, all you can expect from the mathematical representation of a quantum state, from the wave function, is that it gives you a probability. Whether or not the wave function has any physical interpretation was and still is a touchy question. "The question was, we have this wave function, but are we really thinking that there are waves propagating in space and time?" says Bouatta. "De Broglie, Schrödinger and Einstein were trying to provide a realistic account, that it's like a light wave, for example, propagating in a vacuum. But [the physicists] Wolfgang Pauli, Werner Heisenberg and Niels Bohr were against this realistic picture. For them the wave function was only a tool for computing probabilities." We'll have a closer look at the interpretation of the wave function in the third article of this series.

Does it work?

Louis de Broglie, 1892-1987.

Why should we believe this rather fantastical set-up? In this article we have presented Schrödinger's equation as if it were plucked out of thin air, but where does it actually come from? How did Schrödinger derive it? The famous physicist Richard Feynman considered this a futile question: "Where did we get that [equation] from? It's not possible to derive it from anything you know. It came out of the mind of Schrödinger." Yet, the equation has held its own in every experiment so far. "It's the most fundamental equation in quantum mechanics," says Bouatta. "It's the starting point for every quantum mechanical system we want to describe: electrons, protons, neutrons, whatever."
The equation's earliest success, which was also one of Schrödinger's motivations, was to describe a phenomenon that had helped to give birth to quantum mechanics in the first place: the discrete energy spectrum of the hydrogen atom. According to Ernest Rutherford's atomic model, the frequency of radiation emitted by atoms such as hydrogen should vary continuously. Experiments showed, however, that it doesn't: the hydrogen atom only emits radiation at certain frequencies; there is a jump when the frequency changes. This discovery flew in the face of conventional wisdom, which endorsed a maxim set out by the 17th century philosopher and mathematician Gottfried Leibniz: "nature does not make jumps". In 1913 Niels Bohr came up with a new atomic model in which electrons are restricted to certain energy levels. Schrödinger applied his equation to the hydrogen atom and found that his solutions exactly reproduced the energy levels stipulated by Bohr. "This was an amazing result — and one of the first major achievements of Schrödinger's equation," says Bouatta. With countless experimental successes under its belt, Schrödinger's equation has become the established analogue of Newton's second law of motion for quantum mechanics. Now let's see Schrödinger's equation in action, using the simple example of a particle moving around in a box. We will also explore another weird consequence of the equation called quantum tunneling.

Read the next article: Schrödinger's equation — in action

But if you don't feel like doing the maths you can skip straight to the third article which explores the interpretation of the wave function.

About this article

Marianne Freiberger is Editor of Plus. She interviewed Bouatta in Cambridge in May 2012.
She would also like to thank Jeremy Butterfield, a philosopher of physics at the University of Cambridge, and Tony Short, a Royal Society Research Fellow in Foundations of Quantum Physics at the University of Cambridge, for their help in writing these articles.
I want to solve the time-dependent Schrödinger equation: $$ i\partial_t \psi(t) = H(t)\psi(t) $$ for a matrix, time-dependent $H(t)$ and vector $\psi$. What is an efficient way of doing this so that it scales to high-dimensional spaces?

Time-dependent case

In the time-dependent case, $[H(t),H(t')]\neq0$ in general and we need to time-order, i.e., the operator taking a state from $t=0$ to $t=\tau$ is $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\, H(t))$ with $\mathcal{T}$ the time-ordering operator. In practice we just split the time interval into lots of small pieces (basically using the Baker-Campbell-Hausdorff expansion). So, consider the time-dependent Hamiltonian for a two-level system: $$ H = \left( \begin{array}{cc} \epsilon_1 & b \cos(\omega t) \\ b\cos(\omega t) & \epsilon_2 \\ \end{array} \right) $$ i.e. two levels coupled by a time-periodic driving (see here). Even this simplest possible periodically-driven system can't be solved analytically in general.
Anyway, here's a function to construct the Hamiltonian:

ham[e1_, e2_, b_, omega_, t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}}

and here's one to construct the propagator from some initial time to some final time, given a function to construct the Hamiltonian matrix at each point in time (splitting the interval into $n$ slices--you should try increasing $n$ until your results stop changing):

constructU::usage = "constructU[h,tinit,tfinal,n]";
constructU[h_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n], curVal = IdentityMatrix[Length@h[0]]},
  Do[curVal = MatrixExp[-I*h[t]*dt].curVal, {t, tinit, tfinal - dt, dt}];
  curVal]

This constructs the operator $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\,H(t))$ as $$ U(0,\tau)\approx\prod_{n=0}^{N}\exp\left( -iH(n\,dt)\,dt \right) $$ with $N=\tau/dt-1$ (or its ceiling anyway). This is an approximation to the correct $U$. And now here is how to look at the time-dependent expectation of $\sigma_z$ for different coupling strengths $b$:

ClearAll[cU, psi0];
psi0 = {1., 0};
Manipulate[
 ListPlot[
  Table[
   {upt, Chop[#\[Conjugate].PauliMatrix[3].#] &@(constructU[ham[-1., 1., b, 1., #] &, 0, upt, 100].psi0)},
   {upt, .01, 20, .1}],
  Joined -> True, PlotRange -> {-1, 1}],
 {b, 0, 2}]

(The output is a plot of $\langle\sigma_z\rangle$ as a function of time for the chosen $b$.)

Alternatively, you could calculate the wavefunction at some time tfinal given the wavefunction at time tinit with this:

propPsi[h_, psi0_, tinit_, tfinal_, n_] :=
 Module[{dt = N[(tfinal - tinit)/n], psi = psi0},
  Do[psi = MatrixExp[-I*h[t]*dt, psi], {t, tinit, tfinal - dt, dt}];
  psi]

which uses the two-argument form MatrixExp[-I*h*t, v]. For large sparse matrices (e.g., for h a many-body Hamiltonian), this can be much faster at the cost of losing access to $U$.

Thanks a lot for all this. However, as you mentioned in your previous comment, my problem is a time-dependent Schrödinger equation. In this case, the Hamiltonian doesn't commute at different times, and the propagator can't just be a simple exponential; it should be a path-ordered exponential. This is the reason that I can't do this.
Such problems don't have an analytical solution; they have to be solved numerically! – ZKT Mar 21 '13 at 13:49

@Zahra For a time-dependent H, you can simply construct the propagator from $t=0$ to some time $t=\tau$, say. I can explain how if you want, but let me know if you actually want it so I don't waste my time if you insist on doing it with NDSolve--but ask yourself which is the most practical way if you have a Hilbert space of dimension 20000, for instance (so you'd need to solve 20000 coupled ODEs with your approach). – acl Mar 21 '13 at 15:35

Thanks a lot for the time that you give to answer my question. I really appreciate it. I'm not insisting on solving my problem with NDSolve. It would be great if I could solve it the way you are explaining. I just thought it was not possible to solve it non-numerically. Can you please tell me how to do that? I appreciate it. – ZKT Mar 21 '13 at 15:55

@Zahra oh I see, no, what I suggest is fully numerical. You're absolutely right that such problems cannot be solved analytically in general. OK, let me write it up quickly and you can see if it's useful (I routinely use this on systems with much bigger Hilbert spaces than yours, up to 20-30000). – acl Mar 21 '13 at 15:58

here you go (I had actually done this the first time, just posted only the time-independent limit because I did not realize you had a time-dependent Hamiltonian). Note that the way I construct the Manipulate is not efficient because I recalculate $U$ from scratch all the time, but it's fast enough...
– acl Mar 21 '13 at 16:04

Since there hasn't been any discussion of NDSolve yet, let me point out that for a finite-dimensional Hilbert space, where the Schrödinger equation is merely a first-order equation in time, it's easiest to just do this (using the two-dimensional Hamiltonian ham from acl's answer):

ham[e1_, e2_, b_, omega_, t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}}

Manipulate[
 Module[{ψ, sol, tMax = 20},
  sol = First@NDSolve[{I D[ψ[t], t] == ham[-1, 1, b, 1, t].ψ[t], ψ[0] == {1, 0}}, ψ, {t, 0, tMax}];
  Plot[Chop[#\[Conjugate].PauliMatrix[3].#] &@(ψ /. sol)[t], {t, 0, tMax}, PlotRange -> {-1, 1}]],
 {{b, 1}, 0, 2}]

I copied the parameters from acl's answer too, to show the direct comparison in the Manipulate. Here the vector $\psi$ is recognized by NDSolve as two-dimensional, so the formulation of the problem is quite concise, and we can leave the time-step choice up to Mathematica instead of choosing a discretization ourselves.

In fact the original question explicitly mentioned NDSolve (I edited it to make it less localized). There's nothing wrong with NDSolve for up to a few thousand states, but the approach I gave scales much better (I use it for systems with dimensions in the tens of thousands; NDSolve seems to tank much earlier); of course the way I wrote the code it's inefficient. – acl Mar 22 '13 at 9:33

as an additional comment, this approach (using NDSolve directly) also works for cases where the "Hamiltonian" depends on the wavefunction, so that we have a set of nonlinear coupled ODEs. This kind of problem appears in various mean-field approaches to many-body systems (e.g., the Gutzwiller-ansatz approach to many-body dynamics of bosons, see, e.g., Eq. 3 here). I've used NDSolve for precisely this problem with up to a couple of thousand coupled ODEs; it's really not practical at those sizes, but there's no alternative (in mma) for nonlinear ODEs. – acl Mar 22 '13 at 14:10

@acl Thanks for pointing that out (I already upvoted your answer).
I think it would have been better to edit the question in such a way as to retain some information on what the OP originally tried. – Jens Mar 22 '13 at 14:19

feel free to change it, I would not object (and I imagine neither would Zahra). I thought the question as it was was way too localized (e.g., it asked about a specific Hamiltonian, defined only in hard-to-read code--take a look at the original form if you haven't) and wanted to make it as general as possible so it's useful. I think that, phrased the current way, it admits as many approaches as possible. – acl Mar 22 '13 at 14:47

@acl I see your point. No worries, it's fine the way it is. – Jens Mar 22 '13 at 15:16

Frame it as a set of linear ODEs and solve it somehow. I usually use an implicit Runge-Kutta method in the interaction picture.

solver[H_, a_] :=
 Module[{d, init, eq, vars, solargs, t, t0, tf},
  d = Dimensions[H][[1]];
  t = a[[1]]; t0 = a[[2]]; tf = a[[3]];
  u[t_] := Table[Subscript[u, i, j][t], {i, 1, d}, {j, 1, d}];
  init = (u[t0] == IdentityMatrix[d]);
  eq = (I u'[t] == H.u[t]);
  vars = Flatten[Table[Subscript[u, i, j], {i, 1, d}, {j, 1, d}]];
  solargs = LogicalExpand[eq && init];
  soln = NDSolve[solargs, vars, a,
    Method -> {"FixedStep", Method -> {"ImplicitRungeKutta", "DifferenceOrder" -> 10}},
    StartingStepSize -> tf/100, MaxSteps -> Infinity]];

U[t_] := u[t] /. soln[[1]]

Alternatively, you could solve for $\psi(t)$ and obtain U as $|\psi(t)\rangle\langle \psi(0)|$, using an appropriate normalisation to preserve probability.
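For readers working outside Mathematica, the time-slicing idea from the accepted answer is easy to sketch in Python with NumPy/SciPy. This is an illustrative translation, not part of any answer above: the function names construct_u and ham are my own, while the two-level Hamiltonian and its parameters mirror the accepted answer.

```python
import numpy as np
from scipy.linalg import expm

def ham(e1, e2, b, omega, t):
    """Driven two-level Hamiltonian, as in the Mathematica ham above."""
    c = b * np.cos(omega * t)
    return np.array([[e1, c], [c, e2]], dtype=complex)

def construct_u(h, t_init, t_final, n):
    """Time-sliced propagator: U ~ prod_k exp(-i H(t_k) dt), k = 0..n-1."""
    dt = (t_final - t_init) / n
    u = np.eye(h(t_init).shape[0], dtype=complex)
    for k in range(n):
        u = expm(-1j * h(t_init + k * dt) * dt) @ u
    return u

psi0 = np.array([1.0, 0.0], dtype=complex)
u = construct_u(lambda t: ham(-1.0, 1.0, 1.0, 1.0, t), 0.0, 5.0, 400)
psi = u @ psi0

# Expectation value of sigma_z in the evolved state
sz = float(np.real(np.conj(psi) @ np.array([[1, 0], [0, -1]]) @ psi))
print(np.linalg.norm(psi), sz)
```

Each slice is exactly unitary (the exponential of an anti-Hermitian matrix), so the state norm is conserved up to round-off; as in the Mathematica version, n should be increased until the results stop changing. The same system can equally be handed to a general-purpose ODE solver such as scipy.integrate.solve_ivp, mirroring the NDSolve answer, and for large sparse Hamiltonians scipy.sparse.linalg.expm_multiply applies the exponential to a vector without ever forming U, paralleling the propPsi variant.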
Monday, 31 August 2009

"Nuestro Futuro Robado" [Our Stolen Future]

About this web site
Meet the book's authors...

Our Stolen Future is a scientific detective story that explores the emerging science of endocrine disruption: how some synthetic chemicals interfere with the ways that hormones work in humans and wildlife. This web site is the web home for the authors of Our Stolen Future, where we provide regular updates about the cutting edge of science related to endocrine disruption. We will also post information about ongoing policy debates, as well as new suggestions about what you can do as a consumer and citizen to minimize risks related to hormonally-disruptive contaminants. We want to make it easy for you to quickly get up-to-date information about this issue, to get some insights into what new scientific findings mean in a broader context, to explore the existing scientific literature and to find ready access to other places on the web that carry information about endocrine disruption. Anyone who has followed this issue or watched the response to the book knows that not everyone agrees with our interpretation of the science or with our recommendations. Advances in science are always surrounded by debate--and we think that's healthy. So we've included references to a range of the critics' publications and even links to their web sites. Go see for yourself.

Selby's HyperSpace

Selby's HyperSpace (et al.) is constructed from Selby's Space (et al.). Selby's Space is described in "Educación Global. Hacia una irreductible perspectiva global en la Escuela" ("Global Education: Towards an irreducible global perspective in the School"). In Selby's space the dimensions are four: Subjective and

The recent updating and deep popularization of quantum mechanics foretells a prompt updating, in turn, of all fields of knowledge. Perhaps it will be the last step towards, in and from, the knowledge society.
Because the updating and cross-fertilization of the actually existing constellation of academic disciplines will mean that rote memorization, the method that until now has seemed to take priority in many centres of learning, will no longer carry the weight it does today. If in quantum mechanics, or in string theory, the possible existence of up to eleven dimensions is suspected and discussed, in this article we will speculate, with the help of the quantum biology (of perception) and the like, that such hyperdimensionality is not so far from being perceived today, as perhaps it always has been (in some way); and not only perceived, but subject to ordinary scientific description, since it can be registered by the host of measuring and/or recording devices and systems developed over these decades. For the record, the everyday use of ICTs, especially by digital natives, places these youngsters' worldview at an already quantum level. As we have said elsewhere, the transistors that form the holonic heart of microchips are designed with quantum formulas and principles. It is, on the other hand, clear that all creativity derived from human activity derives, implicitly, from "nature", or from that system larger than the human individual which in turn embraces it and therefore constitutes it... If this human creativity that we call quantum mechanics could be incorporated into digital gadgets everywhere, that means that, in some way, this same creativity (quantum mechanics) forms part of the ecosystems, that is, of the systems in which we are integrated. It seems that sensory systems work quantum-mechanically, and cannot be explained with the tools of classical physics alone.
That is, on the one hand we have a good part of humanity that lives, let us say, immersed (consciously or not) in the quantum world, in the explicit sense that it uses tools that allow it to bypass the physical conditions that were inherent to classical physics or mechanics. And on the other hand we have the sensosphere, which shows itself to us as a quantum frontier, one that requires quantum theory in order to be explained. Then there is the human being-observer itself, scientist or not, equipped with highly complex systems that we call sensory. Living systems are quantum computers. But what do they compute? They compute what concerns them, what "pricks" them, what "touches a nerve", what sensitizes their innumerable sensors... There is also another singularity, an epistemic one, well worth naming and commenting on right now. We refer to a semantic island called the brain. It is easy, with the search tools available, to gauge the degree of isolation of the brain in current science, relative to the physical system or systems of which the brain system is an inherent part. Just as Sandra Harding spoke of the Immaculate Conception, to refer ironically to the ADVENT or appearance, as if out of Nothing, of modern Western science... here we have another good example that will also delight Sandra. It has a nice ring: Sandra's delights... This second delight of Sandra's that we refer to is the semantic isolation of the brain.
Isolation from the body, from the sensory systems, and from the ecosystem, which holonically constitute systems nested fractally one inside another, and which should no longer be so sidestepped by our so-called academic authorities, especially in the field of neurobiology, which, rather than "the last frontier of Science" (that too), should be regarded as follows: whenever, in any conversation or text, you see the term "BRAIN" cited more than three times without the ecosystem, body and/or sensoriality being cited in turn, beware!... It means that a high-voltage, isolating and normalizing kind of abduction is being applied to you. Because the brain seemed to be the god, or the altar of the goddess Reason... Up there at the top of the bipedal stance, almost exclusive to our species... There the brain shone over the horizon... A vulgar horizon, very far from god... The body, sensoriality, the ecosystem... were thereby relegated to a very distant second place... And with that our own consciousness was also "planned", in the double sense of planned and flattened... Flattened because it was reduced in its dimensionality... In parallel with the one- or two- (depending how you look at it) dimensionality reigning in the "wisdomscape" of educated humanity... A conceptual landscape, conceived mostly on a two-dimensional plane, where even the focus, or point of view, in practice bypassed almost one of the dimensions, thereby becoming a vector with a determined and/or preferred direction (that of progress, that of GDP, ...). The brain is a very tangled double skein. The brain is a map of coherences. The brain does not store "external" information except as references, as sensations, as a continuous field of coherences, which is reinforced in daily life and which constitutes something like a landscape (wisdomscape) of coherences over which we flow each day, in a way of proceeding that turns out to be quasi-automatic. Within that map of sensational coherences, the singularities stand out. It is like a plain with little hills or mountains.
The plain is the repetitions, the routines that occur in our life landscape. The buildings of our cities, those solid things with which our sensory equipment "converses" every day... The singularities are the changes, or heterogeneities, of our everyday landscape. The bus may be one of the old ones or one of the new ones; the people, the passengers, change more or less every day, along with a multitude of other changes in everyday perception, of which you, reader, can become more aware, provided you pay them a little more (conscious) attention and/or share them with other observers, as a way of making them more your own, more conscious... This morning at dawn, for example, I became aware of what I have christened "...". A sidewalk advertising billboard was the stage for the jumps. After crossing at a traffic light, I faced that billboard and realized that it reflected objects and flashes of light, since it was oriented more or less towards the east... I kept watching those reflections as I walked closer and closer to the billboard... As if looking for some curious sensory effect... Then I perceived that the reflections jumped up and down, in time with my steps... It is simple to explain, but the lovely part, and what feeds this whole thick chapter of arguments that we are accumulating about the sensosphere, the quantum biology of perception, and so on, is this: as you walk, you head up the little ramp of the pedestrian crossing, facing the billboard... And just as your body rises and falls with your steps... the reflections did the same; that is, they reflected that rising-and-falling rhythm of my body as I walked... while those reflections in turn grew larger, since, as I climbed the little red ramp, my body's image fitted better into the territory of the billboard...

Perception-Distance Diversity in three Learning Spaces

Friday, August 28, 2009

A quantum leap in biology
One inscrutable field helps another, as quantum physics unravels consciousness

Philip Hunter

The most esoteric research field in the natural sciences is probably quantum physics. Despite the fact that Werner Heisenberg first proposed its central concepts nearly 80 years ago, it continues to baffle physicists and to cause headaches among non-physicists. Even Albert Einstein was unwilling to accept the central tenet that everything is just a matter of possibilities; he famously dismissed Heisenberg's ideas by asserting that "God does not throw dice." As quantum physics seems too mystical to be relevant to anything as real as a living organism, it might come as a surprise that some of its first applications beyond pure physics have arrived in biology. The seeds of contemporary quantum biology were sown as early as 1930, a mere three years after Heisenberg postulated his uncertainty principle describing the inability to measure related quantities exactly (see sidebar). At that time, Erich Hückel, a German chemist and physicist, developed simplified methods based on quantum mechanics (QM) for analysing the structure of unsaturated organic molecules, in particular to explain the state of electrons in aromatic compounds. But Hückel was too far ahead of his time, and his concepts went almost completely unrecognized until the 1950s, when the arrival of computers made it possible to perform more detailed calculations. It was not until the 1990s, however, that the field of quantum biology became established, thanks to the widespread adoption of density functional theory (DFT), which allows accurate calculations of electronic structure (see sidebar). By that time, high-resolution structures of protein complexes obtained using X-ray crystallography and nuclear magnetic resonance produced sufficiently accurate descriptions of crucial molecules for QM methods to unravel the details of key reactions, such as ATP hydrolysis.
A Background in Physics

In the early twentieth century, physics faced a crisis, as researchers made a number of observations on the behaviour of single photons and electrons that could not be explained by Newton's laws of mechanics. In 1927, the German physicist Werner Heisenberg finally solved these problems with his famous uncertainty principle, which has formed the cornerstone of quantum mechanics (QM) ever since. According to this principle, it is impossible to measure exactly all of a particle's quantities (such as mass, energy, position or momentum), simply because they do not have absolutely fixed values. Instead, they have a range of possible values within a probability distribution, but at normal space and timescales this range is relatively small. In everyday life, it is therefore possible to make exact measurements limited only by the sensitivity of the equipment. However, at small space and timescales, such as those that operate at the submolecular level, the impact of the uncertainty becomes much greater. If, for example, researchers wanted to predict the location of a particle at a given time, they would measure its current position and its rate of change in position expressed as its momentum. The uncertainty principle states that the more accurately researchers determine one of these quantities, the less accurately they will know the other, which imposes a fundamental limit on the accuracy of prediction. This uncertainty becomes significant not only within small spatial dimensions, but also at short time intervals. This is relevant for biology, given that many processes at the molecular level occur over short timescales. For example, the operation of molecular motors is coupled to a chemical reaction that occurs over a few femtoseconds.
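The position–momentum trade-off described above can be checked numerically. A minimal sketch (not from the article; hbar = 1 and the width sigma are arbitrary choices): a Gaussian wavepacket saturates the bound, giving a spread product of exactly hbar/2.

```python
import numpy as np

# Gaussian wavepacket on a grid; hbar = 1, sigma chosen arbitrarily
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]
sigma = 1.3
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

def expect(op_psi):
    """<psi| op |psi>, given the operator already applied to psi."""
    return np.real(np.sum(np.conj(psi) * op_psi) * dx)

delta_x = np.sqrt(expect(x**2 * psi) - expect(x * psi)**2)

# p = -i d/dx and p^2 = -d^2/dx^2 via finite differences
dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
delta_p = np.sqrt(expect(-d2psi) - expect(-1j * dpsi)**2)

print(delta_x * delta_p)   # close to 0.5, i.e. hbar/2
```

Any other wavepacket shape gives a strictly larger product, which is the "fundamental limit" in concrete form.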
Explaining enzymatic reactions requires the analysis of quantum effects because the core processes usually take place just one molecule at a time and are not bulk chemical reactions in a test tube. As such, they rely on the precise alignment of molecules, for example, when water molecules are split by the catalytic actions of four manganese ions and one calcium ion in photosynthesis. The electrons involved govern the resulting molecular interactions, and QM can be used to resolve their energies and, thus, the outcome. Similarly, photoreception involves the excitation of orbital electrons, and calculating the resulting energy change requires QM. The uncertainty principle is applied to such problems by determining the energy levels of atoms or molecules using the wave equation, which was developed by the Austrian physicist Erwin Schrödinger. It encapsulates the uncertainty in any system as the probability of finding a given particle at a particular place. Schrödinger's equation is relevant for all chemical reactions or any interactions involving electrons, which can be described as waves owing to the uncertainty of their position. When one electron interacts with another, such as in a chemical reaction, the waveform is said to collapse, as the electron assumes a definite position. Density functional theory (DFT) replaces the individual electrons of a system, such as a molecule, with a single electronic density function to represent both the aggregate charge and the interactions between individual electrons. This means that the algorithm considers only three spatial dimensions when analysing quantum-level interactions between systems, irrespective of the number of electrons involved. Before DFT came into wide use in the 1990s, every electron had to be considered separately, restricting quantum-level analysis to the smallest interactions involving only a few atoms.
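The "probability of finding a given particle at a particular place" can be made concrete with a standard textbook solution (not a calculation from the article): for the ground state of a particle in a box of width L, integrating the probability density over the left quarter of the box gives 1/4 - 1/(2*pi), about 0.091.

```python
import numpy as np

L = 1.0
n = 20001                      # grid points on [0, L/4]
x = np.linspace(0.0, L / 4.0, n)
dx = x[1] - x[0]

# Ground-state probability density |psi_1(x)|^2 of a particle in a box
psi2 = (2.0 / L) * np.sin(np.pi * x / L) ** 2

# Trapezoidal integration of |psi|^2 over [0, L/4]
prob = (0.5 * psi2[0] + 0.5 * psi2[-1] + psi2[1:-1].sum()) * dx
print(prob)                    # approx 0.25 - 1/(2*pi) ~ 0.0908
```

The particle is found in the left quarter less often than 1/4 of the time because the density is concentrated at the centre of the box.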
The Penrose–Hameroff model of consciousness uses the effect of quantum tunnelling to explain how several hundred neurons are able to simultaneously coordinate their firing rate. Quantum tunnelling exploits uncertainty about the position of an electron—with a high probability, it is near its atomic nucleus but it might also be at the far end of the galaxy, albeit with a much lower probability. By exploiting the uncertainty inherent in its wave nature, the electron appears to jump from one position on its probability wave to another, and seemingly hops over, or tunnels through, obstacles such as an atomic nucleus. QM has also made a significant impact on the study of photoreception and the detection of colour, on research into the sensing of magnetic location and directional information by migratory birds, and, most controversially, in understanding the processes underlying consciousness. The last example relies on certain unprovable assumptions about the scientific basis of perception, whereas research on catalytic reaction centres (such as analysing substrate binding) hinges on solving the Schrödinger wave equation. Described in 1926, and central to the theory of QM, this equation describes the probability that a given electron is in a particular location at a certain time (see sidebar). Such QM-based applications calculate the sequence of events at the atomic level by analysing the electronic properties during the formation and breakage of chemical bonds or the orientation of electron orbitals, as determined by their quantum wave function. The validity of QM methods is not seriously disputed, but their high computational intensity precluded their use until the 1990s. Although computers had been used to simulate the function of proteins and their chemical reactions since the mid-1960s, these calculations were based on molecular mechanics (MM) techniques derived from Newton's laws of motion. 
These operate at the level of molecules rather than electrons, and describe the energy and forces associated with particular protein structures, by studying simpler model compounds that mimic the chemical groups in the constituent amino acids and other components. The weakness of MM methods is that they rely on making simple assumptions. For example, electrons are not considered directly, but are assumed to be in an optimum position determined by the location of their atomic nuclei. This process, based on the Born–Oppenheimer approximation of the Schrödinger equation, treats a complex molecule like an assembly of weights connected by springs. Therefore, when the MM algorithm calculates the energy required to stretch or compress a chemical bond, it applies a formula similar to Hooke's law of elastic springs under tension. This works reasonably well for determining the geometry and total enthalpy of a molecule in isolation, but fails to describe reactions that involve binding or recognition in solution, as occurs in most crucial reactions in biology. Most reactions in nature involve bond formation and breakage, with associated changes in electron organization that cannot be described accurately by classical mechanics because of the uncertainties involved. Similarly, reactions involving docking or molecular recognition in solution require the calculation of polarization effects—how molecules orientate themselves as they approach each other—which are determined by the behaviour of their orbitals. Whenever electrons and their associated energies need to be considered explicitly, QM steps in. The same is true for studying reactions that involve the recognition of light (such as in the retina) or stimulation by light (as in photosynthesis), because these processes involve the excitation of electrons.
QM methods, which are often described as ab initio because they work from first principles without using empirical techniques, also reveal the dynamics of reactions as they are taking place. They make it possible to determine and analyse intermediates, such as radicals and oxidation states of metal ions, that exist transiently before the finished products of the reaction are formed. Rapidly increasing computational power combined with new methods, notably DFT to simplify calculations, makes it possible to apply QM methods to analyse enzymatic reactions. Before scientists began to use DFT widely in the 1990s, every electron in a system being analysed using QM had to be dealt with separately in each of the three spatial dimensions, whereas DFT combines all electrons into a single density function. For a system with N electrons this reduces the number of variables from 3N to 3, without, in principle, introducing any approximations. Although DFT greatly reduces the degrees of freedom, QM calculations are still so computationally intensive that, even with contemporary supercomputers or computing grids, only a small number of atoms can be analysed at one time; the current maximum is around 100. Even more restrictive is the limit on the time-span of the simulation, according to Paolo Carloni, a professor at the International School for Advanced Studies in Trieste, Italy, who specializes in ab initio and MM simulations. "First-principle calculations of a system of, say, 100 atoms can cover up to a few tens of picoseconds," he said. However, most processes involving quantum effects occur over a much longer timescale, with many enzymatic reactions taking several milliseconds, for example. Researchers use statistical methods to extend the time range of QM methods, but this inevitably introduces errors.
There has been considerable success combining QM with traditional MM techniques to circumvent the limited scaling of the former. Such hybrid methods are now widely deployed in the study of enzyme reactions. The crucial part of the system under study (such as the active site of an enzyme complex or a molecule in solution) is analysed using QM methods, whereas the energy and forces for the remainder (such as the non-reacting part of a protein complex or the solvent molecules) are calculated using the traditional MM model. The idea is to use MM approximations for those parts that are sufficiently far removed from the active reaction area so as not to contribute significantly to the overall system. Inevitably, hybrid QM/MM methods represent a compromise, and so require judicious application if they are to be sufficiently accurate. "I would say that the reliability of any QM/MM simulation strongly depends on the skills and thoroughness of the researcher doing the work, particularly during the planning/testing phase of the simulations," said Markus Dittrich from the University of Illinois at Urbana-Champaign, USA, who applies such methods to analyse ATP hydrolysis. "Once all the necessary steps have been taken, QM/MM simulations can give meaningful qualitative results, in some cases even quantitative ones, at least in my opinion." However, this requires significant testing and benchmarking for each system, as well as describing the QM/MM interface and the size of the part treated with the high-precision QM methods, according to Dittrich. Still, even hybrid methods require enormous computation power to analyse many biological structures and interactions, because of the large range of spatial dimensions and timescales involved.
For example, molecular motor proteins coordinate chemical reactions over a few femtoseconds with mechanical motions taking place over microseconds or even milliseconds. Similarly, distances range from bond breaking in the catalytic binding site in a single angstrom to structural changes during molecular motion that span up to 10 nm. No single computational approach can calculate over such ranges of time and distance; however, combining QM/MM with other classical techniques to focus on a small number of crucial variables has proved successful. Klaus Schulten and colleagues at the University of Illinois at Urbana-Champaign integrated a variety of methods, including QM/MM and molecular dynamics, to obtain new insights into the mechanism of the PcrA helicase molecular motor, which unwinds double-stranded DNA. PcrA uses the energy from ATP hydrolysis to skip along a single strand of DNA, one base pair at a time. Schulten's breakthrough lay in determining the link between the mechanical motion and the binding and unbinding of PcrA with ATP as it skips along the DNA (Yu et al, 2006). This link is mediated by a two-way conformational change in the protein. "When the protein binds, one part of the structure is a bit loose, and then the reverse happens when it unbinds," said Schulten. This sequence of flexing allows the protein to traverse the DNA. As protein motors go, PcrA helicase is relatively simple, as it moves in a linear direction. Other motors involve more complex rotary motion. One of the best known and most sophisticated examples is the combination of Fo and F1 motors that work in tandem to synthesize or hydrolyse ATP. These motors convert energy between the two forms in which it is stored in cells: as a transmembrane electrochemical gradient or in a chemical bond, such as the gamma phosphate bond in ATP. Fo and F1 act reversibly, with the former using the transmembrane electrochemical gradient to generate a rotary torque to drive ATP synthesis in the latter. 
The system can operate in reverse when F1 hydrolyses ATP instead of producing it, generating torque that can then be harnessed by Fo to pump ions 'uphill' against their transmembrane electrochemical gradient. As Schulten noted, this complex motion has yet to be fully explained, but the work on PcrA helicase will provide some clues. "When you look at the binding sites of ATPase and PcrA, you see that they are the carbon image of each other," said Schulten. Another fundamental process that has benefited from the use of QM is photoreception. For years, researchers have been puzzled by how some animals, particularly migratory birds, use their retinal receptors not only for normal vision, but also to 'see' longer distances by measuring the direction and strength of the Earth's geomagnetic field. QM theories have been used to describe two mechanisms for this magnetoreception: the radical-pair mechanism and the magnetite-based mechanism. Originally believed to be competing, the two explanations have since been found to be complementary. In the radical-pair mechanism, a light-induced electron transfer in photopigments in a receptor in the eye creates a pair of excited electrons with a particular magnetic orientation or quantum spin state. The Earth's magnetic field affects the transition between these spin states, thus altering how the bird perceives colours. These radical-pair receptors amplify the Earth's weak magnetic signal using magnetic resonance, thus allowing the bird to detect it. Under the magnetite-based mechanism, the Earth's field exerts a mechanical force on magnetite particles in the upper beaks of migrating birds. An increasing number of researchers now believe that the radical-pair mechanism provides directional information that is comparable to that from a magnetic compass, whereas the magnetite-based mechanism provides positional information as it measures the strength of the signal, which varies with location (Wiltschko & Wiltschko, 2006). 
However, further work will be needed to elucidate fully how the brain reconciles and processes information from the two magnetic sources alongside normal vision. The debate over magnetoreception might not be settled, but there is broad agreement over the applicability of QM to this field. What is still disputed is its application to the study of consciousness. QM has always been inextricably linked to consciousness, given the vital role of the observer in making measurements and defining events, and consciousness itself can be explained with reference to QM according to a number of researchers. "It seems that consciousness operates very well in the classical realm," said Koichiro Matsuno, a professor in the Department of Bioengineering at Nagaoka University of Technology in Japan, and a leader in the QM field. "But one serious question would arise at this point. That is, how could one guarantee the robustness of such seemingly classical phenomena including our brain activities." The most celebrated theory of quantum consciousness—linking events at the sub-atomic level with our perception of consciousness—was developed by the British mathematician and physicist Roger Penrose, and by Stuart Hameroff, a physician at the University of Arizona Medical Center in Tucson, USA (Hameroff & Penrose, 1996). In the quantum world, matter exists only as a set of possibilities. 'Reality' emerges when the Schrödinger equation used to describe these possibilities 'collapses'—or moves from a probability wave form to a fixed state—which translates into a classical event, obeying rules such as Newton's laws of motion. According to the Penrose–Hameroff model, such quantum states of infinite possibilities exist in the tubulin subunits of microtubules in brain neurons and glia cells, in which they are isolated from their environment to prevent them from collapsing as a result of interacting with each other.
Furthermore, the proteins and their associated quantum states are linked through quantum tunnelling, which allows particles to overcome energy and space–time barriers. Consciousness then occurs whenever a series of quantum states connected across neurons can no longer be preserved, and interact to yield a signal that the brain can recognize and respond to. This model explains the observation that consciousness seems to involve the simultaneous coordination of multiple neuronal signals. The Penrose–Hameroff model has attracted criticism. Max Tegmark, an astrophysicist at the Massachusetts Institute of Technology in Cambridge, USA, suggested that the model could not work as the brain is simply too warm for quantum effects to occur (Tegmark, 2000). However, like Matsuno, Hameroff insists that no classical theory of consciousness has stood up to scrutiny. "Classical theories based on complexity, emergence, and so forth, have yet to make any testable predictions, and are not, as far as I can tell, falsifiable," he said. "Thus, although we are often criticized, we have a theory and our critics do not"—at least, that is, no theory that can be proved or disproved. Hameroff has further attempted to define the relationship between complexity and consciousness, given that the phenomenon did not exist at the beginning of evolution and must have emerged at some point, either gradually or abruptly. His explanation depends to some extent on the Penrose–Hameroff theory, and posits that consciousness arises at the boundary between classical states—events, such as neural signals, which can be recognized or processed as information—and the underlying quantum processes that generated them. On this basis, the threshold of complexity for consciousness was passed 540 million years ago in small worms, such as nematodes. Their neuronal network was sufficient to create quantum tunnel effects involving 100–1,000 neurons, which Hameroff considers enough to generate a single conscious event.
This single event was defined by Libet et al. (1991) to have a pre-conscious time of 500 ms—the time between the formation of a new waveform and its subsequent collapse. The basis for magnetoreception might have evolved even earlier, given that plants and animals have been shown to suffer when shielded from the Earth's magnetic field (Galland & Pazur, 2005). Although the debate on the role of QM in consciousness persists, quantum physics has nevertheless made inroads into biology, and will further help biologists to understand other phenomena and mechanisms. If QM is the basis of reality, as some researchers believe, it should come as no surprise that it is intimately involved in all kinds of biological processes, even sensation and cognition.

References

Galland P, Pazur A (2005) Magnetoreception in plants. J Plant Res 118: 371–389
Hameroff SR, Penrose R (1996) Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness. In Hameroff SR, Kaszniak AW, Scott AC (eds) Toward a Science of Consciousness: The First Tucson Discussions and Debates, pp 507–540. Cambridge, Massachusetts, USA: MIT Press
Libet B, Pearl DK, Morledge DE, Gleason CA, Hosobuchi Y, Barbaro NM (1991) Control of the transition from sensory detection to sensory awareness in man by the duration of a thalamic stimulus. The cerebral 'time-on' factor. Brain 114: 1731–1757
Tegmark M (2000) Importance of quantum decoherence in brain processes. Phys Rev E 61: 4194–4206
Wiltschko R, Wiltschko W (2006) Magnetoreception. Bioessays 28: 157–168
Yu J, Ha T, Schulten K (2006) Structure-based model of the stepping motor of PcrA helicase. Biophys J, doi:10.1529/biophysj.106.088203
I want to ask how we actually measure the probability amplitude that appears in the Schrödinger equation. From what I read in quantum mechanics textbooks, it appears that after the measurement, the system "collapses" to an eigenstate. And we know this because the measured probabilities of events changed: for a typical event involving two states, before measurement the probability is $|A_{1}+A_{2}|^{2}$, but afterwards it is $|A_{1}|^{2}+|A_{2}|^{2}$, etc. My question is, how do we know that our measurement of the probability amplitude is accurate? How do we know that there is no uncertainty principle that makes $$ \Delta A_{1}\,\Delta A_{2}\ge k $$ for example? Since the probability amplitude is one of the intrinsic quantum properties of the system, it seems to me any measurement should disturb it to some extent. For example, if the above hypothetical "uncertainty relation" holds, then we cannot say in principle what $|A_1+A_2|^{2}$ is (since it must be disturbed by our measurement), but only what $E|A_1+A_2|^{2}$ is based on experiment. But if $A_1,A_{2}$ lie in a certain range, we may not be able to distinguish $|A_1+A_2|^{2}$ and $|A_1|^{2}+|A_2|^{2}$ any more unless we perform a huge number of experiments. To elaborate, my rough conception of the way people measure it is this: we repeat the experiment in identical situations $N$ times, and we assume via the strong law of large numbers that the average frequency must approach the mean value. However, this assumption does not exclude the possibility that every time we measure the probability of event $A_1$, the accuracy of measuring event $A_2$ may be somehow influenced. Therefore in an actual measurement, the probability we get is close to $E|A_1+A_2|^{2}$, but not really the real value if by measuring we caused a huge variance in the data. Let us for simplicity consider an even simpler case with only one event $A$, whose probability amplitude is given by the complex number $a$.
Suppose that in actual measurement we find that the $i$th run of $N$ measurements gives a sample average of about $a+(-1)^{i}b$. Then we can propose either that $A$'s probability amplitude changes with time (as in a two-state system), or that our measurement somehow influenced the sampled value of $a$. Suppose we are in the second case; how do we know what the true probability amplitude $a$ is? If we only do $N$ experiments, we get a biased value, and if we do more, the chance of the bias accumulating is small but still not negligible if $b$ is really large.

You're asking a couple of different questions here. In the last paragraph you're just asking how you can tell that the measurements done on an ensemble of presumably identically prepared systems are uncorrelated. In practice you would vary the time interval at which you do the measurements to check for time correlation. – DanielSank May 26 '14 at 5:04
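The question's own scheme — repeat the experiment $N$ times and read off frequencies — can be sketched numerically. In the sketch below the amplitudes $A_1$, $A_2$ are made-up illustrative values, not taken from any particular experiment; the point is that the statistical error of the estimated probability shrinks like $1/\sqrt{N}$, which sets how many runs are needed to distinguish $|A_1+A_2|^2$ from $|A_1|^2+|A_2|^2$:

```python
import numpy as np

# Hypothetical amplitudes for two interfering paths (illustrative values only).
A1 = 0.6
A2 = 0.3 * np.exp(1j * np.pi / 3)

p_coherent = abs(A1 + A2) ** 2              # interference present: |A1 + A2|^2 = 0.63
p_incoherent = abs(A1) ** 2 + abs(A2) ** 2  # interference absent: |A1|^2 + |A2|^2 = 0.45

# Repeat the experiment N times; each run is a Bernoulli trial with the true
# (coherent) probability.  The sample frequency estimates p with statistical
# error ~ sqrt(p(1-p)/N).
rng = np.random.default_rng(0)
N = 100_000
p_hat = (rng.random(N) < p_coherent).mean()

sigma = np.sqrt(p_coherent * (1 - p_coherent) / N)
print(p_hat, p_coherent, p_incoherent, sigma)
```

With $N = 10^5$ runs the standard error is about $1.5\times10^{-3}$, far smaller than the 0.18 gap between the coherent and incoherent predictions for these particular amplitudes, so the two hypotheses are easily distinguished.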
Say I'm given the following Schrödinger equation $$\frac{d^2u}{dx^2}+ \left[E - V(x)+ \frac{a}{x^2}\right]u(x) =0$$ where $a \in \mathbb{R}$. What are the physical interpretations of this equation? I understand that it means the angular momentum is fixed, but are there any other implications associated with this?

This kind of reminds me of the centrifugal Schrödinger: en.m.wikipedia.org/wiki/…. Sorry, whiskey might affect. – Love Learning Jan 26 '14 at 1:49

Go further down in the link to radial... You will see a 1/r^2 term. – Love Learning Jan 26 '14 at 1:52

Given the constraint that $a$ may be any real number, this equation can only be interpreted as the Schrödinger equation for a massive non-relativistic particle in one dimension, subject to the potential $V(x)-a/x^2$. If $V(x)$ is sufficiently regular at the origin, the equation will have a fairly ugly singularity there.

The equation does, as you note, look superficially like the Schrödinger equation of a particle in three dimensions, subject to a central potential which preserves angular momentum, once the angular dependence has been separated. However, in that case the possible values of $a$ are much more restricted. This is because the singular term $a/r^2$ and the second derivative both come from the kinetic energy term described by the Laplacian, $\nabla^2$, which means that the same constant must multiply both of them. In such an equation, one only ever encounters the combination $$ \frac{d^2}{dr^2}-\frac{l(l+1)}{r^2}, $$ where $l=0,1,2,\ldots$.
The reason for this is that when you do a separation of variables into a radial and angular part as $\Psi(r,\theta,\phi)=\frac1r u(r)Y(\theta,\phi)$, the angular part of the wavefunction is constrained to obey an eigenvalue problem of its own, $$ \frac{1}{\sin^2\theta}\left[ \sin\theta\frac{\partial}{\partial\theta} \sin\theta\frac{\partial}{\partial\theta} +\frac{\partial^2}{\partial\phi^2} \right] Y(\theta,\phi)=aY(\theta,\phi). $$ The solutions of this eigenvalue problem are quite 'rigid', and the only way to have regular solutions is, as usual, for the constant $a$ to be restricted to a specific set. Here it must obey $a=-l(l+1)$ for $l$ a nonnegative integer.

Thus, if your constant does obey such a condition, then yes, your equation does admit interpretation as the radial equation of a 3D particle with conserved angular momentum, and the singular term represents the centrifugal barrier. Otherwise, no: it can only be interpreted as a 1D Schrödinger equation.

Nice answer, I completely agree. +1 – Zoltan Zimboras Jan 26 '14 at 3:07

I've been informed that in 2-dimensional systems it is possible for the angular momentum to take on any possible value, and that this is related to the physics of anyons; see the following paper susyqm.com/wp-content/uploads/2012/11/… (page 2) for more information. – user119264 Jan 26 '14 at 4:17
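The restriction $a=-l(l+1)$ can be checked directly. Substituting $x=\cos\theta$ turns the $\phi$-independent part of the angular operator above into the Legendre operator $\frac{d}{dx}\big[(1-x^2)\frac{d}{dx}\big]$, and applying it to the Legendre polynomials $P_l(x)$ (the regular solutions) returns $-l(l+1)$ times the same polynomial. A small sketch of that check, using plain polynomial algebra:

```python
import numpy as np
from numpy.polynomial import legendre as L
from numpy.polynomial import polynomial as P

eigs = []
for l in range(4):
    # Coefficients (in powers of x, with x = cos(theta)) of the Legendre polynomial P_l.
    p = L.leg2poly([0] * l + [1])
    # Apply the Legendre operator d/dx[(1 - x^2) dP/dx], i.e. the
    # phi-independent angular operator after the substitution x = cos(theta).
    Lp = P.polyder(P.polymul([1.0, 0.0, -1.0], P.polyder(p)))
    # Lp equals a * p; read the eigenvalue off the leading coefficients.
    a = 0.0 if l == 0 else Lp[-1] / p[-1]
    eigs.append(round(a))

print(eigs)  # [0, -2, -6, -12], i.e. a = -l(l+1) for l = 0, 1, 2, 3
```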
The physical meaning of Schrödinger's equation

1. Jun 28, 2008 #1
OK, I understand the physical interpretation of the wave function, which is the solution of Schrödinger's equation: the interpretation of the wave function is in terms of probability. But what is the physical meaning of Schrödinger's equation itself, in terms of Newton's equation (F=ma)?

2. Jun 28, 2008 #2
Check out this thread, in particular post #8. Hey, you're the one who asked the question then.
Last edited: Jun 28, 2008

3. Jun 30, 2008 #3
You could perhaps see the Schrödinger equation as the quantum equivalent of Newton's law, in the sense that while Newton's law tells you the "future story" of a classical particle (its trajectory due to forces), the Schrödinger equation tells you the same for a quantum particle. The difference is that for a quantum particle you cannot speak of a trajectory in the classical sense, due to the Heisenberg uncertainty principle, but you can speak of a wave function (with a probabilistic meaning), and the Schrödinger equation tells you the "future story" of that wave function.
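The "future story" analogy can be made concrete: for a free particle, Ehrenfest's theorem says the mean position $\langle x\rangle$ follows the classical trajectory, even though the full story is a spreading wave function. A minimal sketch in units with $\hbar = m = 1$ (the grid sizes and packet parameters are arbitrary illustrative choices), evolving a Gaussian packet exactly in momentum space:

```python
import numpy as np

# Grid (hbar = m = 1 throughout; all numbers are illustrative choices).
N, Lbox = 1024, 100.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian wave packet at x = 0 with mean momentum k0.
k0, sigma = 2.0, 2.0
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Free evolution for time t, exact in momentum space: psi_k -> exp(-i k^2 t / 2) psi_k.
t = 5.0
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

# The packet spreads, but <x> follows the classical trajectory x(t) = k0 * t.
prob = np.abs(psi_t)**2
x_mean = np.sum(x * prob) / np.sum(prob)
print(x_mean)  # ~ k0 * t = 10
```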
Matter wave
From Wikipedia, the free encyclopedia

In quantum mechanics, a branch of physics, a matter wave is the wave-like behavior of matter. The concept of matter waves was first introduced by Louis de Broglie. Matter waves are hard to visualize, because we are used to thinking of matter as solid. De Broglie revolutionized quantum mechanics by producing the equation for matter waves.

Wavelength of Matter

Based on the fact that light has a wave-particle duality, de Broglie showed that matter might exhibit wave-particle duality as well (meaning that matter behaves both like particles and like waves). Basing his formula on earlier formulas, he arrived at the equation below:

λ = h / (mv)

where λ is the wavelength of the object, h is Planck's constant, m is the mass of the object, and v is the velocity of the object. An equivalent version of this formula is

λ = h / p

where p is the momentum (momentum is equal to mass times velocity). These equations merely say that matter exhibits a particle-like nature in some circumstances and a wave-like character at other times. Erwin Schrödinger created a more advanced equation based on this formula and the Bohr model, known as the Schrödinger equation.
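As a numerical illustration of λ = h/(mv): an electron moving at 10⁶ m/s (an illustrative speed chosen for this example) has a wavelength of roughly 0.7 nm, comparable to atomic spacings, which is why electron beams diffract off crystals:

```python
# de Broglie wavelength lambda = h / (m v) for an electron (SI units).
h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
v = 1.0e6         # illustrative speed, m/s

lam = h / (m_e * v)
print(lam)  # ~ 7.3e-10 m, i.e. about 0.73 nm
```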
Born-Oppenheimer approximation

In quantum chemistry, the computation of the energy and wavefunction of an average-size molecule is a formidable task that is alleviated by the Born-Oppenheimer (BO) approximation. For instance, the benzene molecule consists of 12 nuclei and 42 electrons. The time-independent Schrödinger equation, which must be solved to obtain the energy and molecular wavefunction of this molecule, is a partial differential eigenvalue equation in 162 variables—the spatial coordinates of the electrons and the nuclei. The BO approximation makes it possible to compute the wavefunction in two less formidable, consecutive steps. This approximation was proposed in the early days of quantum mechanics by Born and Oppenheimer (1927) and is still indispensable in quantum chemistry. In basic terms, it allows the wavefunction of a molecule to be broken into its electronic and nuclear (vibrational, rotational) components:

$$\Psi_\mathrm{total} = \psi_\mathrm{electronic} \times \psi_\mathrm{nuclear}.$$

In the first step of the BO approximation the electronic Schrödinger equation is solved, yielding the wavefunction $\psi_\mathrm{electronic}$ depending on electrons only. For benzene this wavefunction depends on 126 electronic coordinates. During this solution the nuclei are fixed in a certain configuration, very often the equilibrium configuration. If the effects of the quantum mechanical nuclear motion are to be studied, for instance because a vibrational spectrum is required, this electronic computation must be repeated for many different nuclear configurations. The set of electronic energies thus computed becomes a function of the nuclear coordinates. In the second step of the BO approximation this function serves as a potential in a Schrödinger equation containing only the nuclei—for benzene an equation in 36 variables. The success of the BO approximation is due to the high ratio between nuclear and electronic masses.
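The two-step procedure can be sketched numerically in one dimension: take a model curve standing in for the precomputed electronic energy $E_e(R)$ of step one (here a Morse potential with illustrative, roughly H2-like parameters in atomic units; these numbers are assumptions for the sketch, not computed electronic energies), then carry out step two by diagonalizing the nuclear Schrödinger equation on that curve with finite differences:

```python
import numpy as np

# Stand-in for the electronic energy E_e(R): a Morse curve with illustrative,
# roughly H2-like parameters (atomic units).
D, a, R0 = 0.17, 1.0, 1.4
mu = 918.0  # reduced nuclear mass of H2, atomic units

def E_e(R):
    return D * (1.0 - np.exp(-a * (R - R0)))**2

# Second BO step: solve [T_n + E_e(R)] phi = E phi by finite differences.
N = 800
R = np.linspace(0.5, 4.0, N)
dR = R[1] - R[0]
off = np.full(N - 1, 1.0)
lap = np.diag(off, -1) - 2.0 * np.eye(N) + np.diag(off, 1)   # d^2/dR^2
H = -lap / (2.0 * mu * dR**2) + np.diag(E_e(R))
E = np.linalg.eigvalsh(H)
print(E[:3])  # lowest vibrational levels on this model surface
```

For a Morse curve the levels are known in closed form, $E_n = \omega(n+\tfrac12) - [\omega(n+\tfrac12)]^2/(4D)$ with $\omega = a\sqrt{2D/\mu}$, which provides a check on the numerics.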
The approximation is an important tool of quantum chemistry; without it only the lightest molecule, H2, could be handled, and all computations of molecular wavefunctions for larger molecules make use of it. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations. The electronic energies, constituting the nuclear potential, consist of kinetic energies, interelectronic repulsions and electron-nuclear attractions. Roughly speaking, the nuclear potential is an averaged electron-nuclear attraction. The BO approximation rests on the fact that the inertia of the electrons is negligible in comparison with that of the nuclei to which they are bound.

Short description

The Born-Oppenheimer (BO) approximation is ubiquitous in quantum chemical calculations of molecular wavefunctions. It consists of two steps. In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator Tn is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian He the nuclear positions enter as parameters. The electron-nucleus interactions are not removed; the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped-nuclei approximation.) The electronic Schrödinger equation

$$H_\mathrm{e}(\mathbf{r},\mathbf{R})\,\chi(\mathbf{r},\mathbf{R}) = E_\mathrm{e}\,\chi(\mathbf{r},\mathbf{R})$$

is solved (out of necessity, approximately). The quantity $\mathbf{r}$ stands for all electronic coordinates and $\mathbf{R}$ for all nuclear coordinates. Obviously, the electronic energy eigenvalue $E_\mathrm{e}$ depends on the chosen positions $\mathbf{R}$ of the nuclei. Varying these positions $\mathbf{R}$ in small steps and repeatedly solving the electronic Schrödinger equation, one obtains $E_\mathrm{e}$ as a function of $\mathbf{R}$. This is the potential energy surface (PES): $E_\mathrm{e}(\mathbf{R})$.
Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation, and the PES itself is called an adiabatic surface. In the second step of the BO approximation the nuclear kinetic energy Tn (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion

$$\left[T_\mathrm{n} + E_\mathrm{e}(\mathbf{R})\right]\phi(\mathbf{R}) = E\,\phi(\mathbf{R})$$

is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.

Derivation of the Born-Oppenheimer approximation

It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms. It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated:

$$E_0(\mathbf{R}) \ll E_1(\mathbf{R}) \ll E_2(\mathbf{R}) \ll \cdots \quad\text{for all}\quad \mathbf{R}.$$
We start from the exact non-relativistic, time-independent molecular Hamiltonian:

$$H = H_\mathrm{e} + T_\mathrm{n},$$

with

$$H_\mathrm{e} = -\sum_{i}\frac{1}{2}\nabla_i^2 - \sum_{i,A}\frac{Z_A}{r_{iA}} + \sum_{i>j}\frac{1}{r_{ij}} + \sum_{A>B}\frac{Z_A Z_B}{R_{AB}} \quad\mathrm{and}\quad T_\mathrm{n} = -\sum_{A}\frac{1}{2M_A}\nabla_A^2.$$

The position vectors $\mathbf{r}\equiv\{\mathbf{r}_i\}$ of the electrons and the position vectors $\mathbf{R}\equiv\{\mathbf{R}_A = (R_{Ax}, R_{Ay}, R_{Az})\}$ of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as $r_{iA}\equiv|\mathbf{r}_i-\mathbf{R}_A|$ (the distance between electron $i$ and nucleus $A$), and similar definitions hold for $r_{ij}$ and $R_{AB}$. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the Coulomb interactions between the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see Planck's constant, the dielectric constant of the vacuum, the electronic charge, or the electronic mass in this formula. The only constants explicitly entering the formula are $Z_A$ and $M_A$—the atomic number and mass of nucleus $A$. It is useful to introduce the total nuclear momentum and to rewrite the nuclear kinetic energy operator as follows:

$$T_\mathrm{n} = \sum_{A}\sum_{\alpha=x,y,z}\frac{P_{A\alpha}P_{A\alpha}}{2M_A} \quad\mathrm{with}\quad P_{A\alpha} = -i\frac{\partial}{\partial R_{A\alpha}}.$$

Suppose we have $K$ electronic eigenfunctions $\chi_k(\mathbf{r};\mathbf{R})$ of $H_\mathrm{e}$, that is, we have solved

$$H_\mathrm{e}\,\chi_k(\mathbf{r};\mathbf{R}) = E_k(\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R}) \quad\mathrm{for}\quad k=1,\ldots,K.$$

The electronic wave functions $\chi_k$ will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions $\chi_k$ on the nuclear coordinates is indicated by the symbol after the semicolon.
This indicates that, although $\chi_k$ is a real-valued function of $\mathbf{r}$, its functional form depends on $\mathbf{R}$. For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, $\chi_k$ is an MO given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of $\mathbf{R}$, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO $\chi_k$. We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider

$$P_{A\alpha}\,\chi_k(\mathbf{r};\mathbf{R}) = -i\,\frac{\partial\chi_k(\mathbf{r};\mathbf{R})}{\partial R_{A\alpha}} \quad\mathrm{for}\quad \alpha=x,y,z,$$

which in general will not be zero. The total wave function $\Psi(\mathbf{R},\mathbf{r})$ is expanded in terms of $\chi_k(\mathbf{r};\mathbf{R})$:

$$\Psi(\mathbf{R},\mathbf{r}) = \sum_{k=1}^{K}\chi_k(\mathbf{r};\mathbf{R})\,\phi_k(\mathbf{R}),$$

with

$$\langle\,\chi_{k'}(\mathbf{r};\mathbf{R})\,|\,\chi_k(\mathbf{r};\mathbf{R})\,\rangle_{(\mathbf{r})} = \delta_{k'k},$$

and where the subscript $(\mathbf{r})$ indicates that the integration, implied by the bra-ket notation, is over electronic coordinates only. By definition, the matrix with general element

$$\big(\mathbb{H}_\mathrm{e}(\mathbf{R})\big)_{k'k} \equiv \langle\chi_{k'}(\mathbf{r};\mathbf{R})\,|\,H_\mathrm{e}\,|\,\chi_k(\mathbf{r};\mathbf{R})\rangle_{(\mathbf{r})} = \delta_{k'k}\,E_k(\mathbf{R})$$

is diagonal.
After multiplication by the real function $\chi_{k'}(\mathbf{r};\mathbf{R})$ from the left and integration over the electronic coordinates $\mathbf{r}$, the total Schrödinger equation

$$H\,\Psi(\mathbf{R},\mathbf{r}) = E\,\Psi(\mathbf{R},\mathbf{r})$$

is turned into a set of $K$ coupled eigenvalue equations depending on nuclear coordinates only:

$$\left[\mathbb{H}_\mathrm{n}(\mathbf{R}) + \mathbb{H}_\mathrm{e}(\mathbf{R})\right]\boldsymbol{\phi}(\mathbf{R}) = E\,\boldsymbol{\phi}(\mathbf{R}).$$

The column vector $\boldsymbol{\phi}(\mathbf{R})$ has elements $\phi_k(\mathbf{R})$, $k=1,\ldots,K$. The matrix $\mathbb{H}_\mathrm{e}(\mathbf{R})$ is diagonal, and the nuclear Hamilton matrix is non-diagonal with the following off-diagonal (vibronic coupling) terms:

$$\big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \langle\chi_{k'}(\mathbf{r};\mathbf{R})\,|\,T_\mathrm{n}\,|\,\chi_k(\mathbf{r};\mathbf{R})\rangle_{(\mathbf{r})}.$$

The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born-Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal. If we can neglect the off-diagonal elements, the equations will uncouple and simplify drastically.
In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of $T_\mathrm{n}$ as

$$\mathrm{H_n}(\mathbf{R})_{k'k}\equiv\big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \delta_{k'k}\,T_\mathrm{n} + \sum_{A,\alpha}\frac{1}{M_A}\langle\chi_{k'}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})}\,P_{A\alpha} + \langle\chi_{k'}|\big(T_\mathrm{n}\chi_k\big)\rangle_{(\mathbf{r})}.$$

The diagonal ($k'=k$) matrix elements $\langle\chi_{k}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})}$ of the operator $P_{A\alpha}$ vanish, because this operator is Hermitian and purely imaginary. The off-diagonal matrix elements satisfy

$$\langle\chi_{k'}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})} = \frac{\langle\chi_{k'}\,|\big[P_{A\alpha},H_\mathrm{e}\big]|\,\chi_k\rangle_{(\mathbf{r})}}{E_{k}(\mathbf{R})-E_{k'}(\mathbf{R})}.$$

The matrix element in the numerator is

$$\langle\chi_{k'}\,|\big[P_{A\alpha},H_\mathrm{e}\big]|\,\chi_k\rangle_{(\mathbf{r})} = iZ_A\sum_i\,\langle\chi_{k'}|\frac{(\mathbf{r}_{iA})_\alpha}{r_{iA}^3}|\chi_k\rangle_{(\mathbf{r})} \quad\mathrm{with}\quad \mathbf{r}_{iA}\equiv\mathbf{r}_i-\mathbf{R}_A.$$

The matrix element of the one-electron operator appearing on the right-hand side is finite. When the two surfaces come close, $E_{k}(\mathbf{R})\approx E_{k'}(\mathbf{R})$, the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation. Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of $P_{A\alpha}$ is effectively zero. The third term on the right-hand side of the expression for the matrix element of $T_\mathrm{n}$ (the Born-Oppenheimer diagonal correction) can approximately be written as the matrix of $P_{A\alpha}$ squared and, accordingly, is then negligible also.
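The growth of the coupling as two surfaces approach can be seen in a toy two-state model (all numbers illustrative): a linear crossing with constant diabatic coupling delta. For this model the derivative coupling $\langle\chi_1|\partial_R\chi_0\rangle$ can be worked out analytically and has magnitude $\delta/(2(R^2+\delta^2))$, peaking at $1/(2\delta)$ at the avoided crossing — large precisely when the gap is small:

```python
import numpy as np

delta = 0.1   # diabatic coupling (illustrative)
dR = 1e-5     # finite-difference step in the nuclear coordinate

def eigvecs(R):
    # Toy "electronic Hamiltonian": linear crossing with constant coupling.
    H = np.array([[-R, delta], [delta, R]])
    _, V = np.linalg.eigh(H)
    return V

results = []
for R in (0.0, 0.5, 2.0):
    V0, V1 = eigvecs(R), eigvecs(R + dR)
    # Fix the arbitrary sign (gauge) of each eigenvector before differencing.
    for j in range(2):
        if V0[:, j] @ V1[:, j] < 0:
            V1[:, j] *= -1
    # Derivative coupling <chi_1 | d chi_0 / dR> by finite differences.
    d01 = V0[:, 1] @ (V1[:, 0] - V0[:, 0]) / dR
    results.append((R, d01))
    print(R, d01, delta / (2 * (R**2 + delta**2)))
```

At the crossing (R = 0) the coupling reaches 1/(2·0.1) = 5, while far from it (R = 2) it is two orders of magnitude smaller, mirroring the statement above that the off-diagonal terms matter only where the surfaces approach each other.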
Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled, set of nuclear motion equations results:

$$\left[T_\mathrm{n}+E_k(\mathbf{R})\right]\phi_k(\mathbf{R}) = E\,\phi_k(\mathbf{R}) \quad\mathrm{for}\quad k=1,\ldots,K,$$

which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born-Oppenheimer approximation breaks down and one must fall back on the coupled equations. Usually one then invokes the diabatic approximation.

Historical note

The Born-Oppenheimer approximation is named after M. Born and R. Oppenheimer, who wrote a paper [Annalen der Physik, vol. 84, pp. 457-484 (1927)] entitled Zur Quantentheorie der Moleküle (On the Quantum Theory of Molecules). This paper describes the separation of electronic motion, nuclear vibrations, and molecular rotation. Somebody who expects to find in this paper the BO approximation—as it is explained above and in most modern textbooks—will be in for a surprise. The reason is that the presentation of the BO approximation is well hidden in Taylor expansions (in terms of internal and external nuclear coordinates) of (i) electronic wave functions, (ii) potential energy surfaces and (iii) nuclear kinetic energy terms. Internal coordinates are the relative positions of the nuclei in the molecular equilibrium and their displacements (vibrations) from equilibrium. External coordinates are the position of the center of mass and the orientation of the molecule. The Taylor expansions complicate the theory and make the derivations very hard to follow. Moreover, knowing that the proper separation of vibrations and rotations was not achieved in this paper, but only 8 years later [by C. Eckart, Physical Review, vol. 46, pp.
383-387 (1935)] (see Eckart conditions), one is not very much motivated to invest much effort into understanding the work by Born and Oppenheimer, however famous it may be. Although the article still collects many citations each year, it is safe to say that it is not read anymore (except perhaps by historians of science).
Quantum Mechanics and Decision Theory By Sean Carroll | April 16, 2012 8:20 am Several different things (all pleasant and work-related, no disasters) have been keeping me from being a good blogger as of late. Last week, for example, we hosted a visit by Andy Albrecht from UC Davis. Andy is one of the pioneers of inflation, and these days has been thinking about the foundations of cosmology, which brings you smack up against other foundational issues in fields like statistical mechanics and quantum mechanics. We spent a lot of time talking about the nature of probability in QM, sparked in part by a somewhat-recent paper by our erstwhile guest blogger Don Page. But that’s not what I want to talk about right now. Rather, our conversations nudged me into investigating some work that I have long known about but never really looked into: David Deutsch’s argument that probability in quantum mechanics doesn’t arise as part of a separate ad hoc assumption, but can be justified using decision theory. (Which led me to this weekend’s provocative quote.) Deutsch’s work (and subsequent refinements by another former guest blogger, David Wallace) is known to everyone who thinks about the foundations of quantum mechanics, but for some reason I had never sat down and read his paper. Now I have, and I think the basic idea is simple enough to put in a blog post — at least, a blog post aimed at people who are already familiar with the basics of quantum mechanics. (I don’t have the energy in me for a true popularization at the moment.) I’m going to try to get to the essence of the argument rather than being completely careful, so please see the original paper for the details. The origin of probability in QM is obviously a crucial issue, but becomes even more pressing for those of us who are swayed by the Everett or Many-Worlds Interpretation. 
The MWI holds that we have a Hilbert space, and a wave function, and a rule (Schrödinger’s equation) for how the wave function evolves with time, and that’s it. No extra assumptions about “measurements” are allowed. Your measuring device is a quantum object that is described by the wave function, as are you, and all you ever do is obey the Schrödinger equation. If MWI is to have some chance of being right, we must be able to derive the Born Rule — the statement that the probability of obtaining a certain result from a quantum measurement is the square of the amplitude — from the underlying dynamics, not just postulate it. Deutsch doesn’t actually spend time talking about decoherence or specific interpretations of QM. He takes for granted that when we have some observable X with some eigenstates |xi>, and we have a system described by a state $latex |\psi\rangle = a |x_1\rangle + b |x_2\rangle , $ then a measurement of X is going to return either x1 or x2. But we don’t know which, and at this stage of the game we certainly don’t know that the probability of x1 is |a|2 or the probability of x2 is |b|2; that’s what we’d like to prove. In fact let’s just focus on a simple special case, where $latex a = b = \frac{1}{\sqrt{2}} . $ If we can prove that in this case, the probability of either outcome is 50%, we’ve done the hard part of the work — showing how probabilistic conclusions can arise at all from non-probabilistic assumptions. Then there’s a bit of mathematical lifting one must do to generalize to other possible amplitudes, but that part is conceptually straightforward. Deutsch refers to this crucial step as deriving “tends to from does,” in a mischievous parallel with attempts to derive ought from is. (Except I think in this case one has a chance of succeeding.) The technique used will be decision theory, which is a way of formalizing how we make rational choices.
In decision theory we think of everything we do as a “game,” and playing a game results in a “value” or “payoff” or “utility” — what we expect to gain by playing the game. If we have the choice between two different (mutually exclusive) actions, we always choose the one with higher value; if the values are equal, we are indifferent. We are also indifferent if we are given the choice between playing two games with values V1 and V2 or a single game with value V3 = V1 + V2; that is, games can be broken into sub-games, and the values just add. Note that these properties make “value” something more subtle than “money.” To a non-wealthy person, the value of two million dollars is not equal to twice the value of one million dollars. The first million is more valuable, because the second million has a smaller marginal value than the first — the lifestyle change that it brings about is much less. But in the world of abstract “value points” this is taken into consideration, and our value is strictly linear; the value of an individual dollar will therefore depend on how many dollars we already have. There are various axioms assumed by decision theory, but for the purposes of this blog post I’ll treat them as largely intuitive. Let’s imagine that the game we’re playing takes the form of a quantum measurement, and we have a quantum operator X whose eigenvalues are equal to the value we obtain by measuring them. That is, the value of an eigenstate |x> of X is given by $latex V[|x\rangle] = x .$ The tricky thing we would like to prove amounts to the statement that the value of a superposition is given by the Born Rule probabilities. That is, for our one simple case of interest, we want to show that $latex V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) . \qquad\qquad(1)$ After that it would just be a matter of grinding.
If we can prove this result, maximizing our value in the game of quantum mechanics is precisely the same as maximizing our expected value in a probabilistic world governed by the Born Rule. To get there we need two simple propositions that can be justified within the framework of decision theory. The first is: Given a game with a certain set of possible payoffs, the value of playing a game with precisely minus that set of payoffs is minus the value of the original game. Note that payoffs need not be positive! This principle explains what it’s like to play a two-person zero-sum game. Whatever one person wins, the other loses. In that case, the values of the game to the two participants are equal in magnitude and opposite in sign. In our quantum-mechanics language, we have: $latex V\left[\frac{1}{\sqrt{2}}(|-x_1\rangle + |-x_2\rangle)\right] = - V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] . \qquad\qquad (2)$ Keep that in mind. Here’s the other principle we need: If we take a game and increase every possible payoff by a fixed amount k, the value is equivalent to playing the original game, then receiving value k. If I want to change the value of playing a game by k, it doesn’t matter whether I simply add k to each possible outcome, or just let you play the game and then give you k. I don’t think we can argue with that. In our quantum notation we would have $latex V\left[\frac{1}{\sqrt{2}}(|x_1+k\rangle + |x_2+k\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] + k . \qquad\qquad (3)$ Okay, if we buy that, from now on it’s simple algebra. Let’s consider the specific choice $latex k = -x_1 - x_2 $ and plug this into (3). We get $latex V\left[\frac{1}{\sqrt{2}}(|-x_2\rangle + |-x_1\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] - x_1 - x_2 . $ You can probably see where this is going (if you’ve managed to make it this far).
Use our other rule (2) to make this $latex -2 V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = -x_1 - x_2 , $ which simplifies straightaway to $latex V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) , $ which is our sought-after result (1). Now, notice this result by itself doesn’t contain the word “probability.” It’s simply a fairly formal manipulation, taking advantage of the additivity of values in decision theory and the linearity of quantum mechanics. But Deutsch argues — and on this I think he’s correct — that this result implies we should act as if the Born Rule is true if we are rational decision-makers. We’ve shown that the value of a game described by an equal quantum superposition of states |x1> and |x2> is equal to the value of a game where we have a 50% chance of gaining value x1 and a 50% chance of gaining x2. (In other words, if we acted as if the Born Rule were not true, someone else could make money off us by challenging us to such games, and that would be bad.) As someone who is sympathetic to pragmatism, I think that “we should always act as if A is true” is the same as “A is true.” So the Born Rule emerges from the MWI plus some seemingly-innocent axioms of decision theory. While I certainly haven’t followed the considerable literature that has grown up around this proposal over the years, I’ll confess that it smells basically right to me. If anyone knows of any strong objections to the idea, I’d love to hear them. But reading about it has added a teensy bit to my confidence that the MWI is on the right track.

• anon. This is cute, but one aspect of it is bothering me. Believing in QM and understanding decoherence gets you to the point that Hamiltonian evolution in the presence of an environment gives you states that have some “weight,” measured by the Hilbert space measure, clustered around apparent classical outcomes. The inner product, which measures this “weight,” is an intrinsic part of QM, I think.
I see the problem of deriving the Born Rule as being the problem of showing that if you repeat an experiment a number of times, the frequencies approach those corresponding to counting these states by the Hilbert space weight. In other words, the inner product isn’t just a mathematical device that hangs around, it plays a key role in determining observable outcomes. So: where’s the inner product on Hilbert space hiding in the argument you outlined above? It might be hiding in some assumption about how the x states are normalized, but can it be made explicit in a way that shows that this is really addressing the right question? The step from the equation just before “You can probably see where this is going” to the equation just after makes implicit use of the inner product. (Update: oops, not true, see #6 and #7 below.) Note that we switched the order of |x_1> and |x_2> in the sum, which wouldn’t have been possible if they didn’t have equal amplitudes.

• MPS17 Thanks for the post. Zurek has some ideas on this too. Although I haven’t read the paper, I heard the talk and they seemed more in line with the ways we physicists like to approach problems. UPDATE: I think this links to the original literature. I haven’t thought carefully about this so please excuse if this discusses a differently nuanced issue: I think it is the same kind of issue, and Zurek’s papers are extremely interesting. Instead of talking about decision theory, he talks about symmetries. He claims that, once we allow for the existence of an environment, there is a new symmetry (“envariance”) that applies to states like (1), so that the probabilities of getting x_1 and x_2 must be equal. From there the same reasoning applies.
There is some critique along the lines of “Zurek shows that if it’s appropriate to think of quantum mechanics in terms of probabilities at all, then those probabilities should obey the Born Rule, but he doesn’t actually demonstrate the need for probabilities.” It’s not clear to me that this couldn’t also be applied to Deutsch’s argument. But this is philosophical terrain, and I think the underlying thrust of Deutsch and Zurek is actually quite similar, although they use quite different vocabularies. • http://www.dudziak.com will the 1/sqrt(2) does not seem justified, and as that is the crux of the discussion, this argument does not convince me. You might as well replace 1/sqrt(2) with a variable ‘m’ for example throughout all the equations, and your final conclusion would be just as “correct”. With 1/sqrt(2) removed, the whole argument becomes a tautology… interesting no doubt, but proving nothing except that the author is well versed in basic algebra. • http://mattleifer.info Matt Leifer Sean, that is not using the inner product. It is simply using the vector space structure. You can’t assume that the inner product has any a priori relevance within this approach because that is what you are trying to derive, i.e. the only reason you pay attention to things like inner products and unitarity within conventional quantum mechanics is because you are trying to avoid negative probabilities, but you have no reason for connecting those two things until you have first derived the Born rule. I too like this argument, although I have my own version of it that makes use of Gleason’s theorem, which I prefer since it tells you that you should structure your probability assignments according to traces of operators against some density operator, even if you don’t know what the “wavefunction of the universe” is. There are legitimate issues surrounding the interpretation of probability in this approach, i.e. should one also be trying to derive a limiting frequency.
Many of these issues are not specific to QM, since people differ on whether this is required even in the classical case. However, whether or not you think frequencies are required, it must be admitted that getting the decision-theoretic interpretation right is even more important. After all, if I could derive a relative frequency, but was not able to derive the fact that I should use probabilities to inform my decisions, then that would be a complete disaster. What use is it if I can derive that a fair quantum coin should have limiting 50/50 relative frequencies, but not that I should consider a bet on heads with a $1 stake that pays $2 to be fair? There are also issues surrounding the very meaning of terms like “probability” and “utility” in this approach, since we are assuming that all outcomes actually occur. The two concepts get mushed together into something like a “caring weight” which measures how much we should care about each of our successors at the end of a quantum experiment. If you think about that for a minute, it leads to moral issues, e.g. why should I care less about a successor who lives in a branch that happens to have a small amplitude. In the analogous classical case we can say it is because there is a very small chance that such a successor will exist, but quantum mechanically they definitely will exist. Thus, one can question whether it is moral to accept a scenario in which you get a large sum of money on a large-amplitude branch, but die a horrible painful death in another branch, even with an amplitude that is epsilon above zero. In light of the Deutsch-Wallace argument, this indicates one of two things, either: - The usual intuitions about decision theory break down in a many-worlds scenario. - They do not break down, but we would always use extremal utilities, which makes it vacuous. By an extremal utility, I mean one that is infinity or -infinity on some outcomes, e.g. dying a painful death.
The principle of maximum expected utility is useless in such cases. I have a lot more to say on this subject, but not the energy to go into it right now. I do have a paper on the backburner at the moment that deals with these issues. Matt– You’re right, I was being very sloppy. That’s just the vector-space structure. The role of the inner product is essentially what you’re trying to derive, as you say. Thanks for the other comments. As you say, most of the additional issues refer to the nature of probability (or the definition of “value”), not really specifically to quantum mechanics. will– The argument certainly isn’t a tautology. Of course you could replace the 1/sqrt{2} by any number, as long as the coefficients of the two terms are the same (that’s what was used in the argument just referenced). But that’s what you want! If that number were something else, you would have a non-normalized wave function. But you would still want to have equal probabilities for two branches with equal weights. • Peli Grietzer This fantastic paper by Adrian Kent has some great arguments about why the ‘but what does speaking about probabilities even mean’ issue for MW is sharply unlike any similar issue that arises for one-world theories: http://arxiv.org/abs/0905.0624 • CU Phil There is quite a bit of criticism of the decision-theoretic proposal (most vociferously from David Albert and Adrian Kent) as well as several papers advocating the approach in this volume: The review gives a nice summary of the debate. Also, Bob Wald reviewed the above volume in Classical and Quantum Gravity, and his review is also insightful. • Michael Bacon I don’t think that Kent’s argument succeeds in proving the failure of the Everett program.
However, assuming that his argument does succeed, Kent goes on to say that such Everettian failure “adds to the likelihood that the fundamental problem is not our inability to interpret quantum theory correctly but rather a limitation of quantum theory itself.” Perhaps, but at least for now, my money remains on quantum theory. • http://mmcirvin.livejournal.com/ Matt McIrvin @will: The requirement that state vectors have norm 1 is already a requirement of quantum mechanics separate from any interpretation of amplitudes as probabilities. Given that, the factor of 1/sqrt(2) (up to some arbitrary complex phase) is necessary if the two terms have equal coefficients. Once you make any move in the direction of a probabilistic interpretation, the Born rule falls out as the only one that makes mathematical sense; there are many ways of demonstrating this. But that first step is a doozy, and I always have the sneaking suspicion that arguments like this one have somehow smuggled their conclusion in as part of an assumption that only seems less controversial. • http://mmcirvin.livejournal.com/ Matt McIrvin …my own favorite handwaving quasi-derivation of the Born rule was a probably-not-original stochastic argument that I thought up on a long walk along the Charles River many years ago. Consider the Feynman path integral for a particle that travels from point A to point B. Now suppose that you put a screen between point A and point B that randomly tweaks the particle’s wavefunction phase to a different value at each point (maybe coarse-grain it a little to make the math tractable: divide it into tiny “pixels” that each have a different random phase factor). Now consider the amplitude that the particle goes from point A to point B traveling through some coarser-grained but still small bundle of pixels. The amplitudes for each pixel will add like a random walk, yielding an overall amplitude that increases as the square root of the number of pixels.
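McIrvin’s random-walk picture lends itself to a quick numerical check: summing N unit-modulus amplitudes with independent random phases gives a resultant whose squared modulus averages to N, i.e. a modulus that grows like the square root of the number of pixels. A minimal sketch (my illustration, not from the thread; numpy assumed, and the pixel counts and trial numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def screen_amplitude(n_pixels, trials):
    """Sum n_pixels unit-modulus amplitudes with independent random phases
    (one term per 'pixel' of the randomizing screen) and return the mean
    squared modulus of the resultant over many trials."""
    phases = rng.uniform(0.0, 2 * np.pi, size=(trials, n_pixels))
    total = np.exp(1j * phases).sum(axis=1)  # a random walk in the complex plane
    return np.mean(np.abs(total) ** 2)

for n in (25, 100, 400):
    # The ratio of mean squared modulus to pixel count hovers near 1.
    print(n, screen_amplitude(n, trials=4000) / n)
```

The mean squared modulus scaling linearly in the number of pixels is just the standard random-walk result, which is what the square-of-the-amplitude reading of probability reproduces.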
Which is exactly what you’d get by interpreting the square of the amplitude as a probability. • Moshe I’m puzzled about something really basic: you are trying to argue for an expression that is quadratic in the coefficients a,b of your wavefunction (something that encodes in it interference, the essential mystery of QM). Instead you are deriving an expression which is linear in these coefficients (as pointed out, you have only used the linear structure of the Hilbert space, not the inner product). The derivation seems to use in an essential way the equality of both coefficients a=b, and of course that is precisely the only case where quadratic and linear expressions have the same consequences. But, what happens in the generic case? For example, what happens if a,b only differ by a phase? That should still lead to the same final expression. It seems to me that if you put a=-b and repeat your derivation, you’d find the same minus sign in the RHS of (1), instead of the result predicted by the Born rule. Moshe– I encourage you to put a minus sign in front of the x_2 term and go through the math. :) Obviously there is work to be done generalizing to other amplitudes, but that’s done in the paper; I don’t think there’s much controversy about that part. • http://www.uweb.ucsb.edu/~criedel/ Jess Riedel Sean: Like Peli Grietzer, I highly recommend Kent’s criticism of the decision-theory approach. To add to what Peli said, I think Kent conclusively shows that the axioms of decision theory in the many-worlds context are not nearly as obvious as they first appear, to the point that they become much less attractive than approaches which rest on Gleason’s theorem, as Matt Leifer suggests. Of course, this is all truly philosophy; the game here is to try to reduce the axioms of quantum mechanics to their most beautiful (and, usually, simple) form.
Sometimes, this improvement is so dramatic that I think everyone should agree that the new axioms are superior [such as my advisor Zurek's work--which I am constantly advertising--showing that the mysterious declaration that observables be Hermitian operators can be traced back to the linearity of evolution and the need for amplification (http://arxiv.org/abs/quant-ph/0703160)]. But sometimes, it’s just a matter of taste. Also, I’d like to clarify Michael Bacon’s comment. Kent’s paper strongly concentrates on attacking the decision-theoretic basis of Born’s rule, and only addresses the attractiveness of quantum theory in general as an aside. In particular, by the “Everett program”, Kent means the claim that quantum theory need not be supplemented by an ad-hoc assumption for extracting probabilities. I believe Kent is open to the idea that quantum theory need not be modified *if* a sufficiently attractive assumption can be found which allows the extraction of unambiguous probabilities (e.g. if the “set-selection problem” in the consistent histories framework could be solved, which he has written about). But yes, Kent does take the extreme difficulty of finding a non-ad-hoc assumption as weak evidence that quantum theory is fundamentally wrong. • Michael Bacon You obviously are closer to this than I am, and you may well be right that all Kent really thinks is that the extreme difficulty of finding non-ad-hoc assumptions is “weak” evidence that quantum theory is fundamentally wrong. However, that’s not what the language I quoted says. At least here, he’s clearly saying that there is a “likelihood” that quantum theory is wrong — i.e., more likely than not. And, that his work merely adds to that “likelihood”. Nevertheless, perhaps I’m making too much of the particular words he chose to describe his view. By the way, I love the picture of you in your natural environment on your web page.
;) • Anonymous Coward I’d be interested in how you view the relation to classical thermodynamics. There, likewise, a probability distribution “falls out of the sky”. There is some justification in things like the Sinai-Boltzmann Conjecture, stating that the standard (Liouville-phase-space-) measure is the only sensible one (uniquely ergodic, for the toy problem of hard-ball billiards)… IF you assume that the god who has chosen the initial conditions of the world has done so with an absolutely continuous probability distribution (SRB-measure). If you admit “pathological” probability measures, the entire argument collapses onto itself. I always viewed, maybe naively, the Born rule as a similar thing. People conjecture and hope to prove at some point that the Born rule follows if we make the pretty basic (and mind-bogglingly subtle!) assumption that the initial conditions of our universe have been picked compatibly with some infinite-dimensional generalization of Lebesgue measure. [sorry for the theistic metaphor... personifying some aspects of nature helps me think more clearly] I think it’s certainly a good question. People like Albrecht and Deutsch believe that the only way to justify any classical probability distribution is ultimately in terms of the Born Rule. I wouldn’t necessarily think it’s a failure if the answer is “that’s the most natural measure there is,” but I’m hopeful that some better picture of the connection between QM and classical stat mech (plus perhaps some initial-conditions input from cosmology) will explain why the Liouville measure is the “right” one. • Moshe I see where I was confused: you are using a linear structure in the space of eigenvalues, not for the coefficients, so the value for a=-b is not determined by the above considerations. I should probably take a look at the paper sometime; sounds mysterious how one can get anything quadratic from what you wrote so far. • Ben Hi Sean, I remember a great lecture by Nima Arkani-Hamed at T.A.S.I.
2007, http://physicslearning2.colorado.edu/tasi/hamed_02/SupportingFiles/video/video.wmv , where he points out that the Born Rule can be derived from the operator postulate, i.e. that physical measurement outcomes can be identified with the eigenvalues of a corresponding Hermitian operator. The argument is as follows: Construct the tensor-product state of N identically prepared copies of a|x1> + b|x2>. This could be expanded out using binomial coefficients. There is a Hermitian operator N1 which counts how many copies are in the state |x1>. Then if we take N1/N in the limit N to infinity, we obtain a Hermitian operator whose eigenvalue is |a|^2, i.e. it is the probability operator. So we get the Born Rule for free! • http://jbg.f2s.com/quantum2.txt James Gallagher You can’t get fundamental probability out without putting fundamental probability in; the Everett approach is just untenable and even quite ridiculous imho compared to just accepting that fundamental randomness exists – then the Born rule emerges as a kind of thermodynamic property of the Schrödinger evolution – the Bohmian guys have even demonstrated this (based on their wrong ontological model). Also, as I keep trying to tell everyone, the past universe does not exist; you have to look at the (discrete) flow of the Schrödinger evolution exp(hL).U(t) – U(t) to describe what we observe, and in this case we get 3D space as period-3 points in the Hilbert Space. • Colin The math is incorrect in your equation 3 (and also in Deutsch’s original paper). You only add 1/rt2 K to each outcome of the game on the left side of the equation, whereas you add an entire K to the right side of the equation.
In reality, where you have 1/rt2(x1+x2) standing in place for the entire system Psi, you can do one of two things to manipulate the equation: value Psi as a game with only one outcome, and add a single K to each side (trivial)…. Or you can keep 1/rt2(x1) and 1/rt2(x2) separate, and add an entire K to each…but still only one K on the right. EDIT…I noticed that this argument is a little skewed as you are adding K to each eigenstate…so it’s not the simple math; but the premise is still correct…what has been added to each outcome on the left is not what has been added to the entire game on the right. If I started with V(Psi>) instead of V(1/rt2x1> +1/rt2x2>) (which are identical by assumption), I would add K to get V(Psi>+K). • Pingback: Daily Run Down 04/16/2012 | Wayne's Workshop • Pingback: Linkblogging for 16/04/12 « Sci-Ence! Justice Leak! • http://qpr.ca/blog/ Alan Cooper The reference to state vectors of form |x+k> seems to be as eigenvectors for the operator X+k rather than for X, so I am not clear that it makes sense to say |x1-(x1+x2)>=|-x2> (In fact, making the operator explicit, we would seem to have |x1-(x1+x2) for X-(x1+x2)>=|x1 for X> not |-x2 for X>) And in any case the argument seems to be showing that if there were an expectation function with the expected properties then it would have to satisfy the Born rule. But that is not the same as saying that such a function should actually have a probabilistic interpretation. (Actually I guess this is the same complaint as what you alluded to in the second para of your comment #4, but I do think it’s a serious one.) • http://prce.hu/w/ Huw Price I’m disappointed that CU Phil thinks that my objections in the Many Worlds@50 volume are less vociferous than those of Adrian Kent and David Albert!
My piece is here: http://philsci-archive.pitt.edu/3886/ I think that the plausibility of the Deutsch-Wallace axioms actually presupposes what needs to be shown, viz that there is some analogue of classical uncertainty in the MW picture. Moreover, if we assume that the argument is a good one with that assumption made explicit, then we can exploit a point noticed by Hilary Greaves to show the assumption must be false. Here’s how. Let P = “There is a suitable analogue of classical uncertainty in MW”, and Q= “Rationality requires that any Everettian agent should maximise her expected in-branch utility, using weights given by the Born Rule”. Then if the Deutsch argument works, it establishes: 1. If P then Q. But Greaves’ observation shows that Q simply can’t be true, because MW introduces a new kind of outcome that an agent may have preferences about, namely the shape of the future wave function itself (or at least, the portion of it causally downstream from the agent’s current choice). In effect, Q is telling us that rationality requires us to prefer future wave functions with a characteristic feature, that of maximising Born-rule-weighted in-branch utility. But this is obviously and trivially wrong, in the case of an agent who has preferences about the shape of the wave function itself, and just prefers (i.e., assigns a higher utility to) some other kind of future wave function. Decision-theoretic rationality tells us what to do, given our preferences. It doesn’t tell us what our preferences should be. (But wouldn’t such an agent already be crazy, for some other reason? No — see the paper for details.) Given Greaves’ observation, then, there are only two possibilities: either P is false, or the Deutsch argument fails — either way, it’s bad news for the project of making sense of MW probabilities in terms of decision-theoretic considerations. 
• http://www.pipeline.com/~lenornst/index.html Len Ornstein It seems you’re approaching this ‘problem’ as a Platonist, looking for a model(s) which comes closest to some preconceived (and not widely entertained) concept of an absolutely true representation of reality – rather than from the general scientific requirement that the status of a model(s) must be judged by how closely the empirical record can be matched. For the Platonist ‘test’, the issue is whether or not Born’s added ‘axioms’ and his formulation fit together with QM better than do the construction of Decision Theory and ITS unique axioms – perhaps an Occam’s Razor type question. For the more generally accepted scientific requirement – match to the empirical record – you have so far offered no arguments to distinguish the ‘performance’ of Born’s probability interpretation from that of a Decision Theoretical approach! • http://skepticsplay.blogspot.com miller So… suppose that we have N identical systems, each with state |x1>+|x2>, where x1 and x2 are eigenvalues of operator X. And suppose we have an operator Y which represents a simultaneous measurement of X in all of the N systems. Operator Y gives the value 1 if nearly half of the measurements of X result in x1. Otherwise, operator Y gives the value 0. If I understand Deutsch’s paper, we cannot say that a measurement of Y has a high probability of returning 1. But if we are rational decision makers, we would treat the expected value of Y as being close to 1 (and getting even closer to 1 as N goes to infinity). This may not prove that the results actually follow the frequency distribution given by Born’s rule, but it sure seems like the next best thing. Huw– I’ll admit I haven’t read your paper or Greaves’s, but that objection doesn’t seem very convincing at face value. Can’t we just say that preferences are something that people have about outcomes of measurements, not about wave functions? Outcomes are what we experience, after all.
• http://prce.hu/w/ Huw Price Sean, I don’t think that response is going to help Deutsch and Wallace, who are trying to establish a claim about any rational agent, not just about agents with the kind of preferences we happen to have. But in any case, it is easy to think of examples of preferences for wave functions of the kind my objection needs, which are themselves grounded on what the wave functions imply about the experiences of people in the branches of those wave functions — e.g., a preference for a wave function in which I don’t get tortured in any branches (even very low weight branches), over a wave function in which I do get tortured in a very low weight branch, but get rich in all the high weight branches. (My Legless at Bondi example in the paper is much like this, and I discuss why MW makes such a difference, compared to the classical analogue.) • http://qpr.ca/blog/ Alan Cooper The key equation seems to be asserting that the “Value” (expectation?) of an observation (of the observable X-(x1+x2)) where the possible values are -x2 and -x1 is the same as subtracting (x1+x2) from the Value of an observation (of X) where the possible values are x1 and x2. And then the application of (2) seems to be saying that the Value of that observation (of the observable X-(x1+x2)) where the possible values are -x2 and -x1 is the negative of the Value of an observation (of X) where the possible values are x1 and x2. But if (2) is being applied this way – i.e. without regard to which observable is involved and so without regard to which of the two terms is associated with which value – then isn’t that essentially assuming that equal probabilistic weights are being assigned to each of the two outcomes, which amounts to begging the question of probabilistic weights being equal when the vector magnitudes are?
(After all, the principle that negating the payoffs negates the expectation requires keeping the same probabilities, and switching cases only works if the probabilities are equal: e.g. 1/3(-x1)+2/3(-x2)= -{1/3(x1)+2/3(x2)} but {1/3(-x2)+2/3(-x1)} is not the same) • http://alastairwilson.org/ Alastair Wilson I think the Greaves/Price objection is a serious worry for probability in EQM in general, and for the decision-theory strategy in particular. Assigning objective probabilities to outcomes does seem to presuppose the possibility of uncertainty about which of the outcomes will occur. But EQM seems to say they all occur. So there’s a prima facie problem here. (Greaves’ response is: so much the worse for probability in Everett, but Everettians can do without it.) Wallace doesn’t think the problem is too serious these days (in contrast to his older papers which argue that Everettians must make sense of ‘subjective uncertainty’) – roughly, he now thinks that the objection appeals to pre-theoretic intuitions about the nature of uncertainty, and that intuition is unreliable in such areas. However, in his new book he does provide a semantic proposal which allows us to recover the truth of ordinary platitudes about the future (like ‘I will see only one outcome of this experiment’), by interpreting them charitably as referring only to events in the speaker’s own world. I have a new paper forthcoming in the British Journal for the Philosophy of Science which argues that the Greaves/Price objection can be met on its own terms, by leaving the physics, the epistemology and the semantics alone and instead tinkering with the metaphysics. Here’s the link: http://alastairwilson.org/files/opieqmweb.pdf Sean’s remarks above capture the spirit of my suggestion nicely: if Everett is right, then our ordinary thought and talk about alternative possibilities *just is* thought and talk about other Everett worlds.
To reply to Huw’s last points from this perspective: a) if Everett worlds are (real) alternative possibilities then any possible rational agent (not just one with preferences like ours) is going to be an agent with in-branch preferences, b) the kinds of ‘preferences for wave-functions’ that you describe can be made sense of on this proposal, though I would describe them differently; they correspond to being highly risk-averse with respect to torture. “Last week, for example, we hosted a visit by Andy Albrecht from UC Davis.” What do you think of Andy’s de Sitter equilibrium cosmology (e.g. http://arxiv.org/abs/1104.3315 and references therein)? Philip– I think it’s an interesting idea, although the chances that it’s right are pretty small. Andy takes the requirement of accounting for the arrow of time much more seriously than most cosmologists do, which is a good thing. But his intuition is that the real world is somehow finite, while my intuition is the opposite. (Intuition can’t ultimately carry the day, of course, but it can guide your research in the meantime.) • http://prce.hu/w/ Huw Price Alastair, Thanks for the link, though as you know, I prefer to tinker with metaphysics as little as possible ;) Concerning your (a), my point doesn’t depend at all on denying that we have in-branch preferences, but only on pointing out that the new ontology of the Everett view makes it possible for us to have another kind of preference, too — a preference about the shape of the future wave function. Concerning (b), any ordinary notion of risk-aversion is still a matter of degree, whereas the worry about low weight branches isn’t a matter of degree. So you’ll need infinite risk aversion, won’t you? And in any case, what does the response buy you? A demonstration that the choices of an ordinary agent in an Everett world should be those of a highly risk-averse agent in a classical world? That doesn’t seem good enough, for the Deutsch-Wallace program. 
They want to show that the ordinary agent should make the same choices in the two cases. • Daryl McCullough I’m not sure I understand what you’re saying. In Sean’s derivation, all the states are eigenstates of the X operator. The meaning of the state |x> is the eigenstate of the X operator with eigenvalue x. |x+k> is an eigenstate of the X operator with eigenvalue x+k. Sean’s assumptions might make more sense to you if we explicitly introduce some additional operators. Let T(k) be the operator (the translation operator) defined by T(k) |x> = |x+k>. Let P be the operator (the parity operator) defined by P |x> = |-x>. We assume that they are linear, which means T(k) (|Psi_1> + |Psi_2>) = T(k) |Psi_1> + T(k) |Psi_2> and P (|Psi_1> + |Psi_2>) = P |Psi_1> + P |Psi_2>. So Sean’s assumptions about the value function V(|Psi>) are basically: (1) V(|x>) = x (2) V(T(k) |Psi>) = V(|Psi>) + k (3) V(P |Psi>) = – V(|Psi>) (2) and (3) follow from (1) for eigenstates of the X operator, but we need the additional assumption that they hold for superpositions of eigenstates, as well. • http://qpr.ca/blog/ Alan Cooper ok – Maybe trying three times is considered rude, but I would really appreciate it if someone could explain what I have wrong here. In the Deutsch paper we have “It follows from the zero-sum rule (3) that the value of the game of acting as ‘banker’ in one of these games (i.e. receiving a payoff -xa when the outcome of the measurement is xa) is the negative of the value of the original game. In other words” followed by your equation (2). But acting as banker is *not* the same as just having a *set* of outcome values which are the negatives of those of the player. They also have to be matched to the outcomes – i.e. it is the *ordered* sets which must be negatives. And in the case with Y=X-(x1+x2), it is in the situation where X sees x2 that Y sees -x1, and in the situation where X sees x1, Y sees -x2.
This is not the same as Y being the “banker” when X is the “player”, so I don’t see why the values should sum to zero. Please, what am I missing? • Daryl McCullough I don’t understand what you mean when you say “in the case with Y=X-(x1+x2) it is in the situation where X sees x2 that Y sees -x1 and in the situation where X sees x1, Y sees -x2″ That doesn’t agree with the meaning of the “game” as described. I think you’re confusing a sum of states with a tensor product of states. There is no need to talk about X and Y. You only need to talk about one operator, X. The game works by starting in a state |Psi>, measuring X in that state to get a value x. If x > 0, then the banker pays the player x dollars. If x < 0, then the player pays the banker -x dollars. So it's not that the banker measures one observable and the player measures a different one. There is only one measurement, and that determines who pays who. The banker's winnings are always the negative of the player's winnings. • Sudip Dear Sean, It seems to me that assuming the “two simple propositions” is just a way of putting the Born rule through a backdoor. Of course, they seem very intuitive but how sure can we be that nature upholds them? I’m reminded of von Neumann’s proof of the impossibility of deriving QM from a deterministic theory. As Bell pointed out, von Neumann made seemingly innocuous assumptions which may not be true. After all, why does V|x+k> have to be V|x>+k? Why can’t it be V|x>+k^2? I understand that these are justified using decision theory. However decision theory is a theory of decision making by rational agents – why should it have any relevance in the natural world? I admit that I haven’t looked at Deutsch’s paper or at Zurek’s paper mentioned in the comments. On a related note, do you know of any attempt at defining what constitutes a measurement in the context of MWI? As the wave function branches there, it seems to me that a fully formulated theory should explain where those branchings occur.
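The algebra being debated in these comments can be made concrete symbolically: take the value v of the equal superposition as an unknown, apply Daryl's translation axiom with k = -(x1+x2), note that the translated state is the payoff-negated original, and solve. A sketch of that single step (my illustration, assuming sympy; it only shows that the axioms force the Born-rule answer, not that the axioms themselves are justified):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# v stands for V[(|x1> + |x2>)/sqrt(2)], the unknown value of the game.
v = sp.Symbol('v', real=True)

# Axiom (2): translating every eigenvalue by k adds k to the value.
# With k = -(x1 + x2) the state becomes (|-x2> + |-x1>)/sqrt(2).
k = -(x1 + x2)
value_after_translation = v + k

# Axiom (3): negating every payoff negates the value. The translated
# state is exactly the payoff-negated original; the kets merely appear
# in the other order, which is harmless only because the amplitudes are
# equal -- the very point several commenters press on.
value_of_negated_game = -v

solution = sp.solve(sp.Eq(value_after_translation, value_of_negated_game), v)[0]
print(solution)  # x1/2 + x2/2
```

The output is the Born-rule expectation for equal amplitudes, recovering result (1) of the post.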
• Anonymous Coward As far as I understood MWI (correct me if I’m wrong; I didn’t read Everett’s paper, just a couple of graduate textbooks) the words “branching” and “measurements” should be viewed as a heuristic description of the following process and theorem: Suppose you do a measurement (for simplicity, of an electron’s spin); the measurement is described by a unitary operator $U_M$ (time propagation of your apparatus). You call it a branching into two possible worlds (orthogonal subspaces spanning the entire Hilbert space of the MWI-world) $+$ and $-$, if the time-propagation for all later times leaves these subspaces almost invariant. If this should be the case, we can simplify all further calculations by projecting onto one of the subspaces and calculating the future evolution of each of these branches (“collapse the wavefunction”). What a nifty trick to get approximate results! Everett’s contribution was to show that for suitable limits (larger Hilbert space, many particles, suitable definition of “almost invariant”) and actual measurement devices (full QM toy models of amplifiers), this does in fact occur. Therefore, Schroedinger’s equation alone implies the very good heuristic of collapsing wave-functions. Furthermore, if we should ever wish to assign weights to different branches, the only way to do this consistently is the Born rule — where consistently means “If I collapse after two measurements and calculate the evolution until the second measurement in full QM, I get roughly the same result as if I collapsed after the first measurement and again after the second one”. This way, even if we believed in magical “Copenhagen collapse induced only by human observers”, Everett has shown that “occasional collapse + Born rule” yields very good approximate methods to calculate time-evolution until the “magical collapse”. From here it is not far-fetched to postpone the “magical collapse” into the far future or *gasp* remove it altogether.
Furthermore, we can set out to precisely define “branching in the sense of invariance of subspaces up to $\varepsilon$” or “up to order such and so”. However, the words “branching” or “measurement” without further qualifiers should remain an (undefinable but not meaningless) heuristic, like “two points are close”. • Daryl McCullough Sudip writes: The meaning of |x> is that it is a state such that the measurement of operator X is certain to produce result x. So the expected result of an X-measurement is V(|x>) = x. Similarly, |x+k> is a state such that the measurement of X is certain to produce result x+k. So V(|x+k>) is x+k. • Neal J. King What leaves me unsatisfied about this approach is that you are postulating the existence of an operator V with a complete set of states that behaves in the manner indicated, and then applying the inferred “Born’s rule” to the rest of quantum mechanics. Can you make the argument work for real quantum operators that we have some reason to believe in? Like the z-component of spin-1/2? • Hal S I am not entirely sure why the Born rule is hard to understand. The point of the process is to allow one to use the computational flexibility associated with functions on the order of the reals and extract certain features of those functions (like the peaks and valleys…or extrema of the function). Remember, the wave function itself is a continuous deterministic function. More specifically we operate in the complex plane in order to exploit the computational power associated with manipulating systems with uncountable bases. If we accept the information extraction interpretation, the question is how to economise that process. Since we are dealing with complex numbers, and we are dealing with countable features of the wave function, we can ask the question what happens when we take the function to other powers.
Since 2 is the smallest prime number, we can interpret any even numbered power as simply being a rescaling of the information associated with squaring the number. If we consider odd powers, we can interpret the effect as being a rescaling of the wave function by some real number. If we consider all the potential combinations, one quickly realizes that what they are really doing is trying to capture all the information in the wave function, and essentially building a type of matrix that should be recognizable as an operator in a type of transformation procedure. In any case, squaring the amplitude is a process that economizes the information extraction from the complex plane into a series of integer indexed real numbers. • Hal S It makes me wonder if one can make an argument that if all the trivial zeros of the zeta function lie on the real line, then all the non-trivials have to be on the one-half real line. Interesting. • Pingback: The Alternative to Many Worlds « My Discrete Universe • Sudip @Anonymous Coward Thanks, that’s helpful. @Daryl Sorry, I didn’t mean to say that. Of course V|x+k>=V|x>+k by definition. What I intended to ask was why should V act linearly on a superposition of kets? • http://jbg.f2s.com/quantum2.txt James Gallagher The biggest criticism of Sean’s post is that the argument fails to explain why the Born Rule must obey a squared power relation rather than a quartic or higher one. Even Pauli recognized this problem back in 1933 (republished in English translation in his ‘General Principles of Quantum Mechanics’ Ch 2 p15) where he deduced that the Born Rule must be a positive definite quadratic form in ‘psi’, and anything not involving the product of psi.psi* would not be conserved by the Schrödinger Evolution, so we only have terms in psi.psi* = |psi|^2 and higher powers as possibilities.
Pauli, being a genius, realised that only Nature then determines that the rule is a squared one (rather than a higher even power), and ultimately the rule is fixed by experimental observation – not deduced from anything simpler (and certainly not from obfuscating arguments involving rational beings and decision theory!) I mentioned above that there is a Bohmian argument for how the absolute squared law is emergent from the dynamics of the Evolution (e.g. http://arxiv.org/abs/1103.1589 ) – this is true unless your initial distribution was a higher power invariant one – so the squared power one seems favoured on a positive measure set of starting distributions (maybe even measure 1). But you can just chuck away all the troublesome baggage that the Bohmian model entails and accept fundamental randomness – then the squared power rule is the most likely outcome, a large numbers result – ie it is a thermodynamic property of the evolution • Hakon Hallingstad Sean @ 16 and Moshe, If one carries out the calculation for a|-x2> – a|-x1>, one comes to the equation: V[a|x2> - a|x1>] + V[-a|x2> + a|x1>] = x1 + x2. However under the assumptions above, we are not allowed to assume V[a|x2> - a|x1>] = V[-a|x2> + a|x1>], and so the derivation stalls at this point. It is absolutely crucial for the argument that the coefficients in a|x1> + b|x2> are equal, contrary to QM which allows an arbitrary phase. For instance V[a|x1> + b|x2>] = (|a| x1 + |b| x2) / (|a| + |b|) would be consistent with the 2 axioms. As far as I can tell, the axioms imply
- V is linear in x1 and x2
- The coefficient of x1 is some function f(a, b), with f(1, 0) = 1
- The coefficient of x2 is f(b, a) = 1 – f(a, b)
In order to show V is the expected average value of a measurement of X, one will have to prove f(a, b) = |a|^2/(|a|^2 + |b|^2), so there are still a lot of derivations left to be done. And showing the coefficient goes as |a|^2 is the hard part of the Born rule.
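[Editorial aside: Hakon's alternative valuation above is easy to check numerically against the two axioms (shift by k, negation). A quick sketch; `v_alt` is just a name for his hypothetical rule, and the amplitudes and eigenvalues are arbitrary illustrative numbers.]

```python
def v_alt(amps, xs):
    """Hakon's hypothetical alternative valuation:
    V = (|a1| x1 + |a2| x2 + ...) / (|a1| + |a2| + ...)."""
    num = sum(abs(a) * x for a, x in zip(amps, xs))
    den = sum(abs(a) for a in amps)
    return num / den

amps, xs, k = [0.3 + 0.4j, 0.8], [2.0, -5.0], 1.7

# Axiom (2): adding k to every eigenvalue adds k to the value.
shifted = v_alt(amps, [x + k for x in xs])
# Axiom (3): negating every eigenvalue negates the value.
negated = v_alt(amps, [-x for x in xs])

print(shifted, v_alt(amps, xs) + k, negated, -v_alt(amps, xs))
```

Both axioms hold for this non-Born rule, which illustrates the point that the two assumptions alone cannot single out f(a, b) = |a|^2/(|a|^2 + |b|^2).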
• http://alastairwilson.org/ Alastair Wilson Huw – actually, I’d have thought that freely modifying metaphysics in situations like this is congenial to pragmatism. The ‘harder’ scientific claims of physics, confirmation theory, natural language semantics, etc aren’t meddled with; we just pick (on a pragmatic basis) whichever metaphysical framework allows the harder claims to hang together most naturally. On a) – I was suggesting that any possible agent is going to be an agent with *only* in-branch preferences – sorry for being unclear. From the perspective I advocate, the whole state of the wavefunction is a non-contingent subject-matter: the only contingency is self-locating. On a functionalist account of mental states, it makes no sense to ascribe preferences defined over non-contingent subject-matters. (What’s going on here is that the modal framework is helping reinforce Wallace’s ‘pragmatic’ argument for his principle Branching Indifference.) On b) – yes, the equivalent of wanting to avoid torture in any world, in the limiting case of infinitely many worlds, will be infinite risk aversion. Is that a problem? (In any case, the limiting case might turn out to be metaphysically impossible – that’s an empirical matter.) What the response is meant to buy is a translation between ‘preferences over wavefunctions’ and ordinary preferences. Everettians who take this line can explain away the apparent coherence of preferences over wavefunctions by showing that they’re just ordinary kinds of preferences (i.e. preferences about self-location) under an unfamiliar mode of presentation. • Hal S Just one last note. Using ‘ to represent an index, an equation that makes some of the previous comments clearer is <X> = Sum (E’ |z’|^2), which is understood as meaning that the probability of seeing eigenvalue E’ is the absolute value of the complex number z’ squared. Now Dirac has some interesting points that should be considered in ‘The Principles of Quantum Mechanics 4th ed’.
“One might think one could measure a complex dynamical variable by measuring separately its real and imaginary parts. But this would involve two measurements or two observations, which would be all right in classical mechanics, but would not do in quantum mechanics, where two observations in general interfere with one another – it is not in general permissible to consider that two observations can be made exactly simultaneously…” “In the special case when the real dynamical variable is a number, every state is an eigenstate and the dynamical variable is obviously an observable. Any measurement of it always gives the same result, so it is just a physical constant, like the charge of an electron.” “Even when one is interested only in the probability of an incomplete set of commuting observables having specified values, it is usually necessary first to make the set a complete one by the introduction of some extra commuting observables and to obtain the probability of the complete set having specified values (as the square of the modulus of a probability amplitude), and then to sum or integrate over all possible values of the extra observables.” So an observer cannot make two simultaneous measurements of the same observable, physical constants are real numbers, and if you don’t have enough indices to fully describe the state you add more indices and consider all potential values. Since this procedure can continue indefinitely, one begins running into the same problems with the continuum. The point of this rambling is that although we cannot know whether such a higher order hierarchy has real existence, we have to resort to it from a computational standpoint. • http://van.physics.illinois.edu/qa/index.php Michael Weissman Just a quick semi-coherent placeholder note, since I have to run now. As you say, the issue of P in MW is much trickier than if you have some sort of extra collapse in which to insert special new rules.
The traditional argument justifying Born is the one that Ben refers to, reproduced by Arkani-Hamed, but that’s long since been known to be invalid, since the limiting procedure is irrelevant. On Deutsch and decision theory: “Given a game with a certain set of possible payoffs, the value of playing a game with precisely minus that set of payoffs is minus the value of the original game.” What does a “precisely minus payoff” even mean, except in the context of little financial games, where the statement is well-known to be false? The question is not so much what a rational actor would bet, but how the existence of rational actors can be reconciled with the unitary structure+decoherence. The problem becomes one of why the probabilities for sequential observations factorize, i.e. why the chance of Schroedinger’s cat having survived the Tuesday experiment doesn’t change on Wednesday due to quantum fleas. As has repeatedly been shown, only the standard quantum measure gives the conserved flow needed to allow that factorization and hence allow the existence of rational actors. So that’s a requirement but not an explanation. The best (only) explanation I’ve seen is by Jacques Mallah. If the state consists of the usual part we think about plus some maximal entropy white noise, a physical definition of a thought as a robust quantum computation, together with ordinary signal-to-noise constraints on robustness (square-root averaging), gives the Born rule from ratios of counts of thoughts! Why that particular (mixture of low-S + high-S parts) starting state? Mallah doesn’t like this idea but I suggest the old cheat: anthropic selection. If that type of state is needed to allow the existence of rational actors, nobody will be arguing about why they find themselves part of some other type of state. I’ll try to get back to fill this in more coherently in 24 hours. p.s. Zurek’s paper sneaks in context-independent probabilities, and thus doesn’t really address the core question.
• Abram Demski How do the coefficients enter into the story at all? It looks like assumptions (2) and (3) make just as much sense if the coefficients for the two states are different, but if that’s true, then we can derive (1) for the case when the coefficients are different as well… in other words, taken at face value, the argument seems to prove that V[a|x_1>+b|x_2>]=1/2(x_1+x_2) no matter what ‘a’ and ‘b’ are. • Abram Demski I revoke my previous question (after actually trying to carry through the math). • Michael Weissman I should make at least one small correction to my hasty and over-compact note. The background entropy in Mallah’s picture is high, not maximal. • Hakon Hallingstad Since this article doesn’t explain where the absolute square of the amplitude comes in with Deutsch’s argument (48), I have read his paper which introduces it in equations 16 – 21. However I don’t understand the argument. It would be great if someone could explain why the value of eq. 18 equals the LHS of eq. 16, i.e. why is V[|x1>|y1> + ... + |x1>|ym> + |x2>|y_{m+1}> + ... + |x2>|yn>] = V[sqrt(m) |x1> + sqrt(n - m) |x2>] when y1 + … + ym = y_{m+1} + … + yn = 0? Can this actually be derived or is it an axiom? If the former, it does seem to rely on the state vectors being normalized, which would also need to be postulated? • Hal S Got a copy of Pauli’s book. Good stuff. I like this on the first page, written in 1933: “The solution is obtained at the cost of abandoning the possibility of treating physical phenomena objectively, i.e. abandoning the classical space-time and causal description of nature which essentially rests upon our ability to separate uniquely the observer and the observed.” Combined with the fact that any bound state can be represented in a quantum field theory, it appears we are getting closer to completely abandoning any notion that general relativity is even needed.
• http://qpr.ca/blog/ Alan Cooper Daryl, Thank you for responding (@36&38) to my question. Unfortunately I have been away for a few days and so have been slow to respond, but I hope you are still around and following this discussion as I remain puzzled. I have no problem with agreeing that your conditions (1)(2)(3) imply the Born rule (and similarly for Sean’s and David’s similarly numbered equations) but I still don’t see how these are implied by decision theory without essentially assuming the Born rule to start with. Yes, the “states” in question are all eigenfunctions for the same observable, but on the two sides of each value equation (other than (1)) they correspond to different eigenvalues so they are not actually the same states. In fact the decision theoretic increment of value that is expected from replacing X by X+k and the reversal that comes from replacing X with -X seem to me to be obvious only if we work with the same state and consider the observable to be what is changing. To ask for these to also apply when the operator stays the same but the states are changed seems to involve an implicit assumption that V(a1|x1>+a2|x2>) is a linear combination of x1 and x2 with coefficients p1(a1,a2) and p2(a1,a2) which are independent of |x1> and |x2>. And to me that looks very much like begging the question. Is there a way to show (without assuming the usual expectation formula) that V(X+k,|Psi>)=V(X, T(k)|Psi>)? • http://qpr.ca/blog/ Alan Cooper What seems odd about this business of starting with the Hilbert space and inferring a probabilistic interpretation after the fact, is that the Hilbert space itself arises naturally as a way of representing the possible families of probability distributions for observables.
In that approach, pioneered by von Neumann and Mackey, and nicely developed and summarized in the books by Varadarajan, the starting point is a lattice of questions (observables with values in {0,1}), and the notion of probability for these seems to be no less elementary than that of decision theoretic “value”, since the expected value of a proposition in any state is just the same as the probability that it is observed to be true. • Hakon Hallingstad Here’s an example where (2) and (3) are consistent with a different probability rule than Born’s. Just before we’re measuring the observable X (or “playing the game” in Deutsch’s terminology), we will scale |psi> such that the sum of the expansion coefficients is 1.
1. |psi> = a1 |x1> + a2 |x2>
2. a1 + a2 = 1
This scaling is not as physically illogical as you might think; for instance, the collapse of the wavefunction can also be viewed to contain a rescaling of the observed eigenvector immediately after/during the observation. Let |base> be the sum of all eigenvectors of X.
3. |base> = |x1> + |x2> + …
I’m going to show that the following definition of the expected value of the measurement (“payoff”) satisfies (2) and (3) in this article.
4. V[|psi>] = <base| X |psi>
Here’s how (2) is satisfied:
5. V[a1 |x1 + k> + a2 |x2 + k>] = a1 (x1 + k) <base|x1 + k> + a2 (x2 + k) <base|x2 + k> = a1 x1 + a2 x2 + k = V[a1 |x1> + a2 |x2>] + k
Above, <base|x1 + k> is 1 since |base> is the sum of the eigenvectors, and |x1 + k> is an eigenvector. Similarly, (3) is satisfied because:
6. V[a1 |-x1> + a2 |-x2>] = -a1 x1 <base|-x1> – a2 x2 <base|-x2> = -(a1 x1 + a2 x2) = -V[a1 |x1> + a2 |x2>]
The main result in the article is then reproduced easily:
7. V[(|x1> + |x2>) / 2] = (x1 + x2) / 2
• Hakon Hallingstad Let me follow the same arguments in this blog article and Deutsch’s, to prove something other than the Born rule:
0.
V[a1 |x1> + a2 |x2> + …] = |a1| x1 + |a2| x2 + …
To be able to make the argument we will need to postulate that the state vector should be scaled just prior to the measurement, such that the sum of the absolute values of the probability amplitudes is 1, instead of being normalized.
1. If |psi> = a1 |x1> + a2 |x2> + …, then |a1| + |a2| + … = 1
Because of this postulate, instead of (|x1> + |x2>) / sqrt(2) we will use (|x1> + |x2>) / 2, etc. If we now assume the equivalent of equations (2) and (3) from this blog article:
2’. V[(|-x1> + |-x2>) / 2] = -V[(|x1> + |x2>) / 2]
3’. V[(|x1 + k> + |x2 + k>) / 2] = V[(|x1> + |x2>) / 2] + k
we will end up with the equivalent equation for exactly the same reasons made in this article, since the sqrt(2) never comes into the derivation. Let’s move over to Deutsch’s article, and the chapter “The general case”. We first want to prove the equivalent of equation (Deutsch.12):
12’. V[(|x1> + |x2> + … + |xn>) / n] = (x1 + x2 + … + xn) / n
The proof is made by induction in two stages. Now I must admit that I don’t understand the first stage, but it doesn’t sound like that will be a problem for my argument (the sqrt(2) is again not used). For the second stage we can use the same arguments of “substitutability”. Let V[|psi1>] = V[|psi2>] = v; then:
13’. V[(a |psi1> + b |psi2>) / (|a| + |b|)] = v
If we now set
14’. |psi1> = (|x1> + … + |x_{n-1}>) / (n – 1), |psi2> = | V[|psi1>] >, a = n – 1, b = 1
then (13’) implies:
15’. (x1 + x2 + … + x_{n-1} + V[|psi1>]) / n = V[|psi1>]
Note, (15’) is identical to (Deutsch.15). Now to the crucial part of Deutsch’s argument. What we want to show, the equivalent of (Deutsch.16), is:
16’. V[(m |x1> + (n - m) |x2>) / n] = (m x1 + (n – m) x2) / n
and (Deutsch.17), (Deutsch.18), and (Deutsch.20) are:
17’. sum_{a = 1}^m |ya> / m or sum_{a = m + 1}^n |ya> / (n – m)
18’. (sum_{a = 1}^m |x1>|ya> + sum_{a = m + 1}^n |x2> |ya>) / n
20’.
(sum_{a = 1}^m |x1 + ya> + sum_{a = m + 1}^n |x2 + ya>) / n
Again, we’re allowed to do this according to the postulate, because we’re just about to do a measurement, and then we need to scale such that the sum of the absolute values of the probability amplitudes is 1. Equations (Deutsch.19) and (Deutsch.21) are not changed. (Deutsch.22) obviously reads:
22’. sum_a p_a |x_a>, sum_a p_a = 1
The next arguments may pose a problem. They’re supposed to show that even though the above results are valid for p_a being rational numbers, they should also apply if p_a is a real number. For instance the unitary transformation is imagined to transforms eigenvectors into a higher eigenvalued eigenvectors. The value of the game is then guaranteed to increase. Not so with our postulate, since we need to scale our state vector just prior to a measurement, and in general the scale factor would be different before a unitary transformation and after. I’m guessing there is an argument for proving how to extend it to real numbers, but I just don’t see it yet. So for now, we will have to be content with the probability amplitudes being rational numbers. The conclusion of all of this is that the normalization of the state vector is crucial for Deutsch’s derivation. • DanW @Hakon Hallingstad: your reasoning here is, I’m afraid, totally bogus. I’m not trying to be nasty and I’m sure you won’t take it as such since you seem in other posts to be pretty keen on learning properly how to do these things. One particular error I can spot: “For instance the unitary transformation is imagined to transforms eigenvectors into a higher eigenvalued eigenvectors.” In what follows, m* = “hermitian conjugate of m”, not “times by” :-) . Unitary operators have eigenvalues of magnitude 1. To see this, consider that the definition of a unitary operator is that its inverse is equal to its Hermitian conjugate. UU* = 1 by the definition of unitarity.
If U|a> = m |a>, this implies <a| U* = <a| m*, but <a| U* U |a> = <a|a> by the unitarity definition. From above, <a| U* U |a> = m m* <a|a>, hence m m* = 1. This means that the magnitude of m is 1. So you can’t have a “unitary transformation” that makes “the eigenvalues higher”. It is a contradiction in terms. • Hakon Hallingstad > your reasoning here is, I’m afraid, totally bogus. [...] Right, assumption (61.1) does not hold in Quantum Mechanics proper. I’m interested in knowing about other flaws you can point out, and to see whether those flaws can also be applied to Deutsch’s original arguments. > One particular error I can spot. [...] I was too careless with my choice of words, so you misunderstood me. I was only trying to refer to Deutsch’s argument on page 12, for instance he says “Now, if U transforms each eigenstate |xa> of X appearing in the expansion of |psi> to an eigenstate |xa’> with higher eigenvalue.” See there for details.
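[Editorial aside: DanW's point that unitary eigenvalues have unit magnitude is easy to confirm numerically. A small sketch; constructing a random unitary from the QR decomposition of a complex Gaussian matrix is a convenience assumption, not anything from the thread.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary: Q from the QR decomposition of a complex Gaussian matrix.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)

# U U* = 1 ...
assert np.allclose(U @ U.conj().T, np.eye(4))
# ... so every eigenvalue m satisfies m m* = 1, i.e. |m| = 1.
eigvals = np.linalg.eigvals(U)
print(np.abs(eigvals))
```

All eigenvalue magnitudes come out as 1 to machine precision, so no unitary can raise eigenvalues, exactly as DanW argues.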
self adjoint equations and the symplectic metric

The form in which the initial value differential equation for a pair of first order equations is written can be called its standard form,

$\frac{dZ(z)}{dz} = M(z) Z(z),$ (341)

but for Sturm-Liouville applications, it is preferable to write it in canonical form, which is

$\alpha(z) \frac{dZ(z)}{dz} + \{\beta(z) + \frac{1}{2} \frac{d\alpha(z)}{dz}\} Z(z) = \lambda \gamma(z) Z(z).$ (342)

Here $\beta$ takes the place of M from the standard equation, but without any eigenvalue which M might have contained. The factor $\alpha$, only half of whose derivative is written explicitly (the other half being combined with M in $\beta$), allows for the possibility of attaching multipliers to the derivatives, which is sometimes either necessary or convenient. Finally, $\gamma$ positions the eigenvalue where it belongs in the matrix multiplier of Z. If $\alpha$ is invertible, it is easy to interconvert the standard and canonical forms; if it is not, the order of the system of differential equations ought to be reduced. By inspection,

$M(z) = \alpha^{-1}(z)\{\lambda\gamma(z)-\beta(z)-\frac{1}{2}\frac{d\alpha(z)}{dz}\}.$ (343)

So the only outstanding question concerns the motivation for the strange way of writing the canonical equation. There is a further innovation, which consists in introducing the adjoint of the canonical equation,

$-\alpha^T(z) \frac{dW(z)}{dz} + \{\beta^T(z) - \frac{1}{2} \frac{d\alpha^T(z)}{dz}\} W(z) = \lambda \gamma^T(z) W(z).$ (344)

If the combination of signs and transposes seems mystifying, think of the adjoint W as the transpose of the inverse of Z.
More exactly, by direct substitution and invoking the uniqueness of solutions relative to initial conditions,

$W = (\alpha Z)^{-1\ T}.$ (345)

By regarding the left hand side of the canonical equation as the application of an operator ${\cal L}$ to Z, and likewise the left hand side of the adjoint equation as the application of ${\cal M}$ to W, the two can be written in the abbreviated form

${\cal L}(Z) = \lambda\gamma Z$ (346)
${\cal M}(W) = \lambda\gamma^T W.$ (347)

There are conditions under which an operator is just the same as its adjoint; by comparison they are

$\alpha = - \alpha^T$ (348)
$\beta = \beta^T$ (349)
$\gamma = \gamma^T.$ (350)

Once the definitions have all been established, Green's formula, which asserts that

$\int_a^b\{\phi^T{\cal L}(\psi) - {\cal M}(\phi)^T\psi\}\,dz = \phi^T\alpha\psi\vert _b - \phi^T\alpha\psi\vert _a,$ (351)

can be derived. In general terms, it is a consequence of ${\cal L}$ and ${\cal M}$ acting like derivatives applied to a product, which evaluates into the product evaluated at its endpoints. The $\alpha$ sandwiched between the vectors follows from the detailed structure of these differential operators. If the vectors $\psi$ and $\phi$ are eigenvectors,

${\cal L}(\psi) = \lambda \gamma \psi$ (352)
${\cal M}(\phi) = \mu \gamma^T \phi$ (353)

the left hand side simplifies to give the Christoffel-Darboux formula,

$(\lambda - \mu) \int_a^b\phi^T\gamma\psi\, dz = \phi^T \alpha \psi \vert _a^b.$ (354)

Among other things, it justifies eigenfunction expansions with respect to the solutions of differential equations.
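As a concrete sanity check of the orthogonality implied by the Christoffel-Darboux formula, one can discretize the simplest case, the Schrödinger operator $-d^2/dz^2$ on $(0, \pi)$ with Dirichlet conditions (so the boundary term $\phi^T \alpha \psi \vert_a^b$ vanishes), and confirm that eigenvectors belonging to distinct eigenvalues are orthogonal. The finite-difference grid and sizes below are illustrative choices, not part of the text:

```python
import numpy as np

# Discrete analogue of -psi'' = lambda psi on (0, pi) with Dirichlet ends.
# Dirichlet conditions kill the boundary term in Christoffel-Darboux, so
# eigenvectors with distinct eigenvalues must come out orthogonal.
n = 400
h = np.pi / (n + 1)
main = 2.0 / h**2 * np.ones(n)
off = -1.0 / h**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # symmetric tridiagonal

lam, psi = np.linalg.eigh(H)
print(lam[:3])                  # lowest eigenvalues, close to 1, 4, 9 (= k^2)
gram = psi.T @ psi              # Gram matrix of the eigenvectors
print(np.allclose(gram, np.eye(n)))
```

The Gram matrix is the identity to machine precision, and the low eigenvalues approach the exact $k^2$ as the grid is refined.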
The matrix form of these results is more complicated than the version which is usually seen in textbooks, but it has the advantage of broad applicability. For example, for Schrödinger's equation,

$\gamma = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right].$ (355)

Although $\gamma$ is singular, rather than disrupting the results, it makes the norm of functions depend on their values alone, and not at all on their derivatives. Spaces in which functions have this more complicated norm are called Sobolev spaces rather than Banach spaces. For the Dirac equation, $\gamma = {\bf 1}$, and both positive and negative energy components figure in the calculation of norms. The self-adjoint $\alpha$ is antisymmetric - a multiple of ${\bf i}$ - turning the inner product $\phi^T \alpha \psi$ into a determinant; concretely, the Wronskian of the solutions $\psi$ and $\phi$, in the case of the Schrödinger equation. The Christoffel-Darboux formula establishes the orthogonality of solutions belonging to different eigenvalues, but cannot establish a norm because of the zero factor resulting from equal eigenvalues. The norm might be gotten as a confluent case, taking limits as two eigenvalues approach, or the undefined norm might just be left in place in expansion formulas. By using Hermitean conjugates instead of transposes in the relevant formulas, and treating the eigenvalue as a complex variable, the factor turns into $(\lambda^*-\mu)$, which leaves the imaginary part of the eigenvalue when the two eigenfunctions are the same. The real result can be gotten by taking the limit as the imaginary part vanishes.
Supposing that

$f(z) = \sum_{i=0}^\infty c_i \psi_i(z),$ (356)

and setting aside questions of convergence,

$\int_a^b\psi_i^T(z)f(z)\,dz = c_i \int_a^b\psi_i^T(z)\psi_i(z)\,dz,$ (357)

so that

$c_i = \frac{\int_a^b\psi_i^T(\sigma)f(\sigma)d\sigma} {\int_a^b\psi_i^T(\sigma)\psi_i(\sigma)d\sigma}.$ (358)

The Stieltjes integral comes into play when we find that the eigenfunctions are quite numerous and the actual separation between eigenvalues is very small, but nevertheless the eigenvalues are packed irregularly. It is also desirable to separate the part of the coefficient which is due to integrating the function to be represented from the small factor occasioned by the large norm. Therefore, keep the integral of the function as a coefficient

$\xi_i = \int_a^b\psi_i^T(\sigma)f(\sigma)d\sigma,$ (359)

and introduce the distribution function $\rho(\lambda)$, which is a step function vanishing at $-\infty$, with increments at the eigenvalues $\lambda_i$ in the amount of the reciprocals of those denominators, which are actually the squares of the norms of the eigenfunctions. Then

$f(z) = \sum_{i=0}^\infty \xi_i \psi_i(z)(\rho_{i+1}-\rho_i)$ (360)
$= \int_{-\infty}^{\infty}\xi(\lambda)\psi(\lambda,z)d\rho(\lambda).$ (361)

There still remains the selection of an appropriate right hand boundary condition, and the choice of a uniform normalization for all the eigenfunctions.
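The coefficient formula (358) can be exercised numerically on the textbook case of the sine eigenfunctions of $-\psi'' = \lambda\psi$ on $(0, \pi)$ with Dirichlet ends; the test function $f(x) = x(\pi - x)$ and the grid sizes are arbitrary illustrative choices. The sketch checks both that the expansion reproduces f and that the weighted sum of squared coefficients reproduces $(f, f)$:

```python
import numpy as np

# Sine eigenfunctions psi_i(x) = sin(i x) of -psi'' on (0, pi), Dirichlet ends.
n = 4000
x = (np.arange(n) + 0.5) * np.pi / n    # midpoint quadrature grid on (0, pi)
dx = np.pi / n
f = x * (np.pi - x)                     # illustrative function vanishing at the ends

approx, sum_sq = np.zeros_like(f), 0.0
for i in range(1, 80):
    psi = np.sin(i * x)
    norm_sq = np.sum(psi * psi) * dx
    c = np.sum(psi * f) * dx / norm_sq          # c_i as in (358)
    approx += c * psi
    sum_sq += c**2 * norm_sq                    # |c_i|^2 ||psi_i||^2

recon_err = np.max(np.abs(f - approx))          # the expansion reproduces f
energy_gap = abs(np.sum(f * f) * dx - sum_sq)   # the sum reproduces (f, f)
print(recon_err, energy_gap)
```

Both residuals shrink as more terms are included, illustrating why the division by the squared norm in (358) is the right weight.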
Help in making the selections can be had from looking at Parseval's equality,

$\int_a^b\vert f(x)\vert^2 dx = \sum_{i=0}^\infty\vert c_i\vert^2,$ (362)

which is a generalization of the Pythagorean theorem to function space, and which can be written in the more symbolic form

$(f,f) = \sum_{i=0}^\infty\vert(\psi_i,f)\vert^2.$ (363)

By the Christoffel-Darboux formula, and using the complex version of Green's formula rather than the real version,

$\frac{[f,f](b)-[f,f](a)}{\lambda-\lambda^*} = \sum_{i=0}^\infty\frac{\vert[f,\psi_i](b)-[f,\psi_i](a)\vert^2} {\vert\lambda - \lambda_i\vert^2}.$ (364)

Having turned integrals in function space into boundary sums (an interval has just two boundary points, a and b), it would be convenient to eliminate the dependence on b. One mechanism is to look at a solution $f = \phi + m \psi$ which is a combination of the two standard solutions, and to suppose that f satisfies a real boundary condition at b, resulting in [f,f](b)=0. Whatever that boundary condition, it should be used for the $\psi$'s as well; in other words,

$[f,\psi_i](b) = 0.$ (365)

Having removed the influence of the right boundary point by working with real boundary values in the complex domain, there remains the left boundary to assign some standard form. Using real values there too, and recalling the two linearly independent solutions $\psi$ and $\phi$, altogether,

$[\psi, \psi_i](a) = 0$ (366)
$[f,\psi_i](a) = [\phi, \psi_i](a) = r_i.$

This quantity $r_i$, which is the increment in the Stieltjes integral, is the initial amplitude of a real, normalized solution of the differential equation over the finite interval $[a, b]$.
That is another way to get the step in the spectral distribution function, because Parseval's equality now reads

$\frac{m-m^*}{\lambda-\lambda^*} = \sum_{i=0}^\infty\frac{r_i^2}{\vert\lambda-\lambda_i\vert^2}$ (367)
$= \int_{-\infty}^\infty\frac{d\rho(\mu)}{\vert\lambda - \mu\vert^2}$ (368)
$= \int_{-\infty}^\infty\frac{\rho'(\mu)d\mu}{\vert\lambda - \mu\vert^2}.$ (369)

The last line is admissible for points in the continuous spectrum of the differential operator, but the Stieltjes form must be retained for the point spectrum. If $\rho'$ exists, it is called the spectral density. The description of the spectral density calls for some care in its presentation. In works on solid state theory especially, the spectral density is often considered to be “the number of eigenvalues per unit of frequency interval,” which is only a part of the story. This works well for plane waves, or functions which are more or less homogeneous throughout their extent, but a much more important consideration is the relationship between near amplitude and far amplitude. That determines the weight any given function requires to build up a given wave packet, and it is that weight which is properly the spectral density. A good way to appreciate the difference is to return to the Dirac Harmonic Oscillator previously discussed. The spectrum is continuous, and there is no particular reason to think of the number of eigenvalues per unit interval because they are pretty much uniformly distributed. But most of them dissipate the presence of their particle by their large relative amplitude at infinity. Only eigenvalues in small, selected intervals contribute to the presence of a particle near at hand, and it is this emphasis which assigns them a high spectral density. Figure 26: The spectral density function for the Dirac Harmonic Oscillator is the section of this surface bisecting the figure.
Actually, there is a spectral density matrix [5], of which this section only reveals the even, or (0,0), element. Figure 26, although it graphs probability density as a function of both energy and distance, has been normalized to unit amplitude at infinity, so the values over zero distance portray the Titchmarsh-Weyl m-function, or in other words, the spectral density. Whereas it might be overly ambitious to write

$$m(\lambda) = \int_{-\infty}^\infty \frac{\rho'(\mu)\,d\mu}{\lambda - \mu} \qquad (370)$$

and compare it to Cauchy's integral formula, the relation certainly holds for the imaginary part of the equation, and invites considering $\rho'$ as the boundary value of an analytic function which could be extrapolated throughout a half-plane, at least [19].
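The step from (368) to (370) can be exercised on a model where everything is known in closed form. For the operator $-d^2/dx^2$ on the half line with a Dirichlet condition at the origin, the Titchmarsh-Weyl function is $m(\lambda) = i\sqrt{\lambda}$ and the spectral density is $\rho'(\mu) = \sqrt{\mu}/\pi$ for $\mu \ge 0$; that model, the test point, and the tolerances below are illustrative choices of mine, not taken from the text.

```python
import cmath
import math

# Check the identity (368)-(369):  Im m(lambda)/Im lambda = integral rho'(mu)/|lambda-mu|^2 dmu
# for the half-line Dirichlet model, with the assumed closed forms
# m(lambda) = i*sqrt(lambda) and rho'(mu) = sqrt(mu)/pi.
lam = 1.0 + 1.0j
m = 1j * cmath.sqrt(lam)
lhs = m.imag / lam.imag

# Substituting mu = s^2 turns the integral into (2/pi) * integral of s^2/|lambda - s^2|^2 ds;
# integrate by Simpson's rule on [0, S] and add the analytic tail 2/(pi*S).
def integrand(s):
    return (2.0 / math.pi) * s * s / abs(lam - s * s) ** 2

S, n = 1000.0, 200001  # odd node count for Simpson's rule
h = S / (n - 1)
acc = integrand(0.0) + integrand(S)
for i in range(1, n - 1):
    acc += (4 if i % 2 else 2) * integrand(i * h)
rhs = acc * h / 3.0 + 2.0 / (math.pi * S)

print(lhs, rhs)  # the two sides agree closely
```

The right-hand side is precisely a Poisson-kernel average of the boundary data $\rho'$, which is the sense in which (370) invites reading $\rho'$ as the boundary value of a function analytic in a half-plane.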
Physicists in the US have calculated that a new class of three-body bound states should exist for atoms that experience long-range interactions – even though the interactions themselves are too weak to bind pairs of the same atoms. Such states have previously been seen in bosonic atoms affected by short-range interactions, but the team says that this latest phenomenon is very different, particularly because it can also occur for fermions. Although the researchers have not yet seen the new states, these could be revealed in experiments on ultracold atomic gases. The idea that three atoms can form a loosely bound quantum state – even if any two of the atoms on their own cannot bind together – was first predicted by the Russian physicist Vitaly Efimov in the early 1970s. Now known as Efimov three-body bound states, they were first spotted in 2006 in a gas of caesium atoms that was cooled to just 10 nK by a team led by Hanns-Christoph Nägerl of Innsbruck University in Austria. Efimov states only occur for atoms that are bosons; that is, atoms that have integer, rather than half-integer, values of spin.

The long and the short of it

One important feature of Efimov states is that the interactions between the atoms are short ranged – in other words, they are described by an attractive potential that falls off faster than the inverse square of the distance between atoms. If the potential has a longer range, then Efimov's calculations do not apply and Efimov states do not exist. Of course, if these long-range potentials happen to be strong, then there will be an infinite number of three-body bound states – but these are not Efimov states. Until now, however, it has not been clear if three-body bound states exist when the potential is so weak that it does not bind together pairs of atoms.
What Brett Esry and colleagues at Kansas State University have found is that bound states of three atoms should occur when they are attracted to each other by a very weak inverse-square potential. The team came to this conclusion by studying numerical solutions to the three-body Schrödinger equation for three identical bosons.

Esry does it

Esry and colleagues then turned their attention to fermions and obtained a second surprising result. When the spins of all three atoms point in the same direction, three-body bound states occur even when atoms in pairs repel each other. Esry says that it may be possible to see the new states in lab experiments. "Given that they are very weakly bound – much like Efimov states – ultracold gases are the most likely candidates for seeing them," he says. "The most likely scenario for seeing our states is in a mixture of heavy bosonic atoms interacting with light fermionic atoms." In this scenario, the fermions act as force mediators, resulting in an effective attractive inverse-square potential between the bosons. While Esry believes that the effect could reveal itself in a gas of bosonic caesium and fermionic lithium, more ideal candidate systems would have a larger mass ratio between the boson and fermion. Possible systems include ytterbium–hydrogen or erbium–hydrogen, although Esry points out that hydrogen is particularly difficult to work with, so lithium might be a better choice of fermion. Nägerl says he "would not have expected" this result, adding that it is "surprising how rich the [inverse square] case is". However, Nägerl believes that it may be difficult to persuade experimentalists to try to confirm the theoretical result because of the challenges associated with tooling up their labs to create and study an appropriate boson–fermion combination. The work is described in Physical Review Letters.
Recently there have been some interesting questions on standard QM and especially on uncertainty principle and I enjoyed reviewing these basic concepts. And I came to realize I have an interesting question of my own. I guess the answer should be known but I wasn't able to resolve the problem myself so I hope it's not entirely trivial. So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$ $$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$$ (here argument $(t)$ denotes expectation in evolved state $\psi(t)$, or equivalently for operator in Heisenberg picture). I tried to get general bounds from Schrodinger equation and decomposition into energy eigenstates, etc. but I don't see any way of proving this. I know this statement is true for a free Gaussian wave packet. In this case we obtain equality, in fact (because the packet stays Gaussian and because it minimizes HUP). I believe this is in fact the best we can get and for other distributions we would obtain strict inequality. So, to summarize the questions 1. Is the statement true? 2. If so, how does one prove it? And is there an intuitive way to see it is true? Why do you think it would apply? You can't really make a measurement that way (either you measure at $t=0$ or at $t=T$, but never both), so you basically have two different $\psi$ solutions. Both will obey the principle independently. Am I misunderstanding your question? –  Sklivvz Mar 19 '11 at 16:06 If your wavepacket, to begin with, saturates the uncertainty bound (i.e. is a coherent state) then this is trivially true - coherent states stay coherent under time-evolution.
If your initial state is not a coherent state then the evolution is clearly more involved, but in that case you could expand your arbitrary initial state in the coherent state basis - so that this inequality (as established for coherent states) could still be used, component by component to show that it remains true for the arbitrary state. Or perhaps not. Chug and plug, baby, chug and plug. –  user346 Mar 19 '11 at 16:08 I don’t think the statement is true. Put the minimum uncertainty wave packet at t=0. What was the uncertainty before, at t<0? it was larger so it has been decreasing before t=0. More generally, you cannot derive time asymmetric statements from time symmetric laws. –  user566 Mar 19 '11 at 16:39 @Moshe: there are loopholes in your argument: there might be no minimum for a given system (just infimum) and if there is minimum, it might be preserved in evolution (as for free Gaussian). Still, nice idea and I'll try to use it to find a counterexample in some simple system. As for the second statement: right, so I am sure you'll tell me that we can't obtain second law too... just kiddin', I don't want to get into this discussion that made Boltzmann commit suicide :) –  Marek Mar 19 '11 at 16:47 @Marek, in any example you can solve the Schrodinger equation, you'll find that the quantity you are interested in grows away from t=0, both towards the past and towards the future, this is guaranteed by symmetry. As for the general statement, it is also true for the second law. You cannot derive time asymmetric conclusions from time symmetric laws without extra input, this is just basic logic, nothing to do with physics. The whole discussion is what is that extra input and where does it come in. 
–  user566 Mar 19 '11 at 16:57

5 Answers

The question asks about the time dependence of the function $$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle \langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$ $$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad \Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad \langle\psi(t)|\psi(t)\rangle=1.$$ We will here use the Schroedinger picture where operators are constant in time, while the kets and bras are evolving. Edit: Spurred by remarks of Moshe R. and Ted Bunn let us add that (under assumption (1) below) the Schroedinger equation itself is invariant under the time reversal operator $\hat{T}$, which is a conjugated linear operator, so that $$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$ Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that $$[\hat{T},\hat{H}]=0.\qquad (1)$$ Moreover, if $$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$ is a solution to the Schroedinger equation in a certain basis $|n\rangle$, then $$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$ will also be a solution to the Schroedinger equation with a time reflected function $f(-t)$. Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$. Consider a simple harmonic oscillator Hamiltonian with the zero point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience. $$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2 -\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$ where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator.
Let us set the constants $m=\hbar=\omega=1$ for simplicity. Then the annihilation and creation operators are $$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad \hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad [\hat{a},\hat{a}^{\dagger}]=1,$$ or conversely, $$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad \hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad [\hat{x},\hat{p}]=i,$$ $$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad \hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$ Consider Fock space $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$ such that $\hat{a}|0\rangle = 0$. Consider the initial state $$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad \langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$ $$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$ $$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t} = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$ $$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad \langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$ $$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad \langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$ because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore, $$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$ which is non-constant in time, and we are done. Or alternatively, we can complete the counter-example without the use of the above time reversal argument by simply performing an appropriate time translation $t\to t-t_0$. I was thinking of trying to work out some harmonic oscillator example myself (because I have a few further questions and it seems like the simplest system where something nontrivial is happening) but you've beat me to it. Thanks!
–  Marek Mar 20 '11 at 18:57 Although there is one thing that bugs me. I believe the calculation is essentially right, however we have $f(0) = 1/4$ which means it minimizes HUP (unless I am misunderstanding your conventions) and therefore $\psi(0)$ would have to be Gaussian -- a contradiction with your initial state. Is there a little mistake in calculation somewhere or do I have a flaw in my argument? –  Marek Mar 20 '11 at 19:02 Okay, I fixed it (I hope) :) –  Marek Mar 20 '11 at 19:20 Dear @Marek: I agree, there were powers of $2$ missing in three formulas. –  Qmechanic Mar 20 '11 at 19:32 One thing that's worth noting: you say that the Schrodinger equation is not invariant under time reversal. It's true that simply substituting $t\to -t$ is not invariant, but simultaneously changing $t\to -t$ and complex conjugating $\psi\to\psi^*$ does leave the equation invariant. That means that, for every solution $\psi(t)$, there is a corresponding solution $\psi^*(-t)$ that "looks like" the same state going backwards in time (and in particular has the same expectation values for all operators). That's what people mean when they say that the Schrodinger equation has time-reversal symmetry. –  Ted Bunn Mar 21 '11 at 13:02

The Schrodinger equation is time-symmetric. The answer is therefore No. From all of the comments, I feel like I must be oversimplifying or missing something, but I can't see what. I'm with you, but it is probably useful for Marek to see for himself how this works in the simple example to be convinced of the general statement. –  user566 Mar 19 '11 at 17:19 Yes, this seems like a good argument to settle the original question. But it brings in further questions :) In particular, Moshe's solution (minimum growing towards both future and past) is a kind of bounce. But on both sides of that bounce I suppose the inequality would be satisfied.
In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". Or to put it more clearly: I should've asked a more general question of what the uncertainty as a function of time looks like... We now know it need not be monotone but perhaps it has other nice properties. –  Marek Mar 19 '11 at 18:07 I can't make heads or tails of this sentence: In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". I don't know if anything interesting in general can be said about the time evolution of $\Delta x\,\Delta p$, other than of course that it's bounded below. –  Ted Bunn Mar 19 '11 at 18:09 @Ted: ah, that was indeed not very clear. The best rephrasing is probably this: whether there exists time $t_0$ such that the inequality holds for all times $t \geq t_0$. But it is a different question. –  Marek Mar 19 '11 at 20:15 I think that @Marek and I are in complete agreement. Just to be explicit, let me answer @Carl's question about how we know $\Delta p$ is constant. Marek is right: For a free particle, $p^n$ commutes with the Hamiltonian, so all expectation values $\langle p^n\rangle$ are constant. So $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2$ is constant. (Indeed, the entire probability distribution for $p$ is constant in time.) As a result, a Gaussian wave packet for a free particle does not remain minimum-uncertainty for all time. It spreads in real space while remaining the same in momentum space. –  Ted Bunn Mar 20 '11 at 14:05

No. Here's a simple example where it shrinks: You have a particle that has a 50% chance of being on the left going right, and a 50% chance of being on the right going left. This has a macroscopic error in both position and momentum. If you wait until it passes half way, it has a 100% chance of being in the middle. This has a microscopic error in position. There will also only be a microscopic change in momentum.
(I'm not entirely sure of this as the possibilities hit each other, but if you just look right before that, or make them miss a little, it still works.) As such, the error in position decreased significantly, but the error in momentum stayed about the same.

Think in terms of Harmonic Functions and their Maximum Principle (or Mean Value Theorem). For simplicity (and, in fact, without loss of generality), let's just think in terms of a free particle, ie, $V(x,y,z) = 0$. When the Potential vanishes, the Schrödinger equation is nothing but a Laplace one (or Poisson equation, if you want to put a source term). And, in this case, you can apply the Mean Value Theorem (or the Maximum Principle) and get a result pertaining your question: in this situation you saturate the equality. Now, if you have a Potential, you can think in terms of a Laplace-Beltrami operator: all you need to do is 'absorb' the Potential in the Kinetic term via a Jacobi Metric: $\tilde{\mathrm{g}} = 2\, (E - V)\, \mathrm{g}$. (Note this is just a conformal transformation of the original metric in your problem.) And, once this is done, you can just turn the same crank we did above, ie, we reduced the problem to the same one as above. ;-) I hope this helps a bit. I am sorry but I don't see how this is related to uncertainty and time evolution. Could you explain that? –  Marek Mar 19 '11 at 20:51 @Marek: the point was made explicit by Qmechanic, in his answer above. If you apply what i said in the Schrödinger picture, you get evolving states whose magnitude is always bound by the Mean Value Theorem. (If we were talking about bounded operators, this could be made rigorous with a bit of Functional Analysis.) –  Daniel Mar 20 '11 at 19:32

A physical way of seeing this is that the phase space volume of a system is preserved.
Hamiltonian mechanics preserves the volume of a system on its energy surface H = E, which in quantum mechanics corresponds to the Schrodinger equation. The phase space volume on the energy surface of phase space is composed of units of volume $\hbar^{2n}$ for the momentum and position variables plus the $\hbar$ of the energy $i\hbar\partial\psi/\partial t~=~H\psi$. This is then preserved. Any growth in the uncertainty $\Delta p\Delta q~=~\hbar/2$ would then imply the growth in the phase space volume of the system. This would then mean there is some dissipative process, or the quantum dynamics is replaced by some master equation with a thermal or environmental loss of some form. For a pure unitary evolution however the phase space volume of the system, or equivalently the $Tr\rho$ and $Tr\rho^2$ are constant. This means the uncertainty relationship is a Fourier transform between complementary observables which preserve an area $\propto~\hbar$. -1, this is completely irrelevant to my question. I am interested just in pure states and for those phase volume is always zero and so trivially conserved. But this doesn't give any information on the behavior of uncertainty. –  Marek Mar 21 '11 at 13:20 The volume a system occupies in phase space defines entropy as $S~=~k~log(\Omega)$ for $\Omega$. The von Neumann entropy $$ S~=~-k~Tr~\rho log(\rho). $$ A mixed state has each element of $\rho~=~1/n$ and the trace is $\sum(1/n)log(1/n)$ $~=~log(n)$. A pure state then occupies a phase space region that is normalized to unit volume --- not zero. –  Lawrence B. Crowell Mar 21 '11 at 14:45
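The non-constant product $f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t)$ derived in the accepted answer above is easy to confirm by brute force in a truncated Fock space. The sketch below uses the answer's units $m=\hbar=\omega=1$; the truncation size and sample times are arbitrary choices.

```python
import cmath
import math

D = 8  # Fock-space truncation; the state only involves |0> and |2>, so this is ample

# Ladder operator a|n> = sqrt(n)|n-1> as a dense D x D matrix.
a = [[0j] * D for _ in range(D)]
for n in range(1, D):
    a[n - 1][n] = math.sqrt(n)
adag = [[a[j][i].conjugate() for j in range(D)] for i in range(D)]

def combine(A, B, ca, cb):
    return [[ca * A[i][j] + cb * B[i][j] for j in range(D)] for i in range(D)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(D)) for j in range(D)] for i in range(D)]

s = 1 / math.sqrt(2)
x = combine(adag, a, s, s)             # x = (a† + a)/√2
p = combine(adag, a, 1j * s, -1j * s)  # p = i(a† - a)/√2
x2, p2 = matmul(x, x), matmul(p, p)

def expect(M, v):
    return sum(v[i].conjugate() * M[i][j] * v[j]
               for i in range(D) for j in range(D)).real

def f(t):
    # |psi(t)> = (|0> + e^{-2it}|2>)/√2, since H = N gives |n> the phase e^{-int}.
    v = [0j] * D
    v[0], v[2] = s, s * cmath.exp(-2j * t)
    # <x> = <p> = 0 for this state, so the variances are just <x²> and <p²>.
    return expect(x2, v) * expect(p2, v)

for t in (0.0, math.pi / 8, math.pi / 4, 1.0):
    print(round(f(t), 6), round(9 / 4 - 0.5 * math.cos(2 * t) ** 2, 6))
```

The product dips to 7/4 at t = 0 (and every half period) and reaches 9/4 in between — never violating the Heisenberg bound of 1/4, but not monotone in either time direction.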
Visual Quantum Mechanics

Solution of the free Dirac equation (Zitterbewegung)

The Dirac equation is the fundamental equation of relativistic quantum mechanics. While it improves on certain aspects of the Schrödinger equation, it has solutions that are rather difficult to interpret. This is a solution of the free Dirac equation in one space dimension. This is an equation for two-component wave functions. The plot shows both components of this "spinor-valued" wave function. The gray curve in the background gives the absolute value of the spinor (its square is often tentatively associated with a position probability density). The initial function is a Gaussian function in both spinor components. The time evolution exhibits the phenomenon called "Zitterbewegung": the expectation value of the position oscillates around a mean value. Note that despite the symmetry of the initial condition, the mean position drifts slowly to the right. The reason for the Zitterbewegung is that the wave packet shown here is a superposition of positive- and negative-energy states. We refer to "Advanced Visual Quantum Mechanics" for a more detailed explanation.
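The behaviour described above can be reproduced with a few lines of momentum-space propagation, since each Fourier mode of the free Dirac equation evolves with a 2×2 matrix exponential. The 1D representation $H = \sigma_x p + \sigma_z m$ (with $\hbar = c = 1$) and all grid and packet parameters below are my own choices, not those of the applet.

```python
import numpy as np

# Free 1D Dirac equation for a two-component spinor, H = sigma_x p + sigma_z m
# (hbar = c = 1).  Each momentum mode evolves exactly with
#   exp(-i H(k) t) = cos(E t) I - i [sin(E t)/E] H(k),   E = sqrt(k^2 + m^2).
m = 1.0
N, L = 1024, 80.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
E = np.sqrt(k ** 2 + m ** 2)

g = np.exp(-x ** 2 / (2 * 2.0 ** 2))      # same Gaussian in both components
g = g / np.sqrt(2 * np.sum(g ** 2) * dx)  # normalize total probability to 1
u0 = l0 = np.fft.fft(g)

def mean_x(t):
    c, s = np.cos(E * t), np.sin(E * t) / E
    u = c * u0 - 1j * s * (m * u0 + k * l0)  # upper component in k-space
    l = c * l0 - 1j * s * (k * u0 - m * l0)  # lower component in k-space
    dens = np.abs(np.fft.ifft(u)) ** 2 + np.abs(np.fft.ifft(l)) ** 2
    return float(np.sum(x * dens) * dx)

print([round(mean_x(t), 3) for t in np.linspace(0.0, 6.0, 13)])
```

With equal spinor components the packet starts off with velocity $\langle\sigma_x\rangle = 1$; $\langle x\rangle$ then oscillates at roughly the frequency $2m$ with a small rightward drift superposed — the signature of the superposition of positive- and negative-energy states mentioned above.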
Three-pair final-state interaction in the pp → ppη reaction close to threshold

(Talk given at the COSY-11 meeting, Cracow, 1-3 June, 2004)

A. Deloff
Institute for Nuclear Studies, Hoża 69, 00-681 Warsaw, Poland

Abstract: We present a three-body formalism describing the final-state interaction effects in the pp → ppη reaction close to threshold. We derive a three-body enhancement factor devised in such a way that all three pair-wise interactions are regarded on an equal footing. The enhancement factor is obtained by expanding the three-particle wave function in hyperspherical harmonics. It is shown that close to threshold the pp interaction strongly dominates, whereas the pη interaction gives an almost negligible contribution to the calculated effective mass spectra. Within the presented three-body approach it has been possible to reproduce the effective mass distributions at the excitation energy Q=15.5 MeV in good accord with the data.

1 Introduction

The last decade has seen major advances in the experimental investigation of near-threshold meson production reactions in nucleon-nucleon collisions (for a review cf. [1] and [2]). In recent measurements of the pp → ppη reaction a very accurate determination of the four-momenta of both outgoing protons allowed for the full reconstruction of the kinematics of the final state. In consequence, these measurements provided, in addition to the η and proton angular distributions, also the pp and pη effective mass distributions [3, 4]. The common feature of near-threshold meson production in proton-proton collisions is the dominance of the very strong proton-proton final-state interaction (FSI). This effect is clearly visible in the invariant mass distributions: as a prominent peak close to threshold in the (pp)-mass distribution, or as a bump near the end-point in the (pη)-mass spectrum.
For sufficiently low excitation energies the s-wave transition amplitude necessarily becomes the sole contributor to the cross section, as it is the only amplitude surviving at threshold. The supposition that this happens at the lowest available excitation energy, Q=15.5 MeV [4], appears quite plausible, especially as in a similar experiment [3] at Q=15 MeV the measured angular distributions were consistent with isotropy. The description in terms of a simple model in which a constant production amplitude is multiplied by the pp FSI enhancement factor, although qualitatively correct, is not fully satisfactory in quantitative terms. The calculated invariant mass distributions are presented in Fig. 1 (dashed curves). In order to improve the agreement with experiment, two possibilities have been considered. The most obvious explanation admits a contribution from the two p-wave amplitudes. These amplitudes have the best chance to show up when the relative momenta of the final-state protons take the largest values allowed by the phase space. Since at Q=15.5 MeV this sector still overlaps with the peak region of the enhancement factor, the s-wave also receives maximal amplification there. Therefore, for the p-wave amplitudes to be discernible their relative strength has to be quite substantial, which in general should be reflected in a pronounced angular dependence of the cross section. This difficulty has been thoroughly examined by Nakayama et al. [5], who pointed out that the unwanted angular dependence might still be suppressed under two circumstances: (i) if one of the p-wave amplitudes were negligibly small, so that the angular-dependent term was absent, or (ii) if the lack of angular dependence resulted from cancellations – from a destructive interference between the two p-wave amplitudes. Thus, a model based upon a strong p-wave additionally needs somewhat fortuitous coincidences.
According to the second proposition, presented in [6], the s-wave amplitude dominates because it is proportional to the huge FSI enhancement factor whilst the p-wave amplitudes are not. Therefore, the latter might be neglected, explaining in a natural way the lack of angular dependence. Instead, a weak energy dependence is admitted in the production amplitude. Both models are capable of improving the agreement with experiment at the expense of introducing an adjustable parameter. The best fit from [6] is depicted in Fig. 1 by the full line.

Figure 1: Invariant mass distributions at Q=15.5 MeV; left panel: (pp)-mass plot; right panel: (pη)-mass plot. The data are from [4]; the calculated curves are from [6].

Since the agreement with experiment presented in [5] is of similar quality, polarization experiments are required to discriminate between these two models. The prediction of the s-wave model [6] is that all polarization observables are bound to vanish. Any non-zero value of the analyzing power reveals the presence of higher partial waves [5]. In the above considerations the final-state pη interaction has been ignored, and an interesting question arises as to how much its inclusion can change the resulting invariant mass spectra. The pη interaction is poorly known, but there have been suggestions that the corresponding scattering length might be as large as about 1 fm. This is still one order of magnitude smaller than the pp scattering length, but the discrepancies in the invariant mass distribution in Fig. 1 are not large either. The purpose of this work is to shed some light on the possible role of the pη final-state interaction, but to be able to do that we need a formalism in which all pair-wise interactions are regarded on an equal footing. The plan of our presentation is as follows. In the next Section we start with the well-known two-body case, recalling the arguments leading to the derivation of the FSI enhancement factor.
In Section 3 we generalize these ideas to the three-body case by utilizing the hyperspherical harmonics approach. Finally, in Section 4 we verify our model by confronting the obtained results with the experimental data.

2 Two-body final-state interaction

Since the proton-proton final-state interaction is believed to be the dominant ingredient in the description of the pp → ppη reaction close to threshold, it is logical to begin with the two-body FSI problem. The basic idea of how to account for final-state interaction was put forward 50 years ago by Fermi, Watson, Migdal [7] and others (for a review cf. [8]) and is based on the observation that in many processes the interaction responsible for carrying the system from the initial to the final state is of such short range that in a first approximation it may be regarded as point-like. As a prototype one may consider a meson (x) production reaction NN → NNx. To generate the meson mass m in a nucleon-nucleon collision a large momentum transfer is required between the initial and the final nucleons, typically of the order of $\sqrt{mM}$, with M being the nucleon mass. The corresponding "range" of the production interaction is therefore much shorter than the range of the interaction between the two final-state nucleons. It is perfectly true that the final-state NN interaction significantly distorts the NN wave function, but in the transition matrix element the contribution from all but the smallest NN separations will be strongly suppressed, and the main effect may be attributed to the change of the normalization of the wave function at zero separation. If the non-interacting NN pair is described by a plane wave $e^{i\mathbf{k}\cdot\mathbf{r}}$, where $\mathbf{k}$ is the relative NN momentum ($\hbar=c=1$ units are used hereafter), then to account for final-state interaction the latter must be replaced in the transition matrix element by the complete NN wave function satisfying an outgoing spherical-wave boundary condition at infinity.
Nevertheless, for a point-like interaction we may set $\mathbf{r}=0$ in the matrix element, so that the final-state interaction is accounted for by multiplying the transition matrix element by the enhancement factor defined in (2). The factor appearing in the cross section represents the ratio of two probabilities: one of finding the interacting NN pair at zero separation, the other being the corresponding probability for non-interacting particles. By construction, when the final-state interaction is turned off, the enhancement factor equals unity. Expanding both the numerator and the denominator on the right-hand side of (2) in partial waves gives (3), where $j_\ell$ is a spherical Bessel function, behaving like $(kr)^\ell$ for small argument, and $P_\ell$ denotes a Legendre polynomial. Clearly, in the limit $r \to 0$ in (3), all higher partial waves are suppressed by the centrifugal barrier, and only the contribution from the s-wave survives. Thus we obtain the simple formula (4), in which the prime denotes the derivative with respect to $r$; as is apparent from (4), the enhancement factor is determined by the slope of the wave function at the origin. To find this slope we must know the NN s-wave interaction, and for simplicity we shall assume in the following that the latter takes the form of a spherically symmetric radial potential. The shape of this potential may be arbitrary, but it must be of short range. Given the NN potential, we can integrate outward the appropriate wave equation, containing both the nuclear and the Coulomb potential, generating numerically a regular solution (i.e. one vanishing at the origin) whose derivative at the origin is for later convenience selected as in (5), where $\eta$ denotes the Sommerfeld parameter and $C(\eta)$ is the Coulomb barrier penetration factor. The sought-for physical solution occurring in (4), which is also regular, is necessarily proportional to this numerical solution, as written explicitly in (6). Now all we need to calculate is the asymptotic expression for the physical wave function.
For $r \ge R$, with R much bigger than the range of the nuclear potential, the physical wave function takes the asymptotic form (7), where $F_0$ and $G_0$ are the standard Coulomb wave functions defined in [9], and the s-wave scattering amplitude is expressed through the s-wave Coulomb-distorted phase shift. Differentiation of (7) with respect to R provides us with a second matching condition for the derivatives, but it should be noted that the regular numerical solution and its derivative occurring in these two matching conditions are to be regarded as known quantities. Indeed, they are fully specified by the boundary condition (5) at the origin and can either be calculated analytically or obtained by numerical methods. Therefore, we end up with two algebraic equations in which the two unknowns are the enhancement factor and the scattering amplitude; the respective solutions (8) and (9) can be conveniently written in terms of Wronskians $W[f,g] = f\,g' - f'\,g$. The specific Wronskian occurring in the denominators of (8)-(9) has been referred to as the Jost function [8]. Thus, given the potential, the two-body enhancement factor can be obtained from (8).

3 Three-body final-state interaction

We wish now to extend the ideas outlined above to the three-body case. We assume that the pair-wise short-range interaction between the particles is strong, but that the system is Borromean, i.e. no binary bound state exists in the three-body system under consideration. A formal theory of the continuum in a Borromean system was developed in [10] using the same hyperspherical harmonics method which has been widely employed for the investigation of bound states, and specifically of halo nuclei [11]. In this paper we follow this approach to derive the enhancement factor for three interacting particles.
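Neglecting the Coulomb interaction, the two-body recipe of Section 2 — integrate the regular solution outward, match it to its asymptotic form, and read the enhancement factor off the asymptotic amplitude — can be sketched numerically. The square-well shape, its parameters, and the units $2m_{red} = \hbar = 1$ are illustrative choices of mine, not taken from the paper.

```python
import math

# s-wave radial equation u''(r) = (V(r) - k^2) u(r) in units 2*m_red = hbar = 1,
# integrated outward from the regular boundary condition u(0) = 0, u'(0) = 1.
# Beyond the well, u = (A/k) sin(kr + delta), hence A^2 = u'(R)^2 + k^2 u(R)^2.
# The physical solution with unit asymptotic amplitude has slope 1/A at the
# origin, while the free solution sin(kr)/k has slope 1, so F = 1/A^2.

def enhancement(k, V0, R=1.0, h=1e-4):
    def deriv(r, u, v):
        V = -V0 if r < R else 0.0
        return v, (V - k * k) * u
    r, u, v = 0.0, 0.0, 1.0
    for _ in range(int(round(R / h))):   # classical fourth-order Runge-Kutta
        k1u, k1v = deriv(r, u, v)
        k2u, k2v = deriv(r + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = deriv(r + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = deriv(r + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        r += h
    return 1.0 / (v * v + k * k * u * u)

# No potential: F = 1 identically.  A well just short of binding (depth 2.3,
# binding threshold pi^2/4 ~ 2.47 for unit range) gives the near-threshold peak.
for k in (0.1, 0.3, 1.0):
    print(k, enhancement(k, 0.0), enhancement(k, 2.3))
```

As in the text, the enhancement is a property of the asymptotic normalization alone: the combination $u'^2 + k^2 u^2$ evaluated beyond the potential plays the role of the squared Jost function in the denominator of (8).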
The wave functions in the continuum are solutions to the three-body problem satisfying the correct boundary conditions at infinity, where the three-body asymptotics is most naturally expressed in terms of the rotationally and permutationally invariant hyperradius $\rho$, defined as the square root of the sum of squares of the inter-particle distances. Since the hyperradius reflects the size of the three-body system, similarly to the two-body case the enhancement factor may be obtained as the limiting value of the three-body wave function at small $\rho$. The method of hyperspherical harmonics is well documented in the literature, but to make this paper self-contained we briefly summarize the theoretical framework necessary to treat the continuum of a Borromean system.

3.1 Coordinate sets and hyperspherical harmonics

For assigned particle positions $\mathbf{r}_i$ and masses $m_i$, the translationally invariant normalized sets of Jacobi coordinates $(\mathbf{x}_i, \mathbf{y}_i)$ are defined in the standard way, where $(ijk)$ is a permutation of the particle labels $(123)$ and $m$ denotes an arbitrary mass which just sets the mass scale. Each of the three equivalent pairs, together with the center-of-mass coordinate, describes the system. The transformation between different sets of Jacobi coordinates is referred to as a kinematical rotation; the rotation angle is determined by the masses, with a sign fixed by the sign of the permutation. The Jacobi momenta, canonically conjugate to $\mathbf{x}_i$ and $\mathbf{y}_i$, respectively, are defined by analogous relations in terms of the laboratory-frame momenta. Instead of the Jacobi coordinates we shall use hyperspherical coordinates, comprising the hyperradius $\rho$ and five angles. The hyperradius determines the size of a three-body system and is invariant with respect to kinematical rotations. The five angular variables forming a five-dimensional solid angle include the four usual angles specifying the unit vectors $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$, supplemented by a hyperangle relating the magnitudes x and y to $\rho$. The associated five-dimensional volume element is given in (16).
The five-dimensional volume element follows accordingly. For the conjugate momenta we proceed in a similar fashion, introducing the hypermomentum and the associated hyperangle, where owing to energy conservation the hypermomentum is fixed by the total energy. In the six-dimensional space the kinetic energy operator takes a separable form, where the hypermomentum operator is the generator of rotations in the six-dimensional space. The angular momentum operators occurring in (19) are associated with the two Jacobi vectors and have the usual eigenvalues. The operator (19) has eigenvalues labelled by an integer quantum number. This quantum number has been dubbed hypermomentum; its value is the same in all three Jacobi systems, and the corresponding eigenfunctions are known as hyperspherical harmonics (abbreviated HH hereafter). From now on we choose set 3 as the basic one and, to simplify notation, in the following we will suppress the label referring to our particular choice of the Jacobi system. The HH have an explicit form involving the total angular momentum resulting from the vector addition of the two orbital momenta. In (21) all five angular variables are denoted collectively, with the usual spherical harmonics entering for each Jacobi unit vector. The square bracket in (21) indicates vector coupling of the two orbital momenta producing a total angular momentum, together with the magnetic quantum number associated with the latter operator. In (21), the hyperangular eigenfunctions are expressed through the Jacobi polynomials (cf. [9]) together with a normalization constant. The HH in (21) are orthonormalized using the volume element (16), with a collective index standing for the full set of quantum numbers. For clarity, and to avoid unnecessary complications, we are going to restrict our considerations to the simplest case where the particles have no internal degrees of freedom. When the particles are moving freely, the spatial part of the three-particle wave function will be described in the c.m.
frame by a plane wave, whose HH expansion can be written in terms of Bessel functions of integer order, with the solid angle comprising the five angular variables that specify the directions of the incident momenta in the six-dimensional space. It is worth noting that since the radial part in (22) depends solely upon the hyperradius, the plane wave is invariant under six-dimensional rotations. The two-body interactions break this invariance, and for interacting particles the three-particle wave function in the continuum will have a more complicated HH expansion. For the simplest case of pair-wise central potentials depending upon the particle separations, the expansion (23) involves a summation over the channel indices subject to a normalization condition. In the expansion (23), only the total three-particle angular momentum L is a good quantum number, and the as yet unspecified hyperradial part must be determined from the underlying three-body dynamics. The dependence upon the tilded indices, associated with the incident momenta, enters solely via the asymptotic boundary condition at large hyperradius. The full wave function is a solution to the three-body Schrödinger equation, where the potential is the sum of the pair-wise interactions between the particles. Inserting (23) in (25) and projecting onto the hyperangular part of the wave function results in an infinite set of coupled systems of differential equations enumerated by the conserved total angular momentum – the only quantum number that does not mix. For an assigned total angular momentum, we are left with a multi-channel situation where each channel is specified by three quantum numbers. Each system of equations (26) is infinite because there is no upper limit on the hypermomentum and, therefore, for practical reasons it must be truncated at some finite value, so that the orbital momenta are thereby restricted to vary within finite limits. The truncation value determines the order of the approximation and must be large enough to ensure the convergence of the method.
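The orthogonality of the hyperangular eigenfunctions built from Jacobi polynomials, with the weight sin²α cos²α implied by the volume element (16), can be spot-checked numerically. In the sketch below lx = ly = 0, for which K = 2n; the normalization constants are left out, so only orthogonality (not unit norm) is verified.

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import quad

def phi(n, alpha, lx=0, ly=0):
    """Unnormalized hyperangular function for hypermomentum K = 2n + lx + ly."""
    return (np.cos(alpha)**lx * np.sin(alpha)**ly
            * eval_jacobi(n, ly + 0.5, lx + 0.5, np.cos(2*alpha)))

def overlap(n1, n2):
    """Overlap of two hyperangular functions with the weight sin^2 cos^2
    coming from the five-dimensional volume element."""
    integrand = lambda a: (phi(n1, a) * phi(n2, a)
                           * np.sin(a)**2 * np.cos(a)**2)
    val, _ = quad(integrand, 0.0, np.pi/2)
    return val

print(overlap(0, 2))    # different K (0 vs 4): vanishes
print(overlap(2, 2))    # same K: positive norm
```

Substituting t = cos 2α maps this weight onto the standard Jacobi-polynomial weight, which is why functions with different hypermomentum are orthogonal.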
It should be noted that in (26) the potential term has been sandwiched between the HH functions and, after integration over the five angular variables, these matrix elements depend solely upon the hyperradius. With the set 3 of HH adopted here, the computation of the potential matrix element is relatively straightforward because the separation vector between particles 1 and 2 is proportional to one of the Jacobi vectors, and the calculation of the potential matrix involves a single integration. By contrast, for the two remaining potentials the corresponding separations are linear combinations of the two Jacobi vectors, which unavoidably leads to five-dimensional integrations in the calculation of the potential matrix. Fortunately, there is a very efficient procedure to overcome this difficulty, based on the observation that all HH with fixed quantum numbers defined with respect to one Jacobi set (cf. (10)) are linear combinations of HH belonging to another set, and this transformation is effected by means of the so-called Raynal-Revai (RR) coefficients [12], viz. (28), where the RR coefficients are functions of the angle specifying the kinematic rotation given in (12). With the aid of (28), the potential matrix may be computed in the basis where this task is simple and subsequently transformed to the basic set. The radial wave functions must be regular at the origin, and this is the boundary condition imposed on the solutions of (26). For large hyperradius the potential term in (26) goes to zero, so that in this limit the system of equations (26) becomes decoupled. In the absence of the potential term, the asymptotic solutions of the radial equations are well known; they are given in terms of the Bessel and Neumann functions [9]. With the Coulomb interaction present, the strong interaction potentials become negligible at large hyperradius, and we adopt the asymptotic form (30), where we have extended the standard definition of Coulomb wave functions so that the orbital quantum number, which ordinarily takes on integer values, is replaced by a half-integer index.
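The extension of the Coulomb wave functions to half-integer order λ = K + 3/2 is well defined; for vanishing Sommerfeld parameter they reduce to Bessel functions, F_λ(0, x) = √(πx/2) J_{λ+1/2}(x), so that the neutral case reproduces the J_{K+2} functions of the plane-wave expansion. A quick numerical check, using mpmath (assumed available), for K = 2:

```python
import mpmath as mp

K = 2
lam = K + mp.mpf(3)/2                  # half-integer "orbital" index lambda
x = mp.mpf('2.7')                      # arbitrary illustrative argument

F = mp.coulombf(lam, 0, x)             # Coulomb F with Sommerfeld parameter 0
bessel = mp.sqrt(mp.pi*x/2) * mp.besselj(lam + mp.mpf(1)/2, x)
print(F, bessel)                       # the two expressions agree
```

This identity follows from the Kummer-function representation of F_λ and holds for arbitrary (not only integer) λ, which is what justifies the half-integer extension used in (30).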
The T-matrix occurring in (30) describes scattering processes and is similar to that encountered in the two-body multichannel problem. In particular, it can be diagonalized, which allows one to determine the appropriate eigenphases whose rapid variation with energy indicates a resonance. 3.2 Three-body final-state interaction enhancement factor The formalism developed in the preceding subsection will now be applied to calculate the enhancement factor describing the final-state interaction between three particles in a Borromean system. The approximation scheme will be based on the assumption that all interactions are of short range and therefore the most important effect comes from the distortion of the wave function at very small separations. The region of larger separations is strongly suppressed by the small size of the interaction volume and gives a relatively small contribution to the overlap integrals representing reaction transition amplitudes. Thus, similarly to the two-body case, we shall define the enhancement factor as the limit reached by the square of the absolute value of the ratio – the full three-body wave function divided by the plane wave – when the size of the system, represented by the hyperradius, goes to zero. It is apparent from (26) that the role of the centrifugal barrier is now played by the hypermomentum quantum number, and the behaviour of the leading term in the wave function in the small-hyperradius limit singles out the lowest set of quantum numbers. Therefore, we confine our attention solely to the corresponding set (26), where significant simplifications take place, with the hypermomentum restricted to even values. Formally, the three-particle enhancement factor will be defined as a limit which, by making use of (22) and (23), simplifies to the form (33), where we have dropped an uninteresting constant factor. To obtain the enhancement factor it is sufficient to calculate the radial wave function for small hyperradius and take the limit indicated in (33).
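The centrifugal role of the hypermomentum can be made explicit. In the free case the reduced hyperradial functions u = ρ^(5/2)ψ satisfy u″ + [κ² − (K+3/2)(K+5/2)/ρ²]u = 0, with regular solution √(κρ) J_{K+2}(κρ) ∝ ρ^(K+5/2) at small ρ; this is a reconstruction consistent with the Bessel functions of (22), not a formula quoted from this copy. A numerical check:

```python
import numpy as np
from scipy.special import jv

K, kappa = 2, 1.3                      # illustrative hypermomentum and momentum

def u(rho):
    """Regular free solution of the reduced hyperradial equation."""
    return np.sqrt(kappa*rho)*jv(K + 2, kappa*rho)

# residual of u'' + [kappa^2 - (K+3/2)(K+5/2)/rho^2] u = 0 via finite differences
rho, h = 2.0, 1e-4
u2 = (u(rho + h) - 2*u(rho) + u(rho - h))/h**2
residual = u2 + (kappa**2 - (K + 1.5)*(K + 2.5)/rho**2)*u(rho)

# small-rho power law: u scales as rho^(K + 5/2)
ratio = u(2e-3)/u(1e-3)
print(residual, ratio, 2**(K + 2.5))
```

The power-law suppression ρ^(K+5/2) of the higher-K channels at small hyperradius is precisely why the lowest quantum numbers dominate the ρ → 0 limit defining the enhancement factor.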
The calculation of the radial wave function is carried out similarly to the two-body case, except that scalar quantities now need to be replaced by matrices in channel space. These channels are specified by two quantum numbers, but to simplify the notation it is possible to use a single integer label, and we have to solve the second-order differential equations given in (26). The physical solution of (26) can be envisaged as a column vector in channel space. To obtain the physical solution we must first generate numerically linearly independent particular solutions vanishing at the origin with an arbitrary slope. These solutions can be grouped together to form the columns of a single square matrix (hereafter we denote matrices by boldface symbols). In practice, this matrix may be obtained by solving the system of equations (26) once for each channel, imposing at small hyperradius a boundary condition that singles out one channel at a time, so that these particular solutions will be enumerated by two quantum numbers. Since the columns of this matrix are linearly independent, they span the space of all possible slopes. Therefore, any column-vector solution of (26) must be expressible as some linear combination of the N columns, with a column vector of constant coefficients. The matrix of physical solutions occurring in (33) differs from the matrix of particular solutions at most by a constant matrix, and the latter will be explicitly determined by making use of the boundary condition at asymptotic distances. At large hyperradius the physical solution must be of the form (37), where the true scattering matrix enters. In formula (37) one diagonal matrix contains the regular solutions of (26) in the absence of the strong interaction, whereas another diagonal matrix contains outgoing hyperspherical waves. Since the potential term falls off rapidly, at some large matching radius we may set the matching conditions (38) for the wave functions and their derivatives, where the prime denotes the derivative with respect to the hyperradius and a constant (albeit energy-dependent) matrix remains to be determined.
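The construction of the matrix of particular solutions can be sketched for a toy two-channel problem; the coupling matrix, momentum and effective centrifugal indices below are illustrative, not those of (26). Each column is generated by integrating outward from near the origin with a unit slope in a single channel, and the columns then remain linearly independent.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 2                                   # number of channels retained
kappa = 1.0                             # hypermomentum (on-shell momentum)
Lam = np.array([1.5, 3.5])              # effective centrifugal indices (K + 3/2)
V = np.array([[-2.0, 0.5],              # illustrative short-range coupling matrix
              [ 0.5, -1.0]])

def rhs(rho, z):
    """First-order form of coupled radial equations (toy model):
    u'' = [Lam(Lam+1)/rho^2 - kappa^2 + V exp(-rho)] u."""
    u, du = z[:N*N].reshape(N, N), z[N*N:].reshape(N, N)
    W = np.diag(Lam*(Lam + 1)/rho**2 - kappa**2) + V*np.exp(-rho)
    return np.concatenate([du.ravel(), (W @ u).ravel()])

rho0, rho1 = 0.05, 20.0
u0 = np.zeros((N, N))                   # every solution vanishes near the origin
du0 = np.eye(N)                         # column j: unit slope in channel j only
sol = solve_ivp(rhs, (rho0, rho1),
                np.concatenate([u0.ravel(), du0.ravel()]),
                rtol=1e-8, atol=1e-10)
Phi = sol.y[:N*N, -1].reshape(N, N)     # matrix of particular solutions at rho1
print(np.linalg.matrix_rank(Phi))       # full rank: columns span all slopes
```

Because the columns span the space of slopes at the origin, any physical solution is a linear combination Phi @ C, exactly as described in the text.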
Eliminating the scattering matrix between the two matrix equations (38), we obtain (39), where the coefficient matrix is seen to be proportional to the inverse of the Jost matrix. Clearly, formula (39) may be viewed as a three-body extension of (8). Because this matrix involves the inverse of the Jost determinant, it is bound to have the same singularity structure as the T-matrix. The physical solution can now be expressed entirely in terms of the particular solutions, and the limiting procedure in (33) may be effected with the aid of (35). Similarly to the two-body case, the only terms in the summations giving a non-vanishing contribution are those with the lowest quantum numbers. Leaving out an irrelevant numerical factor, the enhancement factor in (33) takes the form (41), where the expansion coefficients are provided by the first row of the matrix given in (39). Formula (41) gives the enhancement factor in the form of an expansion in an orthonormal set of momentum-dependent HH functions, which turns out to be quite useful in effecting phase-space integrations. We shall illustrate this point for the case when the transition matrix is assumed to be constant, so that the cross section is essentially determined by the enhancement factor. The integration over the five angles yields the total cross section, up to a phase-space factor and the incident flux factor. The summation extends over all even values of the hypermomentum from 0 up to the truncation value, which in the following we take to be the value at which convergence has been attained. When the incident energy is fixed, the quantities of interest are usually the effective mass distributions or, equivalently, the corresponding kinetic energy distributions of the different pairs in their c.m. frame. Such distributions are obtained here by integrating only over the directions of the momenta and retaining the dependence upon the two remaining variables.
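The explicit matrix formulas are not reproduced in this copy; one consistent reconstruction of the matching conditions (38) and of the solution (39), with diagonal matrices F and H⁺ of regular and outgoing free solutions and the Wronskian convention W{f, g} = f g′ − f′ g taken at the matching radius, reads:

```latex
\boldsymbol{\Phi}(\rho)\,\mathbf{C}
   = \mathbf{F}(\rho) + \mathbf{H}^{(+)}(\rho)\,\mathbf{T},
\qquad
\boldsymbol{\Phi}'(\rho)\,\mathbf{C}
   = \mathbf{F}'(\rho) + \mathbf{H}^{(+)\prime}(\rho)\,\mathbf{T},
```

and eliminating T (the diagonal matrices commute among themselves):

```latex
\mathbf{C}
 = \bigl(W\{\mathbf{H}^{(+)},\boldsymbol{\Phi}\}\bigr)^{-1}
   \,W\{\mathbf{H}^{(+)},\mathbf{F}\},
\qquad
W\{f,g\} \equiv f\,g' - f'\,g .
```

Here W{H⁺, F} is diagonal (the free-solution Wronskians, proportional to the hypermomentum), while W{H⁺, Φ} plays the role of the Jost matrix whose inverse appears in (39).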
With our choice of 3 as the basic Jacobi set, the distribution of the center-of-mass energy of the first pair takes a particularly simple form, in which the square-root factor is a remnant of the phase space. All other factors that depend only upon the fixed total energy have been dropped, as they may be absorbed in the arbitrary normalization constant. The calculation of the c.m. kinetic energy distributions of the two remaining pairs may be carried out in exactly the same manner, but the HH occurring in (41) need first to be transformed to the appropriate Jacobi system by applying a kinematical rotation. Thus, using the transformation (28) in (41), the distribution of the kinetic energy of the second pair is obtained in (44), and the corresponding distribution for the third pair follows from (44) as a result of a permutation. It should perhaps be clarified that this permutation in general changes the kinematic rotation angle that enters the RR coefficients, and in consequence the resulting distribution does not have to be the same as that given in (44). 4 Comparison with experiment and conclusions In the preceding section our considerations have been quite general, and now we wish to apply this theory to the investigation of FSI effects in the reaction close to threshold. To simplify matters we assume that the excitation energy is sufficiently low so that the total orbital momentum in the final state is zero and the corresponding transition amplitude is the sole contributor to the cross section. We label the two protons as 1 and 2, respectively, and the meson as 3, choosing Jacobi set 3 for all computations. With the two protons in a singlet state, the Pauli principle requires that in this frame the orbital momenta take only even values. This guarantees that the spatial part of the wave function will be symmetric with respect to the permutation of the protons. We have tried several possible forms of the pp potential in the singlet state: a delta-shell, a Gaussian, and a fully realistic Reid potential (for details cf. [6]).
The meson-proton interaction is poorly known; for the real part of the scattering length, values between 0.2 and about 1.0 fm have been suggested [13]. An additional difficulty stems from the multi-channel nature of this interaction: already at threshold there are open channels, so that the meson-proton scattering length is a complex number. At present there is not enough information to include these additional channels in our formalism; therefore, in this work the meson-proton interaction has been simulated by the simplest non-absorptive delta-shell potential, operative in the s-wave only, whose range has been arbitrarily fixed to be 1 fm. The depth of this potential can then be adjusted so as to yield an assigned value of the meson-proton scattering length. Since the precise value of the latter is not available [13], we used three values, 0.5, 1.0 and 1.5 fm, respectively, which we believe is a representative sample. All our computations have been carried out for the lowest excitation energy, equal to Q=15.5 MeV, for which two-particle invariant mass spectra are experimentally available. The system of equations (26) was solved repeatedly, successively increasing in each step the truncation value of the hypermomentum until convergence was attained. For the delta-shell and Gaussian pp potentials this occurred at a moderate truncation, but with the Reid potential this figure must be doubled. Similarly to the two-body case [6], the results are completely insensitive to the shape of the pp potential. The results of our computations are presented in Fig. 2 where they are compared with the data from [4]. Figure 2: Invariant mass distributions at Q=15.5 MeV obtained from a three-body calculation: left panel, the pp pair; right panel, the meson-p pair. The data are from [4]. Not unexpectedly, the dominance of the pp interaction is apparent from both plots. Since the larger values of the scattering length are probably rather unrealistic [13], we may say that the meson-proton interaction is of marginal importance as far as invariant mass distributions are concerned. Comparing the invariant mass spectra obtained in the two-body approach (Fig.
1) with those resulting from the full three-body calculation (Fig. 2), we can see that even when the meson-proton interaction is completely disregarded, the invariant mass plots are different in these two cases. Although the input in both approaches is the same, the underlying calculational schemes are different. Owing to the proper boundary condition (37), even in the absence of the meson-proton interaction the three-body wave function has an entangled form, i.e. it cannot be expressed as a product of the p-p wave function times a plane wave associated with the free propagation of the meson. Since in the approximation using the two-body enhancement factor the very existence of the meson is unaccounted for, the latter does not depend upon the meson kinematics (in [6] we made an attempt to lift this deficiency by introducing ad hoc a linear dependence upon q). By contrast, even in the absence of the meson-proton interaction the three-body enhancement factor accounts for the meson propagation. In particular, the mass of the meson strongly influences the dynamics of the three-body system, as can be seen from (10a) and (27). The three-body calculation is closer to experiment in both spectra. We wish to note here that the curves presented in Fig. 2, as well as the dashed curves from Fig. 1, contain no adjustable parameters except for the overall normalization. Our three-body calculation clearly favors smaller values of the meson-proton scattering length. For the larger values, the height of the close-to-threshold peak in the corresponding plot in Fig. 2 is depressed, which is accompanied by a build-up of a pronounced shoulder at the high-energy end. At the same time another shoulder appears at the low-energy end of the other distribution in Fig. 2. All these features worsen the agreement with experiment. Unfortunately, it would be difficult to improve the existing estimates of the scattering length using the data displayed in Fig. 2, especially since absorptive effects have been ignored. Summarizing, we have developed a general three-body formalism for calculating FSI effects in a three-particle final state.
To the best of our knowledge, this is the first attempt in the literature to derive a three-body final-state interaction enhancement factor. The presented calculational framework employs the hyperspherical harmonics method and is applicable to three-body systems in which no binary bound state can exist. The necessary input consists of all three pairwise potentials. Detailed computations carried out for the reaction at Q=15.5 MeV show that in the invariant mass spectra the role of the meson-proton interaction is marginal and that the proper boundary condition imposed on the final-state wave function is more important. The three-body calculations, involving no adjustable parameters, reproduce quite well the experimental invariant mass spectra. Partial support under grant KBN 5B 03B04521 is gratefully acknowledged. • [1] P. Moskal et al., Prog. Part. Nucl. Phys. 49 (2002) 1. • [2] C. Hanhart, e-Print Archive: hep-ph/0311341. • [3] M. Abdel-Bary et al., Eur. Phys. J. A 16 (2003) 127. • [4] P. Moskal et al., Phys. Rev. C 69 (2004) 025203. • [5] K. Nakayama et al., Phys. Rev. C 68 (2003) 045201. • [6] A. Deloff, Phys. Rev. C 69 (2004) 035206. • [7] E. Fermi, Nuovo Cim., Suppl. Vol. II, No. 1 (1955) 17; K.M. Watson, Phys. Rev. 88 (1952) 1163; ibid. 89 (1953) 575; A.B. Migdal, Zh. Exper. Theor. Fiz. 1 (1955) 17. • [8] M.L. Goldberger and K.M. Watson, Collision Theory, Wiley, N.Y., 1964; M. Froissart and R. Omnes, in Physique des Hautes Energies, C. DeWitt and M. Jacob (eds.), Gordon & Breach, New York, 1965; J. Gillespie, Final State Interactions, Holden-Day, San Francisco, 1964. • [9] M. Abramowitz and I.A. Stegun (eds.), Handbook of Mathematical Functions, Dover, New York, 1965. • [10] B.V. Danilin and M.V. Zhukov, Phys. At. Nucl. 56 (1993) 460. • [11] E. Nielsen, D.V. Fedorov, A.S. Jensen and E. Garrido, Phys. Rep. 347 (2001) 373. • [12] J. Raynal and J. Revai, Nuovo Cim. A 68 (1970) 612. • [13] A.M. Green and S. Wycech, Phys. Rev. C 55 (1997) 2167; ibid.
C 60 (1999) 035208.
JMP Vol.12 No.12, October 2021 Mechanism of Quantum Consciousness that Synchronizes Quantum Mechanics with Relativity—Perspective of a New Model of Consciousness Abstract: The synchronization of quantum mechanics with relativity is considered here differently from the present quantum gravity models. It originates from the roots of the philosophy of physics and the basic concepts of relativity and quantum mechanics. It emphasizes the fact that two conscious observers are necessary to experience one conscious moment. Various concepts of consciousness are discussed, and the necessity of introducing a new model of quantum consciousness is emphasized. A quantum coordinate system is introduced to explain the present understanding of the phenomena of "observation" and "reality". It is elaborated that observation as defined by physics is confined to the Lorentz space time coordinate system, the Minkowski coordinate system and general relativity. But the phenomenon of observation cannot be completed without considering one more hidden transformation, expressed by a quantum coordinate system which transforms the quantum states into the relativistic coordinate system as an interaction between two conscious observers, described by an interactive mechanism of quantum states. A flow chart illustrating the mechanism giving rise to a conscious moment is presented, and a new model of consciousness is proposed. It emphasizes the fact that "reality" is different from "observation" as defined by physics. This affects the relativistic factor of special relativity and suggests a modification of it. If this modified relativistic factor is confirmed experimentally, the results would establish the mechanism of consciousness and constitute a remarkable breakthrough in the physics of consciousness. 1. Introduction Fundamentally, quantum mechanics is not in synchronization with general relativity because at the quantum level, i.e. beyond a certain limit, the equations of general relativity cannot explain space time.
Quantum mechanics describes the discreteness of space time, whereas general relativity interprets space time as continuous and smooth. These two are not in synchronization. 1.1. Present Quantum Mechanics The wave theory of light was introduced in the 17th century. The double slit experiment proposed in 1803 by Thomas Young played an important role in establishing the wave theory. In 1900 Planck proposed the quantum theory. In 1905 Einstein explained the photoelectric effect by Planck's quantum theory. Modern quantum mechanics originated after the introduction of de Broglie's equation explaining the wave nature of the particle in the years 1923 to 1925. Later, matrix mechanics was introduced. The Schrödinger wave function was introduced in 1926. By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and von Neumann, with greater emphasis on measurement, the statistical nature of our knowledge of reality, and the definition of the "observer". Even today, the measurement problem, observables and the "observer" play an important role in the development of quantum theory [1] [2] [3]. Observer and observation have a deeper meaning involving the concept of consciousness [2] [4]. In this paper it is explained that without observers there is no meaning for the word "Reality" as described by physics. Of course, reality is also linked with relativity. 1.2. Relativity Galilean or Newtonian transformations are equations that relate the space and time coordinates of two systems moving with constant velocity relative to each other. They fail to interpret the velocity of light, which is described by Maxwell's equations; these equations are not invariant under Galilean transformations. 1.3. Lorentz Transformation According to the Lorentz transformation, observers moving at different velocities may measure different distances such that the velocity of light is the same in all inertial reference frames. This invariance of the light velocity has been taken as a postulate of the special theory of relativity. 1.4.
Special Relativity In 1905 the special theory of relativity was published. It replaced the conventional notion of an absolute universal time with the notion of a time that is relative to the reference frame and its position in space. Rather than an invariant time interval between two events, there is an invariant space time interval; combined with other laws of physics, the theory led to the mass-energy equivalence principle. Special relativity interprets a flat four-dimensional Minkowski space time. It appears to be very similar to the standard three-dimensional Euclidean space, but there is a difference with respect to time. One often reduces the spatial dimensions to two, so that the physics can be represented in a three-dimensional space. In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as three-dimensional vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate time-like quantity to a space-like vector quantity, and we have four-dimensional vectors, or "four-vectors", in Minkowski space time. The components of vectors are written using tensor notation. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy-momentum tensor, which includes both energy and momentum as well as stress, pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved space time. 1.5. General Theory of Relativity Thus in 1915 general relativity was proposed. According to this theory, there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn change the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy-momentum of matter.
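For later reference (the abstract proposes a modification of the relativistic factor), the standard Lorentz transformation between two frames in relative motion with velocity v along x, and the relativistic factor itself, are:

```latex
x' = \gamma\,(x - v t), \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad
y' = y, \quad z' = z, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
```

These transformations leave the velocity of light and the interval s² = c²t² − x² − y² − z² invariant, which is the invariance referred to in Sections 1.3 and 1.4.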
As it is constructed using tensors, general relativity exhibits general covariance. It thus satisfies a more stringent general principle of relativity, which says that the laws of physics are the same for all observers. In other words, as expressed in the equivalence principle, space-time is locally Minkowskian and the laws of physics exhibit local Lorentz invariance. If we consider general relativity as most fundamental and explainable by geometry, then quantum theory, the basis of our understanding of matter from elementary particles, is unexplainable by space time geometry at the quantum level. However, how to reconcile quantum theory with general relativity is still an open question. In order to explain the importance of the problem, a few authors conducted a survey and published the results [5]. 1.6. Relativistic Quantum Mechanics Relativistic quantum mechanics is the application of special relativity to quantum particles. It is not a theory to reconcile quantum mechanics with general relativity. The Dirac equation is a result of this concept. Based on this concept, superfluid theories have been proposed [6]. According to general relativity, the conventional gravitational wave is: 1) a small fluctuation of curved space time; 2) separated from its source and propagating independently. These features cannot be completely justified in a theory with exact Lorentz symmetry; they are not perfectly described by relativistic theory. In this paper the conceptual and physical interpretation of quantum coordinates, and their transformation into Lorentz or Minkowski space time, is explained, and the resulting space time is incorporated in general relativity. Thus a mathematical interpretation of space time curvature becomes possible through the concept of a physical transformation of quantum states, represented by quantum coordinates, to the space time coordinate system of reality. This paper has no relation to superfluid relativity, but it is based on a previous calculation of space time diameters for all fundamental forces [7].
All these space time diameters are interpreted as points with zero space in quantum coordinates, in order to obey the property of the signal required for transformation. 1.7. Synchronization of QM with GR The equations of general relativity describe space time curvature. When they are applied to black holes, physical quantities such as the space time curvature diverge at the centre; closer to the centre than a Planck length, the equations of general relativity (GR) break down. A new theory which goes beyond GR is required, in which quantum influence plays the dominant role. Thus quantum gravity theories originated [8]. 1.8. Loop Quantum Gravity (LQG) The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. The quantum state of space time is described in the form of spin networks. Much of the work in the "loop quantum gravity" or "quantum geometry" area has been based on Dirac quantization of the constraints, though there have been recent advances in the use of covariant "spin foam" methods [8]. All the above models are based on space time geometries, renormalization [8] [9], and space time coordinates defined by Newton, Galileo, Lorentz, etc. But nowhere is it mentioned how this space time forms. Is the observer, who is a part of these coordinate systems, really in the same system? How can we transform these systems into one another without knowing whether the observer is inside or outside the system? This probe will lead us to formulate a new approach for synchronizing quantum mechanics with general relativity. The "physics of consciousness" emphasizes these aspects. The involvement of "consciousness" plays a vital role in the synchronization of quantum mechanics with general relativity. Present studies explain consciousness as follows. 1.9.
Consciousness Consciousness is an interdisciplinary concept involving quantum mechanics, relativity, space time structure, and biology. There are many theories that define consciousness. In this paper, based on earlier work, the physics of consciousness, especially the "measurement problem", the "observer effect", and "wave function collapse", is considered for a further probe into fundamental physics. Orch-OR [10] is one of the proposals on this subject. According to it, consciousness is associated with the orchestrated objective reduction process and is related to gravity. The threshold time to form gravitational space time at the basic level has been calculated [7]. This is the first step to synchronize quantum mechanics with relativity, since the quantum particle of our four-dimensional space time, i.e. gravity, obeys both quantum mechanics and the general relativity principle. This has led me to propose new hypotheses of consciousness [11]. Some of the experiments in biology [12] are promising as proof of this direction of thought. Of course, the author expressed it as a proof of the Orch-OR theory. The same can be applied further to the new hypothesis of consciousness [11] also, and the model elaborated in the present paper concerns the physics part of that biological model of consciousness. 1.10. New Hypothesis of Consciousness Analysis of some of the concepts involved in the Orch-OR proposal, such as threshold time, quantum decoherence, entanglement, the system and the environment with which the system interacts, etc., and of the process connecting all these phenomena, raised many alternative solutions for connecting the concepts and alternative proposals for their integration. Thus a new hypothesis explaining the model of "consciousness and information processing" has been proposed [11]. Now, in the present paper, a new model of consciousness is proposed to interpret "consciousness" in terms of physics. It is completely different from the existing models but similar to the Orch-OR proposal.
All the above theories tried to explain this synchronization on the basis of general relativity. But special relativity plays a vital role in this synchronization. Special relativity is a simple explanation of the transformations from the Newtonian and Galilean to the Lorentz transformation. Finally, the Lorentzian manifold has been transformed into the Minkowski space time manifold, which is the basis for general relativity. Quantum mechanics is a parallel development and was not based on these transformations. So general relativity is not in synchronization with quantum mechanics. Loop quantum gravity theories tried to synchronize general relativity with quantum mechanics directly from general relativity. Even though they considered the synchronization through the Lorentzian manifold, they have not considered the fact of observation and its transformation, which is basically connected to consciousness. If we consider the involvement of consciousness in all these transformations (the Newtonian, Galilean and Lorentz transformations), we can understand that one transformation is missing. This transformation is interpreted by the quantum coordinate system. The mechanism of consciousness plays an important role in this synchronization. Of course, without the double relativity effect [13] [14] [15] it is not at all possible to get an overall view of quantum coordinates. Explaining all these transformations along with the mechanism of consciousness again one by one, we can find a new way of interpretation for this synchronization. This interpretation will definitely transform the classical or relativistic concepts and quantities into a quantum mechanical system. 2. Theory & Discussion The established theoretical background of physics explains the space time notions and the representation of its coordinate systems from Newtonian mechanics to general relativity. Their transformations have also been well established. Now we introduce a new coordinate system, termed "quantum coordinates", into this sequential order.
Before the introduction of these coordinates, the sequential order is Newtonian, Galilean, Lorentz, Minkowski, and general relativity; sequential means that without the previous step the next one is not possible. After the introduction of quantum coordinates, the order becomes Newtonian, Galilean, Lorentz, quantum, and then Minkowski or general relativity. Let us see how quantum coordinates are interpreted in these transformations.

2.1. Quantum Coordinate System

A quantum coordinate system resembles a Minkowski or general-relativistic coordinate system, with space on the X coordinate and time on the Y coordinate. Figure 1 shows the quantum coordinate system in terms of relativistic coordinates and its interpretation with respect to them. The difference is that in quantum coordinates every point has zero space: only time exists. The space axis still appears in the interpretation of the coordinate system, meaning that even though space exists, it is taken as zero. In a conventional coordinate system a point has both time and space, whereas in a quantum coordinate system only time exists, and what would be space is instead expressed as information. All points have different time and different information but zero space, so in place of the space axis there is an information axis when comparing with the conventional relativistic coordinate system. By the space-time equivalence principle, space converts into time [16], and time is nothing but information [11]. So, finally, a point contains only time, and according to its position in the coordinate system it carries a time value and an information value; its space is zero. This time and information describe the point. Since its space is zero, it cannot contain any object such as a mass.
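The description above can be sketched as a toy data model. This is purely illustrative, under the paper's own stipulations: the class names `QuantumPoint` and `RelativisticPoint` and the `to_relativistic` function are my hypothetical labels, not notation from the paper, and the transformation rule is only the paper's qualitative claim that an observed point acquires some non-zero diameter.

```python
from dataclasses import dataclass

# Illustrative sketch only: the paper defines a "quantum coordinate" point as
# carrying a time value and an information value, with zero spatial extent.
# All names here are hypothetical labels, not notation from the paper.

@dataclass(frozen=True)
class QuantumPoint:
    time: float         # the only coordinate that "exists" at this level
    information: float  # stands in for the space axis (space is taken as zero)

@dataclass(frozen=True)
class RelativisticPoint:
    x: float  # space coordinate (non-zero after transformation)
    t: float  # time coordinate

def to_relativistic(p: QuantumPoint, diameter: float) -> RelativisticPoint:
    """Toy transformation: on observation, the paper says, the point acquires
    some small but non-zero spatial diameter; time carries over unchanged."""
    return RelativisticPoint(x=diameter, t=p.time)

qp = QuantumPoint(time=1.0, information=0.5)
rp = to_relativistic(qp, diameter=1e-35)
print(rp.x > 0 and rp.t == qp.time)  # True
```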
In the transformation of this coordinate system to space-time coordinates, it comes into synchronization with general relativity, and the general relativity equations apply. Figure 2 illustrates the observer's view of an object through quantum phenomena represented by quantum coordinates.

2.2. Transformation of the Quantum Coordinate System

Figure 1. Quantum coordinate system and its interpretation with respect to the relativistic coordinate system. Each point represents a quantum state and each line represents a film of the universe, so that the universal film is a superposition of quantum states.

Figure 2. Observer's view of an object through quantum phenomena represented by quantum coordinates. The observer is confined to conventional relativistic frames of reference.

For any Lorentz coordinate system, if time changes while space remains constant, or space changes while time remains constant, for a particular frame of reference with velocity, then that frame (or particle) has a constant velocity relative to all inertial frames of reference. For example, if we observe a particle with velocity v, it covers a distance within a duration of time; if the observing system itself moves, the relative velocity will normally differ. If it does not change, the change in velocity is zero, and if the change in velocity is zero, the change in either space or time is zero. Suppose the case in which time exists and the space (the covered distance) is zero, as explained above: then the particle's velocity is constant for all inertial frames of reference. Such particles do not exist in this Lorentz coordinate system; only the velocity exists. Any particle in this world occupies space and can be considered a "combination of particles".
So "constant velocity relative to each other" has no meaning for that "combination of particles", i.e. a mass, and such particles do not exist in the conventional space-time (Lorentz) coordinate system. The author explained the same phenomenon long ago through the "double relativity effect" and NC particles [13] [14]. Here it is applied to a deeper understanding of the relation between quantum states and their interpretation in the transformation of relativistic frames.

2.3. Double Relativity Effect

The double relativity effect is an effect attributable to absolute velocities, arising when one observes a particle whose velocity is constant relative to all inertial frames of reference (a travelling photon is an example). In such an observation, special relativity is applied twice within a single observation at the same time [13] [14] [15]. The velocity considered in the first stage is the absolute velocity, and it is affected by the double relativity effect, which changes the absolute velocity into the observed velocity. The double relativity effect states that the absolute velocity v_a and the observed velocity v_o are related by v_o = v_a/γ, where γ = 1/√(1 − (v_a/c)²). Any velocity in this universe must pass through an infinite series of points in relativistic space-time coordinates. These points are imaginary only, but as the object passes through each point in the course of its motion, the double relativity effect is applied to it at that point in the coordinate system, so the observed velocity is v_o = v_a/γ.

2.3.1. Role of the Observer

The paper [17] explains the role of the observer and of consciousness in special relativity. It elaborates how an observer considers the object through the concept of absolute velocities and how the films of the universe change with time. The "film theory" of the universe is applied and elaborated much further in [11].

2.3.2. Film of the Universe

A film of the universe is an inertial frame of reference containing the same time interval all over the film, as per the film theory of the universe [11] [13]. Applied to the universe as a whole, it is postulated that the entire universe consists of points containing an absolute velocity [16]. This velocity is constant for all inertial frames of reference: an absolute velocity v exists at a distance d from an observer, related by v = H·d, and at the same time it has a velocity satisfying v·d = K. The resultant is the observed velocity. Here H is Hubble's constant and K is Siva's constant. The same concept applies to space-time, and it was shown that all the fundamental forces are made up of space-times, but with different space-time densities [7]. Interactions between these fundamental fields, each with its separate space-time, give rise to new particles, and the conversion of space-time into matter is also explained in [16]. Here this is connected to quantum physics, emphasizing the fact that "the signal in a space-time is nothing but the least diameter of space-time in that field". The space-time diameters for all the fundamental forces were calculated in previous work [7]. Now let us see how these concepts can be interpreted through quantum mechanics.

2.4. Application to Quantum Mechanics

The physical meaning of a quantum state has been explained in terms of the film theory of the universe [11] [17]. As per the film theory, a film is an inertial frame of reference in which every point shows the same clock. But as per absolute velocities and the double relativity effect, the points in relativistic space-time coordinates belong to different inertial frames of reference (IFRs), and every point is a representation of a universal film with a specific time. Within a film there is no passage or flow of time; once a change of film happens, time flows within the film as well as from film to film.
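The double relativity relation above is simple arithmetic, so a minimal numerical sketch may help. This assumes the paper's own formula v_o = v_a/γ with γ = 1/√(1 − (v_a/c)²), evaluated here only for absolute velocities below c; the function names are my own labels.

```python
import math

# Numerical sketch of the paper's double relativity relation
# v_o = v_a / gamma, gamma = 1/sqrt(1 - (v_a/c)^2),
# applied here only to absolute velocities v_a < c.

C = 299_792_458.0  # speed of light in m/s

def gamma(v_a: float) -> float:
    """Relativistic factor for an absolute velocity v_a < c."""
    return 1.0 / math.sqrt(1.0 - (v_a / C) ** 2)

def observed_velocity(v_a: float) -> float:
    """The paper's observed velocity v_o = v_a / gamma."""
    return v_a / gamma(v_a)

v_a = 0.6 * C
print(gamma(v_a))                  # 1.25
print(observed_velocity(v_a) / C)  # 0.48 — observed velocity in units of c
```

So an absolute velocity of 0.6c would be observed as 0.48c under this relation.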
Thus in a Minkowski space-time all the points carry different quantum states, and each quantum state is without flow of time. In that coordinate system there exist points with constant time that vary in space. All these points describe the position of a point (they contain information) in this universe: each describes an object at that point. But because it is a quantum point, it contains no space. The combination of all these points provides the positions of all points as a single film of the universe, where "combination" means "superposition". Thus the superposition of all these quantum states, termed a universal film, describes a point in quantum space-time, i.e. in the quantum coordinate system. But we cannot observe this quantum point, since there is no flow of time unless one film passes to another; it is imaginary only. This is where we go deeper than the quantum state as interpreted mathematically through state vectors [18] and the wave function of the Schrödinger equation [19]: those mathematical descriptions include the flow of time. It is a hidden secret of nature. When we consider the physical meaning of quantum states at the most fundamental level, a quantum state is a universal film in which time exists but does not flow. It is a standstill picture of space (expressed as information and time in quantum coordinates). The flow of time exists only when one film changes into another (described in this theory by a mechanism of consciousness). Even the Schrödinger wave equation, time-dependent or independent, includes the flow of time; since it describes a wave, it must involve the passage of time. But when it is divided further into films in which time does not flow, the quantum state described mathematically loses its meaning: even if it is a pure state, it is a mixture of two or more states (which are termed films).
There is a hidden theory between these purest states that involves the origin of time and consciousness. This paper tries to provide that insight as a new model of consciousness.

Observation and the Reality

Figure 3 represents the transformation that makes time flow. When time flows, imaginary quantum states are superposed and form a reality within the film, and at the same time the film itself changes. Since the mechanism forms an internal superposition, the film has to change. This continues, and shows any point in the space-time coordinates with a specific non-zero spatial distance; thus all the points of an object are observed as the reality of that object. Figure 3 illustrates that the superposed quantum states at a point qs in the quantum coordinate system are transformed into a real point pr in the conventional relativistic frame.

Figure 3. Transformation of the quantum coordinate system to the relativistic coordinate system in the process of observation. The point qs in the quantum coordinate system is transformed into the point pr in the conventional relativistic frame as reality.

This is the interpretation of the observation or measurement of quantum states, which are not real (imaginary only). The reality exists in the transformation through the additional coordinate system called the "quantum coordinate system".

3. Perspective of a New Model of Consciousness

The physics of consciousness has been explained elaborately in the "new hypothesis on consciousness" [11]. Strictly, it is not a hypothesis but an analysis of a few fundamental queries in the philosophy of physics, such as "why is the velocity of light constant for all inertial frames of reference?", "how did space-time originate?", and "what physical entity makes the difference between a living thing and a non-living thing?". It concluded with a preliminary model of consciousness integrating all the aspects explained in the sections above, similar to the Orch-OR theory.
But it refuted the idea of quantum states as described by their mathematical support. In particular, it contradicts Orch-OR in emphasizing that the usual physical interpretation of a quantum state and of superposition of states is not relevant at the most fundamental level, where time plays the important role. In this paper that claim is substantiated by introducing the quantum coordinate system hidden in the conventional transformations. The model does not question the biological or neurological aspects of the Orch-OR theory in detail, but this paper holds that no model or experiment on consciousness can be sustained without considering these basics at the quantum level. Let us review the mechanism behind this new model of consciousness (Figure 4).

3.1. Mechanism behind This Model

Quantum mechanics interprets a point as having zero space (only time exists) [7].

Figure 4. 1 & 2 are separate conscious observers; 3 to 8 are quantum states associated with observer 1; 9 to 14 are quantum states associated with observer 2; 15 is a superposed state of one state of observer 1 with one state of observer 2 (for example, the superposition of 4 & 9); 16 is the film containing observers 1 & 2; 17 is the change in film 16 as part of this mechanism; 18 is the process that must complete for the film to change from 16 to 17 (the superposition of the quantum states of one observer with one quantum state of the other); 19 is the process of the film change from 16 to 17, consequential on and simultaneous with 18; 20 is the process indicating the superposition of films 16 & 17; 21 is the final result, the reality.

A specific time exists for a specific point. Interpreted in a quantum coordinate system, a specific time exists for all points on a line parallel to the axis denoting space; thus all those points can be shown with different space values on the same line.
A line parallel to the space coordinate has a different space value for each point on it. But a different space value has no meaning, because according to quantum coordinates no distance or space exists: even though space exists, it is taken as zero. So instead of space coordinates we may call them information coordinates. On any such parallel line the time is constant but the information changes (in other terms, the space changes, and the time would change accordingly; to keep the time constant, the excess time converts into space as per the space-time equivalence principle, and since space is zero it is denoted by a change in information). "Zero space" between points means that all these points with different information are superposed at a single point: that point is a superposition of all the information, which is nothing but a superposition of all the quantum states. After this overall view of the mechanism, the statement or postulate that time is information [11] has to be modified to "the quantum of space-time is information": according to the space-time equivalence principle space and time are mutually convertible, so to keep the time constant the space has to be converted into information (information here may contain mass-like quantities or observables associated with that quantum state). But when we transform to relativistic coordinates, there is no zero-space point: however small a point is, it has some diameter. So the superposed quantum states are divided over a space, and the point is observed with some diameter; in other words, any object is observed with some length coordinate. For example, in quantum coordinates a photon is a zero-space superposed quantum state, but under the relativistic transformation it has space. Since relativity takes its speed as the maximum velocity, by its basic principle its space must be zero.
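The grouping step described above, points sharing one time value but carrying different information being superposed at a single zero-space point, can be modelled very loosely as collecting information values under each time value. This is a hypothetical illustration of the paper's wording, not a physical claim; the variable names are my own.

```python
from collections import defaultdict

# Hypothetical sketch: the mechanism says points with the same time but
# different information are superposed at one zero-space point. Here the
# "superposition" is modelled, loosely, as the set of information values
# collected under each time value.

points = [(1.0, "a"), (1.0, "b"), (2.0, "c"), (1.0, "d")]  # (time, information)

superposed = defaultdict(set)
for t, info in points:
    superposed[t].add(info)

# One zero-space point at t = 1.0 carries all three pieces of information.
print(sorted(superposed[1.0]))  # ['a', 'b', 'd']
```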
Since it is relativistic, this is the space between two divided quantum states. A conscious observer observes it again as superposed quantum states; this final observation is the reality, as opposed to observation as defined by physics without the involvement of consciousness. In the quantum coordinate system, time and information exist, while in the relativistic system both space and time exist: time converts into space. We have seen that in space-time conversion, mass or charge also originates; the space-time conversion phenomenon elaborates this [16].

3.2. Application of the Double Relativity Effect to Consciousness

In each space-time there exists only one signal, which is the maximum velocity and whose velocity is constant for all inertial frames. All the fundamental forces have their own space-times, and the space-time diameter can also be calculated [7]. This signal is a point in quantum coordinates with zero space but containing time and information; mathematically it is represented as a quantum state. Since it is a signal, its velocity is an absolute velocity, constant for all inertial frames of reference (for example, the photon with velocity c in our four-dimensional universe). The observed velocity is the velocity representable in Lorentz coordinates. As per the double relativity effect, the observed velocity v_o and the absolute velocity v_a are related by v_o = v_a/γ. So the observed velocity c has an absolute velocity of c√2 with γ = √2, since c is the observed velocity satisfying v_a/γ = c. But without the involvement of the conscious mechanism of Figure 2 and Figure 4, one cannot find the reality behind the observed velocity; this observed velocity is converted into reality as in Figure 5, which explains three instances of observation.

3.2.1. Instance 1

Figure 5(a) shows the phenomenon of observation in relativistic coordinates due to the double relativity effect.
There v_a is the absolute velocity and v_o the observed velocity, related by

v_o = v_a/γ (1)

γ = 1/√(1 − (v_a/c)²) (2)

Figure 5(a) thus illustrates the double relativity relation v_o = v_a/γ with the relativistic factor γ of the conventional (Galilean or Lorentz) relativistic coordinate system.

Figure 5. The reality is the consequence of the double relativity effect and of consciousness in the process of observation. (a) Observation in relativistic coordinates; (b) observation in quantum coordinates; (c) final reality in the process of observation.

3.2.2. Instance 2

The same observation in quantum coordinates is shown in Figure 5(b). As explained in Section 2.1, quantum coordinates contain points with a time coordinate only, the space coordinate always being zero. The quantum particle here is the "bio force" particle [7] [15] and the signal velocity is c√2. The same special-relativity principle applies, and all velocities are compared to that signal velocity. Here too the absolute and observed velocities are related by v_o = v_a/γ, but now γ = 1/√(1 − (v_a/(c√2))²); this represents the consciousness frame of super relativity [15]. Since Figure 5(b) shows the quantum coordinates, the absolute velocity v_a can be replaced by v_q (as explained in Figure 6), and we can write the quantum relativistic factor:

v_o = v_q/γ_q (3)

γ_q = 1/√(1 − (v_q/(c√2))²) (4)

Figure 5(b) thus illustrates the double relativity relation v_o = v_q/γ_q with the relativistic factor γ_q of the quantum coordinate system. These two instances (1 & 2) are not by themselves sufficient once the effect of the consciousness mechanism on observation is included. The observed velocity in Figure 5(c) combines the two instances, and special relativity must not be violated. The same double relativity effect then gives the reality from the observed velocity v_o, where γ_r = 1/√(1 − (v_o/(c√2))²).
v_r = v_o·γ_r (5)

γ_r = 1/√(1 − (v_o/(c√2))²) (6)

Thus γ_r satisfies both instances. Figure 5(c) illustrates the relation v_r = v_o·γ_r and the relativistic factor γ_r as the absolute reality due to the effect of consciousness on observation. This is elaborated in Figure 6, which shows a quantum point with respect to the coordinates and its position inside that point; the reality is the combination of both.

Figure 6. Role of consciousness in the relativistic interpretation of a quantum point. With respect to the XYZ frame the circle is a point and the space inside it is zero; this coordinate system is the conventional Lorentz coordinate system obeying the special theory of relativity, the velocity shown is v_o, and the signal velocity is c. The same point with respect to the coordinate system X_q Y_q Z_q inside the circle, i.e. the quantum coordinate system, has zero space coordinate for all points: only the time coordinate exists, the velocity is v_q, and it follows v_o = v_q/γ_q with γ_q as in Equation (4). The conscious observer is in a different frame X_c Y_c Z_c; with reference to the conscious observer at origin O_c, the velocity is v_r and follows Equations (5) & (6).

Finally, we can say that the frame of observation is affected by consciousness, and that frame must be considered distinct from the observer's frame of reference as defined in conventional (Galilean or Lorentz) relativistic coordinates. Due to the double relativity effect and the effect of consciousness on observation, the signal velocity is expected to become c√2, and the relativistic factor changes accordingly. The calculation of this √2, elaborated in [15], is itself an important aspect of consciousness. We can now summarize it simply; the logic is as follows.
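Equation (4) is easy to evaluate numerically. The sketch below assumes the paper's own formula, with the claimed signal velocity c√2, in units where c = 1; the function name `gamma_q` is my own label.

```python
import math

# Sketch of the paper's quantum relativistic factor, Eq. (4):
# gamma_q = 1/sqrt(1 - (v_q/(c*sqrt(2)))^2), with the paper's claimed
# signal velocity c*sqrt(2). Units with c = 1.

C = 1.0
C_SIGNAL = C * math.sqrt(2.0)  # the paper's signal velocity c*sqrt(2)

def gamma_q(v: float) -> float:
    """Quantum relativistic factor of Eq. (4)."""
    return 1.0 / math.sqrt(1.0 - (v / C_SIGNAL) ** 2)

# At v = c the factor is exactly sqrt(2), so the "real" velocity of
# Eq. (5), v_r = gamma_r * v_o, reaches the signal velocity c*sqrt(2).
print(gamma_q(C))      # 1.4142135...
print(gamma_q(C) * C)  # equal to C_SIGNAL
```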
If the relativity factor for the double relativity effect and for special relativity is the same, the equation can be written as γ = 1/√(1 − (v_a/c)²). In the double relativity effect v_o = v_a/γ; since c itself is an observed velocity, the underlying absolute (signal) velocity is cγ, and the equation can be written in terms of the observed velocity as γ = 1/√(1 − (v_o/c)²). With the signal velocity cγ and the observed velocity remaining v_o, we can therefore rewrite the equation as γ = 1/√(1 − (v_o/(cγ))²). Solving this at the limiting observed velocity v_o = c gives γ = √2. Now the signal velocity is c√2 and Equations (5) & (6) are applicable; Figure 5(c) illustrates this as the reality of observation. Thus the final reality is the overall effect of the double relativity effect and consciousness on observation, as illustrated in Figure 5. It is not possible to observe these velocities without the involvement of consciousness, so the result is also a proof of consciousness. Finally, in the process of observation, first the quantum coordinates have to be changed into Lorentz coordinates, so the double relativity effect must be used; the points are then compatible with Lorentz coordinates. This is called observation, and the velocity is the observed velocity. It then undergoes the process of the consciousness mechanism and becomes reality, so relativity must be applied again with respect to the conscious observer. The signal velocity is c√2 and all velocities must be compared to it; the velocity entering the relation is the observed velocity, following Equation (5). This is the final reality, for which the relativistic factor γ_r follows Equation (6). This factor is different from the Lorentz relativistic factor; here the velocity satisfies v_o ≤ c√2. The velocity v_r is the real velocity, γ_r times the observed velocity v_o. The observed velocity is limited to Lorentz coordinates; the real velocity is due to the effect of consciousness on the observed velocity. Consequences such as the relativistic kinetic energy are therefore also different. This result supersedes the relevant conclusions of [17] and offers a more profound understanding of the problem.
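The claim that γ = √2 solves the self-consistent relation γ = 1/√(1 − (v_o/(cγ))²) at v_o = c can be checked numerically. The sketch below takes the relation exactly as written, in units with c = 1, and finds its root by bisection (plain fixed-point iteration does not converge here, since the map is an involution).

```python
import math

# Numerical check of the claim that gamma = sqrt(2) solves the
# self-consistent relation gamma = 1/sqrt(1 - (v_o/(c*gamma))^2)
# at the limiting observed velocity v_o = c. Units with c = 1.

C = 1.0

def residual(g: float, v_o: float = C) -> float:
    """g - 1/sqrt(1 - (v_o/(c*g))^2); zero where g solves the relation."""
    return g - 1.0 / math.sqrt(1.0 - (v_o / (C * g)) ** 2)

# The square root is real only for g > v_o/c, so bisect on (1, 3).
lo, hi = 1.0 + 1e-9, 3.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

g_star = 0.5 * (lo + hi)
print(g_star)  # 1.4142135... = sqrt(2), giving the signal velocity c*sqrt(2)
```

Algebraically the same follows in one line: γ = γ/√(γ² − 1) forces √(γ² − 1) = 1, i.e. γ² = 2.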
These effects could be observed in any particle physics laboratory; specifically, they should appear for observed velocities between c/√2 and c.

4. New Perspective on the Synchronization of General Relativity with Quantum Mechanics

4.1. Involvement of the Concept of Time

The Einstein-Hilbert action is an approach similar to a least-action principle, in which the least action is treated as a point in space-time and the equation yields a graviton if it converges. Until now it has been considered from the point of view of general relativity, so scientists view this approach with suspicion [20]. Perturbation about a basically fixed point is fine at larger distances, but quantum mechanically the gravitational interaction is irrelevant [9]. Moreover, the point considered as the least action is not a zero-space, zero-mass point: in a Cartesian system it must have some space. Zero space cannot be accommodated in that system, whereas it can be in the quantum system; when it converts into space in the Cartesian coordinate system, it acquires a space-time diameter, with a least diameter that converges due to the Hilbert action.

4.2. Relevance of a Time Operator in Quantum Mechanics

It seems relevant to mention an idea [21] regarding the interpretation of a "time operator" while discussing these issues. The mathematical methodology that author adopted is sound and helpful for developing the conceptual interpretations described in this paper; conversely, that author could use the present theoretical background to interpret the most basic level of the quantum state with respect to the proposed time operator. The present paper argues for a Schrödinger equation without a time operator, or with the operator taking a zero value, and may thus lead to an exact and most relevant time-independent Schrödinger wave equation that reveals some of the secrets of nature.

4.3. Relevance to Other Interpretations

Other interpretations, such as the random discontinuous motion (RDM) interpretation of quantum mechanics [22] [23] or "particle interference" rather than wave interference as a description of wave function collapse [24], should also be derivable from the conceptual conclusions of the present paper, which elaborates the evolution of time and its involvement in the superposition of two time-independent quantum states to form space-time, the root cause of the formation of a particle or a wave.

5. Comparison with Other Models of Consciousness

This model differs from other theories of consciousness in two respects: 1) observation is not of the object alone; the object is part of the universal film as a whole at a particular time; and 2) a minimum of two observers is required to generate one conscious movement of any observer.

5.1. Observer Effect and the Necessity of a Conscious Observer

In the double-slit experiment, the observer effect changes the reality, which raises the question of whether the observer must be conscious. Most scientists have ruled out the requirement that the observer be conscious [4] [25]. But in this paper, "observer" means a conscious observer only, and the detector is also part of the conscious observer. Replacing the detector with a conscious observer is, by itself, not sufficient to establish the effect of the conscious observer on observation. The model of consciousness presented here, and its mechanism, emphasize that there must be two conscious observers to observe the physical entities existing in this universe. The transformations of the interactions between the quantum states involved in this observation, and the superposition of these states, constitute reality, which is more than a simple observation. Only the change of these interactions with time provides the wave function and its collapse, which turns a particle into a wave and vice versa. So this experiment by itself addresses only the measurement problem; it does not connect consciousness and observation.
Two conscious observers must exist to have the conscious experience of the effect of observation, i.e. of reality.

5.2. Refutation of the Idea That Consciousness Is Everywhere in the Universe

Some authors deny that consciousness is attributable to living things only, holding instead that consciousness belongs to the whole universe [4] [25] [26]; there is also an argument against this [27], which says that the wave function has collapsed somewhere, once and for all. This paper supports the latter idea and contradicts the claim that consciousness is everywhere: it says clearly that consciousness belongs to living things only, because the detector is also part of the observation and only a conscious observer can set up the detector. This is easily understood through the double-slit experiment. If we place a detector between the slits and the beam generator, the wave function collapses; if we switch it off, it does not. That is, the observer's universal film is a superposition of the quantum states of the objects involved, the signal generator, the double-slit screen, and the detector, through the process of signal generation to the screen via the slits. The wave function collapses, and the system behaves like a particle, if the detector is on, because the detector fixes the reality into one eigenstate just as a conscious observer would. If the detector is switched off, the superposition is not resolved in that way; there is no such observation, so the screen shows an interference pattern. If a conscious observer stands in place of the detector, the same results follow: the mechanism is the same for a detector as for a conscious observer (and the detector itself would not exist without conscious observers). The detector itself is a superposition of many quantum states, or universal films; divided into single states, it would not exist at all.
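The detector-on versus detector-off contrast described above corresponds to a standard textbook calculation, adding intensities (which-path information available) versus adding amplitudes (no which-path information). The sketch below is ordinary two-slit optics, not taken from the paper, and all numeric parameters are assumed for illustration.

```python
import math
import cmath

# Toy two-slit model (standard textbook optics, not from the paper):
# detector ON  -> which-path info -> intensities add:  I = |a1|^2 + |a2|^2
# detector OFF -> amplitudes add and interfere:        I = |a1 + a2|^2

wavelength = 500e-9   # 500 nm light (assumed)
slit_sep   = 50e-6    # 50 micrometre slit separation (assumed)
screen_d   = 1.0      # 1 m slit-to-screen distance (assumed)

xs = [i * 1e-5 - 5e-3 for i in range(1001)]  # screen positions, -5 mm .. +5 mm
on, off = [], []
for x in xs:
    # relative phase between the two slit amplitudes at screen position x
    phase = 2 * math.pi * slit_sep * x / (wavelength * screen_d)
    a1 = cmath.exp(1j * phase / 2)
    a2 = cmath.exp(-1j * phase / 2)
    on.append(abs(a1) ** 2 + abs(a2) ** 2)   # flat: no fringes
    off.append(abs(a1 + a2) ** 2)            # cos^2 fringes

print(round(min(on), 9), round(max(on), 9))  # 2.0 2.0 — flat, no fringes
print(round(max(off), 9))                    # 4.0 — bright central fringe
```

With the detector on, the pattern is flat; with it off, the same amplitudes produce fringes ranging from 0 to twice the flat level.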
Again, a conscious observer is required to create the superposition of the quantum states of all the objects and of the process.

5.3. This Consciousness Model and the Delayed-Choice Quantum Eraser Experiment

One outcome of the delayed-choice quantum eraser, taken physically, is that "the future changes the past". The big question is: how is that possible? It is possible only when the observer is conscious and is completely interlinked with the whole universe rather than with a particular object, as explained in [11], where the universal film is made up of a loop connecting all the conscious observers. Non-living things and objects are created and experienced by the conscious observers connected in this loop, and loops can connect to one another depending on their time frames. The model elaborated in this paper holds that all the detectors are objects only, parts of the conscious process generated by a conscious observer, so in this experiment too all the detectors can be viewed as conscious observers. In the experiment, a photon splits into a pair of entangled photons, one delayed relative to the other. In this model of consciousness, in the observation of the delayed photon the quantum states are superposed at one point in quantum coordinates, and those of its entangled partner at another point. Quantum coordinates denote a point by time and information, so for the two photons the time and information differ. But to observe each photon an observer is required, and the observers' times differ; if the times differ, they cannot be in the same film. Yet the two observers must be in the same film if the entangled pair comes from one particular photon; the process then continues to the next film. So, in this model, "past" and "future" have no meaning between the two: only the "present" exists, with a combined effect, and whatever change comes to one photon changes the other.
Stepping outside the box of this experiment, the whole universe exists in the same way. The film of the total universe is connected to the loop of conscious observers [11], and the loop changes its connections according to its inertial frame of reference whenever required, going to past films or to future films. That is nothing but time travel, and it is the explanation for the question of whether the future can change the past.

5.4. Explanation of the Grandfather Paradox by This Model

The grandfather paradox is an important aspect of the physics of time travel and of the philosophical concept of free will. Some authors [28] have elaborated the closed loops of time travel within the scope of existing theories. According to this model of consciousness:

1) A conscious observer is necessary for the existence of the reality of the universe. The reality of the materialistic universe is due to the interaction of the quantum states of two conscious observers, and many films may exist between these interactions.

2) Two conscious observers create one more conscious observer. There exist several connected films in a sequential order which obey the cause-and-effect law of physics. So if either of the two conscious observers is erased, for any reason, before the creation of the new conscious observer, the interlinked sequential order of the films is erased and the new conscious observer is erased with it. Many other quantum states of the erased conscious observer still exist, but they are all imaginary only, since reality requires a device such as a brain, or a centralized network connecting everything. There are many films between the interactions of two observers that create a conscious observer with such a device; without such a device, the interactions are not realistic. So, to bring the erased observer back into reality, there must be an interaction that creates this device; otherwise the connected reality is erased along with the new observer.

3) Applying this to the grandfather paradox:
If a grandson goes to the past and kills his grandfather, he too will vanish, since the connected films after the death of the grandfather will be erased from reality, and the grandson is part of that reality. Both devices, grandfather and grandson, are compatible with each other; "compatible" means both are in the same reality. So the grandson vanishes. The above model says that only one reality exists: if one more reality with one more device existed, the two could not be compatible, and all others are imaginary only. Thus time travel will work only within the compatible devices of the one reality. If the devices are not vanished because of each other, time travel is possible by regulating the device. Only the present will exist for that device, and it is the only reality; for others it may be viewed as past and present. The grandson can change all materialistic things in the future and the present, but he cannot change the interaction of the conscious observers who are the reason for his birth.

4) With reference to the new hypothesis [11], consciousness is a circuit or a loop connecting conscious observers only (see Fig. 7, p. 39 of [11]). Here, in this paper, it is substantiated as reality. So one observer may connect to another observer in many ways, and new observers in this circuit can be created by the interaction of two observers. Thus a new circuit can also be created, like branches from the original and sub-branches from the main branches. If any main branch is erased, all its sub-branches will also be erased. In the grandfather paradox, the grandfather is the main branch, the father and mother are sub-branches, and the grandson is a sub-branch of the father and mother. Here, if he kills his grandfather, then whatever the other connections of the grandson may be, all the branches connected to his grandfather will be erased, and the grandson and all his other connections will be erased with them.
5) Some theories hold that the grandson who went to the past is a copy of the original, and that an alternate universe starts after the killing of the grandfather. But here there is only one reality, and it connects only conscious observers; without conscious observers the universe will not exist.

6) Thus films with zero time or infinite time are not possible; many films will exist in between. Finally, two conscious states are a must to create reality. Thus those two conscious observers are eternal, whatever their time frame may be. Other conscious observers are created from them, and materialistic creation is due to the interaction of these conscious observers only.

7) The device and its regulators are important for consciousness.

8) Two conscious observers created this space-time. At that stage they might have had a device different from ours; later they created several conscious observers with different devices.

9) Now we have a device called the brain, and it has a regulator. If we go back to a past in which we had not yet been born, the device will not be there, so everything connected to this device will be lost. The device connection is reality; that is, only through the device can we see reality.

10) So we cannot go to a future where we will not exist (where our consciousness will not exist), and we cannot go to a past before we were created (born).

11) A device will have its reality, and that reality will have a past and a future. Another device is required to see beyond us. So for one device the universe may not exist, but the universe will exist for other devices.

12) The total universe always exists as it is, with its quantum information; its superposition is time.

13) We see the universe through a device in a streamlined way; otherwise it is only haphazardly distributed information and time. Since there will be no space, it is a simple point from which time emerges.

14) So if we kill our grandfather in the past, we will not exist.
We will vanish, and the universe also will not exist; it will be nothing, not even space.

6. Conclusions

1) The word "observation" of physics has been redefined, and it is emphasized that observation is attributed to Lorentz coordinates only. Until now physics has taken "observation" and "measurement" in these conventional coordinates for granted as reality, but actual reality is different from this observation. Quantum mechanics is not reality; it remains imaginary unless and until it is converted into relativistic coordinates. Simply applying relativity to quantum mechanics, as in the case of "relativistic quantum mechanics," is not sufficient. The physical meaning of quantum states is connected to the problem of time and the formation of space-time, so the involvement of consciousness in observation or measurement is unavoidable. Thus the imaginary quantum states first have to be interpreted in terms of the conventional coordinate system, and prior to transforming these quantum states into the conventional coordinate system the "double relativity effect" is to be applied. Due to the application of the double relativity effect the observation changes, and the reality will be different from the observation. The ultimate result is that the Lorentz relativistic factor γ has to be modified to γ_r = 1/√(1 − (2v_o/c)²). So the observables will be modified with respect to the results of the special theory of relativity, and the results will be different within the range of velocities c/2 to c. This can be verified in any laboratory.

2) Application of this concept may provide a new path to researchers working on velocities greater than that of light and on the effect of drastic changes of momentum on Einstein's field equations toward a singularity. The synchronization of quantum mechanics and relativity applied in this paper indicates that the equations of General Relativity may be applicable down to diameters comparable to the Planck length, and even to zero space.
3) A new model of consciousness has been proposed, emphasizing the fact that the conscious observer plays an important role in observation. It also proposes that there must exist at least two conscious observers to find reality. It explains the concept of the origin of time and the formation of space-time and its curvature at the basic level (in other terms, at the quantum level), and thus it explains the synchronization between quantum mechanics and relativity.

4) In total, it concludes that there is a major change between observation and reality, which can be observed in any laboratory. Thus, if it is proved experimentally, it will be the best proof of this model of consciousness.

Cite this paper: Kodukula, S. (2021) Mechanism of Quantum Consciousness that Synchronizes Quantum Mechanics with Relativity—Perspective of a New Model of Consciousness. Journal of Modern Physics, 12, 1633-1655. doi: 10.4236/jmp.2021.1212097.

References

[1] Casado, C.M.M. (2008) Latin-American Journal of Physics Education, 2, 152.
[2] Deutsch, D. (1985) International Journal of Theoretical Physics, 24, 1-41.
[3] Fields, C. (2012) Information, 3, 92-123.
[4] Yu, S. and Nikolić, D. (2010) Quantum Mechanics Needs No Consciousness (and the Other Way Around).
[5] Schlosshauer, M., Kofler, J. and Zeilinger, A. (2013) A Snapshot of Foundational Attitudes toward Quantum Mechanics.
[6] Sinha, K.P., Sivaram, C. and Sudarshan, E.C.G. (1976) Foundations of Physics, 6, 65-70.
[7] Kodukula, S.P. (2019) Journal of Modern Physics, 10, 466-476.
[8] Carlip, S. (2001) Quantum Gravity: A Progress Report.
[9] Shomer, A. (2007) A Pedagogical Explanation for the Non-Renormalizability of Gravity.
[10] Hameroff, S. and Penrose, R. (2014) Physics of Life Reviews, 11, 39-78.
[11] Kodukula, S.P. (2019) International Journal of Physics, 7, 31-43.
[12] Mihelic, F.M. (2019) Experimental Evidence Supportive of the Quantum DNA Model. Proceedings of SPIE, Quantum Information Science, Sensing, and Computation XI, Vol. 10984, 1098404.
[13] Kodukula, S.P. (2009) Double Relativity Effect & Film Theory of the Universe. Lulu.com, Raleigh.
[14] Kodukula, S.P. (2009) Heart of the God with Grand Proof Equation—A Classical Approach to Quantum Theory. Lulu.com, Raleigh.
[15] Kodukula, S.P. (2014) American Journal of Modern Physics, 3, 232-239.
[16] Kodukula, S.P. (2021) Journal of High Energy Physics, Gravitation and Cosmology, 7, 1333-1352.
[17] Kodukula, S.P. (2017) International Journal of Physics, 5, 99-109.
[18] Weinberg, S. (2012) Collapse of the State Vector.
[19] Schrödinger, E. (1926) The Physical Review, 28, 1049-1070.
[20] Padmanabhan, T. (2008) International Journal of Modern Physics D, 17, 367-398.
[21] Routh, A.K. (2019) Open Access Library Journal, 6, e5816.
[22] Gao, S. (2020) Foundations of Physics, 50, 1541-1553.
[23] Wechsler, S.D. (2021) Journal of Quantum Information Science, 11, 99-111.
[24] Niehaus, A. (2019) Journal of Modern Physics, 10, 423-431.
[25] De Barros, J.A. and Oas, G. (2017) Foundations of Physics, 47, 1294-1308.
[26] Tononi, G. and Koch, C. (2015) Philosophical Transactions of the Royal Society B, 370, Article ID: 20140167.
[27] Reason, C.M. (2017) Comment on the Paper "Quantum Mechanics Needs No Consciousness" by Yu and Nikolić (2011).
[28] Lobo, F. and Crawford, P. (2003) Time, Closed Timelike Curves and Causality.
SciELO - Scientific Electronic Library Online

Revista Mexicana de Física E, print version ISSN 1870-3542, Vol. 62, No. 2

Solving Schrödinger equation by meshless methods

H. Montegranario a, M.A. Londoño a, J.D. Giraldo-Gómez a, R.L. Restrepo b, M.E. Mora-Ramos c, C.A. Duque d,*

a Instituto de Matemáticas, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia. b Universidad EIA, CP 055428, Envigado, Colombia. c Centro de Investigación en Ciencias-IICBA, Universidad Autónoma del Estado de Morelos, Av. Universidad 1001, 62209 Cuernavaca, Morelos, México. d Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia.

In this paper we apply a numerical meshless scheme for solving the one- and two-dimensional time-independent Schrödinger equation by means of a collocation method with radial basis function interpolants. In particular, we approximate the solutions using multiquadrics. The method is tested on some of the well-known configurations of the Schrödinger equation and compared with analytical solutions, showing great accuracy and stability. We also provide some insight into how to use meshless algorithms for obtaining the eigenenergies and wavefunctions of one- and two-dimensional Schrödinger problems.

Keywords: Meshless methods; low-dimensional systems; quantum wells; quantum dots; Schrödinger equation

PACS: 78.67.Hc; 78.67.-n

1.
Introduction

During the last two decades there has been an increasing interest in the solution of different n-dimensional problems by meshless methods. These methods are so called because they can be applied over scattered data and are independent of dimension, and they have been found widely successful for the interpolation of scattered data. More recently, radial basis function (RBF) methods have emerged as an important tool for the numerical solution of partial differential equations (PDEs) [1]. RBF methods can be as accurate as spectral methods without being tied to a structured computational grid. This results in ease of application to complex geometries in any number of space dimensions, and this way of solving PDEs has drawn the attention of many researchers in science and engineering.

One of the first meshless methods, the so-called Kansa's method, developed by Kansa in 1990 [2,3], is obtained by directly collocating the RBFs, particularly the multiquadric (MQ), in the search for the numerical approximation of the solution. Kansa's method was later extended to solve various ordinary and partial differential equations, including the 1D nonlinear Burgers equation with shock wave [4], the shallow-water equations for tide and current simulation [5], heat transfer problems [6], and free boundary problems [7,8].

In most cases, however, the accuracy of the RBF solution depends heavily on the choice of the so-called shape parameter c in the multiquadric function ϕ(r) = (c² + r²)^{β/2}, or of β in the Gaussian basis function ϕ(r) = exp(−βr²). The choice of this optimal value is still under intensive investigation, and many authors have studied the role of this shape parameter. For instance, Carlson and Foley [9] have shown that it is problem-dependent. Tarwater [10] found that, by increasing c, the root-mean-square (RMS) error drops to a minimum and then increases sharply afterwards.
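The trade-off just described can be reproduced in a few lines. The sketch below is our own illustration, not taken from the paper; the test function tanh(3x), the node count, and the sampled values of c are arbitrary choices. It interpolates with the multiquadric for several shape parameters and reports the RMS error on a fine grid together with cond(A):

```python
import numpy as np

nodes = np.linspace(-1.0, 1.0, 21)          # 21 equispaced interpolation nodes
xt = np.linspace(-1.0, 1.0, 401)            # fine evaluation grid
f = np.tanh(3 * nodes)                      # arbitrary test function

def rms_and_cond(c):
    # Multiquadric interpolation matrix and coefficients.
    A = np.sqrt(c ** 2 + (nodes[:, None] - nodes[None, :]) ** 2)
    alpha = np.linalg.solve(A, f)
    # Evaluate the interpolant off the nodes and measure the RMS error.
    B = np.sqrt(c ** 2 + (xt[:, None] - nodes[None, :]) ** 2)
    err = B @ alpha - np.tanh(3 * xt)
    return np.sqrt(np.mean(err ** 2)), np.linalg.cond(A)

for c in (0.05, 0.2, 0.8):
    rms, cond = rms_and_cond(c)
    print(f"c = {c:4.2f}   RMS = {rms:.2e}   cond(A) = {cond:.2e}")
```

Increasing c flattens the basis functions: the interpolant becomes smoother and initially more accurate, while cond(A) grows steeply, which is exactly the behavior reported by Tarwater.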
In general, as c increases, the system of equations to be solved becomes ill-conditioned.

Radial basis function collocation is currently one of the main approaches to meshless collocation. Radial basis functions differ from the polynomial basis functions typically applied in classical collocation methods. There are, in principle, two different approaches to collocation using radial basis functions. The symmetric approach relies on Hermite-Birkhoff interpolation in reproducing kernel Hilbert spaces [11], and its application to PDEs was first analyzed in [12,13]. The asymmetric approach by Kansa [14], which will be used in this paper, has the advantage that fewer derivatives have to be formed, but has the drawback of an asymmetric collocation matrix, which can even be singular in specially constructed situations [15,16]. Despite this drawback, asymmetric collocation has been used frequently and successfully in several applications, and Schaback has shown that, with some assumptions and modifications, it is possible to prevent numerical failure and to prove error bounds and convergence rates [16].

On the other hand, with the development of high-speed computing machines, seeking the numerical solution of the Schrödinger equation has become a subject of great activity. Analytic solutions of the Schrödinger equation have been developed and studied extensively in past decades. In more than one dimension, the two main approximate techniques, besides the direct solution of relatively few cases via special functions, are perturbation theory and the variational methods [17]; many of the closed-form solutions have been established using these methods. However, in most cases the use of numerical methods is indispensable. For example, analytical solutions are not possible, or become difficult to find, in the case of an exciton in a spherical quantum dot system, due to the fact that the corresponding electron and hole effective-mass differential equations cannot be uncoupled.
Thus, it is important to find numerical alternatives for solving these equations that are not as demanding in terms of computer capacity. In recent decades, a rather large number of numerical methods have been developed for the solution of the Schrödinger-like problem. Among them we can mention the shooting methods [18,19], Anderson Monte Carlo [20], genetic algorithms [21], and variational methods [21,22]. Historically, the main approach to the numerical solution of the Schrödinger problem in more than one dimension has made use of finite difference methods (to know about only a few of them, see [23,24] and references therein). In recent times, some authors have put forward the use of the meshless approach as an alternative for the solution of the Schrödinger problem in more than one dimension [25]. Within the same spirit, the present work is aimed at providing some insight into the use of meshless algorithms for obtaining the eigenenergies and wavefunctions of one- and two-dimensional Schrödinger equations that can be related to the effective-mass description of electron states in low-dimensional semiconductor heterostructures [26].

We organize the article in the following way. In Sec. 2 we introduce meshless methods with the help of the concept of conditionally positive definite functions. In Sec. 3 we give details of the n-dimensional interpolation problem for scattered data. These tools are applied in Sec. 4 to the numerical solution of the Schrödinger eigenvalue problem by collocation of radial basis functions, where we explain our algorithm. Finally, in Sec. 5, we show the accuracy and applicability of our method by comparing with some well-known analytical solutions of the Schrödinger equation.

2.
Meshless methods and radial basis functions

The evolution of what is currently known as meshless methods started around the 1950s, closely related to spline theory, in the context of interpolation and approximation of functions [27,28], inverse problems [29,30], and computer vision [31,32]. Later they were applied to other problems, especially the numerical solution of PDEs. Meshless methods are based on radial basis function interpolants S : ℝⁿ → ℝ of the form

S(x) = Σ_{i=1}^{M} α_i ϕ(‖x − a_i‖) + p(x),   (1)

where x and a_i belong to ℝⁿ, the a_i being the interpolation nodes. Besides, ‖·‖ is the Euclidean norm and ϕ : ℝ → ℝ is a radial basis function. Some well-known choices for ϕ(r) are:

1. the multiquadric, ϕ(r) = √(c² + r²);
2. the thin plate spline (TPS), ϕ(r) = r² log(r); and
3. the Gaussian, ϕ(r) = exp(−βr²).

On the other hand, p(x) is a polynomial term of small degree in n variables. For example, for the thin plate spline in ℝ², x = (x, y) and p(x) = β₁ + β₂x + β₃y. In some cases S(x) has no polynomial term, as for the multiquadric and the Gaussian radial bases. In accordance, we have the following

Definition 1. A function Φ : ℝⁿ → ℝ is called radial if there exists a univariate function ϕ : [0, ∞) → ℝ such that

Φ(x) = ϕ(r), with r = ‖x‖,   (2)

where ‖·‖ is the Euclidean norm; ϕ is called a radial basis function (RBF).

In spite of the great number of functions available to be considered as radial bases, a relatively small number of RBFs has arisen in the treatment of particular problems, and these constitute the radial functions commonly used in practice. They appear collected in Table I.

Table I. The most frequently applied radial basis functions.

Given any function Φ : ℝⁿ → ℂ and a set of knots or centers 𝒜 = {a₁, …, a_M} ⊂ ℝⁿ, we can associate an interpolation matrix A whose entries are A_{jk} = Φ(a_j − a_k), together with the quadratic form

Σ_{j=1}^{M} Σ_{k=1}^{M} α_j ᾱ_k Φ(a_j − a_k).
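A quick numerical illustration of these notions (our own sketch; the node set and the data function are arbitrary choices): for the Gaussian, which is strictly positive definite, the matrix A_{jk} = Φ(a_j − a_k) has only positive eigenvalues, so the interpolation system built from Eq. (1) without polynomial term is uniquely solvable:

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.random((30, 2))               # 30 scattered nodes in [0, 1]^2

beta = 8.0                                  # Gaussian shape parameter (arbitrary)
def kernel(P, Q):
    # Pairwise Gaussian RBF values, phi(r) = exp(-beta r^2).
    r2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * r2)

A = kernel(centers, centers)                # A_jk = Phi(a_j - a_k)

# Strict positive definiteness: every eigenvalue of the symmetric A is > 0,
# which is the numerical counterpart of alpha^T A alpha > 0 for alpha != 0.
min_eig = np.linalg.eigvalsh(A).min()

# Interpolation without polynomial term: solve A alpha = z; the interpolant
# then reproduces the data at the nodes up to round-off.
z = np.sin(2 * np.pi * centers[:, 0]) * centers[:, 1]   # arbitrary data
alpha = np.linalg.solve(A, z)
residual = np.abs(A @ alpha - z).max()
print(min_eig > 0, residual)
```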
The properties of this quadratic form and its application to the approximation problem can be understood with the following definition.

Definition 2. A continuous function Φ : ℝⁿ → ℂ is said to be conditionally positive definite (CPD) of order m on ℝⁿ if

α†Aα = Σ_{j=1}^{M} Σ_{k=1}^{M} α_j ᾱ_k Φ(a_j − a_k) ≥ 0   (3)

for any M distinct points 𝒜 = {a₁, a₂, …, a_M} ⊆ Ω ⊂ ℝⁿ and every α = [α₁, α₂, …, α_M]ᵗ ∈ ℂᴹ satisfying

Σ_{j=1}^{M} α_j p(a_j) = 0   (4)

for any p in Π_{m−1}(ℝⁿ), the space of polynomials in n variables of degree less than or equal to m − 1. If α†Aα > 0 whenever the points a_i are distinct and α ≠ 0, we say that the function Φ is strictly conditionally positive definite of order m. In particular, the case m = 0 yields the class of (strictly) positive definite functions. As a consequence of this definition, a function which is CPD of order m on ℝⁿ is also CPD of any higher order. Micchelli [33] showed that interpolation with strictly CPD functions of order one is possible even without adding a polynomial term.

3. The interpolation problem

Once the role of radial basis functions as interpolants is understood, their application to solving differential equations is straightforward. In order to illustrate it properly, we first give a description of the interpolation issue. Accordingly, the approach to meshless reconstruction is based on the following multivariate interpolation problem: given a discrete set of scattered points 𝒜 = {a₁, a₂, …, a_M} ⊂ ℝⁿ and a set of possibly noisy measurements {z₁, …, z_M} ⊂ ℝ, find a continuous or sufficiently differentiable function f : ℝⁿ → ℝ such that f interpolates (f(a_i) = z_i) or approximates (f(a_i) ≈ z_i) the data D = {(a_i, z_i)}_{i=1}^{M}.

Example 1. Let us apply the radial basis functions of Eq. (1) to interpolate a surface given in the form z = f(x, y) from the data z_i = f(x_i, y_i), i = 1, …, M. The coordinates of the nodes are given by a_i = (x_i, y_i), i = 1, …, M, and the surface is given, just to pick one, by Franke's function (Fig.
1(a)),

f(x, y) = (3/4) exp[−(9x − 2)²/4 − (9y − 2)²/4] + (3/4) exp[−(9x + 1)²/49 − (9y + 1)/10] + (1/2) exp[−(9x − 7)²/4 − (9y − 3)²/4] − (1/5) exp[−(9x − 4)² − (9y − 7)²].   (5)

Figure 1. Reconstructions of Franke's function: direct plot of Franke's function (a), multiquadric reconstruction (b), Gaussian reconstruction (c), and distribution of the scattered points (d).

This function has been sampled at M = 100 nodes in the region Ω = [0, 1] × [0, 1] (see Fig. 1(d)). The usual plot of f(x, y) appears in Fig. 1(a), whereas applying the multiquadric and the Gaussian forms of S(x), without polynomial term, to the scattered sample points yields the surfaces displayed in Figs. 1(b) and 1(c), respectively. The interpolation conditions S(a_i) = z_i, i = 1, 2, …, M, produce M equations, and the interpolant S is completely determined by the α_i's after solving the system Aα = z, where A = [A_{ij}] is the interpolation matrix, with

A_{ij} = ϕ(‖a_i − a_j‖), i, j = 1, …, M,  α = [α₁, α₂, …, α_M]ᵗ,  z = [z₁, z₂, …, z_M]ᵗ.   (6)

It is possible to observe that the RBF reconstructions lead to essentially the same graphics as the direct plot of Franke's function (in all cases we used MATLAB to produce the figures). Although the use of RBF interpolation in this example might be seen as unnecessary, we consider it suitable for highlighting the power of this kind of interpolating scheme.

4. Meshless solutions of the Schrödinger equation

As is known, in quantum mechanics the state of motion of a particle is specified by giving the wave function, which is, in general, the solution of the time-dependent Schrödinger equation. The Schrödinger equation (SE) is a postulate that constitutes the fundamental equation of quantum mechanics. It affirms that the time evolution of a particle of mass m, described by a wave function ψ(x, t), is linked to the potential in which it is moving by the relation

iℏ ∂ψ(x, t)/∂t = [−(ℏ²/2m)∇² + V(x, t)] ψ(x, t).
(7)

In this partial differential equation, V(x, t) is the local potential operator, which depends on the spatial position x = (x, y, z) and the time t. In addition, −(ℏ²/2m)∇² is the kinetic energy operator, where ∇²ψ = ψ_xx + ψ_yy + ψ_zz is the Laplacian. The sum of these two latter terms defines what is known as the Hamiltonian operator (or total energy operator) Ĥ of the particle's motion:

Ĥ = −(ℏ²/2m)∇² + V(x, t).   (8)

The physical state of a particle at time t is completely described by the wave function ψ(x, t). The probability of finding the particle within a region Ω is

∫_Ω |ψ|² dx dy dz.   (9)

Since the particle must always be somewhere in space, the probability of finding it within the whole space is one; that is, there is a normalization condition that reads

∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} |ψ|² dx dy dz = 1.   (10)

Many interesting problems in quantum mechanics do not require considering the SE in its whole generality. Usually, the most interesting states of a quantum system are those in which the system has a definite total energy, and it turns out that for these states the wave function is a standing wave. In other words, the SE predicts that wave functions can form standing waves, called stationary states. When the time-dependent SE is applied to these standing waves, it reduces to a simpler equation called the time-independent SE,

∇²ψ + (2m/ℏ²)[E − V(x)]ψ = 0.   (11)

The latter expression can also be written as an eigenvalue problem,

Ĥψ(x) = Eψ(x).   (12)

It has been generally accepted since the early stages of the foundation of quantum mechanics that the determination of the possible eigenvalues E of the Hamiltonian characteristic Eq. (12) is one of the essential problems of the study of the quantum world. To this day it has not been solved exactly for many physical systems.
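As a concrete instance of the eigenvalue problem (12), in dimensionless units where Ĥ = −d²/dx² + x² (the oscillator treated later in Sec. 5), the analytic ground state ψ₀(x) = π^{−1/4} e^{−x²/2} satisfies Ĥψ₀ = ψ₀ together with the normalization (10). Both facts can be checked numerically; the sketch below is ours and uses a simple finite-difference Laplacian purely for verification:

```python
import numpy as np

# Dimensionless oscillator H = -d^2/dx^2 + x^2; analytic ground state
# psi_0(x) = pi^(-1/4) exp(-x^2 / 2), with eigenvalue E_0 = 1.
x = np.linspace(-8.0, 8.0, 4001)
h = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x * x / 2.0)

# Normalization condition, Eq. (10): the integral of |psi|^2 over the line is 1
# (Riemann sum; the endpoint contributions are exponentially negligible).
norm = (psi ** 2 * h).sum()

# Apply H with a central second difference; the pointwise ratio
# (H psi) / psi should then be the constant eigenvalue E_0 = 1.
Hpsi = -(psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h ** 2 + x[1:-1] ** 2 * psi[1:-1]
E_est = np.median(Hpsi / psi[1:-1])
print(norm, E_est)
```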
Even for relatively simple cases, some of which relate to the simulation of the energy band structure of artificially made condensed low-dimensional systems, the exact solution of the stationary SE is not possible with the means of nowadays mathematics. This constitutes a motivation for the use of efficient numerical approaches to obtain it, especially those capable of dealing in a handy way with 2D and 3D domains of complicated geometry. In this sense, it is possible to mention the application of a meshless approach by Dehghan and Shokri [34].

We now consider a general class of boundary or initial value problems for partial differential equations:

𝓛u = f in Ω,  𝓑u = 0 on ∂Ω,   (13)

with 𝓛 a linear partial differential operator and 𝓑 a linear boundary operator that prescribes values on the boundary ∂Ω. If we look for a solution of 𝓛u = f such that u = Σ_{j=1}^{N} α_j υ_j in terms of basis functions {υ₁, υ₂, …, υ_N}, then

𝓛u = Σ_{j=1}^{N} α_j 𝓛υ_j = f.   (14)

The collocation method finds an approximate solution of the differential equation by evaluating at collocation points {a_i}_{i=1}^{N}, such that

𝓛u(a_i) = Σ_{j=1}^{N} α_j 𝓛υ_j(a_i) = f(a_i), for i = 1, …, N,   (15)

is a system of linear equations. To solve Eq. (13), the asymmetric Kansa approach assumes an approximate solution of the form

u(x) = Σ_{i=1}^{N} α_i ϕ(‖x − a_i‖)   (16)

for N_I nodes {a_i}_{i=1}^{N_I} ⊂ Ω in the interior of the domain and N_B nodes {a_i}_{i=N_I+1}^{N} ⊂ ∂Ω on the boundary, where N = N_I + N_B. The approximated solution is completely determined by finding the scalars α = [α₁, α₂, …, α_N]ᵗ.
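Before specializing to the eigenvalue problem, the asymmetric Kansa scheme just described can be sketched end-to-end for a toy Dirichlet problem, −u'' = π² sin(πx) on (0, 1) with u(0) = u(1) = 0, whose exact solution is sin(πx). This is our example, not the paper's; the choices N = 25 and c = 0.15 are arbitrary:

```python
import numpy as np

def mq(r, c):
    # Multiquadric basis, phi(r) = sqrt(c^2 + r^2).
    return np.sqrt(c * c + r * r)

def mq_xx(r, c):
    # Second x-derivative of sqrt(c^2 + (x - a)^2) = c^2 / (c^2 + r^2)^(3/2).
    return c * c / (c * c + r * r) ** 1.5

N, c = 25, 0.15
nodes = np.linspace(0.0, 1.0, N)           # 23 interior + 2 boundary nodes
r = np.abs(nodes[:, None] - nodes[None, :])

# Collocation matrix: interior rows apply L = -d^2/dx^2 to the basis,
# boundary rows (first and last) apply B = identity on the boundary.
M = -mq_xx(r, c)
M[0, :] = mq(r[0, :], c)
M[-1, :] = mq(r[-1, :], c)
f = np.pi ** 2 * np.sin(np.pi * nodes)
f[0] = f[-1] = 0.0                         # boundary data u(0) = u(1) = 0
alpha = np.linalg.solve(M, f)              # expansion coefficients

# Evaluate u(x) = sum_i alpha_i phi(|x - a_i|) off the nodes and compare.
xt = np.linspace(0.0, 1.0, 201)
u = mq(np.abs(xt[:, None] - nodes[None, :]), c) @ alpha
err = np.abs(u - np.sin(np.pi * xt)).max()
print(err)
```

The collocation matrix here is exactly the block structure of the equations that follow: operator rows for the interior nodes stacked over boundary rows.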
In order to obtain α, we enforce 𝓛 at every point in Ω and 𝓑 at every point on ∂Ω:

(𝓛u)(a_j) = Σ_{i=1}^{N} α_i 𝓛ϕ(‖a_j − a_i‖) = f(a_j), for j = 1, 2, …, N_I,   (17)

(𝓑u)(a_j) = Σ_{i=1}^{N} α_i 𝓑ϕ(‖a_j − a_i‖) = 0, for j = N_I + 1, N_I + 2, …, N.   (18)

This leads to the block matrix

A = [𝓛[ϕ]; 𝓑[ϕ]],   (19)

with

𝓛[ϕ]_{ji} = 𝓛ϕ(‖a_j − a_i‖), for j = 1, 2, …, N_I and i = 1, …, N;  𝓑[ϕ]_{ji} = 𝓑ϕ(‖a_j − a_i‖), for j = N_I + 1, …, N and i = 1, …, N,   (20)

and the linear system we have to solve is

Aα = [f; 0].   (21)

In particular, we are going to apply this scheme to the time-independent SE in ℝⁿ (n = 1, 2), in the form of an eigenvalue problem with Dirichlet boundary conditions on Ω ⊂ ℝⁿ:

Ĥψ(x) = Eψ(x), x ∈ Ω;  ψ|_{∂Ω} = 0,   (22)

where Ĥ = −∇² + Ṽ(x). For the sake of numerical simplicity we have omitted the coefficient ℏ²/2m; once we have done this, our numerical results are given in effective units of length and energy. Problem (22) usually appears when one looks for the confined energy states in a quantum heterostructure. Thus, we assume an approximation ψ(x) to the solution of Eq. (22), proposed in the form of a radial basis function expansion without polynomial term,

ψ(x) = Σ_{j=1}^{N} α_j ϕ(‖x − a_j‖),   (23)

with N distinct collocation points 𝒜 = 𝒜_Ω ∪ 𝒜_{∂Ω}, and we ensure that (22) holds at these points. Evaluating (23) at the collocation points, we have a system of linear equations,

ψ(a_i) = Σ_{j=1}^{N} α_j ϕ(‖a_i − a_j‖), i = 1, …, N.   (24)

Now we introduce the notation ψ_j = ψ(a_j), ψ = [ψ₁, ψ₂, …, ψ_M, ψ_{M+1}, …, ψ_N]ᵗ, ϕ_{ij} = ϕ(‖a_i − a_j‖), where M is the number of interior points, and define the matrix

A = [ϕ_{ij}], i, j = 1, …, N,   (25)

such that, according to Eq. (24), it is possible to write

ψ = Aα.   (26)

Here A is strictly positive definite, so it is nonsingular and we can put

α = A⁻¹ψ.   (27)

On the other side, A can be decomposed into two parts,

(A_Ω)_{ij} = ϕ_{ij} if 1 ≤ i ≤ M, and 0 if M + 1 ≤ i ≤ N (1 ≤ j ≤ N),   (28)

(A_{∂Ω})_{ij} = 0 if 1 ≤ i ≤ M, and ϕ_{ij} if M + 1 ≤ i ≤ N (1 ≤ j ≤ N).   (29)

Clearly, A = A_Ω + A_{∂Ω}.
Hence, with (27) and the former notation, one obtains

H = [−∇²A_Ω + A_{∂Ω} + diag([Ṽ]) A_Ω] A⁻¹,   (30)

where [Ṽ] is the vector whose components are the results of evaluating the potential function Ṽ(x) at the collocation points {a_i}_{i=1}^{M},

[Ṽ] = [Ṽ(a₁), …, Ṽ(a_M), 0_{M+1}, …, 0_N],

and

(∇²A_Ω)_{ij} = ∇²ϕ_{ij} if 1 ≤ i ≤ M, and 0 if M + 1 ≤ i ≤ N (1 ≤ j ≤ N).   (31)

As a result of this procedure, we finally arrive at

Hψ = Eψ,   (32)

which represents the discretization of the eigenvalue problem associated with the time-independent SE (22). From a purely mathematical point of view, Eq. (22) can be solved for any value of the energy E and used to obtain a corresponding discretized wave function ψ_E(x). However, in the particular situation of allowed confined energy states, the wave functions are physically meaningful only if they satisfy the boundary condition ψ_E(x) → 0 as ‖x‖ → ∞. So we must check whether or not a solution obtained for some chosen value of E satisfies this asymptotic condition. In our algorithm this property is ensured by looking for solutions that are sufficiently close to zero at the boundary of the region Ω; that is, we require ‖ψ_{∂Ω}‖ ≈ 0. In other words, we get physically acceptable wave functions ψ_E(x) only for a set of "allowed" discrete values of the energy, E_n (n = 1, 2, …). In accordance, ψ_E(x) is an eigenfunction of the SE, and E the associated energy eigenvalue.

4.1. The algorithm

We now briefly summarize the algorithm proposed for solving the SE in confined condensed quantum systems. Typically, the Schrödinger problem in this kind of structure involves the use of the so-called effective-mass and parabolic approximations which, under the assumption of independent energy bands, lead to the same mathematical form given for the stationary SE in Eq. (8).

INPUT: a set of collocation points and the potential function Ṽ.
OUTPUT: eigenfunctions ψ(x) = Σ_{j=1}^{N} α_j ϕ(‖x − a_j‖) and energies E (eigenvalues).

The steps are the following:

1. Calculate the matrix A_{ij} = ϕ(‖a_i − a_j‖), i, j = 1, …, N.
2.
Calculate H = [−∇²A_Ω + A_{∂Ω} + diag([Ṽ]) A_Ω] A⁻¹.
3. Find the eigenvalues E and eigenvectors ψ = [ψ₁, ψ₂, …, ψ_N]ᵗ of Hψ = Eψ. Use an eigenvalue solver for non-symmetric matrices.
4. Among all eigenvectors ψ, choose those which satisfy the boundary condition ‖ψ_{∂Ω}‖ ≈ 0.
5. Calculate α = A⁻¹ψ, α = [α₁, α₂, …, α_N]ᵗ.
6. The eigenfunction or wave function is ψ(x) = Σ_{j=1}^{N} α_j ϕ(‖x − a_j‖).

5. Examples and numerical calculations

In order to show the advantages of the meshless approach introduced in this article, we have chosen to reproduce some well-known problems of quantum mechanics in one and two dimensions. This will allow comparison with existing results, some of them with analytical solutions. Thus, we leave the study of energy states in less common structures for a future work. In all the following examples the inverse multiquadric RBF (β = −1) will be used.

Let us first address some of the simplest one-dimensional problems in quantum mechanics. It is worth mentioning that, although they might seem rather academic, their use as conduction- and valence-band bending profile models is pretty much extended in the literature on low-dimensional heterostructures. Among those that have direct analytical solutions we find:

Example 2. The one-dimensional harmonic oscillator. The simple harmonic oscillator model plays an important role in many areas of physics. It is used in classical and quantum mechanics for a system that oscillates about a stable equilibrium point to which it is bound by a force obeying Hooke's law, F = −kx. Thus, any particle oscillating about a stable equilibrium point will oscillate harmonically for sufficiently small displacements. An important example of a quantum harmonic oscillator is the motion of the ionic cores inside a solid crystal: each atom has a stable equilibrium position relative to its neighbors and can oscillate harmonically about that position.
Another important example is the diatomic molecule, such as HCl, whose two atoms can vibrate harmonically, toward and away from one another. The quantum mechanical motion of the harmonic oscillator is described by the one-dimensional SE with potential energy function V(x) = (1/2)m*ω²x², where m* is the effective mass of the particle. The normalized complete set of eigenfunctions ψ_n(x) for this problem can be expressed in terms of the Hermite polynomials H_n(x) [35,36]. So, in this case, we can write Eq. (12) as

[−(ℏ²/2m*)∇² + (1/2)m*ω²x²] ψ(x) = Eψ(x),   (33)

where ω is the oscillator frequency. The analytic eigenfunctions of the Hamiltonian above are given by

ψ_n(x) = N_n H_n(αx) exp(−α²x²/2),   (34)

with the corresponding eigenvalues or energy levels

E_n = (n + 1/2)ℏω, n = 0, 1, 2, …,   (35)

where α = √(m*ω/ℏ) and the N_n are normalization constants. We have compared these exact analytical solutions with the numerical solutions obtained by the meshless method (Table II and Fig. 2); the results for the energy levels are in remarkably good agreement with the exact values.

Table II. Eigenstates of the one-dimensional harmonic oscillator, given in units of ℏω, calculated with N = 500 collocation points. The CPU time was 13.1 s.

Figure 2. Eigenfunctions for the harmonic oscillator potential. The confinement potential is shown by the parabolic line. Each wavefunction is localized at its corresponding energy. Energies and lengths are given in effective units. Number of collocation points used, N = 500; CPU time 13.1 s.

We define the effective Rydberg for the energy units, Ry* = m*e⁴/(2ℏ²ε²), and the effective Bohr radius for the length units, a₀* = ℏ²ε/(m*e²). Here ε is the dielectric constant of the semiconductor material in which we are considering the carriers or particles, and e is the absolute value of the electron charge. The results in Fig. 2 have been presented in effective units.

Example 3. Particle confined in a finite double square quantum well (DSQW).
We now apply the meshless method with multiquadric basis function φ(r) = √(c² + r²) to obtain the numerical solution of the Schrödinger eigenvalue problem, Ĥψ = Eψ, in the well-known case of a finite DSQW. This problem describes a non-relativistic quantum particle of mass m* moving in a quantum well potential defined by the function
V(x) = { V_H if x ≤ −Ld/2 − L1 or x ≥ Ld/2 + L2;  V_b if |x| ≤ Ld/2;  0 otherwise }, (36)
where L1 and L2 are the widths of the two square wells, which are separated by a potential barrier of length Ld. For the numerical evaluation we have used the effective units defined above. In consequence, the problem to solve is given by the following differential equation
−ψ''(x) + Ṽ(x)ψ(x) = Eψ(x). (37)
The calculated lowest eight confined energy levels as well as their corresponding wave functions are shown in Table III and Fig. 3, respectively. The whole numerical procedure took 31 seconds of CPU time on an average portable personal computer, using MATLAB. The number of collocation points in the process was N = 500, where two of them were taken as boundary ones and another two were placed at the interfaces.
Table III (MM: meshless method, DM: diagonalization method) Lowest energy eigenstates of a finite double square quantum well. The second column contains the values calculated by our algorithm. N = 500 interpolation points were included in the calculation, with two of them taken as boundary points. The remaining data are: L1 = L2 = 2.0 a0*, Ld = 0.6 a0*, V_H = 40 Ry* and V_b = 20 Ry*. CPU time with MM was 31 s. Energy values appearing in the third column were obtained by direct diagonalization of the Hamiltonian, using an expansion over a basis of eigenfunctions of a rectangular quantum well with infinite potential barriers and well width equal to 50 a0*. The Hamiltonian matrix was constructed using 300 terms in the expansion, which guarantees convergence toward the exact energies with an error less than 0.1 Ry* for the lowest 15 confined states [37].
The fourth column contains the relative error of the MM with respect to the DM.
Figure 3 Schematic plot of the wave functions corresponding to eigenstates of a finite double square quantum well. Each wave function is placed at the energy corresponding to its eigenvalue. Details are the same as in Table III.
We turn to the solution of the SE in two-dimensional domains. In these cases, for the sake of simplicity, we choose systems with infinite potential barriers. However, this is not a restriction of the method, since finite-potential structures can also be calculated by conveniently defining -as often turns out to be the case in reality- a boundary ∂Ω at which the condition ψ(x) ≈ 0 is fulfilled.
Example 4 Schrödinger equation for a particle in an isosceles right triangle.
The determination of energy levels and wave functions for a particle confined within different geometric structures in more than one dimension is relevant given the prospective use of these systems in modern technology. There are a few cases with exact analytical solutions of the Schrödinger problem, such as the one describing the motion of a particle inside a triangle. For a right isosceles triangle those solutions are given under the following conditions. In this two-dimensional problem there is a triangular infinite potential well that confines the particle inside a region determined by the length, L, of the two legs. Then it can be shown that the expressions for the eigenfunctions are [38]
ψ'_mn(x,y) = (2/L)[sin(mπx/L) sin(nπy/L) + sin(nπx/L) sin(mπy/L)], (38)
for m = n ± 1, n ± 3, …, and
ψ_mn(x,y) = (2/L)[sin(mπx/L) sin(nπy/L) − sin(nπx/L) sin(mπy/L)], (39)
for m = n ± 2, n ± 4, …. The corresponding energy levels, in effective units, are
E_mn = (π²/L²)(m² + n²),  m, n = 1, 2, 3, …, m ≠ n. (40)
The results obtained by our algorithm with N = 500 are in quite good agreement with the values obtained from the corresponding formula (see Table IV).
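Eq. (40) lets the analytical spectrum be tabulated directly. A short check of our own, in effective Rydberg units, for the leg length L = 4 a0* used below (counting each pair once by taking m > n):

```python
import math

# Analytical triangle levels E_mn = (pi^2/L^2)(m^2 + n^2), m != n (Eq. 40),
# enumerated for leg length L = 4 a0*; m > n counts each pair once.
L = 4.0
levels = sorted((math.pi**2 / L**2 * (m*m + n*n), m, n)
                for n in range(1, 8) for m in range(n + 1, 9))
for E, m, n in levels[:6]:
    # lowest level is (m,n) = (2,1) with E = 5*pi^2/16 ~ 3.084 Ry*
    print(f"(m,n)=({m},{n})  E = {E:.4f} Ry*")
```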
Table IV Eigenenergies, in effective Rydberg units, for a particle confined in an isosceles right triangle. Here the leg length is L = 4 a0* and N = 500 is the number of collocation points. The CPU time used was 117.6 s.
Figure 4 contains plots of the two lowest confined states obtained by solving the SE by the meshless method with multiquadric RBF in the case of a right isosceles triangle with leg length L = 4 a0*. At this point, some important remarks are worth pointing out: i) the differential equation in 2D is solved with N = 500 collocation points. ii) From this number of points the wave functions are then obtained by performing the interpolation scheme ([eq:interpolator]), arriving at a grid of 10201 points. iii) With this, we achieve sufficient smoothness in the resulting functions, so that subsequent integration processes involving them can be accurate enough.
Figure 4 Numerical solutions for the wave functions of the two lowest energy states of a particle confined within an isosceles right triangle, computed by interpolation methods [39] with data obtained using our algorithm. The scale is given in effective units.
The density plots of the wave functions of the calculated lowest eight energy eigenstates appear in the left-hand column of Fig. 5; the corresponding probability densities are shown in the central column.
Figure 5 Density plots for wave functions (left-hand panel) and their corresponding probability densities (central panel) associated to the first eight eigenenergies of a particle in an isosceles right triangle with infinite confinement potential. Density plot of the absolute error between calculated and analytical probability densities for the first eight eigenfunctions (right-hand panel). The color bar at the bottom belongs to the right-hand panel.
On the other hand, the right-hand column of Fig.
5 shows the absolute error of the calculated solutions of the 2D SE, corresponding to the energy levels reported in Table IV, with respect to the exact analytical expressions of Eqs. (38) and (39). We can see, in all cases, that the discrepancy is, at most, of the order of 10⁻², with the higher accuracy being obtained for the lowest energy states. This is an indication that the numerical approach is quite accurate. In general, the boundaries of the region present some difficulties to interpolants, since their reconstruction requires a precise spatial control over the smoothing properties of the radial basis function. Larger errors appear in regions close to the triangle legs, and that is a consequence of a smaller density of points on the boundary and also of the global smoothness of radial basis interpolants.
Example 5 Particle in an infinite quantum disc.
The two-dimensional infinite quantum disc is defined by the confinement potential function
V(r) = { 0, 0 ≤ r ≤ R;  ∞, r > R }, (41)
where r = (x² + y²)^{1/2}. Inside the disc region the well-known analytical functions ψ(r,θ) that fulfill the boundary condition ψ(r = R, θ) = 0 are given in terms of Bessel functions, which depend on two independent integer quantum numbers m and n. They are given by (see [40])
ψ(r,θ) = N_{m,n} J_m(k_{m,n} r/R) exp(imθ), (42)
where N_{m,n} are the normalization constants, m = 0, ±1, ±2, …, and k_{m,n} is the n-th zero of the function J_m(r), with n = 1, 2, 3, …. Or, more conveniently, for bound states
ψ−(r,θ) = J_m(k_{m,n} r/R) sin(mθ), (43)
ψ+(r,θ) = J_m(k_{m,n} r/R) cos(mθ), (44)
with eigenenergies, in effective units, given by
E_{m,n} = k²_{m,n}/R². (45)
The numerical results obtained by our algorithm are remarkably similar to the analytical solutions, and are reported in Table V for a quantum disc of radius R = 2 a0*. Again, the agreement of the eigenvalues is very good. Note that the largest errors for the obtained wave functions correspond to the boundary regions.
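Since Eq. (45) involves only zeros of Bessel functions, the analytical disc levels can be generated independently. A self-contained sketch of our own (power series for J_m plus bisection; the brackets around the first zeros are our own choices):

```python
import math

def bessel_j(m, x, terms=40):
    # Power series J_m(x) = sum_k (-1)^k (x/2)^(2k+m) / (k! (k+m)!)
    return sum((-1)**k * (x/2.0)**(2*k + m) / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

def jm_zero(m, lo, hi, tol=1e-12):
    # Bisection for a sign-change bracketed zero of J_m
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j(m, lo) * bessel_j(m, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

R = 2.0                      # disc radius in a0*, as in Table V
k01 = jm_zero(0, 2.0, 3.0)   # first zero of J_0, ~2.4048
k11 = jm_zero(1, 3.5, 4.5)   # first zero of J_1, ~3.8317
E01 = k01**2 / R**2          # Eq. (45): lowest level, ~1.4458 Ry*
E11 = k11**2 / R**2          # ~3.6705 Ry*
```

In practice scipy.special.jn_zeros provides these zeros directly; the stdlib version above merely keeps the check dependency-free.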
Such boundary errors could be mitigated by increasing the number of boundary points. Given the symmetry, the calculation in this case has only included N = 500 collocation points.
Table V Eigenenergies for a particle in a quantum disc with infinite confinement potential. These values were obtained using N = 500 interpolation points within a disc of radius R = 2 a0*. CPU time was 126 s.
In Fig. 6 we present the density plots of the confined wave functions that correspond to the energy states reported in Table V (left-hand column) as well as the associated probability densities (right-hand column). It is possible to notice that the particular symmetries of the different states are correctly reproduced.
Figure 6 Confined states of a particle with infinite confinement within a circle of radius R = 2 a0*. The algorithm was performed with N = 500 interpolation points.
6. Conclusions and future research
We have addressed the subject of solving the Schrödinger equation in one- and two-dimensional problems using a meshless method. The implementation of the numerical procedure has been thoroughly presented. For the sake of comparison, we chose some well-known examples for the application of this approach, which have, in general, exact analytical solutions. In all cases, the use of the meshless method leads to remarkably good qualitative and quantitative agreement. In this paper we have used the multiquadric, a globally supported radial basis function, as are thin plate splines and many others. The main disadvantage of globally supported RBFs is that their associated interpolation matrix is full. As a consequence, there exists an upper limit on the number of collocation points for globally supported RBF collocation methods. If the distance between collocation points is very small, the matrix will become very ill-conditioned, leading to serious numerical singularity and degradation in numerical accuracy.
These problems can be avoided by using compactly supported RBFs, and this is a subject of future investigation. That opens the possibility of applying this kind of numerical scheme to the study of electron states -and related physical properties- in quantum nanostructures of more complicated or even non-regular geometries. Another possible application is the self-consistent solution of the Schrödinger and Poisson equations in problems with selective doping or polarization-induced charge densities. A straightforward extension of our approach will be the study of quantum states in three-dimensional confined structures such as quantum dots. The results of such a work will be published elsewhere.
This research was partially supported by Colombian agencies: CODI-Universidad de Antioquia (Estrategia de Sostenibilidad de la Universidad de Antioquia and projects: “Propiedades ópticas de impurezas, excitones y moléculas en puntos cuánticos autoensamblados” and “On the way to development of new concept of nanostructure-based THz laser”), Facultad de Ciencias Exactas y Naturales-Universidad de Antioquia (CAD-exclusive dedication project 2015-2016), and El Patrimonio Autónomo Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación, Francisco José de Caldas. RLR is grateful to the Universidad EIA for its financial support through EIA-UdeA project “Efectos de láser intenso sobre las propiedades ópticas de nanoestructuras semiconductoras de InGaAsN/GaAs y GaAlAs/GaAs”. CAD and RLR are grateful to UdeA-EIA-UdeM for financial support through the project: “Propiedades opto-electrónicas de sistemas altamente confinados: Una aproximación teórica”.
1. G. E. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific Publishing Co., New York (2007).
2. E.J. Kansa, Computers & Mathematics with Applications 19 (1990) 127.
3. E.J. Kansa, Computers & Mathematics with Applications 19 (1990) 147.
4. Y.C. Hon and X.Z. Mao, Appl.
Math. Comput. 95 (1998) 37.
5. Y.C. Hon, K.F. Cheung, X.Z. Mao, and E.J. Kansa, J. Hydraulic Eng. 125 (1999) 524.
6. M. Zerroukat, H. Power, and C.S. Chen, Int. J. Numer. Meth. Eng. 42 (1998) 1263.
7. Y.C. Hon and X.Z. Mao, Financial Eng. 8 (1999) 31.
8. M. Marcozzi, S. Choi, and C.S. Chen, Appl. Math. Comput. 124 (2001) 197.
9. R.E. Carlson and T.A. Foley, Comput. Math. Appl. 21 (1991) 29.
10. A.E. Tarwater, A parameter study of Hardy's multiquadric method for scattered data interpolation, Lawrence Livermore National Laboratory (1985).
11. Z. Wu, Approximation Theory Appl. 8 (1992) 1.
12. C. Franke and R. Schaback, Adv. Comput. Math. 8 (1998) 381.
13. C. Franke and R. Schaback, Appl. Math. Comp. 93 (1998) 73.
14. E.J. Kansa, Eng. Anal. Boundary Elements 31 (2007) 577.
15. G.E. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific (2007).
16. R. Schaback, SIAM J. Numer. Anal. 45 (2007) 333.
17. S. Flügge, Practical Quantum Mechanics, Springer, Berlin (1998).
18. M.V. Vener, O. Kuhn, and J. Sauer, J. Chem. Phys. 114 (2001) 240.
19. M.V. Vener and N.D. Sokolov, Chem. Phys. Lett. 264 (1997) 429.
20. J.B. Anderson, J. Chem. Phys. 63 (1975) 1499.
21. D.E. Makarov and H. Metiu, J. Phys. Chem. A 104 (2000) 8540.
22. B.W. Shore, J. Chem. Phys. 59 (1973) 6450.
23. L. Gong, Y.-Ch. Shu, J.-J. Xu, Q.-Sh. Zhu, and Zh.-G. Wang, Superlatt. Microstruct. 60 (2013) 311.
24. L. Gong, Y.-Ch. Shu, J.-J. Xu, and Zh.-G. Wang, Superlatt. Microstruct. 61 (2013) 81.
25. X.G. Gong, L.H. Shen, D.E. Zhang, and A.H. Zhou, J. Comput. Math. 26 (2008) 310.
26. P. Harrison, Quantum Wells, Wires and Dots: Theoretical and Computational Physics of Semiconductor Nanostructures, Wiley, Chichester (2010).
27. R.L. Hardy, J. Geophys. Res.
76 (1971) 1905.
28. J. Duchon, Splines minimizing rotation invariant seminorms in Sobolev spaces, Lecture Notes in Math. 571, Springer, Berlin (1977).
29. R. Franke, A critical comparison of some methods for interpolation of scattered data, Naval Postgrad. School Tech. Report NPS-53-79-003 (1979).
30. D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Freeman, New York (1982).
31. G. Wahba, Spline Models for Observational Data (CBMS-NSF Regional Conference Series in Applied Mathematics), SIAM: Society for Industrial and Applied Mathematics (1990).
32. D. Terzopoulos, Multiresolution Computation of Visible-Surface Representations, Ph.D. Dissertation, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, January (1984).
33. C.A. Micchelli, Constructive Approximation 2 (1986) 11.
34. M. Dehghan and A. Shokri, Comp. Math. Appl. 54 (2007) 136.
35. S. Weinberg, Lectures on Quantum Mechanics, Cambridge University Press (2012).
36. T. Myint-U and L. Debnath, Linear Partial Differential Equations for Scientists and Engineers, Birkhäuser (2006).
37. M. de Dios-Leyva, C.A. Duque, and L.E. Oliveira, Phys. Rev. B 76 (2007) 075303.
38. W.-K. Li, J. Chem. Educ. 61 (1984) 1034.
39. H. Montegranario and J. Espinoza, Variational Regularization of 3D Data: Experiments with MATLAB, Springer Briefs in Computer Science (2014).
40. R.W. Robinett, Quantum Mechanics, Oxford University Press, Oxford (2006).
Received: March 26, 2016; Accepted: April 29, 2016
Phone: +57 4 219 56 30. e-mail: cduque@fi
[YHVH says to all of you]: ‘Look, I am doing something new — It is emerging right now, Can’t you see it? I am creating a path in the wilderness, And rivers out of the desert.’ Isaiah 43:19. I quote from the book of Isaiah to open this review because I see it as a counterpoint to Kauffman’s opening of his book with a poetic excerpt from the English metaphysical poet John Donne, a selection that strikes me as an exceedingly odd choice given that Donne’s poem not only involved a Trinitarian conception of God (a view of the sacred that Kauffman himself later repudiates, as we’ll see) but also an intense and paradoxical depiction of a clash between faith and reason. Yet, this latter theme just doesn’t mesh with Kauffman’s book since, rather than delving into any sort of spiritual crisis, it comes down squarely on the side of secular humanism, with the little he actually does devote to faith and the sacred never rising above the banal, something that could certainly never be said of Donne’s poetry. However, in general, it’s hard to know what to make of this book, for most of it is given over to ruminations which seem, to this reviewer at least, largely uninformed, many times patently errant, and for the most part lacking cogency. Indeed, it makes me wonder if there are any genuine editors left, and please don’t believe the testimonies on the back cover—one would have to be a total fool nowadays to give any credence to the blatant favor-swappings found on back covers! Much of Kauffman’s speculation on such subjects as the “quantum brain” is even more far-fetched and poorly argued than most other utterances emanating from scientists during the “philosopause” phase of their careers. But before we get to them, first, let’s go over the little that Kauffman does have to say about the sacred.
What kind of “sacred” is Kauffman talking about?
Although Kauffman’s book purports to be about the sacred, this is the most arid and least interesting aspect of the book.
Hoping to find insights from Kauffman about the sacred, coming from a perspective shaped by his long and vigorous study of complex systems, we come across, instead, statements concerning faith like the following: “As we see ourselves in a creative universe, biosphere, and culture, I hope that we will see ourselves in the world in a single framework of our entire humanity that spans all of human life, knowing, doing, understanding, and inventing. The word we need for how we live our lives is faith, bigger by far than knowing or reckoning. A committed courage to get on with life anyway” (244). I’ll come back to this sentence shortly, but here are some additional reflections on faith that never move out of the arena of platitudes (245ff):
• “we make the meaning of our lives, to live a good life, in all these ways”
• “Our choice is between life and death. If we choose life, we must live with faith and courage, forward, unknowing. To do so is the mandate of life itself in a partially lawless, co-constructing universe, biotic, and human world”
• “We can also choose to face this unknown using our full human responsibility, without appealing to a Creator God, even though we cannot know everything we need to know”
• “On contemplation, there is something sublime in this action in the face of uncertainty. Our faith and courage are, in fact, sacred—they are our persistent choice for life itself.”
In this context, Kauffman explicitly points to one of the sources which served as an “inspiration” for his thinking about faith, namely, certain credos he borrows from the Inter-denominational Center in Atlanta, which include such ditties as:
The use of capital letters for emphasis presumably expresses something about the purported profundity of these “faith” declarations. But there is so little real content in such “credos” that it is hard either to agree or to take exception with them.
The nebulousness of such statements pervades Kauffman’s remarks on the sacred, to the point that the reader becomes thankful that he doesn’t wind up spending much time on the sacred after all. If we go back to the first quotation in the first paragraph of this section, we already see evidence of this vagueness. What, for example, is this “single framework” about which he is talking? And what about the last word in the quotation, “anyway”? Presumably this “anyway” refers to suffering in the world, a subject on which Kauffman supplies his own mini-reflections, that is, what in theological circles is usually called the issue of “theodicy”, which was so succinctly described by the poet and playwright Archibald MacLeish in his play J.B.: if God is God, he is not good, and if God is good, he is not God! Here is Kauffman’s “resolution” of the issue of theodicy: “If we are inventing the sacred with ourselves as part of the real world, with all its wondrous creativity around us, then we have to come to terms with the fact that evil happens at our own hands, let alone for causes beyond our control.” Kauffman addresses the theme of atrocity by prefacing his remarks with an anecdote about how he once consulted with some generals in the Pentagon to share his research, as if these meetings had somehow granted him some special sort of access into how atrocities can happen. And he ends this section with just another banality: “We are capable of atrocity… Surely, we should be as conscious of this as we can” (256). This is what he learned from working with the Pentagon? One can’t help but ask, though, if he felt so strongly about our universal propensity for violence, why did he work for the Pentagon in the first place? And, just because he helped some generals make some kind of simplistic model of warfare, now he has special insight into theodicy?
Kauffman distinguishes between three classes of response to faith: those believing in what he calls an “Abrahamic” Creator God (I guess referring to Judaism, Christianity, and Islam, all three of which religions claim the Biblical personage of Abraham as a forefather), Eastern religious traditions (Kauffman no doubt referring to such “spiritual” practices as yoga, meditation, and so forth), or secular humanists. But this tripartite division is so thin that even the briefest survey of the history of religions cannot but reveal a vastly richer set of possibilities for faith: from all the many different ways a creative deity has been understood; to the wide variety of polytheistic belief systems; to the equally multitudinous ways of belief and practice characterizing Eastern religions; and even to the great diversity of ways of being a secular humanist. Kauffman himself, as far as I can ascertain, would classify himself as a secular humanist. But that creates a problem right at the outset, for secular humanism for the most part denies the whole existential or metaphysical category of the sacred, a problem that finds its most glaring expression in his very notion of “reinventing” the sacred. Indeed, the very word “sacred” is customarily used in “sacred” traditions to indicate something about transcendence, a quality entirely lacking in Kauffman’s book. Indeed, rather than finding any evidence of a Donnian clash between faith and reason anywhere in this book, one finds more and more ill-defined restatements about a “courage to be” or “courage to live” in spite of …? I’m not sure what to fill in here for “in spite of…”: in spite of atrocities, in spite of unpredictability, in spite of a whole host of painful aspects of being alive.
This “courage to live” comes across as a watered-down version of Existentialism 101 combined with some kind of ungrounded “faith” in “progress.” Moreover, Kauffman writes about this “courage to be” as if he made it up all on his own, completely ignoring the much deeper reflections on the relation of faith, the sacred, and the “courage to be” that were the centerpiece of the life work of the renowned Protestant theologian Paul Tillich. Among what Kauffman calls the “Abrahamic” traditions, the original word for sacred in Hebrew (transliterated as “kadosh”) meant primarily separation from, thus emphasizing that the sacred is transcendent to, or radically different than, the profane; e.g., the Sabbath day is to be kept sacred or holy by being separated from the other six days of the week, so that on the Sabbath day one refrains from the work one performs on the other six days. Etymological dictionaries keep this sense of “separation from” in the Latinate word “consecrate”, or “to remove from the everyday”, as well as in the words “saint” and “sanctuary” for people and places apart from, or transcending, the normal mundanities of everyday life. This is why even Kauffman’s discourse about reinventing the sacred simply misses the point of the transcendent associations intimately involved with the meaning of “sacred.” I would claim that, by definition, if something can be invented and then reinvented, then it, by this same definition, cannot be sacred but must remain profane. Of course the “Abrahamic” traditions don’t have a corner on transcendence — the sacred as a marker for transcendence also characterizes those “Eastern traditions” Kauffman mentions in passing. Consider, for example, the following words of the Buddha when describing Nirvana to his disciples, words about the path to enlightenment that are certainly considered “sacred” by Buddhists the world over, whether of Zen or Tibetan or some other heritage: “Oh, monks, there is an unborn, unarisen, and unconditioned.
Were there not an unborn, unarisen, and unconditioned, there would be no escape for those born, arisen and conditioned. Because there is the unborn, unarisen, unconditioned, there is escape for those born, arisen, and conditioned” (from the Buddhist scripture Udana, see, Thero, No Date). One sees transcendence as well in Buddhist approaches to the issue of suffering in the world, the “theodicy” of the “Abrahamic” traditions. In this regard, the story is told of a mother, grief stricken over the death of her child, who once came to Gautama Buddha begging him to bring her child back to life. The Buddha looked at her compassionately and said, “To heal your child I need a mustard seed from a home where death has never occurred.” This woman then went searching in every house in the village, but there was not a house where death had not occurred. It was from this shocking realization that the grieving mother took up the path of the Buddha dharma about impermanence and loss, a “theodicy” born from the personal experience of grief and compassion but also pointing to a transcendent resolution of suffering. Notice that in neither the “Abrahamic” nor the “Eastern” traditions is the sacred “reinvented” into some kind of vague “courage to be.” What I find most surprising, though, about Kauffman’s remarks on the sacred is what I briefly brought up above, namely, their complete lack of reference to Process Theology (see, e.g., the journal Process Studies), one of the mainstream theological currents of the twentieth century that happened to have been founded on the very idea of emergence found at the core of Whitehead’s philosophy. Thus, Kauffman can proclaim: “Let God be our name for the creativity in the universe…”(232) and seemingly believe he is being original and profound whereas Whitehead and his Process Theologian followers had been saying things like that for nearly a century now and with much greater scope, cogency, and power. 
With Whitehead’s conceptualization of emergence as its foundation (Whitehead’s book Process and Reality was compiled from his Gifford Lectures of 1928-29, the same lectures at which other Emergent Evolutionists like C. L. Morgan had put forward their emergentist philosophies and philosophical theologies), Process Theology was then elaborated on by the likes of the theologians/philosophers Charles Hartshorne, John Cobb, Schubert Ogden, and, more recently and with complexity theory twists, David Ray Griffin (2001). Process Theology, rooted in the idea of emergence, even inspired the thought and social activism of Martin Luther King, Jr. through the work of the theologian Henry Nelson Wieman (Minor, 1977). Indeed, the phenomenon and concept of emergence also played a key role in the visionary theology of the paleontologist/priest Pierre Teilhard de Chardin (1970) as well as in the much more recent and influential theology of the Jesuit Bernard J. F. Lonergan (1978). Is it too much to ask that a scientist whose research career has been so deeply involved with the idea of emergence should at least have looked into earlier valuable reflections on the relation of emergence to the sacred? This is not to say, of course, that we should expect Kauffman to be a theologian, for after all he has found faith in his own courage to be and not in some “Abrahamic” tradition. But at least a little familiarity with what some others have thought about in terms of the relation of emergence to the sacred would have informed Kauffman’s own reflections. One can’t but develop the impression from this book that Kauffman, in general, is simply not very interested in what anyone else has thought or written or said, particularly when, it appears, it is about subjects of which he consistently shows he knows next to nothing.
The incoherence of decoherence
When it comes to his scientific, in contrast to his theological, forays, I wish I could report that Kauffman’s new book fares better.
But alas, I’m afraid these parts of the book should be considered even worse. Besides his claims for vast insight into genetics, evolutionary biology, economic theory, the origin of life, even the origin of the cosmos, Kauffman now, in Chapter 13, has veered into the highly perplexing arena of the study of consciousness with a theory of mind based on the quantum mechanical idea of decoherence. What this is supposed to have to do with the sacred which, after all, is what his book is purported to be about, escapes me although perhaps it boils down to: “there a mystery, here a mystery.” That is, consciousness is mysterious and the sacred is mysterious so they must share something extremely important—but, of course, this is the specious logic of “ravens are black” and “anthracite coal is black”, therefore “ravens are anthracite coal!” However, the relation of his decoherence theory of mind to the theme of the sacred is the least of Kauffman’s worries in this chapter since, of far greater importance, is how incoherent, both scientifically and philosophically, his decoherence theory of consciousness turns out to be. Decoherence is one of at least six alternative “resolutions” to the quandary of the so-called “quantum measurement problem” sometimes referred to as the problem of the wave collapse, or how the micro-world where quantum mechanics and its “weirdness” holds sway becomes the macro-world where the very different “classical” physical laws reign (see, e.g., Hartle, 1998; Penrose, 2004; Zeh, 1991). According to the renowned mathematical physicist Roger Penrose, other alternatives include: the Copenhagen Interpretation, mostly associated with Niels Bohr, which essentially downplays the need to understand what is “really” happening in favor of relying on the QM mathematical formalism itself; the “many worlds” interpretation offered by Hugh Everett; the “pilot wave” approach of David Bohm et al.; and a few other candidates. 
Penrose points out that each of these alternative “ontologies” addresses the apparent conflict between two fundamental quantum processes: “U”, or the process described by the Schrödinger equation, that is, the state vector as controlled by a deterministic and continuous unfolding of a partial differential equation; and “R”, or the phenomenon of quantum state reduction when a “measurement” is made, characterized by a seemingly discontinuous random jumping of the same state vector. As a play on the term “coherence,” the theory of “decoherence” contends that, whereas at the quantum level the state of a quantum system possesses coherence through its property of superposition, at the classical level this coherence is so swamped by random and multitudinous environmental elements that it “decoheres” or, in other words, the coherent wave “collapses” or a “quantum state reduction” occurs. Penrose points out that at this decoherence stage the mathematical construct of a “density matrix” is employed even though the exact ontology of the “density matrix” is never made clear, resulting in the situation that the theory “gives us no consistent ontology for physical reality” (Penrose, 2004: 810). Kauffman himself, in an uncharacteristic burst of humility, admits that decoherence “is still only partially understood” (208), but obviously this is no big deal for him since he shows no reservations in jumping right ahead with his theory of decoherence applied to mind. Yet, right from the start, why should Kauffman have selected decoherence to be the cornerstone of his theory of consciousness? It is a rather surprising theoretical move, since the idea of decoherence is not typically thought to be the most plausible of the current alternatives in explaining the quantum measurement problem or “paradox” (the latter term, according to Penrose, attributed to the Nobel laureate Tony Leggett).
Moreover, decoherence, at least on first impression, would appear to have little to do with Kauffman’s usual repertoire of complexity-based constructs. I surmise, however, that by focusing on decoherence, Kauffman hopes to achieve two outcomes. The first is that decoherence allows Kauffman to link his theory of consciousness to the quantum approach to consciousness put forward by Penrose (1990, 1996), a linkage that would elevate Kauffman’s own theory to be on par with that of Penrose. That is why, in this context, Kauffman can write: “Penrose and I both believe that consciousness depends on some very special physical system” (203), and “With Penrose, I think it may instead be partially quantum mechanical” (204) (notice how putting his “I” into these sentences does have the nice effect of elevating Kauffman to Penrose’s celebrated status). It is interesting to note, by the way, that Kauffman nowhere indicates that Penrose’s own quantum theory of consciousness, in spite of its dramatically impressive thoroughness and rigor, has come under heavy criticism by the equally esteemed mathematical logician Solomon Feferman (1996, 2007) for misinterpreting the limitative theorems of Gödel and Turing. But again, such “trifling” inconsistencies are simply like little “no-see-ums” that Kauffman can easily brush aside. I suggest that the second reason why Kauffman has zeroed in on decoherence is that it, at least as interpreted by Kauffman, has the right kind of theoretical structure to fit with the analogous theoretical structure at the base of the idea of the “edge of chaos,” a construct to which Kauffman has consistently shown unflagging allegiance. I’ll further elucidate why I think this is the second reason later on.
Kauffman succinctly announces the essence of his decoherence theory of mind: “the conscious mind is a persistently poised quantum coherent-decoherent system, forever propagating quantum coherent behavior, yet forever also decohering to classical behavior” (209; his emphases). Furthermore, this consciousness “is identical with quantum coherent immaterial possibilities, or with partially coherent quantum behavior, yet via decoherence, the quantum coherent mind has consequences that approach classical behavior so very closely that mind can have consequences that create actual physical events by the emergence of classicity. Thus, res cogitans has consequences for res extensa [Descartes]! Immaterial mind has consequences for matter” (209; his emphases). Now that all that’s perfectly clear, we can move on! Actually, Kauffman does go on to try to explicate what he means in much more detail yet I’m afraid the deeper he goes, the more cogency he loses (his explanation is surely a “cogency-free” network)! First off in the cogency-loss department is Kauffman’s claim that his theory can avoid all those nasty quandaries about matter/mind causal interactions, that is, those old mind/body problems so nicely depicted in Descartes’ dualist “resolution” of them. In this regard, Kauffman argues his decoherence approach is not about causality per se since the “quantum coherent expression of the Schrödinger wave yields possibility waves, not causes” (225; his emphases). Certainly I’ve heard the Schrödinger wave described in terms of probabilities—Max Born I think was the first to do so—but not “possibility” waves, which seem a very different kind of thing. But that’s not the real crux of Kauffman’s problematic philosophical reasoning.
Instead, it is his argument that his theory of decoherence, by talking about “possibility waves” and not “causes,” can replace the issue of causal action in toto with his own notion of “has consequences for” and thereby manage to avoid the bewildering question of how an “extensionless” (from Descartes) mind can causally act on an “extended” (also from Descartes) matter or body. Kauffman contends that the issue of “mental causation” is “a confusing use of language” and “…the quantum coherent mind… does not act causally on the material world” but rather has “consequences for the classical world” (225; his emphases) through decoherence. But I simply cannot fathom why Kauffman thinks there is any significant theoretical or philosophical difference between “causal action” and “having consequences for”. Aren’t these two expressions pretty much semantically equivalent? Doesn’t “act on” necessarily imply that whatever is doing the acting on will have consequences for what is being acted upon and doesn’t “have consequences for” imply that whatever has consequences for something is somehow or other “acting upon” that something? For example, if I say that “the engine of my car causes it to move” and then I replace that statement with “the engine of my car has consequences for the movement of my car” aren’t I implying the same thing? Except that the first way is a lot shorter, less awkward, and clearer to boot? Perhaps one could argue that “has consequences for” is more indirect or there is some time delay involved in it but neither of these properties of “has consequences for” offer themselves as any philosophical help for what Kauffman seems to desire. In fact, if this kind of semantic replacement were all it took to solve the mind/body problem, there would be no mind body problem to begin with. 
To me at least, if someone did utilize Kauffman’s preferred way of speaking, we might conclude she/he were perhaps just learning the English language or maybe writing a not very good paper in Philosophy 101. Hence, Kauffman’s supposed “resolution” of the mind/body problem is a philosophical step that, in my opinion, in spite of all of his hand-waving and repetition, borders on the farcical, not unlike that character in one of Molière’s plays who attributes the potency of a sleeping medicine to its “dormitive” properties! The next phase of the decoherence theory involves Kauffman positing a hoped-for poised state existing between quantum coherence and classical decoherence while at the same time conceding both that the very possibility of such a poised state is “open to research” (210) as well as that most physicists would rule it out, certainly at body temperature. Yet, Kauffman comes to the rescue, in spite of this unpropitious beginning, by offering as evidence for the existence of such a “poised” state, believe it or not, the chemical chlorophyll. He puts forth chlorophyll at this juncture of his theory building since new research has shown it has a unique property of being able to maintain a coherent quantum state for 750 femtoseconds when it absorbs a photon of light energy and transforms it to chemical energy. Moreover, the antenna protein which holds chlorophyll has been shown to aid in sustaining its coherent state against degradation into decoherence. Kauffman adds, in his by now characteristic fashion, that natural selection has evolved this antenna to do just this sustaining of quantum coherence: as open thermodynamic systems into which matter and energy and information can flow, “cells may have evolved the ability to maintain coherent or near coherent [quantum] behavior” (213). He even mentions one physicist, without naming him or her, who says that sustained quantum behavior at body temperature is now no longer excluded.
One does indeed wonder why this one physicist remains unnamed. At the very same time, though, that evidence involving chlorophyll’s coherence is employed to support his argument for the very existence of such a poised state, Kauffman admits that chlorophyll’s maintenance of a coherent state lasts only a trillionth of a second while neural events are a million times slower! So what if he’s off by only a magnitude of a million? To paraphrase Senator Everett Dirksen’s famous quote about government spending, what’s a magnitude of a million here or a million there? Obviously not something that appears to worry Kauffman’s “theoretical” effusions. And, of course, the “trifling” relevance of chlorophyll to mental functioning is never brought up at all. I guess we’ll have to wait for the advent of a literally “green” brain science—forget about “brains in a vat” now we’ll have “plants inside brains inside vats”! (Perhaps, a few dilithium crystals might help here and, for extra good theoretical measure, a dollop of tachyons!) We haven’t even gotten to how Kauffman conceives of the relevance of decoherence to consciousness. Somehow, it has something to do with the capability of coherence to perform quantum computations. By now, of course, we shouldn’t be surprised that quantum computation is brought into the picture since it is just one more conceptually sexy idea that surely can be tailored to play a critical role in human consciousness. In this regard, Kauffman appeals to a recent theorem from quantum computing which involves building in enough redundancy in a quantum computer that error correction can proceed, correcting errors caused by those qubits (the quantum computational analogue of bits in regular digital micro-chips) which begin to decohere before the computational work of the coherence stage, with its property of superposition, has been completed.
It is just this redundancy which, according to Kauffman’s interpretation of it, allows for partially decoherent degrees of freedom to reverse to fuller coherence. Thus, his theory of consciousness is not really based on decoherence per se but on quantum coherence and its quantum computational capabilities. From this arcane mathematical theorem from the arcane world of quantum computation (which, besides the most elementary manipulations of qubits, doesn’t even exist yet except as a set of highly speculative conjectures), Kauffman can then exclaim, “If so, a poised state persisting between ‘largely’ coherent and partially decoherent quantum variables looks possible” (212; his emphases) and, “… a poised quantum-classical mind system could process the entire set of sums and differences of the wave function, and tune the interference to increase the probabilities of good eventual classical behavior. The mind thus searches a vast space of possibilities to create a ‘good’ physical response” (214-215). I presume the reader of this review is having a similar experience to what I had when I came across these sentences, namely, feeling totally befuddled. To what do “good” classical behavior and “good” physical response refer? How is this supposed “good” to come from processing “the entire set of sums and differences of the wave function”? What does processing “the entire set of sums and differences of the wave function” have to do with conscious experience? Or, to sum up these befuddlements, why should some phantasmagoria of quantum computation have anything at all to do with human mentation? Even if this “processing of the sums and differences” takes place only in coherence and not decoherence, and thus really not in the poised state between them as he had first postulated, what exactly is this processing supposed to be for? It seems that Kauffman’s theory of consciousness effectively comes down to understanding the human mind as a quantum computer.
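The redundancy idea Kauffman borrows from quantum error correction is easiest to see in its classical ancestor, the repetition code. The sketch below is my own deliberately simplified stand-in: real quantum codes, such as Shor’s nine-qubit code, must protect amplitudes without directly measuring them, but the logic of “spread one logical unit across several noisy physical units and vote” is the same.

```python
import random

random.seed(1)

def encode(bit):
    """Redundancy: copy one logical bit into three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits, p_flip=0.1):
    """Each physical bit independently 'decoheres' (flips) with prob p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def correct(bits):
    """Majority vote recovers the logical bit if at most one bit flipped."""
    return 1 if sum(bits) >= 2 else 0

# Monte Carlo estimate of the residual logical error rate.
errors = 0
for _ in range(10_000):
    sent = random.randint(0, 1)
    if correct(noisy_channel(encode(sent))) != sent:
        errors += 1
# With p_flip = 0.1, an unprotected bit errs 10% of the time; majority
# voting pushes that down to 3p^2 - 2p^3, i.e. about 2.8%.
```

The point of the analogy is only this: redundancy lets a system tolerate some decoherence of its parts, which is the kernel Kauffman inflates into a “poised” mind.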
By the way, such a move renders his theory of mind an algorithmic one, albeit according to quantum computational algorithms as devised, for example, by the mathematician Peter Shor and others (Johnson, 2003). Yet, wasn’t Kauffman supposed to have put himself in alliance with Roger Penrose’s own quantum theory of mind which was essentially formulated to be non-algorithmic according to Penrose’s appeal to the Gödel and Turing limitation theorems in mathematical logic? I guess this is just another “trifling” inconsistency. Indeed, to shift back to the theme of the “reinventing the sacred” for a moment, what’s being reinvented here is not the sacred (as we might have thought from the title of his book) but another vision of ourselves based on the machines we invent. We invent machines and then believe we function in the same manner as the machines we’ve invented. This is nothing new of course. When digital programming and computers first came along, the field of artificial intelligence got a tremendous boost as well as that view of cognitive science which became dominated by a striving to understand human intelligence and consciousness along the lines of digital computation. Now with quantum computation coming down the pike, I guess we’ll have to brace ourselves for a similar phenomenon with quantum computation taking over from digital computation as the putative key to understanding human intelligence and consciousness. This, of course, neglects that the whole point behind even desiring to create quantum computers has had to do with their potentially breakthrough calculational abilities which are supposed to be provided by the property of quantum superposition. But why exactly does human thinking even in a minuscule amount resemble such a type of lightning calculating ability? This is an issue that Kauffman never broaches.
And, of course, neither does he bring in any actual neuroscience and those messy, wet neurons and synapses or any of the other recent, amazing discoveries about networks of neurons in the brain (see, e.g., Hagmann et al., 2008; thanks to Michael Lissack for bringing this article to my attention). Instead, Kauffman’s decoherence theory of mind is a Rube Goldberg device cobbled together with this and that bit of theoretical flotsam and jetsam floating around in the conceptual Zeitgeist. If Kauffman actually thinks his less-than-a-scintilla-of-evidence-based decoherence theory of mind makes consciousness even an eensy-teensy more understandable, then I know a bridge I would like to sell him. Kauffman’s mentalist escapades are not over yet. As a further step in his argument, he resorts to that last refuge of all tottering conjectures, viz., that no physical law prevents there being molecular systems with a capacity for both maintaining their quantum coherence over long enough periods of time as well as possessing an ability for a recoherence of already decoherent states. No physical law prevents it? What kind of reason is that for supporting the veracity of a scientific theory? That no physical law prevents it supports an incredibly enormous number of possibilities. For example, no physical law prevents unicorns but it is not because physical laws don’t prevent them that there are no unicorns. It’s not just that there are too many “ifs” involved in his theory, it’s that there are only “ifs” and not one brain mechanism, not one neural factor, not one neuron or sets of neurons, not one network of neurons is anywhere enlisted to aid in his theory of consciousness.
Surely it was this kind of theory building which led Mark Twain to his conclusion on science: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.” Indeed, it is because of the lilliputian investment of fact characterizing Kauffman’s theory of consciousness that I have conjectured there is a hidden reason, which I briefly mentioned above, as to why Kauffman chose coherence/decoherence over alternative approaches to the quantum measurement paradox. This reason is that in his way of casting it, Kauffman’s “poised state” theory has the right conceptual structure of a threshold realm between two separate regions, a structure quite similar to his favorite notion of the “edge of chaos,” whereas the other quantum measurement alternatives don’t come along with this kind of structure, certainly not the Copenhagen interpretation, or the pilot wave approach or the many worlds speculation. So let’s have a closer look at Kauffman’s adherence to the “edge of chaos.”

The “poised” quantum mind: The “edge of chaos” in new clothes

In Kauffman’s “poised state” between coherence and decoherence, we can read “poised” as “edge of” and replace “coherence” with “ordered” and “decoherence” with “chaos” and accordingly come up with a “poised state between coherence and decoherence” as a stand-in for “edge of chaos”. This supposed “poised state” theory of consciousness is then the latest outcropping of Kauffman’s fealty to the “edge of chaos,” his own “theory of everything” for marking where emergence is most probable and thereby the key to his contention that biological systems evolve to such a state. To be sure, the “edge of chaos” is a theme that has dominated Kauffman’s work for quite a while now, a theme with which he has justified nearly all of his major speculations because of its supposed special capacities.
Thus, in an earlier work, Kauffman (1996) wrote: “…on many fronts, life evolves toward a regime that is poised between order and chaos” (26); “It is a very attractive hypothesis that natural selection achieves genetic regulatory networks that lie near the edge of chaos” (26); “…life exists at the edge of chaos…”; “It is almost spooky that such systems seem to coevolve to the regime at the edge of chaos” (27); “The best exploration of an evolutionary space occurs at a kind of phase transition between order and disorder…” (27); “…as if by an invisible hand, the system may tune itself to the poised edge of chaos…” (28). This theme goes back at least to Stephen Wolfram’s (1994) early classification of the dynamics of cellular automata into separate zones of rigidly ordered, chaotic as in random, and complex. However, it was the speculations of the artificial life researcher Chris Langton (1990) with his λ parameter and the independent computational experiments of the physicist Norman Packard (1984; 1988) which lent the “edge of chaos” its purportedly deep significance, Kauffman falling in with those who eagerly glommed onto the idea. What elevated this putatively potent state of the “edge of chaos” into such conceptual importance were the claims made by Langton and Packard that this particular threshold of dynamical behavior was especially powerful in generating complex as opposed to either ordered or random behavior, and that this region of complexity, by being a seed bed for the proliferation of emergent phenomena, was identified as possessing a pleroma of emergent computational capacity that was not to be found in either the overly ordered or the overly random behavior observed in cellular automata. Kauffman, in fact, has been so enamored with this notion that he has repeatedly suggested that biological organisms have some innate propensity to evolve to such a state.
Indeed, we saw a similar contention in the above discussion concerning how longer coherent states, e.g., in chlorophyll, are also supposed to be something towards which evolution tends. William James once wrote that the British philosopher of evolution Herbert Spencer’s idea of a tragedy was a fact meeting one of his theories. Similarly, Kauffman’s reliance on the “edge of chaos” has one itsy bitsy fact poking its way around inside his theoretical ointment, namely, that the original computational experiments on which the whole idea of the “edge of chaos” was founded, those conducted by Langton and Packard, were subsequently found to be erroneous by other Santa Fe Institute affiliated scientists, Melanie Mitchell, James Crutchfield, and Peter Hraber (Mitchell, Crutchfield, and Hraber, 1999; Mitchell, Hraber, and Crutchfield, No Date). As far as I know, Kauffman has never addressed this countervailing evidence. But as he himself declares in the present book, he is not a Popperian when it comes to science, meaning that he doesn’t feel bound by the need to evaluate the soundness of a theory by the possibility of its falsification, a very convenient philosophy of science to adhere to when the evidence is simply not going your way! Langton had claimed that as his statistic λ increased, the complexity of the dynamics increased with longer and longer transient phases, eventually reaching a uniquely qualified “edge of chaos” region where the most complex, that is, the most non-periodic and non-random, behavior would occur. Along similar lines, Packard had used a genetic algorithm to evolve cellular automata to perform complex computations, contending he had identified a special “edge of chaos” where such a capability was supposed to be at its prime. In Packard’s case, he used cellular automata to perform an image processing task, turning to the so-called Gacs, Kurdyumov, and Levin rule tables for cellular automata.
Packard interpreted his findings to imply that when complex computation (read: “complex emergence”) is required, evolution selects rules that lead to a cognate “edge of chaos”. It was this latter claim that presumably got Kauffman all fired up since the fertile capabilities of the supposed “edge of chaos” were exactly what he was seeking in his attempt to provide an alternative to strict Darwinian approaches to evolution. But Mitchell, Crutchfield and Hraber replicated the early computational experiments of Langton and Packard and found the opposite, viz., that the cellular automata rules capable of performing complex computations, that is, the ones capable of producing complex emergent phenomena, were actually not to be found in the transitional locus of some “edge of chaos” between ordered and chaotic dynamics. These researchers pointed out that an underlying assumption held by both Langton and Packard was that rule tables were the most important aspect of cellular automata behavior “in stark contrast” to state space and attractor basin aspects of dynamical systems. Yet, it is well-known that phase space behavior cannot be adequately parameterized by Langton’s λ. Furthermore, whereas Langton and Packard presumed that the underlying averages converge, in point of fact they do not. However, the most problematic assumption of Langton and Packard was that the supposed critical threshold of λ pointed toward the most fertile computational possibilities. Yet, Langton had not correlated λ with any independent measure of computation, an inadequacy that Packard, at least, tried to remedy.
When Mitchell, Crutchfield, and Hraber performed an analogous computational experiment, they found that the rules for complex computation did not occur at some critical state of the λ statistic or the “edge of chaos” at all: “In summary, we conclude that there is no evidence for a generic relationship between λ and computational ability in CA and no evidence that an evolutionary process with computational capability as a fitness goal will preferentially select CAs at a special λ region” (Mitchell, Crutchfield, and Hraber, 1999: 11). On the contrary, they found that “independent of the population size a given run will be driven by and the population organized around the fit individuals that appear earliest.” They even found that the supposed “phase transitional” regime in which symmetry was broken (following the construct of symmetry-breaking in phase transitions) was simply not the best realm for computational efficacy after all. Instead, computations performed better at symmetrical conditions. Indeed, Crutchfield, working with another colleague James Hanson (Hanson and Crutchfield, 1997), found that computational competence could be found in that dynamical region characterized as the “chaotic” class rather than the “edge of chaos” regime but that it might not be observed to do so because of deficiencies of the “filters” used in exploring the chaotic regime. Now one might think that such countermanding evidence would lead, at the very least, to some caution when utilizing the “edge of chaos” or its analogues in theory building. This does not seem to be the case for Kauffman, however. Perhaps he knows of problems or errors in the work of Crutchfield, Hanson, Hraber, and Mitchell which invalidate their findings and thus reaffirm Langton’s and Packard’s earlier conclusions. Although I haven’t heard of anything like this, perhaps some reader of this review does know of such findings. If you do, please write to the editor. 
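For the curious, Langton’s λ is trivial to compute in the toy case of elementary cellular automata (Langton actually worked with larger state and neighborhood spaces, so this is only an illustration), and even the toy case hints at why Mitchell and colleagues were skeptical: rules with identical λ can be frozen, nested, or chaotic.

```python
def lam(rule):
    """Langton's lambda for an elementary CA rule: the fraction of the 8
    neighborhood patterns whose output is the non-quiescent state 1."""
    return bin(rule).count("1") / 8

def step(cells, rule):
    """One synchronous update of an elementary (2-state, radius-1) CA with
    periodic boundaries, using Wolfram's rule numbering."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Three rules with the SAME lambda = 0.5 but very different dynamics:
#   rule 204: the identity map (frozen, ordered)
#   rule  90: nested, Sierpinski-like patterns
#   rule  30: effectively random ("chaotic")
for rule in (204, 90, 30):
    print(rule, lam(rule))
```

So a single statistic like λ cannot, by itself, locate a privileged “edge of chaos”, which is essentially the replication failure described above.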
As I mentioned above, countervailing evidence would not be of much concern to someone who repudiates the very idea of a Popperian approach to the philosophy of science. Indeed, in this book, Kauffman not only repudiates Popper’s concept of falsification, he proclaims his own approach to science as following the “holist” position proffered by the celebrated philosopher and logician Willard van Orman Quine. In Kauffman’s interpretation of Quine’s “holism,” a scientific theory represents a whole world-view in which each element of the theory operates like words in a language so that there can never really be an adequate translation of, say, a poem from one language into another. Only completely “overwhelming” evidence would then count against one’s theories, evidence so countervailing that a scientist’s whole scientific worldview would have to be challenged. Of course, one can see a bit of Quinean “holism” in Thomas Kuhn’s famous theory of scientific revolutions. Moreover, rejecting Popper and affirming Quine, or at least Kauffman’s interpretations of them, turns out to be a pretty nifty position for a scientist to take, particularly one who is relying on shaky evidence. Also, it must be pointed out that Quine himself was not a scientist but a philosopher and a logician, and, furthermore, his “holism” was much more subtle and less encompassing than Kauffman’s interpretation of it would have it. Indeed, seen through the lens of Quine’s “holist” argument, we can understand Kauffman’s commitment to the construct of the “edge of chaos”, in spite of countervailing evidence, as a way to buttress his grander speculations about evolution taking place through a propensity to evolve towards some special realm of evolvability, an idea that teases with, but never quite goes the whole way towards, teleology.
Eva Jablonka and Marion Lamb (Jablonka & Lamb, 2005), known for their theory of epigenetic inheritance systems, have pointed out that teleology, in turn, teases with bringing a “designer” back into evolution, exactly what the Darwinian approach to evolution was supposed to have elided from biological theory once and for all. One would think that by entering into such bold theoretical territory, Kauffman ought to have some pretty solid evidence besides the shaky nature of the “edge of chaos” or its more recent stand-in of the “poised state” between coherence and decoherence. Jablonka and Lamb have phrased the kind of theoretical stance, which we can interpret as what Kauffman is up to, as one where biological systems are characterized as evolving to a state where the ability to make evolutionary changes is more readily realizable. An example they give is bacteria evolving to a condition where they can produce a burst of mutations when the going gets rough for their survival, e.g., MRSA, those nasty superbugs which are winning against our pharmaceutical companies in the current antibiotic war. According to Jablonka and Lamb, this tendency for evolution to go in a specific direction which allows for the generation of “variations that could promote evolutionary change just when it’s needed” (345) does seem to take place with various strains of E. coli bacteria that appear to respond to stresses like starvation by increasing their mutation rates and thereby generate more variations. Research has, in fact, revealed differences among the strains depending on which environments they were taken from, suggesting that this trait was indeed an evolved one. However, Jablonka and Lamb warn that even these findings don’t necessarily imply there is an evolutionary propensity to such a telos.
Such teleological tendencies in evolutionary theory, according to Jablonka and Lamb, run counter to the accepted Darwinian contention that selection takes place at the level of individuals, rather than at the level of groups or lineages. Moreover, they point out that even if there really is selection by lineage, and those lineages that survive are the ones whose survival-benefitting variations help them survive, this still doesn’t entail that variation-producing systems initially evolved to such an end or telos: “It is too easy to assume that because a particular aspect of an organism’s biology promotes evolution, it evolved for this reason” (346). It is too easy because most mechanisms that promote evolutionary change through the promotion of variations, e.g., crossover in sexual reproduction, simply did not originate as adaptations to enable this greater evolvability but rather came about as by-products. Of course, just because Kauffman is toying around with Darwinian heresy doesn’t mean his speculations are indisputably wrong, but it does, in my opinion, render the theoretical situation facing him to be one where the burden of proof would weigh more heavily on his shoulders. Yet, he doesn’t seem to see it that way; rather, it’s as if the more outlandish, and I would add, the less cogent his speculations become, the more willing he seems to be in accepting very shaky evidence and proposals. The science writer John Horgan (1997) was involved in several well-known debates during the mid-nineties with Stuart Kauffman and some other Santa Fe “chaoscomplexologists.” Horgan termed what Kauffman and his cohorts were up to “ironic science,” which Horgan believed was more like philosophy, theology, or art in addressing questions which were not answerable.
“Ironic science,” according to Horgan, proliferated ironic hypotheses which could never be demonstrated as literally true, e.g., questions like “why is there something and not nothing.” Indeed, Kauffman had characterized his own approach as “Nothing’s finished. I’ve only had a first glance at a bunch of things. I feel more like a howitzer shell piercing through wall after wall, leaving a mess behind. I feel that I’m rushing through topic after topic, trying to see where the end of the arc of the howitzer shell is, without knowing how to clean up anything on the way back” (Kauffman quoted in Waldrop, 1992: 300). For sure, maybe breaking new ground does include actions like those of a howitzer, which of course literally does break new ground. Nevertheless I fail to see how either his platitudinous approach to the sacred or his far-fetched, implausibly argued theories replete with extremely tenuous evidence can further scientific, philosophical, metaphysical or spiritual aims. It is one thing to think that perhaps Popper’s falsification thesis is problematic in important respects, certainly many philosophers of science have argued quite cogently against it, but it is another to simply dispense with it and affirm in its stead a kind of Quinean “holism” when your reasoning is so manifestly faulty and the support on which you build your theories is so flimsy at best. I realize that at times I have been sarcastic in my review of this book. I didn’t start out in that direction, but along the way I became so flabbergasted at times that reading the book made me despair even about the entire publishing industry. Don’t editors exist anymore? Or are they so interested in selling books through someone’s often misplaced popularity that they just don’t care if what they are publishing is simply not very good?
Indeed, we’ve certainly seen a fair share of scandals in the publishing business because of the so-called true autobiographical “fictions.” I certainly have no personal animus directed at Kauffman. I’ve never even met the man. And, as I said at the outset, I have been very much influenced by his ideas on emergence, however I might disagree with this or that aspect of them. Nor am I personally or professionally opposed to speculation, even wild speculation, since I am becoming more and more convinced of a serious failure of the imagination on the part of many scientists and mathematicians. Indeed, again as I wrote above, Kauffman’s chapters on anti-reductionism can be read as well-argued critiques of this same failure of the imagination. But what I am concerned about is how unfortunate it would be for readers, unfamiliar with the sciences of complex systems, to pick up this book because of the testimonials on the back cover and as a result of reading the book become convinced that, after all, this “chaoscomplexology” stuff is really just a bunch of tripe dressed up in sophisticated terminology. That would be a real shame since I do believe that close study of the dynamics of complex systems and their fascinating properties of emergence do have significant, perhaps even profound implications for all sorts of philosophic, metaphysical, and spiritual matters (see, e.g., Russell, Murphy, and Peacocke, 2000). However, in my estimation, this is certainly not the book that will further such an agenda. Complexity theory deserves better than this.
How Do Quantum Computers Work?

By: Louie Gerhard | May 28th, 2019

Image by Gerd Altmann from Pixabay

Understanding Data Computing in Classical Computers

Have you ever wondered how letters and words get stored and processed on your desktop, smartphone, laptop, and hard drive? Do we really understand what happens when we press the letters on our keypads? Today's machines use the classical "bit", which is nothing more than an electrical state in a two-symbol "language" called binary. A "bit" is either a "1", represented by one predetermined electrical state, or a "0", represented by another. These "bits" are grouped into an 8-bit word called a "byte", which represents a specific character, e.g. the letter A. A single input into the processing stage always produces the same single output from the processing stage.

Quantum Computing

Superposition is the fundamental principle from which quantum computing, or quantum processing, is derived. What is superposition? A quantum bit, or "qubit", encoded in the state of a subatomic particle, is not restricted to the two known classical bit states; it can exist in a combination of both at once. The resultant superposition is itself a valid quantum state, different from either of the original input states, and it represents their combination. This promises a huge impact on processing time (e.g. in analogue-to-digital and digital-to-analogue conversions such as PCM) while requiring less power. Mathematically, this follows from the linearity of the Schrödinger equation: any linear superposition of solutions (the original input states, i.e. 0s and 1s) is again a valid solution, albeit a different one (a qubit). This means that the probabilities of measuring 0 or 1 for a qubit are in general neither 0.0 nor 1.0, and multiple measurements made on qubits in identical states will not always give the same result.
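As a rough illustration (my own sketch in plain Python/NumPy, not any vendor's quantum API), the measurement statistics of a qubit in an equal superposition can be simulated classically:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state a|0> + b|1>; the amplitudes must satisfy |a|^2 + |b|^2 = 1.
# (Illustrative values only -- an equal superposition.)
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Measuring many identically prepared qubits: each measurement yields 0 or 1
# with probabilities |a|^2 and |b|^2, so repeated runs do not all agree.
shots = rng.choice([0, 1], size=10_000, p=[abs(a) ** 2, abs(b) ** 2])
print(shots.mean())  # close to 0.5, though individual shots vary
```

Each simulated "shot" collapses to a definite 0 or 1; only the statistics over many shots reveal the underlying amplitudes.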
To distinguish between these probabilistic results, quantum algorithms use quantum logic gates, which act on the continuous qubit amplitudes (probabilities anywhere between 0.0 and 1.0), instead of the classical logic gates that act on definite 0s and 1s.

Who Is in The Race?

With high-tech companies like Google, IBM, Rigetti, and D-Wave in the race, there are already experimental cloud-based 20-qubit systems available to users. In January 2019, IBM launched IBM Q System One, its first integrated quantum computing system for commercial use, opening up the playing field for new and existing computing experiences that will, in the not too distant future, be part of our everyday existence.

Louie Gerhard specializes in the mechanical, engineering, and IT technical environment, with over 33 years' experience. More articles from Industry Tap...
Quantum mechanics - Table of contents

This is a table of contents for my notes on quantum mechanics. The posts are arranged in an order that might be found in a textbook. For an alphabetical index, see this page.

Review of classical mechanics
Schrödinger equation
Infinite square well
Harmonic oscillator
Free particle
Delta function well
Finite square well & barrier
Linear algebra in quantum mechanics
Transformations and symmetries
Rotations and angular momentum
Schrödinger equation in three dimensions
Hydrogen atom
Spin in non-relativistic quantum mechanics
Addition of angular momenta
Multiple particles
Quantum solids
Statistical mechanics in quantum theory
Time-independent perturbation theory
Fine structure of hydrogen
Zeeman and Stark effects
Variational principle
Adiabatic approximation
WKB approximation
Time-dependent perturbation theory
Emission of radiation
Friday, August 24, 2012

Simple proof QM implies many worlds don't exist

A vast majority of the people who write popular books, blogs, and comments at discussion forums about the foundations of quantum mechanics are peers of the stupid monkeys. A week ago, Scott Aaronson wrote that he is a champion of the "Many Worlds Interpretation" (MWI) even though MWI is slightly more frail than heliocentrism. That's what I call an understatement on steroids. The term "MWI" is notoriously ill-defined; it may mean everything or nothing or something in between, and there is no actual theory of physics that would deserve this name and that would work. But let's assume that the proponents of MWI mean that there exist many worlds and different mutually exclusive properties of a physical system are realized simultaneously. In the following 40 seconds, let's see that it ain't the case. Let's take an electron and measure its spin component \(j_z\) via the Stern-Gerlach apparatus i.e. via a magnetic field. The initial state of the electron is prepared to be "up" with respect to a particular tilted axis – every state of the spin in 3 dimensions is "up" with respect to a semi-axis – so that we have\[ \ket\psi = 0.6 \ket{\rm up} + 0.8 \ket{\rm down}. \] So the electron will have a 36% chance to have the spin "up" and a 64% chance to have the spin "down". Note that it's not just the absolute values of the amplitudes that matter. The relative phase matters, too. If we changed the relative phase of the two terms by the factor of \(\exp(i\alpha)\), it would mean that the axis with respect to which the electron is polarized "up" would rotate by the angle \(\alpha\). Such a rotation may be inconsequential for our measurement of \(j_z\) but it would matter for the measurement of all other components of the spin. Now, let's ask the key MWI question: will there be an electron with spin "up" as well as an electron with spin "down"? The MWI proponents say "Yes".
They imagine that different possibilities "really occur" in different universes, and so on. So this is the main question that decides the validity of the MWI. Stupid monkeys are obsessed with questions about whether MWI and other things are "not even wrong", "politically correct", "obeying Occam's razor", "pretty", and all such irrational adjectives, but no one seems to care about the question whether it is scientifically false or true. Quantum mechanics offers a universal rule to answer all Yes/No questions that have any physical meaning, that are in principle observable. For the given question, we identify the projection operator \(P\), i.e. a Hermitian operator \(P=P^\dagger\) obeying \(P^2=P\) (which is why its eigenvalues have to obey \(p^2=p\) as well and they must belong to the set \(\{0,1\}\) i.e. {No, Yes}). The expectation value\[ {\rm Prob} = \bra \psi P \ket \psi \] is interpreted as the probability that the answer is Yes. Quantum mechanics doesn't allow us to predict anything other than probabilities. So there's always some uncertainty about the answer to the question. The only exceptions are projection operators whose expectation values are equal to \(0\) or \(1\): these values correspond to "certainly No" or "certainly Yes" and there's no uncertainty left. We will see that the "key question of MWI" is of this sort. The projection operator for a question "A and B" is constructed as\[ P = P_A \cdot P_B. \] When it comes to operators, "and" is multiplication. That's why logical AND i.e. conjunction is also known as "binary multiplication". And that's also why the probability of two independent questions' both having the answer "Yes" is equal to the product of the individual probabilities. Fine, what are \(P_A\) and \(P_B\)? They are projection operators on the subspaces for which the answers to questions A and B are "Yes". In particular, we have\[ P_A = \ket{\rm up}\bra{\rm up}, \quad P_B=\ket{\rm down}\bra{\rm down}.
\] They're projection operators on the "up" and "down" states of the electron, respectively. There are just no other states in the Hilbert space for which the statement "there is an isolated electron with the spin up" or similarly "...down" would be valid. Now,\[ \braket{\rm up}{\rm down} = 0 \] and therefore\[ P = P_A P_B = \ket{\rm up}\bra{\rm up}\cdot \ket{\rm down}\bra{\rm down} = 0. \] Therefore, the probability that there will be both an electron "up" and an electron "down" is\[ \bra\psi P \ket \psi = \bra \psi 0 \ket\psi = 0 \braket\psi\psi = 0. \] I've written the derivation really, really slowly so that at least 10% of the stupid monkeys have a chance to follow it. At any rate, we may prove that the probability that the electron exists in both mutually exclusive states simultaneously is zero. It can't happen. The derivation is identical for any other mutually exclusive alternative properties of any physical system. Note that the operators \(P_A,P_B\) commute with one another, i.e. \(P_A P_B=P_B P_A=0\), which means that both questions may have an answer at the same moment (the uncertainty principle adds no extra hassle). That allows us to avoid some discussions. The simple conclusion is that there aren't many worlds. QED. Get used to it, monkeys. ;-) Let me now spend some time discussing how indefensible various "loopholes" would be and why there are many other ways to see that the answer to the question "Are there many worlds?" had to be "No". And I want to mention several likely fundamental and rudimentary errors that prevent MWI advocates from deriving the right answer to this simple question and from seeing that this is truly kindergarten stuff and not something that they should be confused by for days, weeks, months, years, decades, or centuries. First, let me discuss the interpretation of the "plus" sign. As I already suggested, it's important to distinguish addition and multiplication.
(If you don't know what multiplication is, watch 0:40-0:45 Miss USA on maths.) The key fact is that the wave function composed of several mutually exclusive pieces such as\[ \ket\psi = 0.6 \ket{\rm up} + 0.8 \ket{\rm down} \] has a plus sign that roughly means "OR", not "AND" as many people apparently think. When we care about the \(j_z\) component of the spin, the formula above says that the state \(\ket\psi\) allows the electron to be either "up" OR "down". It doesn't say that there is both a spin "up" AND a spin "down". If we need to say "AND" in quantum mechanics, either "one proposition/question AND another proposition/question" or "one object added on top of another object", we need multiplication, not addition. For the case of the two propositions, we have already discussed an example, the \(P=P_A P_B\) relationship above. If we discussed physical systems composed of several pieces, e.g. a group of 2 apples and a group of 3 apples, we would need another kind of a product, the tensor product,\[ \ket{\text{5 apples}} = \ket{\text{2 apples here}} \otimes \ket{\text{3 apples there}}. \] The matrix elements extracted from similar "tensor products" are products of the matrix elements for the individual subsystems and the same thing therefore holds for the probabilities, too. Some people may be thinking that it almost looks like I am suggesting that the MWI advocates are complete idiots with the IQ of a retarded third-grader because they can't distinguish addition from multiplication. The reason why it looks so is that this is exactly what I am trying to say. In fact, it's pretty obvious that my attempts to say such a thing are successful and I am actually saying this thing. ;-) Why is there so much confusion about the meaning of addition and multiplication here?
Because people with common sense – as it evolved for millions of years – and no genuine knowledge of the pillars of modern physics (which includes the MWI advocates) always think in terms of objects, e.g. apples. So when you're adding two apples and three apples, i.e. placing the two groups next to one another, you're adding apples. Similar addition more or less applies to lengths of sticks, momenta and other conserved quantities, and even quantities such as voltages, currents, charges, and many others. But this "combining objects that exist simultaneously is addition" is fundamentally and completely wrong for wave functions in quantum mechanics. In quantum mechanics, addition of wave functions or density matrices roughly corresponds to "OR", not "AND", and "AND" must be expressed by multiplication. How can we understand the origin of this flagrant difference between classical thinking and quantum mechanics? The primary reason is that quantum mechanics just isn't describing the objects themselves. It describes propositions we can make about objects. As Niels Bohr used to say, physics is not a tool to describe how reality is; physics is a tool to say the right things about what we can see. The basic building blocks such as the wave functions and projection operators don't describe and count objects but encode propositions, knowledge, information. For propositions and their probabilities (expectation values of the projection operators), addition is simply not "AND"; addition is "OR". The right mathematical expression for "AND" is another operation, namely multiplication rather than addition. An MWI advocate could start to spread fog: it may be debatable which one it is, the difference between "AND" and "OR" isn't that important anyway, and it may be up to centennial deep philosophical discussions which way it goes. Well, all these statements are pure rubbish. There isn't any ambiguity, confusion, or room for modifications.
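The arithmetic behind this "AND is multiplication, OR is addition" rule is easy to check numerically. Below is a minimal sketch (my own illustration in NumPy, using the 0.6/0.8 state from the beginning of the post):

```python
import numpy as np

# Spin-1/2 basis states and the state |psi> = 0.6|up> + 0.8|down> from the text
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
psi = 0.6 * up + 0.8 * down

P_A = np.outer(up, up)      # projector for "the spin is up"
P_B = np.outer(down, down)  # projector for "the spin is down"

print(psi @ P_A @ psi)  # probability of "up": 0.36
print(psi @ P_B @ psi)  # probability of "down": 0.64

# "AND" is the product of the projectors; for orthogonal projectors it is
# the zero matrix, so the probability of "up AND down" vanishes identically.
P_and = P_A @ P_B
print(psi @ P_and @ psi)  # 0.0

# "OR" (for orthogonal projectors) corresponds to the sum:
P_or = P_A + P_B
print(psi @ P_or @ psi)  # probability of "up OR down": 1.0
```

Replacing the product by the sum in the "AND" line immediately changes the answer from 0 to 1, which is the quantitative content of the addition/multiplication confusion discussed above.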
Addition and multiplication are completely different operations, so you had better not confuse them. The theory that has actually been tested is the one that interprets addition and multiplication exactly as I did above. Be sure that if you modify its rules, the rules of quantum mechanics, by randomly replacing addition by multiplication and vice versa at various places, you will get a completely, qualitatively different theory that will yield a totally different description of reality and it will disagree with almost all observations, including some extremely elementary ones. There just isn't any room for confusions and debates. Just like a 7-year-old schoolkid who invents arrogant excuses why she cannot learn the difference between addition and multiplication (note that I am politically correct so I sometimes include "she" in similar sentences, especially if it increases the degree of realism), the MWI proponents should be given a failing grade and should be spanked. Be sure that any "technical" modification of my proof that there aren't many worlds will damage the theory so that it will become totally incompatible with the experimental tests. For example, if you suggested that the projection operator for "A and B" should be \(P_A+P_B\) rather than \(P_A P_B\), you would easily find out that the same rule used for any experimentally testable situation will lead to wrong predictions. In fact, pure thinking is enough to see that "AND" must be expressed by the product of the projection operators and not the sum.

Using charge conservation to prove there aren't many worlds

The fact that one electron can't suddenly be split into two electrons so that it would be both "here" and "there" may also be derived from charge conservation, angular momentum conservation, mass conservation, or other conservation laws. In quantum mechanics, such laws still hold.
If the initial state \(\ket\psi\) is an eigenstate of the electric charge operator \(Q\),\[ Q\ket\psi = q\ket\psi, \] then, because \(QH=HQ\) i.e. the charge is conserved i.e. the symmetry generated by it is a symmetry of the Hamiltonian i.e. of the laws of physics, the final state will obey the same relationship with the same value of \(q\). But if there were an electron in both places, the electric charge could be shown to be doubled and different from the original one. That would conflict with the conservation law.

Inflating the Hilbert space along the way

Some people could say that my derivations are missing the point that there is an "Everett multiverse". I should have increased the size of the Hilbert space before the measurement etc. There are many wrong things about such a potential objection. First, the constancy of the dimension of the Hilbert space is a mathematical necessity. Especially because some MWI proponents including Brian Greene say that they want to be led by the most natural interpretation of the equations of quantum mechanics, it's totally indefensible to actually change the dimension of the Hilbert space along the way. It's surely not what quantum mechanics tells us to do. In fact, one may easily show that such a proliferation of the degrees of freedom couldn't lead to an internally consistent theory. It may be explained in many ways, e.g. by the quantum xerox no-go theorem. There can't be any evolution of a state in \({\mathcal H}\) to a state in a larger Hilbert space such as \({\mathcal H}\otimes {\mathcal H}\) because the evolution of the state vector in quantum mechanics is linear while the map\[ \ket\psi\to \ket\psi\otimes \ket\psi \] is not linear; it is bilinear or quadratic.
If \(\ket x\) and \(\ket y\) were evolving to \(\ket{xx}\) and \(\ket{yy}\), respectively, then linearity would dictate that \(\ket{x+y}\) evolves to \(\ket{xx+yy}\) while the universal squaring formula would say that it should evolve to \(\ket{(x+y)^2}=\ket{xx+xy+yx+yy}\). These are different ket vectors on the larger Hilbert space because there are extra mixed terms. At any rate, it's a contradiction: in a quantum world, there can't be any gadget that creates two exact copies out of an arbitrary initial state. Another problem with the objection is that I actually haven't made any assumption about the non-existence of the "Everett multiverse". For example, in the quick "charge conservation" proof, \(Q\) could have meant the total electric charge in "all branches" of the world you could ever hypothesize. Clearly, if the number of worlds is being multiplied, the charge won't be conserved. That will be a problem because the symmetry generated by \(Q\) won't be a symmetry of the laws that control the "Everett multiverse" anymore. It won't be able to be exact at a fundamental level, you won't be able to use it to constrain the laws of physics, and so on. This "demise" will be the fate of all the symmetries in physics (translations, rotations, Lorentz boosts, parity, etc.) because all symmetries are related to a conservation law. One more problem with the "splitting of the Universes along the way" is that there can't possibly exist any justifiable rule about "when this splitting takes place". There aren't any sharp qualitative boundaries between phenomena in Nature. It's clear that there can't be any splitting during a sensitive interference experiment – because such an "elephant in a china shop" converting the fuzzy quantum information into the classical one would surely destroy the interference pattern. The problem is that in principle, we may say the same thing about 2 particles, 3 particles, 100 particles, or \(10^{26}\) particles.
In principle, the interference pattern involving an arbitrarily large system may be measured, so the Universe is just not allowed to "split" into possibilities where different classical outcomes are realized, because such a splitting would make the "reinterference" permanently impossible while it is arguably always possible in principle. In practice, there's a lot of irreversibility, "decoherence", but this process always depends on our inability to manipulate the elementary building blocks of information finely enough. Decoherence is an emergent phenomenon and it isn't sharp, either. There is no point during the decoherence process when you could say "now it's the right time for the universes to split into many worlds". Decoherence is just a continuous process in which the off-diagonal elements of the density matrix gradually decrease. They decrease faster and faster but they're never "quite" zero. Shannon told us that Brian Greene thinks that he and your humble correspondent have a "little disagreement" about a physics question. ;-) The little disagreement is about the existence of a paradigm shift in the 20th century science that would invalidate the previous framework of classical physics. I am sure it happened in the 1920s; Brian Greene thinks that it hasn't happened, so it is still possible to think about Nature in the "realist" way. Of course, I could also be saying it is a little disagreement; I have also been taught how to be diplomatic, polite, hypocritical, and dishonest. But I just don't think it's right to behave in this way. The disagreement is clearly about a major question, about the very existence of modern physics as something that is outside the box of classical physics. Brian Greene is really denying the existence of quantum mechanics; instead, he is suggesting that what we need are new theories (e.g.
nonlocal ones or multiverse ones) within classical physics (although he and others prefer more obscure ways to describe the very same thing, ways that make the naked Emperor's new clothes look more fashionable and decent). The MWI chapter of The Hidden Reality by Brian Greene (whose Czech translation by me will be in the bookstores on Monday) really drove me up the wall many times because most of it is literally upside down. One repeatedly "learns" that if we want to describe the whole world in a uniform fashion, we must adopt the MWI ideology. Bohr et al. were incapable of doing so, so they preferred to live in their messy, marginally inconsistent system of ideas, and use behind-the-scenes tricks to fight against the true messengers of the truth such as Hugh Everett III. This uses the right words except that the content is exactly the opposite of the truth. Bohr et al. always used legitimate, official, and transparent channels to discuss similar physics questions – e.g. in the Bohr-Einstein debates – and it is the MWI advocates who are using non-standard channels such as popular books to spread misconceptions. Equally importantly, the "universal validity of the laws for small and large objects" is an important consideration, indeed. But it unambiguously says that MWI is wrong and QM as understood by Bohr et al. and the followers – modern physicists – is the only plausible right answer. I have already mentioned why it is so. There just can't be any splitting of the worlds when one quantum particle is coherently and peacefully propagating through an experimental apparatus. The same comment applies to 2 or 3 particles, so if we're using the laws of physics coherently for small as well as large systems, there can't ever be any "splitting of the Universes". An impressive song about the Higgs, a new genre of music. There is one more aspect of the unity that could be violated by the MWI advocates to defend the indefensible.
They could say that the question "is there an electron here as well as an electron there", the question whose probability we calculated to be zero, shouldn't be answered by the rules of quantum mechanics i.e. by identifying the right projection operator and by computing its expectation value (interpreted as the probability of "Yes"). They could say that this is a question "above the system" that should be answered by some philosophical dogmas. But that's not how physics works or should work. Quantum mechanics has a way to answer all physically meaningful i.e. in principle observable questions and it is the same way for all the questions. In fact, there is nothing unusual about asking whether there are electrons at two places. This is the kind of question that all of physics is composed of. If you were free (or even eager) to abandon your standardized theory and methodology to answer such questions, and if you switched to some metaphysical dogmas just because this question about the many worlds is "ideologically sensitive", it would prove that the theory you may still be using for other questions isn't something you take seriously, isn't something capable of answering really important questions in physics. It would surely show that you have double standards and that the technical theory you're using isn't universal and uniformly applicable because you often replace it by metaphysical dogmas. Your attitude would be completely analogous to the attitude of a fundamentalist Christian physicist who just chooses to believe that Jesus Christ could walk on the sea because the laws of gravity and hydrodynamics didn't have to apply, and that the non-nuclear conservation of carbon atoms could have been invalidated when he was converting water into wine. And I don't mention many of Jesus' other hypothetical crimes against the laws of physics that such a physicist could be eager to overlook for political reasons.
;-) The MWI advocates prefer metaphysical dogmas and their naive classical intuition over the standardized quantum mechanical "shut up and calculate" approach to answer such questions about the electron at two places (or pretty much any other question in physics) because they haven't started to think in the quantum way yet. To think in the quantum way is to decide about the validity of propositions (or the probabilities of their being valid), and the procedure is always the same. One constructs the projection operator related to the proposition and calculates its expectation value in the quantum state. It's the probability, and if the result is \(0\) or \(1\), we may be certain that the answer is "No" or "Yes", respectively. (The detailed arguments or calculations may proceed differently and avoid concepts such as "projection operators" but they must still agree with the general rules of quantum mechanics.) When we follow this totally universal quantum procedure – valid for questions about microscopic systems as well as macroscopic systems – carefully and rigorously, we will find out that quantum mechanics as it stands, in the same Copenhagen form as it has been known since the 1920s, answers all questions, including those that "look philosophically tainted", correctly i.e. in agreement with the experiments. Sidney Coleman gave many examples in his lecture Quantum Mechanics In Your Face. For example, it's often vaguely suggested by the MWI champions and other "Copenhagen deniers" that the experimenter could feel "both outcomes at the same moment". However, by the correct quantum procedure whose essence is absolutely identical to my discussion of the two positions of the electron at the beginning, we may actually find the answer to the question "whether the experimenter feels both outcomes at the same moment".
We will convert the proposition to a projection operator; it has the form \(P=P_AP_B\) again, and because its expectation value is zero for totally analogous reasons as those at the top, it follows that according to quantum mechanics, the experimenter doesn't perceive both outcomes at the same moment. This is a completely physical question, not a metaphysical one, and quantum mechanics allows one to calculate the answer. It's just not the answer that the anti-Copenhagen bigots would like to see. Quantum mechanics doesn't predict "unambiguously" which of the outcomes will be perceived by the experimenter (spin is "up" or "down"?) but this uncertainty is something totally different from saying that he will perceive two outcomes. The number of outcomes he will perceive may be calculated unambiguously by the standard rules of quantum mechanics and the number is one. There is no room for "two worlds" or "two perceptions at the same moment". Which outcome will be felt has probabilities strictly between 0 and 100 percent, so the answer isn't unequivocal. When the MWI-like folks are discussing these matters, they are constantly making lots of other totally rudimentary errors – and perhaps "deliberate errors" – aside from the confusion of addition and multiplication I mentioned above. A frequent one is to totally forget or deny that quantum mechanics predicts and remembers correlations (in their most general form known as entanglement) between any pairs, triplets, or larger groups of degrees of freedom and properties that may co-exist in the real world. For example, Coleman mentioned the cloud chamber example by Nevill Mott. A particle leaves the source in the cloud chamber. It is in the \(s\)-wave: its wave function is spherically symmetric, so it has the same chance to move in each direction. So why does it create a straight line of bubbles in one direction rather than a spherically symmetric array of bubbles?
Again, this may be interpreted as some super-deep metaphysical question that goes well beyond quantum mechanics, and the Copenhagen interpretation may be claimed to be incapable of answering such questions. Except that there is nothing hard or metaphysical about this question at all. It is completely physical, quantum mechanics allows us to answer it using a very simple calculation, and the answer is right. There will be a straight line of bubbles because one may prove that, due to some demonstrable entanglement between properties of the supersaturated water or alcohol at various points that the propagation of the charged particle causes, the direction of any two newly created bubbles as seen from the source is always essentially the same. (One may prove that the charged particle only creates bubbles in a small region around its location; and one may prove that the position of the charged particle goes like \(\vec x = \vec p \cdot t / m\) where the momentum \(\vec p\) is essentially conserved. That's enough to see that the bubbles will be aligned.) So again, while quantum mechanics gives ambiguous predictions about the direction in which the "bubbly path" will be seen – all directions are equally likely – it actually does unambiguously predict that the bubbles will have a linear shape: they will only emerge along a straight semi-infinite track. There is absolutely no inconsistency between these two assertions. Any wrong idea that QM has to predict that the distribution of the bubbles is spherically symmetric boils down to a trivial error: the omission of the fact that the existence or absence of bubbles at a point is correlated with the existence or absence of bubbles at other points. In fact, the correlation is so tight that for each semi-infinite line, there are either bubbles everywhere along the line or there are no bubbles on it. And there is only one semi-infinite line. As I said many times, the people who have trouble with proper, i.e.
Copenhagen or neo-Copenhagen laws of quantum mechanics, are always "eager" to simplify the quantum rules of the game prematurely and convert the situation to some "real physical object" way too early (well, one should really never do so, but if one does it too early it may be more damaging). But Nature never makes such mistakes. It remembers the wave function, which knows about all the possible correlations between all the degrees of freedom, which knows about all the relative phases because they could matter, and only when an observable question has to be answered does it just calculate the right answer. The right calculation looks very different from any kind of reasoning in a classical world but it isn't too hard; it's really straightforward and in all situations in which classical physics used to work, it still gives the same answer (with tiny corrections). When the initial wave function for the charged particle in a cloud chamber is spherically symmetric, it doesn't imply that spherically asymmetric configurations of the bubbles at the end are forbidden i.e. predicted to have vanishing probabilities. On the contrary, we may prove (the right verb really is "calculate" because the proof boils down to the calculation of an expectation value of a projection operator) that the distribution of the bubbles will be spherically asymmetric – a semi-infinite line in a direction. There is no contradiction because the initial wave function isn't a real object such as a classical field, stupid. It's a quantum-generalized probability distribution. A spherically symmetric probability distribution (on a sphere) doesn't mean that the actual objects such as the particles (or, later, the bubbles they will create) are spherically symmetric. Instead, it means that the probability that the objects are found in one direction is the same as it is for another direction.
But because the particle may be shown to be in a single direction, we know that the actual measurements of positions will inevitably be spherically asymmetric. Is it really so hard to understand that the wave function in quantum mechanics is a generalization of a probability distribution – and not a generalization of a classical field? It encodes the information about the physical system, not the shape of the object itself. It is not really difficult to learn these things but some people just don't want to. And that's the memo. 1. I now fully understand your frustration when translating The Hidden Reality. Brian Greene seems to defend this Many Worlds approach saying it is "the most conservative framework for defining Quantum Physics". 2. Have you seen this hilarious video? A Capella Science - Rolling in the Higgs (Adele Parody) 3. Right, Shannon. It's about 30 pages of this stuff that keeps on going and it's repeated and repeated and everything is upside down. From the viewpoint of history, I find it demoralizing and demotivating. Be smart and lucky enough to be one of the 3-5 key people who realize the most important revolution of 20th century science. It ain't easy. Is it worth it? Almost 90 years later, there will still be "mainstream" books published claiming that you haven't really discovered anything, just muddied the waters, and that you were a thug who was bullying brave original thinkers (proper word: crackpots of your age), and that your theory is unable to do all the things (that it is actually totally able to do), and that the original thinker's theory is surely better and more unified (although it isn't even well-defined, it doesn't describe anything correctly whatsoever, and is nothing else than a defense of intellectual inability, laziness, and dogmatism). Imagine you discover the most important thing of the 21st century now, looking at the Universe from a much more far-reaching, accurate, abstract, and unified perspective.
Some people won't be able to get it so you will explain to them why they're wrong. In 2100, there will be popular books, sold to millions of people, claiming that you were a bully who used political tricks to modify inconvenient theses, and so on. It's terrible. I don't really know whether, if history is a good guide, I would like to discover the 21st-century counterpart of quantum mechanics. The hassle - not just hassle in your life but apparently also hassle in your "after-life" - may be just too intense. The "conservative" label is particularly silly for the MWI, indeed. Speculations about splitting worlds according to ad hoc rules no one has really meaningfully formulated, ever, because it's not really possible, are the least conservative thing one may imagine. Quantum mechanics is radical but it also preserves the basic scheme and collection of observables and their properties in physics more or less without change. One could say that quantum mechanics only differs from classical physics by having xp-px = iℏ instead of xp-px = 0. The commutator is just a little bit different, a tiny number called the (reduced) Planck constant times i, which must be there because the commutator is anti-Hermitian, and that's it. It's a very modest deformation of the particular laws of physics for the particular classical system we used to have (the classical limit of the quantum theory); one must just learn what it means to work with a theory where xp-px isn't zero but the classical physicists could still be told they were "approximately right" in the observable sense of approximations because their assumption xp-px = 0 gave "approximately right" results. Adding infinitely many randomly and vaguely fucking and reproducing universes just in order to deny that xp-px isn't zero, i.e. that x and p (or any pair of observables) can't have well-defined values at the same moment, is just the maximally ad hoc, uncontrollable, "progressive", degenerative, irrational thing one can do.
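For what it's worth, the xp-px = iℏ statement is easy to check in a finite-dimensional toy model (my own sketch, with ℏ set to 1 and a central-difference momentum matrix, both assumptions of the sketch): acting on a smooth wave packet, the commutator [X, P] reproduces i times the wave packet in the interior of the grid.

```python
import numpy as np

# Toy check of xp - px = i*hbar in a finite-dimensional model (hbar = 1).
# X is diagonal on a grid; P = -i * D with D a central-difference derivative.
N, L = 201, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

X = np.diag(x)
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
P = -1j * D

C = X @ P - P @ X                       # the commutator [X, P]

# Acting on a smooth wave packet, [X, P] psi ~ i * psi away from the edges
psi = np.exp(-x**2 / 2)
err = np.max(np.abs((C @ psi)[5:-5] - 1j * psi[5:-5]))
print(err)                              # small, O(h^2)
```

The residual error shrinks with the grid spacing, which is the discrete analogue of the statement that the deformation of classical physics is controlled and tiny.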
Such a framework isn't "approximately equal" to any previous theory, it's not building on anything. It's very clear that it's just a messy attempt to fake the correct theory with some unjustifiable building blocks. 4. Thanks, Victor, it's really impressive. The music genre is just like the Yellow Sisters but it also has the extra Higgs bonus points. ;-) 5. I have never read a clearer explanation of the basic idea of QM, Lubos. Those folks who claim that Schroedinger's cat is half alive and half dead need only to read the + sign as "or". It's so very simple. Regarding your small disagreement with Greene, I recall Sheldon's famous retort regarding LQG, "Small disagreement!! The Pope and Galileo had a small disagreement!". 6. Hahaha I knew you couldn't let that mumbo jumbo on Scott's blog go unanswered. Yesterday I skimmed through all 100+ comments and was disappointed not to find yours ... does Scott have you on his "ignore" list? 7. Ha ha, that rocks :-))) ! And now I want to hear this particular parody of "Bohemian Rhapsody" they announced in the last few seconds, LOL :-D 8. Thanks, Gene, for your professionally loaded synergy. The quote from TBBT is of course a memorable one, LOL. ;-) 9. I find this article curious. I'm a believer in the Everett-Wheeler interpretation, and yet I agree with all the physics you describe here. And I don't recognise much of the version of EWI you present. Odd... The EWI is really about the quantum state of the observer, rather than the system being observed. The 'Copenhagen' view it is set up in opposition to is that you have a quantum system obeying quantum physics which is observed by a classical observer obeying classical physics - the transition from one to the other occurring through these projection operators and wavefunction collapse. The EWI view is that the system observed and the observer are both quantum systems, and that the process of observation is simply a temporary coupling of the two systems.
It's similar to the way coupled oscillators give rise to normal modes. When coupling is introduced, the modes of vibration of the components become correlated, and turn out to be the eigenvectors of the interaction matrix. If one system starts in a superposition of eigenstates, and is observed by (interacts with) another system, the observer enters a superposition of correlated eigenstates, each state corresponding to an observer observing a particular state. EWI is 'conservative' in the sense that it simply extends without modification the postulates of quantum physics to the observer. It agrees absolutely regarding the physics of the system being observed. The 'many worlds' label comes from an attempt to explain to the lay public what such a quantum observer would experience. Because the different eigenstates are orthogonal, they don't interact, and thus each eigenstate of the superposition would be unaware of all the others. It would be *as if* there were multiple observers in multiple worlds, each seeing one outcome. Thus, if you want to prove EWI wrong, you need to show how conventional QM handles the quantum state of *observers* without them ever entering superpositions. Or to say how else you would interpret the experience of a quantum observer in a superposed state. 10. I haven't tried for quite some time to see whether my comments would be censored now. The odds are always O(50%) for all such left-wing blogs. ;-) That's high enough a risk not to waste time. After all, my blog has a somewhat larger traffic than his blog so I am sure that a blog entry here is more visible than a comment at his blog. 11. The song about the Higgs is not a completely new genre. Björk did a whole album out of vocals only. 12. It seems to me that you only proved that the spin cannot be both up and down in the same world, but a multiple-world theory would say that the up and down occur in different worlds.
So if we let Ua mean up in world A, and Db mean down in world B, then P(Ua) = P(Db) = P(Ua and Db). Of course P(Ua and Da) = 0. Similarly with charge conservation, the MWI position is that the conservation law is that the charge is equal across worlds, not that the sum across worlds is equal to that of one world. 13. Sorry, Jonathan, I haven't made any assumption about the number of worlds so it applies to any number of "components". It is also complete nonsense that charge conservation could mean something other than the conservation of the total charge - of the sum of all the contributions. If there exist many components of the universe, in the usual sense of the word "exist", then the total charge is indisputably the sum over them. The charge can't really mean anything else. The U(1) symmetry generated by the charge has to transform all the charged fields in all components of the Universe so the value of the charge as the quantity has to be the sum. There is no way to escape these facts. The only thing you can achieve by denying these facts is that you completely misunderstand Noether's relationship between charges and symmetries, too. 14. My impression was that Everett was groping towards the idea of decoherence but not really getting there in a comprehensible way, i.e. you don't need a "special rule" to "collapse" the wave function. Bohr et al. might never have believed that, but perhaps lesser minds who came after them did. 15. "I haven't presented "EWI" because I've never heard any theory that was claiming to be called "EWI". This is a totally bizarre term." That's the Everett-Wheeler Interpretation, which is the correct name for it. 'Many worlds' is a misnomer. "It's good that you explicitly say that "EWI" treats observers differently than the "observed system" because this was one of my primary "accusations", one that disagrees" I haven't said that.
Observers and the observed are treated identically, and observation as a physical process is no different to any other sort of coupling or interaction. "The Copenhagen school was talking about observers in isolation but it never meant that such large bound states didn't follow the laws of quantum mechanics" No, it was an attempt to invent laws of quantum mechanics that would explain why a quantum reality *appeared* to observers to be classical. This was the 'wavefunction collapse'. The idea was that the physics proceeded according to reversible QM until it was 'observed', at which point it would collapse down to a single eigenstate according to some projection. But it was unclear what physically distinguished 'observation' from any other sort of interaction of systems, and all sorts of crazy ideas related to consciousness, gravity, size, complexity, and thermodynamics have been proposed. Everett's thesis was to simply say these devices were unnecessary. If a quantum observer interacted with a quantum system, it would enter a superposition of mutually non-interacting states, each corresponding to the observation of one outcome. That is to say, unitary-evolution, reversible QM already fully explained our classical experience with no need to posit a non-reversible 'collapse'. Everett's physics had absolutely no new content - it was simply (a subset of) ordinary and already widely-accepted QM - it just applied it to the observer problem and found there was nothing difficult to explain. "Quantum states *always* enter superpositions. In the cloud chamber example, the whole system evolves into a superposition of states of a charged particle and bubbles in the same direction - superposition over all directions." Agreed. But the question is about the quantum state of the *observer* who observes the particle, which is a spherically-symmetric superposition of observers each seeing a particle in one direction.
Having explained carefully that the issue was over the quantum state of the *observer*, and whether it was a superposition, I'm surprised to see you again point to the state of the *system being observed*. You need to address the question of whether the *observer* of the decaying particle, or of Schrodinger's cat, enters a superposition of observer states. What is the quantum state of the detection apparatus? Of the scientist? Can they ever be in superposition? 16. Dear Lubos, Even though I couldn't say exactly why, as a layman I intuitively felt the many worlds interpretation was inherently absurd the moment I heard it. It struck me as the sort of thing a not-too-bright nerdy science-fiction fantasist smarty-pants would go for, like the idea of a technological singularity. 17. If I remember it correctly, MWI is used to explain the different "worlds" in "Sliders" ... :-D 18. Jonathan is absolutely right. You should read his point again slowly and try to understand it. 19. All these things are completely untrue. You and your fellow many-world cranks may be using the term "Everett-Wheeler Interpretation" but it is a nonsensical term because Wheeler didn't contribute anything to it aside from the deliberately obscuring title "relative state". It's nonsense that Bohr et al. wanted to "explain why things look classical" by assuming a demarcation line. And it's nonsense that the Copenhagen school ever assumed any "collapse". I have addressed this question about 500 times already, and so did Heisenberg, Bohr, Dirac, and others 85 years earlier. Yes, states always evolve into very general complex linear superpositions of any basis vectors one may choose. As I predicted, you would still be unable to notice in your new comment, and I just added you to the blacklist because your degree of idiocy is something I just won't suffer through again. 20. The whole point of MWI is to redefine what "exists" means.
Also, by stating P⟨up|down⟩=0 you are assuming one world, which I assume you are calling a component. I've never seen Noether's theorem reformulated to work for multiple worlds, but you definitely can't just take the one-world formulation and apply it as is to a multiple-world situation. Whether MWI is true or not is different from correctly representing the concept mathematically. My own opinion is that whether MWI is true or not is not knowable by us, if it has been correctly formulated. A misformulated version of it will definitely be false. 21. The fact that lesser minds - such as Everett himself - said a lot of rubbish after 1925 is surely not a good reason to deny that the foundations of quantum mechanics, the framework of modern physics, were built correctly by Bohr, Heisenberg, Dirac, Pauli, and a few others, is it? 22. The whole point of MWI is to redefine what "exists" means. The only problem is that no one has ever seen such a point. You may dream about "redefining existence" except that there isn't any "new" definition of existence. It's just a sequence of lies. If you disagree, could you tell me what your new definition and the new derivation of the dynamics of charge etc. are supposed to be? To offer legitimate counter-evidence, you will have to revisit all these elementary derivations I did and offer your "alternative" derivation of all the experimentally tested conclusions for which the MWI discussion is inequivalent to proper quantum mechanics. You know very well that no such alternative theory may exist. It's very obvious that there can't be any "intermediate" or "third way" of existence that would allow one to have one's cake and eat it, too. It doesn't matter whether the spin exists in another component of the Universe or not. If there is an extra electron anywhere, it carries an electric charge and the conserved quantity has to be the sum of the charges, and similarly for other quantities.
Whether the universe is connected or not is an absolute detail, an irrelevant technicality. There isn't any "different kind of existence" in which the conserved quantities wouldn't have contributions from all pieces of the Universe. Also, it's not true that I was assuming one world for the orthogonality. The states are orthogonal whenever they're different-eigenvalue eigenstates of a Hermitian operator, e.g. J_z in this case; it's a trivial one-line proof in linear algebra. I don't need to make any assumption about the number of components in the Universe, it may be anything you want but the directly experimentally measured value is 1. 23. you are still stupid, accept it. 24. Paradox: if the MWI were true, there would be a universe where the MWI is not true. Conclusion: the MWI is wrong! :-) Sorry for my horrible English. 25. Luboš, I wonder if you saw the following paper: 26. As ignorant as I am, I still venture to wonder: Are there any physically meaningful tests of the MWI? I note the citations in wiki: Deutsch, D., (1986) 'Three experimental implications of the Everett interpretation', in R. Penrose and C.J. Isham (eds.), Quantum Concepts of Space and Time, Oxford: The Clarendon Press, pp. 204-214. Plaga, R. (1997). "Proposal for an experimental test of the many-worlds interpretation of quantum mechanics". Foundations of Physics 27: 559–577. Are these authors presenting worthy suggestions? 27. Well, Lubos has argued persuasively that MWI can't be correct, but I don't think this reason can have anything to do with it. If MWI were correct, there couldn't be any universe in which it was not correct. What you are saying is essentially that in MWI any imaginable universe exists - this is not true, which is, I guess, a minor nice thing one can say about MWI - and that therefore there must be a universe in which multiple universes do not exist. I'm sorry, but what?
There is an interesting philosophical problem with MWI that does make me uncomfortable with the idea, Lubos's reasons aside: If all probabilities are somehow realized in "other worlds", there must be some worlds in which every event is an unlikely outcome. In such universes, disturbingly enough, quantum mechanics looks wrong from a statistical point of view: It makes predictions about how likely certain things are to happen, which would appear wrong, purely by chance, in a large number of universes. Not only is the idea that as a scientist you could find yourself in a universe where the correct theory makes unreliable predictions abominable, one could make the argument that it is very unlikely we happen to be in a universe where quantum mechanics looks good - which might lead to anthropic arguments... 28. In Wiki they link the plus sign to a "disjoint union". Is this your meaning or am I on the wrong track? 29. That one should need to prove a theory false although it has no observable consequences and makes no computational advances is yet another objection. The burden is on the MWI folk, not on those who have so often proven that QM works. MWI in all formulations adds no new useful predictions. If one could not - although you demonstrate one can - demolish it, it would at best be a philosophical construct. If Fred, or anyone, can point us to a consistent modification of Noether's theorem that would conserve any property such that we could at least retain the explanatory power of symmetry, then only would MWI even rise to "not even wrong" stature. 30. It's a simple statement about how to calculate probabilities. The probability that A or B occurs, if the two events are mutually exclusive, is the sum of their probabilities P(A) + P(B) (if they aren't mutually exclusive you must subtract their joint probability, i.e. "and"). If you want to calculate the probability that both events occur, well, actually they are mutually exclusive so their joint probability should be zero.
But if they are independent it would be the product of their probabilities. 31. This URL is in response to the smoking monkey! 32. How is MWI explained in The Hidden Reality? I have some layman questions and I have a hard time googling any layman explanations, e.g., if the electron will have a 36% chance to have the spin "up" and 64% chance to have the spin "down" and both happen, how exactly do the probabilities manifest themselves in this deterministic situation? 33. So, your own opinion is that the truth of MWI is not knowable by us. What the shit does that mean? Again, physics is not about existence; it is about what we can observe! Jesus! 34. The prejudice might get worse and worse as time passes but Nature always recognizes the truth. The bad thing though is that humanity will waste a lot of time with the MWI and other crackpot theories. That's the frustrating part. Still I'm here "interneting" with Dr Motl and just last month I spoke with Brian Greene. It feels like I'm close to the Gods' Fight. Cool :-) 35. In the past I used to shrug my shoulders and thought people pursuing just plain wrong ideas (without trolling against others) were mostly harmless... But now I think if too many physicists keep doing things that can too obviously not work, this could cast a damning light on the whole field (of fundamental or theoretical physics, for example), which would be dangerous ... :-/ 36. Sorry, I have to disagree with the and/or discussion. Schrodinger's cat isn't "alive or dead". That would be a hidden variables "interpretation". The point of the cat experiment is that sentences like "the cat " don't make sense, QM only gives probabilities for the result of measurements. In the case of "0.6|up> + 0.8|down>", we could say: "The result of a measurement will be either |up> OR |down>", but we could just as validly say "The eigenvectors are |up> AND |down>". This is just a language issue, not an issue of understanding. 37.
Hi Lubos, You speak as if an MWI "split" happens when a photon hits a half-silvered mirror, for example. That's not what the theory says though; a "split" may only happen at the same time that an observation is made. Quantum states in linear superposition still exist in MWI and the mathematics is the same. Bohr said that the wavefunction collapses and the state is now an eigenstate of the measurement just made. MWI says that the universe splits into copies when the measurement is made, with one copy for each eigenstate that had a non-zero probability of being found. The issue of the split maybe occurring over a short period of time is the same as the issue of Copenhagen wavefunction collapse maybe taking some time. Of course you may say that decoherence is superior to both of these views, and you may be right. However, Bohr didn't know about decoherence; he rejected MWI in favour of Copenhagen. So, there is no issue with "split states being unable to reinterfere with themselves" or whatever. In the case of a Mach-Zehnder interferometer, for example, if the probability of detector B triggering is 0%, then MWI says that there is no splitting of universes; in this particular case MWI is just identical to Copenhagen. Also I am not sure why you bring up charge conservation etc. Each individual universe follows the laws of physics, including charge conservation. If the universe splits then there's twice as many electrons and twice as many protons. The Hilbert space that a wavefunction lives in is something that exists just within one universe. When a split occurs, each universe gets its own Hilbert space; the vectors in one space no longer have anything to do with the other space. The different universes, by definition, cannot interact with each other ever again. NB. I don't adhere to MWI personally, but the thing you refute in this post is not what Everett and his fans actually adhere to. 38.
Jorge Luis Borges, the famous Argentinian short story writer, wrote the story "The Garden of Forking Paths" in 1941, which suggested the idea of many worlds. David Deutsch has been championing this idea, and, in his popular book "The Fabric of Reality" and in some papers, claims to have "proved" it using the double slit experiment. He claims that the interference results from passing single photons through a double slit can only be explained by MWI, where "shadow photons" from the many worlds are interacting with the photon in this world - a really bizarre "proof". It is sort of proof by incredulity or lack of imagination. I must admit that many worlds is a fun science fiction concept, but isn't plausible. Also, the brains entertaining it seriously really don't compare with the QM founders' - Dirac's 1930 book is extremely clear, and Heisenberg's famous paper is complete magic, if very hard for me to follow. 40. So you both agree that observers themselves can enter into superpositions? (That's pretty much how I understand MWI, that I myself am in some way in a quantum superposition.) If so, it's not obvious to me what you're actually disagreeing over, other than whether you like the words "many worlds interpretation"... 41. Those people never clearly say what they actually believe. A vast majority of them never clearly says whether superpositions of macroscopically different states are legitimate states and the minority is as split as the world can never get. ;-) Of course my answer is Yes, it's the superposition principle. The reason why such superpositions aren't familiar from the "classical" perceptions is explained by decoherence but there is nothing fundamentally wrong about these superpositions. 42. Dear Old Wolf, one could perhaps agree that it is a language issue but your claims about the preferred language are still completely wrong.
The sentence "the eigenvectors are up and down" is valid but it has nothing to do with the state vector "psi" itself so it is not equivalent to my original sentence. It only describes a priori possible choices. The sentence I mentioned was meant to only describe possible states whose coefficients are nonzero so it carries some information. Your sentence carries none. There are no "hidden variables" in the sentence "the cat is dead or alive". It's just an ordinary logical statement using the conjunction OR. You surely don't want to prevent physicists from using the word "OR", do you? I assure you that physics or any science would be impossible without words like "OR". It's also untrue that statements "observable XY has value xy" may be meaningless. All such propositions are valid in general, by the basic rules of quantum mechanics. Histories constructed out of such sentences may fail to be "consistent", if I use the Gell-Mann-Hartle terminology, but one surely can't ban any of these sentences at the level of general quantum mechanics, before the dynamics is considered. So it's not really "just" language. You misunderstand the physics, too. 43. Dear Old Wolf, when you say that there's a "split" after the measurement, you are back to the question "what is a measurement" (who has enough consciousness or whatever to be allowed to measure: now the same agents have the right to split the worlds!), the same question that was claimed to motivate the whole interpretation because the Copenhagen interpretation is said not to answer such questions. In reality, there can't of course be any splitting of the world and I have proved so. So it's not true that my description isn't relevant for this question. Some people just don't want to hear things that prove that they believe in wrong things. In the Copenhagen interpretation, the "measurement" is a somewhat arbitrarily defined threshold/event after which one may treat the information using the laws of classical physics.
So one may talk about a "measurement" at a point behind which classical physics becomes an OK approximation, or later than that, but not before that. It's very important that it's a phenomenological theory only; nothing qualitative is actually changing about the world at the "moment of measurement". There isn't any "moment of measurement". Bohr never said that the "wave function collapses". All of your comment is just pure bullshit. It's impossible to react to every sentence written by everyone who writes complete bullshit about quantum mechanics, about the way the world actually works, about the mathematical possibilities of how it could work, and about the history of science. All these things are being distorted, rewritten, turned upside down, and you're a part of the problem, too. 44. I read The Fabric of Reality, too, when it came out. Deutsch also wrote that quantum computing, when it comes, will owe its power to massively parallel computations being performed on the qubits in many universes simultaneously that differ only by a smidgen. I guess the computer operators in our neighbor universes work for us... or we work for them :) Deutsch is near the top of my wishlist of people who I would like to see publish a guest blog at TRF, but ONLY if they stay around for the discussion afterward :) Borges was a genius writer; he surely mined theoretical physics for inspiration. 45. The Hidden Reality refers to "The Garden of Forking Paths", too, just mentioning it was Brian Greene's favorite literature on related topics. But one could seriously claim that if the "splitting worlds from quantum mechanics" is a legitimate insight in physics, it wasn't done by Everett for the first time but by this book. Everett didn't make it more meaningful in any detectable sense. He just subtracted the references that made it obvious it is science fiction, and he only removed them because he was advised by Wheeler. 46.
But you have to remember that this is the Copenhagen interpretation; "that observer is in a superposition" doesn't mean "there are two copies of that observer and they are in different states". It means "the wavefunction we should use to make predictions about the physical properties of that observer is the mathematical sum of several other possible wavefunctions". That is always trivially true, as a mathematical statement; what people would be interested in is whether coherent superpositions of "macroscopically distinct" wavefunctions were ever needed to describe "observers" in the real world. Decoherence prevents this from happening; but as quantum computers become more advanced, we will get closer to having superpositions of observer-like cognitive processes. 47. The danger would be a huge waste of time (like a few centuries)... like religions did in the past (and still do in some parts of the world). This would throw our civilisation into the dark ages of science. We are living in exciting times with the LHC, space probes etc... I hope physicists won't waste it. They have a huge responsibility on humanity's evolution path. 48. Yeah, that is right. I am worried too about the fact that even though, due to the advanced knowledge and technologies we have now, we could learn a lot about deep fundamental questions (probably even in the not so far future), this great chance could be gambled away ... I recently had a serious word with my colleague who lent "Vom Urknall zum Durchknall" to me, in order to tell him what I really think about it and the author, his misbehavior in Munich, etc ... :-D. In the course of our lively discussion I learned that he belongs to the sourballs who are of the opinion that it is completely legitimate to cut fundamental physics since it is not important compared to the "real world" problems humanity faces at present etc etc ...
His office mate (who I considered to be quite a nice and friendly guy too) is even worse; if he had the power to do it, he would probably turn off the LHC immediately to "save energy" for example, and abolish fundamental physics to save the money for something that is "more useful to humanity". Boy, did this discussion upset me, since I never thought that these two colleagues are among such hardcore sourballs :-(((. At the moment, I hardly manage to look them in the eyes without turning into an angry shadron again when we meet accidentally in our corridor, our small kitchen, or at our weekly group meetings, etc ... Happily the director of our institute is more reasonable: The day after the Higgs-independence day he explicitly pointed out the discovery of the Higgs as very important, we all should know about it and we should all learn how the Higgs mechanism works by ourselves if we do not know it already :-))). The second part of his speech I since then use as a pretext to read TRF even at work ... :-P :-D :-) 49. Thanks Eugene :-) Your translation sounds very nice in my ears, good for Focus Magazine to talk to real physicists instead of trolls ;-). Now after all Prof. Lüst seems to have friends in the media. Maybe Germany has to improve its image after the horrible appearance of our own local troll king in Munich, which leads the science journalists to pull themselves together ... :-D Hm, Nicolai wants to directly quantize spacetime ... is he an LQG theorist? The original German article with the pretty picture I will read as a nice and comforting bedtime story to sleep well and peacefully :-) 50. "The whole point of MWI is to redefine what exists means." That reminds me of Clinton's famous quote "It depends on what the meaning of the word 'is' is." 51. Thanks for the translation, Eugene. 52. Funny how these people all look alike to us ;-D (in TBBT Raj's dad confused Leonard with Sheldon and says "oh sorry, you all look alike to us"). 53. LOL, a fun scene.
But I have some problems believing that it's genuine and that there's this symmetry between the subraces as presented by TBBT. I may be wrong but I surely do believe that the diversity of the appearances among whites is larger than for other races. Or is it really just that we're not optimized to distinguish other races finely enough, and that they're similarly unequipped with the resolution for whites, in a symmetric way? 54. You're right. Whites have different hair colours, eye colours, complexions. Asians and blacks don't have those "striking" differences. 55. European populations have by far the smallest genetic variety of any of the mentioned "groups". Hair colours: yes, there is a big variety. DNA: no, there isn't. 56. Dear Lubos, I think that your derivation doesn't show that the MWI is incorrect, because my interpretation of the projectors is a little bit different than yours. In your proof P_A=|up><observes down| and not the pure state you wrote. (But your derivation doesn't really use the state; that's why I left this to the end.) What do you think about my position? 57. I think Lubos has explained this very well, especially with his reference to Mott as well as his proofs. Fundamentally, the confusion on this point resides in people's adherence to a notion of objective reality, which must be abandoned in the quantum world. As pointed out, we have to think in terms of probabilities. Is there some probability that if you had made a different decision in your life you could have been the next Einstein? Certainly there is, and that has to be factored into the evolving wave function, but does that mean there is some version of yourself leading the life of fame and fortune? No, and that's the point. You AND your doppelganger cannot be the outcome of an observation. It has to be you OR your doppelganger. The use of superposition by MWI advocates is misleading. All the superposition preserves is the indeterminacy of an unobserved state.
However, the state is NOT you AND your doppelganger. A superposed state is unique in itself. It is a state between mutually exclusive possibilities. This is very important because such states do not have classical analogs. MWI proponents want to change what "exists" means before they understand what it means now. 58. To me, the many worlds interpretation means something like: "The entire universe can be modeled as a quantum system, and the outcomes of experiments can be predicted probabilistically by analyzing the evolving correlations/entanglements". ("Many worlds" comes from the fact that in this model, the whole universe is quantum, and therefore in a superposition of mutually exclusive states.) This seems to be what Fred's saying, and to me the question of whether MWI is valid is basically whether that's true or false. Can you derive the Born rule just by looking at the whole universe's state vector? Can you make a toy model of a quantum mechanical universe containing a scientist measuring the spin of an electron and conclude "he's got a 60% chance of seeing an UP", for example? That's the question I'm curious about. 59. "The entire universe can be modeled as a quantum system, and the outcomes of experiments can be predicted probabilistically by analyzing the evolving correlations/entanglements." Well, I would probably call this paragraph "a few words, far from complete, describing quantum mechanics in its Copenhagen interpretation". This difference in our wording isn't just terminology; it's about the credit and the rewriting of the history of physics, because your terminology suggests that there is something in your sentence that the Copenhagen school didn't discover. There's nothing of the sort. So if one strips everything that is demonstrably wrong about "MWI" and anything ever connected with MWI, one is back to Copenhagen quantum mechanics, and a movement trying to deny that it was these men who actually made the revolution in the foundations of physics.
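The "60% chance of seeing an UP" toy calculation asked about above can be sketched directly. This is a hypothetical illustration (not anyone's actual model from the thread): prepare a qubit with |c_up|^2 = 0.6 and sample outcomes from the Born probabilities; each run yields up OR down, and the frequencies approach 60/40.

```python
import numpy as np

# Hypothetical toy model: a qubit with amplitudes chosen so that
# P(up) = |c_up|^2 = 0.6 and P(down) = |c_down|^2 = 0.4
c = np.array([np.sqrt(0.6), np.sqrt(0.4)])
probs = np.abs(c) ** 2

# "Measure" the spin many times; each single run gives a definite outcome,
# and the relative frequencies approach the Born probabilities.
rng = np.random.default_rng(42)
outcomes = rng.choice(["up", "down"], size=100_000, p=probs)
freq_up = np.mean(outcomes == "up")
print(freq_up)  # close to 0.6
```

Note that each sampled run produces exactly one outcome, which is the "OR, not AND" point made repeatedly in the thread.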
60. I do understand the physics, but we are disagreeing on which English words to use to describe it. Would you also say that in the double-slit experiment, the electron goes through one slit or the other? It's more commonly said that it goes through one slit and the other (i.e. both slits). 61. No, we are disagreeing on the substance. The propositions "A and B" and "A or B" are two completely different, inequivalent statements. The statement "the electron exists in slit A or slit B" is valid - with the disclaimer that one must avoid the wrong classical preconception that an objective answer exists. But the statement "an electron exists in the region of slit A *and* an electron exists in the region of slit B" is just wrong. It may be more common but it's wrong. You can't learn physics or logic or maths properly by choosing "more common" answers and sentences. You must choose more correct ones. 62. Right - fair enough. So according to you, people like me and Fred are Copenhagen advocates, and according to us, you're a many worlds advocate. Well, as long as people manage to communicate eventually... Is this maybe not what Brian Greene was referring to all along, though? I mean, I haven't read the book in question, but I imagine this was what he was angling at, maybe while introducing some slightly dodgy analogies to communicate with a lay audience... I watched that Sidney Coleman lecture last night actually, and it seemed like he'd got most of the way to making the Born rule come from all the other postulates. Which was quite cool actually... This is doable, right? Feel like doing a post on it? 63. I have no idea what you're talking about. Everything is upside down. I assure you that I have never been a MWI advocate according to myself and you have never grown enough to become a Copenhagen advocate.
Still, the claim that outcomes - with most of the information stored in entanglement/correlation - are predicted probabilistically is what the Copenhagen school brought to the world. I have done dozens of posts on what you're saying, discussed a lecture on this topic by Sidney Coleman, and this blog post was another one. But you have probably missed *everything*. Probably deliberately so. 64. Dear Lubos, of course + does not mean exactly OR in quantum mechanics, because of interference. But for cats it becomes a good enough approximation of OR because the interference is suppressed. 65. I just liked the way Sidney Coleman basically explained why an observer would get random results when measuring an electron's spin, without referring to the Born rule at all. He explained reduction of the wavefunction very clearly as well. It's something I'd never seen before, and I found it very interesting. It kind of seemed like you could use pretty much the same argument to just totally junk the Born rule and derive it from the other postulates, but I've never seen the full derivation. Not only that, but I've heard people claim that such a derivation is impossible. What's your position on this? Btw I notice Coleman cited Everett in that Quantum Mechanics in Your Face lecture.... *grin* 66. You're just an irrational asshole - sorry if you don't like my terminology: I just watched 10+ additional episodes of Penn and Teller's Bullshit, wonderful. What should I do with your junk? All your opinions, priorities, interpretations, methods are just junk. The Born rule is nothing else than the rule that QM predicts the probabilities and they're equal to |c_i|^2 where c_i is the complex coefficient of a decomposition of the wave function into a basis of eigenstates. If one uses modern, i.e. quantum, physics, *every* meaningful question may be reduced to a question about eigenvalues of observables and every such question may be answered by a calculation of the amplitudes followed by the Born rule.
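The |c_i|^2 prescription just described can be written out for a general observable. A generic numerical sketch (the operator and state here are arbitrary, just for illustration): decompose |psi> in the eigenbasis of a Hermitian operator and square the coefficients.

```python
import numpy as np

# An arbitrary Hermitian "observable" and a normalized state |psi>
A = np.array([[1.0, 0.5], [0.5, -1.0]])
psi = np.array([0.6, 0.8])

eigvals, eigvecs = np.linalg.eigh(A)   # columns of eigvecs are eigenstates |i>
c = eigvecs.conj().T @ psi             # coefficients c_i = <i|psi>
probs = np.abs(c) ** 2                 # Born rule: P(a_i) = |c_i|^2

assert np.isclose(probs.sum(), 1.0)    # the probabilities sum to one
# Consistency check: <psi|A|psi> = sum_i P(a_i) * a_i
assert np.isclose(psi @ A @ psi, np.sum(probs * eigvals))
```

Every question about eigenvalues of an observable reduces to this two-step computation: amplitudes first, Born rule second.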
It's the most important fundamental pillar of all of modern science. The Born rule is exactly true, it is fundamental, it earned Max Born a well-deserved Nobel prize, and you as well as everyone else who wants to "junk it" is a deranged scumbag and idiotic fucked-up asshole. Whether one may "derive it" from other postulates is completely irrelevant. One surely has to start with some postulates that are at least as strong and far-reaching as the Born rule. The previous sentence is a tautology; the greater strength of the other hypothetical postulates clearly follows from the fact that the Born rule can be derived from them. Your idiotic ad hominem comments about Coleman are completely distasteful, too. I am sure that there isn't an iota of difference between my comments on QM and his comments, I know that he mentioned Everett and so did I, and I also know that he credits the Copenhagen founding fathers with the discovery of quantum mechanics and all the foundations needed for what he has ever said about the inner workings of quantum mechanics. 67. Andrzej Czechowski, Aug 28, 2012, 12:45:00 AM: The Many Worlds Interpretation (as I understand it) assumes in effect that each path under the path integral corresponds to a separate reality. Inserting unity in the form 1 = Sum_i |i><i| is interpreted as a sum over different realities. So your argument misses the point, I think. The measuring instrument cannot be simultaneously in the states |1> and |2> (Landau) but Everett assumes it is (in different realities). I am sorry, I would be only too happy to see the MWI disproved. 68. Is Fred the same Fred that did silly heat experiments a while back? If so, he doesn't know a whit of physics. That sums it up. 70. If I understand you, you are simply removing the process of collapse entirely from the theory (and from the world). Every interaction results in a superposition, characterized by quantum mechanics, that continues to propagate. End of story.
There is a single universe that is much more complex. The idea of collapse is a crutch to try to explain what we think we experience and is the root of all of the confusion about what an observer is and what constitutes an observation. That is how it was explained to me a long time ago in a private conversation with a mathematical physicist of some renown, and it seemed a wonderful simplification with an enormously expanded view. I don't think Lubos' attack is relevant to that view. I stand ready to be eviscerated, even blacklisted. :-) 71. Andrzej, sloppiness is an important part of this philosophically prejudiced demagogy. The formula 1 = Sum_i |i><i| must be interpreted as the sum over *possible* realities, not actual realities. There is absolutely no ambiguity about this statement - it may be directly measured. The interpretation of the sum above is exactly the same as that of int dp dq / (2 pi hbar), the integral over the phase space in classical physics. This is not just an analogy; in the classical limit, the sum above reduces to the integral over the phase space. Now, the individual points of the phase space are clearly not realized simultaneously - the phase space is the complete set of possible states in which the physical system *may* be, but the number of states in which it actually is is demonstrably equal to one. The fact that the squared amplitudes |c|^2 are probabilities isn't one of dozens of possible speculations; it is a completely directly observed experimental fact. We may just associate the wave function with a particular experimental situation and if we measure once, we don't see any wave function in the experiment because the wave function isn't observable - both in the linguistic and technical sense. If we measure many times, we see the probabilistic distributions related to the wave function in the usual QM way, thus proving that the wave function is a semi-finished product for (all) probability distributions.
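The resolution of the identity 1 = Sum_i |i><i| invoked above is a purely mathematical fact about any orthonormal basis, independent of any interpretation; a few lines of code confirm it (a generic sketch using the eigenvectors of a random Hermitian matrix as the basis):

```python
import numpy as np

# Any orthonormal basis resolves the identity: sum_i |i><i| = 1.
# Take the eigenvectors of a random Hermitian matrix as the basis.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = H + H.conj().T                      # make it Hermitian
_, vecs = np.linalg.eigh(H)             # columns are orthonormal eigenvectors

resolution = sum(np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(4))
assert np.allclose(resolution, np.eye(4))
```

Inserting this sum into any amplitude is an exact rewriting, which is why it cannot by itself imply anything about "actual" versus "possible" realities.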
This is not a random guess; it is a claim that may be directly observed in experiments. Maybe you would first have to open your eyes. 72. There has never been any collapse in the proper, Copenhagen (and physically equivalent, probability-based instrumental) interpretation of quantum mechanics. It has always been a crutch, a deeply misleading popular metaphor for newspapers that kind of influenced even those who shouldn't have been influenced and who should know better. The collapse isn't an actual process; it is just a simplification of our own knowledge, the arbitrary moment of time after which we may just use the conditional probabilities assuming the observed facts and forget about the rest of the probability distribution defined for values different from those that were already made known to us. 73. This is without a doubt the clearest and simplest explanation of the basic idea of QM. I agree that MW theories are hogwash. To start with, how are many worlds to be tested? Trusting many worlds would be like trusting a person who claims to have been given a so-called divine message in sleep.... Looking forward to more of your posts 74. I would say that it is rather just a simplification of our perspective to something we can grasp. That no such moment really exists, just a universe of propagating and expanding superpositions due to prior interactions and leading through subsequent interaction to more of the same. This is the sense in which there are "many worlds." What I perceive as "I" is purely historical and is just a locus through it, among which there are many ("many" being an enormous understatement). Out of all this, our perception of singular events and their probability is mysterious and in some deep way relates to measure, which is the real domain of quantum mechanics. Back to the armchair where I belong. Thanks for giving me the floor for a moment. 75.
Dear Don, fine, if this simplification or any other simplification helps anyone, he may use it to improve his life. But he shouldn't call it science. Science isn't about simplification at any cost; science only allows simplification as long as the theory remains accurate as a description of all the known and relevant phenomena. One may simplify the explanation of sex to children by telling them that babies are brought by storks. Such a simplification helps but it is not valid science. There is no genuine stork that is bringing babies - babies are born without any birds whatsoever (except for the bird that the Czech readers are thinking about now) - and in the very same sense, there is no collapse, no many worlds, no hidden variables, and so on. Quantum mechanics just predicts probabilities of outcomes directly, without any of these intermediate storks, and the assumption that there exists one of these storks leads to a direct conflict with experiments as long as one looks at the experiments comprehensively enough. 76. This is more a technical point, I guess, and doesn't change the content of the article, but I was wondering: since the "multiplications" were direct products, are the sums really direct sums? My intuitive interpretation of the two types of products as applied to probabilities in general is that states are like marbles in bags. If you have a product of states, then it's like one state is a marble (not necessarily a specific marble) from bag 1 and the other is from bag 2, so you take the direct product of the states to symbolize the fact that the probability to have the state s1(x)s2 is the product of the probability to have s1 and the probability to have s2. Whereas for a sum of states, you're really saying that the two states are two marbles from the same bag, so obviously when you draw you only get one or the other. 77. I hope I understand you well, in which case you're right and it's important.
Two marbles have states that live in the *tensor product* of the single-marble Hilbert spaces. It's important that the tensor product isn't the direct sum. The dimension of the tensor product is d1*d2 where d1,d2 are the dimensions of the single-marble Hilbert spaces. On the other hand, the dimension of the direct sum is d1+d2. The direct sum of two linear spaces may also be "geometrically" described as the Cartesian *product* of the two individual linear spaces. But this Cartesian product is really just a sum. What one needs for two marbles is the tensor product of the Hilbert spaces, and the probabilities for conditions "marble 1 does something and marble 2 does something else" reduce to the product of the two individual probabilities if the marbles are unentangled. And yes, if one talks about one marble, e.g. in the double slit experiment, its having more possibilities where it can be corresponds to extending the space as a direct sum. So if a single marble can sit in one of 15 red holes or 4 blue holes added later, it may sit in one of 19 holes and the Hilbert space is the direct *sum* of the original 15-state and 4-state Hilbert spaces. The marble is in one of the 15+4 holes, so it's either in the red holes OR the blue holes. This "OR" is what corresponds to the direct *summation* of Hilbert spaces. It doesn't increase the number of objects; it only increases the number of mutually excluding states/properties that the objects may have. 78. Sorry, I don't know what happened with my previous post. Basically, I wanted to write two things: 1) Since the projectors are related to the observations of an observer, P_A*P_B=0 only means that the same observer cannot observe the electron in both states simultaneously. In the MWI, there are two different observers in two different worlds who observe the different states, so there is no contradiction.
2) I think that the MWI only works after decoherence, so the MW state should be described by a density operator where the off-diagonal terms are zero. If the decoherence isn't complete (so there are small off-diagonal terms), then the MWI is only an approximate picture. 79. Dear Rezso, you probably used "smaller than" and "greater than" symbols, which you shouldn't in a partly HTML-enabled comment editor. 1) Your attempt to escape the inescapable by "restricting it to an observer in one world" would only be justifiable if you could also create the corresponding mathematical objects that would describe whether a meta-observer in the whole system of worlds may see electrons in both spin states - anywhere. If it is in principle impossible to talk about the observations in the whole "MWI multiverse", then the MWI multiverse obviously doesn't exist. Needless to say, such an attempt will fail because it's exactly equivalent to the previous problem with the word "observer" replaced by "meta-observer". There can't be any operator that expresses the existence of objects or their properties in the "multi-world" for the reasons I have already demonstrated, and your newest "excuse" is just a terminological sleight-of-hand that tries to redefine the "observer" in such a way that the "multi-world" becomes inaccessible in principle. At any rate, if done correctly, the argument leads to an inescapable conclusion: the other worlds can't exist. 2) Decoherence is a meaningful theory that can be explained and verified by well-defined mathematical formulae but MWI is not. There isn't any "MWI after decoherence". MWI is a philosophical prejudice that was promoted decades *before* decoherence was discovered and it is not really equivalent to decoherence, either. And decoherence doesn't produce any "multiple worlds". It explains why some bases of states are more observable in the real complex world than others.
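The tensor-product versus direct-sum dimension counting from the marble discussion a few comments above (d1*d2 for two marbles, d1+d2 for one marble with extra holes) can be checked in a couple of lines; a minimal sketch using the 15-red-hole / 4-blue-hole example:

```python
import numpy as np

d1, d2 = 15, 4
A = np.eye(d1)           # identity on the 15-state "red hole" space
B = np.eye(d2)           # identity on the 4-state "blue hole" space

# Two marbles ("AND"): tensor product, dimension d1 * d2
assert np.kron(A, B).shape == (d1 * d2, d1 * d2)   # 60 x 60

# One marble with extra holes ("OR"): direct sum, dimension d1 + d2
direct_sum = np.zeros((d1 + d2, d1 + d2))
direct_sum[:d1, :d1] = A
direct_sum[d1:, d1:] = B
assert direct_sum.shape == (d1 + d2, d1 + d2)      # 19 x 19
```

The Kronecker product `np.kron` implements the tensor product of operators, while block-diagonal stacking implements the direct sum, mirroring the "AND" versus "OR" distinction.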
You are talking about a non-existent combination of MWI (which is an ill-defined piece of rubbish) and decoherence (which is a homework exercise and totally indisputable consequence of ordinary quantum mechanics applied to states in which an interesting system is treated separately from the uncontrollable environment). 80. Dear Lubos, thank you for your answers. 1) I completely agree that we should not introduce extra structure for meta-observers, because in the MWI, we would like to describe the measurement process without the measurement axiom and without adding any extra structure to conventional unitary QM. 2) Do you think that decoherence alone can solve the measurement problem completely? Surely, it can explain why macroscopic objects behave in a classical way. And it is a physical process; no one can deny its existence. But is it the final answer? The result of decoherence theory is a density operator for the system (after the environment is traced out). The probabilistic interpretation of this object should be put in by hand. For example, if you take a look at this article, Joos concludes that decoherence is not the final answer and that there are only two possibilities for the good interpretation (if hidden variable theories such as pilot wave theory are excluded): 1) we should modify the Schrödinger equation to get a real, objective collapse 2) we should use the MWI. My opinion: I dislike option 1), because the Schrödinger equation is equivalent to unitary time evolution, so even a small modification would lead to a completely different philosophy behind the equation. So, if I were forced to choose, I would clearly go with option 2). What do you think? You clearly dislike option 2), but I suspect that you will say that you dislike option 1) too. 81. Dear Lubos, You wrote: "Also, I feel very uneasy about your sentence "the probabilistic interpretation of [density matrix] should be put by hand". Which hand?
What is the sentence supposed to mean except for trying to spread some irrational and totally unjustified doubts by some rhetorical tricks? The density matrix is *by definition* the quantum version of the probability distributions on the phase space, so of course it has a probabilistic interpretation, by definition." The meaning of my sentence was the following. In modern decoherence theories, you start with the wavefunction of the system+environment, build an orthogonal projector from it, and then you trace over the environment to obtain the density operator of the system. So, there are no probabilities in the definition! First, the decoherence term in the master equation has to kill the off-diagonal matrix elements (in the preferred basis). After that, the remaining diagonal matrix elements can be interpreted as classical probabilities. What I wanted to say above is that this is a new assumption which is needed to connect the theory with experiments. This is why some people think that there is something more to the measurement problem. Of course, one can argue with this analysis. So now, I'm going to argue with myself. :) One can say that the properties of the density operator make a probabilistic interpretation natural. Hermiticity means that the diagonal matrix elements are real, Tr=1 means that their sum is 1, and positivity means that all of them are positive. So a probabilistic interpretation is natural. Oh no, today it seems that I have convinced myself that my previous argument was wrong. :S 82. You say "So, there are no probabilities in the definition!". That's a highly bizarre assertion. Whenever you want to interpret the calculations physically, you *need* to use the word probability because it's the only valid interpretation of the matrix elements of the density matrix, of the expectation values of projection operators, and so on. No, there is absolutely no "new assumption" in decoherence.
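The trace-over-the-environment construction described above can be sketched in a toy example (a generic two-qubit illustration, not taken from the thread): entangle a system qubit with two orthogonal environment states, trace the environment out, and the reduced density matrix comes out diagonal with the Born probabilities on the diagonal.

```python
import numpy as np

# System qubit entangled with a 2-state "environment":
# |Psi> = a |0>|e0> + b |1>|e1>, with <e0|e1> = 0 (full decoherence)
a, b = 0.6, 0.8
Psi = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])

# Full density matrix with indices (s, e, s', e')
rho_full = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over the environment: rho_sys[s, s'] = sum_e rho[s, e, s', e]
rho_sys = np.einsum('iaja->ij', rho_full)

assert np.allclose(rho_sys, np.diag([a**2, b**2]))  # off-diagonals are gone
assert np.isclose(np.trace(rho_sys).real, 1.0)
```

With orthogonal environment states the off-diagonal elements vanish exactly; partially overlapping environment states would leave small off-diagonal remnants, matching the "incomplete decoherence" caveat mentioned earlier in the thread.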
Decoherence is just ordinary quantum mechanics applied to a particular kind of question about the co-existence of a system with its environment. The interpretations of all objects such as the density matrix are exactly the same as they always are in quantum mechanics. The probabilistic interpretation is not only natural but it's also one that may be directly derived from observations and the only one that allows the theory to reduce to the previously known classical limits. 83. Hi Lubos, I highly admire your never-ending defence, against all comers, of the probabilistic interpretation and encourage you to never stop; it's clearly how nature is. The problem is, it's a mix of psychological non-acceptance and minuscule logical loopholes (e.g. MWI, super-determinism, crazy godlike pilot waves) that allows the fretting deniers of nature's randomness a corner to fight from, and while you deal superbly well with the logical arguments, you'll never solve the psychiatry problems. FWIW, I agree with YOU. 84. Dear Lubos, okay, you convinced me that you are right. The probabilistic interpretation of the density operator follows naturally from its mathematics, so nothing more (like MWI or something else) is needed; decoherence alone solves the measurement problem. But I still maintain that the fundamental definition of the density operator should use the partial trace and not the classical probabilities. And this is a difference between decoherence theory and ordinary QM (= Copenhagen Interpretation).
In ordinary QM, the construction of the theory goes in the following order: 1) wavefunction, unitary time evolution; 2a) measurement axiom, wavefunction collapse, classical probabilities; 3a) the density operator is defined from the probabilities and from the corresponding collapsed wavefunctions. But the decoherence-motivated construction of QM goes as: 1) wavefunction, unitary time evolution; 2b) system+environment, the density operator is defined by a partial trace; 3b) classical probabilities emerge from the density operator after decoherence is complete. So, I want to say that 2b) is a better definition for the density operator than 3a), because 3a) relies on the ad hoc wavefunction collapse rule, while 2b) doesn't. Do you agree with me on this? I want to know what you think of this. Do you think interstellar travel is possible, or science fiction? Hearing from a physicist would be great. 86. What I think is that your position is just linguistically powered rubbish that can't be given any interpretation that makes sense and you are just wasting my time. In an experiment with one electron, "electron exists with spin up" is exactly the same proposition as "electron has spin up". Trying to create any doubts about this is totally irrational. Also, if you used the MWI philosophy to arbitrarily insert existential quantifiers ("there exists a universe in which") in front of all propositions, you would totally screw up all the rules of logic about the propositions. You can't just add quantifiers without totally changing the logic. In particular, "electron has spin up" is the exact negation of "electron has spin down" but "there exists a universe with electron up" isn't complementary to "there exists a universe with electron down", especially because both propositions would almost certainly be "true" in an MWI. So this is experimentally excluded because we know that they're negations of each other. In such comments, I see that any discussion is totally hopeless after the first sentence.
You say that we have different interpretations of projection operators. Holy fuck. How can you have a different interpretation of a projection operator? It is a very elementary object in principle, both mathematically and physically, and there is only one interpretation that is consistent with observations as well as logic, and it's the interpretation of QM. The interpretation is that a projection operator is P obeying P^2 = P, and we also want P^dagger = P, that is identified with the observable having No/Yes i.e. 0/1 eigenvalues answering a question - namely the question: is the physical system in a state inside the lambda=1 eigenspace of P in the Hilbert space? The expectation value of this P in a pure state, or Tr(P.rho), is the probability that the proposition holds. That's it. What the fuck is your interpretation? You're always promising some other interpretation but there isn't any. Crackpots like you are talking too much. In reality, the MWI babblers haven't even decided whether projection operators play any role in MWI at all. The reason they haven't decided is that none of the two answers makes any sense and they know it. At any rate, I am waiting for your prescription for how to use projection operators to make the calculations in your non-Copenhagen framework, the MWI counterpart of my paragraph two paragraphs above. Before you actually have something of the sort, could you please kindly shut up and stop these meaningless tirades that only show one thing, namely that you're never willing to learn anything and you prefer to spit tons of this vague nonsensical mud over the Internet? 87. Dear Lubos, the comment you just replied to was my oldest comment in the thread. But it was broken, and I only removed "smaller than" and "greater than" symbols from it to make it work. "In such comments, I usually see that any discussion is totally hopeless after the first sentence."
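The projector bookkeeping spelled out above (P^2 = P, P^dagger = P, probability = Tr(P rho)) takes only a few lines to verify; a minimal sketch for a spin-1/2 system:

```python
import numpy as np

# Projector onto |up> in the {|up>, |down>} basis
up = np.array([1.0, 0.0])
P = np.outer(up, up)                    # P = |up><up|
assert np.allclose(P @ P, P)            # P^2 = P (idempotent)
assert np.allclose(P.conj().T, P)       # P^dagger = P (Hermitian)

# Pure state |psi> = (3/5)|up> + (4/5)|down>  ->  rho = |psi><psi|
psi = np.array([0.6, 0.8])
rho = np.outer(psi, psi.conj())
prob_up = np.trace(P @ rho).real        # Tr(P rho) = probability of "Yes"
assert np.isclose(prob_up, 0.36)
```

The eigenvalues of P are 0 and 1, so P is exactly the No/Yes observable for the question "is the system in the lambda=1 eigenspace?", and Tr(P rho) answers it probabilistically.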
Actually, you already convinced me that you are right and the MWI is an incorrect interpretation. :) "Also, if you used the MWI philosophy to arbitrarily insert existential propositions, you would totally screw all rules of logic about the propositions. You can't just add quantifiers without totally changing the logic." Yes, you are right. I hadn't thought about this when I wrote my old post. But the old Copenhagen interpretation is incorrect too because it uses an ad hoc wavefunction collapse rule and the preferred basis is chosen by hand. Decoherence theory is the correct interpretation of quantum measurements, where classical probabilities naturally emerge from the density operator after decoherence is complete. But the wavefunction never really collapses. Previously, I thought that the decoherence interpretation could be naturally merged with the MWI, but you convinced me that I was wrong. 88. Your calculations and logic are both wrong. You're simply calculating the probability that electrons within the same time-line would exist in multiple states at once to be 0, which would obviously be true. Within the same time-line the probability of existing in different states would be 0, but that doesn't have anything to do with the MWI. All interpretations of the double-slit experiment use the same mathematics and quantum theory; the results of the double-slit experiment are just interpreted differently. In MWI there is no collapse; you see the electron that exists in your time-line. So your disproof is all wrong. I'm certain that in the future multiple time-lines will be experimentally proven, though I'm not sure about the MWI in particular. 89. By a timeline, you probably meant a world line, right? ;-) Show me your "correct MWI" calculation of the same trivial thing, but before you do so, please accept the fact that your comment is just layman's rubbish. 90. Very nice find, Victor.
Also nice (in my opinion): Newton's First Law, by the Number Sixes, and the Double Slit Experiment by Future Management Agency. Both of them under Creative Commons licenses, so you can't beat the price :D 91. As far as the proponents of many worlds go, it's not the laymen and self-proclaimed experts and prophets who just can't get their minds around the wave function being a subjective probability distribution because it mathematically looks like a classical wave for one particle, preferably spinless, that bother me. Most of those people are on the level of the Flat Earth Society. It's the slightly bigger names that subscribe to this ill-conditioned interpretation that freak me out, e.g. DeWitt, Zurek, perhaps partly Wheeler (he did talk about the wavefunction of the Universe), and I have a hunch that Sidney Coleman is somewhat more of a fan of many worlds than you would like to think. The language that he uses in his In Your Face talk is kinda MWI-ish, e.g. at about 12:15 he says something like "we were in the branch that got spin up". Also, he never says that he takes the wavefunction to be non-physical, and his position seems to be "associated" with Everett, obviously, Yakir "Weak Measurement" Aharonov, David Albert, and Zurek. Now Zurek did a lot of great stuff on decoherence, but he subscribes to a modified many worlds interpretation of QM. Anyway, Sidney says a lot of smart things, but I'm really worried about these MWI-style statements. This might not be as serious as I take it to be, though it's hard to know, and even if it is, of course none of Sidney's opinions reduce the amount of inconsistency of MWI. But it points the way to a curious psychological phenomenon, or problem, if you will.
And that is that otherwise smart people, very smart even, who can extract wonders from the mathematics underlying our physical theories, are reduced to complete morons when it comes to interpretational issues, the debating of which usually consists of very simple and irrelevant mathematics obscured to an arbitrary degree by metaphysical/science-philosophical vocabulary that they probably aren't really qualified to use. Take the recent work of 't Hooft and Weinberg for instance. I find that very mystifying. I find Zurek to be the most curious of these figures. In his paper he advocates a variant of the MWI, of course called "relative-state" to make it more bland. Amongst other things he claims to derive the Born rule non-circularly (funny how after Deutsch et al's failures it sounds like this is a specific type of its derivation :)) with the aid of envariance, a theoretical aid much in the spirit of the decoherence program. Unfortunately he truly doesn't sound like he's talking complete crap. He might be reaching for the deepest interpretational layers of quantum mechanics that can be reached without denying the objective existence of the wavefunction. Of course that might be worse than not reaching for them at all, since the consistency of the Copenhagen interpretation makes it completely unnecessary and it's probably a ton of bullcrap anyway, only neatly worded and convoluted to the point where it looks convincing. But it does look convincing. Do you have an opinion on Zurek's derivation? Especially, do you think you can identify a point at which possible inconsistencies arise? I tried debunking it myself but haven't spotted the obvious problem yet. The only paper negatively addressing Zurek's interpretation that I could find was by Ulrich Mohrhoff, a curious guy who I think is doing a good job in patiently explaining to the anti-quantum zealots why the probabilistic interpretation is the thing.
But while OK, he seems to be doing stuff a bit differently from, say, the consistent historians, so I am very much interested in your opinion. 92. Dear notallama, I would endorse a big part of what you write. Just a few comments. The "relative state interpretation" isn't Zurek's renamed stuff. It's the title, perhaps with formulation instead of interpretation, of Everett's 1957 thesis. So Zurek seems to be analyzing the *same* thing. Still, his 2007 paper starts by assuming the probabilistic interpretation, as far as I can read it. I also agree that Sidney Coleman himself used some MWI-like-sounding language. But as far as I can say, it's always just the language. He may have used the word "branch", perhaps because he was inspired to use it, but I don't see any indication that he would actually interpret the squared amplitudes as anything else than probabilities, or that he would try to look for a model where the wave function is "more real". Many of us have adopted certain phrases, especially because the strongest "proselytizers" when it comes to quantum mechanics are those who don't understand QM properly. (I remember a Czech physicist at my Alma Mater in Prague, Bedrich Velicky, who knew very many famous world physicists and always complained how universities don't teach the "real deal"; but when it came to his "real deal", it was some naive "realist model", I don't remember which one.) So each of us picks a tolerable one among them; Coleman probably picked the Everett language as the most tolerable one, but I don't think that it has influenced his thinking. I agree that those people are smart and brain-powerful when it comes to some technically more demanding questions but they just become complete idiots when the topic switches to interpretation.
And I exactly agree with your observation that their technical capabilities suddenly evaporate and the most difficult maths is on the level of "squared amplitudes", and in most cases, they don't even square it right or they don't care whether it should be squared, and so on. It's completely weird. They probably see other "otherwise very smart" people who are doing the same thing so they feel justified to be equally breathtaking idiots. It's an infection of a sort. The 2007 Zurek paper is full of lots of redundant gibberish but of course I think that it's among the saner papers on the interpretational issues. It explains that MWI can't do almost anything right, as I read it, but one may supplement it with his insights - which are described by 50 different names or metaphors, decoherence, quantum Darwinism, einselection, envariance, and so on, and so on, but the essence is always the same mechanism - to get a sensible "interpretation". Also, if I guess right, the |c|^2 probabilities are extracted by looking at many states of an entangled complicated system, including the environment, chosen so large that each micro-outcome corresponds to a large number of microstates of the whole big system with the same absolute value of the amplitudes; by symmetries, the probabilities of each are probably claimed to be the same, which allows one to "derive" |c|^2 in general by summing over many terms with the same absolute value. I think it's silly to think that this is more fundamental than the general rule for the situation in which the amplitudes have different absolute values - because they almost certainly have different absolute values, so it's contrived to assume that they should have the same absolute value. But it's a part of the hatred against everything that is quantum, including the simple Born rule.
Some people just don't want it to be fundamental - well, one of the postulates, or derived statements that are so close to postulates by derivations that it makes no sense not to call them fundamental - and Zurek "partially" joins this idiotic movement in the paper. 93. You're basically just saying that it is impossible to observe an electron that is simultaneously spin up and spin down, and that every observation will confirm that charge is conserved. I doubt any MWI proponent would disagree with either statement or feel that it contradicts their interpretation. I think there's a valid philosophical objection to an interpretation that talks about the reality of alternate possibilities that can never be observed (if you can't possibly observe them, in what sense do they deserve to be called real?). But I don't see how you'd derive a mathematical contradiction to it, since at its heart it's just a very literal interpretation of the mathematics of the wave function. Personally, I don't see MWI as being much different than people talking about virtual particles in QFT. You'll never observe them, so are they real, or is it just a convenient way to visualize the math behind your theory? Probably the latter, but I'm not going to go to war over it and call people stupid monkeys if they talk about virtual particles. 94. MWI proponents may "feel" ;-) various things but science is not about feelings and the contradiction is there. It's not true at all that this multi-world fantasy is a "literal interpretation" of the wave function. It's a wrong interpretation designed as crutches for the stupid people but it has nothing to do with the right probabilistic interpretation and indeed, it contradicts it. As always, the key point of MWI that makes it incompatible with the real world - and with quantum mechanics - is the idea that there objectively exists some classical information that is independent of the observers and observations. This ain't the case.
Sensible people talk about virtual particles but they understand that they are not real physical particles. They're mathematical constructs contributing to probability amplitudes for processes involving real particles. But the point of the many worlds is different. It's the very point of MWI that those worlds are "real" in the classical sense, and this assumption may be shown and has been shown to contradict observations. If you're not getting it, you *are* a stupid ape. 95. You write: "But the statement "an electron exists in the region of slit A *and* an electron exists in the region of slit B" is just wrong (...) It's a point-like particle, there is only one electron (by charge conservation etc.), and it can't be in both slits at the same moment." But what about de Broglie's waves of matter? An electron can be described as a wave, and waves are certainly not just single points of space. All kinds of waves occupy a region of space, so a wave can be in both slit A and B at the same time (just like Russia lies in both Europe and Asia, because it's not a point but an area) and there's nothing wrong with it. It's obvious in the case of double slit experiments performed with waves of water, for many people it's obvious in the case of light, and I think there is no reason to think differently in the case of electrons. 96. Dear Kmut, one could say that it was the very main purpose of this statement of mine to emphasize that the electron is *not* a classical wave. Prince de Broglie misunderstood those things much like you do, even after 1924 when he proposed his wave, which is why throwing his name around can't turn your invalid statements into valid ones. A classical water wave goes through both slits - one may detect "something" by an appropriate detector in both of them. But when an electron goes through the pair of slits, there is *nothing* that could ever be detected in both slits simultaneously.
If you use a detector of any kind, call it a detector of waves, particles, disturbances, spirits, whatever, and if these detectors only operate in the regions around the two respective slits, they will never beep simultaneously. Also, the electron, unlike a classical wave, will always create a single point at the photographic plate. It's just not true that "there is nothing wrong with the electron's being a classical wave". There's a lot of wrong things. A whopping 50% of the statements one can make about waves are plain untrue about the electron. Just to be sure, many laymen don't get it: one wrong thing would invalidate your claim. But there are lots of ways to invalidate it; it's just wrong. A classical wave may be a method to think about the behavior of an electron or a quantum particle in some respects but it's surely not a valid model for all of its behavior. An electron is not a classical wave and the wave function isn't a classical wave, either. 97. Dear Lubos, I really didn't want to make any ad hominem arguments. It certainly wasn't my purpose. I'm not a physicist but a person who would like to become one in the future, so my knowledge in this area is basic, especially when compared to yours. But you must know that it isn't unusual for the people who teach QM to look at this problem from a different side than you do. You say that electrons aren't classical waves because they cannot be measured in a classical way. But there are people who say that electrons are like classical waves because the Dirac equation, and the more basic Schrödinger one, describe a classical, i.e. deterministic and unique, time evolution, so the real difference is in the act of measurement. In the quantum world the measurement is more "drastic" than in the classical one. To observe waves on water we only need some light with energy too small to change the pattern in a statistically significant way.
But in the quantum world the energy of the wave we need to measure the position of the electron is big enough to interfere with it. I've read recently some words by Wojciech Zurek, and I had the impression that he understands QM in a similar way, stating that macroscopic objects are all quantum and the reason why they don't behave like waves, and why they have a unique location, is that they're not sufficiently separated from the environment, which is responsible for the huge number of interactions that forbid the macroscopic objects from behaving like waves. Isn't this view just dual to yours? And how can one interpret interference patterns in the double-slit experiment with electrons? If the electron is a point-like particle, then what forbids it from behaving like a classical point-like particle and forces it to change its momentum? Is there any particular book where I could find all the answers to these questions? Thank you in advance. 98. "But there are people who say that electrons are like classical waves..." There are many people who say many dumb things and indeed, it's the main purpose of all these blog entries of mine to correct the widespread misconceptions. It's disappointing if you don't appreciate it and it's surprising that you seem to read this blog anyway, even though the correction of stupidities said by people, especially if it is many people, is self-evidently the defining driver behind this blog. The wave functions evolve according to analogous "deterministic" equations as classical fields and waves, but their physical interpretation is completely different, so the "determinism" of Schrödinger's equation – or the Dirac equation promoted to a quantum equation for an actual system – does *not* translate to classical determinism of the real world, which simply doesn't hold. 99. Interesting post, but it appears you don't understand many worlds. Take your example of an electron in 0.6|up> + 0.8|down>.
You wouldn't argue that there are "two" electrons there, that there is an electron with spin "up" as well as an electron with spin "down"? Of course not. Now let's say that electron interacts with an electron that is 1.0|up>. The joint state would be 0.6|up>|up> + 0.8|up>|down>. Similarly, here you wouldn't argue that there are somehow four electrons? Of course not. Now let's go back to the Stern-Gerlach experiment. All many worlds says is that if we don't introduce something new to quantum mechanics, scientists and measuring devices are also particles, so if they interact with a spin 1/2 particle: 0.60|up> + 0.80|down>, the end state is: 0.60|scientist sees up>|up> + 0.80|scientist sees down>|down>. And like before, there aren't two electrons, and there aren't two scientists. And no scientist is ever going to see an electron up and an electron down. And this doesn't violate conservation of charge any more than 0.60|up> + 0.80|down> does. That is many worlds. 100. Sorry, the application of quantum mechanics to arbitrary systems is what conventional orthodox quantum mechanics is all about. It was the Copenhagen school that began to study molecules, metals etc. etc. using quantum mechanics. It is a complete lie that proper quantum mechanics has ever been claimed not to apply to arbitrarily big systems. If you don't have any two electrons (in total) representing one, you can't call it many worlds because it clearly has nothing to do with many worlds. There aren't many worlds if there is only one. 101. Unfortunately we do not get to decide what people call things. I agree many worlds is a misleading name. You are right to claim that anyone who believes in what you call "many worlds" is a stupid monkey. Unfortunately, the people who profess many worlds would not agree that what you call "many worlds" is what they believe. As such, when in discussions with those people, your post is of little use to me, which is disappointing. 102.
Dear James, science is about learning objective laws, while being disappointed also depends on your subjective feelings and preferences. So your being disappointed, however I might prefer another outcome, doesn't imply that there is an iota of inaccuracy in what I wrote. You may use the phrase "many worlds" in any way you want, for example for "one world", you may twist the terminology in any way you invent, but you won't change anything about the fact that there's no viable modification of quantum mechanics, a theory that was first fully defined by the physicists who were meeting in Copenhagen and no one else. 103. You are really misinformed here, and your stubbornness in calling other people monkeys shows that you are not open to the real ideas MWI proponents have. Since you are considering an isolated electron, first consider a state vector describing an observer measuring the electron. Then we see that observer neutral x measuring system -> observer measures up x electron spin up + observer measures down x electron spin down. Now what MWI adherents do is realistically interpret the resultant wave function, and thus there must be two different, equally real worlds. Now why exactly some worlds are more probable than others is a difficult question, but it is no more difficult than the question why probabilities arise in the Copenhagen interpretation. Probabilities can be interpreted as the probability of being in a certain branch. Your example is wrong because 1. you consider an isolated system without the observer. Decoherence shows us that a superposition of the electron will quickly get entangled with the observer. 2. you seem to be using the projection postulate in the sense of the Copenhagen interpretation, that it gives the probability of the electron being in a state up or down. But in the MWI sense the probabilities can be seen as giving the probability of the observer being in a world in which up, or a world in which down, is measured.
Surely being in both worlds (P_A * P_B) has probability zero, but that doesn't imply the MWI is false. Furthermore, charge and energy conservation are always derived in a single universe framework. And the MWI is pretty much consistent with this, as it can be shown that these quantities are in fact conserved within a single universe. So an observer will do measurements that are in correspondence with the conservation of these quantities. I think many people here have pointed out your fallacies but you keep on calling them monkeys. Though I myself am not yet convinced that MWI is the key to all our questions about QM, I am convinced that you do not understand the MWI debate correctly. 104. By the same argument, how can you dismiss the many-worlds theory without testing it? Shot down by your own logic... 105. Jesus Christ, you are pretentious. The most amusing bit of this whole "proof" is your tendency to cobble together a mish-mash of different forms of math. You prepare an algebraic proof, and then try to convince us that your "proof" invalidated MWI by using Boolean operations (AND, OR, etc.). Clearly, as stated below in a more polite manner, you do not understand the fundamentals of what you are trying to discuss. 106. Interesting & I'm sympathetic, but I think the quick response would be to say that probabilities only come into play when the superposition collapses. So the probabilities are only relevant when it comes to 'discovering which world you are in'. There isn't a specific world where someone would meet both outcomes at once.
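For the record, the "squared amplitudes" arithmetic that runs through this whole thread fits in a few lines. A minimal numpy sketch, using the 0.6|up> + 0.8|down> state from comment 99; nothing in it depends on the choice of interpretation:

```python
import numpy as np

# The state discussed throughout the thread: |psi> = 0.6|up> + 0.8|down>
c = np.array([0.6, 0.8])

# Born rule: outcome probabilities are the squared amplitudes
probs = np.abs(c) ** 2            # 0.36 and 0.64
assert np.isclose(probs.sum(), 1.0)   # the state is normalized

# Entangling with a measuring device (comment 99's joint state
# 0.6|sees up>|up> + 0.8|sees down>|down>) leaves the amplitudes,
# and hence the outcome statistics, unchanged.
joint = np.array([0.6, 0.8])
assert np.allclose(np.abs(joint) ** 2, probs)

# A single measurement has exactly one outcome, so the joint event
# "up AND down in the same run" has probability 0.
p_up_and_down = 0.0
```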
slug: schrodingerequation1 datepublished: 2018-06-01T07:16:58 dateupdated: 2018-06-01T07:28:07 tags: English Posts, Academic Notes excerpt: "QM can be introduced through various proxies and people can learn it as either"

Unlike classical (Newtonian) mechanics, where there seems to be a very intuitive and widely accepted approach to learning, quantum mechanics can be introduced through various proxies, and people can learn it as either:

• a tricky model inspired by black-body radiation plus sketchy assumptions,
• or elegant (but hard to solve) equations that follow the axioms of quantum mechanics.

In this two-part note, I will write myself a note that is as clear and logical as possible about what I have learned so far. For this reason, I will take equations and relevant mathematical tools as given, of course with references for them.

Interpretation of the Wave Function

If I were to use one sentence to summarize the Schrödinger equation (call it [SE] from now on): the complex wave function \( \Psi (x,t) \) gives you the probability amplitude for the particle. This is called the statistical interpretation; it does not always work, but for elementary usage of [SE] we can almost always keep this picture in the back of our heads. Now, \(\Psi\) is not a probability outright, because it's a complex function; to get a probability you need to multiply it by its own conjugate, naively: \[ P(x,t) = \Psi^*\times\Psi \] As for why it has to be a complex-valued function in the first place: if you think about \(e^{i\theta} = \cos{\theta}+i\sin{\theta}\), you will see that you somewhat need a complex function to fully express a wave across space and time.
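Numerically, "conjugate times itself" is exactly how one turns \(\Psi\) into a probability density. A small illustrative sketch (the plane-wave form and the parameter values are my own choices, not from the post):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 500)
k, omega, t = 2 * np.pi, 1.0, 0.3   # illustrative wave number, frequency, time

# A complex wave e^{i(kx - wt)} = cos(kx - wt) + i sin(kx - wt)
psi = np.exp(1j * (k * x - omega * t))

# Probability density: the conjugate times the function itself is real
P = (np.conj(psi) * psi).real

# For a pure plane wave the density is flat: |e^{i theta}|^2 = 1 everywhere,
# i.e. the phase carries no position information by itself.
assert np.allclose(P, 1.0)
```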
Schrödinger equation and its solution

Knowing how we should look at \(\Psi(x,t)\) (which is just the 1-D case, but that's okay), we can look at the 1-D [SE] that the wave function \(\Psi\) promises to satisfy: \[ -\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x,t)}{\partial x^2} + V(x)\Psi = i\hbar \frac{\partial\Psi(x,t)}{\partial t} \] Here, \(\hbar\) is the reduced Planck constant, \(m\) is the mass of the particle and \(V\) is the potential. Also bear in mind that it is a complex-number equation. We tend to think the free-particle solution is the simplest to look at, but it turns out to have a vague physical picture, so we look at the particle-in-a-box solution first. Before all that, we need to invoke the mathematical assumption (justified later) that the solution is separable, meaning that \(\Psi(x,t) = \psi(x)\varphi(t)\). You see physicists make this separability argument a lot; it is later rescued by proving that all such solutions combined provide a complete solution set. Given separability (which requires the potential \(V\) to be time-independent), we can immediately rearrange the [SE] to get: \[ i\hbar \frac{1}{\varphi(t)} \frac{d\varphi}{dt} = -\frac{\hbar^2}{2m}\frac{1}{\psi(x)}\frac{d^2\psi(x)}{dx^2} +V \] This suggests that the left and right sides both equal a constant, because they depend on independent variables; we will suggestively call this constant \(E\). So far we have turned a two-variable PDE into two one-variable ODEs: \[ \begin{cases} \frac{d\varphi}{dt} = -\frac{iE}{\hbar}\varphi \rightarrow \varphi(t) = e^{-iEt/\hbar}\\ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} +V\psi = E\psi \rightarrow ? \end{cases} \] It turns out the second one is hard to solve (easy once you know the answer, of course; it's all about tricks). The takeaway is: once we solve it, we get \[ \Psi(x,t) = \sum_{n=1}^{\infty} c_n\Psi_n(x,t) = \sum_{n=1}^{\infty} c_n\psi_n(x)e^{-iE_nt/\hbar} \] where you can see that the time factor is snapped onto the spatial part, and the different allowed \(E_n\) combine to give any valid general solution.
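The time equation can be checked symbolically. A quick sketch (using sympy; the symbol names are mine) verifying that \(\varphi(t)=e^{-iEt/\hbar}\) does satisfy \(i\hbar\, d\varphi/dt = E\varphi\):

```python
import sympy as sp

t = sp.symbols('t', real=True)
E, hbar = sp.symbols('E hbar', positive=True)

# Time part from the separation of variables
phi = sp.exp(-sp.I * E * t / hbar)

# Plugging phi back into i*hbar dphi/dt = E*phi
lhs = sp.I * hbar * sp.diff(phi, t)
assert sp.simplify(lhs - E * phi) == 0
```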
Particle-in-a-box solution

Now we consider a particle inside a region walled in by infinite potential outside: [1] This has the convenience that the potential \(V=0\) inside, and we must have \(\Psi=0\) outside the box (because otherwise the energy explodes). Another way to look at it: if we know there's a particle somewhere and we accept the said interpretation, we expect \[ \int_{-\infty}^{\infty}|\Psi|^2 dx = \int_{-\infty}^{\infty}\Psi^*\times\Psi\, dx =1 \] because there is one and only one particle in our set-up. Add to that the fact that \(\Psi\) should be continuous, since otherwise we would get different probabilities of spotting the particle when approaching from the left or the right, which is unphysical. Given these two conditions, it is easy to see that outside the box the probability of finding the particle must be zero, and also \(\psi(x=0)=\psi(x=L)=0\). This is the boundary condition (b.c.). We can focus on solving the spatial part \(\psi(x)\), because we can snap the time factor on easily later. The spatial solution is just some wave form; inside the box (where \(V=0\)) the spatial equation reads \(-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi\), and writing \(k=\frac{\sqrt{2mE}}{\hbar}\) gives \(\frac{d^2\psi}{dx^2} = -k^2\psi \). This is obviously an ODE with solution: \[ \psi = A\sin{kx} + B\cos{kx} \] Bringing the b.c. into it, we see that \(B\) must be 0. Then, because at \(x=L\): \[ A\sin{k L} = 0 \] we have \(k_n=\frac{n\pi}{L}\), which are the discrete allowed values. From these allowed \(k\)'s we can see that the energy is also discrete, because: \[ k=\frac{\sqrt{2mE}}{\hbar} \rightarrow E_n=\frac{\hbar^2k_n^2}{2m} \] Then we have a whole set of solutions with different energies and wave forms; the general solution is any combination of them: \[ \Psi(x,t) = \sum_n c_n\Psi_n(x,t) = \sum_n c_n\psi_n(x)\varphi_n(t) = \sum_n c_n A\sin({k_n x})e^{-iE_nt/\hbar} \] \(A\) will need to be found by normalizing, but that shouldn't be too hard. Finally, visualizing what these different solutions mean: notice that the energy goes higher with \(n\), and the form of the wave function itself actually looks like a standing wave on a string.
But remember that the probability comes from \(|\Psi|^2\), not from the wavefunction itself.
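The box results above are easy to sanity-check numerically. A sketch with \(\hbar = m = 1\) and a box width chosen by me (not from the post), confirming the normalization constant \(A=\sqrt{2/L}\), the orthogonality of different modes, and the \(n^2\) scaling of \(E_n\):

```python
import numpy as np

# Box of width L with infinite walls; hbar = m = 1 (my unit choice).
# psi_n(x) = sqrt(2/L) sin(n pi x / L), E_n = hbar^2 k_n^2 / (2m).
L = 2.0
N = 10_000
dx = L / N
x = (np.arange(N) + 0.5) * dx        # midpoint grid over [0, L]

def psi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):
    k = n * np.pi / L                # allowed k_n from the boundary condition
    return k**2 / 2.0

# Normalization: A = sqrt(2/L) indeed gives total probability 1
assert np.isclose((psi(1)**2).sum() * dx, 1.0)

# Modes with different n are orthogonal
assert abs((psi(1) * psi(2)).sum() * dx) < 1e-8

# Discrete energies scale as n^2
assert np.isclose(E(3) / E(1), 9.0)
```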
Accurate and efficient method for many-body van der Waals interactions
Alexandre Tkatchenko, Robert A. Distasio, Roberto Car, Matthias Scheffler
Research output: Contribution to journal › Article › peer-review
960 Scopus citations
An efficient method is developed for the microscopic description of the frequency-dependent polarizability of finite-gap molecules and solids. This is achieved by combining the Tkatchenko-Scheffler van der Waals (vdW) method with the self-consistent screening equation of classical electrodynamics. This leads to a seamless description of polarization and depolarization for the polarizability tensor of molecules and solids. The screened long-range many-body vdW energy is obtained from the solution of the Schrödinger equation for a system of coupled oscillators. We show that the screening and the many-body vdW energy play a significant role even for rather small molecules, becoming crucial for an accurate treatment of conformational energies for biomolecules and binding of molecular crystals. The computational cost of the developed theory is negligible compared to the underlying electronic structure calculation.
Original language: English (US)
Article number: 236402
Journal: Physical Review Letters
Issue number: 23
State: Published - Jun 7, 2012
Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory
Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE∗, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.
Publication Source (Journal or Book title): Journal of Chemical Physics
Collapsing Physics: Q&A with Catalina Oana Curceanu
Tests of a rival to quantum theory, taking place in the belly of the Gran Sasso d’Italia mountain, could reveal how the fuzzy subatomic realm of possibilities comes into sharp macroscopic focus.
by Carinne Piekema
FQXi Awardees: Catalina Oana Curceanu
June 24, 2016
Catalina Oana Curceanu, National Institute of Nuclear Physics, Frascati, Italy
In 2015, physicist Catalina Oana Curceanu, of the National Institute of Nuclear Physics, in Frascati, Italy received an FQXi grant of $85,000 to investigate how the uncertain fuzziness of the microscopic quantum realm transitions to the definite macroscopic world we see around us. In particular, she and her colleagues are carrying out experiments to test how measurements force quantum systems to take definite properties—as she explains to Carinne Piekema. You plan to test a proposed solution to the so-called "quantum measurement problem." What is the measurement problem? The theory of quantum mechanics is very successful in describing the world and phenomena on a microscopic scale (electrons, atoms, and even molecules), but it starts to be questionable whether the same theory can describe macroscopic bodies, or aggregates of many, many atoms. The superposition principle tells us that microscopic bodies can be in various possible states at the same time (Schrödinger’s famous cat is both dead and alive). This is described mathematically using a "wavefunction," which comes from solving an equation derived by Erwin Schrödinger. However, when one performs a measurement only one definite answer arrives (the cat is either dead or alive): the wavefunction has collapsed. This is the famous "measurement problem." The question is then: how does the wavefunction collapse to generate the event we see? Our FQXi project deals with exactly this question. Different models have been suggested to explain this collapse from the micro to macro worlds. Which collapse model are you using and why? We aim to measure signals from a theory that modifies the Schrödinger equation, adding terms that induce the collapse of the wavefunction in a very natural way.
The specific model we use is the "continuous spontaneous localization" model. This has the nice feature that it allows microscopic systems to remain in superposition for a long time, while it immediately collapses the wavefunction of big bodies. This collapse is thought to be induced by the interaction of the particles with a special collapsing "field." How can you test whether this hypothesized field exists, in the lab? What we measure is the spontaneous radiation resulting from the interaction of particles with the collapsing field. Imagine for example a free electron moving in space: if there is no field to induce a collapse, the electron will go in a straight line forever. If, however, there is the interaction with this collapsing field, it will cause the electron to zigzag. Whenever the trajectory changes, the system emits radiation. This emission is not present in standard quantum mechanics, but is a unique feature of the collapse model and we are trying to measure it. Where does the collapsing field that causes the radiation come from? This radiation is not due to some field that is known, but would appear to be a weird phenomenon in which energy does not seem to be conserved. Of course, in reality, energy is conserved, but we would need to know the theory "beyond standard quantum mechanics" to conserve it properly. A hypothesis is that this field could be related to the gravitational field. Which quantum systems are you testing in the lab? We use an ultrapure germanium detector and measure the radiation emitted by the detector itself, searching for the spontaneous radiation emitted by the electrons as well as the protons of germanium atoms.
Quantum Tunneling: Will an experiment buried deep within the Gran Sasso d’Italia mountain reveal that an alternative model of the subatomic world is right?
These must be very small signals in an environment that has a lot of radiation from other sources.
How are you able to create circumstances clean enough to measure spontaneous radiation?

We try to reduce the influence of cosmic radiation as much as possible, so we do our experiments in the belly of the Gran Sasso d'Italia mountain. The LNGS laboratory—three huge cathedral-like spaces in the mountain connected by galleries—is located halfway along the 10 km long tunnel that connects the cities of L'Aquila and Teramo. In the mountain, cosmic rays are reduced by a factor of a million with respect to ground experiments. To reduce the background even further, we use ultrapure germanium instruments. However, there is still some radiation from the materials we use to perform our experiments and from the environment (such as radon). So we use Monte Carlo simulations in the data analyses to see which part of the X-ray spectrum we measure does not come from the collapse, but from radionuclides present in the setup materials.

Do you have any results yet?

We have done preliminary measurements and are now analysing the data. We are finding out what part of our signal can be ascribed to the residual background. Some very interesting results are coming out, which we hope to publish soon.

What happens if you don't see any radiation? Would that allow you to rule out such collapse models?

Collapse models are characterised by the so-called "lambda parameter," which describes the number of interactions of particles with the collapsing field per second. There are two limits proposed in theory. One is conservative (proposed by physicist GianCarlo Ghirardi at the University of Trieste, in Italy, and others), in which lambda is 10⁻¹⁷ interactions per second. The other, put forward by physicist Steve Adler at the Institute for Advanced Study, in Princeton, New Jersey, is 10⁻⁹. We can already exclude the latter and we hope to approach the other limit too.
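To get a feel for how far apart the two proposed lambda limits are, here is a back-of-the-envelope sketch. All numbers are illustrative assumptions: lambda is treated as a simple per-particle event rate, and the particle count and observation time are invented for the demo; this is not the collaboration's actual analysis.

```python
# Back-of-the-envelope comparison of the two proposed collapse-rate
# ("lambda") limits. All numbers are illustrative assumptions: lambda is
# treated as a simple per-particle event rate, which is a simplification.

LAMBDA_GHIRARDI = 1e-17  # conservative limit (events per particle per second)
LAMBDA_ADLER = 1e-9      # enhanced limit (events per particle per second)

def expected_collapses(lam, n_particles, seconds):
    """Expected number of collapse events: rate * particles * time."""
    return lam * n_particles * seconds

# Hypothetical example: ~6e23 particles (roughly a gram of matter) for one day.
n = 6e23
t = 86400.0

ghirardi = expected_collapses(LAMBDA_GHIRARDI, n, t)
adler = expected_collapses(LAMBDA_ADLER, n, t)

print(f"Ghirardi-limit expectation: {ghirardi:.3e} events")
print(f"Adler-limit expectation:    {adler:.3e} events")
print(f"ratio: {adler / ghirardi:.1e}")  # the two limits differ by a factor of 1e8
```

The eight-order-of-magnitude gap between the limits is why the Adler value could already be excluded while approaching the Ghirardi value requires far greater sensitivity.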
If we could somehow exclude Ghirardi's limit, then either the collapse models as they are have to be discarded, or, more likely, they need to be modified. We might see shadows of a new physics.

Do you think we are close to fully understanding the relationship between the quantum and classical worlds?

I believe there is a continued search for deeper understanding, but I don't believe there is a theory of everything yet. It might also be that we are with quantum mechanics where Newton was with respect to Einstein. With the numerous experiments that are going on today, I hope we might get a hint of a new theory, and I hope to contribute by performing nice experiments. It is also a lot of fun!
Molecular Orbitals

Can I calculate localised Molecular Orbitals?

Molecular orbitals are, by their very definition, delocalized: canonical orbitals are always spread out over the molecule.

An example of the ELI plot as it is available after a NoSpherA2 refinement.

A nice way to visualize bonding without the use of fuzzy orbital localization schemes is the use of spatial electron localization descriptors. One example is the ELI (or its more outdated predecessor, the ELF) or the Laplacian of the electron density. Here you can usually see bonds, lone pairs, and valence deformations quite nicely. They can be computed from the wavefunction used in NoSpherA2. If you want to do that, please have a look at the "NoSpherA2 Properties" tab, where you can compute and visualize them.

An example of the Laplacian.

If you want to look at localized orbitals, you can have a look at techniques like NBO or NLMO, Boys orbitals, or other localization techniques. These attempt to minimize the spatial extension of orbitals by applying different schemes to make linear combinations. NBO tries to map them to the very old concept of Lewis structures, while Boys orbitals are simply minimized in spatial extent and are therefore usually visually more appealing than canonical orbitals. This all comes at the cost that they are no longer direct solutions to the Schrödinger equation.
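To make "localization by linear combination" concrete, here is a minimal, self-contained toy sketch in the spirit of Boys localization: two delocalized one-dimensional "orbitals" are mixed by a rotation angle chosen to minimize their total spatial spread. The grid, basis functions, and parameters are all invented for the demo and have nothing to do with the NoSpherA2 code itself.

```python
import math

# Toy illustration of orbital localization by linear combination, in the
# spirit of Boys localization. Everything here (grid, Gaussians, parameters)
# is made up for the demo.

dx = 0.05
xs = [i * dx - 6.0 for i in range(241)]  # 1D grid on [-6, 6]

def gauss(x, center):
    return math.exp(-(x - center) ** 2)

def normalize(f):
    norm = math.sqrt(sum(v * v for v in f) * dx)
    return [v / norm for v in f]

# Two localized "atomic" functions at x = -2 and x = +2, combined into
# delocalized (canonical-like) symmetric/antisymmetric orbitals.
a = [gauss(x, -2.0) for x in xs]
b = [gauss(x, +2.0) for x in xs]
phi1 = normalize([ai + bi for ai, bi in zip(a, b)])
phi2 = normalize([ai - bi for ai, bi in zip(a, b)])

def spread(f):
    """Spatial variance <x^2> - <x>^2 of a normalized orbital."""
    mean = sum(x * v * v for x, v in zip(xs, f)) * dx
    mean_sq = sum(x * x * v * v for x, v in zip(xs, f)) * dx
    return mean_sq - mean ** 2

def total_spread(theta):
    """Total spread after rotating the orbital pair by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    loc1 = [c * p + s * q for p, q in zip(phi1, phi2)]
    loc2 = [-s * p + c * q for p, q in zip(phi1, phi2)]
    return spread(loc1) + spread(loc2)

best = min(total_spread(k * math.pi / 200.0) for k in range(100))
print(f"delocalized total spread: {total_spread(0.0):.3f}")
print(f"best localized spread:    {best:.3f}")  # minimum near theta = pi/4
```

The rotation that minimizes the spread recovers the two site-localized functions, which is exactly the effect the localization schemes above achieve for real molecular orbitals.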
Ab Initio Quantum Chemistry Methods

Ab initio quantum chemistry methods are computational chemistry methods based on quantum chemistry. They attempt to solve the electronic Schrödinger equation given the positions of the nuclei and the number of electrons in order to yield useful information such as electron densities, energies and other properties of the system.

Classifications of methods

Hartree–Fock methods:
• Hartree–Fock (HF)
• Restricted open-shell Hartree–Fock (ROHF)
• Unrestricted Hartree–Fock (UHF)

Post-Hartree–Fock methods:
• Møller–Plesset perturbation theory (MPn)
• Configuration interaction (CI)
• Coupled cluster (CC)
• Quadratic configuration interaction (QCI)

Multi-reference methods:
• Multi-configurational self-consistent field (MCSCF, including CASSCF and RASSCF)
• Multi-reference configuration interaction (MRCI)
• n-electron valence state perturbation theory (NEVPT)
• Complete active space perturbation theory (CASPTn)
• State universal multi-reference coupled-cluster theory (SUMR-CC)

Methods in detail

• Hartree–Fock and post-Hartree–Fock methods

The simplest type of ab initio electronic structure calculation is the Hartree–Fock (HF) scheme, in which the instantaneous Coulombic electron-electron repulsion is not specifically taken into account. Only its average effect (mean field) is included in the calculation.

• Valence bond methods

Valence bond (VB) methods are generally ab initio, although some semi-empirical versions have been proposed. Current VB approaches are:
• Generalized valence bond (GVB)
• Modern valence bond theory (MVBT)

• Quantum Monte Carlo methods

A method that avoids making the variational overestimation of HF in the first place is Quantum Monte Carlo (QMC), in its variational, diffusion, and Green's function forms.
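The mean-field idea behind Hartree–Fock can be illustrated with a deliberately tiny self-consistent field loop: each electron sees only the average density of the others, and density and orbital are iterated until they agree. This is a toy two-site model with made-up hopping and repulsion parameters, not a real ab initio program (a real HF code iterates a Fock matrix built in a basis set).

```python
import math

# Toy two-site mean-field SCF loop illustrating the Hartree-Fock idea.
# T_HOP and U_REP are invented parameters for the demo.

T_HOP = 1.0  # hopping amplitude between the two sites
U_REP = 2.0  # on-site repulsion entering through the mean field

def ground_orbital(density):
    """Lowest orbital of the 2x2 mean-field Hamiltonian for given site densities."""
    e1, e2 = U_REP * density[0], U_REP * density[1]
    lam = 0.5 * (e1 + e2) - 0.5 * math.sqrt((e1 - e2) ** 2 + 4.0 * T_HOP ** 2)
    v = (1.0, (e1 - lam) / T_HOP)
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm)

# SCF loop from a deliberately lopsided starting guess.
n = [0.8, 0.2]
for _ in range(100):
    v = ground_orbital(n)
    new_n = [v[0] ** 2, v[1] ** 2]  # paired electrons occupy the lowest orbital
    if max(abs(x - y) for x, y in zip(n, new_n)) < 1e-10:
        break
    n = [0.5 * x + 0.5 * y for x, y in zip(n, new_n)]  # damping against oscillation

print(f"converged site densities: {n[0]:.4f}, {n[1]:.4f}")  # symmetric model -> 0.5, 0.5
```

The 50/50 density mixing in the loop is a miniature version of the damping tricks real SCF codes use to suppress oscillation between iterations.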
Our services

Project name: Ab Initio Quantum Chemistry Methods
Samples requirement: Our ab initio quantum chemistry service requires you to provide specific requirements.
Timeline: Decided according to your needs.
Deliverables: We provide you with raw data and an analysis service.
Price: Inquiry

CD ComputaBio's ab initio quantum chemistry methods can significantly reduce the cost of later experiments. This is a personalized and customized innovative scientific research service. Each project needs to be evaluated before the corresponding analysis plan and price can be determined. If you want to know more about service prices or technical details, please feel free to contact us.

* For Research Use Only.
Jewish culture

Jewish culture is the culture of the Jewish people,[1] from its formation in ancient times until the current age. Judaism itself is not a faith-based religion, but an orthoprax religion and an ethnoreligion, pertaining to deed, practice, and identity.[2] Jewish culture covers many aspects, including religion and worldviews; literature, media, and cinema; art and architecture; cuisine and traditional dress; attitudes to gender, marriage, and family; social customs and lifestyles; and music and dance.[3] Some elements of Jewish culture come from within Judaism, others from the interaction of Jews with host populations, and others still from the inner social and cultural dynamics of the community. Before the 18th century, religion dominated virtually all aspects of Jewish life and infused its culture. Since the advent of secularization, a wholly secular Jewish culture has likewise emerged.

Jewish festival in Tétouan, Morocco, 1865
Museum of Jewish culture in Bratislava
Tombstones from a Jewish cemetery, 13th century, Paris

There has not been a political unity of Jewish society since the united monarchy. Since then, Israelite populations have always been geographically dispersed (see Jewish diaspora), so that by the 19th century the Ashkenazi Jews were mainly located in Eastern and Central Europe; the Sephardi Jews were largely spread among various communities which lived in the Mediterranean region; Mizrahi Jews were primarily spread throughout Western Asia; and other populations of Jews lived in Central Asia, Ethiopia, the Caucasus, and India. (See Jewish ethnic divisions.)
Although there was a high degree of communication and traffic between these Jewish communities – many Sephardic exiles blended into the Ashkenazi communities which existed in Central Europe following the Spanish Inquisition; many Ashkenazim migrated to the Ottoman Empire, giving rise to the characteristic Syrian-Jewish family name "Ashkenazi"; Iraqi-Jewish traders formed a distinct Jewish community in India – to some degree, many of these Jewish populations were cut off from the cultures which surrounded them by ghettoization, by Muslim laws of dhimma, and by the traditional discouragement of contact between Jews and members of polytheistic populations by their religious leaders. Medieval Jewish communities in Eastern Europe continued to display distinct cultural traits over the centuries.

Despite the universalist leanings of the Enlightenment (and its echo within Judaism in the Haskalah movement), many Yiddish-speaking Jews in Eastern Europe continued to see themselves as forming a distinct national group – " 'am yehudi", from the Biblical Hebrew – but, adapting this idea to Enlightenment values, they assimilated the concept as that of an ethnic group whose identity did not depend on religion, which under Enlightenment thinking fell under a separate category. Constantin Măciucă writes of the existence of "a differentiated but not isolated Jewish spirit" permeating the culture of Yiddish-speaking Jews.[4] This was only intensified as the rise of Romanticism amplified the sense of national identity across Europe generally. Thus, for example, members of the General Jewish Labour Bund in the late 19th and early 20th centuries were generally non-religious, and one of the historical leaders of the Bund was the child of converts to Christianity, though not a practicing or believing Christian himself.[citation needed]

Napoleon grants freedom to the Jews.
1806 print in which Napoleon grants the Jews freedom to worship, represented by the hand given to the Jewish woman

The Haskalah combined with the Jewish Emancipation movement under way in Central and Western Europe to create an opportunity for Jews to enter secular society. At the same time, pogroms in Eastern Europe provoked a surge of migration, in large part to the United States, where some 2 million Jewish immigrants resettled between 1880 and 1920. By 1931, shortly before the Holocaust, 92% of the world's Jewish population was Ashkenazi in origin. Secularism developed in Europe as a series of movements that militated for a new, heretofore unheard-of concept called "secular Judaism". For these reasons, much of what is thought of by English-speakers and, to a lesser extent, by non-English-speaking Europeans as "secular Jewish culture" is, in essence, the Jewish cultural movement that evolved in Central and Eastern Europe and was subsequently brought to North America by immigrants.

During the 1940s, the Holocaust uprooted and destroyed most of the Jewish communities living in much of Europe. This, in combination with the creation of the State of Israel and the consequent Jewish exodus from Arab lands, resulted in a further geographic shift.

Defining secular culture among those who practice traditional Judaism is difficult, because the entire culture is, by definition, entwined with religious traditions: the idea of separate ethnic and religious identity is foreign to the Hebrew tradition of an " 'am yisrael". (This is particularly true for Orthodox Judaism.) Gary Tobin, head of the Institute for Jewish and Community Research, said of traditional Jewish culture:

The dichotomy between religion and culture doesn't really exist. Every religious attribute is filled with culture; every cultural act filled with religiosity. Synagogues themselves are great centers of Jewish culture. After all, what is life really about? Food, relationships, enrichment … So is Jewish life.
So many of our traditions inherently contain aspects of culture. Look at the Passover Seder — it's essentially great theater. Jewish education and religiosity bereft of culture is not as interesting.[5]

Yaakov Malkin, Professor of Aesthetics and Rhetoric at Tel Aviv University and the founder and academic director of Meitar College for Judaism as Culture[6] in Jerusalem, writes:

Today very many secular Jews take part in Jewish cultural activities, such as celebrating Jewish holidays as historical and nature festivals, imbued with new content and form, or marking life-cycle events such as birth, bar/bat mitzvah, marriage, and mourning in a secular fashion. They come together to study topics pertaining to Jewish culture and its relation to other cultures, in havurot, cultural associations, and secular synagogues, and they participate in public and political action coordinated by secular Jewish movements, such as the former movement to free Soviet Jews, and movements to combat pogroms, discrimination, and religious coercion. Jewish secular humanistic education inculcates universal moral values through classic Jewish and world literature and through organizations for social change that aspire to ideals of justice and charity.[7]

In North America, the secular and cultural Jewish movements are divided into three umbrella organizations: the Society for Humanistic Judaism (SHJ), the Congress of Secular Jewish Organizations (CSJO), and Workmen's Circle.

Jewish philosophy includes all philosophy carried out by Jews, or in relation to the religion of Judaism. Jewish philosophy extends over several main eras in Jewish history, including the ancient and biblical era, the medieval era, and the modern era (see Haskalah). Ancient Jewish philosophy is expressed in the Bible. According to Prof.
Israel Efros, the principles of Jewish philosophy begin in the Bible, where the foundations of Jewish monotheistic belief can be found, such as the belief in one God, the separation of God from the world and nature (as opposed to pantheism), and the creation of the world. Other biblical writings associated with philosophy are Psalms, which contains invitations to admire the wisdom of God through his works (from this, some scholars suggest, Judaism harbors a philosophical undercurrent),[8] and Ecclesiastes, which is often considered to be the only genuine philosophical work in the Hebrew Bible; its author seeks to understand the place of human beings in the world and life's meaning.[9] Other writings related to philosophy can be found in the Deuterocanonical books, such as Sirach and the Book of Wisdom.

During the Hellenistic era, Hellenistic Judaism aspired to combine Jewish religious tradition with elements of Greek culture and philosophy. The philosopher Philo used philosophical allegory to attempt to fuse and harmonize Greek philosophy with Jewish philosophy. His work attempts to combine Plato and Moses into one philosophical system.[10] He developed an allegorical approach to interpreting holy scripture (the Bible), in contrast to (old-fashioned) literal interpretation approaches. His allegorical exegesis was important for several Christian Church Fathers, and some scholars hold that his concept of the Logos as God's creative principle influenced early Christology. Other scholars, however, deny direct influence but say both Philo and Early Christianity borrow from a common source.[11]

The opening page of Spinoza's magnum opus, Ethics

Between the ancient era and the Middle Ages, most Jewish philosophy centered on the Rabbinic literature expressed in the Talmud and Midrash. In the 9th century, Saadia Gaon wrote the text Emunoth ve-Deoth, which is the first systematic presentation and philosophic foundation of the dogmas of Judaism.
The Golden Age of Jewish culture in Spain included many influential Jewish philosophers, such as Moses ibn Ezra, Abraham ibn Ezra, Solomon ibn Gabirol, Yehuda Halevi, Isaac Abravanel, Nahmanides, Joseph Albo, Abraham ibn Daud, Nissim of Gerona, Bahya ibn Paquda, Abraham bar Hiyya, Joseph ibn Tzaddik, Hasdai Crescas and Isaac ben Moses Arama. The most notable is Maimonides, who is considered a prominent philosopher and polymath in the Islamic and Western worlds as well as the Jewish one. Outside of Spain, other philosophers were Natan'el al-Fayyumi, Elia del Medigo, Jedaiah ben Abraham Bedersi and Gersonides.

Jewish philosophy in the modern era was expressed by philosophers, mainly in Europe, such as Baruch Spinoza, founder of Spinozism, whose work included modern rationalism and biblical criticism and laid the groundwork for the 18th-century Enlightenment.[12] His work has earned him recognition as one of Western philosophy's most important thinkers. Others are Isaac Orobio de Castro, Tzvi Ashkenazi, David Nieto, Isaac Cardoso, Jacob Abendana, Uriel da Costa, Francisco Sanches and Moses Almosnino.

A new era began in the 18th century with the thought of Moses Mendelssohn. Mendelssohn has been described as the "'third Moses,' with whom begins a new era in Judaism," just as new eras began with Moses the prophet and with Moses Maimonides.[13] Mendelssohn was a German Jewish philosopher to whose ideas the renaissance of European Jews, the Haskalah (the Jewish Enlightenment), is indebted. He has been referred to as the father of Reform Judaism, though Reform spokesmen have been "resistant to claim him as their spiritual father".[14] Mendelssohn came to be regarded as a leading cultural figure of his time by both Germans and Jews. The Jewish Enlightenment philosophy included Menachem Mendel Lefin, Salomon Maimon and Isaac Satanow.
The 19th century comprised both secular and religious philosophy and included philosophers such as Elijah Benamozegh, Hermann Cohen, Moses Hess, Samson Raphael Hirsch, Samuel Hirsch, Nachman Krochmal, Samuel David Luzzatto, and Nachman of Breslov, founder of Breslov. The 20th century included the notable philosophers Jacques Derrida, Karl Popper, Emmanuel Levinas, Claude Lévi-Strauss, Hilary Putnam, Alfred Tarski, Ludwig Wittgenstein, A. J. Ayer, Walter Benjamin, Raymond Aron, Theodor W. Adorno, Isaiah Berlin and Henri Bergson.

(c. 25 BCE–c. 50 CE)
Baruch Spinoza
Moses Mendelssohn
Schneur Zalman of Liadi
Ludwig Wittgenstein
Hannah Arendt
Menachem Mendel Schneerson

Education and politics

A range of moral and political views is evident early in the history of Judaism, which serves to partially explain the diversity apparent among secular Jews, who are often influenced by moral beliefs found in Jewish scripture and traditions. In recent centuries, secular Jews in Europe and the Americas have tended towards the liberal political left,[citation needed] and played key roles in the birth of the 19th century's labor movement and socialism. While Diaspora Jews have also been represented on the conservative side of the political spectrum, even politically conservative Jews have tended to support pluralism more consistently than many other elements of the political right. Some scholars[15] attribute this to the fact that Jews are not expected to proselytize, a norm derived from Halakha. This lack of a universalizing religion is combined with the fact that most Jews live as minorities in diaspora countries, and that no central Jewish religious authority has existed since 363 CE. Jews value education, and the value of education is strongly embedded in Jewish culture.[16][17]

Economic activity

David Ricardo (1772–1823).
He was one of the most influential of the classical economists[18][19]

In the Middle Ages, European laws prevented Jews from owning land and gave them a powerful incentive to go into professions that non-Jewish Europeans were not willing to follow.[20] During the medieval period, there was a very strong social stigma against lending money and charging interest among the Christian majority. In most of Europe until the late 18th century, and in some places to an even later date, Jews were prohibited by Roman Catholic governments (and others) from owning land. On the other hand, the Church, because of a number of Bible verses (e.g., Leviticus 25:36) forbidding usury, declared that charging any interest was against the divine law, and this prevented any mercantile use of capital by pious Christians. As Canon law did not apply to Jews, they were not liable to the ecclesiastical punishments which were placed upon usurers by the popes. Christian rulers gradually saw the advantage of having a class of men like the Jews who could supply capital for their use without being liable to excommunication, and so the money trade of western Europe by this means fell into the hands of the Jews. However, in almost every instance where large amounts were acquired by Jews through banking transactions, the property thus acquired fell, either during their life or upon their death, into the hands of the king. This happened to Aaron of Lincoln in England, Ezmel de Ablitas in Navarre, Heliot de Vesoul in Provence, Benveniste de Porta in Aragon, etc. It was often for this reason that kings supported the Jews, and even objected to them becoming Christians (because in that case they could not be forced to give up money won by usury). Thus, both in England and in France the kings demanded to be compensated for every Jew converted. This type of royal trickery was one factor in creating the stereotypical Jewish role of banker and/or merchant.
As a modern system of capital began to develop, loans became necessary for commerce and industry. Jews were able to gain a foothold in the new field of finance by providing these services: as non-Catholics, they were not bound by the ecclesiastical prohibition against "usury"; and in terms of Judaism itself, Hillel had long ago re-interpreted the Torah's ban on charging interest, allowing interest when it is needed to make a living.[citation needed]

Science and technology

The strong Jewish tradition of religious scholarship often left Jews well prepared for secular scholarship. In some times and places, this was countered by banning Jews from studying at universities, or admitting them only in limited numbers (see Jewish quota). Over the centuries, Jews have been poorly represented among land-holding classes, but far better represented in academia, the professions, finance, commerce and many scientific fields. The strong representation of Jews in science and academia is evidenced by the fact that 193 persons known to be Jews or of Jewish ancestry have been awarded the Nobel Prize, accounting for 22% of all individual recipients worldwide between 1901 and 2014.[21] Of these, 26% were in physics,[22] 22% in chemistry,[23] and 27% in physiology or medicine.[24] In the fields of mathematics and computer science, 31% of Turing Award recipients[25] and 27% of Fields Medal winners[26] were or are Jewish.

The structure of DNA. The Jewish X-ray crystallographer Rosalind Franklin made a crucial contribution to the discovery of DNA's double-helix structure, with a backbone consisting of phosphate groups[27][28][29]

The early Jewish activity in science can be found in the Hebrew Bible, where some of the books contain descriptions of the physical world. Biblical cosmology provides sporadic glimpses that may be stitched together to form a Biblical impression of the physical universe.
There have been comparisons between the Bible, with passages such as the Genesis creation narrative, and the astronomy of classical antiquity more generally.[30] The Old Testament also contains various cleansing rituals. One suggested ritual, for example, deals with the proper procedure for cleansing a leper (Leviticus 14:1–32). It is a fairly elaborate process, which is to be performed after a leper was already healed of leprosy (Leviticus 14:3), involving extensive cleansing and personal hygiene, but it also includes sacrificing a bird and lambs, with their blood used to symbolize that the afflicted has been cleansed. The Torah proscribes intercropping (Lev. 19:19, Deut. 22:9), a practice often associated with sustainable agriculture and organic farming in modern agricultural science.[31][32] The Mosaic code has provisions concerning the conservation of natural resources, such as trees (Deuteronomy 20:19–20) and birds (Deuteronomy 22:6–7).

During the medieval era, astronomy was a primary field among Jewish scholars and was widely studied and practiced.[33] Prominent astronomers included Abraham Zacuto, who published his Hebrew book Ha-hibbur ha-gadol in 1478,[34] in which he wrote about the Solar System, charting the positions of the Sun, Moon and five planets.[34] His work served Portugal's exploration journeys and was used by Vasco da Gama and also by Christopher Columbus. The lunar crater Zagut is named after Zacuto. The mathematician and astronomer Abraham bar Hiyya Ha-Nasi authored the first European book to include the full solution to the quadratic equation x² − ax + b = 0,[35] and influenced the work of Leonardo Fibonacci.
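The quadratic x² − ax + b = 0 treated by Bar Hiyya can be checked numerically with the standard quadratic formula; the coefficients below are arbitrary illustrative values, not taken from his text.

```python
import math

# Numerical check of x^2 - a*x + b = 0 via the standard quadratic formula.
# The coefficients are arbitrary illustrative values.

def roots(a, b):
    """Real roots of x^2 - a*x + b = 0 (assumes a non-negative discriminant)."""
    s = math.sqrt(a * a - 4.0 * b)
    return ((a - s) / 2.0, (a + s) / 2.0)

# Example: x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3).
r1, r2 = roots(5.0, 6.0)
print(r1, r2)  # 2.0 3.0
# Vieta's check: in this sign convention the roots sum to a and multiply to b.
assert math.isclose(r1 + r2, 5.0) and math.isclose(r1 * r2, 6.0)
```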
Bar Hiyya proved, by the method of indivisibles, the following equation for any circle: S = L·R/2, where S is the area, L is the circumference and R is the radius.[36]

German edition of the astronomy book De scientia motvs orbis, originally by Mashallah ibn Athari

Garcia de Orta, a Portuguese Renaissance Jewish physician, was a pioneer of tropical medicine. He published his work Colóquios dos simples e drogas da India in 1563,[37] which deals with a series of substances, many of them unknown or the subject of confusion and misinformation in Europe at this period. He was the first European to describe Asiatic tropical diseases, notably cholera; he performed an autopsy on a cholera victim, the first recorded autopsy in India. Bonet de Lattes is known chiefly as the inventor of an astronomical ring-dial by means of which solar and stellar altitudes can be measured and the time determined with great precision, by night as well as by day. Other related personalities include Abraham ibn Ezra, after whom the Moon crater Abenezra is named; David Gans; Judah ibn Verga; and the astronomer Mashallah ibn Athari, after whom the crater Messala on the Moon is named.

Albert Einstein was a German-born theoretical physicist and is considered one of the most prominent scientists in history, often regarded as the "father of modern physics". His revolutionary work on the theory of relativity transformed theoretical physics and astronomy during the 20th century. When first published, relativity superseded a 200-year-old theory of mechanics created primarily by Isaac Newton.[38][39][40] In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age.
With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.[38][39][40] Einstein formulated the well-known mass–energy equivalence, E = mc², and explained the photoelectric effect. His work also influenced a large variety of fields of physics, including the Big Bang theory (Einstein's general relativity influenced Georges Lemaître), quantum mechanics and nuclear energy.

Castle Romeo (nuclear test). A large number of Jewish scientists were involved in the Manhattan Project

The Manhattan Project was a research and development project that produced the first atomic bombs during World War II, and many Jewish scientists had a significant role in it.[41] The theoretical physicist Robert Oppenheimer, often considered the "father of the atomic bomb", was chosen to direct the Manhattan Project at Los Alamos National Laboratory in 1942. Others include the physicist Leó Szilárd, who conceived the nuclear chain reaction; Edward Teller, "the father of the hydrogen bomb", and Stanislaw Ulam; Eugene Wigner, who contributed to the theory of the atomic nucleus and elementary particles; Hans Bethe, whose work included stellar nucleosynthesis and who was head of the Theoretical Division at the secret Los Alamos laboratory; Richard Feynman, Niels Bohr, Victor Weisskopf and Joseph Rotblat.

The mathematician and physicist Alexander Friedmann pioneered the theory that the universe was expanding, governed by a set of equations he developed, now known as the Friedmann equations. Arno Allan Penzias, the physicist and radio astronomer, was co-discoverer of the cosmic microwave background radiation, which helped establish the Big Bang theory; the scientists Robert Herman and Ralph Alpher also worked in that field.
In quantum mechanics the Jewish role was significant as well, and many of the most influential figures and pioneers of the theory were Jewish: Niels Bohr and his work on the structure of the atom, Max Born (the Born rule, matrix mechanics), Wolfgang Pauli, Richard Feynman (quantum electrodynamics), Fritz London (the London dispersion force and the London equations), Walter Heitler and Julian Schwinger (quantum electrodynamics), Asher Peres, a pioneer in quantum information, and David Bohm (the quantum potential).

Sigmund Freud, known as the father of psychoanalysis, is one of the most influential scientists of the 20th century. In creating psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst,[42] Freud developed therapeutic techniques such as the use of free association and discovered transference, establishing its central role in the analytic process. Freud's redefinition of sexuality to include its infantile forms led him to formulate the Oedipus complex as the central tenet of psychoanalytical theory. His analysis of dreams as wish-fulfillments provided him with models for the clinical analysis of symptom formation and the mechanisms of repression, as well as for the elaboration of his theory of the unconscious as an agency disruptive of conscious states of mind.[43] Freud postulated the existence of libido, an energy with which mental processes and structures are invested and which generates erotic attachments, and a death drive, the source of repetition, hate, aggression and neurotic guilt.[44]

The first functioning laser, created by Theodore H.
Maiman in 1960[45][46]

John von Neumann, a mathematician and physicist, made major contributions to a number of fields,[47] including the foundations of mathematics, functional analysis, ergodic theory, geometry, topology, numerical analysis, quantum mechanics, hydrodynamics and game theory.[48] He also made major contributions to computing and the development of the computer: he suggested and described a computer architecture now called the von Neumann architecture, and worked on linear programming, self-replicating machines, stochastic computing, and statistics.

Emmy Noether was an influential mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by many prominent scientists as the most important woman in the history of mathematics,[49][50][incomplete short citation] she revolutionized the theories of rings, fields, and algebras. In physics, Noether's theorem explains the fundamental connection between symmetry and conservation laws.[51]

Israeli Shavit space launcher

Further remarkable contributors include Heinrich Hertz and Steven Weinberg in electromagnetism; Carl Sagan, whose contributions were central to the discovery of the high surface temperatures of Venus and who is known for his contributions to the scientific research of extraterrestrial life; Georg Cantor (creator of set theory); Felix Hausdorff (founder of topology); Edward Witten (M-theory); Vitaly Ginzburg and Lev Landau (Ginzburg–Landau theory); Yakir Aharonov (Aharonov–Bohm effect); Boris Podolsky and Nathan Rosen (EPR paradox); Moshe Carmeli (gauge theory); Rudolf Lipschitz (Lipschitz continuity); Paul Cohen (continuum hypothesis, axiom of choice); Laurent Schwartz (theory of distributions); Grigory Margulis (Lie groups); Richard M.
Karp (theory of computation); Adi Shamir (RSA, cryptography); Judea Pearl (artificial intelligence, Bayesian networks); Max Newman (Colossus computer); Carl Gustav Jacob Jacobi (Jacobi elliptic functions, Jacobian matrix and determinant, Jacobi symbol); Sidney Altman (molecular biology, RNA); Melvin Calvin (Calvin cycle); Otto Wallach (alicyclic compounds); Paul Berg (biochemistry of nucleic acids); Ada Yonath (crystallography, structure of the ribosome); Dan Shechtman (quasicrystals); Julius Axelrod and Bernard Katz (neurotransmitters); Elie Metchnikoff (discovery of the macrophage); Selman Waksman (discovery of streptomycin); Rosalind Franklin (DNA); Carl Djerassi (the pill); Stephen Jay Gould (evolutionary biology); Baruch Samuel Blumberg (hepatitis B virus); Jonas Salk and Albert Sabin (developers of the polio vaccines); and Paul Ehrlich (discovery of the blood–brain barrier). In fields such as psychology and neurology: Otto Rank, Viktor Frankl, Stanley Milgram and Solomon Asch; linguistics: Noam Chomsky, Franz Boas, Roman Jakobson, Edward Sapir, Joseph Greenberg; and sociology: Theodor Adorno, Nathan Glazer, Erving Goffman, Georg Simmel. Besides scientific discoveries and research, Jews have created significant and influential innovations in a wide variety of fields, as the following examples show: Siegfried Marcus, automobile pioneer and inventor of the first car; Emile Berliner, developer of the disc record phonograph; Mikhail Gurevich, co-inventor of the MiG aircraft; Theodore Maiman, inventor of the laser; Robert Adler, inventor of the wireless remote control for televisions; Edwin H. Land, inventor of the Land Camera; Bob Kahn, co-inventor of TCP and IP; Bram Cohen, creator of BitTorrent; Sergey Brin and Larry Page, creators of Google; László Bíró, inventor of the ballpoint pen; Simcha Blass, pioneer of drip irrigation; Lee Felsenstein, designer of the Osborne 1; Zeev Suraski and Andi Gutmans, co-creators of PHP and founders of Zend Technologies; and Ralph H. Baer, "The Father of Video Games".
Literature and poetry

In some places where there have been relatively high concentrations of Jews, distinct secular Jewish subcultures have arisen.[52] For example, ethnic Jews formed an enormous proportion of the literary and artistic life of Vienna, Austria at the end of the 19th century, and of New York City 50 years later (and Los Angeles in the mid-to-late 20th century). Many of these creative Jews were not particularly religious people. In general, Jewish artistic culture in various periods reflected the culture in which they lived. Gutenberg Bible. The Bible was authored by Jews during the Iron Age and the Classical era. It comprises cultural values, basic human values, mythology and religious beliefs of both Judaism and Christianity.[53] Literary and theatrical expressions of secular Jewish culture may be in specifically Jewish languages such as Hebrew, Yiddish, Judeo-Tat or Ladino, or in the language of the surrounding cultures, such as English or German. Secular literature and theater in Yiddish largely began in the 19th century and was in decline by the middle of the 20th century. The revival of Hebrew beyond its use in the liturgy is largely an early 20th-century phenomenon, and is closely associated with Zionism. Apart from the use of Hebrew in Israel, whether a Jewish community speaks a Jewish or non-Jewish language as its main vehicle of discourse generally depends on how isolated or assimilated that community is. For example, the Jews in the shtetls of Poland and the Lower East Side of New York City during the early 20th century spoke Yiddish most of the time, while assimilated Jews in 19th and early 20th-century Germany spoke German, and American-born Jews in the United States speak English.
Jewish authors have both created a unique Jewish literature and contributed to the national literature of many of the countries in which they live. Though not strictly secular, the Yiddish works of authors like Sholem Aleichem (whose collected works amounted to 28 volumes) and Isaac Bashevis Singer (winner of the 1978 Nobel Prize) form their own canon, focusing on the Jewish experience in both Eastern Europe and America. In the United States, Jewish writers like Philip Roth, Saul Bellow, and many others are considered among the greatest American authors, and incorporate a distinctly secular Jewish view into many of their works. The poetry of Allen Ginsberg often touches on Jewish themes (notably the early autobiographical works such as Howl and Kaddish). Other famous Jewish authors who made contributions to world literature include Heinrich Heine, German poet; Mordecai Richler, Canadian author; Isaac Babel, Russian author; Franz Kafka, of Prague; and Harry Mulisch, whose novel The Discovery of Heaven was named the "Best Dutch Book Ever" in a 2007 poll.[54] Hebrew Book Week in Jerusalem In Modern Judaism: An Oxford Guide, Yaakov Malkin, Professor of Aesthetics and Rhetoric at Tel Aviv University and the founder and academic director of Meitar College for Judaism as Culture in Jerusalem, writes: Secular Jewish culture embraces literary works that have stood the test of time as sources of aesthetic pleasure and ideas shared by Jews and non-Jews, works that live on beyond the immediate socio-cultural context within which they were created. They include the writings of such Jewish authors as Sholem Aleichem, Itzik Manger, Isaac Bashevis Singer, Philip Roth, Saul Bellow, S.Y. Agnon, Isaac Babel, Martin Buber, Isaiah Berlin, Haim Nahman Bialik, Yehuda Amichai, Amos Oz, A.B. Yehoshua, and David Grossman.
It boasts masterpieces that have had a considerable influence on all of western culture, Jewish culture included – works such as those of Heinrich Heine, Gustav Mahler, Leonard Bernstein, Marc Chagall, Jacob Epstein, Ben Shahn, Amedeo Modigliani, Franz Kafka, Max Reinhardt (Goldman), Ernst Lubitsch, and Woody Allen.[7] Other notable contributors are Isaac Asimov, author of the Foundation series and other works such as I, Robot, Nightfall and The Gods Themselves; Joseph Heller (Catch-22); R.L. Stine (the Goosebumps series); J. D. Salinger (The Catcher in the Rye); Michael Chabon (The Amazing Adventures of Kavalier & Clay, The Yiddish Policemen's Union); Marcel Proust (In Search of Lost Time); Arthur Miller (Death of a Salesman and The Crucible); Will Eisner (A Contract with God); Shel Silverstein (The Giving Tree); Arthur Koestler (Darkness at Noon, The Thirteenth Tribe); and Saul Bellow (Herzog). The historical novel series The Accursed Kings by Maurice Druon was an inspiration for George R. R. Martin's A Song of Ice and Fire novels.[55][56][57] Another aspect of Jewish literature is the ethical, called Musar literature. Among recipients of the Nobel Prize in Literature, 13% were or are Jewish.[58] Hebrew poetry has been written by a variety of poets in different eras of Jewish history. Biblical poetry refers to the poetry of biblical times as expressed in the Hebrew Bible and Jewish sacred texts. In medieval times, Jewish poetry was mainly expressed through piyyutim and by poets such as Yehuda Halevi, Samuel ibn Naghrillah, Solomon ibn Gabirol, Moses ibn Ezra, Abraham ibn Ezra and Dunash ben Labrat. Modern Hebrew poetry is mostly associated with the era of the revival of the Hebrew language and after, pioneered by Moshe Chaim Luzzatto in the Haskalah era and continued by poets such as Hayim Nahman Bialik, Nathan Alterman and Shaul Tchernichovsky.

Yiddish theatre

Hana Rovina in The Dybbuk (1920), a play by S. Ansky

The Ukrainian Jew Abraham Goldfaden founded the first professional Yiddish-language theatre troupe in Iași, Romania in 1876. The next year, his troupe achieved enormous success in Bucharest. Within a decade, Goldfaden and others brought Yiddish theater to Ukraine, Russia, Poland, Germany, New York City, and other cities with significant Ashkenazic populations. Between 1890 and 1940, over a dozen Yiddish theatre groups existed in New York City alone, in the Yiddish Theater District, performing original plays, musicals, and Yiddish translations of theatrical works and opera. Perhaps the most famous of Yiddish-language plays is The Dybbuk (1919) by S. Ansky. Yiddish theater in New York in the early 20th century rivalled English-language theater in quantity and often surpassed it in quality. A 1925 New York Times article remarks, "…Yiddish theater… is now a stable American institution and no longer dependent on immigration from Eastern Europe. People who can neither speak nor write Yiddish attend Yiddish stage performances and pay Broadway prices on Second Avenue." This article also mentions other aspects of a New York Jewish cultural life "in full flower" at that time, among them the fact that the extensive New York Yiddish-language press of the time included seven daily newspapers.[59] In fact, however, the next generation of American Jews spoke mainly English to the exclusion of Yiddish; they brought the artistic energy of Yiddish theater into the American theatrical mainstream, but usually in a less specifically Jewish form. Yiddish theater, most notably the Moscow State Jewish Theater directed by Solomon Mikhoels, also played a prominent role in the arts scene of the Soviet Union until Stalin's 1948 reversal in government policy toward the Jews.
(See Rootless cosmopolitan, Night of the Murdered Poets.) Montreal's Dora Wasserman Yiddish Theatre continues to thrive after 50 years of performance.

European theatre

From their Emancipation to World War II, Jews were very active and sometimes even dominant in certain forms of European theatre, and after the Holocaust many Jews continued to contribute to that cultural form. For example, in pre-Nazi Germany, where Nietzsche asked "What good actor of today is not Jewish?", acting, directing and writing positions were often filled by Jews. "In Imperial Berlin, Jewish artists could be found in the forefront of the performing arts, from high drama to more popular forms like cabaret and revue, and eventually film. Jewish audiences patronized innovative theater, regardless of whether they approved of what they saw."[60] The British historian Paul Johnson, commenting on Jewish contributions to European culture at the fin de siècle, writes that: The area where Jewish influence was strongest was the theatre, especially in Berlin. Playwrights like Carl Sternheim, Arthur Schnitzler, Ernst Toller, Erwin Piscator, Walter Hasenclever, Ferenc Molnár and Carl Zuckmayer, and influential producers like Max Reinhardt, appeared at times to dominate the stage, which tended to be modishly left-wing, pro-republican, experimental and sexually daring. But it was certainly not revolutionary, and it was cosmopolitan rather than Jewish.[61] Jews also made similar, if not as massive, contributions to theatre and drama in Austria, Britain, France, and Russia (in the national languages of those countries).
Jews in Vienna, Paris and German cities found cabaret both a popular and effective means of expression, as German cabaret in the Weimar Republic "was mostly a Jewish art form".[62] The involvement of Jews in Central European theatre was halted by the rise of the Nazis and the purging of Jews from cultural posts, though many emigrated to Western Europe or the United States and continued working there.

English-language theatre

In the early 20th century the traditions of New York's vibrant Yiddish Theatre District both rivaled and fed into Broadway. In the English-speaking theatre, Jewish émigrés brought novel theatrical ideas from Europe, such as the theatrical realist movement and the philosophy of Konstantin Stanislavski, whose teachings would influence many Jewish-American acting teachers, such as the Yiddish theatre-trained acting theorist Stella Adler. Jewish immigrants were instrumental in the creation and development of the genre of musical theatre and earlier forms of theatrical entertainment in America, and would innovate the new, distinctly American art form, the Broadway musical.[63] Brandeis University Professor Stephen J. Whitfield has commented that "More so than behind the screen, the talent behind the stage was for over half a century virtually the monopoly of one ethnic group. That is...
[a] feature which locates Broadway at the center of Jewish culture".[64] New York University Professor Laurence Maslon says that "There would be no American musical without Jews… Their influence is corollary to the influence of black musicians on jazz; there were as many Jews involved in the form".[65] Other writers, such as Jerome Charyn, have noted that musical theatre and other forms of American entertainment are uniquely indebted to the contributions of Jewish Americans, since "there might not have been a modern Broadway without the 'Asiatic horde' of comedians, gossip columnists, songwriters, and singers that grew out of the ghetto, whether it was on the Lower East Side, Harlem (a Jewish ghetto before it was a black one), Newark, or Washington, D.C."[66] Likewise, in the analysis of Aaron Kula, director of The Klezmer Company: …the Jewish experience has always been best expressed by music, and Broadway has always been an integral part of the Jewish-American experience… The difference is that one can expand the definition of "Jewish Broadway" to include an interdisciplinary roadway with a wide range of artistic activities packed onto one avenue—theatre, opera, symphony, ballet, publishing companies, choirs, synagogues and more. This vibrant landscape reflects the life, times and creative output of the Jewish-American artist.[67] In the 19th and early 20th centuries the European operetta, a precursor of the musical, often featured the work of Jewish composers such as Paul Abraham, Leo Ascher, Edmund Eysler, Leo Fall, Bruno Granichstaedten, Jacques Offenbach, Emmerich Kalman, Sigmund Romberg, Oscar Straus and Rudolf Friml; the latter four eventually moved to the United States and produced their works on the New York stage.
One of the librettists for Bizet's Carmen (not an operetta proper but rather a work of the earlier opéra comique form) was the Jewish Ludovic Halévy, nephew of the composer Fromental Halévy (Bizet himself was not Jewish, but he married the elder Halévy's daughter; many have suspected that he was descended from Jewish converts to Christianity, and others have noticed Jewish-sounding intervals in his music).[68] The Viennese librettist Victor Leon summarized the connection of Jewish composers and writers with the form of operetta: "The audience for operetta wants to laugh beneath tears—and that is exactly what Jews have been doing for the last two thousand years since the destruction of Jerusalem".[69] Another factor in the evolution of musical theatre was vaudeville, and during the early 20th century the form was explored and expanded by Jewish comedians and actors such as Jack Benny, Fanny Brice, Eddie Cantor, the Marx Brothers, Anna Held, Al Jolson, Molly Picon, Sophie Tucker and Ed Wynn. During the period when Broadway was monopolized by revues and similar entertainments, the Jewish producer Florenz Ziegfeld dominated the theatrical scene with his Follies. By 1910 Jews (the vast majority of them immigrants from Eastern Europe) already made up a quarter of the population of New York City, and almost immediately Jewish artists and intellectuals began to show their influence on the cultural life of that city and, through time, the country as a whole. Likewise, while the modern musical can best be described as a fusion of operetta, earlier American entertainment, African-American culture and music, and Jewish culture and music, the actual authors of the first "book musicals" were the Jewish Jerome Kern, Oscar Hammerstein II, George and Ira Gershwin, George S. Kaufman and Morrie Ryskind.
From that time until the 1980s the vast majority of successful musical theatre composers, lyricists, and book-writers were Jewish (a notable exception is the Protestant Cole Porter, who acknowledged that the reason he was so successful on Broadway was that he wrote what he called "Jewish music").[70] Rodgers and Hammerstein, Frank Loesser, Lerner and Loewe, Stephen Sondheim, Leonard Bernstein, Stephen Schwartz, Kander and Ebb, and dozens of others during the "Golden Age" of musical theatre were Jewish. Since the Tony Award for Best Original Score was instituted in 1947, approximately 70% of nominated scores and 60% of winning scores have been by Jewish composers. Among successful British and French musical writers, both in the West End and on Broadway, Claude-Michel Schönberg and Lionel Bart are Jewish, among others. One explanation of the affinity of Jewish composers and playwrights for the musical is that "traditional Jewish religious music was most often led by a single singer, a cantor, while Christians emphasize choral singing."[71] Many of these writers used the musical to explore issues relating to assimilation, the acceptance of the outsider in society, the racial situation in the United States, the overcoming of obstacles through perseverance, and other topics pertinent to Jewish Americans and Western Jews in general, often using subtle and disguised stories to get the point across.[72] For example, Kern, Rodgers, Hammerstein, the Gershwins, Harold Arlen and Yip Harburg wrote musicals and operas aiming to normalize societal toleration of minorities and urging racial harmony; these works included Show Boat, Porgy and Bess, Finian's Rainbow, South Pacific and The King and I. Towards the end of the Golden Age, writers also began to openly and overtly tackle Jewish subjects and issues, as in Fiddler on the Roof and Rags; Bart's Blitz! also tackles relations between Jews and Gentiles.
Jason Robert Brown and Alfred Uhry's Parade is a sensitive exploration of both anti-Semitism and historical American racism. The original concept that became West Side Story was set in the Lower East Side during Easter–Passover celebrations; the rival gangs were to be Jewish and Italian Catholic.[73] The ranks of prominent Jewish producers, directors, designers and performers include Boris Aronson, David Belasco, Joel Grey, the Minskoff family, Zero Mostel, Joseph Papp, Mandy Patinkin, the Nederlander family, Harold Prince, Max Reinhardt, Jerome Robbins, the Shubert family and Julie Taymor. Jewish playwrights have also contributed to non-musical drama and theatre, both on Broadway and regionally. Edna Ferber, Moss Hart, Lillian Hellman, Arthur Miller and Neil Simon are only some of the prominent Jewish playwrights in American theatrical history. Approximately 34% of the plays and musicals that have won the Pulitzer Prize for Drama were written and composed by Jewish Americans.[74] The Association for Jewish Theater is a contemporary organization that includes both American and international theaters focusing on theater with Jewish content. It has also expanded to include Jewish playwrights.

Hebrew and Israeli theatre

Habima theater, 2021

The earliest known Hebrew-language drama was written around 1550 by a Jewish-Italian writer from Mantua.[75] A few works were written by rabbis and Kabbalists in 17th-century Amsterdam, where Jews were relatively free from persecution and had flourishing religious and secular Jewish cultures.[76] All of these early Hebrew plays were about Biblical or mystical subjects, often in the form of Talmudic parables. During the post-Emancipation period in 19th-century Europe, many Jews translated great European plays such as those by Shakespeare, Molière and Schiller, giving the characters Jewish names and transplanting the plot and setting into a Jewish context.
Modern Hebrew theatre and drama, however, began with the development of Modern Hebrew in Europe (the first professional Hebrew theatrical performance was in Moscow in 1918)[77] and was "closely linked with the Jewish national renaissance movement of the twentieth century. The historical awareness and the sense of primacy which accompanied the Hebrew theatre in its early years dictated the course of its artistic and aesthetic development".[78] These traditions were soon transplanted to Israel. Playwrights such as Natan Alterman, Hayyim Nahman Bialik, Leah Goldberg, Ephraim Kishon, Hanoch Levin, Aharon Megged, Moshe Shamir, Avraham Shlonsky, Yehoshua Sobol and A. B. Yehoshua have written Hebrew-language plays. Common themes in these works are the Holocaust, the Arab–Israeli conflict, the meaning of Jewishness, and contemporary secular–religious tensions within Jewish Israel. The best-known Hebrew theatre company and Israel's national theatre is the Habima (meaning "the stage" in Hebrew), which was formed in 1913 in Lithuania and re-established in 1917 in Russia; another prominent Israeli theatre company is the Cameri Theatre, which is "Israel's first and leading repertory theatre".[79]

Judeo-Tat theatre

Acting troupe in the play Ashig Garib, Judeo-Tat theatre, Derbent, USSR, 1984. First row, from left to right: Katya, Bikel Matatova. Second row, from left to right: musician Israel Izrailov, Roman Izyaev, Avshalum Nakhshunov, Raziil Ilyaguev, Abram Avdalimov. Third row, from left to right: Ilizir Abramov, Anatoly Yusupov, Israel Tsvaygenbaum.
The first theatrical event by Mountain Jews took place in December 1903,[80] when Asaf Agarunov, a teacher and a Zionist, adapted a story by Naum Shoykovich, translated from Hebrew, "The Burn for Burn," and staged it in honor of schoolteacher Nagdimuna ben Simona's (Shimunov) wedding.[80] In 1918, a drama studio headed by Rabbi Yashaiyo Rabinovich was opened in Derbent, Soviet Union.[80] In 1935, the first Soviet theatre in Derbent opened, which included three troupes – Russian, Mountain Jewish and Turkic. It was based on drama circles, which were led by Manashir and Khanum Shalumov. Initially, in the circle, men played the female roles; later, women began to take part in the theatre.[81] In 1939, the Judeo-Tat theatre was the winner of the festival of theatres in Dagestan. During World War II, most of the actors were drafted into the army, and many theatre actors died in the war.[82] In 1943, the theatre resumed its work, and in 1948 it was closed; the official reason was its unprofitability.[82] In the 1960s, the theatre resumed its activities and experienced its second heyday. The actress Akhso Ilyaguevna Shalumova (1909–1985), "Honored Artist of the Dagestan ASSR," returned to the theatre. She played the role of Shahnugor, the wife of Shimi Derbendi (Juhuri: Шими Дербенди), based on the stories of the writer Hizgil Avshalumov.[82] In the 1970s, the People's Judeo-Tat theatre was organized. For many years, its director was Abram Avdalimov, "Honored Cultural Worker of the Dagestan ASSR," singer, actor and playwright. His successor was Roman Izyaev, who was awarded the Order of the Badge of Honour for his meritorious service.[82] In the 1990s, the Judeo-Tat theatre experienced another crisis: it rarely held performances and had no premieres. Only in 2000, when it became a municipal theater, was it able to resume its activity.
From 2000 to 2002, the theatre was headed by the actor and musician Raziil Semenovich Ilyaguev (1945–2016), "Honored Worker of Culture of the Republic of Dagestan." For the next two years the theatre was headed by Alesya Isakova. In 2004, Lev Yakovlevich Manakhimov (1950–2021), "Honored Artist of the Republic of Dagestan," became the artistic director of the theatre. After the death of Manakhimov, Boris Yudaev became the head of the theatre.

Film

In the era when Yiddish theatre was still a major force in the world of theatre, over 100 films were made in Yiddish. Many are now lost. Prominent films included Shulamith (1931); the first Yiddish musical on film, His Wife's Lover (1931); A Daughter of Her People (1932); the anti-Nazi film The Wandering Jew (1933); The Yiddish King Lear (1934); Shir Hashirim (1935); the biggest Yiddish film hit of all time, Yidl Mitn Fidl (1936); Where Is My Child? (1937); Green Fields (1937); Dybuk (1937); The Singing Blacksmith (1938); Tevya (1939); Mirele Efros (1939); Lang ist der Weg (1948); and God, Man and Devil (1950). The roster of Jewish entrepreneurs in the English-language American film industry is legendary: Samuel Goldwyn, Louis B. Mayer, the Warner Brothers, David O. Selznick, Marcus Loew, Adolph Zukor and William Fox, to name just a few, and continuing into recent times with such industry giants as super-agent Michael Ovitz, Michael Eisner, Lew Wasserman, Jeffrey Katzenberg, Steven Spielberg, and David Geffen. However, few of these brought a specifically Jewish sensibility either to the art of film or, with the sometime exception of Spielberg, to their choice of subject matter. The historian Eric Hobsbawm described the situation as follows:[83] It would be ...
pointless to look for consciously Jewish elements in the songs of Irving Berlin or the Hollywood movies of the era of the great studios, all of which were run by immigrant Jews: their object, in which they succeeded, was precisely to make songs or films which found a specific expression for 100 per cent Americanness. A more specifically Jewish sensibility can be seen in the films of the Marx Brothers, Mel Brooks, or Woody Allen; other examples of specifically Jewish films from the Hollywood film industry are the Barbra Streisand vehicle Yentl (1983) and John Frankenheimer's The Fixer (1968). More recently, Call Me by Your Name (2017) can be given as an example of a movie with a Jewish sensibility. Jewish film festivals are now held in many major cities around the world as vehicles for introducing such films to wider audiences, including, among others, the Boston JFF, San Francisco JFF, and Jerusalem JFF.

Radio and television

The first radio networks, the Radio Corporation of America and the Columbia Broadcasting System, were created by the Jewish Americans David Sarnoff and William S. Paley, respectively. These Jewish innovators were also among the first producers of televisions, both black-and-white and color.[84] Among the Jewish immigrant communities of America there was also a thriving Yiddish-language radio, with its "golden age" from the 1930s to the 1950s. Although there is little specifically Jewish television in the United States (National Jewish Television, largely religious, broadcasts only three hours a week), Jews have been involved in American television from its earliest days. From Sid Caesar and Milton Berle to Joan Rivers, Gilda Radner, and Andy Kaufman, to Billy Crystal and Jerry Seinfeld, Jewish stand-up comedians have been icons of American television. Other Jews who held a prominent role in early radio and television were Eddie Cantor, Al Jolson, Jack Benny, Walter Winchell and David Susskind.
Other figures include Larry King, Michael Savage and Howard Stern. In the analysis of Paul Johnson, "The Broadway musical, radio and TV were all examples of a fundamental principle in Jewish diaspora history: Jews opening up a completely new field in business and culture, a tabula rasa on which to set their mark, before other interests had a chance to take possession, erect guild or professional fortifications and deny them entry."[85] One of the first televised situation comedies, The Goldbergs, was set in a specifically Jewish milieu in the Bronx. While the overt Jewish milieu of The Goldbergs was unusual for an American television series, there were a few other examples, such as Brooklyn Bridge (1991–1993) and Bridget Loves Bernie. Jews have also played an enormous role among the creators and writers of television comedies: Woody Allen, Mel Brooks, Selma Diamond, Larry Gelbart, Carl Reiner, and Neil Simon all wrote for Sid Caesar; Reiner's son Rob Reiner worked with Norman Lear on All in the Family (which often engaged with anti-Semitism and other issues of prejudice); Larry David and Jerry Seinfeld created the hit sitcom Seinfeld; and Lorne Michaels, Al Franken, Rosie Shuster, and Alan Zweibel of Saturday Night Live breathed new life into the variety show in the 1970s. More recently, American Jews have been instrumental in "novelistic" television series such as The Wire and The Sopranos. Widely acclaimed as one of the greatest television series of all time, The Wire was created by David Simon, who also served as executive producer, head writer, and showrunner. Matthew Weiner produced the fifth and sixth seasons of The Sopranos and later created Mad Men. Other remarkable contributors are David Benioff and D. B. Weiss, creators of the Game of Thrones TV series; Ron Leavitt, co-creator of Married... with Children; Damon Lindelof and J. J.
Abrams, co-creators of Lost; David Crane and Marta Kauffman, creators of Friends; Tim Kring, creator of Heroes; Sydney Newman, co-creator of Doctor Who; Darren Star, creator of Sex and the City and Melrose Place; Aaron Spelling, co-creator of Beverly Hills, 90210; Chuck Lorre, co-creator of The Big Bang Theory and Two and a Half Men; Gideon Raff, creator of Prisoners of War, on which Homeland is based; Aaron Ruben and Sheldon Leonard, co-creators of The Andy Griffith Show; Don Hewitt, creator of 60 Minutes; Garry Shandling, co-creator of The Larry Sanders Show; Ed. Weinberger, co-creator of The Cosby Show; David Milch, creator of Deadwood; Steven Levitan, co-creator of Modern Family; Dick Wolf, creator of Law & Order; David Shore, creator of House; Max Mutchnick and David Kohan, creators of Will & Grace; and Adam Horowitz and Edward Kitsis, creators of Once Upon a Time. Jews have also played a significant role in acting, with actors such as Sarah Jessica Parker, William Shatner, Leonard Nimoy, Mila Kunis, Zac Efron, Hank Azaria, David Duchovny, Fred Savage, Zach Braff, Noah Wyle, Adam Brody, Katey Sagal, Sarah Michelle Gellar, Alyson Hannigan, Michelle Trachtenberg, David Schwimmer, Lisa Kudrow and Mayim Bialik.

Music

Jewish musical contributions also tend to reflect the cultures of the countries in which Jews live, the most notable examples being classical and popular music in the United States and Europe. Some music, however, is unique to particular Jewish communities, such as Israeli music, Israeli folk music, Klezmer, Sephardic and Ladino music, and Mizrahi music.

Classical music

The Israel Philharmonic Orchestra's 70th Anniversary

Before Emancipation, virtually all Jewish music in Europe was sacred music, with the exception of the performances of klezmorim at weddings and other occasions.
The result was a lack of a Jewish presence in European classical music until the 19th century, with very few exceptions, normally enabled by specific aristocratic protection, such as Salamone Rossi and Claude Daquin (the work of the former is considered the beginning of "Jewish art music").[86] After Jews were admitted to mainstream society in England (gradually after their return in the 17th century), France, Austria-Hungary, the German Empire, and Russia (in that order), the Jewish contribution to the European music scene steadily increased, but in the form of mainstream European music, not specifically Jewish music. Notable examples of Jewish Romantic composers (by country) are Charles-Valentin Alkan, Paul Dukas and Fromental Halévy from France; Josef Dessauer, Karl Goldmark and Gustav Mahler from Bohemia (most Austrian Jews during this time were native not to what is today Austria but to the outer provinces of the Empire); Felix Mendelssohn and Giacomo Meyerbeer from Germany; and Anton and Nikolai Rubinstein from Russia. Singers included John Braham and Giuditta Pasta. There were many notable Jewish violin and piano virtuosi, including Joseph Joachim, Ferdinand David, Carl Tausig, Henri Herz, Leopold Auer, Jascha Heifetz, and Ignaz Moscheles. During the 20th century the number of Jewish composers and notable instrumentalists increased, as did their geographical distribution. Jewish 20th-century composers include Arnold Schoenberg and Alexander von Zemlinsky from Austria; Hanns Eisler and Kurt Weill from Germany; Viktor Ullmann and Jaromír Weinberger from Bohemia and later the Czech Republic (the former perished at the Auschwitz extermination camp); George Gershwin and Aaron Copland from the United States; Darius Milhaud and Alexandre Tansman from France; Alfred Schnittke and Lera Auerbach from Russia; Lalo Schifrin and Mario Davidovsky from Argentina; and Paul Ben-Haim and Shulamit Ran from Israel.
There are some genres and forms of classical music that Jewish composers have been associated with, most notably French grand opera during the Romantic period. The most prolific composers of this genre included Giacomo Meyerbeer, Fromental Halévy, and, later, Jacques Offenbach; Halévy's La Juive was based on a libretto by Scribe that was only loosely connected to the Jewish experience. While orchestral and operatic works by Jewish composers would in general be considered secular, many Jewish (as well as non-Jewish) composers have incorporated Jewish themes and motifs into their music. Sometimes this was done covertly, as in the klezmer-band music that many critics and observers hear in the third movement of Mahler's Symphony No. 1; this type of veiled Jewish reference was most common during the 19th century, when openly displaying one's Jewishness would most likely hamper a Jew's chances at assimilation. During the 20th century, however, many Jewish composers wrote music with direct Jewish references and themes, e.g. David Amram (Symphony – "Songs of the Soul"), Leonard Bernstein (Kaddish Symphony, Chichester Psalms), Ernest Bloch (Schelomo), Arnold Schoenberg, Mario Castelnuovo-Tedesco (Violin Concerto No. 2), Kurt Weill (The Eternal Road) and Hugo Weisgall (Psalm of the Instant Dove).

Giacomo Meyerbeer · Fanny Mendelssohn · Felix Mendelssohn · Charles-Valentin Alkan · Jacques Offenbach · Anton Rubinstein · Gustav Mahler · Clara Haskil

In the late twentieth century, prominent composers such as Morton Feldman, György Ligeti and Alfred Schnittke made significant contributions to contemporary music.

Popular music

The great songwriters and lyricists of American traditional popular music and jazz standards were predominantly Jewish, including Harold Arlen, Jerome Kern, George Gershwin, Frank Loesser, Richard Rodgers and Irving Berlin.
Deriving from Biblical traditions, Jewish dance has long been used by Jews as a medium for the expression of joy and other communal emotions.[87] Each Jewish diasporic community developed its own dance traditions for wedding celebrations and other distinguished events. For Ashkenazi Jews in Eastern Europe, for example, dances whose names corresponded to the different forms of klezmer music being played were an obvious staple of the shtetl wedding ceremony.[88] Jewish dances were influenced both by surrounding Gentile traditions and by Jewish sources preserved over time. "Nevertheless the Jews practiced a corporeal expressive language that was highly differentiated from that of the non-Jewish peoples of their neighborhood, mainly through motions of the hands and arms, with more intricate legwork by the younger men."[89] In general, however, in most religiously traditional communities, members of the opposite sex dancing together, or dancing at times other than at these events, was frowned upon. Jewish humor is the long tradition of humor in Judaism dating back to the Torah and the Midrash, but generally refers to the more recent stream of verbal, frequently self-deprecating and often anecdotal humor originating in Europe.[90] Jewish humor took root in the United States over the last hundred years, beginning with vaudeville and continuing through radio, stand-up, film, and television.[91] A significant number of American comedians have been or are Jewish.[citation needed]

Visual arts

"Death of King Saul", by Elie Marcuse (1848). (Tel Aviv Museum of Art)

Compared to music or theater, there is less of a specifically Jewish tradition in the visual arts. The most likely and accepted reason is that, as has been previously shown with Jewish music and literature, before Emancipation Jewish culture was dominated by the religious tradition of aniconism.
As most Rabbinical authorities believed that the Second Commandment prohibited much visual art that would qualify as "graven images", Jewish artists were relatively rare until they lived in assimilated European communities beginning in the late 18th century.[92][93] Despite fears by early religious communities of art being used for idolatrous purposes, Jewish sacred art is recorded in the Tanakh and extends throughout Jewish Antiquity and the Middle Ages.[94] The Tabernacle and the two Temples in Jerusalem form the first known examples of "Jewish art". During the first centuries of the Common Era, Jewish religious art was also created in regions surrounding the Mediterranean, such as Syria and Greece, including frescoes on the walls of synagogues, of which the Dura-Europos Synagogue was the only survivor,[95] prior to its destruction by ISIL in 2017, as well as the Jewish catacombs in Rome.[96][97]

Zodiac Wheel Mosaic in the great synagogue of Tzippori (5th century) in Galilee, Israel

A Jewish tradition of illuminated manuscripts, reaching back at least to Late Antiquity, has left no survivors, but can be deduced from borrowings in Early Medieval Christian art. A number of luxury pieces of gold glass from the later Roman period have Jewish motifs. Several Hellenistic-style floor mosaics have also been excavated in synagogues from Late Antiquity in Israel and Palestine, especially of the signs of the Zodiac, a motif that was apparently acceptable in a low-status position on the floor. Some, such as that at Naaran, show evidence of a reaction against images of living creatures around 600 CE. The decoration of sarcophagi and walls at the cave cemetery at Beit She'arim shows a mixture of Jewish and Hellenistic motifs. However, for a period of several centuries between about 700 and 1100 CE there are scarcely any survivals of identifiably Jewish art.
Medieval Rabbinical and Kabbalistic literature also contains textual and graphic art, most famously illuminated haggadahs such as the Sarajevo Haggadah, and other manuscripts like the Nuremberg Mahzor. Some of these were illustrated by Jewish artists and some by Christians; equally, some Jewish artists and craftsmen in various media worked on Christian commissions.[98] Outside of Europe, Yemenite Jewish silversmiths developed a distinctive style of finely wrought silver that is admired for its artistry. Johnson again summarizes this sudden change from a limited participation by Jews in visual art (as in many other arts) to a large movement by them into this branch of European cultural life: Again, the arrival of the Jewish artist was a strange phenomenon. It is true that, over the centuries, there had been many animals (though few humans) depicted in Jewish art: lions on Torah curtains, owls on Judaic coins, animals on the Capernaum capitals, birds on the rim of the fountain-basin in the 5th-century Naro synagogue in Tunis; there were carved animals, too, on timber synagogues in eastern Europe – indeed the Jewish wood-carver was the prototype of the modern Jewish plastic artist. A book of Yiddish folk-ornament, printed at Vitebsk in 1920, was similar to Chagall's own bestiary. But the resistance of pious Jews to portraying the living human image was still strong at the beginning of the 20th century.[99]

Wall painting in the Dura Europos synagogue, circa 250 CE

There were few Jewish secular artists in Europe prior to the Emancipation that spread throughout Europe with the Napoleonic conquests. There were exceptions: Salomon Adler, for instance, was a prominent portrait painter in 18th-century Milan. The delay in participation in the visual arts parallels the lack of Jewish participation in European classical music until the nineteenth century, and was progressively overcome with the rise of Modernism in the 20th century.
There were many Jewish artists in the 19th century, but Jewish artistic activity boomed around the end of World War I. The Jewish artistic renaissance has its roots in the 1901 Fifth Zionist Congress, which included an art exhibition featuring the Jewish artists E.M. Lilien and Hermann Struck. The exhibition helped legitimize art as an expression of Jewish culture.[100] According to Nadine Nieszawer, "Until 1905, Jews were always plunged into their books but from the first Russian Revolution, they became emancipated, committed themselves in politics and became artists. A real Jewish cultural rebirth".[101] Individual Jews figured in the modern artistic movements of Europe. With the exception of those living in isolated Jewish communities, most Jews listed here as contributing to secular Jewish culture also participated in the cultures of the peoples they lived with and the nations they lived in. In most cases, however, the work and lives of these people did not exist in two distinct cultural spheres but rather in one that incorporated elements of both.

Itzhak Danziger, Nimrod, 1939. The Israel Museum, Jerusalem Collection

During the early 20th century Jews figured particularly prominently in the Montparnasse movement, and after World War II among the abstract expressionists: Alexander Bogen, Helen Frankenthaler, Adolph Gottlieb, Philip Guston, Al Held, Lee Krasner, Barnett Newman, Milton Resnick, Jack Tworkov, Mark Rothko, and Louis Schanker, as well as among Contemporary artists, Modernists and Postmodernists.[102] Many Russian Jews were prominent in the art of scenic design, particularly the aforementioned Chagall and Aronson, as well as the revolutionary Léon Bakst, who like the other two also painted. One Mexican Jewish artist was Pedro Friedeberg; historians disagree as to whether Frida Kahlo's father was Jewish or Lutheran. Gustav Klimt was not Jewish, but nearly all of his patrons and several of his models were.
Among major artists Chagall may be the most specifically Jewish in his themes. But as art fades into graphic design, Jewish names and themes become more prominent: Leonard Baskin, Al Hirschfeld, Peter Max, Ben Shahn, Art Spiegelman and Saul Steinberg. Jews have also played a very important role in media other than painting: in photography, some notable figures are André Kertész, Robert Frank, Helmut Newton, Garry Winogrand, Cindy Sherman, Steve Lehman,[103] and Adi Nes; in installation art and street art, some notable figures are Sigalit Landau,[104] Dede,[105] and Michal Rovner.

Camille Pissarro · Amedeo Modigliani · Diego Rivera · Alexander Bogen · Marc Chagall

Comics, cartoons, and animation

Stan Lee (left) and Jack Kirby (right) made a major contribution to the American comic book industry. Their work includes The Avengers, Captain America, Fantastic Four, Spider-Man, and X-Men.

Graphic art, as expressed in the art of comics, has been a key field for Jewish artists as well. In the Golden and Silver Ages of American comic books, the Jewish role was overwhelming, and a large number of the medium's foremost creators have been Jewish.[106] Max Gaines was a pioneering figure in the creation of the modern comic book when, in 1935, he published the first one, Famous Funnies.[107] In 1939, he founded, with Jack Liebowitz and Harry Donenfeld, All-American Publications (the AA Group).[108] The publisher is known for the creation of several superheroes such as the original Atom, Flash, Green Lantern, Hawkman, and Wonder Woman. Donenfeld and Liebowitz were also the owners of National Allied Publications, which distributed Detective Comics and Action Comics. That company was also a precursor of DC Comics. In 1939, the pulp magazine publisher Martin Goodman formed Timely Publications,[109] a company known, since the 1960s, as Marvel Comics.
At Marvel, artists such as Stan Lee, Jack Kirby,[110] Larry Lieber and Joe Simon created a large variety of characters and cultural icons including Spider-Man, Hulk, Captain America, Iron Man, Thor, Daredevil, and the teams Fantastic Four, Avengers, X-Men (including many of its characters) and S.H.I.E.L.D. Stan Lee attributed the Jewish role in comics to Jewish culture.[111] At DC Comics the Jewish role was significant as well; the character of Superman, created by the Jewish artists Joe Shuster and Jerry Siegel,[106] is partly based on the biblical figure of Samson.[112] It has also been suggested that Superman is partly influenced by Moses[113][114] and other Jewish elements. Also at DC Comics were Bob Kane, Bill Finger and Martin Nodell, creators of Green Lantern and Batman[106] and of many related characters such as Robin, the Joker, the Riddler, Scarecrow and Catwoman, and Gil Kane, co-creator of Atom and Iron Fist. Many of those involved in the later ages of comics are also Jewish, such as Julius Schwartz, Joe Kubert, Jenette Kahn, Len Wein, Peter David, Neil Gaiman, Chris Claremont and Brian Michael Bendis. There is also a large number of Jewish characters among comics superheroes, such as Magneto, Quicksilver, Kitty Pryde, The Thing, Sasquatch, Sabra, Ragman, Legion, and Moon Knight, many of whom were and are influenced by events in Jewish history and elements of Jewish life.[115] In 1944, Max Gaines founded EC Comics.[116] The company is known for specializing in horror fiction, crime fiction, satire, military fiction and science fiction from the 1940s through the mid-1950s, notably the Tales from the Crypt series, The Haunt of Fear, The Vault of Horror, Crime SuspenStories and Shock SuspenStories. Jewish artists associated with the publisher include Al Feldstein, Dave Berg, and Jack Kamen. Will Eisner was an American cartoonist, known as one of the earliest cartoonists to work in the American comic book industry.
He is the creator of the Spirit comics series and the graphic novel A Contract with God.[117] The Eisner Award was named in his honor, and is given to recognize achievements each year in the comics medium.

Ralph Bakshi is a director of animated and live-action films, known for films such as Wizards (1977), The Lord of the Rings (1978), and Fire and Ice (1983).

In 1952, William Gaines and Harvey Kurtzman founded Mad, an American humor magazine. It was widely imitated and influential, affecting satirical media as well as the cultural landscape of the 20th century, with editor Al Feldstein increasing readership to more than two million during its 1970s circulation peak.[118] Other well-known cartoonists include Lee Falk, creator of The Phantom and Mandrake the Magician; the Hebrew comics artists Michael Netzer, creator of Uri-On, and Uri Fink, creator of Zbeng!; William Steig, creator of Shrek!; Daniel Clowes, creator of Eightball; and Art Spiegelman, creator of the graphic novel Maus and of Raw (with Françoise Mouly). In animation, the Jewish role is reflected in many figures: Genndy Tartakovsky is the creator of several animated TV series such as Dexter's Laboratory and Samurai Jack;[119] Matt Stone, co-creator of South Park; David Hilberman, who helped animate Bambi and Snow White and the Seven Dwarfs; Friz Freleng, Looney Tunes; Ralph Bakshi, Fritz the Cat, Mighty Mouse: The New Adventures, Wizards, The Lord of the Rings, Heavy Traffic, Coonskin, Hey Good Lookin', Fire and Ice, and Cool World;[120] Alex Hirsch, creator of Gravity Falls; Dave Fleischer and Lou Fleischer, founders of Fleischer Studios; Max Fleischer, animator of Betty Boop, Popeye and Superman; and Rebecca Sugar, creator of Steven Universe.[121] Several companies producing animation were founded by Jews, such as DreamWorks, whose products include Shrek, Madagascar, Kung Fu Panda and The Prince of Egypt, and Warner Bros., whose animation division is known for cartoons such as Looney Tunes, Tiny Toon Adventures, Animaniacs, Pinky and the
Brain and Freakazoid!.

Jewish cooking combines the food of many cultures in which Jews have settled, including Middle Eastern, Mediterranean, Spanish, German and Eastern European styles of cooking, all influenced by the need for food to be kosher. Thus, "Jewish" foods like bagels,[122] hummus,[123] stuffed cabbage,[124] and blintzes all come from various other cultures. The amalgam of these foods, plus uniquely Jewish contributions like tzimmes,[125] cholent, gefilte fish[126] and matzah balls,[127] makes up Jewish cuisine.

Philo-Semitism (also spelled philosemitism) or Judeophilia is an interest in, respect for, and appreciation of Jewish people, their history, and their culture, and of the influence of Judaism, particularly on the part of a gentile.[128] Within the Jewish community, philo-Semitism includes an interest in Jewish culture and a love of things that are considered Jewish.[129] Very few Jews live in East Asian countries, but Jews are viewed in an especially positive light in some of them, partly owing to shared wartime experiences during the Second World War. Examples include South Korea[130] and China.[131] In general, Jews are positively stereotyped as intelligent, business-savvy and committed to family values and responsibility, while in the Western world the first two of these stereotypes more often have the negatively interpreted equivalents of guile and greed. In South Korean primary schools the Talmud is mandatory reading.[130]

See also

References

1. ^ Lawrence Schiffman, Understanding Second Temple and Rabbinic Judaism. KTAV Publishing House, 2003. p. 3. 2. ^ Biale, David, Not in the Heavens: The Tradition of Jewish Secular Thought, Princeton University Press, 2011, pp. 5–6, 15 3. ^ Torstrick, Rebecca L., Culture and customs of Israel, Greenwood Press, 2004 4.
^ Măciucă, Constantin, preface to Bercovici, Israil, O sută de ani de teatru evriesc în România ("One hundred years of Yiddish/Jewish theater in Romania"), 2nd Romanian-language edition, revised and augmented by Constantin Măciucă. Editura Integral (an imprint of Editurile Universala), Bucharest (1998). ISBN 973-98272-2-5. See the article on the author for further information. 5. ^ The Emergence of a Jewish Cultural Identity Archived 2005-10-28 at the Wayback Machine, undated (2002 or later) on, reprinted from the National Foundation for Jewish Culture. Accessed 11 February 2006. 6. ^ "". Archived from the original on May 10, 2006. Retrieved September 18, 2017. 7. ^ a b Malkin, Y. "Humanistic and secular Judaisms." Modern Judaism An Oxford Guide, p. 107. 9. ^ "Introduction to Philosophy" by Dr Tom Kerns 10. ^ Moore, Edward (June 28, 2005). "Middle Platonism – Philo of Alexandria". The Internet Encyclopedia of Philosophy. ISSN 2161-0002. Retrieved December 20, 2012. 11. ^ Keener, Craig S (2003). The Gospel of John: A Commentary. Vol. 1. Peabody, Mass.: Hendrickson. pp. 343–347. 12. ^ Yalom, Irvin (February 21, 2012). "The Spinoza Problem". The Washington Post. Archived from the original on November 12, 2013. 13. ^ "Mendelssohn". Retrieved October 22, 2012. 14. ^ Wein (1997), p. 44. (Google books) 15. ^ Daniel J. Elazar, Judaism and Democracy: The Reality. Undated. Jerusalem Center for Public Affairs. Accessed February 11, 2006. 16. ^ "A Jewish Fight for Public Education". September 2, 2013. Retrieved October 7, 2014. 17. ^ "The Jewish Americans". PBS. Retrieved October 7, 2014. 18. ^ Sowell, Thomas (2006). On classical economics. New Haven, CT: Yale University Press. 19. ^ "David Ricardo – Policonomics". Retrieved September 18, 2017. 20. ^ The section on banking is drawn largely from the article "Usury" in the public domain Jewish Encyclopedia (1901–1906). 21. ^ "Jewish Nobel Prize Laureates". Retrieved September 18, 2017. 22. ^ JINFO. 
"Jewish Nobel Prize Winners in Physics". 23. ^ JINFO. "Jewish Nobel Prize Winners in Chemistry". 24. ^ JINFO. "Jewish Nobel Prize Winners in Medicine". 25. ^ JINFO. "Jewish Recipients of the ACM Turing Award". 26. ^ "Jewish Recipients of the Fields Medal". 27. ^ "Rosalind Franklin :: DNA from the Beginning". Retrieved September 18, 2017. 28. ^ "Rosalind Franklin: A Crucial Contribution". Retrieved September 18, 2017. 29. ^ "Rosalind Franklin's contributions to the study of DNA". Archived from the original on September 6, 2006. Retrieved September 18, 2017. 30. ^ Kurtz, J. H.; Simonton, T. D. (1857). "The Bible and Astronomy; An Exposition of the Biblical Cosmology, and Its Relations to Natural Science". Philadelphia: Lindsay & Blakiston. 31. ^ Andrews, D.J., A.H. Kassam. 1976. The importance of multiple cropping in increasing world food supplies. pp. 1–10 in R.I. Papendick, A. Sanchez, G.B. Triplett (Eds.), Multiple Cropping. ASA Special Publication 27. American Society of Agronomy, Madison, Wisconsin. 32. ^ The Journal of Applied Ecology, Vol. 19, No. 3 (Dec., 1982), pp. 901–916 (JSTOR Subscription required) 33. ^ "Science in Medieval Jewish Scholarship". Archived from the original on April 3, 2015. Retrieved September 18, 2017. 34. ^ a b "Zacuto, Abraham" in Glick, T., S.J. Livesy and F. Williams, editors, (2005) Medieval science, technology, and medicine: an encyclopedia, New York Routledge. 35. ^ Livio, Mario (2006). The Equation that Couldn't Be Solved. Simon & Schuster. ISBN 978-0743258210. 36. ^ Boaz Tsaban and David Garber. "The proof of Rabbi Abraham Bar Hiya Hanasi". Archived from the original on August 12, 2011. Retrieved March 28, 2011. 37. ^ "Garcia de Orta (1501/02-68)". Archived from the original on September 19, 2017. Retrieved September 18, 2017. 38. ^ a b Will, Clifford M (August 1, 2010). "Relativity". Grolier Multimedia Encyclopedia. Archived from the original on January 24, 2013. Retrieved August 1, 2010. 39. 
^ a b Will, Clifford M (August 1, 2010). "Space-Time Continuum". Grolier Multimedia Encyclopedia. Archived from the original on January 25, 2013. Retrieved August 1, 2010. 40. ^ a b Will, Clifford M (August 1, 2010). "Fitzgerald–Lorentz contraction". Grolier Multimedia Encyclopedia. Archived from the original on January 25, 2013. Retrieved August 1, 2010. 41. ^ "Jews and the Atom Bomb". May 20, 2015. Archived from the original on May 20, 2015. Retrieved September 18, 2017.{{cite web}}: CS1 maint: bot: original URL status unknown (link) 42. ^ Ford & Urban 1965, p. 109 43. ^ Mannoni, Octave, Freud: The Theory of the Unconscious, London: NLB 1971, p. 49-51 44. ^ Mannoni, Octave, Freud: The Theory of the Unconscious, London: NLB 1971, pp. 146–47 45. ^ Maiman, T. H. (1960). "Stimulated optical radiation in ruby". Nature. 187 (4736): 493–494. Bibcode:1960Natur.187..493M. doi:10.1038/187493a0. S2CID 4224209. 48. ^ Glimm, p. vii 49. ^ Einstein, Albert (May 1, 1935), "Professor Einstein Writes in Appreciation of a Fellow-Mathematician", New York Times (May 5, 1935), retrieved April 13, 2008. Online at the MacTutor History of Mathematics archive. 50. ^ Alexandrov 1981, p. 100. 51. ^ Ne'eman, Yuval (1999), "The Impact of Emmy Noether's Theorems on XXIst Century Physics", in Teicher, M. (ed.), The Heritage of Emmy Noether, Israel Mathematical Conference Proceedings, Bar-Ilan University, American Mathematical Society, Oxford University Press, pp. 83–101, ISBN 978-0-19-851045-1, OCLC 223099225 52. ^ "Literature, Jewish". Retrieved July 13, 2015. 53. ^ "biblical literature". Retrieved September 18, 2017. 54. ^ "Leading Dutch writer Mulisch dies". Gulf Daily News. November 1, 2010. Retrieved November 1, 2010. 55. ^ Kamin, Debra (May 20, 2014). "The Jewish legacy behind Game of Thrones". The Times of Israel. Retrieved May 31, 2015. 56. ^ Martin, George R. R. (April 3, 2013). "My hero: Maurice Druon by George RR Martin". The Guardian. Retrieved June 24, 2015. 57. 
^ Milne, Ben (April 4, 2014). "Game of Thrones: The cult French novel that inspired George RR Martin". BBC. Retrieved April 6, 2014. 58. ^ JINFO. "Jewish Nobel Prize Winners in Literature". 59. ^ Melamed, S.M., "The Yiddish Stage", New York Times, September 27, 1925 (X2). 60. ^ Berlin Metropolis: Jews and the New Culture, 1890–1918 Archived November 5, 2005, at the Wayback Machine, on the site of The Jewish Museum, New York. Accessed February 12, 2006. 61. ^ Johnson, Paul (1987). A History of the Jews, pg. 479. New York: Harper Perennial. – Erwin Piscator was a Lutheran Protestant (Nazi propagandists had claimed since 1927 that he was a "Jewish Bolshevik", though). 62. ^ Suzanne Weiss, Jewish cabaret singer brings songs of Berlin to Berkeley, The Jewish News Weekly of Northern California, September 27, 1996. Accessed February 12, 2006. 63. ^ "Broadway Celebrates 100 Years of National Yiddish Theatre Tonight – Playbill". Playbill. August 5, 2015. Retrieved September 18, 2017. 64. ^ Stephen J. Whitfield, Musical Theater (PDF). Brandeis Review, Winter/Spring 2000. Accessed February 11, 2006. 65. ^ Samantha M. Shapiro, The Arts: A Jewish Street Called Broadway Archived December 6, 2008, at the Wayback Machine. Hadassah Magazine, October 2004 Vol. 86 No.2. Accessed February 11, 2006. 66. ^ Charyn, Jerome. "Early Broadway's un-Jewish Jews." Midstream 50.1 (January 2004): 19(7). Expanded Academic ASAP. Thomson Gale. UC Irvine (CDL). March 9, 2006 67. ^ The Klezmer Company Breaks New Ground with Orchestral Klezmer Production "Jewish Broadway with Orchestra and Chorus" at FAU Archived 2006-09-09 at the Wayback Machine. Florida Atlantic University press release, February 8, 2005. Accessed 11 February 2006. 68. ^ Raphael Mostel, Carmen Comes Home, The Forward, May 7, 2004. Accessed February 12, 2006. 69. ^ Dr. Kenneth Libo Ph. 
D and Michael Skakun, The Persecution of Creativity: Jews, Music and Vienna Archived September 26, 2005, at the Wayback Machine, Center for Jewish History, April 16, 2004. Accessed February 12, 2006 70. ^ Michael Billig, Creating the American Musical Archived September 28, 2005, at the Wayback Machine. Originally from Rock 'N' Roll Jews (Five Leaves Publications), extracted on Accessed February 12, 2006. 71. ^ Jacob Baron, Jewish Composers Archived December 28, 2004, at the Wayback Machine, Machar, The Washington Congregation for Secular Humanistic Judaism, June 2, 2005. Accessed February 15, 2006. 72. ^ Alan Gomberg, op. cit. 73. ^ Arthur Laurents, Theater: West Side Story; The Growth of an Idea, New York Herald Tribune, August 4, 1957. Reproduced on Accessed February 12, 2006. 74. ^ JINFO. "Jewish Recipients of the Pulitzer Prize for Drama". 75. ^ Shimon Levy, The Development of Israeli Theatre– a brief overview Archived August 16, 2005, at the Wayback Machine. Credited to Ministry of Foreign Affairs, Jerusalem, 2000. Accessed February 12, 2006. 76. ^, Jewish Encyclopedia. Could not access February 12, 2006. 77. ^ Shimon Levy, op. cit. Archived August 16, 2005, at the Wayback Machine 78. ^ Orna Ben-Meir, Biblical Thematics in Stage Design for the Hebrew Theatre Archived April 15, 2005, at the Wayback Machine, Assaph, Section C, no. 11 (July 1999), p. 141 et. seq.. Accessed February 12, 2006. 79. ^ History of Israeli Theatre, on a Geocities site, credits and 80. ^ a b c Musakhanova G. B. Judeo-Tat literature. Makhachkala: Dagestan Book Publishing House, 1993. 81. ^ P. Agarunov. Theatrical art of Mountain Jews. // Magazine "Minyan", №5. 82. ^ a b c d Book (ru:«Самородки Дагестана») – "Gifted of Dagestan". Author: I. Mikhailova. Makhachkala, Russia. 2014. 83. ^ Hobsbawm, E. J. (2003), Interesting Times : A Twentieth-Century Life, Knopf Publishing Group, pp. 10–11, ISBN 9780375422348 84. ^ Johnson, op. cit.' p. 462-463. 85. ^ Johnson, op. cit. p. 462-463. 86. 
^ "JEWISH MUSIC INSTITUTE – Western Classical Music". Retrieved September 18, 2017. 87. ^ Landa, M. J. (1926). The Jew in Drama, p. 17. New York: Ktav Publishing House (1969). Each Jewish diasporic community developed its own dance traditions for wedding celebrations and other distinguished events. 88. ^ Yiddish, Klezmer, Ashkenazic or 'shtetl' dances, Le Site Genevois de la Musique Klezmer. Accessed February 12, 2006. 90. ^ Tanny, Jarrod (2015). "The Anti-Gospel of Lenny, Larry and Sarah: Jewish Humor and the Desecration of Christendom". American Jewish History. 99 (2): 167–193. doi:10.1353/ajh.2015.0023. S2CID 162195868. 91. ^ Leo Rosten, The Joys of Yinglish 92. ^ Ismar Schorsch, Shabbat Shekalim Va-Yakhel 5755, commentary on Exodus 35:1 – 38:20. February 25, 1995. Accessed February 12, 2006. 93. ^ Velvel Pasternak, Music and Art, part of "12 Paths" on Accessed February 12, 2006. 94. ^ "Not a Pretty Picture". Haaretz. June 27, 2002. Retrieved September 18, 2017. 95. ^ Jessica Spitalnic Brockman, A Brief History of Jewish Art Archived January 14, 2006, at the Wayback Machine on Accessed February 12, 2006. 96. ^ Michael Schirber, Did Christians copy Jewish catacombs?, NBC News, July 20, 2005. Accessed February 12, 2006. 97. ^ Jona Lendering, The Jewish diaspora: Rome. Accessed February 12, 2006. 98. ^ Roza Bieliauskiene and Felix Tarm, Brief History of Jewish Art, Jewish Art Network. Accessed January 14, 2010. 99. ^ Johnson, op.cit., p. 411. 100. ^ "GW Libraries at the George Washington University – GW Libraries". Archived from the original on June 19, 2010. Retrieved September 18, 2017. 101. ^ Rebecca Assoun, Jewish artists in Montparnasse Archived September 29, 2007, at the Wayback Machine. European Jewish Press, July 19, 2005. Accessed February 12, 2006. 102. ^ Jewish Artists, Jewish Virtual Library, 2005. Accessed February 12, 2006. 103. 
^, John Levy, "Review of The Tibetans", photo 8,, Lehman, Steve, The Tibetans: A Struggle to Survive (New York: How Town / Umbrage), 1998. 104. ^ See: Ohad Meromi in the online exhibition "Real Time" < Archived April 14, 2012, at the Wayback Machine>. 105. ^ Boulos, Nick (October 5, 2013). "Show and Tel Aviv: Israel's artistic coastal city". The Independent. Archived from the original on June 8, 2022. Retrieved April 1, 2014. 106. ^ a b c Booker, M. Keith (2012). Encyclopedia of Comic Books and Graphic Novels. pp. 504–505. ISBN 9780313357473. Retrieved March 30, 2015. 107. ^ Markstein, Donald D. "Don Markstein's Toonopedia: Famous Funnies". Retrieved September 18, 2017. 108. ^ "UAHC – Reform Judaism Magazine". Archived from the original on March 13, 2016. Retrieved September 18, 2017. 109. ^ Brod, Harry (2012). SUPERMAN IS JEWISH?: How Comic Book Superheroes Came to Serve Truth, Justice, and the Jewish-American Way. p. 66. ISBN 9781416598459. Retrieved March 30, 2015. 110. ^ Groth, Gary (May 23, 2012). "Jack Kirby Interview". The Comics Journal. Retrieved March 30, 2015. 111. ^ Hoffman, Jordan (April 29, 2012). "A marvel in comics". The Times of Israel. Retrieved March 30, 2015. 112. ^ Petrou, David Michael (1978). The Making of Superman the Movie, New York: Warner Books 113. ^ Jacobson, Howard (March 5, 2005). "Up, up and oy vey". The Times (UK). p. 5. 114. ^ The Mythology of Superman (DVD). Warner Bros. 2006. 115. ^ Baylen, Ashley (May 5, 2012). "Top 10 Jewish Marvel & DC Comics' Superheroes". Shalom Life. Archived from the original on April 10, 2015. Retrieved March 30, 2015. 116. ^ Markstein, Donald D. "Don Markstein's Toonopedia: EC Comics". Retrieved September 18, 2017. 117. ^ Inc., Will Eisner Studios. "A short biography -". Retrieved September 18, 2017. 118. ^ Winn, Marie (January 25, 1981). "What Became of Childhood Innocence?". The New York Times. Retrieved February 2, 2011. 119. ^ "The Way of the Samurai". The Jewish Journal. August 3, 2001. 
Retrieved March 24, 2007. 120. ^ "filmography". Retrieved September 18, 2017. 121. ^ Screenshot (October 11, 2017). "The Secret Jewish History Of Steven Universe". The Forward. Retrieved May 20, 2021. 122. ^ Roden, Claudia (1996). "The Book of Jewish Food: An Odyssey from Samarkand to New York". Excerpt, retrieved April 7, 2015, from My Jewish Learning 123. ^ Vered, Ronit (May 13, 2017). "Why are Israeli Jews obsessed with hummus?". Haaretz. 124. ^ Eileen M. Lavine (September–October 2011). "Stuffed Cabbage: A Comfort Food for All Ages". Moment Magazine. Archived from the original on October 11, 2011. Retrieved October 3, 2011. 125. ^ Zeldes, Leah A. (September 1, 2010). "Eat this! Tzimmes, A sweet start to the Jewish New Year". Dining Chicago. Chicago's Restaurant & Entertainment Guide, Inc. Archived from the original on December 30, 2010. Retrieved September 1, 2010. 126. ^ Marks, Gil. Encyclopedia of Jewish Food. Houghton Mifflin. 127. ^ Roman, Alison (April 2, 2014). "How to Master Matzo Ball Soup". Bon Appetit. 128. ^ "The Genealogy of Morals", Part I, Section 16, tr. Walter Kaufmann 129. ^ The Encyclopedia of Christianity, Volume 4 by Erwin Fahlbusch, Geoffrey William Bromiley 130. ^ a b Alper, Tim. "Why South Koreans are in love with Judaism". The Jewish Chronicle. May 12, 2011. Retrieved February 8, 2014. 131. ^ Nagler-Cohen, Liron. "Chinese: 'Jews make money'". Ynetnews. April 23, 2012. Retrieved February 8, 2014.

Further reading

• Landa, M.J. (1926). The Jew in Drama. New York: Ktav Publishing House (1969).
• Stevens, Matthew. Jewish Film Directory: a guide to more than 1200 films of Jewish interest from 32 countries over 85 years. Trowbridge: Flicks Books, 1992. ISBN 0-9489117-2-7. 298p.
• Gabler, Neal. An Empire of Their Own: How the Jews Invented Hollywood. The Crown Publishing Group, 1988. ISBN 0-385-26557-3.
• Veidlinger, Jeffrey. Jewish Public Culture in the Late Russian Empire. Bloomington: Indiana University Press, 2009.
External linksEdit • The center for Jewish Art Collection • The City Congregation for Humanistic Judaism • Congress of Secular Jewish Organizations • Global Directory of Jewish Museums • News and reviews about Jewish literature and books • Festival of Jewish Theater and Ideas • The Bezalel Narkiss Index of Jewish Art • Heeb – an online magazine about Jewish culture • Gesher Galicia
e9ecee6404351e9e
There are (at least) three commonly used approaches to obtaining linear response properties (e.g. the electric polarizability, optical rotation, NMR shielding tensors):

• Sum over states: The property can formally be written as a sum of matrix elements of the perturbations $A$ and $B$ over all excited states. In practice, one computes enough excited states to converge the property; this tends to converge slowly with the number of states.
• Response functions: The property can also be written in terms of response relations, which leads us instead to compute the perturbed density with respect to $A$ (or $B$) and contract it with $B$ (or $A$) to obtain the property.
• Derivatives: The property is also a derivative of the energy with respect to these perturbations. One can derive analytic formulas or compute numerical derivatives. Analytic formulas are complex and don't (directly) apply to frequency-dependent properties; numerical derivatives require repeated calculations and tuning of the step size.

For optical rotation, and I believe most other linear properties, the second approach above has won out as the best way to do the computation in general. But I'm curious whether this holds for computing nonlinear properties (e.g. the $n^{\text{th}}$ hyperpolarizability or the Kerr effect) as well, or whether the cost/benefit analysis of these methods changes.

1 Answer

Options 2 and 3 appear to be the same: the response is almost always the response of the energy, since the wave function is determined by the energy principle. To clarify: in many methods (e.g. Hartree-Fock or CC) one computes the derivative of the energy functional with respect to the property (e.g. polarizability or NMR shielding constants); this turns out to lead to (generalized) response densities that you need to solve for, and at the end you get your property by contracting the densities.

There may also be several ways to choose your perturbation; for NMR shielding constants, for example, your variables are the external magnetic field and the nuclear shieldings. I forget the details, but the idea is this: you can first perturb with the shieldings and then contract with the magnetic-field response, but this obviously becomes horribly slow for many nuclei. Instead, you can first perturb with the magnetic field, solving the response equations for the three components of the field, after which you get the shieldings just by contracting the fixed response to the external field with the relevant matrices for the individual nuclei; this approach scales to large numbers of atoms.

• I guess to me, it seems computationally different to solve the CPHF/KS equations for an electric-field perturbation to determine the polarizability (my option 2) than it would be to compute the numerical second derivative of the energy with respect to the electric field (option 3). If anything, options 1 and 2 are more similar, as they mainly take different approaches to using the response function (1 tries to truncate the exact expressions, while 2 uses an iterative approximation). link.springer.com/content/pdf/… – Tyberius, Jul 27, 2020 at 17:37
• But option 3 also included the analytical route! – Jul 28, 2020 at 9:32
• 1 and 2 are not the same, since the sum-over-states approach is intractable in almost every approach... – Jul 28, 2020 at 9:33
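The numerical-derivative route (option 3) is easy to sketch for the static polarizability. Below is a minimal finite-field example in Python; the quadratic `energy(F)` model and the values of `E0`, `mu`, and `alpha` are made up for illustration and stand in for a real finite-field SCF energy, so this only demonstrates the differencing scheme, not a production workflow.

```python
import numpy as np

# Toy stand-in for a converged SCF energy at finite field F (all a.u.):
# E(F) = E0 - mu.F - (1/2) F.alpha.F   (hyperpolarizability terms omitted)
E0 = -76.0                          # field-free energy (arbitrary)
mu = np.array([0.0, 0.0, 0.8])      # assumed permanent dipole
alpha = np.diag([9.9, 9.9, 10.1])   # polarizability tensor to recover

def energy(F):
    """Energy of the toy system in a uniform static field F."""
    return E0 - mu @ F - 0.5 * F @ alpha @ F

def polarizability_fd(energy, h=1e-3):
    """alpha_ij = -d2E/dFi dFj via central finite differences."""
    a = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Fpp = np.zeros(3); Fpp[i] += h; Fpp[j] += h
            Fpm = np.zeros(3); Fpm[i] += h; Fpm[j] -= h
            Fmp = np.zeros(3); Fmp[i] -= h; Fmp[j] += h
            Fmm = np.zeros(3); Fmm[i] -= h; Fmm[j] -= h
            a[i, j] = -(energy(Fpp) - energy(Fpm)
                        - energy(Fmp) + energy(Fmm)) / (4 * h**2)
    return a

print(polarizability_fd(energy))  # ≈ diag(9.9, 9.9, 10.1)
```

Since the toy energy is exactly quadratic in the field, the central differences recover `alpha` to rounding error; for a real electronic-structure code, the step `h` would have to be tuned against the competing truncation and cancellation errors the question mentions.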