e3353d8cbff5982d
Walter H. Schottky

From Wikipedia, the free encyclopedia

Born: 23 July 1886, Zürich, Switzerland. Died: 4 March 1976, Pretzfeld, West Germany. Residence: Germany. Nationality: German. Fields: Physics. Institutions: University of Jena, University of Würzburg, University of Rostock, Siemens Research Laboratories. Alma mater: University of Berlin. Doctoral advisors: Max Planck, Heinrich Rubens. Notable students: Werner Hartmann. Known for: Schottky effect, Schottky barrier, Schottky contact, Schottky anomaly, screen-grid vacuum tube, ribbon microphone, ribbon loudspeaker, theory of field emission, shot noise. Notable awards: Hughes Medal (1936), Werner von Siemens Ring (1964).

Walter Hermann Schottky (23 July 1886 – 4 March 1976) was a German physicist who played a major early role in developing the theory of electron and ion emission phenomena,[1] invented the screen-grid vacuum tube in 1915 and the pentode[citation needed] in 1919 while working at Siemens, co-invented the ribbon microphone and ribbon loudspeaker together with Dr. Erwin Gerlach in 1924,[2] and later made many significant contributions in the areas of semiconductor devices, technical physics and technology.

Early life

Schottky's father was the mathematician Friedrich Hermann Schottky (1851–1935); his parents had one daughter and two sons. His father was appointed professor of mathematics at the University of Zurich in 1882, and Schottky was born four years later. The family moved back to Germany in 1892, where his father took up an appointment at the University of Marburg.[citation needed] Schottky graduated from the Steglitz Gymnasium in Berlin in 1904. He completed his B.S. degree in physics at the University of Berlin in 1908 and his Ph.D. in physics at the Humboldt University of Berlin in 1912, studying under Max Planck and Heinrich Rubens, with a thesis entitled Zur relativtheoretischen Energetik und Dynamik. Schottky's postdoctoral period was spent at the University of Jena (1912–14). He then lectured at the University of Würzburg (1919–23) and became a professor of theoretical physics at the University of Rostock (1923–27). For two considerable periods, Schottky worked at the Siemens Research Laboratories (1914–19 and 1927–58). In 1924, Schottky co-invented the ribbon microphone together with Erwin Gerlach. The idea was that a very fine ribbon suspended in a magnetic field could generate electric signals. Used in reverse, the same principle yields the ribbon loudspeaker, although it was not practical until high-flux permanent magnets became available in the late 1930s.[2]

Major scientific achievements

Possibly, in retrospect, Schottky's most important scientific achievement was to develop (in 1914) the well-known classical formula

E_{\mathrm{int}}(x) = -\frac{q^2}{16\pi\epsilon_0 x},

which gives the interaction energy between a point charge q and a flat metal surface when the charge is at a distance x from the surface. Owing to the method of its derivation, this interaction is called the "image potential energy" (image PE). Schottky based his work on earlier work by Lord Kelvin relating to the image PE for a sphere. Schottky's image PE has become a standard component in simple models of the barrier to motion, M(x), experienced by an electron on approaching a metal surface or a metal–semiconductor interface from the inside.
(This M(x) is the quantity that appears when the one-dimensional, one-particle Schrödinger equation is written in the form

-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2\Psi}{\mathrm{d}x^2} + M(x)\,\Psi = E\,\Psi .

Here, \hbar is Planck's constant divided by 2π, and m is the electron mass.) The image PE is usually combined with terms relating to an applied electric field F and to the height h (in the absence of any field) of the barrier. This leads to the following expression for the dependence of the barrier energy on the distance x, measured from the "electrical surface" of the metal, into the vacuum or into the semiconductor:

M(x) = h - eFx - \frac{e^2}{16\pi\epsilon_0\epsilon_r x} .

Here, e is the elementary positive charge, ε0 is the electric constant and εr is the relative permittivity of the second medium (=1 for vacuum). In the case of a metal–semiconductor junction, this is called a Schottky barrier; in the case of the metal–vacuum interface, it is sometimes called a Schottky–Nordheim barrier. In many contexts, h has to be taken equal to the local work function φ. This Schottky–Nordheim barrier (SN barrier) has played an important role in the theories of thermionic emission and of field electron emission. Applying the field lowers the barrier and thus enhances the emission current in thermionic emission. This is called the "Schottky effect", and the resulting emission regime is called "Schottky emission". In 1923 Schottky suggested (incorrectly) that the experimental phenomenon then called autoelectronic emission, and now called field electron emission, resulted when the barrier was pulled down to zero. In fact, the effect is due to wave-mechanical tunneling, as shown by Fowler and Nordheim in 1928. But the SN barrier has since become the standard model for the tunneling barrier. Later, in the context of semiconductor devices, it was suggested that a similar barrier should exist at the junction of a metal and a semiconductor. Such barriers are now widely known as Schottky barriers, and considerations apply to the transfer of electrons across them that are analogous to the older considerations of how electrons are emitted from a metal into vacuum. (Basically, several emission regimes exist for different combinations of field and temperature, and the different regimes are governed by different approximate formulae.) When the whole behaviour of such interfaces is examined, it is found that they can act (asymmetrically) as a special form of electronic diode, now called a Schottky diode. In this context, the metal–semiconductor junction is known as a "Schottky (rectifying) contact". Schottky's contributions, in surface science/emission electronics and in semiconductor-device theory, now form a significant and pervasive part of the background to these subjects. It could be argued that, perhaps because they lie in the area of technical physics, they are not as widely recognized as they ought to be. He was awarded the Royal Society's Hughes Medal in 1936 for his discovery of the Schrot effect (spontaneous current variations in high-vacuum discharge tubes; literally, the "small-shot effect") in thermionic emission, and for his invention of the screen-grid tetrode and a superheterodyne method of receiving wireless signals. In 1964 he received the Werner von Siemens Ring honoring his ground-breaking work on the physical understanding of many phenomena that led to many important technical appliances, among them tube amplifiers and semiconductors. The invention of the superheterodyne is usually attributed to Edwin Armstrong.
However, Schottky published an article in the Proceedings of the IRE which may indicate that he had invented and patented something similar in Germany in 1918.[3] The Walter Schottky Institute in Germany and the Walter H. Schottky Prize are named after him.

Books written by Schottky

• Thermodynamik, Julius Springer, Berlin, Germany, 1929.
• Physik der Glühelektroden, Akademische Verlagsgesellschaft, Leipzig, 1928.

References

1. Welker, Heinrich (June 1976). "Walter Schottky". Physics Today 29 (6): 63–64. Bibcode:1976PhT....29f..63W. doi:10.1063/1.3023533.
2. "Historically Speaking". Hifi World. April 2008. Retrieved April 2012.
3. Schottky, Walter (October 1926). "On the Origin of the Super-Heterodyne Method". Proceedings of the IRE 14 (5): 695–698. doi:10.1109/JRPROC.1926.221074.
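As a small numerical companion to the formulas above (my own sketch, not part of the article), the image potential energy and the field-induced barrier lowering of the Schottky effect can be evaluated directly for a metal–vacuum interface (εr = 1):

```python
# Illustrative sketch only (not from the article): evaluate the image potential
# energy E_int(x) = -e^2/(16*pi*eps0*x) and the barrier lowering
# delta = sqrt(e*F/(4*pi*eps0)) (in volts) implied by the maximum of
# M(x) = h - e*F*x - e^2/(16*pi*eps0*x).
import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def image_pe_eV(x_m):
    """Image potential energy in eV at distance x_m (metres) from the surface."""
    return -e / (16.0 * math.pi * eps0 * x_m)   # dividing the energy in J by e gives eV

def schottky_lowering_eV(F_V_per_m):
    """Barrier lowering in eV for an applied field F (V/m), vacuum case."""
    return math.sqrt(e * F_V_per_m / (4.0 * math.pi * eps0))

print(image_pe_eV(1e-9))           # roughly -0.36 eV at x = 1 nm
print(schottky_lowering_eV(1e8))   # roughly 0.38 eV lowering at F = 1e8 V/m
```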
3df3fd097d84840b
In one dimension, what is the solution of the Schrödinger equation with the potential $$ V(x) = V_r + i V_i, $$ where both parts are constant?

1 Answer

The Hamiltonian $$H=T_{\text{kin}}+V_r+iV_i$$ will not be Hermitian, since $$(iV_i)^*=-iV_i.$$ Technically, you can make an ansatz $$\Psi(x,t)=A\int\text{d}k\ \hat\Psi(k)\ \text e^{i(kx-\omega(k)t)},$$ plug it into the differential equation and find $$\hbar\omega(k)=\frac{\hbar^2 k^2}{2m}+V_r+iV_i,$$ or $$\hbar k=\pm\sqrt{2m(E-iV_i)},$$ where $E$ is some real number (or numbers). You also want a boundary condition. (As a vague remark, models with complex energies, which necessarily turn a phase factor like $\text e^{-i\omega t}$ into a decaying expression like $\text e^{-\omega' t}$, are associated with decay. But again, a particle that vanishes in time like that is probably not what you want to talk about.)

Comment to the answer (v2): It should be stressed in the answer that the wavenumber $k$ is a manifestly real variable, not complex. Therefore the corresponding energies $E_k\equiv \hbar\omega_k$ are not real but complex. – Qmechanic Oct 15 '12 at 13:59

As a comment: I have heard that a Schrödinger equation with the potential $V(x) = ix^{3}$ may have ALL of its energies real :) – Jose Javier Garcia Oct 15 '12 at 18:30
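A quick way to see the decay mentioned in the answer (my own sketch, not part of the thread): a constant complex potential leaves the spatial dependence untouched and only multiplies the wave function by $e^{-i(V_r+iV_i)t/\hbar}$, so the norm behaves as $e^{2V_i t/\hbar}$ and decays when $V_i<0$.

```python
# Minimal check (illustration only): a constant complex potential V = V_r + i*V_i
# multiplies psi by exp(-i*V*t/hbar), so |psi|^2 scales as exp(2*V_i*t/hbar).
import numpy as np

hbar = 1.0
V_r, V_i = 2.0, -0.5                      # arbitrary units; V_i < 0 gives decay
t = np.linspace(0.0, 5.0, 6)

factor = np.exp(-1j * (V_r + 1j * V_i) * t / hbar)
print(np.abs(factor) ** 2)                # numerically equals ...
print(np.exp(2 * V_i * t / hbar))         # ... this analytic prediction
```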
d55ddbb9acc25caf
In information science, noise is generally the enemy of information. But some noise is the friend of freedom, since it is the source of novelty, of creativity and invention, and of variation in the biological gene pool. Too much noise is simply entropic and destructive. With the right level of noise, the cosmic creation process is not overcome by the chaos. When information is stored in any structure, from galaxies to minds, two fundamental physical processes occur. First is a collapse of a quantum mechanical wave function. Second is a local decrease in the entropy corresponding to the increase in information. Entropy greater than that must be transferred away to satisfy the second law of thermodynamics. If wave functions did not collapse, their evolution over time would be completely deterministic and information-preserving. Nothing new would emerge that was not implicitly present in the earlier states of the universe.
It is ironic that noise, in the form of quantum mechanical wave function collapses, should be the ultimate source of new information (low or negative entropy), the very opposite of noise (positive entropy). Because quantum level processes introduce noise, stored information may contain errors. When information is retrieved, it is again susceptible to noise, which may garble the information content. Despite the continuous presence of noise around them and inside them, biological systems have maintained and increased their invariant information content over billions of generations. Humans increase their knowledge of the external world, despite logical, mathematical, and physical uncertainty. Biological and intellectual information handling balance random and orderly processes by means of sophisticated error detection and correction schemes. The scheme we use to correct human knowledge is science, a combination of freely invented theories and adequately determined experiments.

In Biology

Molecular biologists have assured neuroscientists for years that the molecular structures involved in neurons are too large to be affected significantly by quantum noise. But neurobiologists know very well that there is noise in the nervous system in the form of spontaneous firings of an action potential spike, thought to be the result of random chemical changes at the synapses. This may or may not be quantum noise amplified to the macroscopic level. But there is no problem imagining a role for randomness in the brain in the form of quantum level noise that affects the communication of knowledge. Noise can introduce random errors into stored memories. Noise can create random associations of ideas during memory recall. Molecular biologists know that while most biological structures are remarkably stable, and thus adequately determined, quantum effects drive the mutations that provide variation in the gene pool. So our question is how the typical structures of the brain have evolved to deal with microscopic, atomic-level noise - both thermal and quantal. Can they ignore it because they are adequately determined large objects, or might they have remained sensitive to the noise for some reason? We can expect that if quantum noise, or even ordinary thermal noise, offered beneficial advantages, there would have been evolutionary pressure to take advantage of it. That our sensory organs have evolved to work at or near quantum limits is evidenced by the eye's ability to detect a single photon (a quantum of light energy), and the nose's ability to smell a single molecule. Biology provides many examples of ergodic creative processes following a trial-and-error model. They harness chance as a possibility generator, followed by an adequately determined selection mechanism with implicit information-value criteria. Darwinian evolution is the first and greatest example of a two-stage creative process, random variation followed by critical selection, but we will consider briefly two other such processes. Both are analogous to our two-stage Cogito model for the mind. One is at the heart of the immune system, the other provides quality control in protein/enzyme factories.

Noise in the Cogito model

The insoluble problem for previous two-stage models has been to explain how a random event in the brain can be timed and located - perfectly synchronized! - so as to be relevant to a specific decision. The answer is that it cannot be, for the simple reason that quantum events are totally unpredictable.
The Cogito solution is not single random events, one per decision, but many random events in the brain as a result of ever-present noise, both quantum and thermal, that is inherent in any information storage and communication system. The mind, like all biological systems, has evolved in the presence of constant noise and is able to ignore that noise, unless the noise provides a significant competitive advantage, which it clearly does as the basis for freedom and creativity. The only reasonable model for an indeterministic contribution is ever-present noise throughout the neural circuitry. We call it the Micro Mind. Quantum (and even some thermal) noise in the neurons is all we need to supply random, unpredictable alternative possibilities. And indeterminism is NOT involved in the de-liberating Will. The major difference between Micro and Macro is how they process noise in the brain circuits. The first accepts it, the second suppresses it. Our "adequately determined" Macro Mind can overcome the noise whenever it needs to make a determination on thought or action.

White Noise and Pink Noise

Noise (specifically audio noise) is described as having a color when the amount of power (energy) in different frequencies is not uniform. By analogy with the amount of energy in different light frequencies (or wavelengths), when the energy is larger than average at longer wavelengths (the red part of the visual spectrum), the noise is called "pink," although there is nothing visual about it. Computer-generated noise may consist of random binary number sequences (1's and 0's). As long as the sequence is random, with no statistical correlations or detectable patterns, it is described as white noise. The Wiener process is a mathematical construct based on white noise with a Gaussian probability distribution. Many naturally occurring processes exhibit white noise, including the Brownian motion of tiny particles suspended in a liquid. The atmosphere itself is considered a source of random white noise; radio antennae tuned between radio stations can be used to generate random digit patterns from this "atmospheric" white noise. Whether this noise is genuinely random in the sense of irreducible quantum randomness is a question of the relationship between thermal noise and quantal noise. Ultimately, this relationship depends on whether a classical gas is entirely deterministic (cf. deterministic chaos), and whether binary collisions of gas particles can be treated deterministically or must be treated quantum mechanically. If they are deterministic, then collisions are in principle time reversible. In quantum mechanics, microscopic time reversibility is taken to mean that the deterministic linear Schrödinger equation is time reversible. A careful quantum analysis shows that ideal reversibility fails even in the simplest conditions - the case of two particles in collision. When they collide, even structureless particles should not be treated as individual particles with single-particle wave functions, but as a single system with a two-particle wave function, because they are now entangled. Treating two atoms as a temporary molecule means we must use molecular, rather than atomic, wave functions. The quantum description of the molecule now transforms the six independent degrees of freedom into three for the molecule's center of mass and three more that describe vibrational and rotational quantum states.
The possibility of quantum transitions between closely spaced vibrational and rotational energy levels in the "quasi-molecule" introduces uncertainty, which could be different for the hypothetical perfectly reversed path.

Stochastic Noise

In probability theory, stochastic processes are random (indeterministic) processes that are contrasted with deterministic processes.

Robert Kane on Noise

In his latest attempts to locate where and when indeterminism contributes to free will, Kane suggests that it is noise. But the noise does not contribute randomness to generating alternative possibilities, as in our Cogito two-stage model. Instead, noise just interferes with decisions and makes them more difficult!

"As it happens, on my libertarian account of free will, one does not need large-scale indeterminism in the brain, in the form, say, of macro-level wave function collapses (in the manner of the Penrose/Hameroff view mentioned by Vargas). Minute indeterminacies in the timings of firings of individual neurons would suffice, because the indeterminism in my view plays only an interfering role, in the form of background noise. Indeterminism does not have to "do the deed" on its own, so to speak. One does not need a downpour of indeterminism in the brain, or a thunderclap, to get free will. Just a sprinkle will do." (Four Views on Free Will, Fischer et al., p. 183)
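To make the white/pink distinction described above concrete, here is a minimal sketch (my own illustration, not part of the page) that generates both kinds of noise and compares their power at low and high frequencies: white noise comes out roughly flat, while pink noise has far more power at low frequencies.

```python
# Illustration only: white noise has a flat power spectrum; "pink" (1/f) noise
# can be produced by shaping a white spectrum with a 1/sqrt(f) filter.
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

white = rng.standard_normal(n)                     # uncorrelated samples

freqs = np.fft.rfftfreq(n)
spectrum = np.fft.rfft(rng.standard_normal(n))
spectrum[1:] /= np.sqrt(freqs[1:])                 # leave the DC bin alone
pink = np.fft.irfft(spectrum, n)

for name, x in (("white", white), ("pink", pink)):
    psd = np.abs(np.fft.rfft(x)) ** 2
    low, high = psd[1:101].mean(), psd[-100:].mean()
    print(name, "low/high power ratio:", round(low / high, 2))
```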
257a7be060382754
Inverse Problems & Imaging
February 2019, Volume 13, Issue 1

Hyperpriors for Matérn fields with applications in Bayesian inversion
Lassi Roininen, Mark Girolami, Sari Lasanen and Markku Markkanen
2019, 13(1): 1-29. doi: 10.3934/ipi.2019001
We introduce non-stationary Matérn field priors with stochastic partial differential equations, and construct correlation length-scaling with hyperpriors. We model both the hyperprior and the Matérn prior as continuous-parameter random fields. As hypermodels, we use Cauchy and Gaussian random fields, which we map suitably to a desired correlation length-scaling range. For computations, we discretise the models with finite difference methods. We consider the convergence of the discretised prior and posterior to the discretisation limit. We apply the developed methodology to certain interpolation, numerical differentiation and deconvolution problems, and show numerically that we can make Bayesian inversion which promotes competing constraints of smoothness and edge-preservation. For computing the conditional mean estimator of the posterior distribution, we use a combination of Gibbs and Metropolis-within-Gibbs sampling algorithms.

Inverse problems for the heat equation with memory
Sergei A. Avdonin, Sergei A. Ivanov and Jun-Min Wang
2019, 13(1): 31-38. doi: 10.3934/ipi.2019002
We study inverse boundary problems for one dimensional linear integro-differential equation of the Gurtin-Pipkin type with the Dirichlet-to-Neumann map as the inverse data. Under natural conditions on the kernel of the integral operator, we give the explicit formula for the solution of the problem with the observation on the semiaxis t>0. For the observation on finite time interval, we prove the uniqueness result, which is similar to the local Borg-Marchenko theorem for the Schrödinger equation.

Magnetic moment estimation and bounded extremal problems
Laurent Baratchart, Sylvain Chevillard, Douglas Hardin, Juliette Leblond, Eduardo Andrade Lima and Jean-Paul Marmorat
2019, 13(1): 39-67. doi: 10.3934/ipi.2019003
We consider the inverse problem in magnetostatics for recovering the moment of a planar magnetization from measurements of the normal component of the magnetic field at a distance from the support. Such issues arise in studies of magnetic material in general and in paleomagnetism in particular. Assuming the magnetization is a measure with L2-density, we construct linear forms to be applied on the data in order to estimate the moment. These forms are obtained as solutions to certain extremal problems in Sobolev classes of functions, and their computation reduces to solving an elliptic differential-integral equation, for which synthetic numerical experiments are presented.

A partial inverse problem for the Sturm-Liouville operator on the lasso-graph
Chuan-Fu Yang and Natalia Pavlovna Bondarenko
2019, 13(1): 69-79. doi: 10.3934/ipi.2019004
The Sturm-Liouville operator with singular potentials on the lasso graph is considered.
We suppose that the potential is known a priori on the boundary edge, and recover the potential on the loop from a part of the spectrum and some additional data. We prove the uniqueness theorem and provide a constructive algorithm for the solution of this partial inverse problem.

Recovering two coefficients in an elliptic equation via phaseless information
Vladimir G. Romanov and Masahiro Yamamoto
2019, 13(1): 81-91. doi: 10.3934/ipi.2019005
For fixed $y \in \mathbb{R}^3$, we consider the equation $L u+k^2u = - δ(x-y), \>x \in \mathbb{R}^3$, where $L=\text{div}(n(x)^{-2}\nabla)+q(x)$, $k >0$ is a frequency, $n(x)$ is a refraction index and $q(x)$ is a potential. Assuming that the refraction index $n(x)$ is different from $1$ only inside a bounded compact domain $Ω$ with a smooth boundary $S$ and the potential $q(x)$ vanishes outside of the same domain, we study an inverse problem of finding both coefficients inside $Ω$ from some given information on solutions of the elliptic equation. Namely, it is supposed that the point source located at point $y \in S$ is a variable parameter of the problem. Then for the solution $u(x,y,k)$ of the above equation satisfying the radiation condition, we assume to be given the following phaseless information $f(x,y,k)=|u(x,y,k)|^2$ for all $x,y \in S$ and for all $k≥ k_0>0$, where $k_0$ is some constant. We prove that this phaseless information uniquely determines both coefficients $n(x)$ and $q(x)$ inside $Ω$.

The regularized monotonicity method: Detecting irregular indefinite inclusions
Henrik Garde and Stratos Staboulis
2019, 13(1): 93-116. doi: 10.3934/ipi.2019006
In inclusion detection in electrical impedance tomography, the support of perturbations (inclusion) from a known background conductivity is typically reconstructed from idealized continuum data modelled by a Neumann-to-Dirichlet map. Only few reconstruction methods apply when detecting indefinite inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich [17,15]. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions on the conductivity perturbations nor on the inclusion boundaries. We show, provided that the perturbations are bounded away from zero, that the outer support of the positive and negative parts of the inclusions can be reconstructed independently. Moreover, we formulate a regularization scheme that applies to a class of approximative measurement models, including the Complete Electrode Model, hence making the method robust against modelling error and noise.
In particular, we demonstrate that for a convergent family of approximative models there exists a sequence of regularization parameters such that the outer shape of the inclusions is asymptotically exactly characterized. Finally, a peeling-type reconstruction algorithm is presented and, for the first time in the literature, numerical examples of monotonicity reconstructions for indefinite inclusions are presented.

Nonconvex TGV regularization model for multiplicative noise removal with spatially varying parameters
Hanwool Na, Myeongmin Kang, Miyoun Jung and Myungjoo Kang
2019, 13(1): 117-147. doi: 10.3934/ipi.2019007
In this article, we introduce a novel variational model for the restoration of images corrupted by multiplicative Gamma noise. The model incorporates a convex data-fidelity term with a nonconvex version of the total generalized variation (TGV). In addition, we adopt a spatially adaptive regularization parameter (SARP) approach. The nonconvex TGV regularization enables the efficient denoising of smooth regions, without the staircasing artifacts that appear in total variation regularization-based models, while conserving edges and details. Moreover, the SARP approach further helps preserve fine structures and textures. To deal with the nonconvex regularization, we utilize an iteratively reweighted $\ell_1$ algorithm, and the alternating direction method of multipliers is employed to solve a convex subproblem. This leads to a fast and efficient iterative algorithm for solving the proposed model. Numerical experiments show that the proposed model produces better denoising results than the state-of-the-art models.

Note on Calderón's inverse problem for measurable conductivities
Matteo Santacesaria
2019, 13(1): 149-157. doi: 10.3934/ipi.2019008
The unique determination of a measurable conductivity from the Dirichlet-to-Neumann map of the equation ${\rm{div}} (σ \nabla u) = 0$ is the subject of this note. A new strategy, based on Clifford algebras and a higher dimensional analogue of the Beltrami equation, is here proposed. This represents a possible first step for a proof of uniqueness for the Calderón problem in three and higher dimensions in the $L^\infty$ case.

Inverse scattering problem for quasi-linear perturbation of the biharmonic operator on the line
Teemu Tyni and Valery Serov
2019, 13(1): 159-175. doi: 10.3934/ipi.2019009
We consider an inverse scattering problem of recovering the unknown coefficients of a quasi-linearly perturbed biharmonic operator on the line. These unknown complex-valued coefficients are assumed to satisfy some regularity conditions on their nonlinearity, but they can be discontinuous or singular in their space variable. We prove that the inverse Born approximation can be used to recover some essential information about the unknown coefficients from the knowledge of the reflection coefficient. This information is the jump discontinuities and the local singularities of the coefficients.
A reference ball based iterative algorithm for imaging acoustic obstacle from phaseless far-field data
Heping Dong, Deyue Zhang and Yukun Guo
2019, 13(1): 177-195. doi: 10.3934/ipi.2019010
In this paper, we consider the inverse problem of determining the location and the shape of a sound-soft obstacle from the modulus of the far-field data for a single incident plane wave. By adding a reference ball artificially to the inverse scattering system, we propose a system of nonlinear integral equations based iterative scheme to reconstruct both the location and the shape of the obstacle. The reference ball technique causes few extra computational costs, but breaks the translation invariance and brings information about the location of the obstacle. Several validating numerical examples are provided to illustrate the effectiveness and robustness of the proposed inversion algorithm.

Simultaneously recovering potentials and embedded obstacles for anisotropic fractional Schrödinger operators
Xinlin Cao, Yi-Hsuan Lin and Hongyu Liu
2019, 13(1): 197-210. doi: 10.3934/ipi.2019011
Let $A∈{\rm{Sym}}(n× n)$ be an elliptic 2-tensor. Consider the anisotropic fractional Schrödinger operator $\mathscr{L}_A^s+q$, where $\mathscr{L}_A^s: = (-\nabla·(A(x)\nabla))^s$, $s∈ (0, 1)$ and $q∈ L^∞$. We are concerned with the simultaneous recovery of $q$ and possibly embedded soft or hard obstacles inside $q$ by the exterior Dirichlet-to-Neumann (DtN) map outside a bounded domain $Ω$ associated with $\mathscr{L}_A^s+q$. It is shown that a single measurement can uniquely determine the embedded obstacle, independent of the surrounding potential $q$. If multiple measurements are allowed, then the surrounding potential $q$ can also be uniquely recovered. These are surprising findings since in the local case, namely $s = 1$, both the obstacle recovery by a single measurement and the simultaneous recovery of the surrounding potential by multiple measurements are long-standing problems and still remain open in the literature. Our argument for the nonlocal inverse problem is mainly based on the strong uniqueness property and Runge approximation property for anisotropic fractional Schrödinger operators.

A connection between uniqueness of minimizers in Tikhonov-type regularization and Morozov-like discrepancy principles
Vinicius Albani and Adriano De Cezaro
2019, 13(1): 211-229. doi: 10.3934/ipi.2019012
We state sufficient conditions for the uniqueness of minimizers of Tikhonov-type functionals. We further explore a connection between such results and the well-posedness of Morozov-like discrepancy principle. Moreover, we find appropriate conditions to apply such results to the local volatility surface calibration problem.
9918edbe377ca7a2
Positron states in materials: density functional and quantum Monte Carlo studies

electron and positron states and annihilation characteristics in materials in order to support the experimental research. To conserve energy and momentum, electrons and positrons usually annihilate by a second order process in which two photons are emitted [3, 4]. The process is shown in Fig. 1. At the first vertex the electron emits a photon, at the second vertex it emits a second photon and jumps into a negative energy state (positron). This phenomenon is analogous to Compton scattering and the calculation proceeds very much as the Compton scattering calculation [5]. The annihilation cross-section for a pair of total momentum is given by is the classical electron radius and In the non-relativistic limit this gives where is the relative velocity of the colliding particles. The first derivation of the positron annihilation cross-section formula was done by Dirac [6]. The annihilation rate is obtained on multiplying by the flux density is the density of positron-electron pairs with total momentum p. In the non-relativistic limit, the product is a constant, therefore are proportional. Second-quantized many-body formalism can be used to study positron annihilation in an electron gas, and the electron-positron interaction can be discussed in terms of a Green function [7, 8, 9]. The density of positron-electron pairs with total momentum p can be written as are plane wave annihilation operators for the electron and the positron respectively and V is the volume of the sample. In terms of the corresponding point annihilation operators one has Substituting one obtains This formula can also be expressed as and the two-particle electron-positron Green's function defined by where is a four-vector and T is the time-ordering operator. In the nonrelativistic limit, the annihilation rate is given by The total annihilation rate is obtained by integrating over p Therefore the effective density is given by The inverse of the total annihilation rate yields the positron lifetime, which is an important quantity in positron annihilation spectroscopies. One can go from the second-quantization representation to the configuration space, using the many-body wave function The vector is the positron position, is an electron position and stands for the remaining electron coordinates One can show that is also given by After integrating over Eq. (15) can be expressed with two-particle wave functions. The summation is over all electron states and is the occupation number of the electron state labeled . is the two-particle wave function when the positron and electron reside in the same point. can be further written with the help of the positron and electron single particle wave functions respectively, and the so-called enhancement factor The enhancement factor is a manifestation of electron-positron interactions and it is always a crucial ingredient when calculating the positron lifetime. The independent particle model (IPM) assumes that there is no correlation between the positron and the electrons and that This approximation is justified only when the spectrum reflects quite well the momentum density of the system in the absence of the positron.
Many-body calculations for a positron in a homogeneous electron gas (HEG) have been used to model the electron-positron correlation. Kahana [8] used a Bethe-Goldstone type ladder-diagram summation and predicted that the annihilation rate increases when the electron momentum approaches the Fermi momentum, as shown in Fig. 2. This momentum dependence is explained by the fact that the electrons deep inside the Fermi liquid cannot respond as effectively to the interactions as those near the Fermi surface. According to the many-body calculation by Daniel and Vosko [10] for the HEG, the electron momentum distribution is lowered just below the Fermi level with respect to the free electron gas. This Daniel-Vosko effect would oppose the increase of the annihilation rate near the Fermi momentum. To describe the Kahana theory, it is convenient to define a momentum-dependent enhancement factor is the IPM partial annihilation rate. Stachowiak [11] has proposed a phenomenological formula for the increase of the enhancement factor given by is the electron gas parameter given by and is the electron density. This behavior of is quite sensitive to the construction of the many-body wave function. Experimentally, the peaking of should in principle be observable in alkali metals [12]. The Kahana theory in the plane-wave representation (corresponding to single particle wave functions in the HEG) can be generalized by using Bloch wave functions for a periodic ion lattice. This approach has been reviewed by Sormann [13]. An important conclusion is that the state dependence of the enhancement factor is strongly modified by the inhomogeneity and the lattice effects. Therefore in materials which are not nearly-free-electron like, the Kahana momentum dependence of is probably completely hidden. The plane wave expansions used in the Bethe-Goldstone equation can be slowly convergent to describe the cusp in the screening cloud. Choosing more appropriate functions depending on the electron-positron relative distance may provide more effective tools to deal with the problem. The Bethe-Goldstone equation is equivalent to the Schrödinger equation for the electron-positron pair wave function where V is a screened Coulomb potential. The Pluvinage approximation [14] consists in finding two functions and such that the Schrödinger equation becomes separable. describes the orbital motion of the two particles ignoring each other, and describes the correlated motion. The correlated motion depends strongly on the initial electron state (without the presence of the positron). Obviously, the core and the more localized electrons are less affected by the positron than the valence orbitals. On the basis of the Pluvinage approximation, one can develop a theory for the momentum density of annihilating electron-positron pairs. In practice, this leads to a scheme in which one first determines the momentum density for a given electron state within the IPM. When calculating the total momentum density this contribution is weighted by where is the partial annihilation rate of the electron state and is the same quantity in the IPM. This means that a state-dependent enhancement factor in Eq. (17). The partial annihilation rate is obtained as are the electron density for the state the total electron density and the state independent enhancement factor, respectively. If this theory is applied to the HEG it leads to the same constant enhancement factor for all electron states, i.e.
there is no Kahana-type momentum dependence in the theory. In a HEG, the enhancement factor can be obtained by solving a radial Schrödinger-like equation [15, 16, 17] for an electron-positron pair interacting via an effective potential W Multiplying by and integrating gives This result shows that the enhancement factor is proportional to the expectation value of the effective electric The potential W can be determined within the hypernetted chain approximation (HNC) [16, 17]. The bosonization method by Arponen and Pajanne [18] is considered to be superior to the HNC. The parametrization of their data, shown in Fig. 3, reads as [19] The only fitting parameter in this equation is the factor in front of the square term. The first two terms are fixed to reproduce the high-density RPA limit [20] and the last term the low-density positronium (Ps) atom limit. There is an upper bound for i.e. [15] is the enhancement factor in the case of a proton and is the reduced mass of the electron-positron system. Eq. (28) is called the scaled proton formula and it is truly an upper bound, because we cannot expect a greater screening of a delocalized positron than that of a strongly localized proton. The positron annihilation rate in the HEG is given by the simple relation and the lifetime is shown in Fig. 4 for several electron densities. One can notice that the lifetime saturates to that of the Ps atom in free space (about 500 ps). The DFT reduces the quantum-mechanical many-body problem to a set of manageable one-body problems [21]. It solves the electronic structure of a system in its ground state so that the electron density is the basic quantity. The DFT can be generalized to positron-electron systems by including the positron density as well; it is then called a 2-component DFT [22, 23]. The enhancement factor is treated as a function of the electron density in the local density approximation (LDA) [22]. However, quite generally, the LDA underestimates the positron lifetime. In fact one expects that the strong electric field due to the inhomogeneity suppresses the electron-positron correlations in the same way as the Stark effect decreases the electron-positron density at zero distance for the Ps atom [18]. In the generalized gradient approximation (GGA) [19, 24] the effects of the nonuniform electron density are described in terms of the ratio between the local length scale of the density variations and the local Thomas-Fermi screening length The lowest order gradient correction to the LDA correlation hole density is proportional to the parameter This parameter is taken to describe also the reduction of the screening cloud close to the positron. For the HEG whereas in the case of rapid density variations approaches infinity. At the former limit the LDA result for the induced screening charge is valid and the latter limit should lead to the IPM result with vanishing enhancement. In order to interpolate between these limits, we use for the enhancement factor the form has been set so that the calculated and experimental lifetimes agree as well as possible for a large number of different types of solids. The effective positron potential is given by the total Coulomb potential plus the electron-positron correlation potential [22, 23]. The electron-positron potential per electron due to a positron impurity can be obtained via the Hellmann-Feynman theorem [25] as is the screening cloud density and Z is the electron-positron coupling constant.
Let us suppose that the electron-positron correlation for an electron gas with a relevant density is mainly characterized by a single length Then for the electron-positron correlation energy, constant and the normalization factor of the screening cloud scales as for the dimension of space. Compared to the IPM result the electron-positron correlation increases the annihilation rate as which is proportional to the density of the screening cloud at the positron. Consequently, we have the following scaling law [26] The values of the correlation energy calculated by Arponen and Pajanne [18] obey the form of Eq. (33) quite well and the coefficient has a relatively small value of 0.11 Ry. Therefore, one can use in the practical GGA calculations the correlation energy which is obtained from the HEG result the scaling are the annihilation rates in the LDA model and in the GGA model, respectively. One can use for the correlation energy the interpolation form of Ref. [23] obtained from the Arponen and Pajanne calculation.

Positron Affinity

The positron affinity is an energy quantity defined by where and are the electron and positron chemical potentials, respectively [27]. In the case of a semiconductor, is taken from the position of the top of the valence band. The affinity can be measured by positron re-emission spectroscopy [28]. The comparison of measured and calculated values for different materials is a good test for the electron-positron correlation potential. The Ps atom work function is given by [28] Since the Ps is a neutral particle, is independent of the surface dipole. The LDA shows a clear tendency to overestimate the magnitude of [19]. This overestimation can be traced back to the screening effects. In the GGA, the value of is improved with respect to experiment by reducing the screening charge. The calculated positron affinities within LDA and GGA against the corresponding experimental values for several metals are shown in Fig. 5. Kuriplach et al. [29] calculated for different polytypes of SiC and showed that the GGA agrees better with the experimental values than the LDA. Panda et al. showed that the computed affinities depend crucially on the electron-positron potential used in the calculation (LDA or GGA) and on the quality of the wave function basis set [30]. The result with a more accurate basis set for valence electrons and within the GGA gives –3.92 eV for 3C-SiC, which is surprisingly close to the experimental value.

Positron Lifetime

The LDA systematically underestimates the positron lifetime in real materials. Sterne and Kaiser [31] suggested using a constant enhancement factor of unity for core electrons. Plazaola et al. [32] showed that the positron lifetimes calculated for II-VI compound semiconductors are too short due to the LDA overestimation of the annihilation rate with the uppermost atom-II d electrons. Puska et al. [33] introduced a semiempirical model in order to decrease the positron annihilation rate in semiconductors and insulators. In the GGA these corrections are not necessary. In general, the agreement of the GGA with experiment is excellent, as shown in Fig. 6. Moreover, Ishibashi et al. [34] have shown that the GGA reproduces the experimental values much better than the LDA even for low-electron-density systems such as molecular crystals. The GGA can also be safely applied to the calculation of annihilation characteristics for positrons trapped at vacancies in solids [24].
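As a rough numerical companion to the homogeneous-electron-gas relations discussed above (my own sketch, not from the chapter), one can combine the spin-averaged rate λ = π r_e² c n γ(r_s) with an LDA parametrization of the Arponen–Pajanne enhancement factor. The Boronski–Nieminen-style form is assumed below; the chapter's own equation may differ in the fitted middle coefficients, but its leading 1 + 1.23 r_s behaviour and its low-density r_s³/6 term are fixed as the text states.

```python
# Sketch only; the middle enhancement-factor coefficients are an assumed
# (Boronski-Nieminen style) parametrization, not taken from this chapter.
import numpy as np

r_e = 2.8179403e-15     # classical electron radius, m
c = 2.9979246e8         # speed of light, m/s
a0 = 5.2917721e-11      # Bohr radius, m

def positron_lifetime_ps(rs):
    n = 3.0 / (4.0 * np.pi * (rs * a0) ** 3)              # electron density, m^-3
    gamma = (1.0 + 1.23 * rs + 0.8295 * rs ** 1.5
             - 1.26 * rs ** 2 + 0.3286 * rs ** 2.5 + rs ** 3 / 6.0)
    rate = np.pi * r_e ** 2 * c * n * gamma               # annihilation rate, 1/s
    return 1e12 / rate                                    # lifetime, ps

for rs in (2.0, 3.0, 4.0, 6.0):                           # typical metallic densities
    print(rs, round(positron_lifetime_ps(rs), 1), "ps")

# Low-density limit: keeping only the rs**3/6 term gives the spin-averaged
# positronium rate, i.e. a lifetime of about 500 ps, as quoted in the text.
print(round(1e12 / (r_e ** 2 * c / (8.0 * a0 ** 3)), 1), "ps")
```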
02f29b228342099f
13.3 Matter As a Wave

[In] a few minutes I shall be all melted... I have been wicked in my day, but I never thought a little girl like you would ever be able to melt me and end my wicked deeds. Look out --- here I go! -- The Wicked Witch of the West

As the Wicked Witch learned the hard way, losing molecular cohesion can be unpleasant. That's why we should be very grateful that the concepts of quantum physics apply to matter as well as light. If matter obeyed the laws of classical physics, molecules wouldn't exist. Consider, for example, the simplest atom, hydrogen. Why does one hydrogen atom form a chemical bond with another hydrogen atom? Roughly speaking, we'd expect a neighboring pair of hydrogen atoms, A and B, to exert no force on each other at all, attractive or repulsive: there are two repulsive interactions (proton A with proton B and electron A with electron B) and two attractive interactions (proton A with electron B and electron A with proton B). Thinking a little more precisely, we should even expect that once the two atoms got close enough, the interaction would be repulsive. For instance, if you squeezed them so close together that the two protons were almost on top of each other, there would be a tremendously strong repulsion between them due to the \(1/r^2\) nature of the electrical force. The repulsion between the electrons would not be as strong, because each electron ranges over a large area, and is not likely to be found right on top of the other electron. Thus hydrogen molecules should not exist according to classical physics. Quantum physics to the rescue! As we'll see shortly, the whole problem is solved by applying the same quantum concepts to electrons that we have already used for photons.

13.3.1 Electrons as waves

We started our journey into quantum physics by studying the random behavior of matter in radioactive decay, and then asked how randomness could be linked to the basic laws of nature governing light. The probability interpretation of wave-particle duality was strange and hard to accept, but it provided such a link. It is now natural to ask whether the same explanation could be applied to matter. If the fundamental building block of light, the photon, is a particle as well as a wave, is it possible that the basic units of matter, such as electrons, are waves as well as particles? A young French aristocrat studying physics, Louis de Broglie (pronounced “broylee”), made exactly this suggestion in his 1923 Ph.D. thesis. His idea had seemed so farfetched that there was serious doubt about whether to grant him the degree. Einstein was asked for his opinion, and with his strong support, de Broglie got his degree. Only two years later, American physicists C.J. Davisson and L. Germer confirmed de Broglie's idea by accident. They had been studying the scattering of electrons from the surface of a sample of nickel, made of many small crystals. (One can often see such a crystalline pattern on a brass doorknob that has been polished by repeated handling.) An accidental explosion occurred, and when they put their apparatus back together they observed something entirely different: the scattered electrons were now creating an interference pattern! This dramatic proof of the wave nature of matter came about because the nickel sample had been melted by the explosion and then resolidified as a single crystal.
The nickel atoms, now nicely arranged in the regular rows and columns of a crystalline lattice, were acting as the lines of a diffraction grating. The new crystal was analogous to the type of ordinary diffraction grating in which the lines are etched on the surface of a mirror (a reflection grating) rather than the kind in which the light passes through the transparent gaps between the lines (a transmission grating).

a / A double-slit interference pattern made with neutrons. (A. Zeilinger, R. Gähler, C.G. Shull, W. Treimer, and W. Mampe, Reviews of Modern Physics, Vol. 60, 1988.)

Although we will concentrate on the wave-particle duality of electrons because it is important in chemistry and the physics of atoms, all the other “particles” of matter you've learned about show wave properties as well. Figure a, for instance, shows a wave interference pattern of neutrons. It might seem as though all our work was already done for us, and there would be nothing new to understand about electrons: they have the same kind of funny wave-particle duality as photons. That's almost true, but not quite. There are some important ways in which electrons differ significantly from photons:

1. Electrons have mass, and photons don't.
2. Photons always move at the speed of light, but electrons can move at any speed less than \(c\).
3. Photons don't have electric charge, but electrons do, so electric forces can act on them. The most important example is the atom, in which the electrons are held by the electric force of the nucleus.
4. Electrons cannot be absorbed or emitted as photons are. Destroying an electron or creating one out of nothing would violate conservation of charge.

(In section 13.4 we will learn of one more fundamental way in which electrons differ from photons, for a total of five.) Because electrons are different from photons, it is not immediately obvious which of the photon equations from chapter 11 can be applied to electrons as well. A particle property, the energy of one photon, is related to its wave properties via \(E=hf\) or, equivalently, \(E=hc/\lambda \). The momentum of a photon was given by \(p=hf/c\) or \(p=h/\lambda \). Ultimately it was a matter of experiment to determine which of these equations, if any, would work for electrons, but we can make a quick and dirty guess simply by noting that some of the equations involve \(c\), the speed of light, and some do not. Since \(c\) is irrelevant in the case of an electron, we might guess that the equations of general validity are those that do not have \(c\) in them: \[\begin{align*} E &= hf \\ p &= h/\lambda \end{align*}\] This is essentially the reasoning that de Broglie went through, and experiments have confirmed these two equations for all the fundamental building blocks of light and matter, not just for photons and electrons. The second equation, which I soft-pedaled in the previous chapter, takes on a greater importance for electrons. This is first of all because the momentum of matter is more likely to be significant than the momentum of light under ordinary conditions, and also because force is the transfer of momentum, and electrons are affected by electrical forces.

Example 12: The wavelength of an elephant

\(\triangleright\) What is the wavelength of a trotting elephant? \(\triangleright\) One may doubt whether the equation should be applied to an elephant, which is not just a single particle but a rather large collection of them.
Throwing caution to the wind, however, we estimate the elephant's mass at \(10^3\) kg and its trotting speed at 10 m/s. Its wavelength is therefore roughly \[\begin{align*} \lambda &= \frac{h}{p} \\ &= \frac{h}{mv} \\ &= \frac{6.63\times10^{-34}\ \text{J}\!\cdot\!\text{s}}{(10^3\ \text{kg})(10\ \text{m}/\text{s})} \\ &\sim 10^{-37}\ \frac{\left(\text{kg}\!\cdot\!\text{m}^2/\text{s}^2\right)\!\cdot\!\text{s}}{\text{kg}\!\cdot\!\text{m}/\text{s}} \\ &= 10^{-37}\ \text{m} \end{align*}\] The wavelength found in this example is so fantastically small that we can be sure we will never observe any measurable wave phenomena with elephants or any other human-scale objects. The result is numerically small because Planck's constant is so small, and as in some examples encountered previously, this smallness is in accord with the correspondence principle. Although a smaller mass in the equation \(\lambda =h/mv\) does result in a longer wavelength, the wavelength is still quite short even for individual electrons under typical conditions, as shown in the following example. Example 13: The typical wavelength of an electron \(\triangleright\) Electrons in circuits and in atoms are typically moving through voltage differences on the order of 1 V, so that a typical energy is \((e)(1\ \text{V})\), which is on the order of \(10^{-19}\ \text{J}\). What is the wavelength of an electron with this amount of kinetic energy? \(\triangleright\) This energy is nonrelativistic, since it is much less than \(mc^2\). Momentum and energy are therefore related by the nonrelativistic equation \(K=p^2/2m\). Solving for \(p\) and substituting in to the equation for the wavelength, we find \[\begin{align*} \lambda &= \frac{h}{\sqrt{2mK}} \\ &= 1.6\times10^{-9}\ \text{m} . \end{align*}\] This is on the same order of magnitude as the size of an atom, which is no accident: as we will discuss in the next chapter in more detail, an electron in an atom can be interpreted as a standing wave. The smallness of the wavelength of a typical electron also helps to explain why the wave nature of electrons wasn't discovered until a hundred years after the wave nature of light. To scale the usual wave-optics devices such as diffraction gratings down to the size needed to work with electrons at ordinary energies, we need to make them so small that their parts are comparable in size to individual atoms. This is essentially what Davisson and Germer did with their nickel crystal. These remarks about the inconvenient smallness of electron wavelengths apply only under the assumption that the electrons have typical energies. What kind of energy would an electron have to have in order to have a longer wavelength that might be more convenient to work with? What kind of wave is it? If a sound wave is a vibration of matter, and a photon is a vibration of electric and magnetic fields, what kind of a wave is an electron made of? The disconcerting answer is that there is no experimental “observable,” i.e., directly measurable quantity, to correspond to the electron wave itself. In other words, there are devices like microphones that detect the oscillations of air pressure in a sound wave, and devices such as radio receivers that measure the oscillation of the electric and magnetic fields in a light wave, but nobody has ever found any way to measure the electron wave directly. b / These two electron waves are not distinguishable by any measuring device. 
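The arithmetic in examples 12 and 13 is easy to reproduce on a computer. Here is a minimal Python sketch (an illustration only; it pulls \(h\) and the electron mass from scipy.constants, and it rounds the electron's kinetic energy to the same \(10^{-19}\ \text{J}\) figure used in example 13):

```python
from scipy.constants import h, m_e

# Example 12: a trotting elephant, treated (dubiously) as a single particle
m_elephant = 1e3          # kg, rough estimate
v_elephant = 10.0         # m/s
print(h / (m_elephant * v_elephant))   # ~ 7e-38 m, i.e., roughly 10^-37 m

# Example 13: an electron with kinetic energy of order 10^-19 J
K = 1e-19                              # J, the same round figure used in the text
p = (2 * m_e * K) ** 0.5               # nonrelativistic momentum, from K = p^2/2m
print(h / p)                           # ~ 1.6e-9 m, on the order of an atom's size
```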
We can of course detect the energy (or momentum) possessed by an electron just as we could detect the energy of a photon using a digital camera. (In fact I'd imagine that an unmodified digital camera chip placed in a vacuum chamber would detect electrons just as handily as photons.) But this only allows us to determine where the wave carries high probability and where it carries low probability. Probability is proportional to the square of the wave's amplitude, but measuring its square is not the same as measuring the wave itself. In particular, we get the same result by squaring either a positive number or its negative, so there is no way to determine the positive or negative sign of an electron wave. Most physicists tend toward the school of philosophy known as operationalism, which says that a concept is only meaningful if we can define some set of operations for observing, measuring, or testing it. According to a strict operationalist, then, the electron wave itself is a meaningless concept. Nevertheless, it turns out to be one of those concepts like love or humor that is impossible to measure and yet very useful to have around. We therefore give it a symbol, \(\Psi \) (the capital Greek letter psi), and a special name, the electron wavefunction (because it is a function of the coordinates \(x\), \(y\), and \(z\) that specify where you are in space). It would be impossible, for example, to calculate the shape of the electron wave in a hydrogen atom without having some symbol for the wave. But when the calculation produces a result that can be compared directly to experiment, the final algebraic result will turn out to involve only \(\Psi^2\), which is what is observable, not \(\Psi \) itself. Since \(\Psi \), unlike \(E\) and \(B\), is not directly measurable, we are free to make the probability equations have a simple form: instead of having the probability density equal to some funny constant multiplied by \(\Psi^2\), we simply define \(\Psi \) so that the constant of proportionality is one: \[\begin{equation*} (\text{probability distribution}) = \Psi ^2 . \end{equation*}\] Since the probability distribution has units of \(\text{m}^{-3}\), the units of \(\Psi \) must be \(\text{m}^{-3/2}\). Discussion Question ◊ Frequency is oscillations per second, whereas wavelength is meters per oscillation. How could the equations \(E=hf\) and \(p=h/\lambda\) be made to look more alike by using quantities that were more closely analogous? (This more symmetric treatment makes it easier to incorporate relativity into quantum mechanics, since relativity says that space and time are not entirely separate.) 13.3.2 Dispersive waves A colleague of mine who teaches chemistry loves to tell the story about an exceptionally bright student who, when told of the equation \(p=h/\lambda \), protested, “But when I derived it, it had a factor of 2!” The issue that's involved is a real one, albeit one that could be glossed over (and is, in most textbooks) without raising any alarms in the mind of the average student. The present optional section addresses this point; it is intended for the student who wishes to delve a little deeper. Here's how the now-legendary student was presumably reasoning. We start with the equation \(v=f\lambda \), which is valid for any sine wave, whether it's quantum or classical. 
Let's assume we already know \(E=hf\), and are trying to derive the relationship between wavelength and momentum: \[\begin{align*} \lambda &= \frac{v}{f} \\ &= \frac{vh}{E} \\ &= \frac{vh}{\frac{1}{2}mv^2} \\ &= \frac{2h}{mv} \\ &= \frac{2h}{p} . \end{align*}\] c / Part of an infinite sine wave. The mistaken assumption is that we can figure everything out in terms of pure sine waves. Mathematically, the only wave that has a perfectly well defined wavelength and frequency is a sine wave, and not just any sine wave but an infinitely long sine wave, c. The unphysical thing about such a wave is that it has no leading or trailing edge, so it can never be said to enter or leave any particular region of space. Our derivation made use of the velocity, \(v\), and if velocity is to be a meaningful concept, it must tell us how quickly stuff (mass, energy, momentum, ...) is transported from one region of space to another. Since an infinitely long sine wave doesn't remove any stuff from one region and take it to another, the “velocity of its stuff” is not a well defined concept. Of course the individual wave peaks do travel through space, and one might think that it would make sense to associate their speed with the “speed of stuff,” but as we will see, the two velocities are in general unequal when a wave's velocity depends on wavelength. Such a wave is called a dispersive wave, because a wave pulse consisting of a superposition of waves of different wavelengths will separate (disperse) into its separate wavelengths as the waves move through space at different speeds. Nearly all the waves we have encountered have been nondispersive. For instance, sound waves and light waves (in a vacuum) have speeds independent of wavelength. A water wave is one good example of a dispersive wave. Long-wavelength water waves travel faster, so a ship at sea that encounters a storm typically sees the long-wavelength parts of the wave first. When dealing with dispersive waves, we need symbols and words to distinguish the two speeds. The speed at which wave peaks move is called the phase velocity, \(v_p\), and the speed at which “stuff” moves is called the group velocity, \(v_g\). d / A finite-length sine wave. An infinite sine wave can only tell us about the phase velocity, not the group velocity, which is really what we would be talking about when we refer to the speed of an electron. If an infinite sine wave is the simplest possible wave, what's the next best thing? We might think the runner up in simplicity would be a wave train consisting of a chopped-off segment of a sine wave, d. However, this kind of wave has kinks in it at the end. A simple wave should be one that we can build by superposing a small number of infinite sine waves, but a kink can never be produced by superposing any number of infinitely long sine waves. e / A beat pattern created by superimposing two sine waves with slightly different wavelengths. Actually the simplest wave that transports stuff from place to place is the pattern shown in figure e. Called a beat pattern, it is formed by superposing two sine waves whose wavelengths are similar but not quite the same. If you have ever heard the pulsating howling sound of musicians in the process of tuning their instruments to each other, you have heard a beat pattern. The beat pattern gets stronger and weaker as the two sine waves go in and out of phase with each other. 
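Beat patterns are also easy to play with numerically. The following sketch (purely illustrative; the two frequencies are arbitrary choices) superposes two nearby sine waves and confirms the standard identity that the sum is a fast oscillation at the average frequency inside a slowly varying envelope:

```python
import numpy as np

f1, f2 = 100.0, 110.0                    # Hz; two nearby frequencies (arbitrary)
t = np.linspace(0.0, 0.5, 50001)         # half a second of the signal

total = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin a + sin b = 2 cos((a-b)/2) sin((a+b)/2): a carrier at the average
# frequency, 105 Hz, multiplied by a slow envelope at (f2 - f1)/2 = 5 Hz.
# The loud-soft pulsation you hear follows the magnitude of the envelope,
# so it repeats at f2 - f1 = 10 Hz.
envelope = 2 * np.cos(np.pi * (f2 - f1) * t)
carrier = np.sin(np.pi * (f1 + f2) * t)
print(np.max(np.abs(total - envelope * carrier)))   # ~ 1e-13: the identity holds
```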
The beat pattern has more “stuff” (energy, for example) in the areas where constructive interference occurs, and less in the regions of cancellation. As the whole pattern moves through space, stuff is transported from some regions and into other ones. If the frequency of the two sine waves differs by 10%, for instance, then ten periods will occur between times when they are in phase. Another way of saying it is that the sinusoidal “envelope” (the dashed lines in figure e) has a frequency equal to the difference in frequency between the two waves. For instance, if the waves had frequencies of 100 Hz and 110 Hz, the frequency of the envelope would be 10 Hz. To apply similar reasoning to the wavelength, we must define a quantity \(z=1/\lambda \) that relates to wavelength in the same way that frequency relates to period. In terms of this new variable, the \(z\) of the envelope equals the difference between the \(z's\) of the two sine waves. The group velocity is the speed at which the envelope moves through space. Let \(\Delta f\) and \(\Delta z\) be the differences between the frequencies and \(z's\) of the two sine waves, which means that they equal the frequency and \(z\) of the envelope. The group velocity is \(v_g=f_{envelope}\lambda_{envelope}=\Delta f/\Delta z\). If \(\Delta f\) and \(\Delta z\) are sufficiently small, we can approximate this expression as a derivative, \[\begin{equation*} v_g = \frac{df}{dz} . \end{equation*}\] This expression is usually taken as the definition of the group velocity for wave patterns that consist of a superposition of sine waves having a narrow range of frequencies and wavelengths. In quantum mechanics, with \(f=E/h\) and \(z=p/h\), we have \(v_g=dE/dp\). In the case of a nonrelativistic electron the relationship between energy and momentum is \(E=p^2/2m\), so the group velocity is \(dE/dp=p/m=v\), exactly what it should be. It is only the phase velocity that differs by a factor of two from what we would have expected, but the phase velocity is not the physically important thing. 13.3.3 Bound states Electrons are at their most interesting when they're in atoms, that is, when they are bound within a small region of space. We can understand a great deal about atoms and molecules based on simple arguments about such bound states, without going into any of the realistic details of real atoms. The simplest model of a bound state is known as the particle in a box: like a ball on a pool table, the electron feels zero force while in the interior, but when it reaches an edge it encounters a wall that pushes back inward on it with a large force. In particle language, we would describe the electron as bouncing off of the wall, but this incorrectly assumes that the electron has a certain path through space. It is more correct to describe the electron as a wave that undergoes 100% reflection at the boundaries of the box. Like a generation of physics students before me, I rolled my eyes when initially introduced to the unrealistic idea of putting a particle in a box. It seemed completely impractical, an artificial textbook invention. Today, however, it has become routine to study electrons in rectangular boxes in actual laboratory experiments. The “box” is actually just an empty cavity within a solid piece of silicon, amounting in volume to a few hundred atoms. The methods for creating these electron-in-a-box setups (known as “quantum dots”) were a by-product of the development of technologies for fabricating computer chips. 
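Before looking at specific bound states, here is a quick numerical check (a Python sketch only; the electron speed is an arbitrary nonrelativistic value) of the group-velocity result from the dispersive-waves discussion above: for \(E=p^2/2m\), the group velocity \(dE/dp\) reproduces the electron's velocity, while the phase velocity \(E/p\) comes out a factor of two low.

```python
from scipy.constants import m_e

v = 1.0e6                        # m/s, an arbitrary nonrelativistic electron speed
p = m_e * v
E = p**2 / (2 * m_e)

v_phase = E / p                  # f * lambda = (E/h) * (h/p); not physically meaningful

dp = 1e-6 * p                    # estimate dE/dp by a small finite difference
v_group = ((p + dp)**2 - (p - dp)**2) / (2 * m_e) / (2 * dp)

print(v_phase / v)               # 0.5: the phase velocity is v/2
print(v_group / v)               # 1.0: the group velocity matches the electron's velocity
```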
f / Three possible standing-wave patterns for a particle in a box. For simplicity let's imagine a one-dimensional electron in a box, i.e., we assume that the electron is only free to move along a line. The resulting standing wave patterns, of which the first three are shown in the figure, are just like some of the patterns we encountered with sound waves in musical instruments. The wave patterns must be zero at the ends of the box, because we are assuming the walls are impenetrable, and there should therefore be zero probability of finding the electron outside the box. Each wave pattern is labeled according to \(n\), the number of peaks and valleys it has. In quantum physics, these wave patterns are referred to as “states” of the particle-in-the-box system. The following seemingly innocuous observations about the particle in the box lead us directly to the solutions to some of the most vexing failures of classical physics: The particle's energy is quantized (can only have certain values). Each wavelength corresponds to a certain momentum, and a given momentum implies a definite kinetic energy, \(E=p^2/2m\). (This is the second type of energy quantization we have encountered. The type we studied previously had to do with restricting the number of particles to a whole number, while assuming some specific wavelength and energy for each particle. This type of quantization refers to the energies that a single particle can have. Both photons and matter particles demonstrate both types of quantization under the appropriate circumstances.) The particle has a minimum kinetic energy. Long wavelengths correspond to low momenta and low energies. There can be no state with an energy lower than that of the \(n=1\) state, called the ground state. The smaller the space in which the particle is confined, the higher its kinetic energy must be. Again, this is because long wavelengths give lower energies. Example 14: Spectra of thin gases A fact that was inexplicable by classical physics was that thin gases absorb and emit light only at certain wavelengths. This was observed both in earthbound laboratories and in the spectra of stars. The figure on the left shows the example of the spectrum of the star Sirius, in which there are “gap teeth” at certain wavelengths. Taking this spectrum as an example, we can give a straightforward explanation using quantum physics. g / The spectrum of the light from the star Sirius. Energy is released in the dense interior of the star, but the outer layers of the star are thin, so the atoms are far apart and electrons are confined within individual atoms. Although their standing-wave patterns are not as simple as those of the particle in the box, their energies are quantized. When a photon is on its way out through the outer layers, it can be absorbed by an electron in an atom, but only if the amount of energy it carries happens to be the right amount to kick the electron from one of the allowed energy levels to one of the higher levels. The photon energies that are missing from the spectrum are the ones that equal the difference in energy between two electron energy levels. (The most prominent of the absorption lines in Sirius's spectrum are absorption lines of the hydrogen atom.) Example 15: The stability of atoms In many Star Trek episodes the Enterprise, in orbit around a planet, suddenly lost engine power and began spiraling down toward the planet's surface. 
This was utter nonsense, of course, due to conservation of energy: the ship had no way of getting rid of energy, so it did not need the engines to replenish it. Consider, however, the electron in an atom as it orbits the nucleus. The electron does have a way to release energy: it has an acceleration due to its continuously changing direction of motion, and according to classical physics, any accelerating charged particle emits electromagnetic waves. According to classical physics, atoms should collapse! The solution lies in the observation that a bound state has a minimum energy. An electron in one of the higher-energy atomic states can and does emit photons and hop down step by step in energy. But once it is in the ground state, it cannot emit a photon because there is no lower-energy state for it to go to. Example 16: Chemical bonds I began this section with a classical argument that chemical bonds, as in an \(\text{H}_2\) molecule, should not exist. Quantum physics explains why this type of bonding does in fact occur. When the atoms are next to each other, the electrons are shared between them. The “box” is about twice as wide, and a larger box allows a smaller energy. Energy is required in order to separate the atoms. (A qualitatively different type of bonding is discussed on page 891. Example 23 on page 887 revisits the \(\text{H}_2\) bond in more detail.) h / Two hydrogen atoms bond to form an \(\text{H}_2\) molecule. In the molecule, the two electrons' wave patterns overlap, and are about twice as wide. Discussion Questions ◊ Neutrons attract each other via the strong nuclear force, so according to classical physics it should be possible to form nuclei out of clusters of two or more neutrons, with no protons at all. Experimental searches, however, have failed to turn up evidence of a stable two-neutron system (dineutron) or larger stable clusters. These systems are apparently not just unstable in the sense of being able to beta decay but unstable in the sense that they don't hold together at all. Explain based on quantum physics why a dineutron might spontaneously fly apart. ◊ The following table shows the energy gap between the ground state and the first excited state for four nuclei, in units of picojoules. (The nuclei were chosen to be ones that have similar structures, e.g., they are all spherical in shape.)

nucleus     energy gap (picojoules)
4He         3.234
16O         0.968
40Ca        0.536
208Pb       0.418

Explain the trend in the data. 13.3.4 The uncertainty principle and measurement Eliminating randomness through measurement? A common reaction to quantum physics, among both early-twentieth-century physicists and modern students, is that we should be able to get rid of randomness through accurate measurement. If I say, for example, that it is meaningless to discuss the path of a photon or an electron, one might suggest that we simply measure the particle's position and velocity many times in a row. This series of snapshots would amount to a description of its path. A practical objection to this plan is that the process of measurement will have an effect on the thing we are trying to measure. This may not be of much concern, for example, when a traffic cop measures your car's motion with a radar gun, because the energy and momentum of the radar pulses are insufficient to change the car's motion significantly. But on the subatomic scale it is a very real problem. Making a videotape through a microscope of an electron orbiting a nucleus is not just difficult, it is theoretically impossible. 
The video camera makes pictures of things using light that has bounced off them and come into the camera. If even a single photon of visible light was to bounce off of the electron we were trying to study, the electron's recoil would be enough to change its behavior significantly. The Heisenberg uncertainty principle i / Werner Heisenberg (1901-1976). Heisenberg helped to develop the foundations of quantum mechanics, including the Heisenberg uncertainty principle. He was the scientific leader of the Nazi atomic-bomb program up until its cancellation in 1942, when the military decided that it was too ambitious a project to undertake in wartime, and too unlikely to produce results. This insight, that measurement changes the thing being measured, is the kind of idea that clove-cigarette-smoking intellectuals outside of the physical sciences like to claim they knew all along. If only, they say, the physicists had made more of a habit of reading literary journals, they could have saved a lot of work. The anthropologist Margaret Mead has recently been accused of inadvertently encouraging her teenaged Samoan informants to exaggerate the freedom of youthful sexual experimentation in their society. If this is considered a damning critique of her work, it is because she could have done better: other anthropologists claim to have been able to eliminate the observer-as-participant problem and collect untainted data. The German physicist Werner Heisenberg, however, showed that in quantum physics, any measuring technique runs into a brick wall when we try to improve its accuracy beyond a certain point. Heisenberg showed that the limitation is a question of what there is to be known, even in principle, about the system itself, not of the ability or inability of a specific measuring device to ferret out information that is knowable but not previously hidden. Suppose, for example, that we have constructed an electron in a box (quantum dot) setup in our laboratory, and we are able to adjust the length \(L\) of the box as desired. All the standing wave patterns pretty much fill the box, so our knowledge of the electron's position is of limited accuracy. If we write \(\Delta x\) for the range of uncertainty in our knowledge of its position, then \(\Delta x\) is roughly the same as the length of the box: \[\begin{equation*} \Delta x \approx L \end{equation*}\] If we wish to know its position more accurately, we can certainly squeeze it into a smaller space by reducing \(L\), but this has an unintended side-effect. A standing wave is really a superposition of two traveling waves going in opposite directions. The equation \(p=h/\lambda \) really only gives the magnitude of the momentum vector, not its direction, so we should really interpret the wave as a 50/50 mixture of a right-going wave with momentum \(p=h/\lambda \) and a left-going one with momentum \(p=-h/\lambda \). The uncertainty in our knowledge of the electron's momentum is \(\Delta p=2h/\lambda\), covering the range between these two values. Even if we make sure the electron is in the ground state, whose wavelength \(\lambda =2L\) is the longest possible, we have an uncertainty in momentum of \(\Delta p=h/L\). In general, we find \[\begin{equation*} \Delta p \gtrsim h/L , \end{equation*}\] with equality for the ground state and inequality for the higher-energy states. Thus if we reduce \(L\) to improve our knowledge of the electron's position, we do so at the cost of knowing less about its momentum. 
This trade-off is neatly summarized by multiplying the two equations to give \[\begin{equation*} \Delta p\Delta x \gtrsim h . \end{equation*}\] Although we have derived this in the special case of a particle in a box, it is an example of a principle of more general validity: The Heisenberg uncertainty principle It is not possible, even in principle, to know the momentum and the position of a particle simultaneously and with perfect accuracy. The uncertainties in these two quantities are always such that \(\Delta p\Delta x \gtrsim h\). (This approximation can be made into a strict inequality, \(\Delta p\Delta x>h/4\pi\), but only with more careful definitions, which we will not bother with.) Note that although I encouraged you to think of this derivation in terms of a specific real-world system, the quantum dot, no reference was ever made to any specific laboratory equipment or procedures. The argument is simply that we cannot know the particle's position very accurately unless it has a very well defined position, it cannot have a very well defined position unless its wave-pattern covers only a very small amount of space, and its wave-pattern cannot be thus compressed without giving it a short wavelength and a correspondingly uncertain momentum. The uncertainty principle is therefore a restriction on how much there is to know about a particle, not just on what we can know about it with a certain technique. Example 17: An estimate for electrons in atoms \(\triangleright\) A typical energy for an electron in an atom is on the order of \((\text{1 volt})\cdot e\), which corresponds to a speed of about 1% of the speed of light. If a typical atom has a size on the order of 0.1 nm, how close are the electrons to the limit imposed by the uncertainty principle? \(\triangleright\) If we assume the electron moves in all directions with equal probability, the uncertainty in its momentum is roughly twice its typical momentum. This is only an order-of-magnitude estimate, so we take \(\Delta p\) to be the same as a typical momentum: \[\begin{align*} \Delta p \Delta x &= p_{typical} \Delta x \\ &= (m_{electron}) (0.01c) (0.1\times10^{-9}\ \text{m}) \\ &= 3\times 10^{-34}\ \text{J}\!\cdot\!\text{s} \end{align*}\] This is on the same order of magnitude as Planck's constant, so evidently the electron is “right up against the wall.” (The fact that it is somewhat less than \(h\) is of no concern since this was only an estimate, and we have not stated the uncertainty principle in its most exact form.) If we were to apply the uncertainty principle to human-scale objects, what would be the significance of the small numerical value of Planck's constant? Measurement and Schrödinger's cat On p. 847 I briefly mentioned an issue concerning measurement that we are now ready to address carefully. If you hang around a laboratory where quantum-physics experiments are being done and secretly record the physicists' conversations, you'll hear them say many things that assume the probability interpretation of quantum mechanics. Usually they will speak as though the randomness of quantum mechanics enters the picture when something is measured. 
In the digital camera experiments of section 13.2, for example, they would casually describe the detection of a photon at one of the pixels as if the moment of detection was when the photon was forced to “make up its mind.” Although this mental cartoon usually works fairly well as a description of things they experience in the lab, it cannot ultimately be correct, because it attributes a special role to measurement, which is really just a physical process like all other physical processes.4 If we are to find an interpretation that avoids giving any special role to measurement processes, then we must think of the entire laboratory, including the measuring devices and the physicists themselves, as one big quantum-mechanical system made out of protons, neutrons, electrons, and photons. In other words, we should take quantum physics seriously as a description not just of microscopic objects like atoms but of human-scale (“macroscopic”) things like the apparatus, the furniture, and the people. The most celebrated example is called the Schrödinger's cat experiment. Luckily for the cat, there probably was no actual experiment --- it was simply a “thought experiment” that the Austrian theorist Schrödinger discussed with his colleagues. Schrödinger wrote: One can even construct quite burlesque cases. A cat is shut up in a steel container, together with the following diabolical apparatus (which one must keep out of the direct clutches of the cat): In a Geiger tube [radiation detector] there is a tiny mass of radioactive substance, so little that in the course of an hour perhaps one atom of it disintegrates, but also with equal probability not even one; if it does happen, the counter [detector] responds and ... activates a hammer that shatters a little flask of prussic acid [filling the chamber with poison gas]. If one has left this entire system to itself for an hour, then one will say to himself that the cat is still living, if in that time no atom has disintegrated. The first atomic disintegration would have poisoned it. Now comes the strange part. Quantum mechanics describes the particles the cat is made of as having wave properties, including the property of superposition. Schrödinger describes the wavefunction of the box's contents at the end of the hour: The wavefunction of the entire system would express this situation by having the living and the dead cat mixed ... in equal parts [50/50 proportions]. The uncertainty originally restricted to the atomic domain has been transformed into a macroscopic uncertainty... At first Schrödinger's description seems like nonsense. When you opened the box, would you see two ghostlike cats, as in a doubly exposed photograph, one dead and one alive? Obviously not. You would have a single, fully material cat, which would either be dead or very, very upset. But Schrödinger has an equally strange and logical answer for that objection. In the same way that the quantum randomness of the radioactive atom spread to the cat and made its wavefunction a random mixture of life and death, the randomness spreads wider once you open the box, and your own wavefunction becomes a mixture of a person who has just killed a cat and a person who hasn't.5 Discussion Questions ◊ Compare \(\Delta p\) and \(\Delta x\) for the two lowest energy levels of the one-dimensional particle in a box, and discuss how this relates to the uncertainty principle. 
◊ On a graph of \(\Delta p\) versus \(\Delta x\), sketch the regions that are allowed and forbidden by the Heisenberg uncertainty principle. Interpret the graph: Where does an atom lie on it? An elephant? Can either \(p\) or \(x\) be measured with perfect accuracy if we don't care about the other? 13.3.5 Electrons in electric fields So far the only electron wave patterns we've considered have been simple sine waves, but whenever an electron finds itself in an electric field, it must have a more complicated wave pattern. Let's consider the example of an electron being accelerated by the electron gun at the back of a TV tube. Newton's laws are not useful, because they implicitly assume that the path taken by the particle is a meaningful concept. Conservation of energy is still valid in quantum physics, however. In terms of energy, the electron is moving from a region of low voltage into a region of higher voltage. Since its charge is negative, it loses electrical energy by moving to a higher voltage, so its kinetic energy increases. As its electrical energy goes down, its kinetic energy goes up by an equal amount, keeping the total energy constant. Increasing kinetic energy implies a growing momentum, and therefore a shortening wavelength, j. j / An electron in a gentle electric field gradually shortens its wavelength as it gains energy. The wavefunction as a whole does not have a single well-defined wavelength, but the wave changes so gradually that if you only look at a small part of it you can still pick out a wavelength and relate it to the momentum and energy. (The picture actually exaggerates by many orders of magnitude the rate at which the wavelength changes.) But what if the electric field was stronger? The electric field in a TV is only \(\sim10^5\) N/C, but the electric field within an atom is more like \(10^{12}\) N/C. In figure l, the wavelength changes so rapidly that there is nothing that looks like a sine wave at all. We could get a rough idea of the wavelength in a given region by measuring the distance between two peaks, but that would only be a rough approximation. Suppose we want to know the wavelength at point \(P\). The trick is to construct a sine wave, like the one shown with the dashed line, which matches the curvature of the actual wavefunction as closely as possible near \(P\). The sine wave that matches as well as possible is called the “osculating” curve, from a Latin word meaning “to kiss.” The wavelength of the osculating curve is the wavelength that will relate correctly to conservation of energy. l / A typical wavefunction of an electron in an atom (heavy curve) and the osculating sine wave (dashed curve) that matches its curvature at point P. k / The wavefunction's tails go where classical physics says they shouldn't. We implicitly assumed that the particle-in-a-box wavefunction would cut off abruptly at the sides of the box, k/1, but that would be unphysical. A kink has infinite curvature, and curvature is related to energy, so it can't be infinite. A physically realistic wavefunction must always “tail off” gradually, k/2. In classical physics, a particle can never enter a region in which its interaction energy \(U\) would be greater than the amount of energy it has available. But in quantum physics the wavefunction will always have a tail that reaches into the classically forbidden region. 
If it were not for this effect, called tunneling, the fusion reactions that power the sun would not occur due to the high electrical energy nuclei need in order to get close together! Tunneling is discussed in more detail in the following subsection. 13.3.6 The Schrödinger equation In subsection 13.3.5 we were able to apply conservation of energy to an electron's wavefunction, but only by using the clumsy graphical technique of osculating sine waves as a measure of the wave's curvature. You have learned a more convenient measure of curvature in calculus: the second derivative. To relate the two approaches, we take the second derivative of a sine wave: \[\begin{align*} \frac{d^2}{dx^2}\sin(2\pi x/\lambda) &= \frac{d}{dx}\left(\frac{2\pi}{\lambda}\cos\frac{2\pi x}{\lambda}\right) \\ &= -\left(\frac{2\pi}{\lambda}\right)^2 \sin\frac{2\pi x}{\lambda} \end{align*}\] Taking the second derivative gives us back the same function, but with a minus sign and a constant out in front that is related to the wavelength. We can thus relate the second derivative to the osculating wavelength: \[\begin{equation*} \frac{d^2\Psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^2\Psi \tag{1}\end{equation*}\] This could be solved for \(\lambda \) in terms of \(\Psi \), but it will turn out below to be more convenient to leave it in this form. Applying this to conservation of energy, we have \[\begin{align*} \begin{split} E &= K + U \\ &= \frac{p^2}{2m} + U \\ &= \frac{(h/\lambda)^2}{2m} + U \end{split} \tag{2} \end{align*}\] Note that both equation (1) and equation (2) have \(\lambda^2\) in the denominator. We can simplify our algebra by multiplying both sides of equation (2) by \(\Psi \) to make it look more like equation (1): \[\begin{align*} E \cdot \Psi &= \frac{(h/\lambda)^2}{2m}\Psi + U \cdot \Psi \\ &= \frac{1}{2m}\left(\frac{h}{2\pi}\right)^2\left(\frac{2\pi}{\lambda}\right)^2\Psi + U \cdot \Psi \\ &= -\frac{1}{2m}\left(\frac{h}{2\pi}\right)^2 \frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{align*}\] Further simplification is achieved by using the symbol \(\hbar\) (\(h\) with a slash through it, read “h-bar”) as an abbreviation for \(h/2\pi \). We then have the important result known as the Schrödinger equation: \[\begin{equation*} E \cdot \Psi = -\frac{\hbar^2}{2m}\frac{d^2\Psi}{dx^2} + U \cdot \Psi \end{equation*}\] (Actually this is a simplified version of the Schrödinger equation, applying only to standing waves in one dimension.) Physically it is a statement of conservation of energy. The total energy \(E\) must be constant, so the equation tells us that a change in interaction energy \(U\) must be accompanied by a change in the curvature of the wavefunction. This change in curvature relates to a change in wavelength, which corresponds to a change in momentum and kinetic energy. Considering the assumptions that were made in deriving the Schrödinger equation, would it be correct to apply it to a photon? To an electron moving at relativistic speeds? Usually we know right off the bat how \(U\) depends on \(x\), so the basic mathematical problem of quantum physics is to find a function \(\Psi(x)\) that satisfies the Schrödinger equation for a given interaction-energy function \(U(x)\). An equation, such as the Schrödinger equation, that specifies a relationship between a function and its derivatives is known as a differential equation. 
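Because the Schrödinger equation is nothing more than a statement about the curvature of \(\Psi\), it is easy to check numerically in a simple case. The sketch below (illustrative only; the kinetic energy is an arbitrary choice and \(U\) is set to zero) builds a sine wave with \(\lambda=h/p\) and verifies by finite differences that \(-(\hbar^2/2m)\,d^2\Psi/dx^2\) really does equal \(E\cdot\Psi\) at every interior grid point:

```python
import numpy as np
from scipy.constants import h, hbar, m_e, electron_volt

E = 2.0 * electron_volt                # arbitrary kinetic energy; U is taken to be 0
p = np.sqrt(2 * m_e * E)
lam = h / p

x = np.linspace(0.0, 5 * lam, 2001)
dx = x[1] - x[0]
psi = np.sin(2 * np.pi * x / lam)

# Second derivative by central finite differences (interior points only)
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2

lhs = E * psi[1:-1]                          # E * Psi
rhs = -(hbar**2 / (2 * m_e)) * d2psi         # kinetic-energy term of the equation
print(np.max(np.abs(lhs - rhs)) / np.max(np.abs(lhs)))   # ~ 2e-5: the two sides agree
```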
The detailed study of the solution of the Schrödinger equation is beyond the scope of this book, but we can gain some important insights by considering the easiest version of the Schrödinger equation, in which the interaction energy \(U\) is constant. We can then rearrange the Schrödinger equation as follows: \[\begin{equation*} \frac{d^2\Psi}{dx^2} = \frac{2m(U-E)}{\hbar^2} \Psi , \end{equation*}\] which boils down to \[\begin{equation*} \frac{d^2\Psi}{dx^2} = a\Psi , \end{equation*}\] where, according to our assumptions, \(a\) is independent of \(x\). We need to find a function whose second derivative is the same as the original function except for a multiplicative constant. The only functions with this property are sine waves and exponentials: \[\begin{align*} \frac{d^2}{dx^2}\left[\:q\sin(rx+s)\:\right] &= -qr^2\sin(rx+s) \\ \frac{d^2}{dx^2}\left[qe^{rx+s}\right] &= qr^2e^{rx+s} \end{align*}\] The sine wave gives negative values of \(a\), \(a=-r^2\), and the exponential gives positive ones, \(a=r^2\). The former applies to the classically allowed region with \(U\lt E\). m / Tunneling through a barrier. This leads us to a quantitative calculation of the tunneling effect discussed briefly in the preceding subsection. The wavefunction evidently tails off exponentially in the classically forbidden region. Suppose, as shown in figure m, a wave-particle traveling to the right encounters a barrier that it is classically forbidden to enter. Although the form of the Schrödinger equation we're using technically does not apply to traveling waves (because it makes no reference to time), it turns out that we can still use it to make a reasonable calculation of the probability that the particle will make it through the barrier. If we let the barrier's width be \(w\), then the ratio of the wavefunction on the left side of the barrier to the wavefunction on the right is \[\begin{equation*} \frac{qe^{rx+s}}{qe^{r(x+w)+s}} = e^{-rw} . \end{equation*}\] Probabilities are proportional to the squares of wavefunctions, so the probability of making it through the barrier is \[\begin{align*} P &= e^{-2rw} \\ &= \exp\left(-\frac{2w}{\hbar}\sqrt{2m(U-E)}\right) \end{align*}\] n / The electrical, nuclear, and total interaction energies for an alpha particle escaping from a nucleus. If we were to apply this equation to find the probability that a person can walk through a wall, what would the small value of Planck's constant imply? Example 18: Tunneling in alpha decay Naively, we would expect alpha decay to be a very fast process. The typical speeds of neutrons and protons inside a nucleus are extremely high (see problem 20). If we imagine an alpha particle coalescing out of neutrons and protons inside the nucleus, then at the typical speeds we're talking about, it takes a ridiculously small amount of time for them to reach the surface and try to escape. Clattering back and forth inside the nucleus, we could imagine them making a vast number of these “escape attempts” every second. Consider figure n, however, which shows the interaction energy for an alpha particle escaping from a nucleus. The electrical energy is \(kq_1q_2/r\) when the alpha is outside the nucleus, while its variation inside the nucleus has the shape of a parabola, as a consequence of the shell theorem. 
The nuclear energy is constant when the alpha is inside the nucleus, because the forces from all the neighboring neutrons and protons cancel out; it rises sharply near the surface, and flattens out to zero over a distance of \(\sim 1\) fm, which is the maximum distance scale at which the strong force can operate. There is a classically forbidden region immediately outside the nucleus, so the alpha particle can only escape by quantum mechanical tunneling. (It's true, but somewhat counterintuitive, that a repulsive electrical force can make it more difficult for the alpha to get out.) In reality, alpha-decay half-lives are often extremely long --- sometimes billions of years --- because the tunneling probability is so small. Although the shape of the barrier is not a rectangle, the equation for the tunneling probability on page 870 can still be used as a rough guide to our thinking. Essentially the tunneling probability is so small because \(U-E\) is fairly big, typically about 30 MeV at the peak of the barrier. Example 19: The correspondence principle for \(E>U\) The correspondence principle demands that in the classical limit \(h\rightarrow0\), we recover the correct result for a particle encountering a barrier \(U\), for both \(E\lt U\) and \(E>U\). The \(E\lt U\) case was analyzed in self-check H on p. 870. In the remainder of this example, we analyze \(E>U\), which turns out to be a little trickier. The particle has enough energy to get over the barrier, and the classical result is that it continues forward at a different speed (a reduced speed if \(U>0\), or an increased one if \(U\lt0\)), then regains its original speed as it emerges from the other side. What happens quantum-mechanically in this case? We would like to get a “tunneling” probability of 1 in the classical limit. The expression derived on p. 870, however, doesn't apply here, because it was derived under the assumption that the wavefunction inside the barrier was an exponential; in the classically allowed case, the barrier isn't classically forbidden, and the wavefunction inside it is a sine wave. o / A particle encounters a step of height \(U\lt E\) in the interaction energy. Both sides are classically allowed. A reflected wave exists, but is not shown in the figure. We can simplify things a little by letting the width \(w\) of the barrier go to infinity. Classically, after all, there is no possibility that the particle will turn around, no matter how wide the barrier. We then have the situation shown in figure o. The analysis is the same as for any other wave being partially reflected at the boundary between two regions where its velocity differs, and the result is the same as the one found on p. 367. The ratio of the amplitude of the reflected wave to that of the incident wave is \(R = (v_2-v_1)/(v_2+v_1)\). The probability of reflection is \(R^2\). (Counterintuitively, \(R^2\) is nonzero even if \(U\lt0\), i.e., \(v_2>v_1\).) This seems to violate the correspondence principle. There is no \(m\) or \(h\) anywhere in the result, so we seem to have the result that, even classically, the marble in figure p can be reflected! p / The marble has zero probability of being reflected from the edge of the table. (This example has \(U\lt0\), not \(U>0\) as in figures o and q). The solution to this paradox is that the step in figure o was taken to be completely abrupt --- an idealized mathematical discontinuity. Suppose we make the transition a little more gradual, as in figure q. As shown in problem 17 on p. 
380, this reduces the amplitude with which a wave is reflected. By smoothing out the step more and more, we continue to reduce the probability of reflection, until finally we arrive at a barrier shaped like a smooth ramp. More detailed calculations show that this results in zero reflection in the limit where the width of the ramp is large compared to the wavelength. q / Making the step more gradual reduces the probability of reflection. Three dimensions For simplicity, we've been considering the Schrödinger equation in one dimension, so that \(\Psi\) is only a function of \(x\), and has units of \(\text{m}^{-1/2}\) rather than \(\text{m}^{-3/2}\). Since the Schrödinger equation is a statement of conservation of energy, and energy is a scalar, the generalization to three dimensions isn't particularly complicated. The total energy term \(E\cdot\Psi\) and the interaction energy term \(U\cdot\Psi\) involve nothing but scalars, and don't need to be changed at all. In the kinetic energy term, however, we're essentially basing our computation of the kinetic energy on the squared magnitude of the momentum, \(p_x^2\), and in three dimensions this would clearly have to be generalized to \(p_x^2+p_y^2+p_z^2\). The obvious way to achieve this is to replace the second derivative \(d^2\Psi/dx^2\) with the sum \(\partial^2\Psi/\partial x^2+ \partial^2\Psi/\partial y^2+ \partial^2\Psi/\partial z^2\). Here the partial derivative symbol \(\partial\), introduced on page 216, indicates that when differentiating with respect to a particular variable, the other variables are to be considered as constants. This operation on the function \(\Psi\) is notated \(\nabla^2\Psi\), and the derivative-like operator \(\nabla^2=\partial^2/\partial x^2+ \partial^2/\partial y^2+ \partial^2/\partial z^2\) is called the Laplacian. It occurs elsewhere in physics. For example, in classical electrostatics, the voltage in a region of vacuum must be a solution of the equation \(\nabla^2V=0\). Like the second derivative, the Laplacian is essentially a measure of curvature. Example 20: Examples of the Laplacian in two dimensions \(\triangleright\) Compute the Laplacians of the following functions in two dimensions, and interpret them: \(A=x^2+y^2\), \(B=-x^2-y^2\), \(C=x^2-y^2\). \(\triangleright\) The first derivative of function \(A\) with respect to \(x\) is \(\partial A/\partial x=2x\). Since \(y\) is treated as a constant in the computation of the partial derivative \(\partial/\partial x\), the second term goes away. The second derivative of \(A\) with respect to \(x\) is \(\partial^2 A/\partial x^2=2\). Similarly we have \(\partial^2 A/\partial y^2=2\), so \(\nabla^2 A=4\). All derivative operators, including \(\nabla^2\), have the linear property that multiplying the input function by a constant just multiplies the output function by the same constant. Since \(B=-A\), we have \(\nabla^2 B=-4\). For function \(C\), the \(x\) term contributes a second derivative of 2, but the \(y\) term contributes \(-2\), so \(\nabla^2 C=0\). The interpretation of the positive sign in \(\nabla^2 A=4\) is that \(A\)'s graph is shaped like a trophy cup, and the cup is concave up. The negative sign in the result for \(\nabla^2 B\) is because \(B\) is concave down. Function \(C\) is shaped like a saddle. Since its curvature along one axis is concave up, but the curvature along the other is concave down and equal in magnitude, the function is considered to have zero concavity overall. 
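Example 20 can also be verified with a computer algebra system. Here is a minimal sketch using sympy (purely an illustration of the bookkeeping; the three test functions are the ones from the example):

```python
import sympy as sp

x, y = sp.symbols('x y')

def laplacian_2d(f):
    # Sum of the two unmixed second partial derivatives
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

A = x**2 + y**2      # concave up along both axes, like a trophy cup
B = -x**2 - y**2     # concave down along both axes
C = x**2 - y**2      # a saddle: the two curvatures cancel

print(laplacian_2d(A), laplacian_2d(B), laplacian_2d(C))   # 4  -4  0
```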
Example 21: A classically allowed region with constant \(U\) In a classically allowed region with constant \(U\), we expect the solutions to the Schrödinger equation to be sine waves. A sine wave in three dimensions has the form \[\begin{equation*} \Psi = \sin\left( k_x x + k_y y + k_z z \right) . \end{equation*}\] When we compute \(\partial^2\Psi/\partial x^2\), double differentiation of \(\sin\) gives \(-\sin\), and the chain rule brings out a factor of \(k_x^2\). Applying all three second derivative operators, we get \[\begin{align*} \nabla^2\Psi &= \left(-k_x^2-k_y^2-k_z^2\right)\sin\left( k_x x + k_y y + k_z z \right) \\ &= -\left(k_x^2+k_y^2+k_z^2\right)\Psi . \end{align*}\] The Schrödinger equation gives \[\begin{align*} E\cdot\Psi &= -\frac{\hbar^2}{2m}\nabla^2\Psi + U\cdot\Psi \\ &= -\frac{\hbar^2}{2m}\cdot -\left(k_x^2+k_y^2+k_z^2\right)\Psi + U\cdot\Psi \\ E-U &= \frac{\hbar^2}{2m}\left(k_x^2+k_y^2+k_z^2\right) , \end{align*}\] which can be satisfied since we're in a classically allowed region with \(E-U>0\), and the right-hand side is manifestly positive. Use of complex numbers In a classically forbidden region, a particle's total energy, \(U+K\), is less than its \(U\), so its \(K\) must be negative. If we want to keep believing in the equation \(K=p^2/2m\), then apparently the momentum of the particle is the square root of a negative number. This is a symptom of the fact that the Schrödinger equation fails to describe all of nature unless the wavefunction and various other quantities are allowed to be complex numbers. In particular it is not possible to describe traveling waves correctly without using complex wavefunctions. Complex numbers were reviewed in subsection 10.5.5, p. 603. This may seem like nonsense, since real numbers are the only ones that are, well, real! Quantum mechanics can always be related to the real world, however, because its structure is such that the results of measurements always come out to be real numbers. For example, we may describe an electron as having non-real momentum in classically forbidden regions, but its average momentum will always come out to be real (the imaginary parts average out to zero), and it can never transfer a non-real quantity of momentum to another particle. r / 1. Oscillations can go back and forth, but it's also possible for them to move along a path that bites its own tail, like a circle. Photons act like one, electrons like the other. 2. Back-and-forth oscillations can naturally be described by a segment taken from the real number line, and we visualize the corresponding type of wave as a sine wave. Oscillations around a closed path relate more naturally to the complex number system. The complex number system has rotation built into its structure, e.g., the sequence 1, \(i\), \(i^2\), \(i^3\), ... rotates around the unit circle in 90-degree increments. 3. The double slit experiment embodies the one and only mystery of quantum physics. Either type of wave can undergo double-slit interference. A complete investigation of these issues is beyond the scope of this book, and this is why we have normally limited ourselves to standing waves, which can be described with real-valued wavefunctions. Figure r gives a visual depiction of the difference between real and complex wavefunctions. The following remarks may also be helpful. Neither of the graphs in r/2 should be interpreted as a path traveled by something. This isn't anything mystical about quantum physics. 
It's just an ordinary fact about waves, which we first encountered in subsection 6.1.1, p. 340, where we saw the distinction between the motion of a wave and the motion of a wave pattern. In both examples in r/2, the wave pattern is moving in a straight line to the right. The helical graph in r/2 shows a complex wavefunction whose value rotates around a circle in the complex plane with a frequency \(f\) related to its energy by \(E=hf\). As it does so, its squared magnitude \(|\Psi|^2\) stays the same, so the corresponding probability stays constant. Which direction does it rotate? This direction is purely a matter of convention, since the distinction between the symbols \(i\) and \(-i\) is arbitrary --- both are equally valid as square roots of \(-1\). We can, for example, arbitrarily say that electrons with positive energies have wavefunctions whose phases rotate counterclockwise, and as long as we follow that rule consistently within a given calculation, everything will work. Note that it is not possible to define anything like a right-hand rule here, because the complex plane shown in the right-hand side of r/2 doesn't represent two dimensions of physical space; unlike a screw going into a piece of wood, an electron doesn't have a direction of rotation that depends on its direction of travel. Example 22: Superposition of complex wavefunctions \(\triangleright\) The right side of figure r/3 is a cartoonish representation of double-slit interference; it depicts the situation at the center, where symmetry guarantees that the interference is constructive. Suppose that at some off-center point, the two wavefunctions being superposed are \(\Psi_1=b\) and \(\Psi_2=bi\), where \(b\) is a real number with units. Compare the probability of finding the electron at this position with what it would have been if the superposition had been purely constructive, \(b+b=2b\). \(\triangleright\) The probability per unit volume is proportional to the square of the magnitude of the total wavefunction, so we have \[\begin{equation*} \frac{P_{\text{off center}}}{P_{\text{center}}} = \frac{|b+bi|^2}{|b+b|^2} = \frac{1^2+1^2}{2^2+0^2} = \frac{1}{2} . \end{equation*}\] Discussion Questions ◊ The zero level of interaction energy \(U\) is arbitrary, e.g., it's equally valid to pick the zero of gravitational energy to be on the floor of your lab or at the ceiling. Suppose we're doing the double-slit experiment, r/3, with electrons. We define the zero-level of \(U\) so that the total energy \(E=U+K\) of each electron is positive, and we observe a certain interference pattern like the one in figure i on p. 844. What happens if we then redefine the zero-level of \(U\) so that the electrons have \(E\lt0\)? The figure shows a series of snapshots in the motion of two pulses on a coil spring, one negative and one positive, as they move toward one another and superpose. The final image is very close to the moment at which the two pulses cancel completely. The following discussion is simpler if we consider infinite sine waves rather than pulses. How can the cancellation of two such mechanical waves be reconciled with conservation of energy? What about the case of colliding electromagnetic waves? Quantum-mechanically, the issue isn't conservation of energy, it's conservation of probability, i.e., if there's initially a 100% probability that a particle exists somewhere, we don't want the probability to be more than or less than 100% at some later time. 
What happens when the colliding waves have real-valued wavefunctions \(\Psi\)? Complex ones? What happens with standing waves? The figure shows a skateboarder tipping over into a swimming pool with zero initial kinetic energy. There is no friction, the corners are smooth enough to allow the skater to pass over them smoothly, and the vertical distances are small enough so that negligible time is required for the vertical parts of the motion. The pool is divided into a deep end and a shallow end. Their widths are equal. The deep end is four times deeper. (1) Classically, compare the skater's velocity in the left and right regions, and infer the probability of finding the skater in either of the two halves if an observer peeks at a random moment. (2) Quantum-mechanically, this could be a one-dimensional model of an electron shared between two atoms in a diatomic molecule. Compare the electron's kinetic energies, momenta, and wavelengths in the two sides. For simplicity, let's assume that there is no tunneling into the classically forbidden regions. What is the simplest standing-wave pattern that you can draw, and what are the probabilities of finding the electron in one side or the other? Does this obey the correspondence principle?
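As a closing numerical footnote to this section, the arithmetic of example 22 can be checked directly with Python's built-in complex numbers (a sketch only; the value of b is set to 1, which is harmless because only the ratio of the two probabilities matters):

```python
b = 1.0                        # any real value works; only the ratio matters

psi_off_center = b + b * 1j    # Psi_1 + Psi_2 = b + bi, a 90-degree phase difference
psi_center = b + b             # fully constructive superposition, 2b

print(abs(psi_off_center) ** 2 / abs(psi_center) ** 2)   # 0.5, as in example 22
```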
The Most Important Equation In The Universe An illustration of our cosmic history, from the Big Bang until the present, within the context of the expanding Universe. The first Friedmann equation describes all of these epochs, from inflation to the Big Bang to the present and far into the future, perfectly accurately, even today.NASA / WMAP science team Last week, Perimeter Institute ran a feature where they asked 14 scientists what their favorite equation was, and why. There were many great answers from many different areas of research, from thermodynamics to pure mathematics. Many people went with fundamental equations, like the law of gravity, Newton's famous F = ma, or the Schrödinger equation, which governs quantum particles. I had the honor of being included in this list, and the answer I gave was none of these. Instead, the equation I picked was a very specific one: the first Friedmann equation, which is derived from Einstein's General Relativity under a specific set of circumstances. A photo of Ethan Siegel at the American Astronomical Society's hyperwall in 2017, along with the first Friedmann equation at right.Perimeter Institute / Harley Thronson When they asked why I picked that equation, here's what I said: "The first Friedmann equation describes how, based on what is in the universe, its expansion rate will change over time. If you want to know where the Universe came from and where it's headed, all you need to measure is how it is expanding today and what is in it. This equation allows you to predict the rest!" The story of Friedmann, his equation, and what it teaches us about our Universe is a story that every science enthusiast should know. Countless scientific tests of Einstein's general theory of relativity have been performed, subjecting the idea to some of the most stringent constraints ever obtained by humanity. Einstein's first solution was for the weak-field limit around a single mass, like the Sun; he applied these results to our Solar System with dramatic success.LIGO scientific collaboration / T. Pyle / Caltech / MIT In 1915, Einstein put forth his theory of General Relativity, which related the curvature of spacetime on one hand to the presence of matter and energy in the Universe on the other. As John Wheeler put it many years later, spacetime tells matter how to move; matter tells spacetime how to curve. Einstein's theory, in one fell swoop, reproduced all the previous successes of Newton's gravity, explained the intricacies of Mercury's orbit (which Newton's theory couldn't), and made a new prediction for the bending of starlight, which was spectacularly confirmed during the total solar eclipse of 1919. The only problem? In order to prevent the Universe from collapsing in on itself, Einstein needed to add a cosmological constant — an ad hoc fix for the fact that static spacetimes were unstable in General Relativity — to his theory. It was ugly, it was finely-tuned, and it had no other motivation. Alexander Friedmann was just 33 when he wrote down the Friedmann equations and predicted an expanding Universe. Three years later, his life would be tragically cut short by illness.E. A. Tropp, V. Ya. Frenkel & A. D. Chernin; Cambridge University Press Enter Friedmann. In 1922, just three years after the eclipse confirmation, Friedmann found an elegant way to save the Universe while simultaneously doing away with the cosmological constant: don't assume that it's static. 
Instead, Friedmann argued, assume that it is as we observe it, full of matter and radiation, and allowed to be curved. Assume, further, that it's roughly isotropic and homogeneous, which are mathematical words meaning "the same in all directions" and "the same at all locations." If you make these assumptions, two equations pop out: the Friedmann equations. They tell you that the Universe isn't static, but rather that it either expands or contracts depending on what the expansion rate and the contents of your Universe are. Best of all, they tell you how the Universe evolves with time, arbitrarily far into the future or past. The expected fates of the Universe (top three illustrations) all correspond to a Universe where the matter and energy fight against the initial expansion rate. In our observed Universe, a cosmic acceleration is caused by some type of dark energy, which is hitherto unexplained. (E. Siegel / Beyond the Galaxy) What's remarkable is that Friedmann put this out before we discovered that the Universe was expanding; before Hubble even discovered that there were galaxies beyond the Milky Way in the Universe! It wouldn't be until the next year that Hubble would identify Cepheid variable stars in Andromeda, teaching us its distance and placing it far outside of our own galaxy. Furthermore, it wouldn't be until the late 1920s that Georges Lemaître and later, independently, Hubble, would put the redshift-and-distance figures together to conclude that the Universe was expanding. By that time, the young Friedmann had already tragically died of typhoid fever, which he had contracted while returning from his honeymoon in 1925. Hubble's discovery of a Cepheid variable in the Andromeda galaxy, M31, opened up the Universe to us, giving us the observational evidence we needed for galaxies beyond the Milky Way and leading to the expanding Universe. (E. Hubble, NASA, ESA, R. Gendler, Z. Levay and the Hubble Heritage Team) Yet his scientific legacy was indisputable, and became even more so as we came to understand cosmology better. The first Friedmann equation is the more important of the two, since it's the easiest and most straightforward to tie to observations. On one side, you have the equivalent of the expansion rate (squared), or what's colloquially known as the Hubble constant. (It's not truly a constant, since it can change as the Universe expands or contracts over time.) It tells you how the fabric of the Universe expands or contracts as a function of time. The first Friedmann equation, as conventionally written today (in modern notation), where the left side details the Hubble expansion rate and the evolution of spacetime, and the right side includes all the different forms of matter and energy, along with spatial curvature. (LaTeX / public domain) On the other side is literally everything else. There's all the matter, radiation, and any other forms of energy that make up the Universe. There's the curvature intrinsic to space itself, dependent on whether the Universe is closed (positively curved), open (negatively curved), or flat (uncurved). And there's also the "Λ" term: a cosmological constant, which can either be a form of energy or can be an intrinsic property of space. An illustration of how spacetime expands when it's dominated by Matter, Radiation or energy inherent to space itself. All three of these solutions are derivable from the Friedmann equations. (E. Siegel)
Either way, this is the equation that relates how the Universe expands, quantitatively, to what makes up the matter and energy within it. Measure what's in your Universe today and how fast it's expanding today, and you can extrapolate forwards or backwards by arbitrary amounts. You can know how the Universe was expanding in the distant past or immediately after the Big Bang. You can know whether it will recollapse or not (it won't), or whether the expansion rate will asymptote to zero (it won't) or remain positive forever (it will). The Universe doesn't just expand uniformly, but has tiny density imperfections within it, which enable us to form stars, galaxies, and clusters of galaxies as time goes on. Adding density inhomogeneities to the first Friedmann equation is the starting point for understanding what the Universe looks like today. (E.M. Huff, the SDSS-III team and the South Pole Telescope team; graphic by Zosia Rostomian) And perhaps most spectacularly, you can add imperfections atop this smooth background. The density imperfections you put into your Universe tell you how large-scale structure grows and forms, what will grow into a galaxy/cluster and what won't, and what will become gravitationally bound versus what will be driven apart. All of this can be derived from one single equation: the first Friedmann equation. There is a large suite of scientific evidence that supports the picture of the expanding Universe and the Big Bang. The small number of input parameters and the large number of observational successes and predictions that have been subsequently verified are among the hallmarks of a successful scientific theory. The Friedmann equation describes it all. (NASA / GSFC) Although Friedmann's life was short, his influence cannot be overstated. He was the first to derive the General Relativity solution that describes our Universe: an expanding Universe filled with matter. Although it was independently derived, later, by three others (Georges Lemaître, Howard Robertson, and Arthur Walker), Friedmann fully realized its implications and applications, and even came up with the first solutions for exotically curved spaces. He was an influential teacher as well; his most famous pupil was George Gamow, who would later go on to apply Friedmann's work to the expanding Universe to create the Big Bang Theory of our cosmic origin. A visual history of the expanding Universe includes the hot, dense state known as the Big Bang and the growth and formation of structure subsequently. George Gamow, a student of Friedmann, was clearly heavily influenced by him in coming up with the idea of the Big Bang from whence this picture derives. (NASA / CXC / M. Weiss) Nearly a century after his most famous work, Friedmann's equations have been extended to a Universe containing an inflationary origin, dark matter, neutrinos, and dark energy. Yet they're still perfectly valid, with no additions or modifications required to account for these tremendous advances. While we can all argue about the relative merits of Einstein, Newton, Maxwell, Feynman, Boltzmann, Hawking, and many others, when it comes to the expanding Universe, Friedmann's first equation is the only one you need. It connects the matter and energy that's present to the expansion rate today, in the past, and in the future, and allows you to know the fate and history of the Universe from measurements we can make today. As far as the fabric of our Universe is concerned, this equation takes the crown as the single most important.
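For reference (this is not reproduced from the article's image, but is the standard textbook form of the equation the article describes), the first Friedmann equation in conventional modern notation reads:

```latex
% First Friedmann equation: H is the Hubble rate, a the scale factor,
% rho the total energy density, k the spatial curvature constant,
% and Lambda the cosmological constant.
\[
  H^{2} \;\equiv\; \left(\frac{\dot{a}}{a}\right)^{2}
  \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}} \;+\; \frac{\Lambda c^{2}}{3}
\]
```

The left side is the expansion rate squared; the right side collects the energy density, the spatial curvature, and the cosmological constant, exactly the bookkeeping described in the article.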
Chemistry LibreTexts • Lecture 1: The Rise of Quantum Mechanics Classical mechanics is unable to explain certain phenomena observed in nature, including the emission of blackbody radiators, which is sensitive to the temperature of the radiator. This distribution follows Planck's distribution, which can be used to derive two other experimental laws (Wien's law and the Stefan-Boltzmann law). A key finding is that the energy given off by a blackbody is not continuous, but given off in discrete increments (quanta) at each wavelength. • Lecture 2: The Rise of Quantum Mechanics Classical mechanics is unable to explain certain phenomena observed in nature. The photoelectric effect has several experimental observations that break with classical predictions. Einstein proposed a solution: that light is quantized, with each quantum of light called a photon, whose energy is proportional to its frequency. This was an impressive argument in that it said light is not always a wave, but can be a particle. This duality also applies to matter. • Lecture 3: The Rise of Quantum Mechanics Hydrogen atom emission spectra consist of "lines" rather than the continuum expected from classical mechanics. These lines were separated into different classes. Rydberg showed that a single simple equation can predict the energies of these transitions by introducing two integers of unknown origin. While the photoelectric effect demonstrated that light can be wave-like and particle-like (e.g., the "photon"), de Broglie demonstrated that matter also exhibits wave-like and particle-like behavior. • Lecture 4: Bohr atom and Heisenberg Uncertainty The Bohr atom was the first successful description of a quantum atom from basic principles (either as a particle or as a wave; both were discussed). From a particle perspective, stable orbits are predicted from the result of opposing forces (the Coulomb force vs. the centripetal force). From a wave perspective, stable "standing waves" are predicted. The Bohr atom predicts quantized energies. Heisenberg's Uncertainty Principle argues that trajectories do not exist in quantum mechanics. • Lecture 5: Classical Wave Equations and Solutions The Schrödinger Equation is a wave equation that is used to describe quantum mechanical systems and is akin to Newtonian mechanics in classical mechanics. The Schrödinger Equation is an eigenvalue/eigenvector problem. To use it we have to recognize that observables are associated with linear operators that "operate" on the wavefunction. • Lecture 6: Schrödinger Equation The Schrödinger Equation has solutions called wavefunctions. The time-dependent Schrödinger Equation results in time-dependent wavefunctions with both a spatial aspect and a temporal aspect. The time-independent Schrödinger Equation results in time-independent wavefunctions with only a spatial aspect. Which one we use depends on whether there is an explicit time dependence in the Hamiltonian. It is important to recognize that wavefunctions ALWAYS have a temporal part (which we typically ignore, though). • Lecture 7: Operators, Free Particles and the Quantum Superposition Principle Wavefunctions have a probabilistic interpretation; more specifically, the wavefunction squared (or, to be more exact, Ψ∗Ψ) is a probability density. To get a probability, we have to integrate Ψ∗Ψ over an interval. The probabilistic interpretation means Ψ∗Ψ must be finite and nonnegative, and the wavefunctions must be normalized. 
We then introduced the particle in a box, for which it is "easy" to solve the Schrödinger Equation and obtain oscillatory wavefunctions. • Lecture 8: Topical Overview of PIB and Postulates QM This lecture focused on gaining an intuition of wavefunctions with an emphasis on the particle in the box. Specifically, we considered the four principal properties of continuous distributions and applied them to the particle in the box. We want to develop an intuition for how the energy and wavefunctions change in the PIB when the mass is increased, when the box length is increased, and when the quantum number n is increased. We ended the discussion by noting that eigenstates of an operator are orthogonal. • Lecture 9: More on PIB and Orthonormality We continued the discussion of the PIB and the intuition we want from the model system. We revisited the time-dependent solutions to the model system (which are always there). We emphasized not only that the total wavefunction must be oscillating in time (although we often ignore that in this class), but also that it has both a real and an imaginary component (we will revisit that again later on). We discussed the symmetry of functions and integration over odd integrands, and ended on the topic of orthonormality. • Lecture 10: Expectation values, 2D-PIB and Heisenberg Uncertainty Principle We extend the 1D particle in a box to the 2D and 3D cases. From this we identified a few interesting phenomena, including multiple quantum numbers and degeneracy, where multiple wavefunctions share the identical energy. We were able to provide a quantitative backing for the Heisenberg Uncertainty Principle from wavefunctions in terms of standard deviations, and we ended the lecture on the five postulates of quantum mechanics. • Lecture 11: Vibrations Three aspects were addressed: (1) Introduction of the commutator, which is used to evaluate whether two operators commute. Not every pair of operators will commute, meaning the order of operations matters. (2) Redefining the Heisenberg Uncertainty Principle within the context of commutators, to identify whether any two quantum measurements can be simultaneously evaluated. (3) An introduction to vibrations, including the harmonic oscillator potential, which was qualitatively shown (via a Java application). • Lecture 12: Vibrational Spectroscopy of Diatomic Molecules We first introduce bra-ket notation as a means to simplify the manipulation of integrals. We introduced a qualitative discussion of IR spectroscopy and then focused on "selection rules" for which vibrations are "IR-active." The two criteria discussed were (1) the vibration requires a changing dipole moment and (2) that \(\Delta v = \pm 1\) is required for the transition (within harmonic oscillators). These selection rules can be derived from the concept of a transition moment and symmetry. • Lecture 13: Harmonic Oscillators and Rotation of Diatomic Molecules Symmetry (and direct product tables for odd/even functions) was discussed, and we showed that harmonic oscillator wavefunctions alternate between even and odd due to the Hermite polynomial component, which affects the transition moment integral so that only transitions in the IR between adjacent wavefunctions are allowed (i.e., no overtones). This is an approximation, and the Taylor expansion of an arbitrary potential shows that anharmonic terms must be included. We introduced the Morse oscillator and rotations. 
• Lecture 14: Chalk talk review of Oscillators Projector difficulties resulted in a chalk talk/discussion involving quantum harmonic oscillators, harmonic oscillator eigenstates, anharmonicity, the Morse potential, etc. for class instead of the intended presentation. • Lecture 15: 3D Rotations and Microwave Spectroscopy We continue our discussion of the solutions to the 3D rigid rotor: the wavefunctions (the spherical harmonics), the energies (and degeneracies), and the TWO quantum numbers (\(J\) and \(m_J\)) and their ranges. We discussed that the components of the angular momentum operator are subject to the Heisenberg uncertainty principle and cannot be known to infinite precision simultaneously; however, the magnitude of the angular momentum and any one component can be. This results in the vectorial representation. • Lecture 16: Linear Momentum and Electronic Spectroscopy The potential, Hamiltonian and Schrödinger equation for the hydrogen atom are introduced. The solution involves radial and angular components. The latter is just the spherical harmonics derived for the rigid rotor system. The radial component is a function of four terms: a normalization constant, an associated Laguerre polynomial, a nodal function, and an exponential decay. We also discussed that the energy is a function of only one quantum number and that there is a degeneracy to address. • Lecture 17: Hydrogen-like Solutions While there are three quantum numbers in the solutions to the corresponding Schrödinger equation, the energy is a function of n only. We continued our discussion of the radial component of the wavefunctions as a product of four terms that crudely results in an exponentially decaying amplitude as a function of distance from the nucleus, scaled by a pair of polynomials. We discussed the volume and shell elements in spherical coordinates and introduced the radial distribution function. • Lecture 18: Orbital Angular Momentum, Spectroscopy and Multi-Electron Atoms The angular momentum of an electron is described by the \(l\) quantum number. The \(m_l\) quantum number designates the orientation of that angular momentum with respect to the z-axis. The degeneracy can be partially broken by an applied magnetic field. There is not always a one-to-one correspondence between quantum numbers and orbitals. Basic electronic spectroscopy was reviewed, specifically selection rules. The He system, which is impossible to solve exactly, was discussed; it requires approximations, and a poor one was introduced. • Lecture 19: Variational Method, Effective Charge, and Matrix Representation Three aspects were addressed: (1) We continued discussing the complications of electron-electron repulsion and showed that ignoring it is really pretty poor. (2) We can qualitatively address it by introducing an effective charge within a shielding-and-penetration perspective. (3) We motivated the variational method by arguing that the energy of a trial wavefunction will be lowest when it most closely resembles the true wavefunction (and the same for the corresponding energies). • Lecture 20: Variational Method Approximation and Linear Variational Method The variational method approach requires postulating a trial wavefunction and calculating the energy of that function as a function of the parameters of that trial wavefunction. Then we can minimize the energy as a function of these parameters; the closer the wavefunction "looks" like the true wavefunction, the closer the trial energy matches the true energy. Several example trial wavefunctions for the He atom are discussed. 
We introduce the matrix representation of quantum mechanics. • Lecture 21: Linear Variational Theory This lecture reviews the basic steps in the variational method, the linear variational method, and the linear variational method with functions that have parameters that can float (e.g., a linear combination of Gaussians with variable widths in ab initio chemistry calculations). The latter two will be more applicable in the discussions of molecules using atomic orbitals as the basis set (the LCAO approximation). The final approximation, perturbation theory, is introduced but not used in an example. • Lecture 22: Perturbation Theory The basic steps of perturbation theory are discussed, including its application to the energy and wavefunctions. A reminder of the orbital approximation was given (where an N-electron wavefunction can be described as N one-electron orbitals that resemble the hydrogen atom wavefunctions). A consequence of the orbital approximation is the ability to construct electron configurations, which are filled following the aufbau principle. However, the aufbau principle is only a guideline and not a hard-and-fast rule. • Lecture 23: Electron Spin, Indistinguishability and Slater Determinants This lecture addresses two unique aspects of electrons, spin and indistinguishability, and how they couple into describing multi-electron wavefunctions. The spin results in an angular momentum that follows the same properties as orbital angular momentum, including commutators and the uncertainty effect. The Slater determinant wavefunction is introduced as a way to consistently address both properties. • Lecture 24: Coupling Angular Momenta and Atomic Term Symbols The last lecture addressed how the different orbital angular momenta of multi-electron atoms couple to break degeneracies predicted from the "ignorance is bliss" approximation (i.e., the hydrogen atom). Total angular momenta are introduced along with multiplicity. Atomic term symbols are discussed along with all three of Hund's rules to identify the most stable combination of angular momenta for a specific electron configuration. • Lecture 25: Molecules and Molecular Orbital Theory The application of term symbols to describe atomic spectroscopy is demonstrated. The corresponding selection rules are discussed. The Born-Oppenheimer approximation is introduced to help solve the N-body Schrödinger equation of molecules. This introduces the concept of a potential energy curve (surface). The LCAO approach is introduced as a mechanism to solve for Molecular Orbitals (MOs). • Lecture 26: Populating Molecular Orbitals: σ and π Orbitals From this LCAO-MO approach arise the Coulomb, exchange (similar to HF calculations of atoms), and overlap integrals. The concept of bonding and anti-bonding orbitals results. The application of LCAO toward molecular orbitals is demonstrated, including linear variational theory and secular equations. • Lecture 27: Molecular Orbitals and Diatomics Bond order, bond length and bond energy are emphasized for the H2 species. Simple MO theory does not predict He dimers. Bond order too • Lecture Extra: Hartree vs. Hartree-Fock, SCF, and Koopmans' Theorem The consequences of indistinguishability in electronic structure calculations. The Hartree and Hartree-Fock (HF) calculations were introduced within the Self-Consistent-Field (SCF) approach (similar to the numerical evaluation of minima). The Hartree method treats electron-electron interaction only as an average repulsion energy, while the HF approach, using Slater determinant wavefunctions, introduces an exchange energy term. 
Ionization energies and electron affinities are discussed within the context of Koopmans' theorem. • Lecture Extra II: Molecular Orbitals with higher Energy Atomic Orbitals The MOs of first-row diatomics are discussed, including both π and σ MOs. The MO diagram is presented. Bond order, bond length, and bond energies are emphasized. The flip of the π/σ MO ordering is demonstrated, and the paramagnetism of oxygen is a natural conclusion of MO theory. Thumbnail: Michael Faraday delivering a Christmas lecture at the Royal Institution, ca. 1856. Image used with permission (Public Domain).
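As a small illustration of the particle-in-a-box material that runs through Lectures 7-10 above, here is a minimal numerical sketch (not part of the LibreTexts page): it evaluates the standard PIB energies E_n = n²h²/(8mL²) and checks normalization and orthogonality of the wavefunctions; the box length, mass, and quantum numbers are arbitrary illustrative choices.

```python
import numpy as np

# Particle in a 1D box: E_n = n^2 h^2 / (8 m L^2), psi_n(x) = sqrt(2/L) sin(n pi x / L)
h = 6.62607015e-34      # Planck constant (J s)
m = 9.1093837015e-31    # electron mass (kg)
L = 1.0e-9              # box length: 1 nm (illustrative choice)

def energy(n):
    """PIB energy of level n, in joules."""
    return n**2 * h**2 / (8 * m * L**2)

def psi(n, x):
    """Normalized PIB wavefunction."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
for n in (1, 2, 3):
    print(f"E_{n} = {energy(n):.3e} J")

# Numerical checks of normalization and orthogonality
print("<1|1> =", np.sum(psi(1, x) * psi(1, x)) * dx)   # ~1 (normalized)
print("<1|2> =", np.sum(psi(1, x) * psi(2, x)) * dx)   # ~0 (orthogonal eigenstates)
```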
string theory FAQ What is string theory? More conceptually, the premise is that There are a host of educated guesses of what non-perturbative string theory might be, if anything, but it remains unknown. At some point the term M-theory had been established for whatever that non-perturbative theory is, but even though it already has a name, it still remains unknown. (Or rather: its full incarnation remains unknown. What is well defined is 11-dimensional supergravity with some M-brane effects and gauge enhancement included, and that is what is presently being studied under the name "M-theory", see for instance at M-theory on G2-manifolds; and see also at F-theory.) Then why not consider perturbative p-brane scattering for any p? The above motivation of perturbative string theory, as the evident result of replacing the definition of an S-matrix perturbation theory via 1-dimensional Feynman diagrams encoding worldlines of particles by 2-dimensional diagrams (Riemann surfaces) encoding worldsheets of strings, raises the evident question: why stop at strings? Why not consider an S-matrix built as the sum of the correlators of a worldvolume field theory for each (p+1)-dimensional manifold, encoding the propagation of a membrane for p = 2, and generally of (what is called) a p-brane? To answer this, it is again crucial to distinguish between the perturbation theory and the non-perturbative theory. On the one hand, study of string perturbation theory shows that indeed strings interact with, and give rise to, p-branes for many different values of p. But the dynamics of all these higher dimensional branes itself seems to be intrinsically non-perturbative. What does not seem to exist is a sensible perturbation series for p-brane scattering with p > 1. The reason is that it is hard, and seems impossible, to make sense of this. There are two technical problems: 1. for p > 1 the standard worldvolume action functionals (Nambu-Goto action) are not renormalizable; 2. for p > 1 the moduli spaces of (p+1)-manifolds are not controllable. So for p > 1 one a) does not know how to define the "Feynman amplitudes" and b) even if one did, one does not know against what to integrate them. Each of these two problems in itself makes a p-brane perturbation theory for p > 1 hard to come by. That is, incidentally, the very reason for the term "M-theory". First there had been the observation that the super 1-brane in 10d target spacetime is accompanied by a 2-brane in 11d target spacetime, now called the M2-brane (for the history see Mike Duff, The World in Eleven Dimensions). This suggested the evident idea that there ought to be a perturbation theory for 2-branes – called membranes – hence that there ought to be a "membrane theory" in direct analogy with "string theory". But the above two problems make a direct such analogy unlikely. Nevertheless, since there might be a less obvious, more sophisticated kind of analogy, Edward Witten proposed to say "M-theory" as an abbreviation, not to commit himself to what exactly might be really going on, and leaving open for the future whether "M" is for "membrane" or for something else: M stands for magic, mystery, or membrane, according to taste (Witten 95) What are the equations of string theory? 
All local field theories in physics are prominently embodied by key equations, their equations of motion. For instance classical gravity (general relativity) is essentially the theory of Einstein's equations, quantum mechanics is governed by the Schrödinger equation, and so forth. But perturbative string theory is not a local field theory. Instead it is an S-matrix theory (see What is string theory?). Therefore instead of being given by an equation that picks out the physical trajectories, it is given by a formula for how to compute scattering amplitudes. That formula is the string perturbation series: it says that the probability amplitude for $n_{in}$ asymptotic states of strings coming in (into a particle collider experiment, say), scattering, and $n_{out}$ other asymptotic string states emerging (and hitting a detector, say) is a sum over all Riemann surfaces with $(n_{in}, n_{out})$ punctures of the n-point functions of the given 2d CFT that defines the string vacuum (see at What is a string vacuum?). More in detail, a string background is equivalently a choice of 2d SCFT of central charge 15 (a "2-spectral triple"), and in terms of this the formula for the S-matrix element/scattering amplitude for a bunch of asymptotic string states $\psi^1_{in}, \cdots, \psi^{n_{in}}_{in}$ coming in, and a bunch of states $\psi^1_{out}, \cdots, \psi^{n_{out}}_{out}$ coming out, is schematically of the form
$$ S_{\psi^1_{in}, \cdots, \psi^{n_{in}}_{in},\; \psi^1_{out}, \cdots, \psi^{n_{out}}_{out}} \;=\; \sum_{g \in \mathbb{N}} \lambda^g \int_{\Sigma^{n_{in}, n_{out}}_g} \left( \text{SCFT correlator over } \Sigma \text{ of the states } \psi^1_{in}, \cdots, \psi^{n_{in}}_{in}, \psi^1_{out}, \cdots, \psi^{n_{out}}_{out} \right) $$
where, for each genus $g$, the integral runs over the moduli space of $(n_{in}, n_{out})$-punctured super Riemann surfaces $\Sigma^{n_{in}, n_{out}}_g$ of genus $g$, expressing the S-matrix element (scattering amplitude) shown on the left as a formal power series in the string coupling constant with coefficients the integrals over moduli space of super Riemann surfaces of the worldsheet correlators ($n$-point functions) for the given incoming and outgoing string states. With more technical details filled in, this formula reads as follows: for the bosonic string, as found in Polchinski 01, volume 1, equation (5.3.9), and for the superstring, as found in Polchinski 01, volume 2, equation (12.5.24). This is the equation that defines perturbative string theory. And this is of just the same form as the Feynman perturbation series of perturbative quantum field theory, the only difference being that the latter is more complicated: there one has to sum over Feynman diagrams with labelings for all intermediate particles (virtual particles) and with some choice of renormalization to make the integrals well defined, whereas here we simply sum over all super Riemann surfaces and that's it. The different intermediate virtual particles as well as the renormalization counterterms are all taken care of by the higher string modes, encoded in the worldsheet CFT correlators. There was a time in the 1960s, when quantum field theorists around Geoffrey Chew proposed that precisely such formulas for S-matrix elements should be exactly what defines a quantum field theory, this and nothing else. 
The idea was to do away with an explicit concept of spacetime and local interactions, and instead declare that all there is to be said about physics is what is seen by particles that probe the physics by scattering through it. This is an intrinsically quantum approach, where there need not be any classical action functional defined in terms of spacetime geometry. Instead, all there is, is a formula for the outcome of scattering experiments. Historically, this radical perspective fell out of fashion for a while with the success of QCD and the quark model in its formulation as a local field theory coming from an action functional: Yang-Mills theory. But fashions come and go, and the original idea of Geoffrey Chew and the S-matrix approach continues to make sense in itself, and it is this form of a physical theory that perturbative string theory is an example of. Ironically, more recently, the S-matrix perspective has become fashionable again in Yang-Mills theory itself, with people noticing that scattering amplitudes, at least in super Yang-Mills theory, have good properties that are essentially invisible when expressing them as vast sums of Feynman diagram contributions as obtained from the action functional. For more on this see at amplituhedron. On the other hand, there is also an analog of the second-quantized field-theory-with-equations for string scattering: this is called string field theory, and this again is given by equations of motion. For instance the equations of motion of closed string field theory are of the form
$$ Q \psi + \tfrac{1}{2} \psi \star \psi + \tfrac{1}{6} \psi \star \psi \star \psi + \cdots = 0 \,, $$
where $\psi$ is the string field, $Q$ is the BRST operator and $\star$ is the string field star product. (For the bosonic string this is due to Zwiebach 92, equation (4.46); for the superstring this is in Sen 15, equation (2.22).) The string field $\psi$ has infinitely many components, one for each excitation mode of the string. Its lowest excitations are the modes that correspond to massless fundamental particles, such as the graviton. Expanding the equations of motion of string field theory in mode expansions ("level expansion") does reproduce the equations of motion of these fields as a perturbation series around a background solution and together with higher curvature corrections. Why is string theory controversial? As a theory of the observable universe that is supposed to be checked by experiment and which predicts that all fundamental particles of the standard model of particle physics are secretly, if one probes at high enough energy, excitations of superstrings, string theory is an unproven hypothesis. Current particle accelerator technology (notably the LHC) is about 15 orders of magnitude (hence far, far) away from the energy scale at which these strings would manifest themselves directly (at least for many models in string phenomenology). Does string theory make predictions? How? In string theory this happens with all the parameters. (Except for one single constant: the string coupling constant. From the perspective of "M-theory" even that disappears. See at string theory – scales.) There is no external choice of parameter, but there remains the choice of studying "solutions to the equations of motion" (which in string theory means: choices of 2d CFTs) which might model observed physics. That is why in string theory, instead of adjusting parameters, one searches for solutions. 
Since these are also called "string vacua" (see at What is a string vacuum?), one searches for vacua. The infamous term "landscape of string theory vacua" refers to attempts to understand the space of possibilities here more globally. But very little is actually known to date. Aside: How do physical theories generally make predictions, anyway? 1. posit a theory; 4. if experiment disagrees with the predictions then Is string theory testable? Is string theory causal, given that it is not local on the string scale? In an S-matrix theory such as perturbative string theory (see above at What is string theory) the property of causality is embodied by the fact that the S-matrix shows certain analyticity features. (Therefore the S-matrix approach to quantum field theory is often referred to as "the analytic S-matrix".) Since, as opposed to a fundamental particle, the string is extended, at the string scale string theory is not given by a local field theory. This superficially seems to suggest that at such scales causality might also be violated in string theory. However, computation shows that the string scattering S-matrix comes out suitably analytic and causal (e.g. Martinec 95). A detailed analysis of how this comes about has been given in (Erler-Gross 04). They write: Perhaps then it comes as a surprise that critical string theory produces an analytic S-matrix consistent with macroscopic causality. In absence of any other known theoretical mechanism which might explain this, despite appearances one is led to believe that string interactions must be, in some sense, local. We find that string theory avoids problems with nonlocality in a surprising way. In particular, we find that the Witten vertex is "local enough" to allow for a nonsingular description of the theory which is completely local along a single null direction. Unlike lightcone string field theory, it is clear that cubic string field theory at least has a local limit where all spacetime coordinates are taken to the midpoint. We investigate this limit with a careful choice of regulator and show that at any stage the theory is nonsingular but arbitrarily close to being local and manifestly causal. We believe that the existence of this limit, though singular, must account for the macroscopic causality of the string S-matrix. Thus, string theory is local enough to avoid the inconsistencies of a theory which is acausal and nonlocal in time, but is nonlocal enough to make string theory different from quantum field theory. Does string theory predict supersymmetry? To understand this question and its answer, it is important to know that in general symmetries in physics (and in mathematics) come in a local and in a global flavor. For instance the theory of gravity is a theory which has as a local symmetry the Poincaré group, including the Lorentz group. But any model of the theory – a spacetime – may or may not have global Lorentz symmetry ("Killing vector fields"). In fact, the generic solution to the Einstein equations has no global Lorentz symmetry left. Indeed, global Lorentz symmetry of spacetime with all matter and force fields in it would mean that the world looks the same if we arbitrarily translate in some direction, or arbitrarily rotate in some plane. It would be rather bizarre to live in a spacetime with such a property! (Mathematically the distinction is this: given a coset space $G/H$, then the corresponding Klein geometry has global $G$-symmetry, while the corresponding Cartan geometries have local $G$-symmetry.) 
(Mathematically this is now the situation of super Cartan geometry.) (This result is known as the Goddard-Thorn no-ghost theorem with GSO projection.) So: string theory implies that if there are fermions at all, then there is local supersymmetry, hence supergravity.
$$ \text{strings} \;\&\; \text{fermions} \;\;\Rightarrow\;\; \text{supergravity} $$
In fact the generic model has a priori no reason to preserve any low-energy global supersymmetry below the Planck scale at all (see e.g. Giudice-Strumia 11, p. 1-2), just as the generic solution of Einstein's equations does not preserve any Lorentz group symmetry. The condition that a Kaluza-Klein compactification of 10-dimensional supergravity to 4d exhibits precisely one global supersymmetry is equivalent to the compactification space being a Calabi-Yau manifold. While this is a famous condition that has been extensively studied (see at supersymmetry and Calabi-Yau manifolds), nothing in the theory seemed to require KK-compactification on Calabi-Yau manifolds. These were originally considered because for phenomenological reasons it is (or was) expected that our observed world exhibits global low-energy supersymmetry. However, recently arguments for a theoretical preference for $N = 1$ supersymmetric compactifications have been advanced after all (FSS19, Sec. 3.4, Acharya 19). What is a string vacuum? The collection of all string vacua, possibly subject to some assumptions, has come to be called the landscape of string theory vacua. See at What does it mean to say that string theory has a landscape of vacua? How/why does string theory depend on "backgrounds"? By design, all this applies also to perturbative string theory. Did string theory provide any insight relevant in experimental particle physics? Details on this are linked to at What is the relationship between string theory and quantum field theory? Recall from some of the previous answers: Isn't it fatal that the string perturbation series does not converge? No. On the contrary, the Feynman perturbation series of every QFT of interest is supposed to have vanishing radius of convergence (Dyson 52) and to be just an asymptotic series. The reason is that the expansion parameter is the coupling constant: if the series had a finite radius of convergence, there would also be negative values of the coupling for which the correlators converge, and by Dyson's argument such a negative-coupling theory would be unstable, so convergence cannot be expected. See at non-perturbative effect for more. How do strings model massive particles? How is string theory related to the theory of gravity? This means the following: Do the extra dimensions lead to instability of 4-dimensional spacetime? The parameters of size and shape of the compactified dimensions in string theory, and in fact in any Kaluza-Klein compactification, are called "moduli". Since they are part of the higher dimensional metric, they are components of the higher dimensional field of gravity and hence are dynamical fields that evolve. The problem of their stability, hence the question whether there are dynamical mechanisms that make for instance the size of the compactified space remain stably at a given value, is famous as the problem of moduli stabilization in string theory. This problem used to be open until around 2002. 
Then it was realized that vacuum expectation values (VEVs) of the higher form fields ("fluxes") present in string theory generically induce effective potentials for the moduli that may stabilize them, at least for fluctuations that preserve the given special holonomy (CY-compactifications of string theory or G2-holonomy compactifications for M-theory). For type IIB string theory/F-theory this was argued in the influential article KKLT 03. An analogous moduli stabilization mechanism was also argued for M-theory on G2-manifolds by Acharya 02. It is the counting of all the many possible ways of stabilizing moduli via fluxes in type IIB that led to the now infamous discussion of the landscape of type II string theory vacua (see also below). In any case, there seems to be no lack of solutions of the stability problem. In fact, more precisely, a perturbative string theory vacuum for string perturbation theory is a choice of super 2d CFT of central charge 15 (see What is a string vacuum?), and each of them induces such an effective background (by a mechanism indicated at 2-spectral triple). This imposes considerably more constraints than one has to solve to find a solution to just the Einstein-Yang-Mills equations (for instance "modular invariance" of the 2d theory, etc.). As a result, it is considerably harder to find a background string vacuum for string theory than for its non-UV-complete, non-renormalized effective Einstein-Yang-Mills-Dirac-Higgs theory. write on page 9 of their accurate study of orientifold backgrounds: Is string theory mathematically rigorous? More in detail: For the moment, see the following for more:
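To illustrate the earlier point that a perturbation series with zero radius of convergence can still be useful as an asymptotic series (this example is not from the FAQ itself): in the zero-dimensional toy "path integral" Z(g) = ∫ exp(-x²/2 - g x⁴) dx, the expansion in the coupling g has factorially growing coefficients, yet low-order truncations approximate the exact answer well for small g. A minimal numerical sketch, with the value of g and the truncation orders chosen purely for illustration:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def double_factorial(k):
    """k!! for odd k, with the convention (-1)!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def z_exact(g):
    """'Exact' toy partition function Z(g) = integral of exp(-x^2/2 - g x^4)."""
    val, _ = quad(lambda x: np.exp(-x**2 / 2 - g * x**4), -np.inf, np.inf)
    return val

def z_series(g, order):
    """Perturbative expansion: sum_n (-g)^n / n! * (4n-1)!! * sqrt(2 pi)."""
    return np.sqrt(2 * np.pi) * sum(
        (-g)**n / factorial(n) * double_factorial(4 * n - 1)
        for n in range(order + 1)
    )

g = 0.01
print("exact      :", z_exact(g))
for order in (1, 2, 4, 8, 25):
    print(f"order {order:2d}   :", z_series(g, order))
# Low orders land close to the exact value; very high orders blow up,
# the hallmark of an asymptotic series with zero radius of convergence.
```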
School of Science Course unit SCP8082709, A.A. 2019/20 Information concerning the students who enrolled in A.Y. 2018/19 Information on the course unit Degree course: Second cycle degree in SC2443, Degree course structure A.Y. 2018/19, A.Y. 2019/20 Number of ECTS credits allocated: 6.0 Type of assessment: Mark Course unit English denomination: INFORMATION THEORY AND COMPUTATION Website of the academic structure: http://physicsofdata.scienze.unipd.it/2019/laurea_magistrale Department of reference: Department of Physics and Astronomy Mandatory attendance: No Language of instruction: English Teacher in charge: SIMONE MONTANGERO (FIS/03) ECTS details: Type: Educational activities in elective or integrative disciplines; Scientific-Disciplinary Sector: FIS/03 Material Physics; Credits allocated: 6.0 Course unit organization: Period: First semester; Year: 2nd Year; Teaching method: frontal Type of hours: Lecture; Credits: 6.0; Teaching hours: 48; Hours of individual study: 102.0; No turn Start of activities: 30/09/2019 End of activities: 18/01/2020 Show course schedule 2019/20 Reg.2018 course timetable Prerequisites: Quantum mechanics and elements of programming. Target skills and knowledge: The course aims to introduce the students to tensor network methods, one of the most versatile simulation approaches exploited in quantum science. It will provide a hands-on introduction to these methods and will present a panoramic overview of some of the most successful and promising applications of tensor network methods. Indeed, they are routinely used to characterize low-dimensional equilibrium and out-of-equilibrium quantum processes, and to guide and support the development of quantum science and quantum technologies. Recently, their possible exploitation in computer science applications such as classification and deep learning algorithms has also been put forward. Examination methods: The exam will be a final project composed of programming, data acquisition, and analysis, which will be discussed orally. Assessment criteria: The student will be evaluated in terms of: - The knowledge of the course content; - The programming skill and the quality of the written code; - The data analysis and presentation; - The physical analysis and global understanding of the treated problem. Course unit contents: Basics in computational physics: 1. Large matrix diagonalization 2. Numerical integration, optimization, and solutions of PDEs 3. Elements of Gnuplot, modern Fortran, Python 4. Elements of object-oriented programming 5. Schrödinger equation (exact diagonalization, split-operator method, Suzuki-Trotter decomposition, ...) Basics of quantum information: 1. Density matrices and Liouville operators 2. Many-body Hamiltonians and states (tensor products, Liouville representation, ...) 3. Entanglement measures 4. Entanglement in many-body quantum systems 1. Numerical Renormalization Group 2. Density Matrix Renormalization Group 3. Introduction to tensor networks 4. Tensor network properties 5. Symmetric tensor networks 6. Algorithms for tensor network optimization 7. Exact solutions of benchmarking models 1. Critical systems 2. Topological order and its characterization 3. Adiabatic quantum computation 4. Quantum annealing of classical hard problems 5. Kibble-Zurek mechanism 6. Optimal control of many-body quantum systems 7. Open quantum systems (quantum trajectories, MPDO, LPTN, ...) 8. 
Tensor networks for classical problems: regressions, classifications, and deep learning. Planned learning activities and teaching methods: The course will be composed of lessons in class and programming labs. Additional notes about suggested reading: The course will be based on lecture notes and other electronic and hard-copy didactic material (Ph.D. theses, documentation, etc.). Textbooks (and optional supplementary readings) • Montangero, Simone, Introduction to Tensor Network Methods: Numerical Simulations of Low-Dimensional Many-Body Quantum Systems. Cham: Springer, 2018. Innovative teaching methods: Teaching and learning strategies • Lecturing • Laboratory • Problem based learning • Case study • Interactive lecturing • Working in group • Questioning • Problem solving Innovative teaching methods: Software or applications used • LaTeX
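As a flavor of the "many-body Hamiltonians" and "large matrix diagonalization" items in the course contents above, here is a minimal illustrative sketch (not course material; system size and couplings are arbitrary choices): it builds a small transverse-field Ising Hamiltonian from tensor products of Pauli matrices and diagonalizes it exactly with NumPy, the brute-force baseline that tensor network methods are designed to outgrow.

```python
import numpy as np

# Pauli matrices and the single-site identity
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def op_on_site(op, site, n):
    """Embed a single-site operator at `site` in an n-spin Hilbert space via Kronecker products."""
    mats = [id2] * n
    mats[site] = op
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def ising_hamiltonian(n, J=1.0, h=0.5):
    """H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i, open boundary conditions."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(sx, i, n)
    return H

H = ising_hamiltonian(n=8)                 # 256 x 256: still trivial to diagonalize exactly
energies, states = np.linalg.eigh(H)
print("ground-state energy:", energies[0])
```

The exponential growth of the matrix dimension (2^n) with the number of spins is exactly the bottleneck that motivates DMRG and the other tensor network algorithms listed in the syllabus.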
How Do Quantum Computers Work? By: | May 28th, 2019 How Do Quantum Computers Work Image by Gerd Altmann from Pixabay Understanding Data Computing in Classical Computers Have you ever wondered how letters and words get stored and processed on your desktop, smartphone, laptop and hard drive? Do we really understand what happens when we press the letters on our keypads? Currently there is the classical "bit", which is nothing more than an electrical state in a "language" called binary. This "bit" can be represented either as a "1", which is a positive electrical state, or a "0", which is a negative electrical state. These states are all predetermined. These "bits" are then grouped into an 8-bit word called a "byte", which represents a specific character, e.g. the letter A. This single input into the processing stage will also be equal to the same single output of the processing stage. Quantum Computing With the use of superposition, the fundamental principle of quantum computing or processing is derived. What is superposition? It can be defined as taking two or more known classical bit/byte states and adding them together electrically in subatomic particles to create a transmission medium for the information, known as "qubits". The result of the superposition is a valid quantum state different from the original input states, and it represents a combination of those input states. This has a huge impact on processing (analogue to digital and digital to analogue conversions – PCM) time and requires less power. Mathematically, this follows from the linearity of the Schrödinger equation: a linear superposition of solutions (the original input states, i.e. 0's and 1's) is itself a valid solution, albeit a different one (a qubit state). This means that the probabilities of measuring 0 or 1 for a qubit are in general neither 0.0 nor 1.0, and multiple measurements made on qubits in identical states will not always give the same result. To work with these probabilistic results, quantum logic gates, which act on states whose measurement probabilities can lie anywhere between 0.0 and 1.0, have to be used instead of the classical logic gates (which act on definite 0's and 1's). Who Is in The Race? With high-tech companies like Google, IBM, Rigetti and D-Wave in the race, there are already experimental cloud-based 20-qubit systems available to users. In January 2019, IBM launched IBM Q System One, its first integrated quantum computing system for commercial use, opening up the playing field for new and existing computing experiences that will, in the not-too-distant future, be part of our everyday existence. Louie Gerhard Specialized in the Mechanical, Engineering and IT Technical environment with over 33 years experience.
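To make the superposition idea above concrete, here is a small illustrative sketch (not from the article) using NumPy: a qubit is modelled as a two-component state vector, a Hadamard gate puts the basis state |0> into an equal superposition, the measurement probabilities are the squared amplitudes (the Born rule), and repeated measurements of identically prepared qubits give different outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

ket0 = np.array([1.0, 0.0])                                # |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)     # Hadamard gate

state = H @ ket0                                           # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                                 # Born rule: measurement probabilities
print("amplitudes :", state)
print("P(0), P(1) :", probs)

# Simulate 1000 measurements of identically prepared qubits
outcomes = rng.choice([0, 1], size=1000, p=probs)
print("measured 0:", np.sum(outcomes == 0), "times;",
      "measured 1:", np.sum(outcomes == 1), "times")
```

Even though every run prepares exactly the same state, roughly half the measurements return 0 and half return 1, which is the behaviour the article contrasts with deterministic classical bits.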
Discrete & Continuous Dynamical Systems - A August 2017, Volume 37, Issue 8 Gevrey estimates for one dimensional parabolic invariant manifolds of non-hyperbolic fixed points Inmaculada Baldomá, Ernest Fontich and Pau Martín 2017, 37(8): 4159-4190 doi: 10.3934/dcds.2017177 We study the Gevrey character of a natural parameterization of one dimensional invariant manifolds associated to a parabolic direction of fixed points of analytic maps, that is, a direction associated with an eigenvalue equal to 1. We show that, under general hypotheses, these invariant manifolds are Gevrey with type related to some explicit constants. We provide examples of the optimality of our results as well as some applications to celestial mechanics, namely, the Sitnikov problem and the restricted planar three body problem. Existence of minimal flows on nonorientable surfaces José Ginés Espín Buendía, Daniel Peralta-Salas and Gabriel Soler López 2017, 37(8): 4191-4211 doi: 10.3934/dcds.2017178 Surfaces admitting flows all whose orbits are dense are called minimal. Minimal orientable surfaces were characterized by J.C. Benière in 1998, leaving open the nonorientable case. This paper fills this gap, providing a characterization of minimal nonorientable surfaces of finite genus. We also construct an example of a minimal nonorientable surface with infinite genus and conjecture that any nonorientable surface without combinatorial boundary is minimal. Positive ground state solutions for a quasilinear elliptic equation with critical exponent Yinbin Deng and Wentao Huang 2017, 37(8): 4213-4230 doi: 10.3934/dcds.2017179 In this paper, we study the following quasilinear elliptic equation with critical Sobolev exponent: which models the self-channeling of a high-power ultra short laser in matter, where N ≥ 3; 2 < p < 2* = $\frac{2N}{N-2}$ and $V(x)$ is a given positive potential. Combining the change of variables and an abstract result developed by Jeanjean in [14], we obtain the existence of positive ground state solutions for the given problem. Separated nets arising from certain higher rank $\mathbb{R}^k$ actions on homogeneous spaces Changguang Dong 2017, 37(8): 4231-4238 doi: 10.3934/dcds.2017180 We prove that a separated net arising from certain higher rank $\mathbb{R}^k$ actions on homogeneous spaces is bi-Lipschitz equivalent to a lattice. On nonlocal symmetries generated by recursion operators: Second-order evolution equations M. Euler, N. Euler and M. C. 
Nucci 2017, 37(8): 4239-4247 doi: 10.3934/dcds.2017181 We introduce a new type of recursion operator suitable to generate a class of nonlocal symmetries for those second-order evolution equations in $1+1$ dimension which allow the complete integration of their time-independent versions. We show that this class of evolution equations is $C$-integrable (linearizable by a point transformation). We also discuss some applications. The geometric discretisation of the Suslov problem: A case study of consistency for nonholonomic integrators Luis C. García-Naranjo and Fernando Jiménez 2017, 37(8): 4249-4275 doi: 10.3934/dcds.2017182 Geometric integrators for nonholonomic systems were introduced by Cortés and Martínez in [4] by proposing a discrete Lagrange-D'Alembert principle. Their approach is based on the definition of a discrete Lagrangian $L_d$ and a discrete constraint space $D_d$. There is no recipe to construct these objects and the performance of the integrator is sensitive to their choice. Cortés and Martínez [4] claim that choosing $L_d$ and $D_d$ in a consistent manner with respect to a finite difference map is necessary to guarantee an approximation of the continuous flow within a desired order of accuracy. Although this statement is given without proof, similar versions of it have appeared recently in the literature. We evaluate the importance of the consistency condition by comparing the performance of two different geometric integrators for the nonholonomic Suslov problem, only one of which corresponds to a consistent choice of $L_d$ and $D_d$. We prove that both integrators produce approximations of the same order, and, moreover, that the non-consistent discretisation outperforms the other in numerical experiments and in terms of energy preservation. Our results indicate that the consistency of a discretisation might not be the most relevant feature to consider in the construction of nonholonomic geometric integrators. Analysis of a Cahn--Hilliard system with non-zero Dirichlet conditions modeling tumor growth with chemotaxis Harald Garcke and Kei Fong Lam 2017, 37(8): 4277-4308 doi: 10.3934/dcds.2017183 We consider a diffuse interface model for tumor growth consisting of a Cahn--Hilliard equation with source terms coupled to a reaction-diffusion equation, which models a tumor growing in the presence of a nutrient species and surrounded by healthy tissue. The well-posedness of the system equipped with Neumann boundary conditions was found to require regular potentials with quadratic growth. In this work, Dirichlet boundary conditions are considered, and we establish the well-posedness of the system for regular potentials with higher polynomial growth and also for singular potentials. New difficulties are encountered due to the higher polynomial growth, but for regular potentials, we retain the continuous dependence on initial and boundary data for the chemical potential and for the order parameter in strong norms as established in the previous work. Furthermore, we deduce the well-posedness of a variant of the model with quasi-static nutrient by rigorously passing to the limit where the ratio of the nutrient diffusion time-scale to the tumor doubling time-scale is small. 
Orbital stability and uniqueness of the ground state for the non-linear Schrödinger equation in dimension one Daniele Garrisi and Vladimir Georgiev 2017, 37(8): 4309-4328 doi: 10.3934/dcds.2017184 We prove that standing-waves which are solutions to the non-linear Schrödinger equation in dimension one, and whose profiles can be obtained as minima of the energy over the mass, are orbitally stable and non-degenerate, provided the non-linear term satisfies a Euler differential inequality. When the non-linear term is a combined pure power-type, then there is only one positive, symmetric minimum of prescribed mass. On coupled Dirac systems Wenmin Gong and Guangcun Lu 2017, 37(8): 4329-4346 doi: 10.3934/dcds.2017185 In this paper, we show the existence of solutions for the coupled Dirac system where $M$ is an $n$-dimensional compact Riemannian spin manifold, $D$ is the Dirac operator on $M$, and $H:\Sigma M\oplus \Sigma M\to \mathbb{R}$ is a real valued superquadratic function of class $C^1$ in the fiber direction with subcritical growth rates. Our proof relies on a generalized linking theorem applied to a strongly indefinite functional on a product space of suitable fractional Sobolev spaces. Furthermore, we consider the $\mathbb{Z}_2$-invariant $H$ that includes a nonlinearity of the form where $f(x)$ and $g(x)$ are strictly positive continuous functions on $M$ and $p, q>1$ satisfy In this case we obtain infinitely many solutions of the coupled Dirac system by using a generalized fountain theorem. Statistical and deterministic dynamics of maps with memory Paweł Góra, Abraham Boyarsky, Zhenyang Li and Harald Proppe 2017, 37(8): 4347-4378 doi: 10.3934/dcds.2017186 We consider a dynamical system to have memory if it remembers the current state as well as the state before that. The dynamics is defined as follows: $x_{n+1}=T_{\alpha}(x_{n-1}, x_{n})=\tau(\alpha \cdot x_{n}+(1-\alpha)\cdot x_{n-1}),$ where $\tau$ is a one-dimensional map on $I=[0, 1]$ and $0 < \alpha < 1$ determines how much memory is being used. $T_{\alpha}$ does not define a dynamical system since it maps $U=I\times I$ into $I$. In this note we let $\tau$ be the symmetric tent map. We shall prove that for $0 < \alpha < 0.46,$ the orbits of $\{x_{n}\}$ are described statistically by an absolutely continuous invariant measure (acim) in two dimensions. As $\alpha$ approaches $0.5$ from below, that is, as we approach a balance between the memory state $x_{n-1}$ and the present state $x_{n}$, the support of the acims becomes thinner until at $\alpha =0.5$, all points have period 3 or eventually possess period 3. For $0.5 < \alpha < 0.75$, we have a global attractor: for all starting points in $U$ except $(0, 0)$, the orbits are attracted to the fixed point $(2/3, 2/3).$ At $\alpha=0.75,$ we have slightly more complicated periodic behavior. Livšic theorem for Banach rings Genady Ya. Grabarnik and Misha Guysinsky 2017, 37(8): 4379-4390 doi: 10.3934/dcds.2017187 The Livšic Theorem for Hölder continuous cocycles with values in Banach rings is proved. We consider a transitive homeomorphism ${\sigma :X\to X}$ that satisfies the Anosov Closing Lemma and a Hölder continuous map ${a:X\to B^\times}$ from a compact metric space $X$ to the set of invertible elements of some Banach ring $B$. 
The map $a(x)$ is a coboundary with a Hölder continuous transition function if and only if $a(\sigma^{n-1}p)\ldots a(\sigma p)a(p)$ is the identity for each periodic point $p=\sigma^n p$. Exact azimuthal internal waves with an underlying current Hung-Chu Hsu 2017, 37(8): 4391-4398 doi: 10.3934/dcds.2017188 +[Abstract](1453) +[HTML](9) +[PDF](297.0KB) In this paper, we present an explicit and exact solution of the nonlinear governing equations including Coriolis and centripetal terms for internal azimuthal waves with a uniform current in the $\beta$-plane approximation near the equator. This solution is described in the Lagrangian framework. The unidirectional azimuthal internal trapped are symmetric about the equator and propagate eastward above the thermocline and beneath the near-surface layer. Existence of heterodimensional cycles near Shilnikov loops in systems with a $\mathbb{Z}_2$ symmetry Dongchen Li and Dmitry V. Turaev 2017, 37(8): 4399-4437 doi: 10.3934/dcds.2017189 +[Abstract](1360) +[HTML](6) +[PDF](796.5KB) We prove that a pair of heterodimensional cycles can be born at the bifurcations of a pair of Shilnikov loops (homoclinic loops to a saddle-focus equilibrium) having a one-dimensional unstable manifold in a volume-hyperbolic flow with a $\mathbb{Z}_2$ symmetry. We also show that these heterodimensional cycles can belong to a chain-transitive attractor of the system along with persistent homoclinic tangency. On the uniqueness of solution to generalized Chaplygin gas Marko Nedeljkov and Sanja Ružičić 2017, 37(8): 4439-4460 doi: 10.3934/dcds.2017190 +[Abstract](1839) +[HTML](8) +[PDF](745.1KB) The main object of the paper is finding a unique solution to Riemann problem for generalized Chaplygin gas model. That is a model of the dark energy in Universe introduced in the last decade. It permits an infinite mass concentration so one has to consider solutions containing the Dirac delta function. Although it was easy to construct solution to any Riemann problem, the usual admissibility conditions, overcompressiveness, do not exclude unwanted delta-type waves when a classical solution exists. We are using Shadow Wave approach in order to solve that uniqueness problem since they are well adopted for using Lax entropy–entropy flux conditions and there is a rich family of convex entropies. Normalization in Banach scale Lie algebras via mould calculus and applications Thierry Paul and David Sauzin 2017, 37(8): 4461-4487 doi: 10.3934/dcds.2017191 +[Abstract](1459) +[HTML](5) +[PDF](588.4KB) We study a perturbative scheme for normalization problems involving resonances of the unperturbed situation, and therefore the necessity of a non-trivial normal form, in the general framework of Banach scale Lie algebras (this notion is defined in the article). This situation covers the case of classical and quantum normal forms in a unified way which allows a direct comparison. In particular we prove a precise estimate for the difference between quantum and classical normal forms, proven to be of order of the square of the Planck constant. Our method uses mould calculus (recalled in the article) and properties of the solution of a universal mould equation studied in a preceding paper. 
Existence, nonexistence and uniqueness of positive stationary solutions of a singular Gierer-Meinhardt system Rui Peng, Xianfa Song and Lei Wei 2017, 37(8): 4489-4505 doi: 10.3934/dcds.2017192 +[Abstract](1701) +[HTML](5) +[PDF](453.1KB) This paper is concerned with the stationary Gierer-Meinhardt system with singularity: where $-\infty < p < 1$, $-1 < s$, and $q, r, d_1, d_2$ are positive constants, $a_1, \, a_2$ are nonnegative constants, $\rho_1, \, \rho_2$ are smooth nonnegative functions and $\Omega\subset \mathbb{R}^d\, (d\geq1)$ is a bounded smooth domain. New sufficient conditions, some of which are necessary, on the existence of classical solutions are established. A uniqueness result of solutions in any space dimension is also derived. Previous results are substantially improved; moreover, a much simpler mathematical approach with potential application in other problems is developed. Non-autonomous stochastic evolution equations in Banach spaces of martingale type 2: Strict solutions and maximal regularity Tôn Việt Tạ 2017, 37(8): 4507-4542 doi: 10.3934/dcds.2017193 +[Abstract](1764) +[HTML](12) +[PDF](482.6KB) This paper is devoted to studying a non-autonomous stochastic linear evolution equation in Banach spaces of martingale type 2. We construct unique strict solutions to the equation and show their maximal regularity. The abstract results are then applied to stochastic diffusion equations. Measurable sensitivity via Furstenberg families Tao Yu 2017, 37(8): 4543-4563 doi: 10.3934/dcds.2017194 +[Abstract](2071) +[HTML](28) +[PDF](472.8KB) Let $(X, T)$ be a topological dynamical system, and $\mu$ be a $T$-invariant Borel probability measure on $X$. Let $\mathcal{F}$ be a family of subsets of $\mathbb{Z}_+$. We introduce notions of $\mathcal{F}$-sensitivity for $\mu$ and block $\mathcal{F}$-sensitivity for $\mu$. Let $\mathcal{F}_t$ (resp. $\mathcal{F}_{ip}$) be the families consisting of thick sets (resp. IP-sets). The following Auslander-Yorke's type dichotomy theorems are obtained: (1) a minimal system is either $\mathcal{F}_{t}$-sensitive for $\mu$ or an almost one-to-one extension of its maximal equicontinous factor. (2) a minimal system is either block $\mathcal{F}_{t}$-sensitive for $\mu$ or a proximal extension of its maximal equicontinous factor. (3) a minimal system is either block $\mathcal{F}_{ip}$-sensitive for $\mu$ or an almost one-to-one extension of its $\infty$-step nilfactor. We also introduce the notion of topological $l$-sensitivity, and construct a minimal system which is $l$-sensitive but not $(l+1)$-sensitive for $l\in\mathbb{N}$. Ground state solutions for Hamiltonian elliptic system with inverse square potential Jian Zhang, Wen Zhang and Xianhua Tang 2017, 37(8): 4565-4583 doi: 10.3934/dcds.2017195 +[Abstract](2083) +[HTML](9) +[PDF](470.3KB) In this paper, we study the following Hamiltonian elliptic system with gradient term and inverse square potential for $x\in\mathbb{R}^{N}$, where $N\geq3$, $\mu\in\mathbb{R}$, and $V(x)$, $\vec{b}(x)$ and $H(x, u, v)$ are $1$-periodic in $x$. Under suitable conditions, we prove that the system possesses a ground state solution via variational methods for sufficiently small $\mu\geq0$. Moreover, we provide the comparison of the energy of ground state solutions for the case $\mu>0$ and $\mu=0$. Finally, we also give the convergence property of ground state solutions as $\mu\to0^+$. 
Corrigendum to "Minimality of the horocycle flow on laminations by hyperbolic surfaces with non-trivial topology" Fernando Alcalde Cuesta, Françoise Dal'Bo, Matilde Martínez and Alberto Verjovsky 2017, 37(8): 4585-4586 doi: 10.3934/dcds.2017196 +[Abstract](1752) +[HTML](9) +[PDF](177.1KB) 2018  Impact Factor: 1.143 Email Alert [Back to Top]
The energy and 1/2 factor in Schrödinger’s equation

Schrödinger’s equation, for a particle moving in free space (so we have no external force fields acting on it, so V = 0 and, therefore, the V·ψ term disappears) is written as:

∂ψ/∂t = i·(1/2)·(ħ/meff)·∇2ψ

We already noted and explained the structural similarity with the ubiquitous diffusion equation in physics:

∂φ(x, t)/∂t = D·∇2φ(x, t) with x = (x, y, z)

The big difference between the wave equation and an ordinary diffusion equation is that the wave equation gives us two equations for the price of one: ψ is a complex-valued function, with a real and an imaginary part which, despite their name, are both equally fundamental, or essential. Whatever word you prefer. 🙂 That’s also what the presence of the imaginary unit (i) in the equation tells us. But for the rest it’s the same: the diffusion constant (D) in Schrödinger’s equation is equal to (1/2)·(ħ/meff).

Why the 1/2 factor? It’s ugly. Think of the following: if we bring the (1/2)·(ħ/meff) to the other side, we can write it as meff/(ħ/2). The ħ/2 now appears as a scaling factor, just like ħ does in the de Broglie equations: ω = E/ħ and k = p/ħ, or in the argument of the wavefunction: θ = (E·t − p∙x)/ħ. Planck’s constant is, effectively, a physical scaling factor. As a physical scaling constant, it usually does two things:

1. It fixes the numbers (so that’s its function as a mathematical constant).
2. As a physical constant, it also fixes the physical dimensions.

Note, for example, how the 1/ħ factor in ω = E/ħ and k = p/ħ ensures that the ω·t = (E/ħ)·t and k·x = (p/ħ)·x terms in the argument of the wavefunction are both expressed as some dimensionless number, so they can effectively be added together. Physicists don’t like adding apples and oranges. The question is: why did Schrödinger use ħ/2, rather than ħ, as a scaling factor? Let’s explore the question.

The 1/2 factor

We may want to think that the 1/2 factor just echoes the 1/2 factor in the Uncertainty Principle, which we should think of as a pair of relations: σx·σp ≥ ħ/2 and σE·σt ≥ ħ/2. However, the 1/2 factor in those relations only makes sense because we chose to equate the fundamental uncertainty (Δ) in x, p, E and t with the mathematical concept of the standard deviation (σ), or the half-width, as Feynman calls it in his wonderfully clear exposé on it in one of his Lectures on quantum mechanics (for a summary with some comments, see my blog post on it). We may just as well choose to equate Δ with the full-width of those probability distributions we get for x and p, or for E and t. If we do that, we get σx·σp ≥ ħ and σE·σt ≥ ħ. It’s a bit like measuring the weight of a person on an old-fashioned (non-digital) bathroom scale with 1 kg marks only: do we say this person is x kg ± 1 kg, or x kg ± 500 g? Do we take the half-width or the full-width as the margin of error?

In short, it’s a matter of appreciation, and the 1/2 factor in our pair of uncertainty relations is not there because we’ve got two relations. Likewise, it’s not because I mentioned we can think of Schrödinger’s equation as a pair of relations that, taken together, represent an energy propagation mechanism that’s quite similar in its structure to Maxwell’s equations for an electromagnetic wave, that we’d insert (or not) that 1/2 factor: either representation works. It just depends on our definition of the concept of the effective mass.
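Before moving on, here is a minimal numerical sketch of that half-width point, assuming a simple Gaussian wave packet and natural units (ħ = 1): for such a minimum-uncertainty packet, the product of the standard deviations of position and momentum comes out at ħ/2, so the 1/2 in the uncertainty relations really does hinge on using σ (the half-width) as the measure of uncertainty. The grid and the width parameter below are arbitrary illustration values.

```python
import numpy as np

# Minimum-uncertainty Gaussian packet: check sigma_x * sigma_p ~ hbar/2 (hbar = 1)
hbar = 1.0
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 2.0                                    # chosen spatial half-width (illustrative)
psi = np.exp(-x**2 / (4 * sigma**2))           # |psi|^2 then has standard deviation sigma
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

prob_x = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)  # mean position is 0 by symmetry

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # angular wavenumbers
dk = k[1] - k[0]
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dk                  # normalize the momentum distribution
sigma_p = np.sqrt(np.sum((hbar * k)**2 * prob_p) * dk)

print(sigma_x * sigma_p, hbar / 2)             # ~0.5 versus 0.5
```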
The 1/2 factor is really a matter of choice, because the rather peculiar – and flexible – concept of the effective mass takes care of it. However, we could define some new effective mass concept, by writing: meffNEW = 2∙meffOLD, and then Schrödinger’s equation would look more elegant:

∂ψ/∂t = i·(ħ/meffNEW)·∇2ψ

Now you’ll want the definition, of course! What is that effective mass concept? Feynman talks at length about it, but his exposé is embedded in a much longer and more general argument on the propagation of electrons in a crystal lattice, which you may not necessarily want to go through right now. So let’s try to answer that question by doing something stupid: let’s substitute ψ in the equation for ψ = a·ei·[E·t − p∙x]/ħ (which is an elementary wavefunction), calculate the time derivative and the Laplacian, and see what we get. If we do that, the ∂ψ/∂t = i·(1/2)·(ħ/meff)·∇2ψ equation becomes:

i·a·(E/ħ)·ei∙(E·t − p∙x)/ħ = i·a·(1/2)·(ħ/meff)·(p2/ħ2)·ei∙(E·t − p∙x)/ħ
⇔ E = (1/2)·p2/meff = (1/2)·(m·v)2/meff ⇔ meff = (1/2)·(m/E)·m·v2 ⇔ meff = (1/c2)·(m·v2/2) = m·β2/2

Hence, the effective mass appears in this equation as the equivalent mass of the kinetic energy (K.E.) of the elementary particle that’s being represented by the wavefunction. Now, you may think that sounds good – and it does – but you should note the following:

1. The K.E. = m·v2/2 formula is only correct for non-relativistic speeds. In fact, it’s the kinetic energy formula if, and only if, m ≈ m0. The relativistically correct formula for the kinetic energy calculates it as the difference between (1) the total energy (which is given by the E = m·c2 formula, always) and (2) its rest energy, so we write: K.E. = E − E0 = mv·c2 − m0·c2 = (mv − m0)·c2.

2. The energy concept in the wavefunction ψ = a·ei·[E·t − p∙x]/ħ is, obviously, the total energy of the particle. For non-relativistic speeds, the kinetic energy is only a very small fraction of the total energy. In fact, using the formula above, you can calculate the ratio between the kinetic and the total energy: you’ll find it’s equal to 1 − 1/γ = 1 − √(1−v2/c2), and its graph goes from 0 to 1.

Now, if we discard the 1/2 factor, the calculations above yield the following:

i·a·(E/ħ)·ei∙(E·t − p∙x)/ħ = −i·a·(ħ/meff)·(p2/ħ2)·ei∙(E·t − p∙x)/ħ
⇔ E = p2/meff = (m·v)2/meff ⇔ meff = (m/E)·m·v2 ⇔ meff = m·v2/c2 = m·β2

In fact, it is fair to say that both definitions are equally weird, even if the dimensions come out alright: the effective mass is measured in old-fashioned mass units, and the β2 or β2/2 factor appears as a sort of correction factor, varying between 0 and 1 (for β2) or between 0 and 1/2 (for β2/2). I prefer the new definition, as it ensures that meff becomes equal to m in the limit for the velocity going to c. In addition, if we bring the ħ/meff or (1/2)∙ħ/meff factor to the other side of the equation, the choice becomes one between a meffNEW/ħ or a 2∙meffOLD/ħ coefficient. It’s a choice, really. Personally, I think the equation without the 1/2 factor – and, hence, the use of ħ rather than ħ/2 as the scaling factor – looks better, but then you may argue that – if half of the energy of our particle is in the oscillating real part of the wavefunction, and the other is in the imaginary part – then the 1/2 factor should stay, because it ensures that meff becomes equal to m/2 as v goes to c (or, what amounts to the same, β goes to 1). But then that’s the argument about whether or not we should have a 1/2 factor because we get two equations for the price of one, like we did for the Uncertainty Principle. So… What to do?
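Just to make that substitution step easy to reproduce, here is a small sympy sketch of it, with the same (1/2)·(ħ/meff) coefficient and the same e+i·(E·t − p∙x)/ħ convention as above. The minus sign that sympy returns is the sign the prose glosses over: it comes from that convention, and it is the magnitude of the relation, E = (1/2)·p2/meff, that the argument uses.

```python
import sympy as sp

# Substitute psi = a*exp(i*(E*t - p*x)/hbar) into d(psi)/dt = i*(1/2)*(hbar/m_eff)*d2(psi)/dx2
# and solve for m_eff.
x, t, a, E, p, hbar = sp.symbols('x t a E p hbar', positive=True)
m_eff = sp.Symbol('m_eff')

psi = a * sp.exp(sp.I * (E * t - p * x) / hbar)
lhs = sp.diff(psi, t)
rhs = sp.I * sp.Rational(1, 2) * (hbar / m_eff) * sp.diff(psi, x, 2)

print(sp.solve(sp.Eq(lhs, rhs), m_eff))   # [-p**2/(2*E)]: E = (1/2)*p**2/m_eff up to the sign
```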
Let’s first ask ourselves whether that derivation of the effective mass actually makes sense. Let’s therefore look at both limit situations.

1. For v going to c (or β = v/c going to 1), we do not have much of a problem: meff just becomes the total mass of the particle that we’re looking at, and Schrödinger’s equation can easily be interpreted as an energy propagation mechanism. Our particle has zero rest mass in that case (we may also say that the concept of a rest mass is meaningless in this situation) and all of the energy – and, therefore, all of the equivalent mass – is kinetic: m = E/c2, and the effective mass is just the mass: meff = m·c2/c2 = m. Hence, our particle is everywhere and nowhere. In fact, you should note that the concept of velocity itself doesn’t make sense in this rather particular case. It’s like a photon (but note it’s not a photon: we’re talking some theoretical particle here with zero spin and zero rest mass): it’s a wave in its own frame of reference, but as it zips by at the speed of light, we think of it as a particle.

2. Let’s look at the other limit situation. For v going to 0 (or β = v/c going to 0), Schrödinger’s equation no longer makes sense, because the diffusion constant goes to zero, so we get a nonsensical equation.

Huh? What’s wrong with our analysis? Well… I must be honest. We started off on the wrong foot. You should note that it’s hard – in fact, plain impossible – to reconcile our simple a·ei·[E·t − p∙x]/ħ function with the idea of the classical velocity of our particle. Indeed, the classical velocity corresponds to a group velocity, or the velocity of a wave packet, and so we just have one wave here: no group. So we get nonsense. You can see the same when equating p to zero in the wave equation: we get another nonsensical equation, because the Laplacian is zero! Check it. If our elementary wavefunction is equal to ψ = a·ei·(E/ħ)·t, then that Laplacian is zero.

Hence, our calculation of the effective mass is not very sensical. Why? Because the elementary wavefunction is a theoretical concept only: it may represent some box in space, that is uniformly filled with energy, but it cannot represent any actual particle. Actual particles are always some superposition of two or more elementary waves, so then we’ve got a wave packet that we can actually associate with some real-life particle moving in space, like an electron in some orbital indeed. 🙂 The animation I have in mind here – credit to Oregon State University for it – is quite nice: a simple particle in a box model without potential. As I showed on my other page (explaining various models), we must add at least two waves – traveling in opposite directions – to model a particle in a box. Why? Because we represent it by a standing wave, and a standing wave is the sum of two waves traveling in opposite directions.

So, if our derivation above was not very meaningful, then what is the actual concept of the effective mass?

The concept of the effective mass

I am afraid that, at this point, I do have to direct you back to the Grand Master himself for the detail. Let me just try to sum it up very succinctly. If we have a wave packet, there is – obviously – some energy in it, and it’s energy we may associate with the classical concept of the velocity of our particle – because it’s the group velocity of our wave packet. Hence, we have a new energy concept here – and the equivalent mass, of course.
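To see that point about wave packets at work, here is a minimal numerical sketch: superpose elementary waves with wavenumbers around some k0, using the textbook dispersion relation ω = k2/2 (ħ = m = 1; the very relation whose 1/2 factor is under discussion), and watch where the envelope’s peak goes. All numbers (k0, the spectral width, the grid) are arbitrary illustration values.

```python
import numpy as np

# A wave packet = superposition of elementary waves exp(i*(k*x - w*t)) around k0.
x = np.linspace(-50, 250, 8000)
k = np.linspace(0, 10, 400)
k0, dk = 5.0, 0.4
weights = np.exp(-(k - k0)**2 / (2 * dk**2))   # Gaussian spectrum centred on k0
w = k**2 / 2                                   # dispersion relation (hbar = m = 1)

def packet(t):
    return weights @ np.exp(1j * (k[:, None] * x[None, :] - w[:, None] * t))

for t in (0.0, 10.0, 20.0):
    peak = x[np.argmax(np.abs(packet(t)))]
    print(t, round(peak, 1))   # the peak moves at ~dw/dk = k0 = 5, not at w/k = k0/2 = 2.5
```

The envelope (the thing we would associate with the particle) travels at the group velocity ∂ω/∂k, which is exactly what the single elementary wave does not have.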
Now, Feynman’s analysis – which is Schrödinger’s analysis, really – shows we can write that energy as:

E = meff·v2/2

So… Well… That’s the classical kinetic energy formula. And it’s the very classical one, because it’s not relativistic. 😦 But that’s OK for relatively slow-moving electrons! [Remember the typical (relative) velocity is given by the fine-structure constant: α = β = v/c. So that’s impressive (about 2,188 km per second), but it’s only a tiny fraction of the speed of light, so non-relativistic formulas should work.]

Now, the meff factor in this equation is a function of the various parameters of the model he uses. To be precise, we get the following formula out of his model (which, as mentioned above, is a model of electrons propagating in a crystal lattice):

meff = ħ2/(2·A·b2)

Now, the b in this formula is the spacing between the atoms in the lattice. The A basically represents an energy barrier: to move from one atom to another, the electron needs to get across it. I talked about this in my post on it, and so I won’t repeat the graph and its explanation here. Just note that we don’t need that factor 2: there is no reason whatsoever to write E0 + 2·A and E0 − 2·A. We could just re-define a new A: (1/2)·ANEW = AOLD. The formula for meff then simplifies to ħ2/(2·AOLD·b2) = ħ2/(ANEW·b2). We then get an Eeff = meff·v2 formula for the extra energy.

Eeff = meff·v2?!? What energy formula is that? Schrödinger must have thought the same thing, and so that’s why we have that ugly 1/2 factor in his equation. However, think about it. Our analysis shows that it is quite straightforward to model energy as a two-dimensional oscillation of mass. In this analysis, both the real and the imaginary component of the wavefunction each store half of the total energy of the object, which is equal to E = m·c2. Remember, indeed, that we compared it to the energy in an oscillator, which is equal to the sum of kinetic and potential energy, and for which we have the T + U = m·ω02/2 formula. But so we have two oscillators here and, hence, twice the energy. Hence, the E = m·c2 corresponds to m·ω02 and, hence, we may think of c as the natural frequency of the vacuum. Therefore, the Eeff = meff·v2 formula makes much more sense. It nicely mirrors Einstein’s E = m·c2 formula and, in fact, naturally merges into E = m·c2 for v approaching c. But, I admit, it is not so easy to interpret. It’s much easier to just say that the effective mass is the mass of our electron as it appears in the kinetic energy formula, or – alternatively – in the momentum formula. Indeed, Feynman also writes the following formula:

meff·v = p = ħ·k

Now, that is something we easily recognize! 🙂 So… Well… What do we do now? Do we use the 1/2 factor or not? It would be very convenient, of course, to just stick with tradition and use meff as everyone else uses it: it is just the mass as it appears in whatever medium we happen to look at, which may be a crystal lattice (or a semi-conductor), or just free space. In short, it’s the mass of the electron as it appears to us, i.e. as it appears in the (non-relativistic) kinetic energy formula (K.E. = meff·v2/2), the formula for the momentum of an electron (p = meff·v), or in the wavefunction itself (k = p/ħ = (meff·v)/ħ). In fact, in his analysis of the electron orbitals, Feynman (who just follows Schrödinger here) drops the eff subscript altogether, and so the effective mass is just the mass: meff = m.
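To get a feel for the orders of magnitude in that lattice formula, here is a quick numerical sketch. The energy amplitude A and the lattice spacing b below are purely hypothetical illustration values (not taken from any particular material), but they show how meff = ħ2/(2·A·b2) lands in the same general range as the free-electron mass, as Feynman’s quote further down suggests it should.

```python
# m_eff = hbar^2 / (2 * A * b^2), with hypothetical A and b
hbar = 1.054571817e-34         # J*s
eV = 1.602176634e-19           # J
m_electron = 9.1093837015e-31  # kg

A = 1.0 * eV                   # hypothetical energy amplitude
b = 3.0e-10                    # hypothetical lattice spacing (0.3 nm)

m_eff = hbar**2 / (2 * A * b**2)
print(m_eff / m_electron)      # ~0.4 times the free-space electron mass
```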
Hence, the apparent mass of the electron in the hydrogen atom serves as a reference point, and the effective mass in a different medium (such as a crystal lattice, rather than free space or, I should say, a hydrogen atom in free space) will also be different. The thing is: we get the right results out of Schrödinger’s equation, with the 1/2 factor in it. Hence, Schrödinger’s equation works: we get the actual electron orbitals out of it. Hence, Schrödinger’s equation is true – without any doubt. Hence, if we take that 1/2 factor out, then we do need to use the other effective mass concept. We can do that. Think about the actual relation between the effective mass and the real mass of the electron, about which Feynman writes the following: “The effective mass has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude: about 0.1 to 30 times the free-space mass of the electron.” Hence, if we write the relation between meff and m as meff = g(m), then the same relation for our meffNEW = 2∙meffOLD becomes meffNEW = 2·g(m), and the “about 0.1 to 30 times” becomes “about 0.2 to 60 times.” In fact, in the original 1963 edition, Feynman writes that the effective mass is “about 2 to 20 times” the free-space mass of the electron. Isn’t that interesting? I mean… Note that factor 2! If we’d write meff = 2·m, then we’re fine. We can then write Schrödinger’s equation in the following two equivalent ways:

1. (meff/ħ)·∂ψ/∂t = i·∇2ψ
2. (2m/ħ)·∂ψ/∂t = i·∇2ψ

Both would be correct, and it explains why Schrödinger’s equation works. So let’s go for that compromise and write Schrödinger’s equation in either of the two equivalent ways. 🙂 The question then becomes: how to interpret that factor 2? The answer to that question is, effectively, related to the fact that we get two waves for the price of one here. So we have two oscillators, so to speak. Now that‘s quite deep, and I will explore that in one of my next posts. Let me now address the second weird thing in Schrödinger’s equation: the energy factor. I should be more precise: the weirdness arises when solving Schrödinger’s equation. Indeed, in the texts I’ve read, there is this constant switching back and forth between interpreting E as the energy of the atom, versus the energy of the electron. Now, both concepts are obviously quite different, so which one is it really?

The energy factor E

It’s a confusing point—for me, at least and, hence, I must assume for students as well. Let me indicate, by way of example, how the confusion arises in Feynman’s exposé on the solutions to the Schrödinger equation. Initially, the development is quite straightforward. Replacing V by −e2/r, Schrödinger’s equation becomes:

i·ħ·∂ψ/∂t = −(ħ2/2m)·∇2ψ − (e2/r)·ψ

As usual, it is then assumed that a solution of the form ψ(r, t) = e−(i/ħ)·E·t·ψ(r) will work. Apart from the confusion that arises because we use the same symbol, ψ, for two different functions (you will agree that ψ(r, t), a function in two variables, is obviously not the same as ψ(r), a function in one variable only), this assumption is quite straightforward and allows us to re-write the differential equation above as:

−(ħ2/2m)·∇2ψ(r) − (e2/r)·ψ(r) = E·ψ(r)

To get this, you just need to actually do that time derivative, noting that the ψ in our equation is now ψ(r), not ψ(r, t). Feynman duly notes this as he writes: “The function ψ(r) must solve this equation, where E is some constant—the energy of the atom.” So far, so good.
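That separation step is easy to check for yourself. The sympy sketch below does it in one dimension (a plain second derivative standing in for the Laplacian, just to keep it short): substituting ψ(r, t) = e−(i/ħ)·E·t·f(r) and dividing out the exponential leaves an expression with no t in it, which is precisely the time-independent equation above.

```python
import sympy as sp

# Substitute psi(r, t) = exp(-I*E*t/hbar) * f(r) into the time-dependent equation
# i*hbar*d(psi)/dt = -(hbar**2/2m)*psi'' - (e**2/r)*psi  (1-D stand-in for the Laplacian).
r, t, E, hbar, m, e = sp.symbols('r t E hbar m e', positive=True)
f = sp.Function('f')
psi = sp.exp(-sp.I * E * t / hbar) * f(r)

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -(hbar**2 / (2 * m)) * sp.diff(psi, r, 2) - (e**2 / r) * psi

residual = sp.simplify((lhs - rhs) / sp.exp(-sp.I * E * t / hbar))
print(residual)   # E*f(r) + (hbar**2/(2*m))*f''(r) + (e**2/r)*f(r): the time factor is gone
```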
In one of the (many) next steps, we re-write E as E = ER·ε, with ER = m·e4/(2ħ2). So we just use the Rydberg energy (ER ≈ 13.6 eV) here as a ‘natural’ atomic energy unit. That’s all. No harm in that. Then all kinds of complicated but legitimate mathematical manipulations follow, in an attempt to solve this differential equation—an attempt that is successful, of course! However, after all these manipulations, one ends up with the grand simple solution for the s-states of the atom (i.e. the spherically symmetric solutions):

En = −ER/n2, with 1/n2 = 1, 1/4, 1/9, 1/16, …

So we get: En = −13.6 eV, −3.4 eV, −1.5 eV, etcetera. Now how is that possible? How can the energy of the atom suddenly be negative? More importantly, why is it so tiny in comparison with the rest energy of the proton (which is about 938 mega-electronvolt), or the electron (0.511 MeV)? The energy levels above are a few eV only, not a few million electronvolt. Feynman answers this question rather vaguely when he states the following: “There is, incidentally, nothing mysterious about negative numbers for the energy. The energies are negative because when we chose to write V = −e2/r, we picked our zero point as the energy of an electron located far from the proton. When it is close to the proton, its energy is less, so somewhat below zero. The energy is lowest (most negative) for n = 1, and increases toward zero with increasing n.”

We picked our zero point as the energy of an electron located far away from the proton? But we were talking about the energy of the atom all along, right? You’re right. Feynman doesn’t answer the question. The solution is OK – well, sort of, at least – but, in one of those mathematical complications, there is a ‘normalization’ – a choice of some constant that pops up when combining and substituting stuff – that is not so innocent. To be precise, at some point, Feynman substitutes the ε variable for the square of another variable – to be even more precise, he writes: ε = −α2. He then performs some more hat tricks – all legitimate, no doubt – and finds that the only sensible solutions to the differential equation require α to be equal to 1/n, which immediately leads to the above-mentioned solution for our s-states.

The real answer to the question is given somewhere else. In fact, Feynman casually gives us an explanation in one of his very first Lectures on quantum mechanics, where he writes the following: “If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states will vary with time according to an equation like a·eiωt, with ħ·ω = E0 = m·c2. Hence, we can write the amplitude for the two states, for example as: ei(E1/ħ)·t and ei(E2/ħ)·t And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount A—then the amplitudes in the two states would, from his point of view, be ei(E1+A)·t/ħ and ei(E2+A)·t/ħ All of his amplitudes would be multiplied by the same factor ei(A/ħ)·t, and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want.
For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy Ms·c2, where Ms is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant.” It’s a rather long quotation, but it’s important. The key phrase here is, obviously, the following: “For other problems, it may be useful to subtract from all energies the amount Mg·c2, where Mg is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom.” So that’s what he’s doing when solving Schrödinger’s equation. However, I should make the following point here: if we shift the origin of our energy scale, it does not make any difference in regard to the probabilities we calculate, but it obviously does make a difference in terms of our wavefunction itself. To be precise, its density in time will be very different. Hence, if we’d want to give the wavefunction some physical meaning – which is what I’ve been trying to do all along – it does make a huge difference. When we leave the rest mass of all of the pieces in our system out, we can no longer pretend we capture their energy. This is a rather simple observation, but one that has profound implications in terms of our interpretation of the wavefunction. Personally, I admire the Great Teacher’s Lectures, but I am really disappointed that he doesn’t pay more attention to this. 😦 Quantum Mechanics: The Other Introduction About three weeks ago, I brought my most substantial posts together in one document: it’s the Deep Blue page of this site. I also published it on Amazon/Kindle. It’s nice. It crowns many years of self-study, and many nights of short and bad sleep – as I was mulling over yet another paradox haunting me in my dreams. It’s been an extraordinary climb but, frankly, the view from the top is magnificent. 🙂  The offer is there: anyone who is willing to go through it and offer constructive and/or substantial comments will be included in the book’s acknowledgements section when I go for a second edition (which it needs, I think). First person to be acknowledged here is my wife though, Maria Elena Barron, as she has given me the spacetime:-) and, more importantly, the freedom to take this bull by its horns. Below I just copy the foreword, just to give you a taste of it. 🙂 Another introduction to quantum mechanics? Yep. I am not hoping to sell many copies, but I do hope my unusual background—I graduated as an economist, not as a physicist—will encourage you to take on the challenge and grind through this. I’ve always wanted to thoroughly understand, rather than just vaguely know, those quintessential equations: the Lorentz transformations, the wavefunction and, above all, Schrödinger’s wave equation. 
In my bookcase, I’ve always had what is probably the most famous physics course in the history of physics: Richard Feynman’s Lectures on Physics, which have been used for decades, not only at Caltech but at many of the best universities in the world. Plus a few dozen other books. Popular books—which I now regret I ever read, because they were an utter waste of time: the language of physics is math and, hence, one should read physics in math—not in any other language. But Feynman’s Lectures on Physics—three volumes of about fifty chapters each—are not easy to read. However, the experimental verification of the existence of the Higgs particle in CERN’s LHC accelerator a couple of years ago, and the award of the Nobel prize to the scientists who had predicted its existence (including Peter Higgs and François Englert), convinced me it was about time I take the bull by its horns. While I consider myself to be of average intelligence only, I do feel there’s value in the ideal of the ‘Renaissance man’ and, hence, I think stuff like this is something we all should try to understand—somehow. So I started to read, and I also started a blog to externalize my frustration as I tried to cope with the difficulties involved. The site attracted hundreds of visitors every week and, hence, it encouraged me to publish this booklet.

So what is it about? What makes it special? In essence, it is a common-sense introduction to the key concepts in quantum physics. However, while common-sense, it does not shy away from the math, which is complicated, but not impossible. So this little book is surely not a Guide to the Universe for Dummies. I do hope it will guide some Not-So-Dummies. It basically recycles what I consider to be my more interesting posts, but combines them in a comprehensive structure. It is a bit of a philosophical analysis of quantum mechanics as well, as I will – hopefully – do a better job than others in distinguishing the mathematical concepts from what they are supposed to describe, i.e. physical reality. Last but not least, it does offer some new didactic perspectives. For those who know the subject already, let me briefly point these out:

I. Few, if any, of the popular writers seem to have noted that the argument of the wavefunction (θ = E·t – p·x) – using natural units (hence, the numerical value of ħ and c is one), and for an object moving at constant velocity (hence, x = v·t) – can be written as the product of the proper time of the object and its rest mass:

θ = E·t − p·x = mv·t − mv·v·x = mv·(t − v·x) ⇔ θ = m0·(t − v·x)/√(1 – v2) = m0·t’

Hence, the argument of the wavefunction is just the proper time of the object with the rest mass acting as a scaling factor for the time: the internal clock of the object ticks much faster if it’s heavier. This symmetry between the argument of the wavefunction of the object as measured in its own (inertial) reference frame, and its argument as measured by us, in our own reference frame, is remarkable, and allows us to understand the nature of the wavefunction in a more intuitive way. While this approach reflects Feynman’s idea of the photon stopwatch, the presentation in this booklet generalizes the concept for all wavefunctions, first and foremost the wavefunction of the matter-particles that we’re used to (e.g. electrons).

II. Few, if any, have thought of looking at Schrödinger’s wave equation as an energy propagation mechanism.
In fact, when helping my daughter out as she was trying to understand non-linear regression (logit and Poisson regressions), I suddenly realized we can analyze the wavefunction as a link function that connects two physical spaces: the physical space of our moving object, and a physical energy space. Re-inserting Planck’s quantum of action in the argument of the wavefunction – so we write θ as θ = (E/ħ)·t – (p/ħ)·x = [E·t – p·x]/ħ – we may assign a physical dimension to it: when interpreting ħ as a scaling factor only (and, hence, when we only consider its numerical value, not its physical dimension), θ becomes a quantity expressed in newton·meter·second, i.e. the (physical) dimension of action. It is only natural, then, that we would associate the real and imaginary part of the wavefunction with some physical dimension too, and a dimensional analysis of Schrödinger’s equation tells us this dimension must be energy.

This perspective allows us to look at the wavefunction as an energy propagation mechanism, with the real and imaginary part of the probability amplitude interacting in very much the same way as the electric and magnetic field vectors E and B. This leads me to the next point, which I make rather emphatically in this booklet: the propagation mechanism for electromagnetic energy – as described by Maxwell’s equations – is mathematically equivalent to the propagation mechanism that’s implicit in the Schrödinger equation. I am, therefore, able to present the Schrödinger equation in a much more coherent way, describing not only how this famous equation works for electrons, or matter-particles in general (i.e. fermions or spin-1/2 particles), which is probably the only use of the Schrödinger equation you are familiar with, but also how it works for bosons, including the photon, of course, but also the theoretical zero-spin boson! In fact, I am personally rather proud of this. Not because I am doing something that hasn’t been done before (I am sure many have come to the same conclusions before me), but because one always has to trust one’s intuition. So let me say something about that third innovation: the photon wavefunction.

III. Let me tell you the little story behind my photon wavefunction. One of my acquaintances is a retired nuclear scientist. While he knew I was delving into it all, I knew he had little time to answer any of my queries. However, when I asked him about the wavefunction for photons, he bluntly told me photons didn’t have a wavefunction. I should just study Maxwell’s equations and that’s it: there’s no wavefunction for photons: just these traveling electric and magnetic field vectors. Look at Feynman’s Lectures, or any textbook, he said. None of them talk about photon wavefunctions. That’s true, but I knew he had to be wrong. I mulled over it for several months, and then just sat down and started to fiddle with Maxwell’s equations, assuming the oscillations of the E and B vector could be described by regular sinusoids. And – Lo and behold! – I derived a wavefunction for the photon. It’s fully equivalent to the classical description, but the new expression solves the Schrödinger equation, if we modify it in a rather logical way: we have to double the diffusion constant, which makes sense, because E and B give you two waves for the price of one! In any case, I am getting ahead of myself here, and so I should wrap up this rather long introduction.
Let me just say that, through my rather long journey in search of understanding – rather than knowledge alone – I have learned there are so many wrong answers out there: wrong answers that hamper rather than promote a better understanding. Moreover, I was most shocked to find out that such wrong answers are not the preserve of amateurs alone! This emboldened me to write what I write here, and to publish it. Quantum mechanics is a logical and coherent framework, and it is not all that difficult to understand. One just needs good pointers, and that’s what I want to provide here. As of now, it focuses on the mechanics in particular, i.e. the concept of the wavefunction and wave equation (better known as Schrödinger’s equation). The other aspect of quantum mechanics – i.e. the idea of uncertainty as implied by the quantum idea – will receive more attention in a later version of this document. I should also say I will limit myself to quantum electrodynamics (QED) only, so I won’t discuss quarks (i.e. quantum chromodynamics, which is an entirely different realm), nor will I delve into any of the other more recent advances of physics.

In the end, you’ll still be left with lots of unanswered questions. However, that’s quite OK, as Richard Feynman himself was of the opinion that he himself did not understand the topic the way he would like to understand it. But then that’s exactly what draws all of us to quantum physics: a common search for a deep and full understanding of reality, rather than just some superficial description of it, i.e. knowledge alone.

So let’s get on with it. I am not saying this is going to be easy reading. In fact, I blogged about much easier stuff than this in my blog—treating only aspects of the whole theory. This is the whole thing, and it’s not easy to swallow. In fact, it may well be too big to swallow as a whole. But please do give it a try. I wanted this to be an intuitive but formally correct introduction to quantum math. However, when everything is said and done, you are the only one who can judge if I reached that goal.

Of course, I should not forget the acknowledgements but… Well… It was a rather lonely venture, so I am only going to acknowledge my wife here, Maria, who gave me all of the spacetime and all of the freedom I needed, as I would get up early, or work late after coming home from my regular job. I sacrificed weekends, which we could have spent together, and – when mulling over yet another paradox – the nights were often short and bad. Frankly, it’s been an extraordinary climb, but the view from the top is magnificent.

I just need to insert one caution: my site includes animations, which make it much easier to grasp some of the mathematical concepts that I will be explaining. Hence, I warmly recommend you also have a look at that site, and its Deep Blue page in particular – as that page has the same contents, more or less, but the animations make it a much easier read. Have fun with it!

Jean Louis Van Belle, BA, MA, BPhil, Drs.

The Imaginary Energy Space

Intriguing title, isn’t it? You’ll think this is going to be highly speculative and you’re right. In fact, I could also have written: the imaginary action space, or the imaginary momentum space. Whatever. It all works! It’s an imaginary space – but a very real one, because it holds energy, or momentum, or a combination of both, i.e. action. 🙂 So the title is either going to deter you or, else, encourage you to read on. I hope it’s the latter.
🙂 In my post on Richard Feynman’s exposé on how Schrödinger got his famous wave equation, I noted an ambiguity in how he deals with the energy concept. I wrote that piece in February, and we are now May. In-between, I looked at Schrödinger’s equation from various perspectives, as evidenced from the many posts that followed that February post, which I summarized on my Deep Blue page, where I note the following:

1. The argument of the wavefunction (i.e. θ = ωt – kx = [E·t – p·x]/ħ) is just the proper time of the object that’s being represented by the wavefunction (which, in most cases, is an elementary particle—an electron, for example).

2. The 1/2 factor in Schrödinger’s equation (∂ψ/∂t = i·(ħ/2m)·∇2ψ) doesn’t make all that much sense, so we should just drop it. Writing ∂ψ/∂t = i·(ħ/m)·∇2ψ (i.e. Schrödinger’s equation without the 1/2 factor) does away with the mentioned ambiguities and, more importantly, avoids obvious contradictions.

Both remarks are rather unusual—especially the second one. In fact, if you’re not shocked by what I wrote above (Schrödinger got something wrong!), then stop reading—because then you’re likely not to understand a thing of what follows. 🙂 In any case, I thought it would be good to follow up by devoting a separate post to this matter.

The argument of the wavefunction as the proper time

Frankly, it took me quite a while to see that the argument of the wavefunction is nothing but the t’ = (t − v∙x)/√(1−v2) formula that we know from the Lorentz transformation of spacetime. Let me quickly give you the formulas (with c = 1, so v is measured as a fraction of c):

x’ = (x − v∙t)/√(1−v2) and t’ = (t − v∙x)/√(1−v2)

In fact, let me be precise: the argument of the wavefunction also has the particle’s rest mass m0 in it. That mass factor (m0) appears in it as a general scaling factor, so it determines the density of the wavefunction both in time as well as in space. Let me jot it down:

ψ(x, t) = a·ei·(mv·t − p∙x) = a·ei·[(m0/√(1−v2))·t − (m0·v/√(1−v2))∙x] = a·ei·m0·(t − v∙x)/√(1−v2)

Huh? Yes. Let me show you how we get from θ = ωt – kx = [E·t – p·x]/ħ to θ = mv·t − p∙x. It’s really easy. We first need to choose our units such that the speed of light and Planck’s constant are numerically equal to one, so we write: c = 1 and ħ = 1. So now the 1/ħ factor no longer appears. [Let me note something here: using natural units does not do away with the dimensions: the dimensions of whatever is there remain what they are. For example, energy remains what it is, and so that’s force over distance: 1 joule = 1 newton·meter (1 J = 1 N·m). Likewise, momentum remains what it is: force times time (or mass times velocity). Finally, the dimension of the quantum of action doesn’t disappear either: it remains the product of force, distance and time (N·m·s). So you should distinguish between the numerical value of our variables and their dimension. Always! That’s where physics is different from algebra: the equations actually mean something!] Now, because we’re working in natural units, the numerical value of both c and c2 will be equal to 1. It’s obvious, then, that Einstein’s mass-energy equivalence relation reduces from E = mv·c2 to E = mv. You can work out the rest yourself – noting that p = mv·v and mv = m0/√(1−v2). Done! For a more intuitive explanation, I refer you to the above-mentioned page.

So that’s for the wavefunction. Let’s now look at Schrödinger’s wave equation, i.e. that differential equation of which our wavefunction is a solution. In my introduction, I bluntly said there was something wrong with it: that 1/2 factor shouldn’t be there. Why not?
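Before tackling that, here is a quick numerical check of the proper-time claim above, in natural units (c = ħ = 1): for an object moving along x = v·t, the argument E·t − p·x equals m0·t’, with t’ = (t − v·x)/√(1−v2). The rest mass and velocity below are arbitrary test values.

```python
import numpy as np

m0, v = 0.511, 0.6                    # rest energy and velocity (fraction of c), illustrative
gamma = 1 / np.sqrt(1 - v**2)
E, p = gamma * m0, gamma * m0 * v     # total energy and momentum in natural units

for t in (0.5, 1.0, 2.0):
    x = v * t                          # the object's position in our frame
    theta = E * t - p * x
    t_prime = (t - v * x) / np.sqrt(1 - v**2)
    print(theta, m0 * t_prime)         # the two numbers coincide
```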
What’s wrong with Schrödinger’s equation?

When deriving his famous equation, Schrödinger uses the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v2/2, and that’s why – after all the complicated turns – that 1/2 factor is there. There are many reasons why that factor doesn’t make sense. Let me sum up a few.

[I] The most important reason is that de Broglie made it quite clear that the energy concept in his equations for the temporal and spatial frequency for the wavefunction – i.e. the ω = E/ħ and k = p/ħ relations – is the total energy, including rest energy (m0), kinetic energy (m·v2/2) and any potential energy (V). In fact, if we just multiply the two de Broglie relations (aka matter-wave equations) and use the old-fashioned v = f·λ relation (so we write E as E = ω·ħ = (2π·f)·(h/2π) = f·h, and p as p = k·ħ = (2π/λ)·(h/2π) = h/λ and, therefore, we have f = E/h and λ = h/p), we find that the energy concept that’s implicit in the two matter-wave equations is equal to E = m∙v2, as shown below:

1. f·λ = (E/h)·(h/p) = E/p
2. v = f·λ ⇒ f·λ = v = E/p ⇔ E = v·p = v·(m·v) ⇒ E = m·v2

Huh? E = m∙v2? Yes. Not E = m∙c2 or m·v2/2 or whatever else you might be thinking of. In fact, this E = m∙v2 formula makes a lot of sense in light of the two following points. Skeptical note: You may – and actually should – wonder whether we can use that v = f·λ relation for a wave like this, i.e. a wave with both a real (cos(-θ)) as well as an imaginary component (i·sin(-θ)). It’s a deep question, and I’ll come back to it later. But… Yes. It’s the right question to ask. 😦

[II] Newton told us that force is mass times acceleration. Newton’s law is still valid in Einstein’s world. The only difference between Newton’s and Einstein’s world is that, since Einstein, we should treat the mass factor as a variable as well. We write: F = mv·a = [m0/√(1−v2)]·a. This formula gives us the definition of the newton as a force unit: 1 N = 1 kg·(m/s)/s = 1 kg·m/s2. [Note that the 1/√(1−v2) factor – i.e. the Lorentz factor (γ) – has no dimension, because v is measured as a relative velocity here, i.e. as a fraction between 0 and 1.] Now, you’ll agree the definition of energy as a force over some distance is valid in Einstein’s world as well. Hence, if 1 joule is 1 N·m, then 1 J is also equal to 1 (kg·m/s2)·m = 1 kg·(m2/s2), so this also reflects the E = m∙v2 concept. [I can hear you mutter: that kg factor refers to the rest mass, no? No. It doesn’t. The kg is just a measure of inertia: as a unit, it applies to both m0 as well as mv. Full stop.] Very skeptical note: You will say this doesn’t prove anything – because this argument just shows the dimensional analysis for both equations (i.e. E = m∙v2 and E = m∙c2) is OK. Hmm… Yes. You’re right. 🙂 But the next point will surely convince you! 🙂

[III] The third argument is the most intricate and the most beautiful at the same time—not because it’s simple (like the arguments above) but because it gives us an interpretation of what’s going on here. It’s fairly easy to verify that Schrödinger’s equation, i.e. the ∂ψ/∂t = i·(ħ/2m)·∇2ψ equation (including the 1/2 factor to which I object), is equivalent to the following set of two equations:

1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇2ψ)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇2ψ)

[In case you don’t see it immediately, note that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. However, here we have something like this: a + i·b = i·(c + i·d) = i·c + i2·d = − d + i·c (remember i2 = −1).] Now, before we proceed (i.e.
before I show you what’s wrong here with that 1/2 factor), let us look at the dimensions first. For that, we’d better analyze the complete Schrödinger equation so as to make sure we’re not doing anything stupid here by looking at one aspect of the equation only. The complete equation, in its original form, is:

i·ħ·∂ψ/∂t = −(ħ2/2m)·∇2ψ + V·ψ

Notice that, to simplify the analysis above, I had moved the i and the ħ on the left-hand side to the right-hand side (note that 1/i = −i, so −(ħ2/2m)/(i·ħ) = i·(ħ/2m)). Now, the ħ2 factor on the right-hand side is expressed in J2·s2. Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-energy equivalence relation to express m in J/(m/s)2 units. So our ħ2/2m coefficient is expressed in (J2·s2)/[J/(m/s)2] = J·m2. Now we multiply that by that Laplacian operating on some scalar, which yields some quantity per square meter. So the whole right-hand side becomes some amount expressed in joule, i.e. the unit of energy! Interesting, isn’t it?

On the left-hand side, we have i and ħ. We shouldn’t worry about the imaginary unit because we can treat that as just another number, albeit a very special number (because its square is minus 1). However, in this equation, it’s like a mathematical constant and you can think of it as something like π or e. [Think of the magical formula: eiπ = i2 = −1.] In contrast, ħ is a physical constant, and so that constant comes with some dimension and, therefore, we cannot just do what we want. [I’ll show, later, that even moving it to the other side of the equation comes with interpretation problems, so be careful with physical constants, as they really mean something!] In this case, its dimension is the action dimension: J·s = N·m·s, so that’s force times distance times time. So we multiply that with a time derivative and we get joule once again (N·m·s/s = N·m = J), so that’s the unit of energy. So it works out: we have joule units both left and right in Schrödinger’s equation. Nice! Yes. But what does it mean? 🙂 Well… You know that we can – and should – think of Schrödinger’s equation as a diffusion equation – just like a heat diffusion equation, for example – but then one describing the diffusion of a probability amplitude. [In case you are not familiar with this interpretation, please do check my post on it, or my Deep Blue page.] But then we didn’t describe the mechanism in very much detail, so let me try to do that now and, in the process, finally explain the problem with the 1/2 factor.

The missing energy

There are various ways to explain the problem. One of them involves calculating group and phase velocities of the elementary wavefunction satisfying Schrödinger’s equation but that’s a more complicated approach and I’ve done that elsewhere, so just click the reference if you prefer the more complicated stuff. I find it easier to just use those two equations above. The argument is the following: if our elementary wavefunction is equal to ei(kx − ωt) = cos(kx−ωt) + i∙sin(kx−ωt), then it’s easy to prove that this pair of conditions is fulfilled if, and only if, ω = k2·(ħ/2m). [Note that I am omitting the normalization coefficient in front of the wavefunction: you can put it back in if you want. The argument here is valid, with or without normalization coefficients.] Easy? Yes. Check it out. The time derivative on the left-hand side is equal to:

∂ψ/∂t = −i·ω·ei(kx − ωt) = ω·sin(kx−ωt) − i·ω·cos(kx−ωt)

And the second-order derivative on the right-hand side is equal to:

∇2ψ = −k2·ei(kx − ωt) = −k2·cos(kx−ωt) − i·k2·sin(kx−ωt)

So the two equations above are equivalent to writing:
1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·(ħ/2m)·sin(kx − ωt)
2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·(ħ/2m)·cos(kx − ωt)

So both conditions are fulfilled if, and only if, ω = k2·(ħ/2m). You’ll say: so what? Well… We have a contradiction here—something that doesn’t make sense. Indeed, the second of the two de Broglie equations (always look at them as a pair) tells us that k = p/ħ, so we can re-write the ω = k2·(ħ/2m) condition as:

ω/k = vp = k2·(ħ/2m)/k = k·ħ/(2m) = (p/ħ)·(ħ/2m) = p/2m ⇔ p = 2m·vp

You’ll say: so what? Well… Stop reading, I’d say. That p = 2m·vp doesn’t make sense—at all! Nope! In fact, if you thought that E = m·v2 is weird—which, I hope, is no longer the case by now—then… Well… This p = 2m·vp equation is much weirder. In fact, it’s plain nonsense: this condition makes no sense whatsoever. The only way out is to remove the 1/2 factor, and to re-write the Schrödinger equation as I wrote it, i.e. with an ħ/m coefficient only, rather than an (1/2)·(ħ/m) coefficient.

Huh? Yes. As mentioned above, I could do those group and phase velocity calculations to show you what rubbish that 1/2 factor leads to – and I’ll do that eventually – but let me first find yet another way to present the same paradox. Let’s simplify our life by choosing our units such that c = ħ = 1, so we’re using so-called natural units rather than our SI units. Our mass-energy equivalence then becomes: E = m·c2 = m·1 = m. [Again, note that switching to natural units doesn’t do anything to the physical dimensions: a force remains a force, a distance remains a distance, and so on. So we’d still measure energy and mass in different but equivalent units. Hence, the equality sign should not make you think mass and energy are actually the same: energy is energy (i.e. force times distance), while mass is mass (i.e. a measure of inertia). I am saying this because it’s important, and because it took me a while to make these rather subtle distinctions.]

Let’s now go one step further and imagine a hypothetical particle with zero rest mass, so m0 = 0. Hence, all its energy is kinetic and so we write: K.E. = mv·v2/2. Now, because this particle has zero rest mass, the slightest acceleration will make it travel at the speed of light. In fact, we would expect it to travel at the speed of light, so mv = mc and, according to the mass-energy equivalence relation, its total energy is, effectively, E = mv = mc. However, we just said its total energy is kinetic energy only. Hence, its total energy must be equal to E = K.E. = mc·c2/2 = mc/2. So we’ve got only half the energy we need. Where’s the other half? Where’s the missing energy?

Quid est veritas? Is its energy E = mc or E = mc/2? It’s just a paradox, of course, but one we have to solve. Of course, we may just say we trust Einstein’s E = m·c2 formula more than the kinetic energy formula, but that answer is not very scientific. 🙂 We’ve got a problem here and, in order to solve it, I’ve come to the following conclusion: just because of its sheer existence, our zero-mass particle must have some hidden energy, and that hidden energy is also equal to E = m·c2/2. Hence, the kinetic and the hidden energy add up to E = m·c2 and all is alright. Huh? Hidden energy? I must be joking, right? Well… No. Let me explain. Oh. And just in case you wonder why I bother to try to imagine zero-mass particles.
Let me tell you: it’s the first step towards finding a wavefunction for a photon and, secondly, you’ll see it just amounts to modeling the propagation mechanism of energy itself. 🙂

The hidden energy as imaginary energy

I am tempted to refer to the missing energy as imaginary energy, because it’s linked to the imaginary part of the wavefunction. However, it’s anything but imaginary: it’s as real as the imaginary part of the wavefunction. [I know that sounds a bit nonsensical, but… Well… Think about it. And read on!]

Back to that factor 1/2. As mentioned above, it also pops up when calculating the group and the phase velocity of the wavefunction. In fact, let me show you that calculation now. [Sorry. Just hang in there.] It goes like this. The de Broglie relations tell us that the k and the ω in the ei(kx − ωt) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction (i.e. the spatial and temporal frequency respectively) are equal to k = p/ħ, and ω = E/ħ. Let’s now think of that zero-mass particle once more, so we assume all of its energy is kinetic: no rest energy, no potential! So… If we now use the kinetic energy formula E = m·v2/2 – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p2/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our ei(kx − ωt) = cos(kx−ωt) + i∙sin(kx−ωt) wavefunction as:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂[p2/2m]/∂p = 2p/2m = p/m = v

[Don’t tell me I can’t treat m as a constant when calculating ∂ω/∂k: I can. Think about it.] Fine. Now the phase velocity. For the phase velocity of our ei(kx − ωt) wavefunction, we find:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = (p2/2m)/p = p/2m = v/2

So that’s only half of v: it’s the 1/2 factor once more! Strange, isn’t it? Why would we get a different value for the phase velocity here? It’s not like we have two different frequencies here, do we? Well… No. You may also note that the phase velocity turns out to be smaller than the group velocity (as mentioned, it’s only half of the group velocity), which is quite exceptional as well! So… Well… What’s the matter here? We’ve got a problem!

What’s going on here? We have only one wave here—one frequency and, hence, only one k and ω. However, on the other hand, it’s also true that the ei(kx − ωt) wavefunction gives us two functions for the price of one—one real and one imaginary: ei(kx − ωt) = cos(kx−ωt) + i∙sin(kx−ωt). So the question here is: are we adding waves, or are we not? It’s a deep question. If we’re adding waves, we may get different group and phase velocities, but if we’re not, then… Well… Then the group and phase velocity of our wave should be the same, right? The answer is: we are and we aren’t. It all depends on what you mean by ‘adding’ waves. I know you don’t like that answer, but that’s the way it is, really. 🙂

Let me make a small digression here that will make you feel even more confused. You know – or you should know – that the sine and the cosine function are the same except for a phase difference of 90 degrees: sinθ = cos(θ − π/2). Now, at the same time, multiplying something with i amounts to a rotation by 90 degrees. Hence, in order to sort of visualize what our ei(kx − ωt) function really looks like, we may want to super-impose the two graphs. You’ll have to admit that, when you do, our formulas for the group or phase velocity, or our v = f·λ relation, do no longer make much sense, do they?
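For completeness, here is the same group/phase velocity calculation as a small sympy sketch, using the kinetic energy formula E = p2/2m that the text questions (ħ kept explicit). It reproduces the mismatch exactly: the group velocity comes out as v, the phase velocity as v/2.

```python
import sympy as sp

p, m, hbar = sp.symbols('p m hbar', positive=True)
k = p / hbar                   # de Broglie: k = p/hbar
w = (p**2 / (2 * m)) / hbar    # de Broglie: w = E/hbar, with E = p**2/(2m)

vg = sp.diff(w, p) / sp.diff(k, p)   # dw/dk, computed via p
vp = sp.simplify(w / k)              # w/k

print(sp.simplify(vg), vp)           # p/m  and  p/(2*m): there is the 1/2 factor again
```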
Having said that, that 1/2 factor is and remains puzzling, and there must be some logical reason for it. For example, it also pops up in the Uncertainty Relations:

Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2

So we have ħ/2 in both, not ħ. Why do we need to divide the quantum of action by two here? How do we solve all these paradoxes? It's easy to see how: the apparent contradiction (i.e. the different group and phase velocities) gets solved if we'd use the E = m∙v² formula rather than the kinetic energy E = m∙v²/2. But then… What energy formula is the correct one: E = m∙v² or m∙c²? Einstein's formula is always right, isn't it? It must be, so let me postpone the discussion a bit by looking at a limit situation. If v = c, then we don't need to make a choice, obviously. 🙂 So let's look at that limit situation first. So we're discussing our zero-mass particle once again, assuming it travels at the speed of light. What do we get? Well… Measuring time and distance in natural units, so c = 1, we have: E = m∙c² = m and p = m∙c = m, so we get:

E = m = p

Wow! E = m = p! What a weird combination, isn't it? Well… Yes. But it's fully OK. [You tell me why it wouldn't be OK. It's true we're glossing over the dimensions here, but natural units are natural units and, hence, the numerical value of c and c² is 1. Just figure it out for yourself.] The point to note is that the E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^(i(kx − ωt)) wavefunction, we get:

vg = ∂ω/∂k = ∂[E/ħ]/∂[p/ħ] = ∂E/∂p = ∂p/∂p = 1

So that's the velocity of our zero-mass particle (remember: the 1 stands for c here, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = p/p = 1

Same result! No factor 1/2 here! Isn't that great? My 'hidden energy theory' makes a lot of sense. 🙂 However, if there's hidden energy, we still need to show where it's hidden. 🙂 Now that question is linked to the propagation mechanism that's described by those two equations, which now – leaving the 1/2 factor out – simplify to:

1. Re(∂ψ/∂t) = −(ħ/m)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (ħ/m)·Re(∇²ψ)

Propagation mechanism? Yes. That's what we're talking about here: the propagation mechanism of energy. Huh? Yes. Let me explain in another separate section, so as to improve readability. Before I do, however, let me add another note—for the skeptics among you. 🙂 Indeed, the skeptics among you may wonder whether our zero-mass particle wavefunction makes any sense at all, and they should do so for the following reason: if x = 0 at t = 0, and it's traveling at the speed of light, then x(t) = t. Always. So if E = m = p, the argument of our wavefunction becomes E·t – p·x = E·t – E·t = 0! So what's that? The proper time of our zero-mass particle is zero—always and everywhere!? Well… Yes. That's why our zero-mass particle – as a point-like object – does not really exist. What we're talking about is energy itself, and its propagation mechanism. 🙂 While I am sure that, by now, you're very tired of my rambling, I beg you to read on. Frankly, if you got as far as you have, then you should really be able to work yourself through the rest of this post. 🙂 And I am sure that – if anything – you'll find it stimulating! 🙂

The imaginary energy space

Look at the propagation mechanism for the electromagnetic wave in free space, which (for c = 1) is represented by the following two equations:

1. ∂B/∂t = –∇×E
2. ∂E/∂t = ∇×B

[In case you wonder, these are Maxwell's equations for free space, so we have no stationary nor moving charges around.] See how similar this is to the two equations above? In fact, in my Deep Blue page, I use these two equations to derive the quantum-mechanical wavefunction for the photon (which is not the same as that hypothetical zero-mass particle I introduced above), but I won't bother you with that here. Just note the so-called curl operator in the two equations above (∇×) can be related to the Laplacian we've used so far (∇²). It's not the same thing, though: for starters, the curl operator operates on a vector quantity, while the Laplacian operates on a scalar (including complex scalars). But don't get distracted now. Let's look at the revised Schrödinger equation, i.e. the one without the 1/2 factor:

∂ψ/∂t = i·(ħ/m)·∇²ψ

On the left-hand side, we have a time derivative, so that's a flow per second. On the right-hand side we have the Laplacian and the i·ħ/m factor. Now, written like this, Schrödinger's equation really looks exactly the same as the general diffusion equation, which is written as: ∂φ/∂t = D·∇²φ, except for the imaginary unit, which makes it clear we're getting two equations for the price of one here, rather than one only! 🙂 The point is: we may now look at that ħ/m factor as a diffusion constant, because it does exactly the same thing as the diffusion constant D in the diffusion equation ∂φ/∂t = D·∇²φ, i.e.:

1. As a constant of proportionality, it quantifies the relationship between both derivatives.
2. As a physical constant, it ensures the dimensions on both sides of the equation are compatible.

So the diffusion constant for Schrödinger's equation is ħ/m. What is its dimension? That's easy: (N·m·s)/(N·s²/m) = m²/s. [Remember: 1 N = 1 kg·m/s².] But then we multiply it with the Laplacian, so that's something expressed per square meter, so we get something per second on both sides. Of course, you wonder: what per second? Not sure. That's hard to say. Let's continue with our analogy with the heat diffusion equation so as to try to get a better understanding of what's being written here. Let me give you that heat diffusion equation here. Assuming the heat per unit volume (q) is proportional to the temperature (T) – which is the case when expressing T in degrees Kelvin (K), so we can write q as q = k·T – we can write it as:

k·(∂T/∂t) = κ·∇²T

So that's structurally similar to Schrödinger's equation, and to the two equivalent equations we jotted down above. So we've got T (temperature) in the role of ψ here—or, to be precise, in the role of ψ's real and imaginary part respectively. So what's temperature? From the kinetic theory of gases, we know that temperature is not just some number: temperature measures the mean (kinetic) energy of the molecules in the gas. That's why we can confidently state that the heat diffusion equation models an energy flow, both in space as well as in time. Let me make the point by doing the dimensional analysis for that heat diffusion equation. The time derivative on the left-hand side (∂T/∂t) is expressed in K/s (Kelvin per second). Weird, isn't it? What's a Kelvin per second? Well… Think of a Kelvin as some very small amount of energy in some equally small amount of space—think of the space that one molecule needs, and its (mean) energy—and then it all makes sense, doesn't it? However, in case you find that a bit difficult, just work out the dimensions of all the other constants and variables. The constant in front (k) makes sense of it.
That coefficient (k) is the (volume) heat capacity of the substance, which is expressed in J/(m³·K). So the dimension of the whole thing on the left-hand side (k·∂T/∂t) is J/(m³·s), so that's energy (J) per cubic meter (m³) and per second (s). Nice, isn't it? What about the right-hand side? On the right-hand side we have the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on T. The Laplacian operator, when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it's operating on T, so the dimension of ∇²T is K/m². Again, that doesn't tell us very much (what's the meaning of a Kelvin per square meter?) but we multiply it by the thermal conductivity (κ), whose dimension is W/(m·K) = J/(m·s·K). Hence, the dimension of the product is the same as the left-hand side: J/(m³·s). So that's OK again, as energy (J) per cubic meter (m³) and per second (s) is definitely something we can associate with an energy flow. In fact, we can play with this. We can bring k from the left- to the right-hand side of the equation, for example. The dimension of κ/k is m²/s (check it!), and multiplying that by K/m² (i.e. the dimension of ∇²T) gives us some quantity expressed in Kelvin per second, and so that's the same dimension as that of ∂T/∂t. Done! In fact, we've got two different ways of writing Schrödinger's diffusion equation. We can write it as ∂ψ/∂t = i·(ħ/m)·∇²ψ or, else, we can write it as ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ. Does it matter? I don't think it does. The dimensions come out OK in both cases. However, interestingly, if we do a dimensional analysis of the ħ·∂ψ/∂t = i·(ħ²/m)·∇²ψ equation, we get joule on both sides. Interesting, isn't it? The key question, of course, is: what is it that is flowing here? I don't have a very convincing answer to that, but the answer I have is interesting—I think. 🙂 Think of the following: we can multiply Schrödinger's equation with whatever we want, and then we get all kinds of flows. For example, if we multiply both sides with 1/(m²·s) or 1/(m³·s), we get an equation expressing the energy conservation law, indeed! [And you may want to think about the minus sign on the right-hand side of Schrödinger's equation again, because it makes much more sense now!] We could also multiply both sides with s, so then we get J·s on both sides, i.e. the dimension of physical action (J·s = N·m·s). So then the equation expresses the conservation of action. Huh? Yes. Let me re-phrase that: then it expresses the conservation of angular momentum—as you'll surely remember that the dimension of action and angular momentum are the same. 🙂 And then we can divide both sides by a meter, so then we get N·s on both sides, so that's momentum. So then Schrödinger's equation embodies the momentum conservation law. Isn't it just wonderful? Schrödinger's equation packs all of the conservation laws! 🙂 The only catch is that it all flows back and forth from the real to the imaginary space, using that propagation mechanism as described in those two equations. Now that is really interesting, because it does provide an explanation – as fuzzy as it may seem – for all those weird concepts one encounters when studying physics, such as the tunneling effect, which amounts to energy flowing from the imaginary space to the real space and, then, inevitably, flowing back. It also allows for borrowing time from the imaginary space. Hmm… Interesting!
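[For those who like to see that dimensional bookkeeping done explicitly, here is a minimal sketch, treating the SI base units as symbols. The electron mass in the last line is my own assumption, just to get an actual number out of the ħ/m 'diffusion constant'; the text doesn't fix the particle.]

```python
from sympy import symbols, simplify

kg, m, s = symbols('kg m s', positive=True)   # SI base units, treated as symbols
J    = kg * m**2 / s**2                       # joule
hbar = J * s                                  # [hbar] = J*s
mass = kg                                     # [mass] = kg

print(simplify(hbar / mass))                  # m**2/s: the 'diffusion constant' hbar/m
print(simplify(hbar * (1/s)))                 # kg*m**2/s**2, i.e. joule: hbar times a time derivative
print(simplify((hbar**2 / mass) * (1/m**2)))  # kg*m**2/s**2, i.e. joule: (hbar^2/m) times the Laplacian

# An actual number: hbar/m for an electron (the electron mass is just an example)
print(1.054571817e-34 / 9.1093837015e-31)     # ~1.16e-4 m^2/s
```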
[I know I still need to make these points much more formally, but… Well… You kinda get what I mean, don't you?] To conclude, let me re-baptize my real and imaginary 'space' by referring to them as what they really are: a real and an imaginary energy space respectively. Although… Now that I think of it: it could also be a real and an imaginary momentum space, or a real and an imaginary action space. Hmm… The latter term may be the best. 🙂 Isn't this all great? I mean… I could go on and on—but I'll stop here, so you can freewheel around yourself. For example, you may wonder how similar that energy propagation mechanism actually is to the propagation mechanism of the electromagnetic wave. The answer is: very similar. You can check how similar in one of my posts on the photon wavefunction or, if you'd want a more general argument, check my Deep Blue page. Have fun exploring! 🙂 So… Well… That's it, folks. I hope you enjoyed this post—if only because I really enjoyed writing it. 🙂 OK. You're right. I still haven't answered the fundamental question. So what about the 1/2 factor?

What about that 1/2 factor? Did Schrödinger miss it?

Well… Think about it for yourself. First, I'd encourage you to further explore that weird graph with the real and imaginary part of the wavefunction. I copied it below, but with an added 45° line—yes, the green diagonal. To make it somewhat more real, imagine you're the zero-mass point-like particle moving along that line, and we observe you from our inertial frame of reference, using equivalent time and distance units.

[Illustration: the real (cosine) and imaginary (sine) component of the wavefunction, with the 45° diagonal added.]

So we've got that cosine (cosθ) varying as you travel, and we've also got the i·sinθ part of the wavefunction going while you're zipping through spacetime. Now, THINK of it: the phase velocity of the cosine bit (i.e. the red graph) contributes as much to your lightning speed as the i·sinθ bit, doesn't it? Should we apply Pythagoras' basic r² = x² + y² theorem here? Yes: the velocity vector along the green diagonal is going to be the sum of the velocity vectors along the horizontal and vertical axes. So… That's great. Yes. It is. However, we still have a problem here: it's the velocity vectors that add up—not their magnitudes. Indeed, if we denote the velocity vector along the green diagonal as u, then we can calculate its magnitude as:

u = √(u²) = √[(v/2)² + (v/2)²] = √[2·(v²/4)] = √(v²/2) = v/√2 ≈ 0.7·v

So, as mentioned, we're adding the vectors, but not their magnitudes. We're somewhat better off than we were in terms of showing that the phase velocities of the sine and cosine components add up—somehow, that is—but… Well… We're not quite there. Fortunately, Einstein saves us once again. Remember we're actually transforming our reference frame when working with the wavefunction? Well… Look at the diagram below (for which I thank the author).

[Illustration: the usual relativistic spacetime diagram, showing the x′ and t′ axes tilting towards the 45° diagonal as the velocity increases.]

In fact, let me insert an animated illustration, which shows what happens when the velocity goes up and down from (close to) −c to +c and back again. It's beautiful, and I must credit the author here too. It sort of speaks for itself, but please do click the link as the accompanying text is quite illuminating. 🙂 The point is: for our zero-mass particle, the x′ and t′ axes will rotate into the diagonal itself which, as I mentioned a couple of times already, represents the speed of light and, therefore, our zero-mass particle traveling at c.
It's obvious that we're now adding two vectors that point in the same direction and, hence, their magnitudes just add without any square root factor. So, instead of u = √[(v/2)² + (v/2)²], we just have v/2 + v/2 = v! Done! We solved the phase velocity paradox! 🙂 So… I still haven't answered that question. Should that 1/2 factor in Schrödinger's equation be there or not? The answer is, obviously: yes. It should be there. And as for Schrödinger using the mass concept as it appears in the classical kinetic energy formula: K.E. = m·v²/2… Well… What other mass concept would he use? I probably got a bit confused with Feynman's exposé – especially this notion of 'choosing the zero point for the energy' – but then I should probably just re-visit the thing and adjust the language here and there. But the formula is correct. Thinking it all through, the ħ/2m constant in Schrödinger's equation should be thought of as the reciprocal of m/(ħ/2). So what we're doing basically is measuring the mass of our object in units of ħ/2, rather than units of ħ. That makes perfect sense, if only because it's ħ/2, rather than ħ, that appears in the Uncertainty Relations Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2. In fact, in my post on the wavefunction of the zero-mass particle, I noted its elementary wavefunction should use the m = E = p = ħ/2 values, so it becomes ψ(x, t) = a·e^(i∙[(ħ/2)∙t − (ħ/2)∙x]/ħ) = a·e^(i∙[t − x]/2). Isn't that just nice? 🙂 I need to stop here, however, because it looks like this post is becoming a book. Oh—and note that nothing of what I wrote above discredits my 'hidden energy' theory. On the contrary, it confirms it. In fact, the nice thing about those illustrations above is that they associate the imaginary component of our wavefunction with travel in time, while the real component is associated with travel in space. That makes our theory quite complete: the 'hidden' energy is the energy that moves time forward. The only thing I need to do is to connect it to that idea of action expressing itself in time or in space, cf. what I wrote on my Deep Blue page: we can look at the dimension of Planck's constant, or at the concept of action in general, in two very different ways—from two different perspectives, so to speak:

1. [Planck's constant] = [action] = N∙m∙s = (N∙m)∙s = [energy]∙[time]
2. [Planck's constant] = [action] = N∙m∙s = (N∙s)∙m = [momentum]∙[distance]

Hmm… I need to combine that with the idea of the quantum vacuum, i.e. the mathematical space that's associated with time and distance becoming countable variables… In any case. Next time. 🙂 Before I sign off, however, let's quickly check if our a·e^(i∙[t − x]/2) wavefunction solves the Schrödinger equation:

• ∂ψ/∂t = a·e^(i∙[t − x]/2)·(i/2)
• ∇²ψ = ∂²[a·e^(i∙[t − x]/2)]/∂x² = ∂[−a·e^(i∙[t − x]/2)·(i/2)]/∂x = −a·e^(i∙[t − x]/2)·(1/4)

So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation becomes:

a·e^(i∙[t − x]/2)·(i/2) = −i·(ħ/[2·(ħ/2)])·a·e^(i∙[t − x]/2)·(1/4) ⇔ 1/2 = 1/4 !?

[If the overall sign bothers you: that just depends on the sign convention we use in the exponent, so don't worry about it. The real problem is the magnitude: 1/2 versus 1/4.] The damn 1/2 factor. Schrödinger wants it in his wave equation, but not in the wavefunction—apparently! So what if we take the m = E = p = ħ solution? We get:

• ∂ψ/∂t = a·i·e^(i∙[t − x])
• ∇²ψ = ∂²[a·e^(i∙[t − x])]/∂x² = ∂[−a·i·e^(i∙[t − x])]/∂x = −a·e^(i∙[t − x])

So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation now becomes:

a·i·e^(i∙[t − x]) = −i·(ħ/[2·ħ])·a·e^(i∙[t − x]) ⇔ 1 = 1/2 !?

We're still in trouble! So… Was Schrödinger wrong after all?
There's no difficulty whatsoever with the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation:

• a·e^(i∙[t − x]/2)·(i/2) = −i·[ħ/(ħ/2)]·a·e^(i∙[t − x]/2)·(1/4) ⇔ 1/2 = 1/2
• a·i·e^(i∙[t − x]) = −i·(ħ/ħ)·a·e^(i∙[t − x]) ⇔ 1 = 1

What these equations might tell us is that we should measure mass, energy and momentum in terms of ħ (and not in terms of ħ/2) but that the fundamental uncertainty is ± ħ/2. That solves it all. So the magnitude of the uncertainty is ħ but it separates not 0 and ±ħ, but −ħ/2 and +ħ/2. Or, more generally, the following series: …, −7ħ/2, −5ħ/2, −3ħ/2, −ħ/2, +ħ/2, +3ħ/2, +5ħ/2, +7ħ/2, … Why are we not surprised? The series represents the energy values that a spin one-half particle can possibly have, and ordinary matter – i.e. all fermions – is composed of spin one-half particles. To conclude this post, let's see if we can get any indication of the energy concepts that Schrödinger's revised wave equation implies. We'll do so by just calculating the derivatives in the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation (i.e. the equation without the 1/2 factor). Let's also not assume we're measuring stuff in natural units, so our wavefunction is just what it is: a·e^(−i·[E·t − p∙x]/ħ). [I am using the e^(i·(p∙x − E·t)/ħ) sign convention here, so that the signs come out right.] The derivatives now become:

• ∂ψ/∂t = −a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ)
• ∇²ψ = ∂²[a·e^(−i∙[E·t − p∙x]/ħ)]/∂x² = ∂[a·i·(p/ħ)·e^(−i∙[E·t − p∙x]/ħ)]/∂x = −a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ)

So the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation now becomes:

−a·i·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ) = −i·(ħ/m)·a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ) ⇔ E = p²/m = m·v²

It all works like a charm. Note that we do not assume stuff like E = m = p here. It's all quite general. Also note that the E = p²/m formula closely resembles the kinetic energy formula one often sees: K.E. = m·v²/2 = m²·v²/(2m) = p²/(2m). We just don't have the 1/2 factor in our E = p²/m formula, which is great—because we don't want it! 🙂 Of course, if you'd add the 1/2 factor in Schrödinger's equation again, you'd get it back in your energy formula, which would just be that old kinetic energy formula which gave us all these contradictions and ambiguities. 😦 Finally, and just to make sure: let me add that, when we wrote that E = m = p – like we did above – we mean their numerical values are the same. Their dimensions remain what they are, of course. Just to make sure you get that subtle point, we'll do a quick dimensional analysis of that E = p²/m formula:

[E] = [p²/m] ⇔ N·m = (N·s)²/kg = N²·s²/[N·s²/m] = N·m = joule (J)

So… Well… It's all perfect. 🙂 Post scriptum: I revised my Deep Blue page after writing this post, and I think that a number of the ideas that I express above are presented more consistently and coherently there. In any case, the missing energy theory makes sense. Think of it: any oscillator involves both kinetic as well as potential energy, and they both add up to twice the average kinetic (or potential) energy. So why not here? When everything is said and done, our elementary wavefunction does describe an oscillator. 🙂

Schrödinger's equation in action

This post is about something I promised to write about aeons ago: how do we get those electron orbitals out of Schrödinger's equation? So let me write it now – for the simplest of atoms: hydrogen. I'll largely follow Richard Feynman's exposé on it: this text just intends to walk you through it and provide some comments here and there. Let me first remind you of what that famous Schrödinger's equation actually represents. In its simplest form – i.e.
not including any potential, so then it's an equation that's valid for free space only—no force fields!—it reduces to:

i·ħ∙∂ψ/∂t = –(1/2)∙(ħ²/meff)∙∇²ψ

Note the enigmatic concept of the effective mass in it (meff), as well as the rather awkward 1/2 factor, which we may get rid of by re-defining it. We then write: meffNEW = 2∙meffOLD, and Schrödinger's equation then simplifies to:

• ∂ψ/∂t + i∙(V/ħ)·ψ = i∙(ħ/meff)·∇²ψ
• In free space (no potential): ∂ψ/∂t = i∙(ħ/meff)·∇²ψ

In case you wonder where the minus sign went, I just brought the imaginary unit to the other side. Remember: 1/i = −i. 🙂 Now, in my post on quantum-mechanical operators, I drew your attention to the fact that this equation is structurally similar to the heat diffusion equation – or to any diffusion equation, really. Indeed, assuming the heat per unit volume (q) is proportional to the temperature (T) – which is the case when expressing T in degrees Kelvin (K), so we can write q as q = k·T – we can write the heat diffusion equation as:

k·(∂T/∂t) = κ·∇²T

Moreover, I noted the similarity is not only structural. There is more to it: both equations model energy flows. How exactly is something I wrote about in my e-publication on this, so let me refer you to that. Let's jot down the complete equation once more:

i·ħ∙∂ψ/∂t = –(1/2)∙(ħ²/meff)∙∇²ψ + V·ψ

In fact, it is rather surprising that Feynman drops the eff subscript almost immediately, so he just writes:

i·ħ∙∂ψ/∂t = –(ħ²/2m)∙∇²ψ + V·ψ

Let me first remind you that ψ is a function of position in space and time, so we write: ψ = ψ(x, y, z, t) = ψ(r, t), with (x, y, z) = r. And m, on the other side of the equation, is what it always was: the effective electron mass. Now, we talked about the subtleties involved before, so let's not bother about the definition of the effective electron mass, or wonder where that factor 1/2 comes from here. What about V? V is the potential energy of the electron: it depends on the distance (r) from the proton. We write: V = −e²/│r│ = −e²/r. Why the minus sign? Because we say the potential energy is zero at large distances (see my post on potential energy). Back to Schrödinger's equation. On the left-hand side, we have ħ, and its dimension is J·s (or N·m·s, if you want). So we multiply that with a time derivative and we get J, the unit of energy. On the right-hand side, we have Planck's constant squared, the mass factor in the denominator, and the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on the wavefunction. Let's start with the latter. The Laplacian works just the same as for our heat diffusion equation: it gives us a flux density, i.e. something expressed per square meter (1/m²). The ħ² factor gives us J²·s². The mass factor makes everything come out alright, if we use the mass-energy equivalence relation, which says it's OK to express the mass in J/(m/s)². [The mass of an electron is usually expressed as being equal to 0.5109989461(31) MeV/c². That unit uses the E = m·c² mass-energy equivalence formula. As for the eV, you know we can convert that into joule, which is a rather large unit—which is why we use the electronvolt as a measure of energy.] To make a long story short, we're OK: (J²·s²)·[(m/s)²/J]·(1/m²) = J! Perfect. [As for the V·ψ term, that's obviously expressed in joule too.] In short, Schrödinger's equation expresses the energy conservation law too, and we may express it per square meter or per second or per cubic meter as well, if we'd wish: we can just multiply both sides by 1/m² or 1/s or 1/m³ or by whatever factor you want.
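[As an aside, the unit juggling in that bracketed remark, going from MeV/c² to J/(m/s)², i.e. to kg, is easy to do numerically. A minimal sketch, using the usual CODATA values:]

```python
# Electron rest mass: from MeV/c^2 to kg (i.e. to J/(m/s)^2)
MeV = 1e6 * 1.602176634e-19       # 1 MeV in joule
c   = 299792458.0                 # speed of light, in m/s

m_e = 0.5109989461 * MeV / c**2   # J/(m/s)^2, which is just kg
print(m_e)                        # ~9.11e-31 kg
```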
Again, if you want more detail on the Schrödinger equation as an energy propagation mechanism, read the mentioned e-publication. So let's get back to our equation, which, taking into account our formula for V, now looks like this:

i·ħ∙∂ψ/∂t = –(ħ²/2m)∙∇²ψ – (e²/r)·ψ

Feynman then injects one of these enigmatic phrases—enigmatic for novices like us, at least! "We want to look for definite energy states, so we try to find solutions which have the form: ψ(r, t) = e^(−(i/ħ)·E·t)·ψ(r)." At first, you may think he's just trying to get rid of the relativistic correction in the argument of the wavefunction. Indeed, as I explain in that little booklet of mine, the –(p/ħ)·x term in the argument of the elementary wavefunction e^(i·θ) = e^(i·[(E/ħ)·t – (p/ħ)·x]) is there because the young Comte Louis de Broglie, back in 1924, when he wrote his groundbreaking PhD thesis, suggested the θ = ω∙t – kx = (E∙t – px)/ħ formula for the argument of the wavefunction, as he knew that relativity theory had already established the invariance of the four-vector (dot) product pμxμ = E∙t – p∙x = pμ′xμ′ = E′∙t′ – p′∙x′. [Note that Planck's constant, as a physical constant, should obviously not depend on the reference frame either. Hence, if the E∙t – p∙x product is invariant, so is (E∙t – p∙x)/ħ.] So the θ = E∙t – p∙x and the θ′ = E0∙t′ = E′·t′ expressions are fully equivalent. Using lingo, we can say that the argument of the wavefunction is a Lorentz scalar and, therefore, invariant under a Lorentz boost. Sounds much better, doesn't it? 🙂 But… Well. That's not why Feynman says what he says. He just makes abstraction of uncertainty here, as he looks for states with a definite energy, indeed. Nothing more, nothing less. Indeed, you should just note that we can re-write the elementary a·e^(i·[(p/ħ)·x – (E/ħ)·t]) function as a·e^(−(i/ħ)·E·t)·e^(i·(p/ħ)·x). So that's what Feynman does here: he just eases the search for functional forms that satisfy Schrödinger's equation. You should note the following:

1. Writing the coefficient in front of the complex exponential as ψ(r) = e^(i·(p/ħ)·x) does the trick we want it to do: we do not want that coefficient to depend on time: it should only depend on the size of our 'box' in space, as I explained in one of my posts.
2. Having said that, you should also note that the ψ in the ψ(r, t) function and the ψ in the ψ(r) function denote two different beasts: one is a function of two variables (r and t), while the other makes abstraction of the time factor and, hence, becomes a function of one variable only (r). I would have used another symbol for the ψ(r) function, but then the Master probably just wants to test your understanding. 🙂

In any case, the differential equation we need to solve now becomes:

–(ħ²/2m)∙∇²ψ(r) – (e²/r)·ψ(r) = E·ψ(r)

Huh? How does that work? Well… Just take the time derivative of e^(−(i/ħ)·E·t)·ψ(r), multiply with the i·ħ in front of that term in Schrödinger's original equation and re-arrange the terms. [Just do it: ∂[e^(−(i/ħ)·E·t)·ψ(r)]/∂t = −(i/ħ)·E·e^(−(i/ħ)·E·t)·ψ(r). Now multiply that with i·ħ: the ħ factor cancels and the minus sign disappears because i² = −1.] So now we need to solve that differential equation, i.e. we need to find functional forms for ψ – and please do note we're talking ψ(r) here – not ψ(r, t)! – that satisfy the above equation. Interesting question: is our equation still Schrödinger's equation? Well… It is and it isn't. Any linear combination of the definite energy solutions we find will also solve Schrödinger's equation, but so we limited the solution set here to those definite energy solutions only. Hence, it's not quite the same equation.
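[That little manipulation, i.e. taking the time derivative of the e^(−(i/ħ)·E·t)·ψ(r) ansatz and multiplying by i·ħ, is easy to check symbolically. A minimal sketch with sympy, in one dimension for simplicity:]

```python
from sympy import symbols, Function, exp, I, diff, simplify

t, E, hbar = symbols('t E hbar', positive=True)
x = symbols('x')
psi_r = Function('psi_r')             # the time-independent part psi(r), here in 1D

Psi = exp(-I*E*t/hbar) * psi_r(x)     # the definite-energy ansatz psi(r, t)
lhs = I * hbar * diff(Psi, t)         # i*hbar times the time derivative

print(simplify(lhs / Psi))            # E: the time derivative just pulls out the energy
```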
We removed the time dependency here – and in a rather interesting way, I'd say. The next thing to do is to switch from Cartesian to polar coordinates. Why? Well… When you have a central-force problem – like this one (because of the potential) – it's easier to solve it using polar coordinates. In fact, because we've got three dimensions here, we're actually talking a spherical coordinate system. The formulas below show how spherical and Cartesian coordinates are related:

x = r·sinθ·cosφ; y = r·sinθ·sinφ; z = r·cosθ

As you know, θ (theta) is referred to as the polar angle, while φ (phi) is the azimuthal angle, and the coordinate transformation formulas can be easily derived. The rather simple differential equation above now becomes the following monster:

–(ħ²/2m)·[(1/r²)·∂/∂r(r²·∂ψ/∂r) + (1/(r²·sinθ))·∂/∂θ(sinθ·∂ψ/∂θ) + (1/(r²·sin²θ))·∂²ψ/∂φ²] – (e²/r)·ψ = E·ψ

Huh? Yes, I am very sorry. That's how it is. Feynman does this to help us. If you think you can get to the solutions by directly solving the equation in Cartesian coordinates, please do let me know. 🙂 To tame the beast, we might imagine to first look for solutions that are spherically symmetric, i.e. solutions that do not depend on θ and φ. That means we could rotate the reference frame and none of the amplitudes would change. That means the ∂ψ/∂θ and ∂ψ/∂φ (partial) derivatives in our formula are equal to zero. These spherically symmetric states, or s-states as they are referred to, are states with zero (orbital) angular momentum, but you may want to think about that statement before accepting it. 🙂 [It's not that there's no angular momentum (on the contrary: there's lots of it), but the total angular momentum should obviously be zero, and so that's what is meant when these states are denoted as l = 0 states.] So now we have to solve:

–(ħ²/2m)·(1/r)·[d²(r·ψ)/dr²] – (e²/r)·ψ = E·ψ

Now that looks somewhat less monstrous, but Feynman still fills two rather dense pages to show how this differential equation can be solved. It's not only tedious but also complicated, so please check it yourself by clicking on the link. One of the steps is a switch in variables, or a re-scaling, I should say. Both E and r are now measured as a (dimensionless) number times the Rydberg energy and the Bohr radius respectively. The complicated-looking factors are just the Bohr radius (rB = ħ²/(m·e²) ≈ 0.528 Å) and the Rydberg energy (ER = m·e⁴/(2·ħ²) ≈ 13.6 eV). We calculated those a long time ago using a rather heuristic model to describe an atom. In case you'd want to check the dimensions, note e² is a rather special animal. It's got nothing to do with Euler's number. Instead, e² is equal to ke·qe², and the ke here is Coulomb's constant: ke = 1/(4πε0). This allows us to re-write the force between two electrons as a function of the distance: F = e²/r². This, in turn, explains the rather weird dimension of e²: [e²] = N·m² = J·m. But I am digressing too much. The bottom line is: the various energy levels that fit the equation, i.e. the allowable energies, are fractions of the Rydberg energy, i.e. ER = m·e⁴/(2·ħ²). To be precise, the formula for the nth energy level is: En = −ER/n². The interesting thing is that the spherically symmetric solutions yield real-valued ψ(r) functions. Feynman graphs the solutions for n = 1, 2, and 3. As he writes, all of the wave functions approach zero rapidly for large r (also, confusingly, denoted as ρ) after oscillating a few times, with the number of 'bumps' equal to n. Of course, you should note that you should put the time factor back in in order to correctly interpret these functions.
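[A quick aside before we put that time factor back in: the two scaling constants are easy to check numerically. The CODATA values below are my own inputs; the formulas rB = ħ²/(m·e²) and ER = m·e⁴/(2ħ²), with e² = qe²/(4πε0), are the ones used above.]

```python
import math

hbar = 1.054571817e-34      # J*s
m_e  = 9.1093837015e-31     # kg
q_e  = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m

e2 = q_e**2 / (4*math.pi*eps0)    # the 'e^2' of the text, in J*m

r_B = hbar**2 / (m_e * e2)        # Bohr radius
E_R = m_e * e2**2 / (2*hbar**2)   # Rydberg energy

print(r_B * 1e10)                              # ~0.53 angstrom
print(E_R / q_e)                               # ~13.6 eV
print([-E_R / q_e / n**2 for n in (1, 2, 3)])  # E_n = -E_R/n^2: about -13.6, -3.4, -1.5 eV
```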
Indeed, remember how we separated them when we wrote:

ψ(r, t) = e^(−i·(E/ħ)·t)·ψ(r)

We might say the ψ(r) function is sort of an envelope function for the whole wavefunction, but it's not quite as straightforward as that. :-/ However, I am sure you'll figure it out.

States with an angular dependence

So far, so good. But what if those partial derivatives are not zero? Now the calculations become really complicated. Among other things, we need these transformation matrices for rotations, which we introduced a very long time ago. As mentioned above, I don't have the intention to copy Feynman here, who needs another two or three dense pages to work out the logic. Let me just state the grand result:

• We've got a whole range of definite energy states, which correspond to orbitals that form an orthonormal basis for the actual wavefunction of the electron.
• The orbitals are characterized by three quantum numbers, denoted as l, n and m respectively:
• The l is the quantum number of (total) angular momentum, and it's equal to 0, 1, 2, 3, etcetera. [Of course, as usual, we're measuring in units of ħ.] The l = 0 states are referred to as s-states, the l = 1 states are referred to as p-states, and the l = 2 states are d-states. They are followed by f, g, h, etcetera—for no particular good reason. [As Feynman notes: "The letters don't mean anything now. They did once—they meant 'sharp' lines, 'principal' lines, 'diffuse' lines and 'fundamental' lines of the optical spectra of atoms. But those were in the days when people did not know where the lines came from. After f there were no special names, so we now just continue with g, h, and so on."]
• The m is referred to as the 'magnetic' quantum number, and it ranges from −l to +l.
• The n is the 'principal' quantum number, and it goes from l + 1 to infinity (∞).

What do these things actually look like? Let me insert two illustrations here: one from Feynman, and the other from Wikipedia. The number in front just tracks the number of the s-, p-, d-, etc. orbital. The shaded region shows where the amplitudes are large, and the plus and minus signs show the relative sign of the amplitude. [See my remark above on the fact that the ψ(r) factor is real-valued, even if the wavefunction as a whole is complex-valued.] The Wikipedia image shows the same density plots but, as it was made some 50 years later, with some more color. 🙂 This is it, guys. Feynman takes it further by also developing the electron configurations for the next 35 elements in the periodic table but… Well… I am sure you'll want to read the original here, rather than my summaries. 🙂 Congrats! We now know all we need to know. All that remains is lots of practical exercises, so you can be sure you master the material for your exam. 🙂

Schrödinger's equation and the two de Broglie relations

Original post:

I've re-visited the de Broglie equations a couple of times already. In this post, however, I want to relate them to Schrödinger's equation. Let's start with the de Broglie equations first. Equations. Plural. Indeed, most popularizing books on quantum physics will give you only one of the two de Broglie equations—the one that associates a wavelength (λ) with the momentum (p) of a matter-particle:

λ = h/p

In fact, even the Wikipedia article on the 'matter wave' starts off like that and is, therefore, very confusing, because, for a good understanding of quantum physics, one needs to realize that the λ = h/p equality is just one of a pair of two 'matter wave' equations:

1. λ = h/p
2. f = E/h

These two equations give you the spatial and temporal frequency of the wavefunction respectively. Now, those two frequencies are related – and I'll show you how in a minute – but they are not the same. It's like space and time: they are related, but they are definitely not the same. Now, because any wavefunction is periodic, the argument of the wavefunction – which we'll introduce shortly – will be some angle and, hence, we'll want to express it in radians (or – if you're really old-fashioned – degrees). So we'll want to express the frequency as an angular frequency (i.e. in radians per second, rather than in cycles per second), and the wavelength as a wave number (i.e. in radians per meter). Hence, you'll usually see the two de Broglie equations written as:

1. k = p/ħ
2. ω = E/ħ

It's the same: ω = 2π∙f and f = 1/T (T is the period of the oscillation), and k = 2π/λ, and then ħ = h/2π, of course! [Just to remove all ambiguities: stop thinking about degrees. They're a Babylonian legacy—the Babylonians thought the numbers 6, 12, and 60 had particular religious significance. So that's why we have twelve-hour nights and twelve-hour days, with each hour divided into sixty minutes and each minute divided into sixty seconds, and – particularly relevant in this context – why 'once around' is divided into 6×60 = 360 degrees. Radians are the unit in which we should measure angles because… Well… Google it. They measure an angle in distance units. That makes things easier—a lot easier! Indeed, when studying physics, the last thing you want is artificial units, like degrees.] So… Where were we? Oh… Yes. The de Broglie relation. Popular textbooks usually commit two sins. One is that they forget to say we have two de Broglie relations, and the other one is that the E = h∙f relationship is presented as the twin of the Planck-Einstein relation for photons, which relates the energy (E) of a photon to its frequency (ν): E = h∙ν = ħ∙ω. The former is criminal neglect, I feel. As for the latter… Well… It's true and not true: it's incomplete, I'd say, and, therefore, also very confusing. Why? Because both things lead one to try to relate the two equations, as momentum and energy are obviously related. In fact, I've wasted days, if not weeks, on this. How are they related? What formula should we use? To answer that question, we need to answer another one: what energy concept should we use? Potential energy? Kinetic energy? Should we include the equivalent energy of the rest mass? One quickly gets into trouble here. For example, one can try the kinetic energy, K.E. = m∙v²/2, and use the definition of momentum (p = m∙v), to write E = p²/(2m), and then we could relate the frequency f to the wavelength λ using the general rule that the traveling speed of a wave is equal to the product of its wavelength and its frequency (v = λ∙f). But if E = p²/(2m) and f = v/λ, we get:

p²/(2m) = h∙v/λ ⇔ λ = 2∙h/p

So that is almost right, but not quite: that factor 2 should not be there. In fact, it's easy to see that we'd get de Broglie's λ = h/p equation from his E = h∙f equation if we'd use E = m∙v² rather than E = m∙v²/2. In fact, the E = m∙v² relation comes out of them if we just multiply the two and, yes, use that v = f·λ relation once again:

f∙λ = (E/h)∙(h/p) = E/p = v ⇒ E = p∙v = m∙v²

But… Well… E = m∙v²? How could we possibly justify the use of that formula? The answer is simple: our v = f·λ equation is wrong. It's just something one shouldn't apply to the complex-valued wavefunction.
The 'correct' velocity formula for the complex-valued wavefunction should have that 1/2 factor, so we'd write 2·f·λ = v to make things come out alright. But where would this formula come from? Well… Now it's time to introduce the wavefunction.

The wavefunction

You know the elementary wavefunction:

ψ = ψ(x, t) = e^(−i(ωt − kx)) = e^(i(kx − ωt)) = cos(kx − ωt) + i∙sin(kx − ωt)

As for terminology, note that the term 'wavefunction' refers to what I write above, while the term 'wave equation' usually refers to Schrödinger's equation, which I'll introduce in a minute. Also note the use of boldface indicates we're talking vectors, so we're multiplying the wavenumber vector k with the position vector x = (x, y, z) here, although we'll often simplify and assume one-dimensional space. In any case… So the question is: why can't we use the v = f·λ formula for this wave? The period of cosθ + i·sinθ is the same as that of the sine and cosine function considered separately: cos(θ+2π) + i·sin(θ+2π) = cosθ + i·sinθ, so T = 2π and f = 1/T = 1/2π do not change. So the f, T and λ should be the same, no? No. We've got two oscillations for the price of one here: one 'real' and one 'imaginary'—but both are equally essential and, hence, equally 'real'. So we're actually combining two waves. So it's just like adding other waves: when adding waves, one gets a composite wave that has (a) a phase velocity and (b) a group velocity. Huh? Yes. It's quite interesting. When adding waves, we usually have a different ω and k for each of the component waves, and the phase and group velocity will depend on the relation between those ω's and k's. That relation is referred to as the dispersion relation. To be precise, if you're adding waves, then the phase velocity of the composite wave will be equal to vp = ω/k, and its group velocity will be equal to vg = dω/dk. We'll usually be interested in the group velocity, and so to calculate that derivative, we need to express ω as a function of k, of course, so we write ω as some function of k, i.e. ω = ω(k). There are a number of possibilities then:

1. ω and k may be directly proportional, so we can write ω as ω = a∙k: in that case, we find that vp = vg = a.
2. ω and k are not directly proportional but have a linear relationship, so we can write ω as ω = a∙k + b. In that case, we find that vg = a and… Well… We've got a problem calculating vp, because we don't know what k to use!
3. ω and k may be non-linearly related, in which case… Well… One just has to do the calculation and see what comes out. 🙂

Let's now look back at our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) function. You'll say that we've got only one ω and one k here, so we're not adding waves with different ω's and k's. So… Well… What? That's where the de Broglie equations come in. Look: k = p/ħ, and ω = E/ħ. If we now use the correct energy formula, i.e. the kinetic energy formula E = m·v²/2 (rather than that nonsensical E = m·v² equation) – which we can also write as E = m·v·v/2 = p·v/2 = p·p/2m = p²/2m, with v = p/m the classical velocity of the elementary particle that Louis de Broglie was thinking of – then we can calculate the group velocity of our e^(i(kx − ωt)) = cos(kx−ωt) + i∙sin(kx−ωt) as:

vg = dω/dk = d[E/ħ]/d[p/ħ] = dE/dp = d[p²/2m]/dp = 2p/2m = p/m = v

However, the phase velocity of our e^(i(kx − ωt)) is:

vp = ω/k = (E/ħ)/(p/ħ) = E/p = (p²/2m)/p = p/2m = v/2

So that factor 1/2 only appears for the phase velocity. Weird, isn't it? We find that the group velocity (vg) of the e^(i(kx − ωt)) function is equal to the classical velocity of our particle (i.e.
v), but that its phase velocity (vp) is equal to v divided by 2. Hmm… What to say? Well… Nothing much—except that it makes sense, and very much so, because it's the group velocity of the wavefunction that's associated with the classical velocity of a particle, not the phase velocity. In fact, if we include the rest mass in our energy formula, so if we'd use the relativistic E = γm0c² and p = γm0v formulas (with γ the Lorentz factor), then we find that vp = ω/k = E/p = (γm0c²)/(γm0v) = c²/v, and so that's a superluminal velocity, because v is always smaller than c! What? That's even weirder! If we take the kinetic energy only, we find a phase velocity equal to v/2, but if we include the rest energy, then we get a superluminal phase velocity. It must be one or the other, no? Yep! You're right! So that makes us wonder: is E = m·v²/2 really the right energy concept to use? The answer is unambiguous: no! It isn't! And, just for the record, our young nobleman didn't use the kinetic energy formula when he postulated his equations in his now famous PhD thesis. So what did he use then? Where did he get his equations? I am not sure. 🙂 A stroke of genius, it seems. According to Feynman, that's how Schrödinger got his equation too: intuition, brilliance. In short, a stroke of genius. 🙂 Let's relate these two gems.

Schrödinger's equation and the two de Broglie relations

Louis de Broglie and Erwin Schrödinger published their equations in 1924 and 1926 respectively. Can they be related? The answer is: yes—of course! Let's first look at de Broglie's energy concept, however. Louis de Broglie was very familiar with Einstein's work and, hence, he knew that the energy of a particle consisted of three parts:

1. The particle's rest energy m0c², which de Broglie referred to as internal energy (Eint): this 'internal energy' includes the rest mass of the 'internal pieces', as he put it (now we call those 'internal pieces' quarks), as well as their binding energy (i.e. the quarks' interaction energy);
2. Any potential energy it may have because of some field (so de Broglie was not assuming the particle was traveling in free space), which we'll denote by V: the field(s) can be anything—gravitational, electromagnetic—you name it: whatever changes the energy because of the position of the particle;
3. The particle's kinetic energy, which we wrote in terms of its momentum p: K.E. = m·v²/2 = m²·v²/(2m) = (m·v)²/(2m) = p²/(2m).

Indeed, in my previous posts, I would write the wavefunction as de Broglie wrote it, i.e. with all three energy components in the argument: θ = [(Eint + p²/(2m) + V)·t − p∙x]/ħ. In those posts – such as my post on virtual particles – I'd also note how a change in potential energy plays out: a change in potential energy, when moving from one place to another, would change the wavefunction, but through the momentum only—so it would impact the spatial frequency only. So the change in potential would not change the temporal frequencies ω1 = [Eint + p1²/(2m) + V1]/ħ and ω2 = [Eint + p2²/(2m) + V2]/ħ. Why? Or why not, I should say? Because of the energy conservation principle—or its equivalent in quantum mechanics. The temporal frequency f or ω, i.e. the time-rate of change of the phase of the wavefunction, does not change: all of the change in potential, and the corresponding change in kinetic energy, goes into changing the spatial frequency, i.e. the wave number k or the wavelength λ, as potential energy becomes kinetic or vice versa. So is that consistent with what we wrote above, that E = m·v²? Maybe. Let's think about it.
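[Before we do, a quick aside: the relativistic group and phase velocities quoted above (vg = v and vp = c²/v) are easy to verify symbolically. A minimal sketch with sympy:]

```python
from sympy import symbols, sqrt, diff, simplify

p, m0, c = symbols('p m0 c', positive=True)

E = sqrt(p**2 * c**2 + m0**2 * c**4)   # relativistic energy (the same thing as gamma*m0*c^2)

v_group = simplify(diff(E, p))         # dE/dp = p*c^2/E, i.e. the classical velocity v
v_phase = simplify(E / p)              # E/p

print(v_group)
print(simplify(v_group * v_phase))     # c**2, so v_phase = c^2/v: the superluminal value
```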
Let's first look at Schrödinger's equation in free space (i.e. a space with zero potential) once again:

∂ψ/∂t = i·(ħ/2m)·∇²ψ

If we insert our ψ = e^(i(kx − ωt)) formula in Schrödinger's free-space equation, we get the following nice result. [To keep things simple, we're just assuming one-dimensional space for the calculations, so ∇²ψ = ∂²ψ/∂x². But the result can easily be generalized.] The time derivative on the left-hand side is ∂ψ/∂t = −iω·e^(i(kx − ωt)). The second-order derivative on the right-hand side is ∂²ψ/∂x² = (ik)·(ik)·e^(i(kx − ωt)) = −k²·e^(i(kx − ωt)). The e^(i(kx − ωt)) factor on both sides cancels out and, hence, equating both sides gives us the following condition:

−iω = −i·(ħ/2m)·k² ⇔ ω = (ħ/2m)·k²

Substituting ω = E/ħ and k = p/ħ yields:

E/ħ = (ħ/2m)·p²/ħ² = m²·v²/(2m·ħ) = m·v²/(2ħ) ⇔ E = m·v²/2

Bingo! We get that kinetic energy formula! But now… What if we'd not be considering free space? In other words: what if there is some potential? Well… We'd use the complete Schrödinger equation, which is:

i·ħ·∂ψ/∂t = −(ħ²/2m)·∇²ψ + V·ψ

Huh? Why is there a minus sign now? Look carefully: I moved the i·ħ factor on the left-hand side to the other side when writing the free space version. If we'd do that for the complete equation, we'd get:

∂ψ/∂t = i·(ħ/2m)·∇²ψ − i·(V/ħ)·ψ

I like that representation a lot more—if only because it makes it a lot easier to interpret the equation—but, for some reason I don't quite understand, you won't find it like that in textbooks. Now how does it work when using the complete equation, so we add the −(i/ħ)·V·ψ term? It's simple: the e^(i(kx − ωt)) factor also cancels out, and so we get:

−iω = −i·(ħ/2m)·k² − i·(V/ħ) ⇔ ω = (ħ/2m)·k² + V/ħ

Substituting ω = E/ħ and k = p/ħ once more now yields:

E/ħ = (ħ/2m)·p²/ħ² + V/ħ = m²·v²/(2m·ħ) + V/ħ = m·v²/(2ħ) + V/ħ ⇔ E = m·v²/2 + V

Bingo once more! The only thing that's missing now is the particle's rest energy m0c², which de Broglie referred to as internal energy (Eint). That includes everything, i.e. not only the rest mass of the 'internal pieces' (as said, now we call those 'internal pieces' quarks) but also their binding energy (i.e. the quarks' interaction energy). So how do we get that energy concept out of Schrödinger's equation? There's only one answer to that: that energy is just like V. We can, quite simply, just add it. That brings us to the last and final question: what about our vg = v result if we do not use the kinetic energy concept, but the E = m·v²/2 + V + Eint concept? The answer is simple: nothing. We still get the same, because we're taking a derivative and the V and Eint just appear as constants, and so their derivative with respect to p is zero. Check it:

vg = dω/dk = d[E/ħ]/d[p/ħ] = dE/dp = d[p²/2m + V + Eint]/dp = 2p/2m = p/m = v

It's now pretty clear how this thing works. To localize our particle, we just superimpose a zillion of these e^(i(kx − ωt)) functions. The only condition is that we've got that fixed vg = dω/dk = v relationship, but so we do have such fixed relationship—as you can see above. In fact, the Wikipedia article on the dispersion relation mentions that the de Broglie equations imply the following relation between ω and k: ω = ħk²/2m. As you can see, that's not entirely correct: the author conveniently forgets the potential (V) and the rest energy (Eint) in the energy formula here! What about the phase velocity? That's a different story altogether. You can think about that for yourself. 🙂 I should make one final point here.
As said, in order to localize a particle (or, to be precise, its wavefunction), we're going to add a zillion elementary wavefunctions, each of which will make its own contribution to the composite wave. That contribution is captured by some coefficient ai in front of every e^(iθi) function, so we'll have a zillion ai·e^(iθi) functions, really. [Yep. Bit confusing: I use i here both as a subscript and as the imaginary unit.] In case you wonder how that works out with Schrödinger's equation, the answer is – once again – very simple: both the time derivative (which is just a first-order derivative) and the Laplacian are linear operators, so Schrödinger's equation, for a composite wave, can just be re-written as the sum of a zillion 'elementary' wave equations. So… Well… We're all set now to effectively use Schrödinger's equation to calculate the orbitals for a hydrogen atom, which is what we'll do in our next post. In the meanwhile, you can amuse yourself with reading a nice Wikibook article on the Laplacian, which gives you a nice feel for what Schrödinger's equation actually represents—even if I gave you a good feel for that too on my Essentials page. Whatever. You choose. Just let me know what you liked best. 🙂 Oh… One more point: the vg = dω/dk = d[p²/2m]/dp = p/m = v calculation obviously assumes we can treat m as a constant. In fact, what we're actually doing is a rather complicated substitution of variables: you should write it all out—but that's not the point here. The point is that we're actually doing a non-relativistic calculation. Now, that does not mean that the wavefunction isn't consistent with special relativity. It is. In fact, in one of my posts, I show how we can explain relativistic length contraction using the wavefunction. But it does mean that our calculation of the group velocity is not relativistically correct. But that's a minor point: I'll leave it for you as an exercise to calculate the relativistically correct formula for the group velocity. Have fun with it! 🙂 Note: Notations are often quite confusing. One should, generally speaking, denote a frequency by ν (nu), rather than by f, so as to not cause confusion with any function f, but then… Well… You create a new problem when you do that, because that Greek letter nu (ν) looks damn similar to the v of velocity, so that's why I'll often use f when I should be using nu (ν). As for the units, a frequency is expressed in cycles per second, while the angular frequency ω is expressed in radians per second. One cycle covers 2π radians and, therefore, we can write: ν = ω/2π. Hence, h∙ν = h∙ω/2π = ħ∙ω. Both ν as well as ω measure the time-rate of change of the phase of the wave function, as opposed to k, i.e. the spatial frequency of the wave function, which depends on the speed of the wave. Physicists also often use the symbol v for the speed of a wave, which is also hugely confusing, because it's also used to denote the classical velocity of the particle. And then there are two wave velocities, of course: the group versus the phase velocity. In any case… I find the use of that other symbol (c) for the wave velocity even more confusing, because this symbol is also used for the speed of light, and the speed of a wave is not necessarily (read: usually not) equal to the speed of light. In fact, both the group as well as the phase velocity of a particle wave are very different from the speed of light.
The speed of a wave and the speed of light only coincide for electromagnetic waves and, even then, it should be noted that photons also have amplitudes to travel faster or slower than the speed of light.

Quantum-mechanical operators

The operator concept

Some remarks on notation

Matrix mechanics

In fact, that's just like writing:

Aij ≡ 〈 i | A | j 〉
| φ 〉 = A | ψ 〉
〈 ψ | φ 〉 = 〈 φ | ψ 〉*

The energy operator (H)

H | ηi 〉 = Ei·| ηi 〉 = | ηi 〉·Ei

The average value operator (A)

〈Lz〉av = 〈 ψ | Lz | ψ 〉

In fact, further generalization yields the following grand result:

The energy operator for wavefunctions (H)

Eav = ∫ ψ*(x) H ψ(x) dx
Eav = ∫ ψ*(r) H ψ(r) dV

The position operator (x)

The momentum operator (px)

φ(p) = 〈 mom p | ψ 〉

Dirac's delta function and Schrödinger's equation in three dimensions

∫ δ(x) dx = 1
〈 x | x′ 〉 = δ(x − x′)

Feynman summarizes it all together and then… Well… Now we really know it all, don't we? 🙂

Schrödinger's equation in three dimensions

Schrödinger's equation: the original approach

Of course, your first question when seeing the title of this post is: what's original, really? Well… The answer is simple: it's the historical approach, and it's original because it's actually quite intuitive. Indeed, Lecture no. 16 in Feynman's third Volume of Lectures on Physics is like a trip down memory lane, as Feynman himself acknowledges, after presenting Schrödinger's equation using that very rudimentary model we developed in our previous post: "We do not intend to have you think we have derived the Schrödinger equation but only wish to show you one way of thinking about it. When Schrödinger first wrote it down, he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature." So… Well… Let's have a look at it. 🙂 We were looking at some electron we described in terms of its location at one or the other atom in a linear array (think of it as a line). We did so by defining base states |n〉 = |xn〉, noting that the state of the electron at any point in time could then be written as:

|φ〉 = ∑ |xn〉·Cn(t) = ∑ |xn〉〈xn|φ〉 (the sum is taken over all n)

The Cn(t) = 〈xn|φ〉 coefficient is the amplitude for the electron to be at xn at t. Hence, the Cn(t) amplitudes vary with t as well as with xn. We'll re-write them as Cn(t) = C(xn, t) = C(xn). Note that the latter notation does not explicitly show the time dependence. The Hamiltonian equation we derived in our previous post is now written as:

iħ·(∂C(xn)/∂t) = E0·C(xn) − A·C(xn+b) − A·C(xn−b)

Note that, as part of our move from the Cn(t) to the C(xn) notation, we write the time derivative dCn(t)/dt now as ∂C(xn)/∂t, so we use the partial derivative symbol now (∂). Of course, the other partial derivative will be ∂C(x)/∂x as we move from the count variable xn to the continuous variable x, but let's not get ahead of ourselves here. The solution we found for our C(xn) functions was the following wavefunction:

C(xn) = a·e^(i(k∙xn − ω·t)) = a·e^(−i∙ω·t)·e^(i∙k∙xn) = a·e^(−i·(E/ħ)·t)·e^(i·k∙xn)

We also found the following relationship between E and k:

E = E0 − 2A·cos(kb)

Now, even Feynman struggles a bit with the definition of E0 and k here, and their relationship with E, which is graphed below.
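[The graph itself is not reproduced here, but it is easy to generate yourself. A minimal matplotlib sketch, with arbitrary illustrative values for E0, A and b:]

```python
import numpy as np
import matplotlib.pyplot as plt

E0, A, b = 2.0, 1.0, 1.0                   # arbitrary illustrative values
k = np.linspace(-np.pi/b, np.pi/b, 500)    # k runs from -pi/b to +pi/b
E = E0 - 2*A*np.cos(k*b)

plt.plot(k, E)
plt.xlabel('k')
plt.ylabel('E')
plt.title('E = E0 - 2A*cos(kb)')
plt.show()
```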
Indeed, he first writes, as he starts developing the model, that E0 is, physically, the energy the electron would have if it couldn't leak away from one of the atoms, but then he also adds: "It represents really nothing but our choice of the zero of energy." This is all quite enigmatic because we cannot just do whatever we want when discussing the energy of a particle. As I pointed out in one of my previous posts, when discussing the energy of a particle in the context of the wavefunction, we generally consider it to be the sum of three different energy concepts:

1. The particle's rest energy m0c², which de Broglie referred to as internal energy (Eint), and which includes the rest mass of the 'internal pieces', as Feynman puts it (now we call those 'internal pieces' quarks), as well as their binding energy (i.e. the quarks' interaction energy).
2. Any potential energy it may have because of some field (i.e. if it is not traveling in free space), which we usually denote by U. This field can be anything—gravitational, electromagnetic: it's whatever changes the energy of the particle because of its position in space.
3. The particle's kinetic energy, which we write in terms of its momentum p: m·v²/2 = m²·v²/(2m) = (m·v)²/(2m) = p²/(2m).

It's obvious that we cannot just "choose" the zero point here: the particle's rest energy is its rest energy, and its velocity is its velocity. So it's not quite clear what the E0 in our model really is. As far as I am concerned, it represents the average energy of the system really, so it's just like the E0 for our ammonia molecule, or the E0 for whatever two-state system we've seen so far. In fact, when Feynman writes that we can "choose our zero of energy so that E0 − 2A = 0" (so the minimum of that curve above is at the zero of energy), he actually makes some assumption in regard to the relative magnitude of the various amplitudes involved. We should probably think about it in this way: −(i/ħ)·E0 is the amplitude for the electron to just stay where it is, while i·A/ħ is the amplitude to go somewhere else—and note we've got two possibilities here: the electron can go to |xn+1〉, or, alternatively, it can go to |xn−1〉. Now, amplitudes can be associated with probabilities by taking the absolute square, so I'd re-write the E0 − 2A = 0 assumption as:

E0 = 2A ⇔ |−(i/ħ)·E0|² = |(i/ħ)·2A|²

Hence, in my humble opinion, Feynman's assumption that E0 − 2A = 0 has nothing to do with 'choosing the zero of energy'. It's more like a symmetry assumption: we're basically saying it's as likely for the electron to stay where it is as it is to move to the next position. It's an idea I need to develop somewhat further, as Feynman seems to just gloss over these little things. For example, I am sure it is not a coincidence that the EI, EII, EIII and EIV energy levels we found when discussing the hyperfine splitting of the hydrogen ground state also add up to 0. In fact, you'll remember we could actually measure those energy levels (EI = EII = EIII = A ≈ 9.23×10⁻⁶ eV, and EIV = −3A ≈ −27.7×10⁻⁶ eV), so saying that we can "choose" some zero energy point is plain nonsense. The question just doesn't arise. In any case, as I have to continue the development here, I'll leave this point for further analysis in the future. So… Well… Just note this E0 − 2A = 0 assumption, as we'll need it in a moment. The second assumption we'll need concerns the variation in k. As you know, we can only get a wave packet if we allow for uncertainty in k which, in turn, translates into uncertainty for E.
We write:

ΔE = Δ[E0 − 2A·cos(kb)]

Of course, we'd need to interpret the Δ as a variance (σ²) or a standard deviation (σ) so we can apply the usual rules – i.e. var(a) = 0, var(aX) = a²·var(X), and var(aX ± bY) = a²·var(X) + b²·var(Y) ± 2ab·cov(X, Y) – to be a bit more precise about what we're writing here, but you get the idea. In fact, let me quickly write it out:

var[E0 − 2A·cos(kb)] = var(E0) + 4A²·var[cos(kb)] ⇔ var(E) = 4A²·var[cos(kb)]

(The var(E0) term drops out because E0 is a constant, and the variance of a constant is zero.) Now, you should check my post scriptum to my page on the Essentials, to see what the probability density function of the cosine of a randomly distributed variable looks like, and then you should go online to find a formula for its variance, and then you can work it all out yourself, because… Well… I am not going to do it for you. What I want to do here is just show how Feynman gets Schrödinger's equation out of all of these simplifications.

So what's the second assumption? Well… As the graph shows, our k can take any value between −π/b and +π/b, and therefore, the kb argument in our cosine function can take on any value between −π and +π. In other words, kb could be any angle. However, as Feynman puts it—we'll be assuming that kb is 'small enough', so we can use the small-angle approximations whenever we see the cos(kb) and/or sin(kb) functions. So we write: sin(kb) ≈ kb and cos(kb) ≈ 1 − (kb)²/2 = 1 − k²b²/2. Now, that assumption led to another grand result, which we also derived in our previous post. It had to do with the group velocity of our wave packet, which we calculated as:

v = dω/dk = (2Ab²/ħ)·k

Of course, we should interpret our k here as "the typical k". Huh? Yes… That's how Feynman refers to it, and I have no better term for it. It's some kind of 'average' of the Δk interval, obviously, but… Well… Feynman does not give us any exact definition here. Of course, if you look at the graph once more, you'll say that, if the typical kb has to be "small enough", then its expected value should be zero. Well… Yes and no. If the typical kb is zero, or if k is zero, then v is zero, and then we've got a stationary electron, i.e. an electron with zero momentum. However, because we're doing what we're doing (that is, we're studying "stuff that moves"—as I put it disrespectfully in a few of my posts, so as to distinguish from our analyses of "stuff that doesn't move", like our two-state systems, for example), our "typical k" should not be zero here. OK… We can now calculate what's referred to as the effective mass of the electron, i.e. the mass that appears in the classical kinetic energy formula: K.E. = m·v²/2. Now, there are two ways to do that, and both are somewhat tricky in their interpretation:

1. Using both the E0 − 2A = 0 as well as the "small kb" assumption, we find that E = E0 − 2A·(1 − k²b²/2) = A·k²b². Using that for the K.E. in our formula yields: meff = 2A·k²b²/v² = 2A·k²b²/[(2Ab²/ħ)·k]² = ħ²/(2Ab²)

2. We can use the classical momentum formula (p = m·v), and then the 2nd de Broglie equation, which tells us that each wavenumber (k) is to be associated with a value for the momentum (p) using p = ħk (so p is proportional to k, with ħ as the factor of proportionality). So we can now calculate meff as meff = ħk/v. Substituting again for what we've found above gives us the same: meff = ħ·k/v = ħ·k/[(2Ab²/ħ)·k] = ħ²/(2Ab²)

Of course, we're not supposed to know the de Broglie relations at this point in time.
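As a quick numerical cross-check (an aside of mine, not part of Feynman's argument): the same meff also follows from the curvature of the E(k) curve at k = 0, i.e. meff = ħ²/(d²E/dk²). The values for A and b below are arbitrary illustrative choices, not measured numbers:

```python
import numpy as np

hbar = 1.054571817e-34        # J·s
A = 1.602e-19                 # hopping amplitude: an arbitrary 1 eV, in joules
b = 5e-10                     # lattice spacing: an arbitrary 0.5 nm, in metres
E0 = 2 * A                    # the E0 - 2A = 0 choice from above

def E(k):
    return E0 - 2 * A * np.cos(k * b)

dk = 1e6                                                  # small step in k (1/m)
curvature = (E(dk) - 2 * E(0.0) + E(-dk)) / dk**2         # numerical d²E/dk² at k = 0
print(hbar**2 / curvature)                  # effective mass from the curvature of E(k)
print(hbar**2 / (2 * A * b**2))             # ħ²/(2Ab²): the formula in the text – same number
print(hbar**2 / (2 * A * b**2) / 9.109e-31) # ... in units of the electron mass, just for feel
```

For a hopping amplitude and a lattice spacing of this order of magnitude, the effective mass comes out as a modest fraction of the electron mass, which is the kind of number one indeed quotes for electrons in a crystal.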
🙂 But, now that you've seen them anyway, note how we have two formulas for the momentum:

• The classical formula (p = m·v) tells us that the momentum is proportional to the classical velocity of our particle, and m is then the factor of proportionality.

• The quantum-mechanical formula (p = ħk) tells us that the (typical) momentum is proportional to the (typical) wavenumber, with Planck's constant (ħ) as the factor of proportionality.

Combining both gives us the classical and the quantum-mechanical perspective of a moving particle in one equation: m·v = ħ·k. I know… It's an obvious equation but… Well… Think of it.

It's time to get back to the main story now. Remember we were trying to find Schrödinger's equation? So let's get on with it. 🙂

To do so, we need one more assumption. It's the third major simplification and, just like the others, the assumption is obvious at first, but not on second thought. 😦 So… What is it? Well… It's easy to see that, in our meff = ħ²/(2Ab²) formula, all depends on the value of 2Ab². So, just like we should wonder what happens with that kb factor in the argument of our sine or cosine function if b goes to zero—i.e. if we're letting the lattice spacing go to zero, so we're moving from a discrete to a continuous analysis now—we should also wonder what happens with that 2Ab² factor! Well… Think about it. Wouldn't it be reasonable to assume that the effective mass of our electron is determined by some property of the material, or the medium (so that's the silicon in our previous post) and, hence, that it's constant really. Think of it: we're not changing the fundamentals really—we just have some electron roaming around in some medium and all that we're doing now is bringing those xn closer together. Much closer. It's only logical, then, that our amplitude to jump from xn±1 to xn would also increase, no? So what we're saying is that 2Ab² is some constant which we write as ħ²/meff or, what amounts to the same, that Ab² = ħ²/(2·meff). Of course, you may raise two objections here:

1. The Ab² = ħ²/(2·meff) assumption establishes a very particular relation between A and b, as we can write A as A = [ħ²/(2meff)]·b⁻² now. So we've got like a y = 1/x² relation here. Where the hell does that come from?

2. We were talking some real stuff here: a crystal lattice with atoms that, in reality, do have some spacing, so that corresponds to some real value for b. So that spacing gives some actual physical significance to those xn values.

Well… What can I say? I think you should re-read that quote of Feynman when I started this post. We're going to get Schrödinger's equation – i.e. the ultimate prize for all of the hard work that we've been doing so far – but… Yes. It's really very heuristic, indeed! 🙂 But let's get on with it now! We can re-write our Hamiltonian equation as:

iħ·(∂C(xn)/∂t) = (E0 − 2A)·C(xn) + A·[2C(xn) − C(xn+b) − C(xn−b)] = A·[2C(xn) − C(xn+b) − C(xn−b)]

(The first term vanishes because of our E0 − 2A = 0 assumption.) Now, I know your brain is about to melt down but, fiddling with this equation as we're doing right now, Schrödinger recognized a formula for the second-order derivative of a function. I'll just jot it down, and you can google it so as to double-check where it comes from:

∂²f(x)/∂x² = lim [f(x+b) + f(x−b) − 2·f(x)]/b² as b goes to zero

Just substitute f(x) for C(xn) in the second part of our equation above, and you'll see we can effectively write that 2C(xn) − C(xn+b) − C(xn−b) factor as:

2C(xn) − C(xn+b) − C(xn−b) ≈ −b²·∂²C(x)/∂x²

We're done. We just keep iħ·(∂C(xn)/∂t) on the left-hand side now and multiply the expression above by A, to get what we wanted to get, and that's – YES!
– Schrödinger's equation:

iħ·∂C(x, t)/∂t = −(ħ²/2meff)·∂²C(x, t)/∂x²

Whatever your objections to this 'derivation', it is the correct equation. For a particle in free space, we just write m instead of meff, but it's exactly the same. I'll now give you Feynman's full quote, which is quite enlightening:

"We do not intend to have you think we have derived the Schrödinger equation but only wish to show you one way of thinking about it. When Schrödinger first wrote it down, he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature. The purpose of our discussion is then simply to show you that the correct fundamental quantum mechanical equation [i.e. Schrödinger's equation] has the same form you get for the limiting case of an electron moving along a line of atoms. We can think of it as describing the diffusion of a probability amplitude from one point to the next along the line. That is, if an electron has a certain amplitude to be at one point, it will, a little time later, have some amplitude to be at neighboring points. In fact, the equation looks something like the diffusion equations which we have used in Volume I. But there is one main difference: the imaginary coefficient in front of the time derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger's equation are complex waves."

So… That says it all, I guess. Isn't it great to be where we are? We've really climbed a mountain here. And I think the view is gorgeous. 🙂

Oh—just in case you'd think I did not give you Schrödinger's equation, let me write it in the form you'll usually see it:

iħ·∂ψ(x, t)/∂t = −(ħ²/2m)·∂²ψ(x, t)/∂x² + V(x)·ψ(x, t)

Done! 🙂
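To see Feynman's "complex diffusion" remark in action, here's a minimal numerical sketch (mine, not his): it evolves a free-particle wave packet using the same finite-difference second derivative we used above, in units where ħ = m = 1 and with arbitrary grid choices. The packet's centre drifts at the group velocity while its spread grows—wave-like behaviour, quite unlike the monotonic smoothing of an ordinary (real) diffusion equation:

```python
import numpy as np

N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Initial amplitude: a Gaussian wave packet with mean wavenumber k0.
k0 = 2.0
psi0 = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Hamiltonian H = -(1/2)·d²/dx², discretized with [ψ(x+b) + ψ(x−b) − 2ψ(x)]/b².
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Time evolution ψ(t) = exp(-i·H·t)·ψ(0), done exactly via the eigen-decomposition of H.
evals, evecs = np.linalg.eigh(H)
def evolve(t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

for t in (0.0, 2.0, 4.0):
    prob = np.abs(evolve(t))**2
    mean = np.sum(x * prob) * dx
    spread = np.sqrt(np.sum((x - mean)**2 * prob) * dx)
    print(f"t = {t}:  <x> = {mean:+.2f},  spread = {spread:.2f}")
```

The centre of the packet moves at roughly k0 (the group velocity in these units) and the spread grows with time, while the norm stays fixed—exactly the behaviour Feynman contrasts with a gas spreading along a thin tube.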
Wednesday, January 06, 2010

Is Physics Cognitively Biased?

Recently we discussed the question "What is natural?" Today, I want to expand on the key point I was making. What humans find interesting, natural, elegant, or beautiful originates in brains that developed through evolution and were shaped by sensory input received and processed. This genetic history also affects the sort of question we are likely to ask, the kind of theory we search for, and how we search. I am wondering, then: might it be that we are biased to miss clues necessary for progress in physics?

It would be surprising if we were scientifically entirely unbiased. Cognitive biases caused by evolutionary traits inappropriate for the modern world have recently received a lot of attention. Many psychological effects in consumer behavior, opinion and decision making are well known by now (and frequently used and abused). Also the neurological origins of religious thought and superstition have been examined. One study particularly interesting in this context is Peter Brugger et al's on the role of dopamine in identifying signals in noise.

If you bear with me for a paragraph, there's something else interesting about Brugger's study. I came across this study mentioned in Bild der Wissenschaft (a German popular science magazine, high quality, very recommendable), but no reference. So I checked Google Scholar but didn't find the paper. I checked the author's website but nothing there either. Several Google web searches on related keywords however brought up first of all a note in NewScientist from July 2002. No journal reference. Then there's literally dozens of articles mentioning the study after this. Some do refer to, some don't refer to the NewScientist article, but they all sound like they copied from each other. The article was mentioned in Psychology Today, was quoted in newspapers, etc. But no journal reference anywhere. Frustrated, I finally wrote to Peter Brugger asking for a reference. He replied almost immediately. Turns out the study was not published at all! Though it is meanwhile, after more than 7 years, written up and apparently in the publication process, I find it astonishing how much attention a study could get without having been peer reviewed.

Anyway, Brugger was kind enough to send me a copy of the paper in print, so I know now what they actually did. To briefly summarize it: they recruited two groups of people, 20 each. One were self-declared believers in the paranormal, the other one self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than binary though). These people performed two tasks. In one task they were briefly shown (short) words that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combinations of facial features. (The two tasks apparently use different parts of the brain, but that's not so relevant for our purposes. Also, the stimuli were shown to the right and left visual fields separately for the same reason, but that's not so important for us either.) The participants had to identify a "signal" (word/face) from the "noise" (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series.
One is the trend of the bias: whether it's randomly wrong, has a bias for false positives or a bias for false negatives (Type I error or Type II error). The second parameter is how well the signal was identified in total. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson's medication that increases the dopamine level in the brain), the other half a placebo.

The result was the following. First, without the medication the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where it was really just noise). The bias was equally strong for both, but in opposite directions. It is interesting though not too surprising that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (random yes/no misses), but the skeptics overall became as bad at identifying signals as the believers, who stayed equally bad as without extra dopamine. The researchers' conclusion is that the (previously made) claim that dopamine generally increases the signal-to-noise ratio is wrong, and that certain psychological traits (roughly, the willingness to believe in the paranormal) correlate with a tendency to false positives. Moreover, other research results seem to have shown a correlation between high dopamine levels and various psychological disorders. One can roughly say if you fiddle with the dose you'll start seeing "signals" everywhere and eventually go bonkers (psychotic, paranoid, schizoid, you name it). Not my field, so I can't really comment on the status of this research. Sounds plausible enough (I'm seeing a signal here).

In any case, these research studies show that our brain chemistry contributes to us finding patterns and signals and, in the extreme, also to assigning meaning to the meaningless (there really is no hidden message in the word-verification). Evolutionarily, Type I errors in signal detection are vastly preferable: It's fine if a breeze moving leaves gives you an adrenaline rush but you only mistake a tiger for a breeze once. Thus, today the world is full of believers (Al Gore is the antichrist) and paranoids who see a tiger in every bush/a feminist in every woman. Such overactive signal identification has also been argued to contribute to the wide spread of religions (a topic that currently seems to be fashionable). Seeing signals in noise is however also a source of creativity and inspiration. Genius and insanity, as they say, go hand in hand.

It seems however odd to me to blame religion on a cognitive bias for Type I errors. Searching for hidden relations at the risk that there are none doesn't per se only characterize believers in The Almighty Something, but also scientists. The difference is in the procedure thereafter. The religious will see patterns and interpret them as signs of God. The scientist will see patterns and look for an explanation. (God can be aptly characterized as the ultimate non-explanation.) This means that Brugger's (self-)classification of people by paranormal beliefs is somewhat beside the point (it likely depends on the education). You don't have to believe in ESP to see patterns where there are none.
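To make the Type I/Type II bookkeeping concrete, here is a toy simulation (my own construction, not Brugger et al's protocol): two observers with the same sensitivity to the signal but different response criteria do the same yes/no task; the "believer" collects false alarms, the "skeptic" collects misses, and the two biases come out equally strong but in opposite directions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
d_prime = 1.0                                   # same assumed sensitivity for both observers
is_signal = rng.random(n_trials) < 0.5          # half the trials contain a real signal
evidence = rng.normal(0.0, 1.0, n_trials) + d_prime * is_signal

def error_rates(criterion):
    says_signal = evidence > criterion
    type_1 = says_signal[~is_signal].mean()     # false positives: "signal" on noise trials
    type_2 = (~says_signal)[is_signal].mean()   # false negatives: "noise" on signal trials
    return type_1, type_2

for label, criterion in [("believer (low criterion) ", 0.0),
                         ("skeptic  (high criterion)", 1.0)]:
    t1, t2 = error_rates(criterion)
    print(f"{label}: Type I = {t1:.2f}, Type II = {t2:.2f}")
```

The sensitivity (d_prime) and criteria here are arbitrary illustrative values; the point is only that the same data plus a different decision threshold produces exactly the mirror-image pattern of errors described above.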
If you read physics blogs you know there's an abundance of people who have "theories" for everything from the planetary orbits to the mass of the neutron to the value of the gravitational constant. One of my favorites is the guy who noticed that in SI units G times c is to good precision 2/100. (Before you build a theory on that noise, recall that I told you last time the values of dimensionful parameters are meaningless.) The question then arises, how frequently do scientists see patterns where there are none? And what impact does this cognitive bias have on the research projects we pursue? Did you know that the Higgs VEV is the geometric mean of the Planck mass and the 4th root of the Cosmological Constant? Ever heard of Koide's formula? Anomalous alignments in the CMB? The 1.5 sigma "detection"? It can't be coincidence our universe is "just right" for life. Or can it?

This then brings us back to my earlier post. (I warned you I would "expand" on the topic!) The question "What is natural?" is a particularly simple and timely example where physicists search for an explanation. It seems though I left those readers confused who didn't follow my advice: If you didn't get what I said, just keep asking why. In the end the explanation is one of intuition, not of scientific derivation. It is possible that the Standard Model is finetuned. It's just not satisfactory.

For example Lubos Motl, a blogger in Pilsen, Czech Republic, believes that naturalness is not an assumption but "tautologically true." As "proof" he offers us that a number is natural when it is likely. What is likely however depends on the probability distribution used. This argument is thus tautological indeed: it merely shifts the question of what is natural from the numbers to what is a natural probability distribution. Unsurprisingly then, Motl has to assume the probability distribution is not based on an equation with "very awkward patterns," and the argument collapses to "you won't get too far from 1 unless special, awkward, unlikely, unusual things appear." Or in other words, things are natural unless they're unnatural. (Calling it Bayesian inference doesn't improve the argument. We're not talking about the probability of a hypothesis, the hypothesis is the probability.) I am mentioning this sad case because it is exactly the kind of faulty argument that my post was warning of. (Motl also seems to find the cosine function more natural than the exponential function. As far as I am concerned the exponential function is very natural. Think otherwise? Well, this is why I'm saying it's not a scientific argument.)

The other point that some readers misunderstood is my opinion on whether or not asking questions of naturalness is useful. I do think naturalness is a useful guide. The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained), but it's definitely well documented. Dimensionless numbers that are much larger or smaller than one have undeniably an itch-factor. I'm not claiming one should ignore this itch. But be aware that this want for explanation is an intuition, call it a brain child. I am not saying thou shalt disregard your intuition. I say thou shalt be clear what is intuition and what derivation. Don't misconstrue for a signal what is none. And don't scratch too much.

But more importantly it is worthwhile to ask what formed our intuitions. On the one hand they are useful. On the other hand we might have evolutionary blind spots when it comes to scientific theories.
We might ask the wrong questions. We might be on the wrong path because we believe to have seen a face in random noise, and miss other paths that could lead us forward. When a field has been stuck for decades one should consider the possibility that something is done systematically wrong. To some extent that possibility has been considered recently. Extreme examples of skeptics in science are proponents of the multiverse, Max Tegmark with his Mathematical Universe ahead of all. The multiverse is possibly the mother of all Type II errors, a complete denial that there is any signal. In Tegmark's universe it's all just math. Tegmark unfortunately fails to notice it's impossible for us to know that a theory is free of cognitive bias, which he calls "human baggage." (Where is the control group?) Just because we cannot today think of anything better than math to describe Nature doesn't mean there is nothing better. Genius and insanity...

As far as the multiversists are concerned, the "principle of mediocrity" has dawned upon them, and now they ask for a probability distribution in the multiverse according to which our own universe is "common." (Otherwise they'd have nothing left to explain. Not the kind of research area you want to work in.) That however is but a modified probabilistic version of the original conundrum: trying to explain why our theories have the features they have. The question why our universe is special is replaced by why our universe is especially unspecial. Same emperor, different clothes. The logical consequence of the multiversial way is a theory like Lee Smolin's Cosmological Natural Selection (see also). It might take string theorists some more decades to notice though. (And then what? It's going to be highly entertaining. Unless of course the main proponents are dead by then.)

Now I'm wondering what would happen if you gave Max Tegmark a dose of levodopa? It would be interesting if a version of Brugger's test was available online and we could test for a correlation between Type I/II errors and sympathy for the multiverse (rather than a belief in ESP). I would like to know how I score. While I am a clear non-believer when it comes to NewScientist articles, I do see patterns in the CMB ;-) [Click here if you don't see what I see]

The title of this post is of course totally biased. I could have replaced physics with science but tend to think physics first.

Conclusion: I was asking whether it might be that we are biased to miss clues necessary for progress in physics. I am concluding it is more likely we're jumping on clues that are none.

Purpose: This post is supposed to make you think about what you think about.

Reminder: You're not supposed to comment without first having completely read this post.

1. Ooh, isn't that weird?! Your initials are carved into the CMB! I saw the blue face of the devil first. That's definitely there as well. Isn't science great! :)

2. If modern particle physics is cognitively biased, the biases are subtle. I'd say subtler than the assumptions about geometry (Euclidean) and time (Newtonian) that prevailed before Einstein.

3. Now if we could only look at the CMBs of all of Tegmark's other universes, what would the great message be that the Romans placed there ?-) Of course, thought and perception are necessarily biased by the sense receptors and the brains and physiology each of us is equipped with – and by the history of our experiences, personal and collective. What we try to do, especially with science, is to use experience to gradually separate signal from noise.
And we can do that no better than is allowed by the set of tools we're born with and which we add to, as a result of added experience and understanding. Because our 'equipment' varies slightly for genetic and other accidental reasons, so will our biases. But the strategies for enhancing S/N, should tend to reduce the net effect of bias on THOSE DIFFERENCES. We may never be able to be overcome other 'biases', that relate to our finite shared biology and experiences. 4. Although it matters to the essence of the question, let's put aside the intuitive sense that we "really exist" in a way distinguishing us from modal-realist possible worlds. (IMHO, it's not a mere coincidence between the sense of vivid realness in consciousness and the issue of "this is a real world, dammit!) Consider the technical propriety of claiming the world is "all math." That to me, implies that a reasonable mathematical model of "what happens" can be made. As far as I am concerned, the collapse problem in QM makes that impossible. We don't really know how to take a spread out, superposed wave function and make it pop into some little space or specific outcome. Furthermore, "real randomness" cannot come from math, which is deterministic! (I mean the outcomes themselves, not cheating by talking about the probabilities as a higher abstraction etc.) Same issue for "flowing time" and maybe more. Some people think they can resolve the perplexity through a very flawed, circular argument that I'm glad looks suspect to Roger Penrose too. Just griping isn't enough, see my post on decoherence at my link. But in any case this is not elegant, smooth mathematics. Many say, that renormalization is kind of a scam too. Maybe it's some people's cognitive bias to imagine that the universe must be mathematical, or their cognitive dissonance to fail to accept that the universe really doesn't play along - but the universe really isn't a good "mathematical model." I think that's more important than e.g. how many universes there are. 5. Bee, Just wanted to point out that the study said nothing about pattern recognition. In fact, from what you stated about the duration of time ("too short to use the part of the brain necessary for rational thought") to make the decision, no pattern recognition was involved or affected by the test: patterns take thought to see. So, while I agree that pattern recognition is an evolutionary boon, is involved in creativity, and is present in both scientists and "believers", that says nothing about the quality of the patterns being observed. Bad signal-vs.-noise separation would, obviously, lead to bad patterns (GIGO, anyone?), but even good signal-vs.-noise separation could lead to bad patterns. The study results seem to say that what was affected wasn't the interpreted quality of the signal (which wasn't tested), just whether it *was* a signal or was just noise. The correlation between "believers" and false signal detection might be more related to the GIGO issue rather than an assumed increase in pattern detection ability. 6. "...too short to use the part of the brain necessary for rational thought." I wonder what that phrase means. 7. From AWT perspective, modern physics is dual to philosophy. While philosophers cannot see quantitative relations even at the case,their derivation is quite trivial and straightforward, formally thinking physicists often cannot see qualitative relations between phenomena - even at the case, such intuitive understanding would be quite trivial. 
Because we are seeing objects as a pinpoint particles from sufficient distance in space-time, Aether theory considers most distant (i.e. "fundamental") reality composed of inertial points, i.e. similar to dense gas, which is forming foamy density fluctuations. Philosophers tend to see chaotic portion of reality, where energy spreads via longitudinal waves, whereas physicists are looking for "laws", i.e. density fluctuations itself, where energy spreads in atemporal way of transversal waves. It means, physicists tend to see gradient and patterns even at the case, when these patterns are of limited scope in space-time and it tends to extrapolate these patterns outside of their applicability scope - as Bee detected correctly. Lubos Motl is particularly good case to demonstrate such bias, because he is loud and strictly formally thinking person. Bee is woman and thinking of women is more holistic & plural, which is the reason, why women aren't good in math in general. Nevertheless she's still biased by her profession, too. I don't think, any real physicist can detect bias of his proffession exactly, just because (s)he is immanent part of it. 8. For what it's worth, I saw nothing that I could identify in the CMB. The study you cite is cute, but as with most psychological studies, it doesn't pay to try to milk the data for more than is actually there. Thinking you detect a signal and being willing to act on a signal are not the same thing, although in this simplistic, no-risk situation, they are made to appear to be. And science isn't just about how many times you say 'ooh!' in response to what you think is a signal. Science is very much about having that 'signal' validated by others using independent means. I'm really not sure who or what you are trying to jab with this post, other than the poke at ESP. And I'm seconding Austin with respect to pattern recognition. :) 9. /*..Extreme examples for skeptics in science are proponents of the multiverse..*/ From local perspective of CMB Universe appears like fractal foam of density fluctuations, where positive curvature is nearly balanced by this negative one. The energy/information is spreading through this foam in circles or loops simmilar to Mobius strip due the dispersion and subtle portion of every transversal wave is returning back to the observer in form of subtle gravitational, i.e. longitudinal waves. We should realize, there is absolutely no methaphysics into such perspective, as it's all just a consequence of emergent geometry. But this dispersion results into various supersymmetry phenomena, where strictly formally thinking people are often adhering to vague concepts and vice-versa. For example, many philosophers are obsessed by searching for universal hidden law of Nature or simply God, which drives everything. Whereas many formally thinking people are proposing multiverse concept often. We can find many examples of supersymmetry in behavior of dogmatic people, as they're often taking an opinions, which are in direct contradiction to their behavior. We are often talking about inconsistency in thinking in this connection, but it's just a manifestation of dual nature of information spreading inside of random systems. 10. Supersymmetry in thinking could be perceived as a sort of mental correction of biased perceiving of reality, although in unconscious, i.e. intuitive way. But there is a dual result of dispersion, which leads into mental singularities, i.e. black holes in causal space-time. 
The strictly formally thinking people often tend to follow not just vague and inconsistent opinions, but they're often of "too consistent" opinions often, which leads them into dogmatic, self-confirmatory thinking. The picture of energy spreading through metamaterial foam illustrates this duality in thinking well: portion of energy gets always dispersed into neighborhood, another portion of energy is always ending in singularity. Unbiaselly thinking people never get both into schematic, fundamentalistic thinking, both into apparently logically inconsistent opinions, which contradicts their behavior. Their way of thinking is atemporal, which means it follows "photon sphere" of causal space-time. From this perspective, the people dedicated deeply to their ideas, like Hitler or Lenin weren't evils by their nature, they were just "too consequential" in their thinking about "socially righteous" society. The most dangerous people aren't opportunists, but blindly thinking fanatists. The purpose of such rationalization isn't to excuse their behavior - but to understand its emergence and to avoid it better in future. Their neural wave packets spreads in transversal waves preferably, which makes them often ingenial in logical, consequential way of thinking. But at the moment, when energy density of society goes down during economical or social crisis, society is behaving like boson condensate or vacuum, where longitudinal waves are weak - and such schematically thinking fanatics can become quite influential. 11. /*..what is a natural from the numbers to what is a natural probability distribution..*/ This is a good point, but in AWT the most natural is probability distribution in ideal dense Boltzmann gas. I don't know, how such probability appears and if it could be replaced by Boltzmann distribution - but it could be simulated by particle collisions (i.e. causual events in space-time) inside of very dense gas, which makes it predictable and testable. 12. /*.. the effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained..*/ It's because it's a product of long-term adaptation: the universe is a fractal foam, so that human brain maintains a fractal foam of solitons to predict it's behavior as well, as possible. Therefore both character, both the wavelength of brain waves correspond the CMB wavelength (or diameter of black hole model of observable Universe). From perspective of AWT or Boltzmann brain Universe appears like random clouds or Perlin noise. A very subtle portion of this fluctuations would interact with the rest of noise in atemporal way, i.e. via transversal waves preferably. This makes anthropic principle a tautology: deep sea sharks are so perfectly adopted to bottom of oceans from exsintric perspective, they could perceive their environment as perfectly adopted to sharks from insintric persective of these sharks. These two perspectives are virtually indistinguishable each other from sufficiently general perspective. In CMB noise we can see the Universe both from inside via microwave photons, both from outside via gravitational waves or gravitons. We can talk about black hole geometry in this connection The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained - but basically it's just a consequence of energy spreading in chaotic particle environment, which has its analogies even at the water surface. 13. 
The reason, why contemporary physics cannot get such trivial connections its adherence to strictly causal, i.e. insintric perspective. Its blind refusal of Aether concept is rather a consequence, then the reason of this biased stance. We know, mainstream physics has developed into duality of general relativity and quantum mechanics, but its general way of thinking still remains strictly causal, i.e. relativistic by its very nature. Their adherence to formal models just deepens such bias (many things, which cannot be derived can still be simulated by particle models, for example). From this reason, physicists cannot imagine the things from their (slightly) more general exsintric perspective due their adherence to (misunderstood) Popper's methodology, because exsintric perspective it's unavailable for experimental validation by its very definition - so it's virtually unfalsifiable from this perspective. We cannot travel outside of our Universe to make sure, how it appears - which makes impossible for physicists to think about it from more general perspective. 14. Low Math, Meekly Interacting10:52 PM, January 06, 2010 Of course we're prone to bias. That's why science works better than faith or philosophy: Nature doesn't care what we want. I don't think bias is bad per se, though. It's difficult to make progress without a preconceived notion of what the goal might be. Even if that notion is completely wrong, at least picking an angle of attack and following it will eventually lead one to recognize their error and readjust, hopefully. Without some bias, we flail around at random. It's when we can't temper our biases with observation and experiment that science really runs into trouble. Dopamine is implicated in motivation, drive, the reward mechanism we inherited from our hunter-gatherer ancestors. It's good to love the chase; it keeps us fed when we're hungry, even if we can't see the food yet. Mice deprived of dopamine in certain brain regions literally starve to death for want of any desire to get up and eat. And no genius accomplishes anything without drive. So let there be bias. But let there be evidence, too, and a hunger to find it. 15. Hi Bee, “This post is supposed to make you think about what you think about.” Well gauging from the responses thus far, all it’s managed is to have many to remind others as to how they are suppose to think rather than give reason as to why. To me that simply serves to demonstrate that there are more people who are convinced the world should be as they think it should be, as opposed to those concerned as how to best learn to discover the way it presents itself as being. So these wonderings as how one is best able to judge signal from noise, is just the modern way of asking how one is able to find what is truth as opposed to what are merely the shadows. That would have the sceptics on dopamine to be like the freed prisoner when first returned to the darkness of Plato's cave to be asked again to measure the shadows, while the believers on dopamine would be how that same prisoner found himself when first freed to the upper world. So what then would Plato have said is the best way to judge signal from noise. To do this one has to introspect themselves in relation to the world, before one can excogitate about it, rather than consider only what one can imagine is how the world necessarily must be, for it then is only a projection of self and thus merely a shadow. 
So all the talk of the effect of observation on reality or our world is the way it is as to accommodate our existence, seems to be just what those prisoners in Plato’s cave must have thought and for the same reason. So I apologise if this seems nothing more than philosophy, yet is that not what’s asked we considered here, as what constitutes being good natural philosophy. -Plato- Allegory of the Cave 16. It's very hard to guess what "bias" is supposed to mean in these contexts. Our brains like to keep things simple, to find economical descriptions of reality. With the help of math, though, those descriptions become florid indeed. Whatever the biases of the human brain, we know that (some) humans are damn good at sniffing out the laws of nature, because they have found so many of them. Did our prejucices about space and time retard relativity, or our prejudices about causality retard quantum mechanics? Maybe a little but not for long. Neither could plausibly have been discovered 70 years earlier than they were. Engineers are very familiar with the problem of detecting a signal in noise. The trick is to steer an optimal route between missed signals and false alarms. Your experiment suggests that dopamine moves the needle in favor of higher tolerance for false alarms than missed signals. 17. Testable predictions and experimental testing are the only known way to verify which patterns/ideas are useful and which are "robust" and "compelling" but not useful in understanding nature. One in a million can reliably use intuition as a guide in science. 18. CIP: It means 140 msec. The paper didn't indeed say why 140 msec, but I guess the reason is roughly what I wrote. If you have time to actually "read" rather than "recognize" the word, you'd just test for illiteracy. Best, 19. This is why I read this blog. Happy New Year, Bee! 20. Austin, Anonymous: With "pattern recognition" I was simply referring to finding the face/word in random noise. You seem to refer to pattern recognition as pattern in a time series instead, sorry, I should have been clearer on that. However, you might find the introduction of this paper interesting which more generally is about the issue of mistakenly assigning meaning to the meaningless rspt causal connections where there are none. It's very readable. This paper (it seems to be an introduction to a special issue) also mentions the following "The meaningfulness of a coincidence is in the brain of the beholder, and while ‘‘meaningless coincidences’’ do not invite explanatory elaborations, those considered meaningful have often lured intelligent people into a search for underlying rules and laws (Kammerer, 1919, for a case study)." Seems like there hasn't been much research on that though. Best, 21. Dear Arun: I wasn't so much thinking about particle physics (except possibly for the 1.5 sigma detections) but more about the attempt to go beyond that. Best, 22. Hi Len, I agree with what you say. However, it has been shown in other context that knowing about a bias can serve as a corrective instance. Ie just telling people to be rational has the effect of them indeed being more rational. Best, 23. Neil: There's a whole field of mathematics, called stochastic, dedicated to randomness. It deals with variables that have no certain value that's the whole point. I thus don't know in which way you think "math is deterministic" (deterministic is a statement about a time evolution). In any case, I believe Tegmark favors the many worlds interpretation, so no collapse. Best, 24. 
/* which way you think "math is deterministic"..*/ Math is atemporal, which basically means, what you get is always what you put in - and the result of derivation doesn't change with time. Which is good for theorists - but it makes math a nonrealistic represenation of dynamical reality. 25. Zephir: That a derivation is atemporal does not prevent maths from describing something as a function of a parameter rspt as a function of coordinates on a manifold. Best, 26. I know, but this function is still fixed in time. Instead of this, our reality is more close to dynamic particle simulation. We should listen great men of fictional history and their moms. 27. Zephir: That a function (rather than it's values) is "fixed in time" is an illdefined statement. The function is a map from one space to another space. To speak of something like constancy (being "fixed") with a parameter you first need to explain what you mean with that. Best, 28. The only way you can deal with bias is to find a good reason for every assertion you make and to provide a consistent, well defined theoretical explanation based on the evidence and on the accumulated knowledge in your area. That's the best think you can do I guess and your assertion will be debated. The diversity and pluralism of educated opinions is the best chance we have to filter any bias. The fact that you've raised the question of bias with your post is a living prove of that. 29. Hi Bee, Just as a straight forward question from a layperson to a professional researcher in respect to what underlies this post, that is to ask if you consider physics turning ever closer to becoming the study of natural phenomena by those influenced primarily by their beliefs, rather than by their reason as grounded in doubt? As a follow up question, if you then consider this to be true, what measures would you find that need to be taken to correct this as to have physics better serve its intended purpose as it relates to discovering how the world works as it does? 30. Giotis: Yes, that's why I've raised the question. One should however distinguish between cognitive and social bias. Diversity and pluralism of opinions might work well to counteract social bias, but to understand and address cognitive bias one also needs to know better what that bias might look like. Plurality might not do. Best, 31. Hi Phil, Here as in most aspects of life it's a matter of balance. Neither doubt nor belief alone will do. I don't know if there's a trend towards more belief today than at other times in the history of science and I wouldn't know how to quantify that anyway. What I do see however is a certain sloppiness in argumentation possibly based on the last century's successes, and a widespread self-confidence that one "knows" (rather than successfully explains) which I find very unhealthy. I personally keep it with Socrates "The only real wisdom is knowing you know nothing." This is why more often than not my writing comes in the sort of a question rather than an answer. Not sure that answers your question, but for what I think should be done is to keep asking. Best, 32. Hi Phil, Regarding your earlier comment, yes, one could say some introspection every now and then could not harm. Maybe I'm just nostalgic, but science has had a long tradition of careful thought, discussion and argumentation that I feel today is very insufficiently communicated and lived. Best, 33. This comment has been removed by the author. 34. 
Hi Bee, Well how could I argue with you pointing to Socrates for inspiration as his is the seed of this aspect of doubt as it relates to science? The only thing I would add is that Plato only expanded as to remind we are all prisoners and are better to be constantly reminded that we are; which of course is what you propose as the only remedy for bias. So would you not agree that the best sages of science usually are the ones that hold fast to this vision and that how they came to their conclusions are perhaps the better lessons , rather than what they actually have us come to know. “But hitherto I have not been able to discover the cause of those properties of gravity from phænomena, and I frame no hypotheses; for whatever is not deduced from the phænomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phænomena, and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility, and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist, and act according to the laws which we have explained, and abundantly serves to account for all the motions of the celestial bodies, and of our sea.” -Isaac Newton- Principia Mathematica Oh yes this has me remember how I was surprised that Stefan a few days past did not with a post remind us of the birthday of this important sage of science :-) 35. Hi Phil, Well, reporting on dead scientists' birthday gets somewhat dull after a while. For what I am concerned what makes a good scientist and what doesn't is whether his or her work is successful. This might to some extend be a matter of luck or being at the right time at the right moment. But there are certainly things one can do to enhance the likeliness of success, a good education ahead of all. What other traits are useful to success depends on the research you're conducting, so your question is far too general for an all-encompassing reply. We previously discussed the four stages of science that Shneider suggested, and while I have some reservations on the details of his paper I think he's making a good point there. The trait you mention and what I was also concerned with I'd think is instrumental for what Shneider calls 1st stage science. Best, 36. Aye, yai, yai. Once again Bee, you have been singled out for criticism at The Reference Frame, in particular this blarticle. Click here for the review in question by Lubos. If it's not too much trouble, a review by you of Lubos' review would be appreciated. Based on our previous discussion it will not be published there, or more to point you will not attempt to do so based on previous experience with TRF, therefore we humbly beseech thee to respond here under the reply section of this very blarticle that inspired Lubos to generate so many, many, very, very many words. (And for an added bonus, he trashes Sean Carroll's new book as well) Thanks in advance. Okay, what about the Ulam's Spiral Or, what about Pascal's triangle? Have you noticed any patterns? This goes to the question then of what is invented versus what is discovered? As to Wmap, don't you see this?:) The color represents the strength of the polarized signal seen by WMAP - red strong/blue weak. The signal seen in these maps comes mostly from our Galaxy. 
It is strongest at 23 GHz, weakest at 61 and 94 GHz. This multi-frequency data is used to subtract the Galactic signal and produce the CMB map shown (top of this page). These images show a temperature range of 50 microKelvin. 38. Yes but the fact that you've raised the question of possible bias proves that humans (due to the pluralism of opinions) have the capability to take the factor of cognitive bias under consideration and maybe even attempt to take alternative roads due to that. You are part of the human race aren't you? So this proves my point:-) 39. Steven: Lubos "criticism" is as usual a big joke. It consists of claiming I said things I didn't say and then making fun of them. It's terribly dumb and in its repetition also unoriginal. Just some examples: - I meanwhile explicitly stated two times that I do not think arguments or naturalness "have no room in physics" as Lubos claims. He is either indeed unable to grasp the simplest sentences I write or he pretends to be. In the above post I wrote "I do think naturalness is a useful guide." How can one possibly misunderstand this sentence if one isn't illiterate or braindead? - Lubos summary of my summary of Brugger's paper is extremely vague and misleading. Eg he writes "Skeptics converged closer to believers when they were "treated" by levodopa" but one doesn't really know what converged towards what. As I said as far as the bias is concerned they both converge towards the mean. This also means they converged to each other but isn't the same. - Lubos says that "The biases are the reasons why the people are overly believing or why they excessively deny what can be seen. Sabine Hossenfelder doesn't like this obvious explanation - that the author of the paper has offered, too." First in fact, the authors were very accurate in their statements. What their research has shown is a correlation, not a causation. Second, I certainly haven't "denied" this possible explanation. That this is not only a correlation but also a causation is exactly why I have asked whether physics is cognitively biased, so what's his point? And so on and so forth. It is really too tiring and entirely fruitless to comment on all his mistakes. Note also that he had nothing to say to my criticism of his earlier article. There was a time when I was thinking I should tell him when he makes a mistake, but I had to notice that he is not even remotely interested in having a constructive exchange. He simply doesn't like me and the essence of his writing is to invent reasons why I'm dumb and hope others are stupid enough to believe him. It's a behavior not appropriate for a decent scientist. Best, 40. Hi Giotis, Yes, sure, I agree that we should be able to address and understand cognitive bias in science and that this starts with awareness that is easier to be found in a body that is pluralistic. What I was saying is that relying on plurality might bring up the question but not be the solution. (Much like brainstorming might bring up ideas but not their realization). Btw: The package is on the way. Please send us a short note when it arrives just so we know it didn't get lost. 41. Typo: Should have been "arguments of naturalness" not "arguments or naturalness" 42. Bee, what I mean by "deterministic" math is that the math process can't actually *produce* the random results. Just saying "this variable has no specific value" etc. is "cheating" (in the sense philosophers use it), because you have to "put in the values by hand." 
Such math either produces "results" which are the probability distributions - not actual sequences of results - or in actual application, the user "cheats" by using some outside source of randomness or pseudo-randomness like digits of roots. (Such sequences are themselves of course, wholly determined by the process - they just have the right mix that is not predictable to anyone not knowing what they came from. In that sense, they merely appear "random.") I think most philosophers of the foundations of mathematics would agree with me. As for MWI, I still ask: why doesn't the initial beam splitter of an MZI split the wave into two worlds, thus preventing the interference at all?

43. Hi Bee, I actually read a book by Julian Baggini called: 'A very short Introduction to Atheism.' Baggini writes about evidence vs. supernatural and about naturalism. Where do we find evidence? We all know: only in an experiment. But as you said it must be a good experiment. That means, as you too said, it must be based on a 'good' initialization. For example, a detector must be found for dark matter and dark energy. The correct detector still needs to be found. What is the correct detector in the case of dark matter or dark energy? Best Kay

44. Neil: I don't know what you mean with "math process producing a result." Stochastic processes produce results. The results just are probabilistic, which is the whole point. There is no "result" beyond that. I'm not "putting in values by hand," the only information there is is the distribution of the values. You are stuck on the quite common idea that the result actually "has" a value, and then you don't see how math gives you this value. Best,

45. Sure Bee, I'll do that. Thanks.

46. The CMB is a remarkably coincident map of the Earth: Europe, Asia, and Africa to the right; the Americas to the left. Is physical theory bent by pareidolia? Physics is obsessed with symmetries: S(U(2)×U(3)) or U(1)×SU(2)×SU(3) for the Standard Model, then SUSY and SUGRA. String theory is born of fundamental symmetries, then whacked to lower symmetries toward observables (never quite arriving). Umpolung! Remove symmetries and test (not talk) physics for flaws. Chemistry (pharma!) is explicit: Does the vacuum differentially interact with massed chirality? pdf pp. 25-27, calculation of the chiral case 1) Two solid single crystal spheres of quartz in enantiomorphic space groups P3(1)21 and P3(2)21 are plated with superconductor, cooled, and Meissner effect levitated in hard vacuum behind the usual shieldings. If they spontaneously reproducibly spin in opposite directions, there's your vacuum background. 2) Teleparallel gravitation in Weitzenböck space specifically allows Equivalence Principle violation by opposite parity mass distributions, falsifying metric gravitation in pseudo-Riemannian space. A parity Eotvos experiment is trivial to perform, again using single crystals of space groups P3(1)21 and P3(2)21 quartz. Glycine gamma-polymorph in enantiomorphic space groups P3(1) and P3(2) is a lower symmetry case and is charge-polarized, with 1.6 times the atom packing density of quartz. Theoretic grandstanding has produced nothing tangible after 25 years of celebrated pontification. Gravitation theories are geometries arising from postulated "beautiful" symmetries. They are vulnerable to geometric falsification (e.g., Euclid vs. elliptic and hyperbolic triangles). Somebody should look.

47. Bee, what I mean is, the mathematical machinery can't produce the actual random results directly.
That means, a sequence like 4,1,3,3,0,6, ... or something else instead. It just treats the randomness as an abstraction. If you can find a way for the *operation* to produce an actual sequence of random numbers or etc., please explain and show the results. REM the same operation must produce different sequences other times it is "run" or it isn't really random. (I don't think you can, since any known "operation" will produce the same result each time - again, if you don't "cheat" by pulling results from outside. Hence, taking sqrt 2 provides a specific sequence, and it will every time you do it. Even if you said, it can be either negative or positive if you consider x^2 = 2, *you* are still going to decide which to show each time. Otherwise, it is just the set of solutions. In a random variable, it represents a class of outputs - that is not the same, as having a mechanism to produce varying results each time. Don't you think, if a math process could do that, chip mfrs would use that instead of either seeded pseudo-random generators, or an actual physical process? If you are thinking in terms of practical use, all I can say is: I mean, the logical definition that a worker in FOM would use, and I think they agree with me with few exceptions. Please think it through carefully, tx. 48. Neil: I understand what you're saying but you don't understand what I'm saying. You are implicitly assuming reality "is" something more than a process that is (to some extend) "really" probabilisitc. You're thinking it instead "really is" the sequence and the sequence is not the random variable. That is your point but it is a circular argument: you think reality can't be probabilistic because a probabilistic distribution is not real. Define "not real," see the problem? Best, 49. Bee, Giotis: Specifically with regard to QFT - how well-defined does one have to be? Are we well-defined enough? 50. One can never be well-defined enough. The pitfalls in physics as in economics and biology are the hidden assumptions people forget about because they are either "obvious" (cognitive bias) or "everybody makes them" (social bias). Best, 51. Bee, I mean very carefully what I said about the specific point I made: that *math* can't produce such "really random" results, but only describe them in the abstract. But if we were talking at cross purposes, then we could both be right about our separate points. As for yours: I am assuming nothing about the universe or what it has to be like. But if we appreciate the first point above, and then look at the universe: we find "random" results supposedly coming out. The universe does produce actual sequences and events, not (unless you dodge via MWI) a mere abstraction of a space of probable outcomes. If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely "random" in the manner I described, then: (1) The universe produces "random" sequences upon demand. (2) They can't - as particulars - be produced by a mathematical process. (3): The universe is therefore not "just math", and MUH is invalid. That is not a circular argument. It is a valid course of deduction from a starting assumption (about math, supported by the consensus of the FOM community AFAIK), which is compared to the apparent behavior of the universe, with a disjoint deduced thereby. As for what "real" means, who knows for sure? But we do know how math works, we know how nature works, and it cannot IMHO be entirely the same. 52. 
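For what it's worth, the point about seeded generators is easy to make concrete. The sketch below (my example, with an arbitrary seed and range) shows that the same mathematical procedure fed the same input reproduces the same "random" sequence on every run, while an entropy source outside the calculation does not:

```python
import random, secrets

def seeded_sequence(seed, n=8):
    gen = random.Random(seed)          # a fixed mathematical procedure with a fixed input
    return [gen.randint(0, 9) for _ in range(n)]

print(seeded_sequence(42))             # same list on every run of this script
print(seeded_sequence(42))             # identical again: the "randomness" is reproducible
print([secrets.randbelow(10) for _ in range(8)])   # OS entropy: differs from run to run
```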
52. Neil: But I said in the very beginning it's a many-worlds picture. The MUH doesn't only mean all that you think is "real" is mathematics but all that is math is real. Best, 53. Neil: You're right, I didn't say that, I just thought I said it. Sorry for the misunderstanding. Best, 54. Well it depends. The simplest example is the divergence of the vacuum energy. You just subtract it in QFT saying that only energy differences matter if gravity is not considered and QFT is not a complete theory anyway. Are you satisfied with the explanation? Some people are not very happy with all these divergences and their handling with the renormalization procedure. Also perturbative QFT misses a number of phenomena. So somebody could say that it is not well defined or is well defined in a certain regime under certain preconditions. The main issue though is that you'll always find people who challenge the existing knowledge, ask questions and doubt the truth of the given explanations if they are not well defined (as Bee does in her post). That's why I talked about pluralism, diversity of opinions, open dialogue and open/free access to knowledge, as a remedy even for the cognitive bias. And I'm not talking about physics or science only but generally. 55. Neil: Maybe what I said becomes clearer this way. Your problem is that the stochastic description doesn't offer a "process to produce" a sequence. Since the sequence is what you believe is real, reality can't be all described by math. What I'm asking is how do you know that only the sequence is "real" and not all the possible sequences the random variable can produce? I'm saying it's circular because you're explaining the maths isn't "real" because "reality" isn't the math, and I'm asking how do you know the latter. (Besides, just to be clear on this, I am neither a fan of MUH nor MWI.) Best, 56. Neil, have you read some of the work of Gregory Chaitin, who believes randomness lies at the heart of mathematics? Good article here: "My Omega number possesses infinite complexity and therefore cannot be explained by any finite mathematical theory. This shows that in a sense there is randomness in pure mathematics." I think as a clear example, the distribution of prime numbers appears to be fundamentally random - it cannot be predicted from any algorithm. But the positions are clearly defined in mathematics. So that's fundamental randomness right at the heart of maths. 57. While I think it is very important to examine the question of bias, I also think it is a very sticky wicket. Observing signals among noise is an extremely individualistic thing. It is a fact that some gifted individuals can pick signals out of the noise but can't explain how they do it. Or they explain it and it isn't rational to the rest of us. For instance the brains of many idiot savants (and also some normally functioning individuals, which is much more rare) can calculate numbers in their head using shapes and colors that they visualize. Others can memorize entire phonebooks using similar methods. Visualization cues are often key to these abilities. To most of us it would seem like very good intuition because most brains don't work like that, but I think that is a mistake. I certainly think there are rare individuals who can do similar things in other fields of study. But scientists are often too biased in their reductionist philosophy to accept it.
They assume that because a particular individual's brain doesn't work the way theirs does, any explanation for how the "calculation" was done is that it was just good intuition. That conclusion itself is an overly reductionist conclusion. 58. Whew, what a metaphysical morass. Well, about what MUH people think is true: it is rather clear, they say that all mathematical descriptions are equally "real" in the way our world is, but furthermore there is no other way to be "real" (i.e., modal realism). So there isn't any valid: "hey, we are in a materially real world, but that conceptual alteration where things are a little different 'does not really exist' except as an unactualized abstraction." MT et al would say, there is no distinction between unactualized, and actualized (as a "material" world) abstractions. Poor Madonna, was she wrong? But I don't agree anyway. BTW, MUH doesn't really prove MWI unless you can connect all the histories to get retrodiction of the quantum probabilities. Bee, Andrew: Now, about math: the stochastic variable represents the set of outcomes. Why do I know only the particular outcome is real? Well, in an actual experiment that's what you get. How can I make that more clear? Of course the math is "real", but so is the outcome in a real universe. There is a mismatch. Please don't go around the issue of which is real. Both are real in their own ways, they just can't be equivalent. You can't construct a mathematical engine to produce such output. I don't think Chaitin's number can produce *different* results each time one uses the process. Like I said, I want to see such output produced. It is not consensus to disbelieve in the deterministic nature of math. As for primes, that is a common misunderstanding regarding pseudo-random sequences. The sequence of primes is *fixed*, that is what matters regardless of what it looks like. If you calculate the primes, you get the *same* sequence each time, OK?! But a quantum process produces one sequence one time, another sequence another (in "our world" at least, and let them prove any more.) Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me. Bee - I can't get email notify any more. 59. Hi Neil, yes, but that's not the definition of "random" - that it "produces a different number each time". If I produce an algorithm that produces "1", "2", "3" etc. then it is producing "a different number each time" but that is clearly not random. No, the definition of a random sequence is one which cannot be algorithmically compressed to something simpler (e.g., the sequence 1,2,3 can clearly be compressed down to a much simpler algorithm). I can assure you, the distribution of the primes (or the decimals of pi, for example) is truly random in that it cannot be further compressed. Random quantum behaviour would be described by such a truly random sequence in that the behaviour cannot be compressed to a simpler algorithm (i.e., a simpler deterministic algorithm). Neil: "Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me." I actually think most would disagree, Neil. See more on algorithmically random sequence 60. Bee: Of course we are not unbiased. The brain does Bayesian inference (whether consciously or not) and Bayesian inference depends in part on a prior estimate of the probability distribution over the possible observed data. This prior distribution unavoidably introduces bias into cognition.
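A toy illustration of how the prior biases the outcome, in Python (plain Bayes' rule; the prior values and likelihoods below are made-up numbers, not taken from the discussion): two observers process the same datum but start from different priors, and so end up with different posteriors.

def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    # Bayes' rule: P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|~H)(1 - P(H))]
    num = p_data_given_h * prior_h
    return num / (num + p_data_given_not_h * (1.0 - prior_h))

same_evidence = (0.8, 0.3)  # P(D|H) and P(D|~H), identical for both observers

print(posterior(0.50, *same_evidence))  # open-minded prior -> about 0.73
print(posterior(0.05, *same_evidence))  # skeptical prior   -> about 0.12

Identical data, different starting distributions, different conclusions; the bias enters through the prior, exactly as described above.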
Since this prior distribution is encoded in one’s current brain state at the moment one begins to process a newly observed datum, no two people will bring the exact same bias to any given inference. This is as true of low-level perceptual inference of the kind studied by Brugger as it is of high-level abstract inductive inference of the kind that gives rise to scientific theories. Equally unavoidably, we are predisposed by the structures of our brains to describe the world in terms of certain archetypical symbols, which you may think of as eigenvectors of the brain state. The structure of each brain is determined by a complex interplay between genetic factors and the entire history of that brain from the moment of conception. Thus, there are bound to be species-wide biases as well as cultural and individual predispositions in the way we describe what we see, the questions can we ask about it, and the answers we are able to accept. The only remedy for such biases is the scientific method, practiced with complete intellectual honesty and total disregard for accepted doctrine and dogma -- to the extent that this is humanly possible. Unfortunately, in recent times, this process is becoming increasingly hobbled by a number of destructive trends. Firstly, we have allowed indoctrination to become the primary goal of our education system. Where once it was considered self-evident that the purpose of education is “to change an empty mind into an open one,” educators now claim explicitly that the most important role of education is “to inculcate the right attitude towards society.” Secondly, the unavoidable imperfections of the peer review process have been co-opted by political special-interest groups as well as the personal fiefdoms and in-groups of influential scientists, so the very process that is supposed to guard against bias is now perpetuating it. This can be seen in every modern science; specific recent examples include psychology, sociology, anthropology, archeology, climatology, physics and mathematics. Thirdly, widespread misunderstanding of the content of quantum theory has lead many to doubt that “objective reality” even exists. This, in turn, is used by so-called “philosophers” of the post-modern persuasion to call the very idea of “rational thought” into question. Well, if objective reality and rational thought are disallowed, then only blind superstition and ideological conformity are left. Is it any wonder, then, that progress in science (as distinct from technology) is grinding to a halt? 61. Bee, you ask exactly the right question. If I may paraphrase it thus: "What cognitive or social biases have ( become embedded in and )impeded Science from developing a truly compelling and comprehensive Quantum Gravity unification cosmology & philosophy? " ( say provisionally, cQGc). In a soon to be released monograph, 3 such impediments and biases with far ranging theoretical consequences are identified. In appreciation of this and many of your previous blog postings and since you ask, I feel compelled to answer your question in some detail with this sneak preview of some of the introduction from that monograph, edited only slightly to accomodate the context of this post. "... 
however our senses, which can fall victim to optical illusions and other cognitive biases, only generate the rawest form of data for Science which applies to these measurement, rigor and axiomatic philosophical principles to weed out such biases to generate the positivistic consensus reality Science seeks to fully describe and explain. Despite this ideal, a great many scientists themselves ( and their theories) still fall victim to the incorrect cognitive bias that our consensus reality is continuous rather than being discrete and positivistic and there is widespread subscription to the mistaken idea that Science is uncovering reality as it 'really is'. This is to mistake the map for the territory it depicts. In a May 2009 essay for Physics Today David Mermin reminds us of the importance of not falling victim to this mistaken thinking. This failure in many to respect the positivistic rudder in Science has been with us since the days of the Copenhagen School and the Bohr/Einstein debates and is the first of 3 major impediments to discovering a cQGc. The deep divide and raging debate ( indeed crisis) which philosophically divides the theoretical physics community regarding the invalidity of mistaken notions of ManyWorlds, MultiVerses and Anthropic rationalizations is not just about the absence of some sort Popperian critical tests of such models but rather, their invalidity that so many fail to accept is based on the blatant violation of intrinsic QM positivism these ideas embody. .../ cont. in Pt.2 62. ... Part.2 The 2.nd impediment has been whimsical or careless nomenclature and/or careless use of language which has resulted in sloppy philosophizing and the embedding into our inquiries, certain misapprehensions regarding precisely what it is we seek to explain. So for example, none of the observational evidence in support of the big bang in any way supports the assertion that this was the birth of the Universe but rather, all we can infer is that the big bang was the 'birth' or phase change of SpaceTime, a subset of Universe, from a state of near SpaceTime_lessness to what we observe today. Philosophically, how can the Universe in its totality, go from a timeless state of (presumably) perfect stasis ( or non-existence)to a timeful state as we observe today. Note how this simple clarification immediate resolves 2 deep questions. Creation ex nihilo and "Why is there something rather than nothing ? " The latter is a positivistic non-sequitur as there is no evidence whatsoever that the Universe was ever in a state of non-existence and Science, being positivistic, need not explain those things which never occur, only those which have or are allowed occur. The 3.rd impediment has been mis-use or runaway abuse of Newton's Hypothetico-Deductive (HD) method where, for example, we begin with say, an Inflation Conjecture to HD resolve certain issues but before very long, we have Eternal Inflation and then we have baby universes popping off everywhere, in abject violation of positivism not to mention SpaceTime Invariance. Similarly, the HD proposal of a string as the fundamental entity of our consensus reality to better interpret a dataset formarly known as the scattering matrix which then becomes String Theory which then becomes Superstring Theory which then becomes matrix theory which then becomes M-Theory perfectly forgets that searching for a fundamental object of our consensus reality is like looking for the most fundamental word in the dictionary. 
Our consensus reality is intrinsically relational and this fact is the lesson we should take from Gödel's Incompleteness Theorem (GI). So, the mistake here is to take or overly rely on Conjectures as established results and build further HD conjectures on top as also established. In passing, I would further observe that a string can only support a vibratory state (or wave mechanics) by remembering that such a string must have tension, a property which seems to me is conceptually lost when one connects the ends of the string to inadmissibly conjure up the first loop to force fit the consequences of one's initial, flawed, HD conjecture. The invocation of convenient quantum fluctuations to force fit Inflation in the face of CMB anisotropies is yet another example of such erroneous reasoning. Science is the formal system which can never succeed, in principle, in bootstrapping itself to a generally covariant absolute statement of Truth like "This is the Universe as it really is". (The URL under my name for this comment will take you to a talk which strongly suggests that even Stephen Hawking subscribes to the concept of a reality as it 'really is'.) ... / cont Pt.3 63. ... part.3 So, even a derivation of a cQGc from first principles, which would be a proof in any other context, remains undecidably True while at the same time we will know it to be provisionally true (lower case t) because of its comprehensiveness and the absence of a counter-example. GI is actually the only legitimate anthropic principle we may recognize in Science and arises from the fact that all our formal systems (languages, Science, etc.) are all arbitrary conventional human inventions which can only self-consistently describe the consensus reality we positivistically observe and are able to measure or infer, consistent with our nature as an inextricable subsystem of that consensus reality. My personal mnemonic for GI is "More truth than Proof". So Bee, I hope this goes some way to answering your question and, while I feel sure none of it comes as any surprise to you (though other aspects of the monograph might when you someday read it), I hope this response helps and accurately clarifies some things for your readership in answer to your question. Thanks again, 64. Bee, Neil, and Andrew: Regarding your ongoing exchange, I would like to emphasise that there is no point in trying to distinguish between “truly random” and “pseudo-random.” Any process which takes place in finite time can only depend on a finite amount of information, and it takes infinite information to distinguish between “truly random” and “pseudo-random.” Chaitin's criterion regarding where to stop and declare that we are “close enough to truly random for practical purposes” is as good as any other -- perhaps better than most. In addition, probability distributions merely enumerate possibilities. Therefore, the distributions that follow from our mathematical models apply only to the models, and not to the real world. We may, for example, make an idealized model of coin tossing, which is governed by a binomial distribution. But that distribution only enumerates the possibilities inherent in the idealized model. In the real world, the odds are not 50/50; the dynamics depends very sensitively on initial conditions, and there is no limit to the number of factors we may choose to take into account or neglect as extraneous. Thus our choice of a probability distribution describes our state of knowledge about coin tossing.
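To make the coin-tossing remark concrete, a minimal simulation in Python (the "true" bias of 0.52 and the seed are invented for the illustration): the analyst never sees that underlying value, only finite-sample frequencies that approximate it better as the number of tosses grows.

import random

def empirical_heads_rate(p_true, n, seed=1):
    # Simulate n tosses of a coin whose true bias p_true is hidden from the analyst;
    # all the analyst ever observes is the finite-sample frequency of heads.
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n) if rng.random() < p_true)
    return heads / n

hidden_bias = 0.52  # assumed "true" value, unknowable in practice
for n in (10, 1000, 100000):
    print(n, empirical_heads_rate(hidden_bias, n))
# The estimates scatter around 0.52 and tighten as n grows, but never deliver the exact value.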
In respect of phenomena in the real world, we may choose to treat them analytically as though they are governed by some particular probability distribution. But in so doing, it would be a mistake to ascribe objective reality to that distribution. The “true distribution” is as unknowable as the “true value” of a measurement. The best we can do is to approximate these things with varying degrees of accuracy. Hopefully, our accuracy improves as we learn more about the real world. Of course, it is also a mistake to claim that these values and distributions don’t exist, just because they are unknowable. The very fact that these things can be repeatably approximated shows that they are indeed, objectively real. Of that we can be certain, despite being equally certain that we can never know them with perfect precision. 65. Canadian_Phil: I would remind you that reality needs no help from you or your putative “consensus” to be what it is. If our consensus is not converging on an ever-more-accurate approximation of an objective reality that exists independent of any of us, then we are wasting our time with solipsistic nonsense. 66. Well, things are made more difficult by various senses of "random" that are used in various contexts. Yes, there is such a thing as a 'random' sequence per se. BTW it should have been clear, I meant about a process that produces a different sequence of numbers each time it is run. In other words, it's *action* is random. A mathematical process cannot do that. So even if there are other ways to be "random", my essential point is correct: the universe cannot be "made from math" because math is deterministic. That is the key point, "deterministic", more than the precise definition of "randomness" which also gets hung on on pseudo-randomness etc. The digits of pi may be "random" in the sense of appearances but their order is determined by the definition, and it will be the same time after time. That makes those digits "predictable." That is equivalent to the physical point: determinism v. (claimed) inherent predictability. I also still maintain that the most cogent thinkers in foundations of mathematics agree with me in the context I make. 67. (REM also that in the sense used to claim that certain phenomena are "truly random", that is meant to imply that there is nothing we can know that would show us reliably what would happen next. Sure, if I just look at a sequence of digits it may "appear" random and to various tests, as the definitions admit. But once I found out that they were generated by eg the deterministic mathematics behind deriving a root, then I would know what was coming next etc. Andrew - since you are interested in QM issues, pls. take a look at my own blog post on decoherence. A bit clunky now, but explains how we could experimentally recover information that conventional assumptions would say, was lost.) 68. /*...Al Gore is the antichrist...*/ LOL, how did you come into it? 69. Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm so it contains only a finite amount of information. I think there are similar algorithms for generating prime numbers as well? 70. I see now that 'anonymous' already made this point much better than I did! 71. If someone were to make an unfortunate comment like: "What I was aiming at is that unlike all other systems the universe is perfectly isolated." Someone else might respond: "What “universe” is she talking about? 
The local observable universe? The entire Universe [rather poorly sampled!]? We have so little hard evidence in cosmology that it is ill-advised for us to make such sweeping and absolute statements about something we know very little about. Then again, cosmologists and theoretical physicists are: “often wrong, but never in doubt”. Blind leading the blindly credulous into benightedness? 72. Ulrich: I was using the word "universe" in the old-fashioned sense to mean "all there is." I have expressed several times (most recently in a comment at CV) that already the word "multiverse" is meaningless since the universe is already everything. But words that become common use are not always good choices. Besides this, I would recommend that instead of posting as "Anonymous" you check the box Name/URL below the comment window and enter a name. You don't have to enter a URL. That's because our comment sections get easily confusing if there are several anonymouses. Best, 73. Zephir: Read it on a blog. You find plenty of numerology regarding Al Gore's evilness if you Google "Al Gore Antichrist 666." 74. Neil: You cannot. That's why it's circular. It doesn't matter whether you call it "real" or "actual," you have some idea of what it is that you cannot define. (This is not your fault, it's not possible.) Let me repeat what I said earlier. In which sense are the other outcomes "not real?" How do you know that? It occurred to me yesterday that this is a way too complicated route to see why the MUH is not "invalid" for the reasons you mention. (What I wrote in my post is not that the MUH cannot be but that Tegmark's claim it can be derived rather than assumed is false. It's such sloppiness in argumentation that I was complaining to Phil about.) Forget about your "sequence" with which you have a problem, and take your own reality at a time t_0. Let's call this Neil(t_0). I leave it to you whether you want Neil just to be your brain or include your body, clothes, girlfriend, doesn't matter. Point is, MUH says you're a mathematical structure and all mathematical structures are equally real somewhere in the level 4 multiverse (or whatever he calls it). Now note that by assuming this you have assumed away any problem of the sort you're mentioning. You do not need to produce your past or future and some sensible sequence, all you really need is Neil(t_0) who BELIEVES he has a past. And that you have already by assumption. (Come to think of it, somehow this smells Barbourian to me.) This of course doesn't explain anything, which is exactly why I find it pointless. Best, 75. Anonymous (6:54 PM, January 07, 2010), First the same recommendation to you as to Ulrich: Please choose Name/URL below the comment window and enter a name (or at least a number) because the comment sections get easily confusing with various anonymouses. (If I could I would disable anonymous comments, but I can only do so when I also disable the pseudonymous ones, thus I unfortunately keep repeating this over and over again.) I agree with you on the first and second point. I don't know what to make of the third and given that I've never heard of it despite having spent more than a decade in fundamental research I doubt that there are many of my colleagues who believe "rational thought is disallowed," and thus there cannot be much to the problem you think it is. Best, 76. Hi Canadian Phil, "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." -Niels Bohr After reading through your long treatise it appears to boil down to having the above statement of Bohr be just generalized to all of physics.
I would say that your thinking and that of Mermin's echoes the same sentiment, which I would contend is more indicative of what the problem is in modern physics, rather than what should be considered as a remedy. So if I were to pick someone who stood for the counter of your position it would be J.S. Bell, as he so often reminded that much of what we consider as truth is not forced upon us by what the experiments tell us, yet rather comes directly from deliberate theoretical choice. The type of theoretical choices he was referencing being the ones resultantly formed of the sort of scientific ambiguity and sloppiness which are exactly the type you support. “Even now the de Broglie-Bohm picture is generally ignored, and not taught to students. I think this is a great loss. For that picture exercises the mind in a very salutary way.” -J.S. Bell, Introductory remarks at Naples-Amalfi meeting, May 7, 1984. “Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?” -J.S. Bell, “On the impossible pilot wave”, Foundations of Physics, 12 (1982) pp. 989-999. P.S. I must apologize for my two previous erasures, yet this too was simply to rid my own thoughts of the ills Bell complained about :-) 77. Phil & Phil: We discussed Mermin's pamphlet here, please stick to the topic. Best, PS: Canadian Phil, I'm afraid the other Phil is also Canadian. 78. Janne: "Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm so it contains only a finite amount of information." Yes, you're quite right. I realised after I wrote it but I hoped no one would notice! The decimals of pi are certainly not random as they can be produced by a very simple algorithm. The distribution of the primes is a different thing altogether, which I believe is genuinely random (i.e., cannot be produced by a simpler algorithm). At least, they are random if someone can prove the Riemann Hypothesis - there's a great article: The Music of the Primes. Neil, I think your criticism of the MUH is not so much based on randomness at all, but more the idea that ANY mathematical structure is unvarying with respect to time and so cannot represent the universe. However, this isn't a valid criticism of Tegmark's idea as he proposed a block universe mathematical structure in his original paper which would, of course, be unvarying with time but would appear to change with time for any observer inside the universe. Here is an extract from Tegmark's paper: "We need to distinguish between two different ways of viewing the external physical reality: the outside view or bird perspective of a mathematician studying the mathematical structure and the inside view or frog perspective of an observer living in it. A first subtlety in relating the two perspectives involves time. Recall that a mathematical structure is an abstract, immutable entity existing outside of space and time. If history were a movie, the structure would therefore correspond not to a single frame of it but to the entire videotape." So the entire mathematical structure might be fixed and immutable, but to the frog everything still appears to be moving in time. I don't think it's possible to simply criticise Tegmark's work on that basis - he did his job very well.
It's a superb paper, really all-encompassing, well worth putting aside a day to read it. But I don't think his conclusion is right (hardly anyone does, it appears). (Good luck with your work on decoherence, Neil. I was interested a while back but I've had my fill of it). 79. Hi Bee, As I would say that what Bell was referring to has directly to do with what is asked here, as to whether physics is cognitively biased, I would wonder as to how that has my remarks as being off topic? Perhaps you feel that my contention was meant as support for a particular theory, which if that be the case I can assure you it certainly is not, as I don't have a particular theory I favour. Actually all I was asking to be considered is the contention of Bell that vagueness, ambiguity and sloppiness are primarily what stands as being the noise which currently prevents it from being able to discover what nature is, rather than only what we might be able to say about it. 80. Phil: I was just saying for prevention if you want to discuss Mermin's essay, please don't do it here since Canadian Phil doesn't seem to know we previously discussed it. Best, 81. Hi Bee, I see your point, as perhaps this post is more meant to ponder the cause(s) of bias, rather than what any particular one might be. Still though, as in medicine, it is hard to discover the mechanism of disease without first examining its symptoms. That would be as science would have us look to experiment to consider what begs explanation. Then of course with the aid of this examination to find if any of the explanations offered are correct, only if it can further have us understand the mechanism as to be able to predict further what this would demand. In the case of medicine this is confirmed when such understanding has rendered a cure that exceeds those found sometimes as only resultant of a belief one has, rather than able to demonstrate one has an understanding as to how that suggests reason as to why. So I see this whole thing that's called science as a continuing process to delve ever deeper to discover the underlying mechanisms of the world, rather than have it become something that prevents us from finding them. I'm thus reminded of Newton's statement that he could offer no explanation of gravity yet was only able to predict its actions and that should be enough, and yet Einstein was not intimidated into accepting such a limitation and resultantly was able to come up with a mechanism which has proven us able to understand more than Newton thought as being relevant or to have utility. So simply put, as I see it, a person of science is not one who at any point is able to accept the answer for how or why as simply because, as if they do that forms to be the greatest bias which prevents its success. 82. Bee, with all due respect you are making the wrong choice about who has the burden of proof about our world and the various unique "outcomes" we observe, v. the idea that there are more of them. Let's say we do an actual quantum experiment (like an MZ interferometer with a phase difference) and get sequence 1 0 0 1 1 1 0 1 0 1 0 0 1 1 ... That is an "actual result" that is not AFAWK computable from some particular algorithm. It is not like the digits of pi: they are logically necessary (and hence, deterministically reproducible) consequences of a particular mathematical operation. It is not my job to "prove" or even have the burden of argument that all other possible sequences of hits from the MZI "exist" somewhere as other than raw abstractions, like "all possible chess games."
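A small sketch can make this distinction explicit, in Python (the phase setting, photon count and seed below are arbitrary illustrative choices; for an ideal 50/50 Mach-Zehnder interferometer with phase difference phi, the standard result is that one output port fires with probability cos^2(phi/2)): the formalism delivers the probability, while any particular 0/1 record has to be injected from outside it, here deliberately from a seeded generator, which is exactly the "cheat" under discussion.

import math, random

def detector1_probability(phi):
    # What the quantum formalism itself supplies: a probability, not a record of clicks.
    return math.cos(phi / 2.0) ** 2

def simulated_run(phi, n_photons, seed):
    # The individual 0/1 outcomes are drawn from a seeded generator --
    # the mathematics alone fixes only their long-run frequency.
    rng = random.Random(seed)
    p = detector1_probability(phi)
    return [1 if rng.random() < p else 0 for _ in range(n_photons)]

phi = math.pi / 3  # assumed phase difference for the illustration
print(detector1_probability(phi))         # -> 0.75, the computable part
print(simulated_run(phi, 14, seed=2024))  # one particular sequence; change the seed and a different one appears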
The burden of proof is on you and anyone who believes in physobabble concepts like MWI. Until that is demonstrated or at least solidly supported, I have the right to claim the upper hand (not "certainty"; but so what) about there being a distinction between "natural" QM process outcomes, and the logically necessary and fixed results of mathematical operations. 83. (I mean not my job to prove they aren't there.) 84. It seems such randomness has some order to it?:) 85. Ok, but where do you get the justification for the received wisdom that "the universe is perfectly isolated" in any meaningful physical sense? Why are not scientists more careful and humble in their intuitive beliefs? 86. Bee: Thanks for explaining how to post under a pseudonym. I am the Anonymous from 6:54 PM, 8:05 PM, and 8:09 PM on January 07. The “widespread misunderstanding of the content of quantum theory” I was referring to includes, inter alia, the notion that a quantum system has no properties until they are brought into existence by the observer through an act of measurement. This sort of nonsense not only retards the progress of physics, but gives rise to all manner of pernicious superstition and mystical hocus-pocus, wrapped in a false mantle of scientific objectivity. In my view, enormous damage has been done, not only to physics, but to all of science – and indeed to the very concept of objective rationality – by those who mistakenly read an ontological content into the famous statement of Niels Bohr, quoted by Phil Warnell above. Let me repeat it here for convenience: “There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” This is an explicit warning not to ascribe the “weirdness” of the quantum formalism to the real physical world, but since the day the words were uttered, there has been an apparently irresistible urge to do the exact opposite. Bohr was not alone in suffering such misinterpretation. Schrödinger originally introduced us to his cat as a caution against ascribing physical reality to the superposition of states, yet Schrödinger’s cat was made famous by others who deviously used it to support precisely what Schrödinger argued against. And Bell’s theorem is ubiquitously used in support of spooky claims about quantum measurement, effectively drowning out Bell’s own opinion of hidden-variable theories, as made clear by another quote from page 997 of the article quoted by Phil Warnell: “What is proved by impossibility proofs is lack of imagination.” Of course, those who indulge in mystical interpretations of quantum mechanics do not believe they are disallowing rational thought; they think they are being deep. But their stance is nonetheless profoundly anti-rational; it leaks out of physics into metaphysics and philosophy, and from there, into the rest of post-modern thought. It lends credence to such notions as “the quantum law of attraction” (otherwise known by Oprah fans as “the secret”), not to mention the idea that reality is a matter of consensus. The first is a thinly veiled return to sympathetic magic, and the second is a kind of quantum solipsism that results from treating the “intersubjective rationality” of Jürgen Habermas as legitimate epistemology, instead of recognizing it as a degenerative disease of the rational faculty. 87. Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best, 88. Neil: I already said above I don't believe in MWI.
Unfortunately, since you are the one claiming you have a "proof" that MUH can't describe reality it's on you to formulate your proof in well-defined terms, which you fail to do. Your three step procedure makes use of the notion of a "production" which is undefined and your other arguments continue to assume a notion of what is "not real" that makes your argument circular. Look, read my last comment addressed to you and you'll notice that you can stand on your feet and wiggle your toes but there is no way to proof what you want to proof without having to assume some particular notion of reality already. Andrew got it exactly right: your problem is that you believe there has to be some actual time-sequence, some "production" (what is a production if not a time-sequence?). I'm telling you you don't need a time-sequence. You don't need, in fact, any sort of sequence or even ordering. All you need to capture your reality in maths is one timeless instant of Neil(now). That's not how you might think about reality, but there's no way to prove that's not your reality. Best, I wonder how one could have ever been lead through to the "entanglement processes" without ever first going through Bell? I mean sure, at first it was about Einstein and spooky, and now it's not such a subject to think it has through time become entwined with something metaphysical and irrelevant (thought experiments about elephants)because one would like to twist the reality according too? I mean what was Perose and Susskind thinking?:) Poetically, it has cast a segment of the population toward connotations of "blind men." Make's one think their house is some how "more appealing" as a totally subjective remark. So indeed one has to be careful how we can cast dispersions upon the "rest of society" while we think we are safe in our "own interactions" to think we are totally within the white garment of science. I hear you.:) 90. Plato: Yes, that’s a perfect example of the sort of drivel that results when you think a probability is a property of a particle. It makes smart guys say dumb things... 91. Ain Soph, Your choice of a handle reminded me of a term that just came to me as if I had heard it before but the spelling was different. Is there any correlation? 92. Hi Ain Soph, I must say I was intrigued by what you said last as to where our prejudices and preconceptions can lead us to, even though they may appear as sound science. I would for the most part agree with what you said in such regard, except for the role of Bohr and what his intentions where as driven by his own philosophical and metaphysical center. To serve as evidence of my contention goes back to the very beginnings of the Copenhagen interpretation’s creation and the sheer force of will Bohr had to serve in having it become as ambiguous and sloppy as many find it now. That would be when Heisenberg first arrived at the necessity for uncertainty with his principle and with his microscope example attempted to lend physical meaning to it all. Bohr of course staunchly opposed such an attempt and argued Heisenberg even when taken to bed in sickness until he finally relented and altered his view to match that of Bohr’s. So my way of reading this coupled with the content of his rebuttal of EPR has given me reason to find that while Bohr may not as Einstein being guilty at times of telling nature how it should be,was guilty of having the audacity of insisting what nature would allow us to ultimately know. 
I’ve then have long asked, which is the greater transgression as to enabling physics to progress; that being convinced nature having certain limits in regards to what’s reasonable or rather the only limiting quality it has is in it restricting having anyone able to find the reason in them. So in light of this I don’t know what your answer would be, yet I consider the second as being the most unscientific and thus harmful of the two biases; as the first can be falsified by experiment, while the latter prevents one from even bothering making an attempt. Fortunately for science there always have been and I hope always will be those like Einstein, Bohm and Bell, who refuse to be so intimidated as to feel restricted to look. 93. Bee: Precisely. How interesting that you should recognize the reference... 94. Plato: My last post should have been addressed to you, not Bee. 95. Phil: I get the impression that you’ve spent quite a bit more time studying the history of the subject than I, so I will defer to your greater knowledge of it. It seems, then, that I have always given Bohr the benefit of more doubt than there actually is. The quotation we have both commented on actually doesn’t appear in print anywhere under the by-line of Neils Bohr. It was attributed to him by Aage Petersen in an article that appeared in Bull. Atom. Sci. 19:7 in 1963, a year after Bohr’s death. I had always thought that Petersen rather overstated the case – especially in the third sentence – and that Bohr’s own stance must have been more sane. But perhaps not. Another, who gleefully conflated mysticism and quantum mechanics, was J. R. Oppenheimer. For example, his 1953 Reith Lectures left his listeners to ponder such ersatz profundities as the following: “If we ask whether the electron is at rest, we must say no; if we ask whether it is in motion, we must say no. The Buddha has given such answers when interrogated as to the conditions of a man’s self after his death; but they are not familiar answers for the tradition of seventeenth- and eighteenth-century science.” Disturbingly, this strikes me not so much as cognitive bias as deliberate obfuscation. True things are said in a way that invites the listener to jump to false conclusions. 96. "Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best, B." If you give it a little more thought, you may be forced to concede that the "perfectly isolated" assumption lacks any rigorous scientific meaning. Certainly no empirical proof in sight. By the way, you and your colleagues: (1) Do not know what the dark matter is [and that's = or > than 90% of your "everthing"]. (2) Do know what physical process give rise to "dark energy" phenomena. (3) Do not have an empirical clue about the size of the Universe. (4) Do not have more than description and arm-waving when it comes to explaining the existence and unique properties of galaxies. Wake up! Stop swaggering around like arrogant twits, pretending to a comprehensive knowledge that you most certainly do not possess. Einstein spoke the truth when he said: "All our science when measured against reality [read nature] is primitive and childish, and yet it is the most precious thing we have." THAT is the right attitude, and it is a two-part attitude, and both parts are mandatory for all scientists. Real change is on its way, 97. Ulrich: It's not an assumption. The universe is a thermodynamically perfectly isolated system according to all definitions that I can think of. 
If you claim it is not, please explain in which way it is not isolated. As for the rest of your comment, yes, these are presently open questions in physics. I have never "pretended" I know the answer, so what's your point. Besides this, your comments are not only insulting, they are also off-topic. Please re-read our comment rules. Thanks, 98. Ulrich: Real change is on its way? What – you’re going to learn some manners? 99. The very name "Ain Soph" suggests a lack of manners. 100. Hi Arun, I find Ain Soph to be quite a respectful name as to serve as a reminder that when it comes to science since its central premise denies ever considering there be made allowable such a privilege position to have it then able to deny their be reason as to look away from finding explanation, for as Newton reminded in respect to any such propositions: 101. Hi Ain Soph, Well I don’t know which of us are more studied when it comes to the history of the foundations, as it appears you’ve looked at it pretty closely. My only objection being it seems as of late there appears to be a little rewriting of it as to give Bohr a pass on what his role in all this was and what camp he represented, as to have him thought as misunderstood rather than its primary advocate. Of course we don’t have any of them with us here today as to ask directly, yet still I think things are made pretty clear between what they left of their thoughts and their legacy made evident with the general attitudes of the scientists of the following generation. My thoughts are this obfuscation as you call it, has simply reincarnated itself in things like many universes, many worlds, all is math and so many of the other approaches in which the central premise of each is to have made unapproachable exactly what needs to be approached. I must say your moniker is an excellent symbol as to what all these amount to as being when it comes to natural philosophy. So as such I would agree that anytime things in science are devised which prevents one from being able to ask a question meant to enable one to find the solution to something that begs explanation, that’s the time to no longer have it considered as science since it’s lost its reason to be. That’s to say there is no harm in having biases as long as the method assures these can be exposed for what they are with allowing them to be proven to be wrong as they apply to nature. 102. Hi Bee, I think this whole question of biases come down to considering one thing, that being the responsibility of physics is to have recognized and give explanation to nature’s biases, rather than being able to justify our own. So yes it does all depend on biases with having reality itself having the only ones that are relevant. 103. Phil, Its fine. I have not been able to decipher what you and Ain Soph are saying anyway. 104. Hi Ain soph It is not by relation that I can say I am Jewish...but that I understood that different parts of society have their relation to religion, and you piqued my interest by "the spelling" and how it sounded to me. It bothered me as to where I had seen it. As in science, I do not like to see such divisions, based on a perceived notion that has been limited by our own choosing "to identify with" what ever part of science that seems to bother people about other parts of science as if religion then should come between us. You determined your position long before you choose the name. The name only added to it as if by some exclamation point. Not only do I hear you but I see you too.:) 105. 
Plato: Yes, the spelling is the irony. But Copenhagen is not Cefalu. Or is it? 106. Oops, my bad! Let me just reiterate: and leave it at that. Almost. Science, unlike cast-in-stone religions, is self-correcting. It may be a slow process, but I should trust the process. Come on you grumbler [not to mention sock puppet], have a little faith! 107. Phil: Oh, great. Whenever the revisionists go to work on a discipline, expect trouble! If the Copenhagen Orthodox Congregation, the Bohmians, the Consistent Historians, the Einselectionists, the Spontaneous Collapsicans, and the Everettistas can’t communicate now, just wait until revisionism has cut every last bit of common ground out from under them! 108. Ulrich: Grumbler? Sock Puppet? What is it with you? You can’t maintain a civil tone from one end of a 100-word post to the other? Clean up your act, Ulrich, or I will just ignore you. 109. Hi Arun, ”Its fine. I have not been able to decipher what you and Ain Soph are saying anyway.” Now I feel that I’ve contributed to the confusion, rather than having things made a little clearer, which is probably my fault. To have it simply put as to what for instance Bell’s main contention and complaint was is that things like superposition that lead to contentions such as taking too seriously things such as the collapse of the wave function and the measurement problem more generally, are the result of particular theoretical choices, rather then what’s mandated by experiment. So Bell’s fear, if you would have it called that, is the impediment such concepts have as physics attempts to move forward to develop even a deeper understanding. That’s why he used concepts such as ‘beables’ in place of things like ‘observables’ for instance in an attempt to avoid such prejudices and preconceptions. 110. Hi Ain Soph, I actually don’t have much concern the historical revisionists will be able to increase the confusion any greater than it already is. My only concern is when deeper theories are being considered is that the researchers are clear as to what begs explanation and what really doesn’t. That’s to have them able to distinguish what concepts the use are result of only particular theoretical choice and which ones are required solely by what experiment have as necessary. That’s to simply to have recognized what serves to increase understanding, while what only serves as impediments in such regard. 111. Phil: It’s true that it would be hard to increase the confusion beyond its current level. But historical revisionism could make things worse by erasing the “trail of crumbs” that marks how we got here. Personally, when I’m confused, I often find the only remedy is to backtrack to a place where I wasn’t confused and start over from there. Quantum mechanics is, these days, presented to students as a formal axiomatic system. As such, it is internally consistent, and consistent also with a wide variety of experimental results. But it is inadequate as a physical theory. So some of the axioms need to be adjusted, but which ones? And in what way? The axiomatic system itself gives us no help in that regard, and simply trying random alternatives is an exercise in futility. The more familiar we are with the existing system, the harder it is to think of sensible alternatives, and the very success of the current theory guarantees that any alternative we try will almost certainly be worse. 
Indeed, the literature of the past century is littered with such attempts, including some truly astounding combinations of formal virtuosity and physical vacuity. So, to have any hope of progress, I think we must trace back over the history of the formulation of the present theory, and reconsider why this particular set of axioms was chosen, what alternatives were considered, why they were rejected, and by whom. We need to reconsider which choices were made for good reason after sober debate, which ones were tacitly absorbed without due consideration because they were part of “what everybody knew” at the time, and which ones were adopted as a result of deferring to the vigorous urgings of charismatic individuals. We are, as you say, badly lost. But let that trail of crumbs be erased, and we may well find ourselves hopelessly lost. And that is why historical revisionism is so dangerous. 112. Phil: “... he used concepts such as ‘beables’ in place of things like ‘observables’ for instance in an attempt to avoid such prejudices and preconceptions.” I must confess that I cringe every time I read a paper about beables. Yes, the term avoids prejudices and preconceptions, but it is also completely devoid of valid physical insight. Thus it throws the baby out with the bathwater. For me, it makes thinking about the underlying physics even harder and actually strengthens the stranglehold of the formal axiomatic system we are trying to escape. 113. Ain Soph: But Copenhagen is not Cefalu. Or is it? Oh please the understanding about what amounts to today's methods has been the journey "through the historical past" and the lineage of teacher and students has not changed in it's methodology. Some of the younger class of scientist would like to detach themselves from the old boys and traditin. Spread their wings. Woman too, cast to a system that they too want to break free of. So now, such a glorious image to have painted a crippled old one to extreme who is picking at brick and mortar. How nice. Historical revisionist? They needed no help from me. "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Harold Scott Macdonald (H. S. M.) Coxeter These things are in minds that I have no control over, so, how shall I look to them but as revisionists of the way the world works now. Even, Hooft himself:) 114. Ain Soph: I'm not really sure what point you're trying to make. If anything then the common present-day interpretation of quantum mechanics is an overcompensation for a suspected possible bias: we're naturally tending towards a realist interpretation, thus students are taught to abandon their intuitions. If this isn't accompanied by sufficient reflection I'm afraid though it just backlashes. Btw, thanks for Bell's impossibility quote! I should have used that for fqxi essay! Best, 115. Hi Ain Soph, As to the revisionists it’s true that they might cause some reason for concern. However there is the other side of the coin where those like Guido Bacciagaluppi & Antony Valentini who are telling the story from the opposite perspective of the prevailing paradigm and so I suspect the crumbs will always remain as to be followed. 
I am surprised you’re not a ‘beables’ appreciator for it was Bell’s way of emphasizing that QM had to be stripped bare first of such motions before it had any chance of being reconstructed in such a way it would serve to be a consistant theory that can take one to the experimental results without interjecting provisos that don’t stem from the formalism. This has me mindful of a pdf I have of a hand written note Bell handed to a colleague during a conference he attended year ago that listed thw words he thought should be forbidden to be used in any serious conversation regarding the subject which were “ system, apparatus, microscopic, macroscopic, reversible, irreversible, observable, measurement, for all practical purposes ”. So I don’t know as to exactly how you feel about it, yet to me this appears as a good place to start. 116. Plato: For the past few hundred years, Western civilization has enjoyed an increasingly secular and rational world view – a view which practiced science as natural philosophy and revered knowledge as an end in itself. The result was an unprecedented proliferation of freedom and prosperity throughout the Western world. But that period peaked around the turn of the last century, and has been in decline for almost a hundred years. Now we value science primarily for the technology we can derive from it. And the love of knowledge is being pushed aside by a resurgence of mysticism and virulent anti-rationalism. This is not just young Turks making their mark. This is barbarian hordes at the gates. And yes, we are now witnessing a return to a preoccupation with gods and goddesses and magic, just like the last time a world-dominating civilization went into decline. The result was a thousand years of ignorance and serfdom. This time, the result may be less pleasant. 117. Bee: “If this isn't accompanied by sufficient reflection I'm afraid though it just backlashes.” Yes. Exactly my point. Students today are not encouraged to reflect and develop insight. They are encouraged to memorize formal axioms and practice with them until they can produce detailed calculations of already known phenomena. They are thereby trained to use quantum mechanics in the development of new technologies, but they are not educated in a way that would allow them to move beyond the accepted axiomatic system in any principled way. Bourbakism and the Delphi method ensure that the questions and beliefs of the vast majority remain well within the approved limits. 118. Phil (and Bee - this further amplifies my reply to you): While I agree with the intent of banishing misconceptions and preconceptions, I disagree with the method of inventing semantically sterile new terminology. For example, in moving from Euclidean to hyperbolic geometry, one can simply amend the parallel postulate, claim that geometric insight is therefore of no further use, and deduce the theorems of hyperbolic geometry by the sterile, rote application of axioms. Or one can draw a picture of a hyperbolic surface, and enlist one’s geometric insight to understand how geometry changes when the parallel postulate is amended. One ends up proving the same theorems, but one gets to them much faster, and much more surely. And one understands them much better. In short, I think teaching students to abandon their intuitions does more harm than good. Having abandoned them, what choice remains to them but to mimic the cognitive biases of their instructor? 119. This comment has been removed by the author. 120. 
Hi Ain Soph, You talk about amending axioms instead of eliminating them from being ones; this is exactly how the type of ambiguity and sloppiness that Bell complained about arose in the first place. What an axiom or postulate represents in math or theory is a self-evident truth, which either is to be considered so or not. What would it mean to amend an axiom, could that mean for instance that the fifth postulate holds every day except Tuesdays? No I'm sorry, that's the type of muddle-headed thinking that has had QM become what it is, with all the ad hoc rules and decisions as to how and when they are to apply. The fact is in deductive (or inductive) reasoning a postulate is or it isn't, with no exceptions allowed, otherwise it has lost all its ability to be considered as logic. This then is exactly what a 'beable' is, as being something that you consider as a postulate (prerequisite) or not, it either is or it isn't. What then falls out is then consistent with nature as it presents or it doesn't. So that's why for instance Bell liked the pilot wave explanation, since when asked is it particle or wave, such a restriction of premise didn't satisfy what nature demonstrated as being both particle and wave. Therefore the concept of 'beables' is not to have what is possible ignored, yet quite the opposite. So where for instance the pilot wave picture is referred to as being a hidden variables theory, Bell would counter that standard QM is a denied variables theory. This is to find that it makes no sense to have axioms amended, they either are or they're not, otherwise it just isn't a method of reason. So what's asked for is not that intuitions be ignored, rather that when such intuitions are incorporated into theory there be a way to assess its validity where nature is the arbitrator of what is truth and not the theorist. 121. Phil: Sorry, I should have been more clear. Essentially, the parallel postulate holds that the sum of the internal angles of any triangle is equal to 180 degrees. If we amend that to read "less than" then we get hyperbolic geometry. (And "greater than" gives us elliptic geometry.) 122. Hi Ain Soph, What you are talking about is not amending a postulate, yet rather to define or set parameters where there isn't one. What the fifth postulate is doesn't allow for what you propose in either case, so therefore it must be eliminated to even have it considered. That's like people believing Einstein set the speed of light as a limit, rather than him realizing this speed was a limit resultant of being a logical consequence of his actual premises; which are there is no preferred frame of reference, such that whenever anyone is arbitrarily chosen the laws of nature will present as the same. The speed of light then being a limit falls out of these axioms and is not needed in addition as to have it to be. That is, it's not an axiom yet rather a direct consequence of them. So then if you want things to always be hyperbolic or elliptic geometrically, that would require an axiom and not a parameter to mandate it be so. Whether it is less or greater than 180 degrees holds no significance where such parameters are just special cases, as is the one it is being compared with, itself also just a special case where no such axiom to have it be so exists. So for me a true explanation is found when things are no longer simply parameters, yet rather consequences of premise (or axioms).
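For reference, the angle-sum statement under discussion can be written as a single relation rather than a case-by-case parameter (a standard result of surface geometry, added here purely as an illustration): for a geodesic triangle of area A on a surface of constant Gaussian curvature K,

\alpha + \beta + \gamma = \pi + K A

so the Euclidean value of 180 degrees is the K = 0 case, hyperbolic geometry (K < 0) gives an angle sum below 180 degrees, and elliptic geometry (K > 0) gives one above it; the deficit or excess is a consequence of the curvature rather than an independently adjustable parameter.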
Of course one could insist all such things are indeed arbitrarily chosen, which on the surface sounds reasonable, yet it still begs the question be answered how is it these parameters hold at all to present a reality that has them as fixed. So my way of thinking, being consistent with Bell’s, is to be a scientist is to find the world as a construct mandated by logic and to think otherwise just isn’t science. This I would call the first axiom of science, where that of Descartes’ being the second, which is to give us and not reality reason to think we might discover what, how and why it is as it is. 123. Ain Sof:Now we value science primarily for the technology we can derive from it. No, as I see it, you are the harbinger of that misfortune. What can possible be derived from developing measure that extend our views of the universe? "Only" human satisfaction? Shall we leave these things unquestionable then and satisfactory, as to the progression you have seen civilization make up to this point? You intertwine the responsibility of, and confuse your own self as to what is taking place in society, cannot possibly be taking place within your own mind?:)yet, you have "become it" and diagnosed the projection incorrectly from my point of view:) You could not possibly be wrong?:) 124. Phil, He is right in relation to this geometric sense and recognizes this to be part of the assessment of what exists naturally. Gauss was able to provide such examples with a mountain view as a move to geometrically fourth dimensional thinking. As to lineage, without Gauss and Riemann, Einstein would have not geometrically made sense. This is what Grossman did for Einstein by introduction. Wheeler, a Kip Thorne. 125. In context, Phil: Emphasis mine. Businessmen value Science for the technology it can produce. Governments, sometimes. There is of course this little thing called "National Defense" such that even if a country is not on a war-footing, they at least seek the technology that puts them on an even-footing with other governments that may put the war-foot on them. USSR vs USA in the 20th century, Iran vs Israel and the West today, and there are many other examples throughout history. But we knew that. I'm just reminding. I believe Ain Soph was railing against the Politico-Economic "human" system that places Engineering above Science, and I hope I've explained why. I don't see where Ain Soph was being the harbinger of that reality; rather, he was pointing it out. Governments also support Theory, and that's key. Questions regarding how many Theorists are actually needed non-withstanding, we do need them. Businesses know this, and cull theorists only when they are on the edge of a breakthrough. They haven't the time to waste on things that will pay off 10-20 years down the road. They want applications, now. Yesterday would be better. Two examples: Bell Labs up through the mid-1990's, and Intel. Intel used Quantum Physics as was known, specifically Surface Physics. Bell Labs on the other hand had no reason to work on "Pure Research," yet they did. But Bell Labs was part of the communications monopoly AT&T, which had more money than God, AND the US Government poured lots of money into Bell Labs as well to the point you couldn't tell where AT&T ended and the government began, so the Labs were an example of yes, Government funding, at least partially. Enter our new age, where rich folks like Branson and Lazaridis etc. are picking up the slack. 
There has been a shift in funding sources, especially with governments hard pressed to meet budgets, and when that happens, Theory always takes a hit. 126. It's late and also I don't think Bee really gets my point (did anyone else?) about randomness in nature v. the deterministic nature of math. But for the record some clarification is needed. First, it's not really a matter of my having or claiming to have a disproof of MWI. But it is accepted logical practice that the one postulating something more than we know or "have", has the burden of proof. Also, I am saying that *if* the world is not MWI then it cannot be represented by deterministic math - which is different from saying, "it is not" MWI and thus cannot be represented by math. (Bee, either you need to sharpen up your logical analysis of semantics, or I wasn't clear enough.) Furthermore, I don't "believe" that there has to be some actual sequence, I am saying that such specific sequences are what we actually find. But now I see the source of much confusion of you and Andrew T: you thought, I was conflating the idea that "real flowing time" couldn't be mathematically modeled with the other idea that a mathematical process can produce other than the specific sequence it logically "has to", such as digits of roots. But that isn't what I meant. It doesn't matter whether time actually flows or not, or if we live in a block universe. The issue is, the sequence produced in say a run of quantum-random processes is thought to be literally random and undetermined. That means it was not logically mandated in advance by some specific choice such as "to take the digits of the cube root of 23." Sure, some controlling authority could pick a different seed each time for every quantum experiment, but "who would do that"? But if there isn't such a game-changer, then every experiment on a given particle would yield the same results each time. That is the point, and it is supported by the best thinking in foundations. Above all, try to get someone's point. 127. (OK, I still may have confused the issue about "time" by saying "in advance." The point is: even in a block universe with no "real time", then various sequences of e.g. hits in a quantum experiment would have to be generated by separate, different math processes. That sequence means the ordered set of numbers, whether inside "real time" or just a list referring to ordering in a block of space-time. So one run would need to take e.g. the sqrt of 5, another the cube root of 70, another 11 + pi, etc. Something would have to pick out various generators to get the varying results. If that isn't finally clear to anyone still dodging and weaving on this, you can't be helped. Not a lot understand the implication of this over the doorway in this "new institution called Perimeter Institute" but yes, if one were hell bent toward research money for militarization then indeed such technologies could or might seem as to the status of our civilization. Part of this Institution called PI I believe is what Ain Sof is clarifying to Phil, is an important correlation along side of all the physics, and does not constitute the idea of militarization, but research about the current state of the industry. That is a "cold war residue" without the issue being prevalent has now been transferred to men in caves. Fear driven. The larger part of society does not think this way? So in essence you can now see Ain sof's bias.:) 129. 
Neil: I apologize in case I mistakenly mangled your statement about MWI, that was not my intention. About the burden of proof: you're the one criticizing somebody else's work, it's on you to clarify your criticism. I think you have meanwhile noticed what the problem is with your most recent statement. I think I said everything I had to say and have nothing to add. You are still using undefined expressions like "process" and "picking" and "generation of sequences." Let me just repeat that there is no need to "generate" a sequence, "pick" specific numbers or anything of that sort. It seems however this exchange is not moving forward. Best, 130. This comment has been removed by the author. 131. For those who want to venture further. 132. (Note: all of the following is relevant to the subject of cognitive bias in physics, being concerned with the validity of our models and the use of math per cognitive model of the world.) Bee: thanks for admitting some confusion, and it may be a dead end but I feel a need to defend some of my framings of the issue. I don't know why you have so much trouble with my terms. We have actual experiments which produce sequences which appear "random", and which are not known to be determined by the initial state. That is already a given. I was just saying as analysis that a set of sequences that are not all the same as each other, cannot be generated by a uniform mathematical process (like, the exact same "program" inside each muon or polarizing filter.) If there was the same math operation or algorithm there each time, it would have to produce the same result each "time" or instance. Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously. Steven C: Your post is gone (well, in my email box - and I handle use of author-deleted comments with great carefulness), but I want you to see this anyway: As for this particular squabble, it isn't any more my stubbornness than anyone else who disagrees and keeps on posting. I had to, since I was often misunderstood and am expressing (believe or like it or not) the consensus position in foundations of mathematics and physics. Read up on foundations and find re determinism and logical necessity v. true randomness. Your agreeing with Orzel in that infamous thread at Uncertain Principles doesn't mean his defense of the decoherence interpretation was valid. Most of my general complaints are the same as those made by Roger Penrose (as in Shadows of the Mind.) He made, like I did, the point that DI uses a circular argument: if you put the sort of statistics caused by collapse into the density matrix to begin with, then scrambling phases produces the same "statistics" as one would get for a classical mixture. Uh yeah, but only because "statistics" are fed into the DM in the first place. Otherwise, the DM would just be a description of the spread of amplitudes per se. You have to imagine a collapse process to turn those amplitudes - certain or varied as the case may be- into statistical isolation. The DI is a circular argument, which is a logical fallacy not excused or validated by "knowing more physics." Would you be dismissive of Penrose? As for MWI: if possible measurements produce "splits" but there is nothing special about the process of measurement or measuring devices per se, then wouldn't the first BS in a MZ interferometer instigate a "split" into two worlds? That is, one world in which the photon went the lower path, another world where it went the other path? 
But if that happened, then we wouldn't see the required interference pattern in any world (or as ensemble) because future evolution would not recombine the separated paths at BS2. Reflect on that awhile, heh. 133. [part two of long comment] At Uncertain Principles I critiqued Orzel's specific example - his choice - which used "split photons" in a MZI subject to random environmental phase changes. He made the outrageous argument that, if the phase varies from instance to instance, the fact that the collective (ensemble) interference pattern is spoiled (like it would be *if* photons went out as if particles, from one side of the BS2 or the other) somehow explains why we don't continue to see them as superpositions. But that is absurd. If you believe in the model, the fact that the phase varies in subsequent or prior instances can't have any effect on what happens during a given run. (One critique of many - suppose the variation in phase gets worse over time - then, there is no logical cut-off point to include a set of instances to construct a DM from an average spread, see?) At the end he said of the superposed states "they just don't interfere" - which is meaningless, since in a single instance the amplitudes should just add regardless of what the phase is. Sure, we can't "show" interference in a pretty, consistent way if the phase changes, but Orzel's argument that the two cases are literally (?) equivalent (despite the *model* being the problem anyway, not FAPP concerns) is a sort of post-modern philosophical mumbo jumbo. My reply was "philosophy" too, but at least it was valid philosophical reasoning and not circular and not making sloppy, semantically cute use of the ensemble concept. (How can I appreciate his or similar arguments if it isn't even clear what is being stated or refuted?) Funny that you would complain about philosophy v. experiment, when the DM is essentially an "interpretation" of QM not a way to find different results. Saying decoherence happens in X tiny moment and look, no superposition! - doesn't prove that the interpretation is correct. We already knew, the state is "collapsed" whenever we look. Finally, I did actually just propose a literal experiment to retrieve amplitude data that should be lost according to common understanding of creating effective (only that!) mixtures due to phase changes. It's the same sort of setup Orzel used, only with unequal amplitude split at BS1. You should be interested in that (go look, or again but carefully), it can actually be done. It's importance goes beyond the DI as interpretation, since such information is considered lost, period, in traditional theory not even counting interpretative issues. I do like your final advice: TRY, brutha, to expand your horizons. Is all I'm saying. Yes, indeed! I do try - now, will you? BTW Bee and I get along fine, despite tenaciously arguing over mere issues, and are good Facebook Friends. We send each other hearts and aquarium stuff etc. - I hope that's OK with Stefan! (Stefan, I will send a Friend request to you too, so you feel better about it.) I also have her back when she's picked on by LuMo or peppered by Zephir. 134. (correction, and then I leave it alone for awhile)- I meant to say, in paragraph #1 of second comment: [Not, "during a given run." - heh, ironic but I can see it's a mistake to conflate the two.] 135. Phil: I don’t know what to make of your last two posts. Surely you’re not unfamiliar with non-Euclidean geometry? 
In flat Euclidean space, the geodesics are straight lines and the sum of the internal angles of any triangle is equal to 180 degrees. In a negatively curved space, the geodesics are hyperbolae and the sum of the internal angles of any triangle is less than 180 degrees. In a positively curved space, the geodesics are ellipses and the sum of the internal angles of any triangle is greater than 180 degrees. These are facts. And in each case, they are also logical necessities that follow from the geometric structure of the space. (note: I sloppily reversed greater and less in my last post) We believe that space is negatively curved on a cosmological scale, and we know that we live on the surface of a spheroid, which is positively curved. So this parameter which you claim I make up and set arbitrarily is actually very real and determined by measurable properties of real things. Now, to return to my point: It would be foolish to study non-Euclidean geometry by abandoning our geometric insight, just because that insight was developed in a Euclidean context. Rather, we should use our geometric insight to see precisely what must be generalized in moving from Euclidean to non-Euclidean geometry, and to understand how and why the generalizations are possible and when they are necessary. The same remark applies to the study of special relativity, where the finite speed of light leads to an indefinite metric which induces a hyperbolic geometry, and Lorentz boosts are nothing other than 4-dimensional hyperbolic rotations. And the same remark applies to quantum mechanics, where the non-vanishing of Planck’s constant induces a hyperbolic projective mapping from the Bloch sphere to the complex Hilbert space and causes probabilities to appear noncommutative and complex. It is precisely by retaining our geometric insight that the apparent paradoxes of these subjects are most easily resolved and understood to be nothing more than logical necessities that follow from the underlying geometric structure. 136. Plato: Certainly, there are many things I could be wrong about. But, be that as it may... I see that the philosophical trends of the last century are an outright attack on rationality, replacing reason with rhetoric whose primary aim is to deconstruct Western culture. I see that research is funded by agencies uninterested in the pursuit of knowledge except as a source of economic advantage and weapons production. I see that universities have been transformed from academies of learning into vocational schools and centers of indoctrination. These are simple observations, easily seen by anyone who looks with open eyes. Thus they cannot possibly be projections of anything taking place within my own mind. 137. ain soph, "I see" then, you have no biases and I am not blind.:) Good clarity on the subject of geometrical propensities. Good stuff. 138. Neil B: “Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously.” “Would you be dismissive of Penrose?” For someone who likes to decry logical fallacies, you’re awfully fond of the argument from authority... By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all.
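As a toy illustration of the claim just made, that observing an outcome collapses a statistical description rather than the thing described (my own sketch, with an arbitrary made-up bias and nothing quantum about it): a Bayesian observer's belief about a coin narrows with each toss while the coin itself never changes.

```python
import random

# A sketch, not anyone's actual argument in this thread: the coin has a fixed
# (hidden) bias; the observer only updates a Beta(a, b) belief about that bias.
TRUE_BIAS = 0.5          # a property of the coin; never modified below
a, b = 1.0, 1.0          # uniform prior over the unknown bias

random.seed(0)
for n in range(1, 101):
    heads = random.random() < TRUE_BIAS      # the physical toss
    if heads:
        a += 1.0                             # the belief is updated, not the coin
    else:
        b += 1.0
    if n in (1, 10, 100):
        mean = a / (a + b)
        width = (mean * (1 - mean) / (a + b + 1)) ** 0.5
        print(f"after {n:3d} tosses: estimated bias = {mean:.3f} +/- {width:.3f}")

# The description "collapses" (narrows) as information arrives; TRUE_BIAS is untouched.
```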
Given sufficient sensitivity to initial conditions, arbitrarily small amounts of background noise are all it takes to make nominally identical experiments come out different every time, in ways that are completely unpredictable, yet conform to certain statistical regularities. Now, if you can clearly define the difference between “completely unpredictable” and “truly random” in any operationally meaningful, non-circular way, then you may have a point. Otherwise I have no more difficulty dismissing your argument than some of the ill-considered arguments made by Penrose. 139. Plato: Yup. That’s our story, and we’re stickin’ to it... 140. Ain SopH: No, I'm not all that fond of the argument from authority, if you mean that if so-and-so believes it, it must be true. Your understanding of that fallacy seems a little tinny and simpleminded, because the point is that such a person's belief doesn't mean the opinion has to be true. However, neither should a major figure's opinion be taken lightly, which is why I actually said to SC: "Would you be dismissive of Penrose?" instead of, "Penrose said DI was crap, so it must be." But you do have a point, so remember: if the majority of physicists now like the DI and/or MWI, that isn't really evidence of it being valid. This statement by you is incredibly misinformed: By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Uh, you didn't realize that the wave function can't be just a description of classical style ignorance, because parts can interfere with each other? That if I shot BBs at double slits, the pattern would be just two patches? Referring to abstractions like "statistical description" doesn't tell me what you think is "really there" in flight. Well, do you believe in pilot wave theory, what? What is going from emitter, through both (?) slits and then a screen, etc? Pardon my further indulgence in the widely misunderstood "fallacy of argument from authority", but all those many quantum physicists, great and common, were just wasting their wonder, worrying why we couldn't realistically model this behavior? That only a recent application of tricky doubletalk and unverifiable, bong-style notions like "splitting into infinite other worlds" somehow makes it all OK? 141. [two of three, and then I rest awhile] Before I go into this, please don't confuse the discussion about whether math can model true randomness (it can't, whether Bee gets it or not), with the specific discussion of decoherence and randomness there. They are related but not exactly the same. Now: you are right that the background noise can make certain experiments turn out differently each time (roughly, since they might be the same result!) but with a certain statistics. But what does that show? Does it show that there weren't really e.g two different wave states involved, or that we don't have to worry what happened to the one we don't find in a given instance? No. First, the question it begs is, why are the results statistical in the first place instead of a continued superposition of amplitudes, and why are they those statistics and not some other. If you apply a collapse mechanism R to a well-ordered ensemble of cases of a WF, then you can get the nice statistics that show it must involve interference. If R acts on a disordered WF ensemble, then statistics can be generated that are like mixtures. 
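A minimal numerical sketch of the step being argued over here (my own toy two-path example with made-up numbers, not Orzel's actual setup): the squared-amplitude rule, Neil's "R", is applied run by run, and only then does a fixed relative phase give fringes while run-to-run random phases give mixture-like 50/50 statistics.

```python
import cmath
import math
import random

def detection_prob(phi):
    """Born-rule probability at one output port of a balanced two-path setup."""
    amp = (1 + cmath.exp(1j * phi)) / 2   # sum of the two path amplitudes
    return abs(amp) ** 2                  # the 'R' step: square the amplitude

random.seed(1)
runs = 100_000

# Ordered ensemble: the same relative phase every run -> interference survives.
fixed = sum(detection_prob(0.7) for _ in range(runs)) / runs

# Disordered ensemble: a fresh random phase each run -> mixture-like statistics.
scrambled = sum(detection_prob(random.uniform(0, 2 * math.pi))
                for _ in range(runs)) / runs

print(f"fixed phase:   P(port A) = {fixed:.3f}")      # cos^2(phi/2), about 0.88
print(f"random phases: P(port A) = {scrambled:.3f}")  # about 0.50, as for a mixture
```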
Does that prove jack squat about how we can avoid introducing R to get those results? No. If something hadn't applied R to the WFs, there wouldn't be *a statistics* of any kind, orderly or disorderly. (On paper, that something could be a clueless decoherence advocate who applies the squared amplitude rule to get the statistics, and who doesn't even realize he has just circularly and fallaciously introduced through the back door the very process he thinks he will "explain.") There would be just shifting amplitudes. It is the process R that produces mixture-like statistics from disordered sets of WFs, not the MLSs that explain/produce/whatever "the appearance" of R through a cutesy, backwards, semantic sleight of hand. Your point about whether such kinds of sequences could be distinguished (as if two processes that were different in principle could not produce identical results anyway, which is a FAPP conceit that does not treat the model problems) is moot; it isn't even the key issue anyway. The key issue is: why any statistics or sequences at all, from superpositions of deterministically evolving wave functions. So we don't know what R is or how it can whisk away the unobserved part of a superposition, etc. This is what a great mind like Roger Penrose "gets", and a philosophically careless, working-physics Villager like Orzel does not. I'm not sure what you get about this line of reasoning since you didn't actually deal with my specific complaints or examples. Note my critique of MWI, per that the first BS in a MZ setup should split the worlds before the wave trains can even be brought back together again. 142. Here's another point: the logical and physical status of the density matrix, creating mixtures, and the effects of decoherence shouldn't depend on whether someone knows the secret of how it is composed. But if I produce a "mixture" of |x> and |y> sequential photons by switching a polarizer around, I know what the sequence is. Whether someone else can later confidently find that particular polarization sequence depends on whether I tell them - it isn't a consistent physical trait. Someone not in on the plan would have to consider the same sequence to be "random", and just as if a sequence of diagonal pol, CP, etc. as shown by a density matrix. But it *can't be the same* since the informed confederate can retrieve the information that the rubes can't. So the DM can't really describe nature, it isn't a trait as though e.g. a given photon might really "be" a DM or mixture instead of a pure state or superposition. Hence, in the MZ with decoherence that supposedly shows how the state approaches a true mixture, everything changes if someone knows what the phase changes are. That person can correct for the known phase changes, and recover perfect interference. How can the shifting patterns be real mixtures if you can do that? Oh, BTW - a "random" pattern that is known in advance (like I tell you, it's sqrt 23) "looks just like" a really random pattern that you don't or can't know, but it can make all the difference in the world, see? Finally, I said I worked up a proposal to experimentally recover some information that we'd expect to be lost by decoherence, and it seems you or the other deconauts never checked it out.
It may be a rough draft, but it's there. 143. Arun: That’s an interesting paper, although it suffers greatly under the influence of “critical theory” and goes out of its way to rewrite history in terms of economic class struggle. After reading its unflattering description of German universities of the nineteenth century, one can only wonder how such reprehensible places could have given us Planck, Heisenberg, Schrödinger, Minkowski, Stückelberg, Graßmann, Helmholtz, Kirchhoff, Boltzmann, Riemann, Gauss, Einstein... Clearly, those places were doing something right. Something we’re not doing, otherwise we would be getting comparable results. But the paper refuses to acknowledge that, and studiously avoids giving the reader any reason to search for what that something might be. Some of the paper’s criticisms are not without substance, but that’s all the paper does: it criticises. And thus it makes an excellent example of the corrosive influence of historical revisionism, and how critical theory is used to undermine Western culture. 144. Neil: You continue to make the same mistake: you still start by postulating something as “actual”, as what we “know” and is “given”, when I’m telling you we actually don’t know it without already making further assumptions. I really don't know what else to say. Look, take a random variable X. It exists qua definition somewhere in the MUH. It has a space of values, call them {x_i}. It doesn't matter if they're discrete or continuous. If you want a sequence, each value corresponds to a path, call it a history. All of these paths exist qua definition somewhere in the MUH because they belong to the "mathematical structure". Your "existence" is one of the x_i(t), and has a particular history. But you don't need to "generate" one particular sequence, you just have to face that what you think is "real" is but a tiny part of what MUH assumes is "real." Besides, this is very off-topic, could we please come back to the topic of this post? Best, 145. Hi Ain Soph, Of course I’m familiar with non-Euclidean geometry, as it simply refers to geometries that exclude the fifth postulate. I’m also quite aware that GR is totally dependent upon it. The point I was attempting to make is what the difference is between the axioms of a theory and any free parameters it contains. It could be said for instance that what forces non-Euclidean geometry upon GR is its postulate of covariance, which has the architecture of spacetime mandated by the matter/energy contained. However, particularly what that (non-Euclidean) geometry is in terms of the whole universe is not determined by this postulate, but rather by the free parameter known as the cosmological constant; that is whether it be closed, flat or open. So my contention is that to fix this variable one needs to replace the parameter with an axiom that will as a consequence mandate what this should be, whether that be within the confines of GR or a theory which is to supersede it. Anyway somehow or other I don’t believe either you or Plato understand the point I’ve attempted to make and thus rather than just repeat what I said I’ll just leave it there. 146. Bee: could we please come back to the topic of this post? I second the motion. I'm sure we're jumping on wrong clues on all sorts of things*, and thanks for the Brugger psychology-testing stuff. Awesome.
I do know something about Dopamine since a close family member was turned into a paranoid schizophrenic thanks to a single dose of LSD her so-called "friend" put in her mashed potatoes at lunchtime one day. The results were horrific, the girl went "nuts" to use the vernacular. Too much Dopamine = Very Bad. Well, I've long felt everyone suffers to some degree some amount of mental illness. The Brugger test confirms that in my mind. *So our brains aren't perfect, yet I believe the ideal is community, in Physics that means peer review, to sort out the weaknesses of one individual by contrasting their ideas with multiples of those better informed, not all of whom will agree of course, and not all of whom will be right. So consensus is important, before testing proves or disproves, or is even devised. Regarding assumptions (whether true or false), I think that is the job of a (Real not Pop) Philosopher, going all the way back to good ol' Aristotle and his "Logic" stuff. George Musser sums it up better than I, as so: I leave you with pure cheek: Andrew Thomas: Ooh, isn't that weird?! Your initials are carved into the CMB! I didn't see that in the oval Bee featured. I DID see "S'H", which I interpret as God confirming my SHT, or SH Theory, aka "Shit Happens" Theory. What a merry prankster that God dude is, what a joker man, putting it out there right on the CMB for all to see! Well, he DID invent the Platypus, so that's your first clue. ;-) 147. Clearly, those places were doing something right.... Well, experiments did not cost an arm and a leg and did not ever require hundreds of scientists or a satellite launch in those days. As one biographer pointed out, even up to Einstein's middle age, it was possible for a person to read all the relevant literature; the exponential growth since has made it impossible. Lastly, in the areas where the constraints mentioned above don't hold we're doing fine - e.g., genetics and molecular biology, computing, etc. It is just that you - we - do not recognize the pioneers in those fields to have such genius; that is a definite cognitive bias on our part. 148. From Columbus to Shackleton - the West had a great tradition of explorers, but now nobody is discovering new places on the Earth - must be an attack by the forces of unreason on the foundations of Western civilization. I mean, what else could it be? 149. Hi Phil, You mustn't become discouraged as to whether your point is understood or not; all the better taken in stride. There is to me a succession (who was Unruh's teacher?) and advancement of thought about the subjects according to the environment one is predisposed to. Your angle, your bias, is the time you spent with, greatly, before appearing on the scene here. Your comments are then taken within "this context" as I see it. Part of our communication problem has been what Ain Sof is showing. This has been my bias. Ain Sof doesn't have any.:) Why I hold to the understanding of what Howard Burton was looking for in the development of the PI institution was a "personal preference of his own" in relation to the entrance too, is what constitutes all the science there plus this quest of his. So as best I can understand "axiom" I wanted to move geometrical propensity toward what is "self-evident." Feynman's path integral models. Feynman based his ideas on Dirac's axiom "as matrices." I am definitely open to corrections by our better-educated peers. Here was born the idea of time in relation to the (i) when it was inserted in the matrices? How was anti-matter ascertained?
Feynman's toy models then serve to illustrate? Let the wrath be sent down here to the layman's understandings. 150. Arun: You’re not suggesting we’ve mapped out physics with anywhere near the completeness with which we’ve mapped out the Earth, are you? 151. Plato: “This has been my bias. Ain Sof doesn't have any.” Now, now... What I said was that certain trends are so obvious that I’m certain I’m seeing something that is really there, and not projecting my own stuff onto the world. Of course, being certain of it is no guarantee that it’s true... 152. Phil: Once again, I agree wholeheartedly with what Bell is trying to accomplish by adopting the word, “beable,” but I lose more by abandoning the correct parts of my understanding of words, like “observation” and “property,” than I gain from the tabula rasa that comes with the word, “beable.” Bell himself recognised that the introduction of the word was a two-edged sword. In his 1975 paper, “The Theory of Local Beables,” he writes The name is deliberately modeled on “the algebra of local observables.” The terminology, be-able as against observ-able, is not designed to frighten with metaphysic those dedicated to realphysic. It is chosen rather to help in making explicit some notions already implicit in, and basic to, ordinary quantum theory. For, in the words of Bohr, “it is decisive to recognize that, however far the phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms.” It is the ambition of the theory of local beables to bring these “classical terms” into the mathematics, and not relegate them entirely to the surrounding talk. [emphasis in the original] Two or three paragraphs later, he adds One of the apparent non-localities of quantum mechanics is the instantaneous, over all space, “collapse of the wave function” on “measurement.” But this does not bother us if we do not grant beable status to the wave function. We can regard it simply as a convenient but inessential mathematical device for formulating correlations between experimental procedures and experimental results, i.e., between one set of beables and another. Now, for someone who is thoroughly steeped in the orthodox view that probabilities are objectively real properties of physical systems, I suppose it can be useful to adopt the word, “beable,” to remind themselves that a probability isn’t one. But the real danger of introducing this term is that it tempts one to treat the concept as being relevant only in the quantum context. Thus it opens the door to a new misconception while throwing the old one out the window. So I think the preferable way to combat this kind of cognitive bias is to realize that its root lies in the widespread misapprehension of the concept of probability. And this is where I draw a parallel to my remarks about geometric insight, because we must use our insight to see precisely what must be generalized in moving from classical to quantum mechanics, and to understand how and why the generalizations are possible and when they are necessary. The key realisation is that probable inference is the generalization of deductive inference from the two-element field {0,1} to the real interval [0,1]. That alone should be enough to counteract any tendency to ascribe objective reality to probabilities (i.e., treat them as beables), even in a classical context. Then we must generalize again to vector probabilities in statistical mechanics, and finally to spinor probabilities in quantum mechanics.
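For what it is worth, the Cox-style claim above can be put compactly; the standard product and sum rules are

$$p(AB \mid C) = p(A \mid C)\, p(B \mid AC), \qquad p(A \mid C) + p(\bar A \mid C) = 1 ,$$

and restricting every probability to the two values 0 and 1 turns them into the ordinary truth-table rules for conjunction and negation. That is the precise sense in which probable inference generalizes deductive inference (background only; it is not meant to settle anything else in this exchange).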
As I remarked above: my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Once we realize that this same idea still applies when probabilities are generalized from scalars to spinors, and manifests in the latter case as the collapse of the wavefunction, the “weirdness” of quantum mechanics evaporates, and along with it, the need for terms like “beable.” 153. Ain Sof, Ah!....there is hope for you then.:) Dennis William Siahou Sciama FRS (November 18, 1926 – December 18, 1999). Sciama also strongly influenced Roger Penrose, who dedicated his The Road to Reality to Sciama's memory. The 1960s group he led in Cambridge (which included Ellis, Hawking, Rees, and Carter) has proved of lasting influence. Alma mater: University of Cambridge. Doctoral advisor: Paul Dirac. Doctoral students: John D. Barrow, George Ellis, Gary Gibbons, Stephen Hawking, Martin Rees, David Deutsch, Brandon Carter. 154. Hi Ain Soph, So you are basically saying that as long as the statistical description is relegated to being an instrument for calculating outcome, rather than what embodies the mechanics (machinery) of outcome, then we don’t need anything else to keep us from making false assumptions. This to me sounds like what someone like Feynman would say, who contended that the path integral completely explained least action as what mandates what we find as outcome. I’m sorry, yet for me that just doesn’t cut it, as it assigns the probabilities as being the machinery itself. The whole point of Bell’s ‘beable’ concept is to force us to look at exactly what hasn’t been explained physically, rather than having us able to ignore their existence. That’s to say, that yes the reality of the coin is not affected by it being flipped, yet one still has to ask what constitutes being the flipper, even before it is considered as landed by observation to be an outcome. What you are asking by analogy to be done is to accept that a lottery drum’s outcomes are explained without describing what forces the drum to spin. The fact is probability is reliant on action and all action requires an actuator. So if your model has particles as being the coins, you still have to give a physical reality not just to the drum, yet also to whatever has it be spun. If your model has the physicality of reality as strictly being waves, then you are in a worse situation, for although you have accounted for what represents the actuator of the spin, you are left with nothing to be spun that could be observed as an outcome. This is exactly the kind of thing that Bell was attempting to have laid bare with his inequality, as it indicated that the formalism (math) of QM mandated outcomes that required a correlated action demonstrated in outcomes separated by space and time exceeding that of ‘c’, and yet had no mechanism within its physical description that would account for such outcomes. So yes I would agree that the mathematics allows us to calculate outcome, yet it isn’t then by itself able to embody the elements that have them to be, and thus that’s why ‘beables’ should be trusted when evaluating the logical reality of models. Further, one could say that’s why we have explanations like many worlds as attempting to give probability a physical space for Feynman’s outcomes, or Cramer’s model to find the time instead as the solution.
Then again we have the “all is math” contention that there is no physical embodiment of anything, only the math, which as far as I can see is what your position would force us to consider as true. I don’t know how you see all this, yet for me there seems to be room for other, more directly physically attached explanations for what we call reality, which as you say would not force us to throw the baby, which in this case is reality, out with the bath water, with them only representing our false assumptions and prejudices. So yes I agree that one and one must lead to there being two, yet they both must still be allowed to exist to have such a result found as being significant to begin with. 155. While the reality of the coin is seemingly not affected by the collapse of the statistical distribution describing its position, the same cannot be said about the electron. There are no hidden variables maintaining the reality of the electron while its wave function evolves or collapses. 156. It's a shame that Bee and I continue to disagree about the issue of determinism in math v. the apparent or actual "true randomness" of the universe. I don't even agree that it's off-topic, Bee, since it is very relevant to the core issue of whether we do and/or should project our own cognitive biases on the universe. One of those modern biases apparently is the idea of mechanism, that outcomes should be determined by initial conditions. Well, that is how "math" works, but maybe not the universe. Perhaps Bee is thinking I'm trying to find purely internal contradictions in the MUH, but I'm not. It can be made OK by itself, in that every possibility does "exist" Platonically, and there is no other distinction to make (like, some are "real stuff" and others aren't.) That's the argument the modal realists make. In such a superspace, it is indeed true that the entire space of values of a random variable exists. It's like "all possible chess games" as an ideal. But it isn't like a device that can produce one sequence one time it is "run", another sequence in another instance etc. It is a "field." And no it doesn't matter what is continuous or discrete, that's beside the point of deterministically having to produce consistent outputs when actually *used.* But my point is, we don't know that MUH is rightly framed. What we have is one world we actually see, and unless I have "some of what MWI enthusiasts are smoking" I do not "see" or know of any other worlds. In our known world, measurably "identical particles" do not have identical behavior. That is absurd in logical, deterministic terms. We do have specific and varying outcomes of experiments, that is something we know and does not come from assumptions. I am sure you misunderstood, since you are aware of the implications of our being able to prepare a bunch of "identical" neutrons. One might decay after 5 minutes, another after 23 minutes. If there was an identical clockwork equivalent, the same "equation" or whatever inside each neutron, then each neutron would last the same duration. I think almost everyone agrees on that much, they just can't agree on "why" they have different lifetimes. In an MUH, we'd still have to account for different histories of different particles, given the deterministic nature of math. There are ways to do that. The world lines could be like sticks cut to different lengths. In such a case there is no real causality, just various 4-D structures in a block universe "with no real flowing time."
Or, each particle could have its own separate equation or math process (like sqrt 3 for one neutron, cube of 1776 for another.) But the particles could not all be identical mathematical entities, and glibly saying "random variable" *inside each one* would not work. If each neutron started with the same inner algorithm or any actual math structure or process, it would last as long as any other. That is accepted in foundations of math, prove me wrong if you dare. 157. It is possible for different math-only "worlds" to have different algorithms. But if the same one applied to all particles in that world, then every particle would have to act the same since math is deterministic. Hence, we'd have the 5-minute-neutron world, the 23-minute-neutron world, etc. Our universe is clearly not like that, as empirically given. Hence, each of those apparently identical particles must have some peculiar nature in it, that is not describable by mathematical differences. And Ain Soph is IMHO wrong to say we can't ascribe probability to a single particle. Would you deny, that if I throw down one die it has "1/6 chance of showing a three"? Would you, even if the landing destroyed the die after that? What other choice do we have? Yes, we find out via an ensemble. But then why does the ensemble produce those statistics unless each element has some "property" of maybe doing or not, per some concept of chance? I think we have no choice. And as for thinking the wave function is just a way of talking about chances, isn't beable, whatever: then what do you think is the character of particles and photons in flight? If you want to deny realness and say our phenomenal world is like The Matrix, fine, but at least you can't have it both ways. And however overlong or crabby some of the comments might be, it would be instructive to consider some of my critique of DI/DM/MWI. Sorry there is much confusion over determinism and causality here, but there just is, period! Again, this is relevant. But no more about it per se unless someone prompts with yet another critique per se! (;-) Yet note, the general point is part of the continuing sub-thread here over quantum reality, since it has to be. 158. Last note for now: Thanks Phil for some very cogent comments, supporting my outlook in general but in your more dignified style ;-) As for the tossed coin: REM that in a classical world, for one coin to land on its head and another, tails, was a pre-determined outcome of the prior state actually being a little different in each case! The coin destined to come up heads was already tipping that way, and at an earlier time its tipping that way was from a slightly different flip of my wrist, etc. It's not about whether the observation changes the coin, it's about the whole process being rigged in advance. One could think of the whole process as like a structure in space-time, with one outcome being one entire world-bundle, and the other outcome being another world-bundle. They are genuinely different (however slightly) all the way through! But in QM, we imagine two "identical states" from which outcomes are, incredibly, different. (It is easy to forget that really is logically incredible, as Feynman noted, since we'd gotten used to it being the apparent case.) As I painstakingly explained, that is not derivable from coin-toss style reasoning. If you believe that the other outcomes really exist somewhere, it's your job to bring photos, samples, whatever or else just be a mystic. 159.
Jules Henri Poincaré (1854–1912), Mathematics and Science: Last Essays: Let rolling pebbles be left subject to chance on the side of a mountain, and they will all end by falling into the valley. If we find one of them at the foot, it will be a commonplace effect which will teach us nothing about the previous history of the pebble; A Short History of Probability The Pascalian triangle (marble drop experiment perhaps) presented the opportunity for number systems to materialize out of such probabilities? If one assumes "all outcomes" one then believes that for every invention to exist, it only has to be discovered. These were Coxeter's thoughts as well. Yet now we move beyond Boltzmann, to entropic valuations. The Topography of Energy Resting in the Valleys then becomes a move beyond the notions of true and false and becomes a culmination of all the geometrical moves ever considered? Sorry just had to get it out there for consideration. 160. Just consider the "gravity of the situation":) and the deterministic valuation of the photon in flight has distinctive meanings in that context?:) 161. But they have identical wave functions. 162. Phil: You conclude from my argument exactly the opposite of what I intended to show. Nothing could more clearly demonstrate the confounding effects of cognitive bias. I’m NOT saying that your cognition is biased and mine isn’t. I’m saying that a mismatch between our preconceptions leads us to ascribe opposite meaning to the same sentences – with or without beables! This results in a paradoxical state of affairs: it seems we agree, even though our attempts to express that agreement make it seem like we disagree. Okay, so let me try again... In my view, the probabilities are anything but the machinery! They are nothing more than a succinct way of encoding my knowledge of the state and structure of the machinery. Neither my view nor Feynman’s nor Bell’s treats probabilities as beables. The wave fronts of the functions which satisfy the Schrödinger equation are nothing other than the iso-surfaces of the classical action, which satisfies the Hamilton-Jacobi equation. The apparently non-local stationary action principle is enforced by the completely local Euler-Lagrange equation. This is no more or less mysterious than the apparently non-local interference of wave-functions. In the last analysis, they stem from the same root. Thus amplitudes are not real things. They are merely bookkeeping devices that record our knowledge about the space-time structure of the problem, while abstracting away much of the detail by representing its net effect as quantum phase. This is what Schrödinger was trying to tell us with his thought experiment about the cat. By the same token, probabilities are not real things. They, too, are only bookkeeping devices, which quantify our ignorance of details. A correctly assigned probability distribution is as wide as possible, given everything we know. It is therefore not surprising that our estimated probability distribution becomes suddenly much sharper when we update it with the results of a measurement. It was 1935 when Hermann showed that von Neumann’s no-hidden-variables argument was circular. It was 1946 when Cox showed that the calculus of Kolmogorovian probabilities is the only consistent way to generalize Boole’s calculus of deductive reasoning to deal with uncertainty. It was 1952 when Bohm published a completely deterministic, statistical, hidden-variables theory of quantum phenomena.
It was 1957 when Jaynes showed that probabilities in statistical mechanics have no objective existence outside the mind of the observer. From 1964 to the end of his life, Bell could not disabuse people of the false notion that his theorem proved spooky action at a distance. And now, in 2010, cognitive bias still prevents the majority of physicists from connecting the dots. 163. Neil B: “prove me wrong if you dare” Ha! This is trivially easy. Each neutron in your example exists in its own unique milieu of external influences. Thus they are identical machines operating on different inputs, which therefore give different outputs. Their dependence on initial conditions is very sensitive, so there is no correlation between the moments at which different particles decay, even if they are very close together. Only the half-life survives as a statistical regularity. QED. “Would you deny, that if I throw down one die it has 1/6 chance of showing a three? ... What other choice do we have?” I claim that the state of the die will evolve as determined by its initial conditions and various influences that affect it in transit and modify its trajectory. Since I have imperfect knowledge of the initial conditions, and cannot predict the transient influences, and since I know that the final state depends very sensitively on these things, I have no rational choice but to treat the problem statistically. I will assign equal probabilities to the six faces only if I believe that the apparent symmetries of the die are real, and I will believe that only if I lack evidence to the contrary. However, if I see the die come up three, over and over again, I will have no rational choice but to adjust my assignment of probabilities, which amounts to revising my estimation about the symmetries. So you see, these statements of “having no rational choice but to assign certain probabilities” are statements about me, and about the evolution of my knowledge about the die. They are not statements about the die. With each observation I make, my estimate of the probabilities changes, but the die remains the same. And nowhere in any of that did I say anything about an ensemble. No ensemble is required. If you think you need an ensemble, then you have already accepted many-worlds, whether you think you have, or not. “It’s not about whether the observation changes the coin, it’s about the whole process being rigged in advance.” The belief that this is not true of quantum phenomena is one of the cognitive biases that result from the incorrect understanding of the nature of probability. See also my remarks in reply to Phil. 164. Ain Soph, I appreciate finally getting a considered response. However, your position about environmental influences on something so fundamental as particle life-expectancy is very unorthodox and very unsupported AFAIK by any experiments. So you are a determinist, who thinks there is some particular reason for one neutron to decay after one span, and for another to last a different span? Then we should be able to do two things: 1. Make batches from different sources and environments, that have varying tendencies to decay even if we can control the environment. If there's a clockwork inside each neutron, we should be able to create batches with at least some varying time spectra, such as lumping towards a particular span etc. But no such batches can be made, can they? Nor can we (2.) Do things to the particles to stress them into later being short-lived, or long-lived, etc. That is unheard of.
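A quick simulation of the memorylessness both sides are invoking (a toy exponential-decay model with an arbitrary stand-in mean lifetime, not real neutron data): the survivors of any waiting period go on to decay with the same statistics as a fresh batch.

```python
import random

MEAN_LIFETIME = 880.0    # arbitrary stand-in value in seconds, not a measurement
N = 200_000

random.seed(2)
lifetimes = [random.expovariate(1.0 / MEAN_LIFETIME) for _ in range(N)]

def mean_remaining_life(waited):
    """Average further lifetime of the particles that already survived `waited`."""
    survivors = [t - waited for t in lifetimes if t > waited]
    return sum(survivors) / len(survivors)

for waited in (0.0, 1000.0, 3000.0):
    print(f"survived {waited:6.0f} s -> mean remaining lifetime "
          f"= {mean_remaining_life(waited):7.1f} s")

# Up to sampling noise all three numbers agree: the long-lived survivors are
# statistically indistinguishable from a fresh batch, with no residual signal
# of an internal clock.
```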
Most telling is that if we let a bunch of particles decay for a while and take, say, the remaining 1%, that 1% decays from then on in the same probabilistic manner as the batch did as a whole up to that point. It is incredible for a bunch of somethings with deterministic structure to have a subset which lasts longer, but then has no further distinction after that time is up. The remaining older and older neutrons can keep being separated out, and no residual signal of a deterministic structure can be found after they've "held off" for all that time. They'd have to be like the silly old homunculus theory of human sperm, like endless Russian dolls waiting for any future contingency (look it up.) It is absurd, sorry. It's looking at actual nuts and bolts and not semantics or understanding about "probability" that best shows the point. You're right about the probability just being bookkeeping or coding of ignorance in a classical world, but our world is probably (!) not like that. A fresh neutron should be like a die with the same facing up each time and just falling straight down. (BTW, an ensemble is the set of trials or particles in one world, it does not have to mean MWI. The other copy of a particle in our world is just as good a repetition as having the ostensibly same thing happen elsewhere too.) The actual evidence supports the logically absurd idea that genuinely identical particles and states (empirical and theoretical basis up to the moment the similarity is shattered by a measurement or decay event, etc.) sometimes do one thing, sometimes another, for no imaginable reason as we understand and can model causality. Why? Because the universe is just weird. And it isn't about understanding "probability" per se, which of course does not really exist in math anyway - all the outcomes are precoded into earlier conditions etc., which means it's a matter of whether pseudo-random patterns that would seem to pass the smell test had been "put in by hand", in the Laplacian sense, by God or whatever started up the universe's clockwork. It is about understanding what our universe is like, when it is involved in what we loosely call "probability", without truly understanding what that means in the real world. It is wrong to project and impose our supposed philosophical needs or prejudices upon it. BTW, I was hoping you'd look at my experiment about recovering data after decoherence. 165. Neil B: Of the two issues you raise, you are wrong about the first one and right about the second one. In both cases, the correct understanding of the issue supports my argument. Firstly, if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters. Further, particle decay obeys Poisson statistics, which are shift invariant. Hence knowing how long a given particle has lived tells you nothing about how much longer you can expect it to survive. Secondly, there is indeed something you can do to “stress” a neutron to systematically affect its half-life: you can put it into different nuclei, or leave it free. In that way, you can vary the half-life of a neutron from about 886 seconds to infinity. A “fresh” neutron will be in some unpredictable state determined by the unknown details of the process that created it. An ensemble is not an actual set of particles or trials. People use ensemble arguments when they want to define probabilities as the limiting frequencies of occurrence in an infinite number of trials.
But determining what would happen if we could perform an infinite number of trials is based on symmetry arguments of the kind I’ve already outlined. If you can do that correctly, you don’t need an ensemble. If you can’t, all the ensembles in the multiverse won’t help you. Many-worlds is an attempt to rescue limiting frequencies in cases where postulating more than one trial makes no sense. For example, what is the probability that the sun will go nova in the next five minutes? Many-worlders claim to find that probability by counting the fraction of parallel universes in which the sun actually does go nova in the next five minutes. Yeah. Right. Many-worlds is the last resort of frustrated frequentists, desperately searching for ensembles in all the wrong places. Oh, and... what experiment about recovering data after decoherence? 166. Neil: What I was thinking you were saying is that MUH is in conflict with observation: "If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely 'random' ... then ... MUH is invalid." And I've tried to explain to you several times why MUH is not in conflict with observation. Of course we don't know that MUH is correct. It's an assumption, as I've been telling you several times already. All I've been saying is that it is tautologically (by assumption) not in conflict with reality. About the neutrons: Their behavior is described by a random process. All values this process can take "exist" mathematically in the same sense. You just only see one particular value. It is the same thing I've been telling you several times now already. Nobody ever said you must be able to "see" all of the mathematical reality. This is one of the assumptions you have been implicitly making that I tried to point out. Incidentally you just called my replies to you inconsiderate. Which, given the time that I have spent, I don't find very considerate myself. Best, 167. This comment has been removed by the author. 168. This comment has been removed by the author. 169. Hi Ain Soph, Perhaps, as you say, each of our biases has had us see that we disagree in places where we don't. There's not much more that could be said about this discussion about beables, since as you admit whether the concept is useful to avoid biases really depends on your own biases.:-) Just a couple of comments as to what you said, then I think we should put this to rest, at least in terms of this blog. The first would be to say I disagree that Feynman didn't consider probabilities as a beable, for he certainly did. I won't defend this other than just to say that you would have to point me to something more specific that would convince me otherwise. Lastly, what Hermann demonstrated to be wrong with von Neumann's proof was not that it was a circular argument, but rather that it assigned the logic of the averaged value of an ensemble to situations where it just couldn't logically be demanded to hold. As to all the back and forth comments in respect to probability, what these in the end represent comes down to whether one believes, rather than knows, if there is such an entity as the random set, that is outside of it being something that can only be defined mathematically by what it isn't, rather than what it is. This reminds me of a time some years back when I was playing craps late into the evening in Atlantic City, with noticing one fellow off to the side scribbling each roll of the dice on a note pad.
When it came time for me to leave the table, I asked this fellow if he believed what he was keeping track of would help him to win, with his reply being of course because it was all a matter of the probabilities. Then to continue I asked had he never heard that Einstein said that God doesn’t play dice and he replied yes I have, so what does that have to do with it. I then said what he meant which is of importance here is that even god could not have random work to have something known to be real, so then what chance do you think you have in being able to succeed? :-) 170. Hi Ain Soph, Just one thing I forgot to add is that from what I’m able to gather you are one of those that consider the working of reality as being that of a computer. Actually I have no difficulty with this as long as a computer is not limited to being digital. The way I look at it with respect to having both waves and particles as beables this would have this computer to be analogue which is restricted to digital output :-) 171. Bee, I don't know why you think I implied or said your replies were inconsiderate. When I said it's a shame we continue to disagree, I meant in the usual sense of "it's unfortunate it's that way" rather than "shame" over something bad. Or you might be confusing my use of "considered" in a reply to Ain Soph not you, in which I said I appreciated finally getting such a reply? The word "considered" means put effective thought into the comment instead of just tossing off IMHO assumptions etc. It does not mean the same as "considerate", meaning caring about someone else, being polite etc. REM that I am cross-talking to you and A.S. about nearly the same point, since you both seem to accept determinism (or its viability) and don't seem to appreciate my point about neutrons and math structures, etc. Perhaps also you have some lingering soft spots in practical English, although your writing is in general excellent and shows correct parsing of our terms and grammar at a high level. Note that English is full of pitfalls of words and phrases that mean very differently per context. Note also that when two people keep debating and neither yields, then both are "stubborn" in principle. I suggest seeking third-party insight, which I predict will be a consensus in the field of foundations (not applied math) that identical math structures must produce identical results (as A.S. now seems to admit - saying it's a matter of environmental influence, about which more in due course), and that a field of possibilities is just an abstraction. Hence it is not possibly a way to get one identical particle to last one duration, and another one; another duration. It is not a "machinery" for producing differential results in application. That is so regardless of what kind of universe we are in or how many others there are etc. Either we pre-ordain the behavior in the Laplacian sense, or it is inexplicably random and varying despite the identical beginning states. This is not my own idiosyncratic notion, but supported by extensive reading of historical documents in science and phil-sci that included works of founders of QM etc. Sure, we can't figure out "how can this be?" - it's just the breaks. In any case I'm sorry you felt put-down, but you can be relieved that isn't what I meant. 172. Further possible confusion: in practical (English?) discourse, if a comment is addressed to soandso then the statement: "Soandso, I appreciate finally getting a considered response..." 
is supposed to mean, "I appreciate finally getting _____"[from you] rather than, "I appreciate getting _____" from at least someone, at all, period. I'm not being a nit-picker about trivia, just don't want anyone to feel slighted. Ain Soph: I mean, the proposed experiment I describe at my name-linked blog, the latest post "Decoherence interpretation falsified?" (It's a draft.) Please, look it over, comment etc. 173. Neil: Thank you for the English lesson and I apologize for any confusion in case "inconsiderate" is not the opposite of "considered," which is what I meant. Yes, I was referring to your earlier comment addressed at Ain Soph. Your statement, using your description, implies that you think I have not "put effective thought" into my comments, which I find inappropriately dismissive. In fact, if you read through our exchange, I have given you repeatedly arguments why your claim is faulty which you never addressed. I am not "tossing off" assumptions, I am telling you that your logical chain is broken, and why so. It is not that I do not "appreciate" your point, I am telling you why you cannot use it to argue MUH is in disagreement with observation. This is not a "debate," Neil, it is you attempting an argumentum ad nauseum. Finally, to put things into the right perspective, nowhere have I stated whether I "accept" determinism or not, and for the argument this is irrelevant anyway. Nevertheless, when it comes to matters of opinion, I have told you several times already that I don't beleive neither in MUH nor in MWI. I am just simply telling you that your argumentation is not waterproof. Best, 174. Bee, I think you missed my followup to the explanation about "considered" - as I said there, I meant to Ain Soph that he/she had finally given me a "considered" [IMHO] reply, not that finally "someone" had - which would mean, no one else had either! So can we finally be straight about that, since you were not meant to be included? As for argumentum ad nauseum, I note that you keep mostly repeating yourself as well so wouldn't that apply to both of us if so? Also, I have provided some new ideas such as the example of neutrons, moving beyond more abstract complaints. So let's forget about MUH for awhile (and since it involves accepting "all possible math structures", which goes beyond merely saying that this world is fully describable by math.) Note also that even if a person's argument is not airtight, it can still be the most plausible one. Also, AFAIK I do have majority support (or used to?) in the sci-phil community. 175. BTW, I just got a FBF acceptance from Stefan! Thanks. The blog is good "you guys" (another colloquialism that in English can now include any gender) overall. To other readers: Bee's FB page is cute and interesting, much more than the typical scientist's. 176. Neil: Okay, let's forget about the considerable considerations, this is silly anyway. Of course I am repeating myself, because you are not addressing my arguments. Look, I am afraid that I read your "new ideas" simply as attempts to evade a reply to my arguments. But besides this, I addressed the neutrons already above. Best, 177. Hi Bee, ”I am just simply telling you that your argumentation is not waterproof.” Interesting much of this conversation ends up focused around semantics and looking at what you said to Neil reminded me it at times can be non trivial. 
That is, particularly in today's scientific climate I'd rather have my theory be bulletproof, and be less concerned whether it is waterproof, as there is a significant difference between being all wet and being dead :-) c.c. Neil Bates 178. I think we've come a long way from Spooky. :) An Introduction to String Theory, a talk by Steuard Jensen, 11 Feb 2004. 179. Plato: I wonder if your piece on imaging with entangled photons is the same idea as this stunning report: Wired Magazine, Danger Room, "Air Force Demonstrates 'Ghost Imaging'", by Sharon Weinberger, June 3, 2008: "Air Force funded researchers say they've made a breakthrough in a process called 'ghost imaging' that could someday enable satellites to take pictures through clouds." 180. Phil: You have a point about Feynman. Although, on page 37 of his 1985 book, QED, we find "... the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the probability that a photon will hit a detector, without offering a good model of how it actually happens," which draws a clear distinction between what we can calculate (probabilities) and what actually happens (beables), yet on page 82 he says "... the more you see how strangely Nature behaves, the harder it is to make a model that explains how even the simplest phenomena actually work. So theoretical physics has given up on that," by which I think he really means that he, himself, has given up on it -- which is sad, because his path integrals build such a clear bridge between quantum phase and classical action; they are bound to play a central role in the defeat of quantum mysticism. Also, I think your analogue computer, restricted to digital output, is an excellent metaphor! At least to first order. It reminds me of Anton Zeilinger's remark that "a photon is just a click in a photon detector." 181. Ain Soph, I think what you call "quantum mysticism" is just what nature is like. Why must She make sense? She is not like the Queen of England, she is like Lady Gaga: "I'm a freak bitch, baby!" About neutrons: yes, in an extreme case, inside a nucleus, neutrons are stable. But in the bound state they are exchanging with other nucleons, which is not a proper dodge regarding in-flight differences. You seem to admit that a real mechanism would mean we could make a batch of "five minute neutrons", but almost no one thinks we could. We can't even make a batch that has a bias, etc. That is absurd. The consistent Poisson distribution is "mystical", it is absurd. The alternative would be a ridiculous Rube Goldberg world where intricate arrangements were made to program each little apparently identical particle with a mechanism that could never be exposed, never tricked into revealing the contrivance by how we grouped the particles, how we made them, waiting them out, nothing. The universe can't do that. It's something to accept. Again, re the proposed information recovery experiment: I describe it in my blog. 182. Neil B: Tyrranogenius??!!?! Ree-hee-hee-ly... Ahem. Anyway... I took a look at your post about recovering information after decoherence, and I pretty much agree with most of what you wrote. But let's be clear about what this really implies about the nature of probability. This little thought experiment of yours clearly demonstrates my point, that there is nothing special or mysterious about quantum probabilities; they are nothing other than classical probabilities applied to things that have a phase.
There is a somewhat analogous experiment in statistical mechanics. One puts a drop of black ink in a viscous white fluid contained in the thin annular space between two transparent, rigid cylinders. Then one turns the outer cylinder relative to the inner one, and watches as the ink dot is smeared out around the circumference, becomes an increasingly diffuse grey region until it finally disappears completely. If the rotation is continued long enough, the distribution of ink can be made arbitrarily close to uniform, both circumferentially and longitudinally. Eventually, one concludes that entropy has increased to a maximum and the information about the original location of the ink drop has been irreversibly lost. However, if one then reverses the relative rotation of the cylinders, one can watch as the ink drop is reconstituted, returning to its original state exactly when the net relative rotation of the cylinders returns to zero. This works better with more viscous fluids, but only because that makes it easier to reverse the process. The ease of demonstrating the principle depends on the viscosity, but the principle itself does not. And the principle is this: information is never lost in a real process, but it can be transformed in ways that make it prohibitively difficult to recover. Of course, “prohibitively difficult” is in the eye of the beholder. It is not a statement about the system; it is a statement about the observer. If you come along after I’ve turned the cylinders, not knowing that they were turned, and I challenge you to find the ink droplet, you will measure the distribution of ink, find it so close to uniform that you declare the difference to be statistically insignificant, and conclude that my challenge is impossible, saying the information is irretrievably lost. That is, you will say that the mixture is thermalized. But then I say, no, this is not a mixed state at all; it is an entangled state. And to prove it, I turn the cylinders backwards until the ink drop reappears. Voila! So you see, the question is not, when is the information lost, but rather, at what point is recovering it more trouble than it’s worth? And the answer depends on what you know about the history of the situation. The moral of the story is, one man’s information is another man’s noise. There is no such thing as “true randomness.” And this is the real lesson to be learned from the whole messy subject of decoherence. 183. Neil B: I say, “if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters.” And you reply “you seem to admit that a real mechanism would mean we could make a batch of five minute neutrons.” No wonder you think other people’s posts are not carefully considered. You don’t pay attention to what they write. 184. Ain Soph, thanks for looking at my blog and getting the point about recovering information, even if we don't agree about the significance (REM, I say we can recover the original biased of amplitude differences, not specific events.) As for the name, well it's supposed to be cute and creative. Neutrons: but the statements are flip sides of the same point: Right, so if they weren't identical, and were deterministic (as you seem to think they must be, and "God only knows" what Bee really thinks IMHO but I'll leave her alone anymore), then we would be able to prepare a batch of "five minute neutrons." 
They would of course be a whole bunch that were the same as that portion, the portion that lasts five minutes, of a normal motley crew of varying lifetimes. Of course almost no one thinks we can do that, hence neutrons are likely identical, hence looking for a mechanism to break the mystical potential of events is likely hopeless. You need to think less one-dimensionally. 185. Hi Ain Soph, So I guess on the question of probabilities we find Feynman to have given them a physicality that just can't be justified. He also held similar notions as to the meaning of information and what that implied in terms of physical reality, as to what is to be considered as physically real and what isn't. I think what separates the way we each look at all of this is rooted in what our most basic biases are and in what forms our ontological centres. So when I say analogue rendering only digital results, I mean just that, with having to attach a separate and distinct entity to both, while you seem able to have only one thing stand as being both. To me this is reminiscent of when as a child I would get these puzzles where one connects the dots, where after tracing between the dots something would appear as a figure, such as a boy or girl's face, or some inanimate object. I find your way of looking at the world is just to see the dots, while the lines between are spaces having no meaning or consequence. However for me it is to have the figure as the place that, no matter where the dots are looked for and even when not found, still exists, as do the dots. Now as much as I hate to admit it, this is one bias that I fear each of us will never be able to discard, and as such for both of us the how and the why of the world will be looked for from two distinctly different perspectives. That said, I have no complaint about you being a Feynman fan, and you have come by it honestly, for this information (digital) perspective of reality can be attributed largely to him. To quote Mehra's biography of Feynman, 'The Beat of a Different Drum', under the heading '24.2 Information as a Physical Reality' (page 530), Feynman's thoughts on this in summation read: "This example has demonstrated that the information of the system contributes to its entropy and that information is a well-defined physical quantity, which enters into conservation laws." The thing is, I have no problem with this statement, other than to echo Bell's complaint, when this all-is-information view was proposed, in asking "information about what?" With the Feynman perspective, as with his diagrams, this information represented only what the correlated assembly (the group of dots) yields, without regard for what formed the cause of the correlations, as in his diagrams having those wavy lines in between assigned no physicality yet required all the same. So once again, for me it's not how it happens that physics can demonstrate so well why there be no hidden variables, but rather how it can even consider it a good beginning to deny what is made so evident to be deduced by reason of experiment. This is where I find the quantum mysticism to begin, for the same reason given by Einstein to Heisenberg when he explained: "...every theory in fact contains unobservable quantities. The principle of employing only observable quantities simply cannot be consistently carried out."
Anyway despite our biases I have to respect that you take your position seriously, as do I, yet I convinced that no matter what the outcome each would be more then grateful if an experiment could be devised which could have made clear which is simply wishful thinking and which is nature’s way of being. 186. Phil, REM that experimental proposal of mine that you've read (and correctly understood, as did Ain Soph.) If we can recover such information about input amplitudes after the phases are scrambled - and the direct optical calcuation, which has never been wrong, says we can - that is a game changer. The output from BS2 in my setup "should" be a "mixture" in QM, ie equivalent to whole photons randomly going out one face or the other. But if not, then the fundamentals have to be reworked and we can't use traditional DM or mixture language. I'm serious as a heart attack, it's not braggadocio but a clear logical consequence. (BTW anyone, that blog name is supposed to be cute and camp, not to worry.) 187. Remark to All: Many valid arguments have been presented over the years that should be, to use Neil’s phrase, “game changers.” But they’re not. Einstein, Schrödinger, Bohm and Bell, put together, were not able to counter the irrationality that was originated by the “Copenhagen Mafia” and continues to be aggressively promoted today. As we have strikingly demonstrated in this very thread, contemporary physics is hobbled by an inability to agree on the meaning of such basic terms as “reality,” “probability,” “random” and “quantum” -- just to name a few. Thus, endless semantic quibbling has been imported into physics and drowns out any vestige of substantive debate that could lead to real progress. Willis E. Lamb, awarded the 1955 Nobel Prize in Physics for discovering the Lamb shift, states categorically in a 1995 article [Appl. Phys. B, 60(2-3):77] that “there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists.” But there is quite some evidence to indicate that these errors are anything but accidental. In Disturbing the Memory, an unpublished manuscript written by Edwin T. Jaynes in 1984, he describes why he had to switch from doing a Ph.D. in quantum electrodynamics under J. R. Oppenheimer to doing one on group theoretic foundations of statistical mechanics under Eugene Wigner: “Mathematically, the Feynman electromagnetic propagator made no use of [QED’s] superfluous degrees of freedom; it was equally well a Green’s function for an unquantized EM field. So I wanted to reformulate electrodynamics from the ground up without using field quantization. ... If this meant standing in contradiction with the Copenhagen interpretation, so be it. ... But I sensed that Oppenheimer would never tolerate a grain of this; he would crush me like an eggshell if I dared to express a word of such subversive ideas. “Oppenheimer would never countenance any retreat from the Copenhagen position, of the kind advocated by Schrödinger and Einstein. He derived some great emotional satisfaction from just those elements of mysticism that Schrödinger and Einstein had deplored, and always wanted to make the world still more mystical, and less rational. ... Some have seen this as a fine humanist trait. I saw it increasingly as an anomaly -- a basically anti-scientific attitude in a person posing as a scientist.” Whether or not it started out that way, in the end the truth was of no importance in all of this, as exemplified by Oppenheimer’s remark (quoted by F. 
David Peat, on page 133 of Infinite Potential, his 1997 biography of David Bohm): “If we cannot disprove Bohm, then we must agree to ignore him.” There are other stories like this. Is this evidence of an innocent cognitive bias, or something more dangerous? 188. Ain Soph, I must admit to still being confused by your position, for on one hand you seem to agree with Lamb that there is no such thing as a photon and then have empathy for Bohm. This itself turns out to be completely opposite views ontologically which I would have thought you would have to first settle to move forward if only for yourself. So I would ask which is it to be Lamb or Bohm? 189. Phil: Actually, I think it would be a mistake to follow either too dogmatically. I’m not sure that the two of them are as incompatible as they seem at first sight, unless one insists on treating fields and particles identically. I can see the appeal in that, but it does cause a lot of problems. To be sure, the renormalization procedures of quantum electrodynamics can be made to yield impressive numerical accuracy, but this in itself does not validate the underlying physics: Ptolemaic epicycles can be made to reproduce planetary motions with arbitrary accuracy, even though the underlying model is essentially theological. For radiation, Jaynes notes that only emission and absorption can be shown unequivocally to be quantized, and that only two coefficients are required to completely specify each field mode. Lamb gave completely classical treatments of the laser and the Mössbauer effect, showing that neither photons nor phonons are strictly required. Jaynes also showed that the arguments claiming that the Lamb shift, stimulated emission and the Casimir effect prove the physical reality of the zero-point energy are circular; they assume that these things are quantum effects at the outset. For every effect that is commonly held to prove the physical reality of the zero-point energy, one can find an alternative classical derivation from electromagnetic back-reaction. So I have yet to see a valid argument that compels me to quantize the field. In regard to Bohm, I must say that I prefer the crisp clarity of his earlier work to his later dalliance with the mysticism of the implicate order. His demonstration, together with Aharonov, that the vector potential is a real physical entity, was masterful. And his pilot wave theory proves unequivocally that a hidden-variables theory is possible. But I am not ready to commit to any detailed interpretation of the pilot wave, primarily because of the treatment of the Dirac equation given by Hestenes, who takes the zitterbewegung to be physically real. From that starting point, one can not only construct models of the electron reminiscent of the non-radiating current distributions of Barut and Zanghi, but one can also recover the full U(1) x SU(2) x SU(3) symmetry of the standard model. In short, unless your interest in physics is motivated only by the desire to build gadgets, it would be a grave error to follow David Mermin’s curt injunction to “shut up and calculate.” 190. Ain Soph, I think you are blaming the wrong agents here! It's not the fault of scientists and philosophers trying to get a handle on the odd situation of quantum mechanics. Sure, many of them sure aren't doing the best they can - I rap in particular, the wretched circular argument and semantic slight of hand of advocates for decoherence as excuse for collapse or not seeing macro supers. 
No, the "fault" is not in (most of ;-) us but is in the stars: it's the universe just, really, being weird. It really doesn't make sense. Why should it? But yeah, maybe pilot waves can do something but I consider it a cheesy kludge. And even if it handles "particles" trajectories (uh, I'm still trying to imagine what funny kind of nugget a "photon" would be ... polarized light in 2 DOFs? Differing coherence lengths based on formation time? Hard for even the real Ain Soph to straighten out), what about neutron and muon decay and all that? As for my proposed experiment: as I said, its significance transcends interpretation squabbles. Nor is it in vein to previous paradoxes. It means getting more information out than was thought possible before. I say that is indeed a game changer 191. Neil: “... the universe just, really, being weird. It really doesn’t make sense.” That’s exactly what THEY want you to think! Seriously: that has got to be the most self-defeating bullsh!t I’ve ever heard. Reality cannot be inconsistent with itself. A is A. Of course it makes sense. We just haven’t figured it out yet. Quantum mechanics is no weirder than classical mechanics. The universe may very well be non-local, but simply cannot be acausal. If you learn nothing else from John Bell and David Bohm, learn that. 192. Reality cannot be inconsistent with itself. It isn't, I suppose in the circularly necessary and thus worthless sense - it's just inconsistent with what is conceptually convenient or sensible to us (or to being put into MUH.) A is A. Of course it makes sense. Randian QM? Just as unrealistic as for the human world. Quantum mechanics is no weirder than classical mechanics. Yes, it is. Even with concepts like PWs, what are we going to do about electron orbitals and shells, their not radiating unless perturbed - and then the process of radiation itself, tunneling and all that (and still, neutrons and muons etc which almost no one accepts as being a matter of outside diddling. What about the particles that don't last long enough to be diddled?) So I guess you think everything is determined, so we have to worry why each muon decayed at all those specific times etc. What a clunky mess, why not let it go? My reply is: It can be whatever it wants to be. I think, horribly to the usual suspects around here, that it's here first for a "big" reason like our existence, and only second to be logically nice. That may be mysticism but so is the idea the second purpose is uppermost. They were bright folks but ideological imposers. I want to say Bell should know better because of entangled properties (that are not supposed to be like "Bertlmann's socks" which are preselected ordinary properties) but maybe he thought their was a clever way to set it all up. But even if you imagine a pre-related pair of photons, the experimenter has to be dragged into the conspiracy too. Bob has to be forced to set a polarizer at some convenient angle, so he and Alice can get the same result. It's not enough for the photons to be eg "really at 20 degrees linear polarized" because if A & B use 35 degrees, they still get the same result as each other. Yet it can't be an inevitable result of the 15 degree difference either, since there is a pattern of yes and no - the correlation is what matters. If pilot waves can arrange all that, they might as well just be the real entities anyway. BTW your anonymity is your business, but if you drop a blog name etc. it might be worthwhile. 
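(To make the point about correlations concrete: for a polarization-entangled photon pair, the probability that Alice and Bob get the same outcome depends only on the relative angle between their polarizers, not on any pre-assigned "true" polarization of the photons. Here is a minimal sketch of that calculation, assuming Python with NumPy and the standard Bell state (|HH> + |VV>)/sqrt(2); none of this comes from the thread itself.)

```python
import numpy as np

def pass_projector(theta: float) -> np.ndarray:
    """Projector onto linear polarization at angle theta (radians)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Bell state (|HH> + |VV>)/sqrt(2) in the 4-dimensional two-photon space.
bell = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

def p_same(alpha_deg: float, beta_deg: float) -> float:
    """Probability Alice (polarizer at alpha) and Bob (at beta) get the same outcome."""
    a, b = np.radians(alpha_deg), np.radians(beta_deg)
    Pa, Pb = pass_projector(a), pass_projector(b)
    I = np.eye(2)
    both_pass  = bell @ np.kron(Pa, Pb) @ bell
    both_block = bell @ np.kron(I - Pa, I - Pb) @ bell
    return float(both_pass + both_block)

print(p_same(35, 35))   # ~1.0 -- same setting, perfect agreement, whatever the angle
print(p_same(20, 20))   # ~1.0
print(p_same(35, 50))   # cos^2(15 deg) ~ 0.93 -- only the relative angle matters
print(p_same(0, 15))    # same ~0.93
```

With identical settings the agreement is perfect at any common angle, and at a 15-degree offset it drops to cos^2(15°), about 0.93; a pre-set common polarization cannot reproduce that pattern across all pairs of settings, which is the content of Bell's argument.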
PS I've had a heck of trouble with Google tonight, are the Chinese really messing with them that much? 193. Causality and determinism are two different ideas. The world can be causal and non-deterministic. 194. This comment has been removed by the author. 195. Hi Ain Soph, Well that was certainly as nice way of dancing around the question and perhaps as such you feel you suffer being less biased and maybe rightfully so. In this respect I guess I’m not as fortunate as you, for I see the world as something that’s always moving from becoming to being, as to be driven there by potential. So no matter which way you care to express it, for me there must be something that stands for the source of potential and another that stands for its observed result and both must be physical in nature for them to be considered as real. The fact is nature has demonstrated to be biased, through things like symmetry, conservation and probability, with then having these biases manifest themselves consequentially as invariance, covariance, action principle and so on. The job of science is then by way of observation (experiment) to discover how nature is biased and then through the use of reason to consider how such biases must be necessary to find things the way that they are; or in other words why. However, if all that a scientist feels their job as being is to figure out the recipe for having things to be real, without seeing it required to ask why, that is their failing and not a bias mandated by science itself. This is the bias expressed first by Newton himself, which Bohr merely served to echo later that those like Descartes, Einstein and Bohm never did agree with. So I find in relation to science this to be the only bias that holds any significance in terms of its ultimate success. - Albert Einstein- September 1944[Born-Einstein Letters], 196. Arun said: "Causality and determinism are two different ideas. The world can be causal and non-deterministic." Very true. If event A happens, then either event B or C might happen. In which case event B or C would be caused by event A, but it would still be non-deterministic. Ain Soph: "Quantum mechanics is no weirder than classical mechanics." Well, It seems pretty weird to me! Do you have access to some information the rest of us don't have? 197. Andrew - good distinction about causality v. determinism. That's basically what I meant when disagreeing with Ain Soph, forgetting the difference. Hence, we can't IMHO explain the specifics of the outcomes. But in common use, "causality" is made to be about the timing itself, so people say "the decay was not caused to be at that specific time by some pre-existing process, or law (the "law" such as it is, applies only to the probability being X.) You would likely have an interest in my proposal to recover apparently lost amplitude information. It's couched in terms of disproving that decoherence solves macro collapse problems, but there is no need to agree with me about that particular angle. Getting the scrambled info back is significant in any case, and the expectation it couldn't be is orthodox and not a school debate. I've gotten some interest from e.g blogger quantummoxie, but indeed I need a diagram! 198. Neil: Reality may look strange when you can’t take the speed of light to be infinite, or neglect the quantum of action, or treat your densities as delta functions, and even stranger in the face of all three. But that’s not the same as not making sense. A is A... 
these days, you hear it in the form, "it is what it is." To deny it is to deny reason. But you just blow it off with a non sequitur. I guess that's what you have to do if you want to believe that reality makes no sense. In your reply to Andrew, you are back to pretending there is a difference between "completely unpredictable" and "truly random." Again, I guess you have to, otherwise you can't cling to the idea that reality makes no sense. By the way, thanks for the expression of interest, but I don't have a blog.
Postulate 1. The state of a quantum mechanical system is completely specified by a function $\Psi(\mathbf{r}, t)$ that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that $\Psi^*(\mathbf{r},t)\,\Psi(\mathbf{r},t)\,d\tau$ is the probability that the particle lies in the volume element $d\tau$ located at $\mathbf{r}$ at time $t$. The wavefunction must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition

$$\int_{-\infty}^{\infty} \Psi^*(\mathbf{r},t)\,\Psi(\mathbf{r},t)\,d\tau = 1 .$$

It is customary to normalize particle wavefunctions to 1. The wavefunction must be single-valued, continuous, and finite.

Postulate 2. To every observable in classical mechanics there corresponds a linear, Hermitian operator in quantum mechanics. If we require that the expectation value of an operator $\hat{A}$ is real, then $\hat{A}$ must be a Hermitian operator.

Postulate 3. In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$ associated with that operator, which satisfy the eigenvalue equation

$$\hat{A}\,\Psi = a\,\Psi .$$

This postulate captures the central point of quantum mechanics--the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of $\hat{A}$ with eigenvalue $a$, then any measurement of the quantity $A$ will yield $a$. Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of $\hat{A}$ initially. An arbitrary state can be expanded in the complete set of eigenvectors of $\hat{A}$ (where $\hat{A}\,\Psi_i = a_i\,\Psi_i$) as

$$\Psi = \sum_{i} c_i\,\Psi_i ,$$

where the summation may be infinite. In this case we only know that the measurement of $A$ will yield one of the eigenvalues of $\hat{A}$, but we don't know which one. However, we do know the probability that eigenvalue $a_i$ will occur--it is the absolute value squared of the coefficient, $|c_i|^2$. A consequence is that, after a measurement of $\Psi$ yields some eigenvalue $a_i$, the wavefunction immediately "collapses" into the corresponding eigenstate $\Psi_i$ (or, in the case that $a_i$ is degenerate, so has more than one corresponding eigenvector, $\Psi$ becomes the projection of $\Psi$ onto the degenerate subspace). Thus, measurement affects the state of the system. This fact is used in many elaborate experimental tests of quantum mechanics.

Postulate 4. If a system is in a state described by a normalized wave function $\Psi$, then the average value of the observable corresponding to $\hat{A}$ is given by

$$\langle A \rangle = \int_{-\infty}^{\infty} \Psi^*\,\hat{A}\,\Psi\, d\tau .$$

Postulate 5. The wavefunction or state function of a system evolves in time according to the time-dependent Schrödinger equation

$$\hat{H}\,\Psi(\mathbf{r},t) = i\hbar\,\frac{\partial \Psi}{\partial t} .$$

Postulate 6. The total wavefunction must be antisymmetric with respect to the interchange of all coordinates of one fermion with those of another. Electronic spin must be included in this set of coordinates. The Pauli exclusion principle is a direct result of this antisymmetry principle.
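To make Postulates 3 and 4 concrete, here is a minimal numerical sketch (Python with NumPy assumed; the particular 2x2 operator and state vector are illustrative choices, not anything specified above): expand a normalized state in the eigenvectors of a Hermitian operator, read off the Born probabilities $|c_i|^2$, and check that the expectation value agrees with $\sum_i |c_i|^2 a_i$.

```python
import numpy as np

# An illustrative Hermitian operator (observable) and a normalized state vector.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])          # Hermitian: A == A.conj().T
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # <psi|psi> = 1

# Postulate 3: the only possible measurement results are the eigenvalues a_i.
a, V = np.linalg.eigh(A)                   # columns of V are orthonormal eigenvectors

# Expansion coefficients c_i = <v_i|psi>; |c_i|^2 are the outcome probabilities.
c = V.conj().T @ psi
probs = np.abs(c) ** 2
print("eigenvalues:   ", a)
print("probabilities: ", probs, " (sum =", probs.sum(), ")")

# Postulate 4: the expectation value <A> = <psi|A|psi> equals sum_i |c_i|^2 a_i.
expectation = np.real(psi.conj() @ A @ psi)
print("<A> directly:  ", expectation)
print("<A> from probs:", probs @ a)
```

The two printed expectation values agree, and the probabilities sum to 1, which is the bookkeeping behind Postulates 3 and 4 for a finite-dimensional example.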
Saturday, July 26, 2014 Continuing Human Evolution: Fallacies and Prospects Where are we going, and where do we want to go, genetically speaking? Of all the politically correct stances, the genetic unimprovability of humanity is perhaps the most inviolable. Even while we work daily, even feverishly, to improve other aspects of our material and cultural existence, our biology remains an ethical red zone, where nothing can be done and no infringement placed on individual replication. As a response to the abuse of eugenics in the 20th century, and to the deep philosophical problems involved, this is understandable. But it does not do justice to the underlying science. Two recent books have raised the issue again, as does our growing knowledge of biology generally. Marlene Zuk, in her book "Paleofantasy: what evolution really tells us about sex, diet, and how we live" explains that we have been evolving all along, up to the present day. (Review.) To think that we are no more than rock-throwing homonids trapped in cubicles and talking to our computers is a bit inaccurate. We have many issues related to insufficient evolutionary adaptation, such as back problems, diet issues with the modern fare of fats, salts, and sweets. But we have also adapted successfully to much else on  a recent time scale, like high altitude, diets of milk, and malaria. Perhaps also to reading, sewing, and music. But this only scratches the surface. There are very likely many other, more complex traits that have been honed and reshaped over the millennia of recent human evolution. The technology to detect all this in our genomes is only slowly arriving. The implications of this are clear enough. Humans have been evolving and adapting to many conditions up to the present day, and doing so differently in different areas of the world, leading to the evident differences among human beings. Humans are not created equal, and the question arises whether differences extend to psychological and behavioral traits as well. The other book is "A Farewell to Alms", by Gregory Clark. He puts a very provocative hypothesis about the success of English society over the last millenium. England ran roughshod over the rest of the world during its imperial heyday, authored the industrial revolution, and basically did away with the Malthusian conundrum of human reproduction continually outstripping food production. What happened? Clark conjectures a genetic hypothesis, that over hundreds of years, the rich in England consistently had more children than the poor, creating a constant diffusion of behavioral traits of competence downward through the social hierarchy. As the linked review makes clear, Clark is an economist, not a geneticist. The genetics of such an hypothesis are entirely unknown. All we have to go on are the many observations of family traits and twin studies, which indicate strongly that many psychological / behavioral traits are heritable to a large degree. Once one accepts that, one can easily see that, whatever one's view of the "competence" and other bourgeois values of the rich and successful of England, humans all over the world have been evolving along some kind of path in these dimensions, continuously. Human nature is not set or unalterable, just as it is highly diverse. The two go hand in hand, as diversity + time + unequal reproduction = evolution. (A third book by Thomas Suddendorf, on what separates us from apes in the biological sense, is relevant as well.) 
For example, not all students gain equally from an education; not all people learn music equally well, or play sports equally well. Bach, Beethoven, and Mozart each came from families of professional musicians. We are not created equal, other than in self-granted political and spiritual terms. The long fixation of the left on creating social progress solely through social policy and equalization, however well-intentioned, is not a fully realistic program, and in the long term, can have some dark consequences. On the other hand, the fixation of the right on making our economic competition as brutal as possible is not only cruel, but in genetic terms largely pointless, since in the moderately compassionate developed world, economic success is untethered from, indeed inverse to, reproductive success in broad terms. Every society has a status system by which some behaviors or aspects are honored, while others are dishonored, to the point of death. Clark describes a society that, in his view, aligned its status system with reproductive success, thus building whatever those status-valued traits were into future populations. Assortive mating among high-status individuals created fierce competition for status, as did the consequences of failure: misery in addition to no reproduction. I am reading the tales of King Arthur currently, and that society fits the same template. Economic success was the direct fruit of political success, especially favor of the king, and was closely tied to the winning of damsels, estates, and reproductive success. We do not take our own status system quite so seriously, as higher status individuals tend to reproduce less than those of lower status. One problem with Clark's thesis is that what he found in England is very likely the rule in all societies of that stage and earlier, where the rich lord it over the poor in countless ways, including those having to do with reproduction. So it is quite difficult to use his hypothesis to explain the particular and unique competitive strengths of the British empire, though it can help explain in very broad terms the rise and fall of hierarchical cultures generally. He has to add the additional hypothesis that the nature of status and enrichment in medieval England was uniquely selective for competence of some imperial nature rather than, say, corruption, martial ruthlessness, or courtly obseqiousness. That is a significantly more difficult case to make. The genetic hypothesis may be thought to compete with another one, outlined by Jared Diamond in "Guns, Germs, and Steel". This posits that the great density, fertility, and species richness of Asia over the other continents like Africa and the Americas gave its humans special advantages that sped the development of agriculture, urban cultures, state formation, metal technology, as well as the epidemic diseases that overwhelmed proto-colonial subjects / competitiors. It was happenstance, in short, not genetics. I think it is fair to say that both hypotheses can easily coexist, since the genetic hypothesis of human (eugenic) evolution through linkage of status to reproduction does not necessarily say anything about races or colonial competitions. It is a process that has been universal over all pre-modern cultures, operating more or less in parallel everywhere. Every society has its social and reproductive hierarchy, which succeeding generations embody in genetic terms. 
One can also note that eugenic policies remain quite commonplace in the world today, principally in the form of religious competition to reproduce. Catholicism tries to extend its faith by high reproductive rates, bans on birth control, etc., as does Mormonism. Islam allows the socially destructive practice of polygamy, allowing high-status males many more wives and children than low-status males. They clearly take their status system very seriously. In China, in contrast, the state has enforced a one-child policy, now somewhat loosened, which has kept the status quo, genetically speaking, though the advent of Yao Ming out of the Chinese basketball program is a good example of assortative mating still at work. And Latin America is notoriously, if inadvertently, staging a "reconquista" of the US, through immigration and high birth rates. So evolution is happening, and the future of humanity will look different from its past. The question is whether religious & accidental eugenics are the only acceptable kind, or whether other forms are worth contemplating. Before going into details, I'll note that the urgency of this issue is extremely low. Climate heating is a far, far more urgent threat to our future happiness. The whole issue may also be moot in the face of technological development. Genetic engineering may eventually allow detailed and insightful reprogramming of our genomes, with far greater practical and ethical implications than tinkering with reproduction policies. And after the robots take over or we upload ourselves to the cloud, well ... then the biological evolution of humanity will be a quaint memory. The degree of effort needed for a conscious reproduction policy is also very small. Selection pressures in the single digits of percent or below can have strong effects over long periods of time; a rough calculation below illustrates this. The point of such a policy would simply be to link whatever we deem valuable about ourselves as humans to our future state. Should we take our ambient status system more seriously? I don't think so; quite the opposite, in fact. The psychopathy of the most successful people in finance and business is legendary. Money, in this complicated age, is simply not a good metric of human worth or of the ability to create a happy, moral, and prosperous future for humanity. Education is a bit more systematic and fair as a criterion, but also tends to reinforce inherited status as much as reveal individual merit. What other criterion would serve, and what mechanism could be used? Ethically and philosophically as well, it is extremely difficult to come up with any criterion that gives the state the power to say that one person is of greater existential value than another, however much we do so day in and day out in our economic, social, military, penal, and sexual lives. Additionally, lacking the implicit competitive mechanisms of economic, political and social success linking to reproduction, the state would need to step in explicitly. There is, however, an innate contradiction in the democratic state, founded on legal and existential equality, getting into the explicit business of judging and making us unequal. Yet ... I met a lady recently who lives in her car, and who told me she has five children, none of whom can take care of her, being as impulsive and improvident as she is, with several cases of drug abuse among them. It is heartbreaking, and a little disturbing. A policy that nudges us communally in a more positive direction would be beneficial in the long run.
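To put a rough number on the claim that single-digit selection pressures add up, here is a minimal back-of-the-envelope sketch (Python is assumed, the textbook one-locus haploid selection recursion is used, and the starting frequency, selection coefficient, and generation count are illustrative assumptions rather than data from any study):

```python
# Deterministic one-locus haploid selection: each generation the favored
# variant's frequency p is updated as p' = p * (1 + s) / (1 + p * s).
def frequency_after(p0: float, s: float, generations: int) -> float:
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

# Illustrative numbers: a 1% reproductive advantage (s = 0.01), a variant starting
# at 1% frequency, and ~10,000 years at ~28 years per generation (~350 generations).
p_final = frequency_after(p0=0.01, s=0.01, generations=350)
print(f"frequency after 350 generations: {p_final:.2f}")   # roughly 0.25
```

On these assumptions a variant carrying a one percent advantage goes from rare to roughly a quarter of the population within historical, not geological, time, which is the sense in which very modest pressures can reshape a trait distribution.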
I think the key issue is the vastly different perceived costs of having children between high and low status groups in developed countries. At the high end, each child implies hundreds of thousands of dollars in education, college, sports, enrichment, tech gadgets, etc. etc. On the low end, the expense of an extra mouth to feed is negligible, and may even be covered by the government. When expectations are low, so are costs. One solution is to create a more socially supportive environment by making education free, subsidizing child care, and making medical care free. Indeed, I would abolish private schools and make all schooling on the same public level; government run at the pre-college level, and non-profit in higher levels. This would at least level the playing field in family (cost) planning. A second policy would be to guarantee work to everyone. Much teen pregnancy is the result of aimlessness and hopelessness, which could be ameliorated by integrating young people into a universal expectation of work and usefulness. Everyone should be trained if not college educated, and the government should offer jobs, on the pattern of the Depression-era jobs programs, to anyone not competitive enough to be recruited by the private labor market, but not disabled. Work at decent pay (substantially above welfare levels) should be a right, completing the safety net in a way that encourages responsibility and long-term life planning. These two types of social policy would help relieve the frightening cost of having children for those with middle-class expectations, while encouraging everyone to engage in family planning and personal development that might go some distance to leveling the fertility playing field. Saturday, July 19, 2014 Quantum Consciousness? The demarcation problem in science meets New Age tech talk. What is science and what is not science? The difference is not terribly clear, an issue called the demarcation problem. Is theology a science of the supernatural realm? Is psi research on extrasensory perception science, as it uses scientific methods? Is string theory in physics science, even though its chances of empirical validation seem rather slim? It isn't very clear. Science tends to be whatever scientists do and view as valid in their expert communities. Whacky ideas may migrate in from fringe areas, (atoms, endosymbiosis, plate tectonics, ulcer-causing bacteria), turning from non-science into science once evidence appears. Conversely, long-hallowed ideas within science may turn out to be complete rubbish, like space-ethers, geocentrism, and the medical humors. There is fringe - mainstream traffic, though it tends to be rather light these days, since mainstream scientists generally know what they are doing. Unfortunately the fringe areas are enormous, populated by people highly motivated to push pet theories that tend to have some psychological motivator. Psi research is a good example, which responds to our hopeful magical thinking that somehow, some way, even though those darn materialists don't have a drop of imagination in their brains, humans can indeed sense the emotions of others far away, levitate objects, detect water through dousing rods, and see behind playing cards. At least a little, right? The scientific fringe is part of a broader cultural miasma of misinformation, from Fox news to Herbalife to Koch political subversion to mundane political campaigns and commercial advertisements. 
We live in a flurry of BS coming at us from all directions, and typically, following the motivation and the funding source is a critical tool to gauge the truthiness of claims. Russia's shameless campaign of lying about Ukraine is perhaps the moment's most egregious and deadly example. So science is far from alone in living in a perilous epistemological swamp. It just tries to do a better job by way of disinterested institutions, public practices, empirical adjudication, and all the other standards that come under the so-called scientific method. Can we deploy such methods on interesting topics, or are they intrinsically confined to uninteresting ones? The mother of all demarcation nightmares has been creationism. Otherwise known as creation science, or intelligent design. The motivation is obvious: support traditional intuitions (and some scriptural readings) to deny that humans are animals. Credentialed scientists have been deployed, glossy textbooks written, museums established, articles and books written, evidence cherry-picked, school boards subverted, all to push a theory that the scientific community dimissed many decades ago. But given enough science-y paraphernalia, they could make a decent case, at least in the popular media, that they were engaging in science. A spineless political system was reduced to mouthing the mantra that schools should "teach the controversy". Thankfully that controversy has died down in recent years, and the professional community feels less threatened by cultural bulldozing. Nevertheless, the needle has hardly budged in the population at large, of which 42% believe in creationism outright, and 31% more believe that evolution was guided by god, which is pretty much the opposite of the whole point of evolutionary theory as currently understood. And abroad, the Islamic world is almost uniformly creationist. It is a testament to the strength of psychological intuitions and archetypes, as well as media echo chambers. Some recent discussions have gotten me interested in another area of motivated science, which is quantum consciousness. Here, it is obvious that our intuition (and a great deal of theology) militates against a materialist view of the brain and mind. The mind-body "problem" has been perennial fodder for philosophy. Could our minds be the subjective product of nerve firings in our brains? No way! Despite the rather obvious empirical parameters that show just that, intuitionally-driven models have always looked elsewhere, invoking souls of various sorts, which typically have the added bonus of immortality, another intuited archetype. The latest version of this is the movement among philosophers to posit a cosmic consciousness (Nagel, Chalmers), which in hand-waving fashion hypothesizes that somehow, consciousness is a basic property of the cosmos, with Jain-ist particles of consciousness in every object, implying that such things as plants, and even rocks, may be conscious. It seems like a total surrender to obfuscation and mysticism, descending from the grandiose premise that, because they have been unable to figure out how it all works, no one else can either. Einstein may be able to get away with such foundational cosmic speculations, but even for him, it took more than handwaving about how no one could explain the speed of light. One science-y form of this is quantum consciousness, where the mystery of consciousness is creatively linked to the very hard-science-y paradoxes of quantum mechanics to come up with .. 
something again quite vague, but the idea is that since quantum entanglement can allow instant communication of a sort at great distances, and perform outrageous computations, that this resolves those amazing capacities of our minds. Quantum mechanics has been drafted into numerous pseudoscience fields of this sort, actually. The specific example of this field that is most advanced, in its quotient of science-y tech-talk and academic paraphernalia, is the Orch-OR theories propounded by Stuart Hameroff and Roger Penrose. Penrose is a Sir, and an eminent physicist and mathematician. Hameroff is a professor of anesthesiology and psychology at the University of Arizona. Their output of papers has been prodigious, and they host an annual conference on the topic, funded by Deepak Chopra's foundations among other interested parties. They are not charlatans, really, but I think they have totally lost the thread in this case. They present a magisterial review of their own theory in 2014. Penrose starts off by laying the premise of their case- that due to Kurt Gödels' work, the human ability to be certain about things is mathematically impossible, which necessitates a non-conventional solution to consciousness. "Critical of the viewpoint of ‘strong artificial intelligence’ (‘strong AI’), according to which all mental processes are entirely computational, both books [by Penrose] argued, by appealing to Gödel's theorem and other considerations, that certain aspects of human consciousness, such as understanding, must be beyond the scope of any computational system, i.e. ‘non-computable’. ...  The non-computable ingredient required for human consciousness and understanding, Penrose suggested, would have to lie in an area where our current physical theories are fundamentally incomplete, though of important relevance to the scales that are pertinent to the operation of our brains."  "As shown by Gödel's theorem, Penrose described how the mental quality of ‘understanding’ cannot be encapsulated by any computational system and must derive from some ‘non-computable’ effect. Moreover, the neurocomputational approach to volition, where algorithmic computation completely determines all thought processes, appears to preclude any possibility for independent causal agency, or free will. Something else is needed. What non-computable factor may occur in the brain?" Well, the fact is that humans are not that certain about things. Religions may be, but that is an emotional, not a formal, issue. We operate by Bayesian statistics, where new evidence alters our beliefs, which are always tentative and evolving as we gain experience, at least for those who are empirically engaged at all. We are not operating from a tight set of axioms, per the Gödellian system, which we transcend to understand novel or paradoxical truths in some inexplicable way (it only seems that way on LSD!). So this premise seems rather nonsensical, and the whole project starts off on a very sour note. Not only that, but the authors then go on to propose a solution (with quantum qubits migrating in microtubules) that, first, is physically impossible in the brain, and second, doesn't evade Gödel's theory anyhow, being just another form of computation. In Gödel's terms, we are very incomplete systems, whether quantum or not, but seem to get by despite that. Hameroff's part is to focus on microtubules, which he has identified as the locus of consciousness by way of his studies of anesthesia. 
In mainstream science, microtubules are cytoskeletal structures that play a central role in orchestrating mitosis and cell shape, and serve as roads for the transport of cargo, which is particularly relevant in neurons, where the distance between the cell nucleus / body and its far projections can be measured in feet. These cells need constant traffic of cargoes over the microtubule network to maintain function. His proposal is that general anesthetics work by destabilizing microtubules in the brain, or at least their quantum computations. This is itself, apart from its implications for quantum consciousness, a fringe hypothesis. Current thinking in the field is very focused on ion channels and neurotransmitter receptors as the targets, though it has been difficult to pin down the specifics. General anesthetics tend to be membrane-soluble, which leads to hypotheses about their having very broad effects on membranes (not a strong theory on its own anymore) or on proteins embedded in membranes, which would naturally bind to hydrophobic chemicals as they do to membrane lipids. It doesn't help one fringe hypothesis to be dependent on another one like this, for even if consciousness is not solved soon, the target of anesthesia is likely to be, by normal progress in the mainstream of neuroscience / molecular biology. One mainstream review states: "Anesthetics are pharmacological agents that target specific central nervous system receptors. Once they bind to their brain receptors, anesthetics modulate remote brain areas and end up interfering with global neuronal networks, leading to a controlled and reversible loss of consciousness." It is worth noting that Penrose and Hameroff's review is extensively referenced, with citations to some work that shows, for instance, that microtubules can bind anesthetics. But this was done at such high concentrations, and found among so many other proteins that also bind, that it looks like clutching at straws. They even resort to a little bit of lying, towards the end where they enumerate predictions of their theory: "Actions of psychoactive drugs, including antidepressants, involve neuronal microtubules. This [prediction] indeed appears to be the case. Fluoxitene (Prozac) acts through microtubules [167]; anesthetics also act through MTs [86]." The anesthetic cited here is anthracene, which is more a poison and general chemical than an anesthetic. It is not used in medicine at all. There are plenty of chemicals that will knock out frogs (which were the subjects here) without telling us much about anesthetics as a specific class. The Prozac reference is highly problematic as well, since Prozac is called an SSRI for a reason. It binds to and inhibits serotonin uptake pumps, and that is thought to be its primary mode of action. If it binds (again, at very high concentrations) to microtubules as cited, that would be a side effect, not the primary mode of action. Additionally, if microtubule dynamics are altered to some degree by this drug, why do all the other SSRIs with different structures work? The only thing they have in common is their binding and inhibition of the serotonin transporter. This kind of highly selective, indeed misleading, citing is a big red flag, to add to the red flag of psychological motivation. At the core of the vast enterprise is the proposition that somehow gravitation, quantum mechanics, and microtubules hosting qubits impinge on their host neurons and help their computations escape the Gödellian trap ...
and simultaneously constitute atoms of consciousness:

"The Orch-OR [orchestrated objective reduction] scheme adopts DP [Diósi–Penrose objective reduction, which is a version of a quantum gravity theory] as a physical proposal, but it goes further than this by attempting to relate this particular version of OR to the phenomenon of consciousness. Accordingly, the ‘choice’ involved in any quantum state-reduction process would be accompanied by a (miniscule) proto-element of experience, which we refer to as a moment of proto-consciousness, but we do not necessarily refer to this as actual consciousness for reasons to be described."

So a choice made by qubits in this scheme is instantaneous, solving the timing issues that make free will impossible in a normal materialist theory. It also reflects the Copenhagen interpretation of quantum mechanics, where an observer must be invoked to collapse (reduce) the wave function of quantum entities like electrons. The tiny observers apparently add up, in the end, to what we experience as consciousness. One problem, among very many, is that this sets up another mind-body conundrum. In the religious soul theories, the soul is immaterial, so it is hard to explain how it receives perceptions from the brain and injects its decisions back into that brain, which is at least acknowledged as the conduit for human behavior and sensation, if not its computational processor. Some interface is required, like the pineal gland in the system of Descartes. But in such an interface, how are physical atoms moved by immaterial, supernatural entities? There is no easy way to deal with this, other than waving it away with assertions of pan-soul-ism, where there is no localized interface, and the soul pervades everything in some magical way. With the microtubules, the authors claim that they might communicate with each other across the brain via gap junctions, which are small portals leading directly from one cell to another. But not only does normal nerve conduction show little effect from these junctions, indicating that they are typically not highly connected with other cells, but microtubules from one cell do not enter other cells through such junctions (they stop at the border), so there really can't be a direct network. So the authors back up and say that the microtubules might affect their host nerve function, which then makes the whole theory nearly pointless, since a mere potentiation of normal nerve function gets us back into normal neurobiology and whatever that can accomplish in generating consciousness.

"The most logical strategic site for coherent microtubule Orch OR and consciousness is in post-synaptic dendrites and soma (in which microtubules are uniquely arrayed and stabilized) during integration phases in integrate-and-fire brain neurons. Synaptic inputs could ‘orchestrate’ tubulin states governed by quantum dipoles, leading to tubulin superposition in vast numbers of microtubules all involved quantum-coherently together in a large-scale quantum state, where entanglement and quantum computation takes place during integration. The termination, by OR, of this orchestrated quantum computation at the end of integration phases would select microtubule states which could then influence and regulate axonal firings, thus controlling conscious behavior."

One might also note in passing that the superposition of vast numbers of coherent entangled quantum entities in the brain is judged impossible by experts in the relevant fields.
They have been laboring mightily to set up qubit computers in vacuums near absolute zero with handfuls of electrons. The idea that this could be done easily on a massive scale in the liquid, warm brain would cause some surprise and shock. In the end, despite the intense New Age interest in this kind of speculation, and its extensive scholarly apparatus, it is at the far-out fringe of brain studies. At a regular neuroscience conference, Hameroff attends, but the issue of quantum consciousness is nowhere else in sight. A physicist comes with a stray poster that also invokes quantum computation, but the session devoted to mechanisms of consciousness is cleanly and clearly mainstream. They are not interested. In demarcation terms, Hameroff and colleagues have academic positions and publish their thoughts, but these are not fruitful thoughts, as they use heavily cherry-picked data for support, and sponsor no evident empirical progress in their program, which thus remains an edifice of rather wild speculation. I am knowledgeable, but not an expert, and to me, it looks like a big snow job more than a serious scientific theory, from premises through the elaborate contents, to the conclusions. At its heart, there is a -magic happens here- kind of quality to the invocations of quantum effects that are supposed to solve non-problems like free will, or significant problems like subjective consciousness that are best left as single problems rather than compounded with significant mysteries from radically separate fields like quantum gravity. There is also an unwillingness to recognize the great deal of mainstream work that undermines the theory. For instance, consciousness is quite well timed in its occurrence relative to other brain events like perception and willed action. There is no reason to demand instantaneous action / computation when it is well known that consciousness trails perceptions by hundreds of milliseconds, and also trails various types of reflex actions and even the opening phenomena of willed actions by similar amounts. It has a function of global integration and monitoring, rest assured. But intuition is, as usual, a poor guide to what is really going on. Another issue is the localization of consciousness. Is your liver conscious? Are your toe nails? I don't think so, which speaks to the plausibility of cosmic consciousness theories implying the consciousness of rocks, plants, etc. Indeed, most processes in the brain are unconscious. Yet all neurons have microtubules in profusion, indeed all cells do, so theories connecting their cosmic capabilities with consciousness turn on their specific arrangement or augmentation, which ends up little better, indeed far worse, than mainstream theories about the arrangement, connectivity, and other properties of nerve cells whose relationship to thought is rather more plausible. Saturday, July 12, 2014 Chaperones Step in to Increase Evolution's Promiscuity Mutations are usually bad, but can be made less bad with some protein folding help. Proteins are the central mechanics of life, catalyzing the reactions, lifting the loads, purifying the fluids, and expressing the genes. They are also the most frequent targets of damaging mutations in their encoding genes. Some of these mutations happen outside the amino acid coding regions, in regulatory areas, but the most dramatic ones are typically within the coding region, changing the amino acid sequence. 
Such mutations can not only change the function of a protein, but can also change its stability- its ability to fold and stay folded. Random polypeptides typically do not fold in any coherent way, in contrast to biologically evolved proteins, so the ability to fold tightly is weakened by most mutations. Here is where saviors come into the picture- the chaperones, which are proteins specialized to help other proteins fold. The most massive of these feature a large cave whose internal surface can switch, with consumption of ATP, between hydrophobic and hydrophilic surfaces. It is like sticking your protein inside a jiffy-pop dome that can alternately encourage unfolding and folding in a protected, isolated space, until the protein gets it all together. Remember that the insides of proteins tend to be hydrophobic, and the outsides tend to be hydrophilic, which helps to direct the folding path in normal proteins, in addition to detailed spatial and electrostatic relationships among the amino acids.

[Figure: structural aspects of one chaperone, GroES/GroEL, within which other proteins (purple & yellow, in d) can fold in isolation. As shown in C, the interior can switch, using ATP, between hydrophobic and hydrophilic surfaces, encouraging unfolding and folding, respectively.]

Thus even without knowing how the protein is supposed to fold, this chaperone can, given enough time and energy, bias the folding pathway towards folding (a toy calculation illustrating this appears below). It also seems to be able to detect that it contains a folded protein, prompting opening of the chamber. Besides stability, solubility is another danger. Proteins that are not sufficiently hydrophilic on their outsides tend to glom together and create scrambled clots in cells, like we find in brains with dementia. Keeping proteins from aggregating is another job for chaperone proteins, which can pull such proteins apart, and / or tag them for degradation altogether. A recent paper describes how chaperones can help to buffer organisms from the effects of modestly deleterious mutations, allowing a more diverse landscape of mutations in a population than otherwise possible. This is not a new idea, but the researchers try to put a more quantitative spin on it, and look in novel places for the effects chaperones have.

"For instance, enhancing chaperone capacity through over-expression has been directly shown to promote enzyme evolvability. Chaperones have been found to act both as genotypic and phenotypic capacitors."

Some of the most interesting bits of work concern protein interaction networks. Over the years, biologists have found that the cell full of proteins is a little like the internet, with lots of interactions between various proteins. There are some very dense sub-networks with a few hub proteins that have many partners, and then more peripheral proteins. These hub proteins tend to have a bit more disorder in their protein structure, perhaps because their many interactions require a bit of hydrophobic exposed area for each one (or else a floppy region that hides the interaction area prior to the partner's appearance). So these proteins are particularly dependent on chaperone assistance in folding:

"Upon deletion of [chaperone] SSB, 50% of the hub proteins, but less than 10% of the non-hubs aggregate immediately."

Hub proteins, by the authors' analysis, tend to be more frequent "clients" of chaperones such as Hsp90 and Hsp70, and have more disordered regions in their structure.
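To make the "given enough time and energy" point concrete, here is a toy calculation (my illustration, not from the post or the paper): if each encapsulation-and-release cycle gives a structurally marginal protein some small chance p of reaching its folded state, then after n independent cycles the chance of success is

P_{folded}(n) = 1 - (1 - p)^n

With p = 0.05, for example, roughly 20 cycles already give better than a 60% chance of folding, and roughly 90 cycles better than 99%, at the cost of one round of ATP consumption per cycle. The real kinetics are more complicated, but this is the sense in which blind repetition, with no knowledge of the target structure, can bias the outcome towards folding.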
They also do an interesting survey of the degree of rewiring of such protein networks between the distantly related yeast species, S. cerevisiae and S. pombe, to set up a metric of which proteins have maintained pretty much the same function, vs those whose partnerships have changed the most (i.e. become "rewired"). They then use two metrics of evolutionary change within protein sequences: first, the non-synonymous vs synonymous mutation rate, and second, a new conservative vs non-conservative change metric that they devise for mutations that are non-synonymous. This allows them to find that while highly rewired proteins tend to have lower than average non-synonymous mutation rates, befitting their key status in their networks (i.e. increased purifying selection), they also have higher than average non-conservative amino acid mutations, perhaps as a sign of positive selection at key spots, plus, of course, the assistance that chaperones provide to enable structurally marginal mutants to survive.

"[Chaperone] Hsp90 was found to promote protein evolutionary rates in strong substrates [dependent on Hsp90 assistance for folding] when assessed by dN or ω [metrics of degree of change in protein sequences]."

Such analyses can help us dive ever deeper into the details of evolutionary history, given the vast resources of the genome sequences now available at every turn. In this case, it emphasizes that while each protein, indeed each nucleotide position, is an individual case, we can make some crude generalizations from sequence to function and back again. For the chaperones, it is evident that they are most important for difficult proteins, whether by intrinsic design in the interaction vs independent folding tradeoff, or temporarily after a damaging mutation, which may be resolved back to the original state eventually, or to some new state with new significance for the organism.

• Confessions of a FOX youth.
• Krugman on the prudential argument for higher interest rates: "No, what the BIS is arguing is that there is some other appropriate rate, defined as a rate sufficiently high to discourage bubbles, and that central banks should target this rate even though it is above the Wicksellian natural rate – or, equivalently, that the economy should be kept permanently depressed in order to curb the irrational exuberance of investors."
• Much current international conflict revolves around energy.
• Summers on our policeman role.
• Ponzis are for pikers.. real frauds get into banking.
• Hard money for old people.
• The labor market should have been frictionless and efficient by now.. what happened?
• Walgreens to move to Switzerland, sort of.
• This week in the WSJ; nothing like the pot calling the kettle black: "Our political system is adept at making use of people like Mr. Steyer. Democrats will gladly spend his $100 million, then go back to their real environmental business, which is green cronyism."
• Economic graph of the week ... how real & potential GDP has declined through our recession, for no reason other than gross macroeconomic mismanagement.

Saturday, July 5, 2014
Give a Guy a Hammer ...
Mathematician Max Tegmark thinks the fundamental reality is math. A review of Our Mathematical Universe.

The unknown seems to drive us into conniptions, whether one's habit of thought is theology, science, or formal philosophy. The idea that the fundamental reality of our cosmos might be inexplicable is as foreign to the most advanced scientist as it was to the earliest shaman. So there we are.
Physicists are knocking their heads against several walls such as dark energy, the proper interpretation of quantum mechanics, the union of quantum mechanics and relativity / gravity, and of course, the origin of the universe. They have virtually run out of experimental options, the colliders having become as super as they are realistically going to get. What now? One can sense this fix from recent years of the magazine Scientific American, which runs ever more fanciful articles about the nature of the universe under the heading of physics. Speculation is running rampant, and the field seems to be gradually leaving the orbit of reason. What is time? What is space? Quantum foam, strings, etc. All worthy questions, but far too speculative and sketchy to be fed to lay readers. A recent entrant in the cosmic speculation derby is Max Tegmark, with a book about how the universe is all a big mathematical structure. It is an excellent book in most respects, very readable and fair on the known science. It is even sensible in its pontifical denouement on social policy. He has the most sterling credentials as an MIT physics professor, cosmologist, and protégé of John Wheeler. I should add that I am no expert in the least respect here, so I am just offering an educated lay perspective on the book and its ideas, as presented. There are excellent aspects also to his cosmological speculations. For instance, he develops a helpful hierarchy of multiverse categories, this being a book largely about multiverses:

Level 1 multiverse: This is the notion that inflation during the big bang gave rise not only to the region of space we can see, but to much more. How much more? Hard to say, but it could be rather enormous, all within the product of the big bang we date to ~13.8 billion years ago.

Level 2 multiverse: Here the additional notion is added that inflation, the key process that we know of from the big bang, could have been a continuous process, not just producing our universe, but many, indeed an infinite number, of others in a process that is still going on. It adds the idea that these others might have different basic physics- different constants, symmetries, etc. Why this would be so comes down to the unboundedness of our current theories of what might have gone on. So why not everything possible?

Level 3 multiverse: Hugh Everett came up with an interpretation of quantum mechanics that contradicts the Copenhagen interpretation, and posits that the wave function governed by the Schrödinger equation never "collapses". It just spawns other realities where events we think occur randomly actually occur in all possibilities, each in its own reality. This does not imply the multiplication of mass and energy into these other universes, but the superposition of an infinity of different possibilities in the mathematical space of quantum mechanics- the Hilbert space- of which we see only one sample at any moment. So it all looks the same as the Copenhagen collapse interpretation.

Level 4 multiverse: This is Tegmark's special theory, where not only does the level 2 multiverse generate an infinity of universes with different laws from some originating ur-structure, but even the most basic mathematical structure- his ultimate reality- can differ, generating alternate inflation (or non-inflation) regimes of every possible type. Indeed, he speculates that every computable mathematical structure exists and generates its own universe.

To be brief, I can easily understand the level 1 multiverse, and don't have a big problem with the level 3 multiverse of quantum mechanics.
The others are a different story. Level 2 seems a cop-out, interpreting a lack of knowledge and specification about the universe as a permissive free-for-all where everything possible occurs. The premise is, as Tegmark notes, that our universe has about 32 numbers from which physicists can, in principle, calculate all its physical aspects (not counting the pending conundrums of dark energy and dark matter, among others). And the values of these numbers are, of course, quite important. Any little change here or there would blow us to smithereens. So how did they get set up? There are three basic approaches. The traditional way was to say god did it, end of story. A slightly more updated version is to look into the matter scientifically and keep hunting for simplifying and unifying theories, especially using mathematics. This has been the job of physics for several centuries, and seems to have arrived at a sizeable set of irreducible particles and forces, but can't seem to break through to a universal theory. The most modern way is to say that all the possibilities occur in all possible universes, of which there are an infinity, and we find ourselves, naturally, only in the one that lets glorious us happen. Ergo, the level 2 multiverse. What is the prospect of yet more simplifying and unifying insights into the universe(s)? I have no idea. But the multiverse hypotheses seem to give up prematurely, and to what end? Even with a virtual infinity of universes, the chance that we get one that has 32 numbers, some possibly irrational, and thus almost impossible to get precisely right, ranging over countless orders of magnitude, still seems slim. So I am still rooting for a unifying explanation rather than a ramifying one whose sense is saved only by the anthropic principle. And that is really what we are talking about at this point- a rooting interest in where scientific speculation heads, since no evidence to date decides among these possibilities, and evidence may never do so.

Now we get to the weirdest part of the book- the level 4 multiverse, or Tegmark's theory that reality, at its base, is math, not just that it is described by math. And that all possible mathematical structures give rise to their own multi-multiversi, etc., ad infinitum. This is all more than a little fanciful. And his arguments, forming the core of the book and the armature around which so much else is built, are surprisingly weak. The beginning premise is that external reality exists, separate from us, and even separate from us as observers. This is not at all hard to accept. After all, the universe had to roil and moil for quite some time before we were here to observe it, so the people who posit reality as a figment of our imaginations, or quantum-wise demand observation as the requirement of reality, do not have much to stand on. Then Tegmark goes on with the rest of his argument, which I abridge:

"If we assume that reality exists independently of humans, then for a description to be complete, it must also be well-defined according to nonhuman entities- aliens or supercomputers, say- that lack any understanding of human concepts."

"This means that it [a master theory of everything] must contain no concepts at all! In other words, it must be a purely mathematical theory, with no explanations or 'postulates' as in quantum textbooks ..."

"Taken together, this implies the Mathematical Universe Hypothesis, i.e.
that the external physical reality described by the ToE [theory of everything] is a mathematical structure."

"This means that our physical world not only is described by mathematics, but that it is mathematical (a mathematical structure), making us self-aware parts of a giant mathematical object. A mathematical structure is an abstract set of entities with relations between them. The entities have no "baggage": they have no properties whatsoever except those relations."

There, in a nutshell, is his argument. Note the sleight of hand of getting from a description of reality to the reality itself. He explains himself later on:

"I'm writing is rather than corresponds to here, because if two structures are equivalent, then there's no meaningful sense in which they are not one and the same, as emphasized by Israeli philosopher Marius Cohen."

I can't say that this is convincing, at least to one untutored in the arts. One can also ask whether the starting premise makes any sense. Why must a universe be describable by any entities at all, human or non-human? It could just exist in some way and for some reason we cannot understand or describe. The assumption is that there is a theory of everything, which I would certainly like to see. But I don't think it is a given that such a thing exists, let alone that it needs to have the describability property Tegmark claims for it. It could just as well be undescribable, and filled with the relatively arbitrary properties we actually see. The one thing such a theory must be is consistent enough internally to produce a reality that has the symmetries and durable properties ("laws", constants, etc.) that we see in our versions of physics. And that, of course, is why mathematics is such a useful tool in physics, not because rocks are equations, but because our reality has, necessarily, the kinds of structures and consistencies that we can use mathematics to describe. The ultimate theory may end up being a beautifully simple equation one can write on a T-shirt (as Tegmark dreams), but we don't know that yet, and it is very hard to see how that could be, with so many simple mathematical structures already known and tested in this respect. Are strings simple? Probably not. And why one would want to theorize our reality as being a math structure ... that is admittedly beyond me. Tegmark claims that, among other benefits, this gets rid of an infinite regress issue, as we look for ever more fundamental particles and principles. (Though we have reached an end in particle terms, not being able to divide the electrons and quarks any further.) Having the most fundamental "one" be a total abstraction, and indeed every possible total abstraction in his level 4 multiverse, buys finality at the cost of nonsensicality, little better than the turtles or deities of yore. Specifically, it is Platonism revived, thinking that what is in our minds (where math is, exclusively) is the fabric of the universe, not its map. Indeed, one suspects in the end that this book is another edition in the old-as-humanity tradition of seeing the origins of the cosmos in the mirror.

• The supremes are losing their minds. Hobby Lobby will live in infamy.
• Can Muslim companies mess with their employees' healthcare and personal lives too?
• The tortured reasoning of turning money into "free" speech.
• Voters vote for climate action. How does money vote?
• Bob Cringely: Bitcoins have come in from the cold.
• Sectarianism, insurrection and theocracy ... not just somewhere else.
• What money does to our minds.
• On being a disposable worker at Walmart.
• Bill Mitchell on European economic policy: groupthink followed by fiasco.
• Jobs and the US economy.. have the green shoots finally arrived? We could have been here far, far sooner.
• This week in the WSJ: "The more you help unemployed people, the more unemployed people you'll have."
• Economic quote of the week, from Joe Stiglitz:
Erwin Schrödinger Austrian physicist Erwin Schrödinger, (born August 12, 1887, Vienna, Austria—died January 4, 1961, Vienna), Austrian theoretical physicist who contributed to the wave theory of matter and to other fundamentals of quantum mechanics. He shared the 1933 Nobel Prize for Physics with British physicist P.A.M. Dirac. Schrödinger entered the University of Vienna in 1906 and obtained his doctorate in 1910, upon which he accepted a research post at the university’s Second Physics Institute. He saw military service in World War I and then went to the University of Zürich in 1921, where he remained for the next six years. There, in a six-month period in 1926, at the age of 39, a remarkably late age for original work by theoretical physicists, he produced the papers that gave the foundations of quantum wave mechanics. In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton’s equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger’s equation, unlike the solutions to Newton’s equations, are wave functions that can only be related to the probable occurrence of physical events. The definite and readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create. His most famous objection was the 1935 thought experiment that later became known as Schrödinger’s cat. A cat is locked in a steel box with a small amount of a radioactive substance such that after one hour there is an equal probability of one atom either decaying or not decaying. If the atom decays, a device smashes a vial of poisonous gas, killing the cat. However, until the box is opened and the atom’s wave function collapses, the atom’s wave function is in a superposition of two states: decay and non-decay. Thus, the cat is in a superposition of two states: alive and dead. Schrödinger thought this outcome “quite ridiculous,” and when and how the fate of the cat is determined has been a subject of much debate among physicists. In 1927 Schrödinger accepted an invitation to succeed Max Planck, the inventor of the quantum hypothesis, at the University of Berlin, and he joined an extremely distinguished faculty that included Albert Einstein. He remained at the university until 1933, at which time he reached the decision that he could no longer live in a country in which the persecution of Jews had become a national policy. He then began a seven-year odyssey that took him to Austria, Great Britain, Belgium, the Pontifical Academy of Science in Rome, and—finally in 1940—the Dublin Institute for Advanced Studies, founded under the influence of Premier Eamon de Valera, who had been a mathematician before turning to politics. Schrödinger remained in Ireland for the next 15 years, doing research both in physics and in the philosophy and history of science. 
During this period he wrote What Is Life? (1944), an attempt to show how quantum physics can be used to explain the stability of genetic structure. Although much of what Schrödinger had to say in this book has been modified and amplified by later developments in molecular biology, his book remains one of the most useful and profound introductions to the subject. In 1956 Schrödinger retired and returned to Vienna as professor emeritus at the university. Of all the physicists of his generation, Schrödinger stands out because of his extraordinary intellectual versatility. He was at home in the philosophy and literature of all the Western languages, and his popular scientific writing in English, which he had learned as a child, is among the best of its kind. His study of ancient Greek science and philosophy, summarized in his Nature and the Greeks (1954), gave him both an admiration for the Greek invention of the scientific view of the world and a skepticism toward the relevance of science as a unique tool with which to unravel the ultimate mysteries of human existence. Schrödinger's own metaphysical outlook, as expressed in his last book, Meine Weltansicht (1961; My View of the World), closely paralleled the mysticism of the Vedanta. Because of his exceptional gifts, Schrödinger was able in the course of his life to make significant contributions to nearly all branches of science and philosophy, an almost unique accomplishment at a time when the trend was toward increasing technical specialization in these disciplines.
Jeremy Bernstein, The Editors of Encyclopaedia Britannica
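For reference, and as an addition of my own rather than part of the Britannica entry: the "basic equation of quantum mechanics" described above is, in its general time-dependent form, usually written as

i\hbar \frac{\partial}{\partial t} \Psi = \hat{H} \Psi

where \Psi is the wave function of the system and \hat{H} is the Hamiltonian (total energy) operator. The solutions \Psi are the wave functions mentioned in the article, whose squared magnitude gives the probabilities of physical events.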
Monday, March 5, 2018
Thoughts on Jaynes's Breakdown of the Bicameral Mind
The book that Jaynes wrote

Tuesday, February 13, 2018
Sapience and non-sapience

DOCTOR:  I knew a Galactic Federation once, lots of different lifeforms, so they appointed a justice machine to administer the law.
ROMANA:  What happened?
DOCTOR:  They found the Federation in contempt of court and blew up the entire galaxy.
The Stones of Blood, Doctor Who, 1978.

The biggest systemic threat at the moment to the future of civilization, I submit, is that we will design out of it the most important information-processing asset we have: ourselves. Sapient beings. Granted, there is a lot of bad stuff going on in the world right now; I put this threat first because coping with other problems tends to depend on civilization's collective wisdom. That is, we're much less likely to get into trouble by successfully endowing our creations with sapience, than by our non-sapient creations leaching the sapience out of us. I'm not just talking about AIs, though that's a hot topic for discussion lately; our non-sapient creations include, for a few examples, corporations (remember Mitt Romney saying "corporations are people"?), bureaucracy (cf. Franz Kafka), AIs, big data analysis, restrictive user interfaces, and totalitarian governments.

I'm not saying AI isn't powerful, or useful. I'm certainly not suggesting human beings are all brilliant and wise — although one might argue that stupidity is something only a sapient being can achieve. Computers can't be stupid. They can do stupid things, but they don't produce the stupidity, merely conduct and amplify it. Including, of course, amplifying the consequences of assigning sapient tasks to non-sapient devices such as computers. Stupidity, especially by people in positions of power, is indeed a major threat in the world; but as a practical matter, much stupidity comes down to not thinking rationally, thus failing to tap the potential of our own sapience. Technological creations are by no means the only thing discouraging us from rational thought; but even in (for example) the case of religious "blind faith", technological creations can make things worse.

To be clear, when I say "collective wisdom", I don't just mean addressing externals like global climate change; I also mean addressing us. One of our technological creations is a global economic infrastructure that shapes most collective decisions about how the world is to run ("money makes the world go 'round"). We have some degree of control over how that infrastructure works, but limited control and also limited understanding of it; at some point I hope to blog about how that infrastructure does and can work; but the salient point for the current post is, if we want to survive as a species, we would do well to understand what human beings contribute to the global infrastructure. Solving the global economic conundrum is clearly beyond the scope of this post, but it seems that this post is a preliminary thereto.

I've mentioned before on this blog the contrast between sapience and non-sapience. Here I mean to explore the contrast, and interplay, between them more closely. Notably, populations of sapient beings have group dynamics fundamentally different from — and, seemingly, far more efficacious from an evolutionary standpoint than — the group dynamics of non-sapient constructs. Not only am I unconvinced that modern science can create sapience, I don't think we can even measure it.
Contents:
The sorcerer's apprentice
Lies, damned lies, and statistics
Pro-sapient tech
Storytelling and social upheaval

We seem to have talked ourselves into an inferiority complex. Broadly, I see three major trends contributing to this.

For one thing, advocates of science since Darwin, in attempting to articulate for a popular audience the profound implications of Darwinian theory, have emphasized the power of "blind" evolution, and in doing so they've tended to describe it in decision-making terms, rather as if it were thinking. Evolution thinks about the ways it changes species over time in the same sense that weather thinks about eroding a mountain, which is to say, not at all. Religious thinkers have tended to ascribe some divine specialness to human beings, and even scientific thinkers have shown a tendency, until relatively recently, to portray evolution as culminating in humanity; but in favoring objective observation over mysticism, science advocates have been pushed (even if despite themselves) into downplaying human specialness. Moreover, science advocates in emphasizing evolution have also played into a strong and ancient religious tradition that views parts/aspects of nature, and Nature herself, as sapient (cf. my past remarks on oral society).

Meanwhile, in the capitalist structure of the world we've created, people are strongly motivated to devise ways to do things with technology, and strongly motivated to make strong claims about what they can do with it. There is no obvious capitalist motive for them to suggest technology might be inferior to people for some purposes, let alone for them to actually go out and look for advantages of not using technology for some things. Certainly our technology can do things with algorithms and vast quantities of data that clearly could not be done by an unaided human mind. So we've accumulated both evidence and claims for the power of technology, and neither for the power of the human mind.

The third major trend I see is more insidious. Following the scientific methods of objectivity highly recommended by their success in studying the natural world, we tried to objectively measure our intelligence; it seemed like a good idea at the time. And how do you objectively measure it? The means that comes to mind is to identify a standard, well-defined, structured task that requires intelligence (in some sense of the word), and test how well we do that task. It's just a matter of finding the right task to test for... right? No, it's not. The reason is appallingly simple. If a task really is well-defined and structured, we can in principle build technology to do it. It's when the task isn't well-defined and structured that a sapient mind is wanted. For quite a while this wasn't a problem. Alan Turing proposed a test for whether a computer could "think" that it seemed no computer would be passing any time soon; computers were nowhere near image recognition; computers were hilariously bad at natural-language translation; computers couldn't play chess on the level of human masters.
The most obvious way computers can do automatic translation well is if we train people to constrain their thoughts to patterns that computers don't have a problem with; which seemingly amounts to training people to avoid sapient thought.  (Training people to avoid sapient thought is, historically, characteristic of demagogues.)  Image processing is still a tough nut to crack, though we're making progress.  But chess has certainly been technologized.  It figures that would be the first-technologized of those tasks I've mentioned because it's the most well-defined and structured of them.  When it happened, I didn't take it as a sign that computers were becoming sapient, but rather a demonstration that chess doesn't strictly require whatever-it-is that distinguishes sapience.  I wasn't impressed by Go, either.  I wondered about computer Jeopardy!; but on reflection, that too is a highly structured problem, with no more penalty for a completely nonsensical wrong answer than for a plausible wrong one.  I'm not suggesting these aren't all impressive technological achievements; I'm suggesting the very objectivity of these measures hides the missing element in them — understanding. Recently in a discussion I read, someone described modern advances in AI by saying computers are getting 'better and better at understanding the world' (or nearly those words), and I thought, understanding is just what they aren't doing.  It seems to me the technology is doing what it's always done — getting better and better at solving classes of problems without understanding them.  The idea that the technology understands anything at all seems to me to be an extraordinary claim, therefore requiring extraordinary proof which I do not see forthcoming since, as remarked, we expect to be unable to test it by means of the most obvious sort of experiment (a structured aptitude test).  If someone wants to contend that the opposite claim I'm making is also extraordinary — the claim that we understand in a sense the technology does not — I'll tentatively allow that resolving the question in either direction may require extraordinary proof; but I maintain there are things we need to do in case I'm right. Somebody, I maintain, has to bring a big-picture perspective to bear.  To understand, in order to choose the goals of what our technology is set to do, in order to choose the structural paradigm for the problem, in order to judge when the technology is actually solving the problem and when the situation falls outside the paradigm.  In order to improvise what to do when the situation does fall outside the paradigm.  That somebody has to be sapient. For those skeptics who may wonder (keeping in mind I'm all for skepticism, myself) whether there is an unfalsifiable claim lurking here somewhere, note that we are not universally prohibited from observing the gap between sapience and non-sapience.  The difficulty is with one means of observation:  a very large and important class of experiments are predictably incapable of measuring, or even detecting, the gap.  The reason this does not imply unfalsifiability is that scientific inquiry isn't limited to that particular class of experiments, large and important though the class is; the range of scientific inquiry doesn't have specific formally-defined boundaries — because it's an activity of sapient minds. The gap is at least suggested by the aforementioned difficulty of automatic translation.  
What's missing in automatic translation is understanding:  by its nature automatic translation treats texts for translation as strings to be manipulated, rather than indications about the reality in which their author is embedded.  Whatever is missed by automatic translation because it is manipulating strings without thinking about their meaning, that is a manifestation of the sapience/non-sapience gap.  Presumably, with enough work one could continue to improve automatic translators; any particular failure of translation can always be fixed, just as any standardized test can be technologized.  How small the automatic-translation shortfall can be made in practice, remains to be seen; but the shape of the shortfall should always be that of an automated system doing a technical manipulation that reveals absence of comprehension. Consider fly-by-wire airplanes, which I mentioned in a previous post.  What happens when a fly-by-wire airplane encounters a situation outside the parameters of the fly-by-wire system?  It turns control over to the human pilots.  Who often don't realize, for a few critical moments (if those moments weren't critical, we wouldn't be talking about them, and quite likely the fly-by-wire system would not have bailed) that the fly-by-wire system has stopped flying the plane for them; and they have to orient themselves to the situation; and they've mostly been getting practice at letting the fly-by-wire system do things for them.  And then when this stacked-deck of a situation leads to a horrible outcome, there are strong psychological, political, and economic incentives to conclude that it was human error; after all, the humans were in control at the denouement, right?  It seems pretty clear to me that, of the possible ways that one could try to divvy up tasks between technology and humans, the model currently used by fly-by-wire airplanes (and now, one suspects, drive-by-wire cars) is a poor model, dividing tasks for the convenience of whoever is providing the automation rather than for the synergism of the human/non-human ensemble.  It doesn't look as if we know how to design such systems for synergism of the ensemble; and it's not immediately clear that there's any economic incentive for us to figure it out.  Occasionally, of course, something that seems unprofitable has economic potential that's only waiting for somebody to figure out how to exploit it; if there is such potential here, we may need first to understand the information-processing characteristics of sapience better.  Meanwhile, I suggest, there is a massive penalty, on a civilization-wide scale (which is outside the province of ordinary economics), if we fail to figure out how to design our technology to nurture sapience.  It should be possible to nurture sapience without first knowing how it works, or even exactly what it does — though figuring out how to nurture it may bring us closer to those other things. I'll remark other facets of the inferiority-complex effect, as they arise in discussion, below. By the time I'm writing this post, I've moved further along a path of thought I mentioned in my first contentful post on this blog.  I wrote then that in Dawkins's original description of memetics, he made an understandable mistake by saying that memetic life was "still in its infancy, still drifting clumsily about in its primeval soup".  
That much I'm quite satisfied with:  it was a mistake — memetic evolution has apparently proceeded about three to five orders of magnitude faster than genetic evolution, and has been well beyond primeval soup for millennia, perhaps tens of millennia — and it was an understandable mistake, at that.  I have more to say now, though, about the origins of the mistake.  I wrote that memetic organisms are hard to recognize because you can't observe them directly, as their primary form is abstract rather than physical; and that's true as far as it goes; but there's also something deeper going on.  Dawkins is a geneticist, and in describing necessary conditions under which replication gives rise to evolution, he assumed it would always require the sort of conditions that genetic replication needs to produce evolution.  In particular, he appears to have assumed there must be a mechanism that copies a basic representation of information with fantastically high fidelity. Now, this is a tricky point.  I'm okay with the idea that extreme-fidelity basic replication is necessary for genetic evolution.  It seems logically cogent that something would have to be replicated with extreme fidelity to support evolution-in-general (such as memetic evolution).  But I see no reason this extreme-fidelity replication would have to occur in the basic representation.  There's no apparent reason we must be able to pin down at all just what is being replicated with extreme fidelity, nor must we be able to identify a mechanism for extreme-fidelity copying.  If we stipulate that evolution implies something is being extreme-fidelity-copied, and we see that evolution is taking place, we can infer that some extreme-fidelity copying is taking place; but evolution works by exploiting what happens with indifference to why it happens.  We might find that underlying material is being copied wildly unfaithfully, yet somehow, beyond our ability to follow the connections, this copying preserves some inarticulable abstract property that leads to an observable evolutionary outcome.  Evolution would exploit the abstract property with complete indifference to our inability to isolate it. It appears that in the case of genetic evolution, we have identified a basic extreme-fidelity copying mechanism.  In fact, apparently it even has an error-detection-and-correction mechanism built into it; which certainly seems solid confirmation that such extreme fidelity was direly needed for genetic evolution or such a sophisticated mechanism would never have developed.  Yet there appears to be nothing remotely like that for memetic replication.  If memetic evolution really had the same sort of dynamics as genetic evolution, we would indeed expect memetic life to be "still drifting clumsily about in its primeval soup"; it couldn't possibly do better than that until it had developed a super-high-fidelity low-level replicating mechanism. Yet memetic evolution proceeds at, comparatively, break-neck pace, in spectacular defiance of the expectation.  Therefore we may suppose that the dynamics of memetic evolution are altered by some factor profoundly different from genetic evolution. I suggest the key altering factor of memetic evolution, overturning the dynamics of genetic evolution, is that the basic elements of the host medium — people, rather than chemicals — are sapient.  
What this implies is that, while memetic replication involves obviously-low-fidelity copying of explicitly represented information, the individual hosts are thinking about the content, processing it through the lens of their big-picture sapient perspective.  Apparently, this can result in an information flow with abstract fixpoints — things that get copied with extreme fidelity — that can't be readily mapped onto the explicit representation (e.g., what is said/written).  My sense of this situation is that if it is even useful to explicitly posit the existence of discrete "memes" in memetic evolution, it might yet be appropriate to treat them as unknown quantities rather than pouring effort into trying to identify them individually.  It seems possible the wholesale discreteness assumption may be unhelpful as well — though ideas don't seem like a continuous fluid in the usual simple sense, either. This particular observation of the sapient/non-sapient gap is from an unusual angle.  When trying to build an AI, we're likely to think in terms of what makes an individual entity sapient; likewise when defining sapience.  The group dynamics of populations of sapients versus non-sapients probably won't (at a guess) help us in any direct way to build or measure sapience; but it does offer a striking view of the existence of a sapience/non-sapience gap.  I've remarked before that groups of people get less sapient at scale; a population of sapiences is not itself sapient; but it appears that, when building a system, mixing in sapient components can produce systemic properties that aren't attainable with uniformly non-sapient components, thus attesting that the two kinds of components do have different properties. This evolutionary property of networks of sapiences affords yet another opportunity to underestimate sapience itself.  Seeing that populations of humans can accumulate tremendous knowledge over time — and recognizing that no individual can hope to achieve great feats of intellect without learning from, and interacting with, such a scholastic tradition — and given the various motives, discussed above, for downplaying human specialness — it may be tempting to suppose that sapience is not, after all, a property of individuals.  However, cogito, ergo that's taking the idea of collective intelligence to an absurdity.  The evolutionary property of memetics I've described is not merely a property of how the network is set up; if it were, genetic evolution ought to have struck on it at some point. There are, broadly, three idealized models (at least three) of how a self-directing system can develop.  There's "blind evolution", which explores alternatives by maintaining a large population with different individuals blundering down different paths simultaneously, and if the population is big enough, the variety amongst individuals is broad enough, and the viable paths are close enough to blunder into, enough individuals will succeed well enough that the population evolves rather than going extinct.  This strategy isn't applicable to a single systemic decision, as with the now-topical issue of global climate change:  there's no opportunity for different individuals to live in different global climates, so there's no opportunity for individuals who make better choices to survive better than individuals who make poorer choices.  
As a second model, there's a system directed by a sapience; the individual sapient mind who runs the show can plan, devising possible strategies and weighing their possible consequences before choosing.  It is also subject to all the weaknesses and fallibilities of individuals — including plain old corruption (which, we're reminded, power causes).  The third model is a large population of sapiences, evolving memetically — and that's different again.  I don't pretend to fully grok the dynamics of that third model, and I think it's safe to say no-one else does either; we're all learning about it in real time as history unfolds, struggling with different ways of arranging societies (governmentally, economically, what have you). A key weakness of the third model is that it only applies under fragile conditions; in particular, the conditions may be deliberately disrupted, at least in the short term; keeping in mind we're dealing with a population of sapiences each potentially deliberate.  When systemic bias or small controlling population interferes with the homogeneity of the sapient population, the model breaks down and the system control loses — at least, partly loses — its memetic dynamics.  This is a vulnerability shared in common by the systems of democracy and capitalism. The sorcerer's apprentice There are, of course, more-than-adequate ways for us to get into trouble by succeeding in giving our technology sapience.  A particularly straightforward one is that we give it sapience and it decides it doesn't want to do what we want it to.  In science fiction this scenario may be accompanied by a premise that the created sapience is smarter than we are — although, looking around at history, there seems a dearth of evidence that smart people end up running the show.  Even if they're only about as smart, and stupid, as we are, an influx of artificial sapiences into the general pool of sapience in civilization is likely to throw off the balance of the pool as a whole — either deliberately or, more likely, inadvertently.  One has only to ask whether sapient AIs should have the right to vote to see a tangle of moral, ethical, and practical problems cascading forth (with vote rigging on one side, slavery on the other; not forgetting that, spreading opaque fog over the whole, we have no clue how to test for sapience).  However, I see no particular reason to think we're close to giving our technology sapience; I have doubts we're even trying to do so, since I doubt we know where that target actually is, making it impossible for us to aim for it (though mistaking something else for the target is another opportunity for trouble).  Even if we could eventually get ourselves into trouble by giving our technology sapience, we might not last long enough to do so because we get ourselves into trouble sooner by the non-sapient-technology route.  So, back to non-sapience. A major theme in non-sapient information processing is algorithms:  rigidly specified instructions for how to proceed.  An archetypal cautionary tale about what goes wrong with algorithms is The Sorcerer's Apprentice, an illustration (amongst other possible interpretations) of what happens when a rigid formula is followed without sapient oversight of when the formula itself ceases to be appropriate due to big-picture perspective.  One might argue that this characteristic rigidity is an inherently non-sapient limitation of algorithms. 
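A toy sketch of that rigidity (my illustration, not the author's; the names and numbers are invented):

# The apprentice's algorithm knows only its formula: keep hauling water
# until the cauldron reports "full". Anything outside that formula, such
# as the workshop flooding, is invisible to it.
def fetch_water(cauldron_is_full, max_buckets=1000):
    buckets = 0
    floor_water_cm = 0
    while not cauldron_is_full():   # the only condition the rule consults
        buckets += 1
        floor_water_cm += 1         # a side effect the rule never examines
        if buckets >= max_buckets:  # the bail-out is itself just another rigid rule
            break
    return buckets, floor_water_cm

# With a broken "full" sensor, the algorithm hauls its maximum quota and
# never notices the flood.
print(fetch_water(lambda: False))   # -> (1000, 1000)

Nothing here is mysterious; the point is only that the judgement "this rule no longer fits the situation" lives outside the rule, which is where the discussion above locates sapience.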
It's not an accident that error-handling is among the great unresolved mysteries of programming-language design — algorithms being neither well-suited to determine when things have gone wrong, nor well-suited to cope with the mess when they do. Algorithmic rigidity is what makes bureaucracy something to complain about — blind adherence to rules even when they don't make sense in the context where they occur, evoking the metaphor of being tied up in red tape.  The evident dehumanizing effect of bureaucracy is that it eliminates discretion to take advantage of understanding arbitrary aspects of big picture; it seems that to afford full scope to sapience, maximizing its potential, one wants to provide arbitrary flexibility — freedom — avoiding limitation to discrete choices. A bureaucratic system can give lip service to "giving people more choices" by adding on additional rules, but this is not a route to the sort of innate freedom that empowers the potential of sapience.  To the contrary:  sapient minds are ultimately less able to cope with vast networks of complicated rules than technological creations such as computers — or corporations, or governments — are, and consequently, institutions such as corporations and governments naturally evolve vast networks of complicated rules as a strategy for asserting control over sapiences.  There are a variety of ways to describe this.  One might say that an institution, because it is a non-sapient entity in a sea of sapient minds, is more likely to survive if it has some property that limits sapient minds so they're less likely to overwhelm it.  A more cynical way to say the same thing is that the institution survives better if it finds a way to prevent people from thinking.  A stereotypical liberal conspiracy theorist might say "they" strangle "us" with complicated rules to keep us down — which, if you think about it, is yet another way of saying the same thing (other than the usual incautious assumption of conspiracy theorists, that the behavior must be a deliberate plot by individual sapiences rather than an evolved survival strategy of memetic organisms).  Some people are far better at handling complexity than others, but even the greatest of our complexity tolerances are trivial compared to those of our non-sapient creations.  Part of my point here is that I don't think that's somehow a "flaw" in us, but rather part of the inherent operational characteristics of sapience that shape the way it ought to be most effectively applied. Lies, damned lies, and statistics A second major theme in non-sapient information processing is "big data".  Where algorithms contrast with sapience in logical strategy, big data contrasts in sheer volume of raw data. These two dimensions — logical strategy and data scale — are evidently related.  Algorithms can be applied directly to arbitrarily-large-scale data; sapience cannot, which is why big data is the province of non-sapient technology.  I suggested in an earlier post that the device of sapience only works at a certain range of scales, and that the sizes of both our short- and our long-term memories may be, to some extent, essential consequences of sapience rather than accidental consequences of evolution.  Not everyone tops out at the same scale of raw data, of course; some people can take in a lot more, or a lot less, than others before they need to impose some structure on it.  
Interestingly, this is pretty clearly not some sort of "magnitude" of sapience, as there have been acknowledged geniuses, of different styles, toward both ends of the spectrum; examples that come to mind:  Leonhard Euler (with a spectacular memory) and Albert Einstein (notoriously absent-minded).

That we sapiences can "make sense" of raw data, imposing structure on it and thereby coping with masses of data far beyond our ability to handle in raw form, would seem to be part of the essence of what it means to be sapient.  The attendant limitation on raw data processing would then be a technical property of the Platonic realm in broadly the same sense as fundamental constants like π, e, etc., and distant kin to such properties of the physical realm as the conditions necessary for nuclear fusion.

Sometimes, we can make sense of vast data sets, many orders of magnitude beyond our native capacity, by leveraging technological capacity to process more-or-less-arbitrarily large volumes of raw data and boil it down algorithmically, to a scale/form within our scope.  It should be clear that the success of the enterprise depends on how insightfully we direct the technology on how to boil down the data; essentially, we have to intuit what sorts of analysis will give us the right sorts of information to gain insight into the salient features of the data.  We're then at the short end of a data-mining lever; the bigger the data mine, the trickier it is to reason out how to direct the technological part of the operation.  It's also possible to deliberately choose an analysis that will give us the answer we want, rather than helping us learn about reality.  And thus are born the twin phenomena of misuse of statistics and abuse of statistics.

There may be a temptation to apply technology to the problem of deciding how to mine the data.  That —it should be clear on reflection— is an illusion.  The technology is just as devoid of sapient insight when we apply it to the meta-analysis as when we apply it to the analysis directly; and the potential for miscues is yet larger, since technology working at the meta-level is in a position to make more biasing errors through lack of judgement.

One might be tempted to think of conceptualization, the process by which we impose concepts on raw data to structure and thus make sense of it, as "both cause and cure" of our limited capacity to process raw data; but this would, imo, be a mistake of orientation.  Conceptualization — which seems to be the basic functional manifestation of sapience — may cause the limited-capacity problem, and it may also be the "cure", i.e., the means by which we cope with the problem, but neither of those is the point of conceptualization/sapience.  As discussed, sapience differs from non-sapient information processing in ways that don't obviously fit on any sort of spectrum.  Consider:  logically, our inability to directly grok big data can't be a "failure" unless one makes a value judgement that that particular ability is something we should be able to do — and making a value judgement is something that can only be meaningfully ascribed to a sapience.

It's also rather common to imagine the possibility of a sapience of a different order, capable of processing vast (perhaps even arbitrarily vast) quantities of data.  This can result from —as noted earlier— portraying evolution as if it were a sapient process.
It may result from an extrapolation based on the existence of some people with higher raw-data tolerances than others; but this treats "intelligence" as an ordering correlated with raw data processing capacity — which, as I've noted above, it is not.  Human sapiences toward the upper end of raw data processing capacity don't appear to be "more sapient"; rather, it's more like they're striking a different balance of parameters.  Different strengths and weaknesses occur at different mixtures of the parameters, and this seems to me characteristic of an effect (sapience) that can only occur under a limited range of conditions, with the effect breaking down in different ways depending on which boundary of the range is crossed.  Alternatively, it has sometimes been suggested there should be some sort of fundamentally different kind of mind, working on different principles than our own; but once one no longer expects this supposed effect to have anything to do with sapience as it occurs in humans, I see no basis on which to conjecture the supposed effect at all.

There's also yet another opportunity here for us to talk ourselves into an inferiority complex.  We tend to break down a holistic situation into components for understanding, and then when things fail we may be inclined to ascribe failure to a particular component, rather than to the way the components fit together or to the system as a whole.  So when a human/technology ensemble fails, we're that much more likely to blame the human component.

Pro-sapient tech

How can we design technology to nurture sapience rather than stifle it?  Though I don't claim to grasp the full scope of this formidable challenge, I have some suggestions that should help.

On the stifling side, the two big principles I've discussed are algorithms and scale; algorithms eliminate the arbitrary flexibility that gives sapience room to function, while vast masses of data overwhelm sapiences (technology handles arbitrarily large masses of data smoothly, not trying to grok big-picture implications that presumably grow at least quadratically with scale).  Evidently sapience needs full-spectrum access to the data (it can't react to what it doesn't know), needs to have hands-on experience from which to learn, needs to be unfettered in its flexibility to act on what it sees.

Tedium should be avoided.  Aspects of this are likely well-known in some circles, perhaps as know-how related to (human) assembly-line work; from my own experience, tedium can trip up sapience in a couple of ways that blur into each other.  Repeating actions over and over can lead to inattention, so that when a case comes along that ought to be treated differently, the sapient operator just does the same thing yet again, either failing to notice it at all, or "catching it too late" (i.e., becoming aware of the anomaly after having already committed to processing it in the usual way).  On the other hand, paying full attention to an endless series of simple cases, even if they offer variations maintaining novelty, can exhaust the sapient operator's decision-making capacity; I, for one, find that making lots of little decisions drains me for a time, as if I had a reservoir of choice that, when depleted, refills at a limited natural rate.  (I somewhat recall a theory ascribed to Barack Obama that a person can only make one or two big decisions per day; same principle.)

Another important principle to keep in mind is that sapient minds need experience.
Even "deep learning" AIs need training, but with sapiences the need is deeper and wider; the point is not merely to "train" them to do a particular task, important though that is, but to give them accumulated broad experience in the whole unbounded context surrounding whatever particular tasks are involved.  Teaching a student to think is an educator's highest aspiration.  An expert sapient practitioner of any trade uses "tricks of the trade" that may be entirely outside the box.  A typical metaphor for extreme forms of such applied sapient measures is 'chewing gum and baling wire'.  One of the subtle traps of over-reliance on technology is that if sapiences aren't getting plenty of broad, wide hands-on experience, when situations outside known parameters arise there will be no-one clueful to deal with it — even if the infrastructure has sufficiently broad human-accessible flexibility to provide scope for out-of-the-box sapient measures.  (An old joke describes an expert being called in to fix some sort of complex system involving pipes under pressure —recently perhaps a nuclear power plant, some older versions involve a steamboat— who looks around, taps a valve somewhere, and everything starts working again; the expert charges a huge amount of money —say a million dollars, though the figure has to ratchet up over time due to inflation— and explains, when challenged on the amount, that one dollar is for tapping the valve, and the rest is for knowing where to tap.) This presents an economic/social challenge.  The need to provide humans with hands-on experience is a long-term investment in fundamental robustness.  For the same reason that standardized tests ultimately cannot measure sapience, short-term performance on any sufficiently well-structured task can be improved by applying technology to it, which can lead to a search for ways to make tasks more well-structured — with a completely predictable loss of ability to deal with... the unpredictable.  I touched on an instance of this phenomenon when describing, in an earlier post, the inherent robustness of a traffic system made up of human drivers. Suppression of sapience also takes much more sweeping, long-term systemic forms.  A particular case that made a deep impression on me:  in studying the history of my home town I was fascinated that the earliest European landowners of the area received land grants from the king, several generations before Massachusetts residents rose up in rebellion against English rule (causing a considerable ruckus, which you may have heard about).  Those land grants were subject to proving the land, which is to say, demonstrating an ability to develop it.  Think about that.  We criticize various parties —developers, big corporations, whatever— for exploiting the environment, but those land grants, some four hundred years ago under a different system of government, required exploiting the land, otherwise the land would be taken away and given to someone else.  Just how profoundly is that exploitation woven into the fabric of Western civilization?  It appears to be quite beyond distinctions like monarchy versus democracy, capitalism versus socialism.  We've got hold of the tail of a vast beast that hasn't even turned 'round to where we can see the thing as a whole; it's far, far beyond anything I can tackle in this post, except to note pointedly that we must be aware of it, and be thinking about it. 
A much simpler, but also pernicious, source of long-term systemic bias is planning to add support for creativity "later".  Criticism of this practice could be drawn to quite reasonable tactical concerns like whether anyone will really ever get around to attempting the addition, and whether a successful addition would fail to take hold because it would come too late to overcome previously established patterns of behavior; the key criticism I recommend, though, is that strategically, creativity is itself systemic and needs to be inherent in the design from the start.  Anything tacked on as an afterthought would be necessarily inferior. To give proper scope for sapience, its input — the information presented to the sapient operator in a technological interface — should be high-bandwidth from an unbounded well of ordered complexity.  There has to be underlying rhyme-and-reason to what is presented, otherwise information overload is likely, but it mustn't be stoppered down to the sort of simple order that lends itself to formal, aka technological, treatment, which would defeat the purpose of bringing a sapience to bear on it.  Take English text as archetypical:  built up mostly from 26 letters and a few punctuation marks and whitespace, yet as one scales up, any formal/technological grasp on its complexity starts to fuzz until ultimately it gets entirely outside what a non-sapience can handle.  Technology sinks in the swamp of natural language, while to a sapience natural language comes... well, naturally.  This sort of emergent formal intractability seems a characteristic domain of sapience.  There is apparently some range of variation in the sorts of rhyme and reason involved; for my part, I favor a clean simple set of orthogonal primitives, while another sort of mind favors a less tidy primitive set (more-or-less the design difference between Scheme and Common Lisp). When filtering input to avoid simply overwhelming the sapient user, whitelisting is inherently more dangerous than blacklisting.  That is, an automatic filter to admit information makes an algorithmic judgement about what may be important, which judgement is properly the purview of sapience, to assess unbounded context; whereas a filter to omit completely predictable information, though it certainly can go wrong, has a better chance of working since it isn't trying to make a call about which information is extraneous, only about which information is completely predictable (if properly designed; censorship being one of the ways for it to go horribly wrong). On the output side —i.e., what the sapient operator is empowered to do— a key aspect is effective ability to step outside the framework.  Sets of discrete top-level choices are likely to stifle sapient creativity rather than enhance it (not to be confused with a set of building blocks, which would include the aforementioned letters-plus-punctuation).  While there is obvious advantage in facilities to support common types of actions, those facilities need to blend smoothly with robust handling of general cases, to produce graceful degradation when stepping off the beaten path.  Handling some approaches more easily than others might easily turn into systemic bias against the others — a highly context-dependent pitfall, on which the reason for less-supported behavior seems to be the pivotal factor.  
(Consider the role of motive-for-deviation in the subjective balance between pestering the operator about an unconventional choice until they give it up, versus allowing one anomaly to needlessly propagate unchecked complications.) Storytelling and social upheaval A final thought, grounding this view of individual sapiences back into global systemic threats (where I started, at the top of the post). Have you noticed it's really hard to adapt a really good book into a really good movie?  So it seems to me.  When top-flight literature translates successfully to a top-flight movie, the literature is more likely to have been a short story.  A whole book is more likely to translate into a miniseries, or a set of movies.  I was particularly interested by the Harry Potter movies, which I found suffered from their attempt to fit far too much into each single movie; the Harry Potter books were mostly quite long, and were notable for their rich detail, and that couldn't possibly be captured by one movie per book without reducing the richness to something telegraphic.  The books were classics, for the ages; the movies weren't actually bad, but they weren't in the same rarefied league as the books.  (I've wondered if one could turn the Harry Potter book set into a television series, with one season per book.) The trouble in converting literature to cinematography is bandwidth.  From a technical standpoint this is counter-intuitive:  text takes vastly less digital storage than video; but how much of that data can be used as effective signal depends on what kind of signal is intended.  I maintain that as a storytelling medium, text is extremely high-bandwidth while video is a severe bottleneck, stunningly inefficient at getting the relevant ideas across if, indeed, they can be expressed at all.  In essence, I suggest, storytelling is what language has evolved for.  A picture may be worth a thousand words, but  (a) it depends on which words and which picture,  (b) it's apparently more like 84 words, and  (c) it doesn't follow that a thousand pictures are worth a thousand times as many words. In a post here some time back, I theorized that human language has evolved in three major stages (post).  The current stage in the developed world is literacy, in which society embraces written language as a foundation for acquiring knowledge.  The preceding stage was orality, where oral sagas are the foundation for acquiring knowledge, according to the theory propounded by Eric Havelock in his magnum opus Preface to Plato, where he proposes that Plato lived on the cusp of the transition of ancient Greek society from orality to literacy.  My extrapolation from Havelock's theory says that before the orality stage of language was another stage I've called verbality, which I speculate may have more-or-less resembled the peculiar Amazonian language Pirahã (documented by Daniel Everett in Don't Sleep There are Snakes).  Pirahã has a variety of strange features, but what particularly attracted my attention was that, adding up these features, Pirahã apparently does not and cannot support an oral culture; Pirahã culture has no history, art, or storytelling (does not), and the language has no temporal vocabulary, tense, or number system (cannot). 'No storytelling' is where this relates back to books-versus-movies.  
The nature of the transition from verbality to orality is unclear to me; but I (now) conjecture that once the transition to orality occurs, there would then necessarily be a long period of linguistic evolution during which society would slowly figure out how to tell stories.  At some point in this development, writing would arise and after a while precipitate the transition to literacy.  But the written form of language, in order to support the transition to literate society, would particularly have to be ideally suited to storytelling. Soon after the inception of email as a communication medium came the development of emoticons:  symbols absent from traditional written storytelling but evidently needed to fill in for the contextual "body language" clues ordinarily available in face-to-face social interaction.  Demonstrating that social interaction itself is not storytelling as such, for which written language was already well suited without emoticons.  One might conjecture that video, while lower-storytelling-bandwidth than text, could have higher effective social-interaction-bandwidth than text.  And on the other side of the equation, emoticons also demonstrate that the new electronic medium was already being used for non-storytelling social interaction. For another glimpse into the character of the electronic medium, contrast the experience of browsing Wikibooks — an online library of some thousands of open-access textbooks — against the pre-Internet experience of browsing in an academic library. On Wikibooks, perhaps you enter through the main page, which offers you a search box and links to some top-level subject pages like Computing, Engineering, Humanities, and such.  Each of those top-level subject pages provides an array of subsections, and each subsection will list all its own books as well as listing its own sub-subsections, and so on.  The ubiquitous search box will do a string search, listing first pages that mention your chosen search terms in the page title, then pages that contain the terms somewhere in the content of the page.  Look at a particular page of a book, and you'll see the text, perhaps navigation links such as next/previous page, parent page, subpages; there might be a navigation box on the right side of the page that shows the top-level table of contents of the book. At the pre-Internet library, typically, you enter past the circulation desk, where a librarian is seated.  Past that, you come to the card catalog; hundreds of alphabetically labeled deep drawers of three-by-five index cards, each card cumulatively customized by successive librarians over decades, perhaps over more than a century if this is a long-established library.  (Side insight, btw:  that card catalog is, in its essence, a collaborative hypertext document very like a wiki.)  You may spend some time browsing through the catalog, flipping through the cards in various drawers, jotting down notes and using them to move from one drawer to another — a slower process than if you could move instantly from one to another by clicking an electronic link, but also a qualitatively richer experience.  At every moment, surrounding context bears on your awareness; other index cards near the one you're looking at, other drawers; and beyond that, strange though it now seems that this is worth saying, you are in a room, literally immersed in context.  
Furniture, lights, perhaps a cork bulletin board with some notices on it; posters, signs, or notices on the walls, sometimes even thematic displays; miscellany (is that a potted plant over there?); likely some other people, quietly going about their own business.  The librarian you passed at the desk probably had some of their own stuff there, may have been reading a book.  Context.  Having taken notes on what you found in the card catalog and formulated a plan, you move on to the stacks; long rows of closely spaced bookcases, carefully labeled according to some indexing system referenced by the cards and jotted down in your notes, with perhaps additional notices on some of the cases — you're in another room — you come to the shelves, and may well browse through other books near what your notes direct you to, which you can hardly help noticing (not like an electronic system where you generally have to go out of your way to conjure up whatever context the system may be able to provide).  You select the particular book you want, and perhaps take it to a reading desk (or just plunk down on the carpet right there, or a nearby footstool, to read); and as you're looking at a physical book, you may well flip through the pages as you go, yet another inherently context-intensive browsing technique made possible by the physicality of the situation.

What makes this whole pre-Internet experience profoundly different from Wikibooks — and I say this as a great enthusiast of Wikibooks — is the rich, deep, pervasive context.  And context is where this dovetails back into the main theme of this post, recognizing context as the special province of sapience.

When the thriving memetic ecosystem of oral culture was introduced to the medium of written language, it did profoundly change things, producing literate culture, and new taxonomic classes of memetic organisms that could not have thrived in oral society (I'm thinking especially of scientific organisms); but despite these profound changes, the medium still thoroughly supported language, and context-intensive social interactions mostly remained in the realm of face-to-face encounters.  So the memetic ecosystem continued to thrive.

Memetic ecosystem is where all of this links back to the earlier discussion of populations of sapiences.  That discussion noted that system self-direction through a population of sapiences can break down if the system is thrown out of balance.  And while the memetic ecosystem handily survived the transition to literacy, it's an open question what will happen with the transition to the Internet medium.  This time, the new medium is highly context-resistant while it aggressively pulls in social interactions.  With sapience centering on context aspects that are by default eliminated or drastically transformed in the transition, it seems the transition must have, somehow, an extreme impact on the way sapient minds develop.  If there is indeed a healthy, stable form of society to be achieved on the far side of this transition, I don't think we should kid ourselves that we know what that will look like, but it's likely to be very different, in some way or other, from the sort of stable society that preceded.

The obvious forecast is social upheaval.  The new system doesn't know how to put itself together, or really even know for sure whether it can.  The old system is pretty sure to push back.  As I write this, I look at the political chaos in the United States —and elsewhere— and I see these forces at work.
And I think of the word singularity. Friday, June 16, 2017 Co-hygiene and quantum gravity [l'Universo] è scritto in lingua matematica ([The Universe] is written in the language of mathematics) — Galileo Galilei, Il Saggiatore (The Assayer), 1623. Here's another installment in my ongoing exploration of exotic ways to structure a theory of basic physics.  In our last exciting episode, I backtraced a baffling structural similarity between term-rewriting calculi and basic physics to a term-rewriting property I dubbed co-hygiene.  This time, I'll consider what this particular vein of theory would imply about the big-picture structure of a theory of physics.  For starters, I'll suggest it would imply, if fruitful, that quantum gravity is likely to be ultimately unfruitful and, moreover, quantum mechanics ought to be less foundational than it has been taken to be.  The post continues on from there much further than, candidly, I had expected it to; by the end of this installment my immediate focus will be distinctly shifting toward relativity. To be perfectly clear:  I am not suggesting anyone should stop pursuing quantum gravity, nor anything else for that matter.  I want to expand the range of theories explored, not contract it.  I broadly diagnose basic physics as having fallen into a fundamental rut of thinking, that is, assuming something deeply structural about the subject that ought not to be assumed; and since my indirect evidence for this diagnosis doesn't tell me what that deep structural assumption is, I want to devise a range of mind-bendingly different ways to structure theories of physics, to reduce the likelihood that any structural choice would be made through mere failure to imagine an alternative. The structural similarity I've been pursuing analogizes between, on one side, the contrast of pure function-application with side-effect-ful operations in term-rewriting calculi; and on the other side, the contrast of gravity with the other fundamental forces in physics.  Gravity corresponds to pure function-application, and the other fundamental forces correspond to side-effects.  In the earlier co-hygiene post I considered what this analogy might imply about nondeterminism in physics, and I'd thought my next post in the series would be about whether or not it's even mathematically possible to derive the quantum variety of nondeterminism from the sort of physical structure indicated.  Just lately, though, I've realized there may be more to draw from the analogy by considering first what it implies about non-locality, folding in nondeterminism later.  Starting with the observation that if quantum non-locality ("spooky action at a distance") is part of the analog to side-effects, then gravity should be outside the entanglement framework, implying both that quantum gravity would be a non-starter, and that quantum mechanics, which is routinely interpreted to act directly from the foundation of reality by shaping the spectrum of alternative versions of the entire universe, would have to be happening at a less fundamental level than the one where gravity differs from the other forces. On my way to new material here, I'll start with material mostly revisited from the earlier post, where it was mixed in with a great deal of other material; here it will be more concentrated, with a different emphasis and perhaps some extra elements leading to additional inferences.  As for the earlier material that isn't revisited here — I'm very glad it's there.  
This is, deliberately, paradigm-bending stuff, where different parts don't belong to the same conceptual framework and can't easily be held in the mind all at once; so if I hadn't written down all that intermediate thinking at the time, with its nuances and tangents, I don't think I could recapture it all later.  I'll continue here my policy of capturing the journey, with its intermediate thoughts and their nuances and tangents.

Until I started describing λ-calculus here in earnest, it hadn't registered on me that it would be a major section of the post.  Turns out, though, my perception of λ-calculus has been profoundly transformed by the infusion of perspective from physics; so I found myself going back to revisit basic principles that I would have skipped lightly over twenty years ago, and perhaps even two years ago.  It remains to be seen whether developments later in this post will sufficiently alter my perspective to provoke yet another recasting of λ-calculus in some future post.

• Side-effect-ful variables
• Quantum scope
• Geometry and network
• Cosmic structure

There were three main notions of computability in the 1930s, whose proven equivalence underlies the Church-Turing thesis:  general recursive functions, λ-calculus, and Turing machines (due respectively to Jacques Herbrand and Kurt Gödel, to Alonzo Church, and to Alan Turing).  General recursive functions are broadly equational in style, λ-calculus is stylistically more applicative; both are purely functional.  Turing machines, on the other hand, are explicitly imperative.  Gödel apparently lacked confidence in the purely functional approaches as notions of mechanical calculability, though Church was more confident, until the purely functional approaches were proven equivalent to Turing machines; which to me makes sense as a matter of concreteness.  (There's some discussion of the history in a paper by Solomon Feferman; pdf.)

This mismatch between abstract elegance and concrete straightforwardness was an early obstacle, in the 1960s, to applying λ-calculus to programming-language semantics.  Gordon Plotkin found a schematic solution strategy for the mismatch in his 1975 paper "Call-by-name, call-by-value and the λ-calculus" (pdf); one sets up two formal systems, one a calculus with abstract elegance akin to λ-calculus, the other an operational semantics with concrete clarity akin to Turing machines, then proves well-behavedness theorems for the calculus and correspondence theorems between the calculus and the operational semantics.  The well-behavedness of the calculus allows us to reason conveniently about program behavior, while the concreteness of the operational semantics allows us to be certain we are really reasoning about what we intend to.  For the whole arrangement to work, we need to find a calculus that is fully well-behaved while matching the behavior of the operational semantics we want, so that the correspondence theorems can be established.

Plotkin's 1975 paper modified λ-calculus to match the behavior of eager argument evaluation; he devised a call-by-value λv-calculus, with all the requisite theorems.  The behavior was, however, still purely functional, i.e., without side-effects.  Traditional mathematics doesn't incorporate side-effects.  There was (if you think about it) no need for traditional mathematics to explicitly incorporate side-effects, because the practice of traditional mathematics was already awash in side-effects.
Mutable state:  mathematicians wrote down what they were doing; and they changed their own mental state and each others'.  Non-local control-flow (aka "goto"s):  mathematicians made intuitive leaps, and the measure of proof was understandability by other sapient mathematicians rather than conformance to some purely hierarchical ordering.  The formulae themselves didn't contain side-effects because they didn't have to.  Computer programs, though, have to explicitly encompass all these contextual factors that the mathematician implicitly provided to traditional mathematics.  Programs are usually side-effect-ful. In the 1980s Matthias Felleisen devised λ-like calculi to capture side-effect-ful behavior.  At the time, though, he didn't quite manage the entire suite of theorems that Plotkin's paradigm had called for.  Somewhere, something had to be compromised.  In the first published form of Felleisen's calculi, he slightly weakened the well-behavedness theorems for the calculus.  In another published variant he achieved full elegance for the calculus but slightly weakened the correspondence theorems between the calculus and the operational semantics.  In yet another published variant he slightly modified the behavior — in operational semantics as well as calculus — to something he was able to reconcile without compromising the strength of the various theorems.  This, then, is where I came into the picture:  given Felleisen's solution and a fresh perspective (each generation knows a little less about what can't be done than the generation before), I thought I saw a way to capture the unmodified side-effect-ful behavior without weakening any of the theorems.  Eventually I seized an opportunity to explore the insight, when I was writing my dissertation on a nearby topic.  To explain where my approach fits in, I need to go back and pick up another thread:  the treatment of variables in λ-calculus. Alonzo Church also apparently seized an opportunity to explore an insight when doing research on a nearby topic.  The main line of his research was to see if one could banish the paradoxes of classical logic by developing a formal logic that weakens reductio ad absurdum — instead of eliminating the law of the excluded middle, which was a favored approach to the problem.  But when he published the logic, in 1932, he mentioned reductio ad absurdum in the first paragraph and then spent the next several paragraphs ranting about the evils of unbound variables.  One gathers he wanted everything to be perfectly clear, and unbound variables offended his sense of philosophical precision.  His logic had just one possible semantics for a variable, namely, a parameter to be supplied to a function; he avoided the need for any alternative notions of universally or existentially quantified variables, by the (imho quite lovely) device of using higher-order functions for quantification.  That is (since I've brought it up), existential quantifier Σ applied to function F would produce a proposition ΣF meaning that there is some true proposition FX, and universal quantifier Π applied to F, proposition ΠF meaning that every proposition FX is true.  In essence, he showed that these quantifiers are orthogonal to variable-binding; leaving him with only a single variable-binding device, which, for some reason lost to history, he called "λ". λ-calculus is formally a term-rewriting calculus; a set of terms together with a set of rules for rewriting a term to produce another term.  
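To make the quantifiers-as-higher-order-functions device from a couple of sentences back concrete — a minimal sketch of my own, restricted to a finite domain so that it actually runs, and not anything from Church's 1932 system itself — the quantifiers become ordinary functions applied to a predicate, so that λ remains the only binding construct in sight:

```haskell
-- Finite-domain toy: quantifiers as plain higher-order functions.
-- The only binder ever needed is the lambda that builds the predicate.
sigma :: [a] -> (a -> Bool) -> Bool   -- "some FX is true"
sigma domain f = any f domain

pi' :: [a] -> (a -> Bool) -> Bool     -- "every FX is true"
pi' domain f = all f domain

main :: IO ()
main = do
  print (sigma [1 .. 10 :: Int] (\x -> x * x > 50))   -- True
  print (pi'   [1 .. 10 :: Int] (\x -> x + 1 > x))    -- True
```

The point of the toy is only that quantification here is ordinary application of a higher-order function to a λ-built predicate, with no quantifier-specific binding machinery.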
The two basic well-behavedness properties that a term-rewriting calculus generally ought to have are compatibility and Church-Rosser-ness.  Compatibility says that if a term can be rewritten when it's a standalone term, it can also be rewritten when it's a subterm of a larger term.  Church-Rosser-ness says that if a term can be rewritten in two different ways, then the difference between the two results can always be eliminated by some further rewriting.  Church-Rosser-ness is another way of saying that rewriting can be thought of as a directed process toward an answer, which is characteristic of calculi.  Philosophically, one might be tempted to ask why the various paths of rewriting ought to reconverge later; but that puzzle arises from thinking of the terms as the underlying reality.  If the terms merely describe the reality, and the rewriting lets us reason about its development, then the term syntax is just a way for us to separately describe different parts of the reality, and compatibility and Church-Rosser-ness are just statements about our ability (via this system) to reason separately about different aspects of the development at different parts of the reality without distorting our eventual conclusion about where the whole development is going.  From that perspective, Church-Rosser-ness is about separability, and convergence is just the form in which the separability appears in the calculus.

The syntax of λ-calculus — which particularly clearly illustrates these principles — is

T   ::=   x | (TT) | (λx.T)  .

That is, a term is either a variable; or a combination, specifying that a function is applied to an operand; or a λ-expression, defining a function of one parameter.  The T in (λx.T) is the body of the function, x its parameter, and free occurrences of x in T are bound by this λ.  An occurrence of x in T is free if it doesn't occur inside a smaller context (λx.[ ]) within T.  This connection between a λ and the variable instances it binds is structural.  Here, for example, is a term involving variables x, y, and z; consider its outermost λx and the variable instances that λ binds:

((λx.((λy.((λx.(xz))(xy)))(xz)))(xy))  .

The outermost λx binds two x instances: the one in the (xy) inside the λy body, and the one in the (xz) that is the operand of the (λy.…) combination.  The x instance in the trailing (xy) is not bound by this λ since it is outside the binding expression.  The x instance in the innermost (xz) is not bound since it is captured by another λ inside the body of the one we're considering.  I suggest that these three elements — binder and two bound instances — should be thought of together as the syntactic representation of a deeper, distributed entity that connects distant elements of the term.

There is just one rewriting rule — one of the fascinations of this calculus, that just one rule suffices for all computation — called the β-rule:

((λx.T1)T2)   →   T1[x ← T2]   .

The left-hand side of this rule is the redex pattern (redex short for reducible expression); it specifies a local pattern in the syntax tree of the term.  Here the redex pattern is that some particular parent node in the syntax tree is a combination whose left-hand child is a λ-expression.  Remember, this rewriting relation is compatible, so the parent node doesn't have to be the root of the entire tree.  It's important that this local pattern in the syntax tree includes a variable binder λ, thus engaging not only a local region of the syntax tree, but also a specific distributed structure in the network of non-local connections across the tree.
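For readers who like code, here's a small sketch of the syntax just given and of the redex pattern, in Haskell (a toy of my own with hypothetical names; it adds nothing beyond what the prose above says):

```haskell
-- The lambda-calculus syntax T ::= x | (T T) | (\x.T), as a datatype,
-- plus a traversal that collects every beta-redex in a term.
data Term = Var String          -- a variable instance
          | App Term Term       -- a combination: function applied to operand
          | Lam String Term     -- a lambda-expression: one-parameter function
  deriving (Eq, Show)

-- A beta-redex: a combination whose left-hand child is a lambda-expression.
isRedex :: Term -> Bool
isRedex (App (Lam _ _) _) = True
isRedex _                 = False

-- Compatibility means a redex may be rewritten anywhere in the tree,
-- so it makes sense to collect redexes from every subterm.
redexes :: Term -> [Term]
redexes t = [t | isRedex t] ++ case t of
  App f a -> redexes f ++ redexes a
  Lam _ b -> redexes b
  _       -> []
```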
Following my earlier post, I'll call the syntax tree the "geometry" of the term, and the totality of the non-local connections its "network topology".

The right-hand side of the rule specifies replacement by substituting the operand T2 for the parameter x everywhere it occurs free in the body T1; but there's a catch.  One might, naively, imagine that this would be recursively defined as

x[x ← T]   =   T
x1[x2 ← T]   =   x1   if x1 isn't x2
(T1 T2)[x ← T]   =   (T1[x ← T] T2[x ← T])
(λx.T1)[x ← T2]   =   (λx.T1)
(λx1.T1)[x2 ← T2]   =   (λx1.T1[x2 ← T2])   if x1 isn't x2.

This definition just descends the syntax tree substituting for the variable, and stops if it hits a λ that binds the same variable; very straightforward, and only a little tedious.  Except that it doesn't work.  Most of it does; but there's a subtle error in the rule for descending through a λ that binds a different variable.  The trouble is, what if T1 contains a free occurrence of x2 and, at the same time, T2 contains a free instance of x1?  Then, before the substitution, that free instance of x1 was part of some larger distributed structure; that is, it was bound by some λ further up in the syntax tree; but after the substitution, following this naive definition of substitution, a copy of T2 is embedded within T1 with an instance of x1 that has been cut off from the larger distributed structure and instead bound by the inner λx1, essentially altering the sense of syntactic template T2.  The inner λx1 is then said to capture the free x1 in T2, and the resulting loss of integrity of the meaning of T2 is called bad hygiene (or, a hygiene violation).  For example,

((λy.(λx.y))x)   ⇒β   (λx.y)[y ← x]

but under the naive definition of substitution, this would be (λx.x), because of the coincidence that the x we're substituting for y happens to have the same name as the bound variable of this inner λ.  If the inner variable had been named anything else (other than y) there would have been no problem.  The "right" answer here is a term of the form (λz.x), where any variable name could be used instead of z as long as it isn't "x" or "y".  The standard solution is to introduce a rule for renaming bound variables (called α-renaming), and restrict the substitution rule to require that hygiene be arranged beforehand.  That is,

(λx1.T)   →   (λx2.T[x1 ← x2])   where x2 doesn't occur free in T
(λx1.T1)[x2 ← T2]   =   (λx1.T1[x2 ← T2])   if x1 isn't x2 and doesn't occur free in T2.

Here again, this may be puzzling if one thinks of the syntax as the underlying reality.  If the distributed structures of the network topology are the reality, which the syntax merely describes, then α-renaming is merely an artifact of the means of description; indeed, the variable-names themselves are merely an artifact of the means of description.

Side-effect-ful variables

Suppose we want to capture classical side-effect-ful behavior, unmodified, without weakening any of the theorems of Plotkin's paradigm.  Side-effects are by nature distributed across the term, and would therefore seem to belong naturally to its network topology.
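(An aside before going further into side-effects:  here is what the corrected, capture-avoiding substitution looks like rendered as code — a sketch of my own on the toy Term type sketched earlier, repeated here so the fragment stands alone.  It α-renames a binder whenever descending through it would capture a free variable of the operand.)

```haskell
-- subst x t2 t1 computes t1[x <- t2], alpha-renaming on the way past a
-- binder whenever capture threatens.  A toy sketch, not a production rig.
data Term = Var String | App Term Term | Lam String Term
  deriving (Eq, Show)

freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (App f a) = freeVars f ++ freeVars a
freeVars (Lam x b) = filter (/= x) (freeVars b)

-- Pick a name that is free in none of the given terms.
fresh :: [Term] -> String
fresh ts = head [v | n <- [0 :: Int ..], let v = "v" ++ show n
                   , v `notElem` concatMap freeVars ts]

subst :: String -> Term -> Term -> Term
subst x t2 (Var y)
  | y == x    = t2
  | otherwise = Var y
subst x t2 (App f a) = App (subst x t2 f) (subst x t2 a)
subst x t2 (Lam y b)
  | y == x               = Lam y b                  -- x is shadowed: stop here
  | y `elem` freeVars t2 = Lam z (subst x t2 b')    -- capture threatens:
  | otherwise            = Lam y (subst x t2 b)     --   alpha-rename first
  where
    z  = fresh [b, t2]        -- a name free in neither the body nor the operand
    b' = subst y (Var z) b    -- rename the binder's instances to the fresh name

-- The example from the text: (\x.y)[y <- x] comes out as a term of the
-- form (\z.x), not the capturing (\x.x).
example :: Term
example = subst "y" (Var "x") (Lam "x" (Var "y"))   -- Lam "v0" (Var "x")
```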
In Felleisen's basic calculus, retaining the classical behavior and requiring the full correspondence theorems, side-effect-ful operations create syntactic markers that then "bubble up" through the syntax tree till they reach the top of the term, from which the global consequence of the side-effect is enacted by a whole-term-rewriting rule — thus violating compatibility, since the culminating rule is by nature applied to the whole term rather than to a subterm.  This strategy seems, in retrospect, to be somewhat limited by an (understandable) inclination to conform to the style of variable handling in λ-calculus, whose sole binding device is tied to function application at a specific location in the geometry.  Alternatively (as I seized the opportunity to explore in my dissertation), one can avoid the non-compatible whole-term rules by making the syntactic marker, which bubbles up through the term, a variable-binder.  These side-effect-ful bindings are no longer strongly tied to a particular location in the geometry; they float, potentially to the top of the term, or may linger further down in the tree if the side-effect happens to only affect a limited region of the geometry.  But the full classical behavior (in the cases Felleisen addressed) is captured, and Plotkin's entire suite of theorems are supported. The calculus in which I implemented this side-effect strategy (along with some other things, that were the actual point of the dissertation but don't apparently matter here) is called vau-calculus. Recall that the β-rule of λ-calculus applies to a redex pattern at a specific location in the geometry, and requires a binder to occur there so that it can also tie in to a specific element of the network topology.  The same is true of the side-effect-ful rules of the calculus I constructed:  a redex pattern occurs at a specific location in the geometry with a local tie-in to the network topology.  There may then be a substitutive operation on the right-hand side of the rule, which uses the associated element of the network topology to propagate side-effect-ful consequences back down the syntax tree to the entire encompassed subterm.  There is a qualitative difference, though, between the traditional substitution of the β-rule and the substitutions of the side-effect-ful operations.  A traditional substitution T1[x ← T2] may attach new T2 subtrees at certain leaves of the T1 syntax tree (free instances of x in T1), but does not disturb any of the pre-existing tree structure of T1.  Consequently, the only effect of the β-rule on the pre-existing geometry is the rearrangement it does within the redex pattern.  This is symmetric to the hygiene property, which assures (by active intervention if necessary, via α-renaming) that the only effect of the β-rule on the pre-existing network topology is what it does to the variable element whose binding is within the redex pattern.  I've therefore called the geometry non-disturbance property co-hygiene.  As long as β-substitution is the only variable substitution used, co-hygiene is an easily overlooked property of the β-rule since, unlike hygiene, it does not require any active intervention to maintain. The substitutions used by the side-effect-ful rewriting operations go to the same α-renaming lengths as the β-rule to assure hygiene.  However, the side-effect-ful substitutions are non-co-hygienic.  This might, arguably, be used as a technical definition of side-effects, which cause distributed changes to the pre-existing geometry of the term. 
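A schematic toy of my own may help visualize a binder that floats up through the term (emphatically not the actual vau-calculus rules, whose details live in the dissertation):

```haskell
-- A side-effect binder that floats upward through the syntax tree, staying
-- above the variable instances it binds.  A real calculus would also have
-- to keep the float hygienic (alpha-renaming if the new sibling already
-- uses the same name); this sketch only shows the shape of the movement.
data MTerm = MVar String
           | MApp MTerm MTerm
           | MLam String MTerm
           | Eff String MTerm     -- floating side-effect binder
  deriving (Eq, Show)

-- One floating step: the binder moves from a child up to its parent node.
float :: MTerm -> MTerm
float (MApp (Eff x f) a)          = Eff x (MApp f a)
float (MApp f (Eff x a))          = Eff x (MApp f a)
float (MLam y (Eff x b)) | x /= y = Eff x (MLam y b)
float t                           = t
```

When such a binder finally enacts its side-effect, its substitution rewrites structure that existed before the rewrite — the non-co-hygiene just described — whereas β-substitution only ever grafts new material at the leaves it replaces.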
Quantum scope Because co-hygiene is about not perturbing pre-existing geometry, it seems reasonable that co-hygienic rewriting operations should be more in harmony with the geometry than non-co-hygienic rewriting operations.  Thus, β-rewriting should be more in harmony with the geometry of the term than the side-effect-ful operations; which, subjectively, does appear to be the case.  (The property that first drew my attention to all this was that α-renaming, which is geometrically neutral, is a special case of β-substitution, whereas the side-effect-ful substitutions are structurally disparate from α-renaming.) And gravity is more in harmony with the geometry of spacetime than are the other fundamental forces; witness general relativity. Hence my speculation, by analogy, that one might usefully structure a theory of basic physics such that gravity is co-hygienic while the other fundamental forces are non-co-hygienic. One implication of this line of speculation (as I noted in the earlier post) would be fruitlessness of efforts to unify the other fundamental forces with gravity by integrating them into the geometry of spacetime.  If the other forces are non-co-hygienic, their non-affinity with geometry is structural, and trying to treat them in a more gravity-like way would be like trying to treat side-effect-ful behavior as structurally akin to function-application in λ-calculus — which I have long reckoned was the structural miscue that prevented Felleisen's calculus from supporting the full set of well-behavedness theorems. On further consideration, though, something more may be suggested; even as the other forces might not integrate into the geometry of spacetime, gravity might not integrate into the infrastructure of quantum mechanics.  All this has to do with the network topology, a non-local infrastructure that exists even in pure λ-calculus, but which in the side-effect-ful vau-calculus achieves what one might be tempted to call "spooky action at a distance".  Suppose that quantum entanglement is part of this non-co-hygienic aspect of the theory.  (Perhaps quantum entanglement would be the whole of the non-co-hygienic aspect, or, as I discussed in the earlier post, perhaps there would be other, non-quantum non-locality with interesting consequences at cosmological scale; then again, one might wonder if quantum entanglement would itself have consequences at cosmological scale that we have failed to anticipate because the math is beyond us.)  It would follow that gravity would not exhibit quantum entanglement.  On one hand, this would imply that quantum gravity should not work well as a natural unification strategy.  On the other hand, to make this approach work, something rather drastic must happen to the underpinnings of quantum mechanics, both philosophical and technical. We understand quantum mechanics as describing the shape of a spectrum of different possible realities; from a technical perspective that is what quantum mechanics describes, even if one doesn't accept it as a philosophical interpretation (and many do accept that interpretation, if only on grounds of Occam's Razor that there's no reason to suppose philosophically some other foundation than is supported technically).  But, shaped spectra of alternative versions of the entire universe seems reminiscent of whole-term rewriting in Felleisen's calculus — which was, notably, a consequence of a structural design choice in the calculus that actually weakened the internal symmetry of the system.  
The alternative strategy of vau-calculus both had a more uniform infrastructure and avoided the non-compatible whole-term rewriting rules.  An analogous theory of basic physics ought to account for quantum entanglement without requiring wholesale branching of alternative universes.  Put another way, if gravity isn't included in quantum entanglement, and therefore has to diverge from the other forces at a level more basic than the level where quantum entanglement arises, then the level at which quantum entanglement arises cannot be the most basic level.

Just because quantum structure would not be at the deepest level of physics does not at all suggest that what lies beneath it must be remotely classical.  Quantum mechanics is mathematically a sort of lens that distorts whatever classical system is passed through it; taking the Schrödinger equation as demonstrative,

iℏ ∂Ψ/∂t   =   Ĥ Ψ ,

the classical system is contained in the Hamiltonian function Ĥ, which is plugged into the equation to produce a suitable spectrum of alternatives.  Hence my description of the quantum equation itself as basic.  But, following the vau-calculus analogy, it seems some sort of internal non-locality ought to be basic, as it follows from the existence of the network topology; looking at vau-calculus, even the β-rule fully engages the network topology, though co-hygienically.

Geometry and network

The above insights on the physical theory itself are mostly negative, indicating what this sort of theory of physics would not be like, what characteristics of conventional quantum math it would not have.  What sort of structure would it have?

I'm not looking for detailed math, just yet, but the overall shape into which the details would be cast.  Some detailed math will be needed, before things go much further, to demonstrate that the proposed approach is capable of generating predictions sufficiently consistent with quantum mechanics, keeping in mind the well-known no-go result of Bell's Theorem.  I'm aware of the need; the question, though, is not whether Bell's Theorem can be sidestepped — of course it can, like any other no-go theorem, by blatantly violating one of its premises — but whether it can be sidestepped by a certain kind of theory.  So the structure of the theory is part of the possibility question, and needs to be settled before we can ask the question properly.

In fact, one of my concerns for this sort of theory is that it might have too many ways to get around Bell's Theorem.  Occam's Razor would not look favorably on a theory with redundant Bell-avoidance devices.

Let's now set aside locality for a moment, and consider nondeterminism.  Bell's Theorem calls (in combination with some experimental results that are, somewhat inevitably, argued over) for chronological nondeterminism, that is, nondeterminism relative to the time evolution of the physical system.  One might, speculatively, be able to approximate that sort of nondeterminism arbitrarily well, in a fundamentally non-local theory, by exploiting the assumption that the physical system under consideration is trivially small relative to the whole cosmos.  We might be able to draw on interactions with distant elements of the cosmos to provide a more-or-less "endless" supply of pseudo-randomness.
I considered this possibility in the earlier post on co-hygiene, and it is an interesting theoretical question whether (or, at the very least, how) a theory of this sort could in fact generate the sort of quantum probability distribution that, according to Bell's Theorem, cannot be generated by a chronologically deterministic local theory.  The sort of theory I'm describing, however, is merely a way to provide a local illusion of nondeterminism in a non-local theory with global determinism — and when we're talking chronology, it is difficult even to define global determinism (because, thanks to relativity, "time" is tricky to define even locally; made even trickier since we're now contemplating a theory lacking the sort of continuity that relativity relies upon; and is likely impossible to define globally, thanks to relativity's deep locality).  It's also no longer clear anymore why one should expect chronological determinism at all. A more straightforward solution, seemingly therefore favored by Occam's Razor, is to give up on chronological determinism and instead acquire mathematical determinism, by the arguably "obvious" strategy of supposing that the whole of spacetime evolves deterministically along an orthogonal dimension, converting unknown initial conditions (initial in the orthogonal dimension) into chronological nondeterminism.  I demonstrated the principle of this approach in an earlier post.  It is a bit over-powered, though; a mathematically deterministic theory of this sort — moreover, a mathematically deterministic and mathematically local theory of this sort — can readily generate not only a quantum probability distribution of the sort considered by Bell's Theorem, but, on the face of it, any probability distribution you like.  This sort of excessive power would seem rather disfavored by Occam's Razor. The approach does, however, seem well-suited to a co-hygiene-directed theory.  Church-Rosser-ness implies that term rewriting should be treated as reasoning rather than directly as chronological evolution, which seemingly puts term rewriting on a dimension orthogonal to spacetime.  The earlier co-hygiene post noted that calculi, which converge to an answer via Church-Rosser-ness, contrast with grammars, which are also term-rewriting systems but exist for the purpose of diverging and are thus naturally allied with mathematical nondeterminism whereas calculi naturally ally with mathematical determinism.  So our desire to exploit the calculus/physics analogy, together with our desire for abstract separability of parts, seems to favor this use of a rewriting dimension orthogonal to spacetime. A puzzle then arises about the notion of mathematical locality.  When the rewriting relation, through this orthogonal dimension (which I used to call "meta-time", though now that we're associating it with reasoning some other name is wanted), changes spacetime, there's no need for the change to be non-local.  We can apparently generate any sort of physical laws, quantum or otherwise, without the need for more than strictly local rewrite rules; so, again by Occam's Razor, why would we need to suppose a whole elaborate non-local "network topology"?  A strictly local rewriting rule sounds much simpler. Consider, though, what we mean by locality.  
Both nondeterminism and locality must be understood relative to a dimension of change, thus "chronological nondeterminism"; but to be thorough in defining locality we also need a notion of what it means for two elements of a system state to be near each other.  "Yes, yes," you may say, "but we have an obvious notion of nearness, provided by the geometry of spacetime."  Perhaps; but then again, we're now deep enough in the infrastructure that we might expect the geometry of spacetime to emerge from something deeper.  So, what is the essence of the geometry/network distinction in vau-calculus? A λ-calculus term is a syntax tree — a graph, made up of nodes connected to each other by edges that, in this case, define the potential function-application relationships.  That is, the whole purpose of the context-free syntax is to define where the interactions — the redex patterns for applying the β-rule — are.  One might plausibly say much the same for the geometry of spacetime re gravity, i.e., location in spacetime defines the potential gravitational interactions.  The spacetime geometry is not, evidently, hierarchical like that of λ-calculus terms; that hierarchy is apparently a part of the function-application concept.  Without the hierarchy, there is no obvious opportunity for a direct physical analog to the property of compatibility in term-rewriting calculi. The network topology, i.e., the variables, provide another set of connections between nodes of the graph.  These groups of connection are less uniform, and the variations between them do not participate in the redex patterns, but are merely tangential to the redex patterns thus cuing the engagement of a variable structure in a rewriting transformation.  In vau-calculi the variable is always engaged in the redex through its binding, but this is done for compatibility; by guaranteeing that all the variable instances occur below the binding in the syntax tree, the rewriting transformation can be limited to that branch of the tree.  Indeed, only the λ bindings really have a fixed place in the geometry, dictated by the role of the variable in the syntactically located function application; side-effect-ful bindings float rather freely, and their movement through the tree really makes no difference to the function-application structure as long as they stay far enough up in the tree to encompass all their matching variable instances.  If not for the convenience of tying these bindings onto the tree, one might represent them as partly or entirely separate from the tree (depending on which kind of side-effect one is considering), tethered to the tree mostly by the connections to the bound variable instances.  The redex pattern, embedded within the geometry, would presumably be at a variable instance.  Arranging for Church-Rosser-ness would, one supposes, be rather more challenging without compatibility. Interestingly, btw, of the two classes of side-effects considered by vau-calculus (and by Felleisen), this separation of bindings from the syntax tree is more complete for sequential-state side-effects than for sequential-control side-effects — and sequential control is much more simply handled in vau-calculus than is sequential state.  I'm still wondering if there's some abstract principle here that could relate to the differences between various non-gravitational forces in physics, such as the simplicity of Maxwell's equations for electromagnetism. 
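As a concrete rendering of that geometry/network distinction, here's a sketch of my own (again on the toy Term type, repeated so the fragment stands alone):  the geometry is the parent/child structure of the syntax tree, and the network is the extra set of connections from each binder to the positions of the instances it binds.

```haskell
-- Extract the "network topology" of a term: for each lambda binder
-- (identified by its path in the tree), the paths of the variable
-- instances it binds, skipping instances captured by inner binders.
-- The tree structure itself -- the paths -- is the "geometry".
data Term = Var String | App Term Term | Lam String Term
  deriving (Eq, Show)

data Step = L | R | Body deriving (Eq, Show)  -- one parent-to-child edge
type Path = [Step]                            -- a node's position in the tree

network :: Term -> [(Path, [Path])]
network = go []
  where
    go _    (Var _)   = []
    go here (App f a) = go (here ++ [L]) f ++ go (here ++ [R]) a
    go here (Lam x b) = (here, map (here ++) (bound x [Body] b))
                          : go (here ++ [Body]) b

    bound x here (Var y)   = if y == x then [here] else []
    bound x here (App f a) = bound x (here ++ [L]) f ++ bound x (here ++ [R]) a
    bound x here (Lam y b)
      | y == x             = []            -- shadowed by an inner binder
      | otherwise          = bound x (here ++ [Body]) b
```

For example, network (Lam "x" (App (Var "x") (Var "y"))) is [([], [[Body,L]])]:  one binder, at the root, connected to a single bound instance one level down on the left.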
This notion of a binding node for a variable hovering outside the geometry, tethered more-or-less-loosely to it by connections to variable instances, has a certain vague similarity to the aggressive non-locality of quantum wave functions.  The form of the wave function would, perhaps, be determined by a mix of the nature of the connections to the geometry together with some sort of blurring effect resulting from a poor choice of representing structures; the hope would be that a better choice of representation would afford a more focused description. I've now identified, for vau-calculus, three structural differences between the geometry and the network. • The geometry contains the redex patterns (with perhaps some exotic exceptions). • The geometric topology is much simpler and more uniform than the network topology. • The network topology is treated hygienically by all rewriting transformations, whereas the geometry is treated co-hygienically only by one class of rewriting transformations (β). But which of these three do we expect to carry over to physics? The three major classes of rewriting operations in vau-calculus — function application, sequential control, and sequential state — all involve some information in the term that directs the rewrite and therefore belongs in the redex pattern.  All three classes of operations involve distributing information to all the instances of the engaged variable.  But, the three classes differ in how closely this directing information is tied to the geometry. For function application, the directing information is entirely contained in the geometry, the redex pattern of the β-rule, ((λx.T1)T2).  The only information about the variable not contained within that purely geometric redex pattern is the locations of the bound instances. For sequential control, the variable binder is a catch expression, and the bound variable instances are throw expressions that send a value up to the matching catch.  (I examined this case in detail in an earlier post.)  The directing information contained in the variable, beyond the locations of the bound instances, would seem to be the location of the catch; but in fact the catch can move, floating upward in the syntax tree, though moving the catch involves a non-co-hygienic substitutive transformation — in fact, the only non-co-hygienic transformation for sequential control.  So the directing information is still partly tied to the syntactic structure (and this tie is somehow related to the non-co-hygiene).  The catch-throw device is explicitly hierarchical, which would not carry over directly to physics; but this may be only a consequence of its relation to the function-application structure, which does carry over (in the broad sense of spacetime geometry).  There may yet be more to make of a side analogy between vau-calculus catch-throw and Maxwell's Equations. For sequential state, the directing information is a full-blown environment, a mapping from symbols to values, with arbitrarily extensive information content and very little relation to geometric location.  The calculus rewrite makes limited use of the syntactic hierarchy to coordinate time ordering of assignments — not so much inherently hierarchical as inherently tied to the time sequencing of function applications, which itself happens to be hierarchical — but this geometric connection is even weaker than for catch-throw, and its linkage to time ordering is more apparent.  
In correspondence with the weaker geometric ties, the supporting rewrite rules are much more complicated, as they moderate passage of information into and out of the mapping repository. "Time ordering" here really does refer to time in broadly the same sense that it would arise in physics, not to rewriting order as such.  That is, it is the chronological ordering of events in the programming language described by the rewriting system, analogous to the chronological ordering of events described by a theory of physics.  Order of rewriting is in part related to described chronology, although details of the relationship would likely be quite different for physics where it's to do with relativity.  This distinction is confusing even in term-rewriting PL semantics, where PL time is strictly classical; one might argue that confusion between rewriting, which is essentially reasoning, and evaluation, which is the PL process reasoned about, resulted in the unfortunately misleading "theory of fexprs is trivial" result which I have discussed here previously. It's an interesting insight that, while part of the use of syntactic hierarchy in sequential control/state — and even in function application, really — is about compatibility, which afaics does not at all carry over to physics, their remaining use of syntactic hierarchy is really about coordination of time sequencing, which does occur in physics in the form of relativity.  Admittedly, in this sort of speculative exploration of possible theories for physics, I find the prospect of tinkering with the infrastructure of quantum mechanics not nearly as daunting as tinkering with the infrastructure of relativity. At any rate, the fact that vau-calculus puts the redex pattern (almost always) entirely within a localized area of the syntax, would seem to be more a statement about the way the information is represented than about the geometry/network balance.  That is, vau-calculus represents the entire state of the system by a syntactic term, so each item of information has to be given a specific location in the term, even if that location is chosen somewhat arbitrarily.  It is then convenient, for time ordering, to require that all the information needed for a transformation should get together in a particular area of the term.  Quantum mechanics may suffer from a similar problem, in a more advanced form, as some of the information in a wave function may be less tied to the geometry than the equations (e.g. the Schrödinger equation) depict it.  What really makes things messy is devices that are related to the geometry but less tightly so than the primary, co-hygienic device.  Perhaps that is the ultimate trade-off, with differently structured devices becoming more loosely coupled to the geometry and proportionately less co-hygienic. All of which has followed from considering the first of three geometry/network asymmetries:  that redex patterns are mostly contained in the geometry rather than the network.  The other two asymmetries noted were  (1) that the geometric structure is simple and uniform while the network structure is not, and  (2) that the network is protected from perturbation while the geometry is not — i.e., the operations are all hygienic (protecting the network) but not all are co-hygienic (protecting the geometry).  
Non-co-hygiene complicates things only moderately, because the perturbations are to the simple, uniform part of the system configuration; all of the operations are hygienic, so they don't perturb the complicated, nonuniform part of the configuration.  Which is fortunate for mathematical treatment; if the perturbations were to the messy stuff, it seems we mightn't be able to cope mathematically at all.  So these two asymmetries go together.  In my more cynical moments, this seems like wishful thinking; why should the physical world be so cooperative?  However, perhaps they should be properly understood as two aspects of a single effect, itself a kind of separability, the same view I've recommended for Church-Rosser-ness; in fact, Church-Rosser-ness may be another aspect of the same whole.  The essential point is that we are able to usefully consider individual parts of the cosmos even though they're all interconnected, because there are limits on how aggressively the interconnectedness is exercised.  The "geometry" is the simple, uniform way of decomposing the whole into parts, and "hygiene" is an assertion that this decomposition suffices to keep things tractable.  It's still fair to question why the cosmos should be separable in this way, and even to try to build a theory of physics in which the separation breaks down; but there may be some reassurance, re Occam's Razor, in the thought that these two asymmetries (simplicity/uniformity, and hygiene) are two aspects of a single serendipitous effect, rather than two independently serendipitous effects. Cosmic structure Most of these threads are pointing toward a rewriting relation along a dimension orthogonal to spacetime, though we're lacking a good name for it atm (I tend to want to name things early in the development process, though I'm open to change if a better name comes along). One thread, mentioned above, that seems at least partly indifferent to the rewriting question, is that of changes in the character of quantum mechanics at cosmological scale.  This relates to the notion of decoherence.  It was recognized early in the conceptualization of quantum mechanics that a very small entangled quantum system would tend to interact with the rest of the universe and thereby lose its entanglement and, ultimately, become more classical. We can only handle the quantum math for very small physical systems; in fact, rather insanely small physical systems.  Intuitively, what if this tendency of entanglement to evaporate when interacting with the rest of the universe ceases to be valid when the size of the physical system is sufficiently nontrivial compared to the size of the whole universe?  In the traditional quantum mechanics, decoherence appears to be an all-or-nothing proposition, a strict dichotomy tied to the concept of observation.  If something else is going on at large scales, either it is an unanticipated implication of the math-that-we-can't-do, or it is an aspect of the physics that our quantum math doesn't include because the phenomena that would cause us to confront this aspect are many orders of magnitude outside anything we could possibly apply the quantum math to.  It's tantalizing that this conjures both the problem of observation, and the possibility that quantum mechanics may be (like Newtonian mechanics) only an approximation that's very good within its realm of application. The persistently awkward interplay of the continuous and discrete is a theme I've visited before.  
Relativity appears to have too stiff a dose of continuity in it, creating a self-reference problem even in the non-quantum case (iirc Einstein had doubts on this point before convincing himself the math of general relativity could be made to work); and when non-local effects are introduced for the quantum case, continuity becomes overconstraining.  Quantum gravity efforts suffer from a self-reference problem on steroids (non-renormalizable infinities).  The Big Picture perspective here is that non-locality and discontinuity go together because a continuum — as simple and uniform as it is possible to be — is always going to be perceived as geometry. The non-local network in vau-calculus appears to be inherently discrete, based on completely arbitrary point-to-point connections defined by location of variable instances, with no obvious way to set up any remotely similar continuous arrangement.  Moreover, the means I've described for deriving nondeterminism from the network connections (on which I went into some detail in the earlier post) exploits the potential for chaotic scrambling of discrete point-to-point connections by following successions of links hopscotching from point to point.  While the geometry might seem more amenable to continuity, a truly continuous geometry doesn't seem consistent with point-to-point network connections, either, as one would then have the prospect of an infinitely dense tangle of network connections to randomly unrelated remote points, a sort of probability-density field that seems likely to wash out the randomness advantages of the strategy and less likely to be mathematically useful; so the whole rewriting strategy appears discrete in both the geometry and network aspects of its configuration as well as in the discrete rewriting steps themselves. The rewriting approach may suffer from too stiff a dose of discreteness, as it seems to force a concrete choice of basic structures.  Quantum mechanics is foundationally flexible on the choice of elementary particles; the mathematical infrastructure (e.g. the Schrödinger equation) makes no commitment on the matter at all, leaving it to the Hamiltonian Ĥ.  Particles are devised comparatively freely, as with such entities as phonons and holes.  Possibly the rewriting structure one chooses will afford comparable flexibility, but it's not at all obvious that one could expect this level of versatile refactoring from a thoroughly discrete system.  Keeping in mind this likely shortfall of flexibility, it's not immediately clear what the basic elements should be.  Even if one adopts, say, the standard model, it's unclear how that choice of observable particles would correspond to concrete elements in a discrete spacetime-rewriting system (in one "metaclassical" scenario I've considered, spacetime events are particle-like entities tracing out one-dimensional curves as spacetime evolves across an orthogonal dimension); and it is by no means certain that the observable elements ought to follow the standard model, either.  As I write this there is, part of the time, a cat sitting on the sofa next to me.  It's perfectly clear to me that this is the correct way to view the situation, even though on even moderately closer examination the boundaries of the cat may be ambiguous, e.g. at what point an individual strand of fur ceases to be part of the cat.  
By the time we get down to the scale where quantum mechanics comes into play and refactoring of particles becomes feasible, though, is it even certain that those particles are "really" there?  (Hilaire Belloc cast aspersions on the reality of a microbe merely because it couldn't be seen without the technological intervention of a microscope; how much more skepticism is recommended when we need a gigantic particle accelerator?) Re the structural implications of quasiparticles (such as holes), note that such entities are approximations introduced to describe the behavior of vastly complicated systems underneath.  A speculation that naturally springs to mind is, could the underlying "elementary" particles be themselves approximations resulting from complicated systems at a vastly smaller scale; which would seem problematic in conventional physics since quantum mechanics is apparently inclined to stop at Planck scale.  However, the variety of non-locality I've been exploring in this thread may offer a solution:  by maintaining network connections from an individual "elementary" particle to remote, and rather arbitrarily scrambled, elements of the cosmos, one could effectively make the entire cosmos (or at least significant parts of it) serve as the vastly complicated system underlying the particle. It is, btw, also not certain what we should expect as the destination of a spacetime-rewriting relation.  An obvious choice, sufficient for a proof-of-concept theory (previous post), is to require that spacetime reach a stable state, from which there is either no rewriting possible, or further rewriting leaves the system state unchanged.  Is that the only way to derive a final state of spacetime?  No.  Whatever other options might be devised, one that comes to mind is some form of cycle, repeating a closed set of states of spacetime, perhaps giving rise to a set of states that would manifest in more conventional quantum math as a standing wave.  Speculatively, different particles might differ from each other by the sort of cyclic pattern they settle into, determining a finite — or perhaps infinite — set of possible "elementary particles".  (Side speculation:  How do we choose an initial state for spacetime?  Perhaps quantum probability distributions are themselves stable in the sense that, while most initial probability distributions produce a different final distribution, a quantum distribution produces itself.) Granting that the calculus/physics analogy naturally suggests some sort of physical theory based on a discrete rewriting system, I've had recurring doubts over whether the rewriting ought to be in the direction of time — an intuitively natural option — or, as discussed, in a direction orthogonal to spacetime.  At this point, though, we've accumulated several reasons to prefer rewriting orthogonal to spacetime. Church-Rosser-ness.  CR-ness is about ability to reason separately about the implications of different parts of the system, without having to worry about which reasoning to do first.  The formal property is that whatever order one takes these locally-driven inferences in ("locally-driven" being a sort of weak locality), it's always possible to make later inferences that reach a common conclusion by either path.  This makes it implausible to think of these inference steps as if they were chronological evolution. Bell's Theorem.  The theorem says, essentially, the probability distributions of quantum mechanics can't be generated by a conventionally deterministic local theory.  
Could it be done by a non-local rewriting theory evolving deterministically forward in time?  My guess would be, probably it could (at least for classical time); but I suspect it'd be rather artificial, whereas my sense of the orthogonal-dimension rewriting approach (from my aforementioned proof-of-concept) is that it ought to work out neatly.  (A numeric sketch of the quantum bound at issue appears at the end of this post.) Relativity.  Uses an intensively continuous mathematical infrastructure to construct a relative notion of time.  It would be rather awkward to set an intensively discrete rewriting relation on top of this relative notion of time; the intensively discrete rewriting really wants to be at a deeper level of reality than any continuous relativistic infrastructure, rather than built on top of it (just as we've placed it at a deeper level than quantum entanglement), with apparent continuity arising from statistical averaging over the discrete foundations.  Once rewriting is below relativity, there is no clear definition of a "chronological" direction for rewriting; so rewriting orthogonal to spacetime is a natural device from which to derive relativistic structure.  Relativity is, however, a quintessentially local theory, which ought to be naturally favored by a predominantly local rewriting relation in the orthogonal dimension.  Deriving relativistic structure from an orthogonal rewriting relation with a simple causal structure also defuses the self-reference problems that have lingered about gravity. It's rather heartening to see this feature of the theory (rewriting orthogonal to spacetime) — or really any feature of a theory — drawing support from considerations in both quantum mechanics and relativity. The next phase of exploring this branch of theory — working from these clues to the sort of structure such a theory ought to have — seems likely to study how the shape of a spacetime-orthogonal rewriting system determines the shape of spacetime.  My sense atm is that one would probably want to pay particular attention to how the system might give rise to a relativity-like structure, with an eye toward what role, if any, a non-local network might play in the system.  Keeping in mind that the β-rule's use of network topology, though co-hygienic, is at the core of what function application does and, at the same time, inspired my suggestion to simulate nondeterminism through repeatedly rescrambled network connections; and, likewise, keeping in mind evidence (variously touched on above) on the possible character of different kinds of generalized non-co-hygienic operations.
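As a numeric touchstone for the Bell's Theorem item above, here is a small sketch (my own illustration, not part of the original argument). For a spin singlet, the CHSH combination of correlations reaches 2√2, beyond the bound of 2 that constrains any chronologically deterministic local theory.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_along(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlation(theta_a, theta_b):
    """E(a, b) = <singlet| A(a) tensor B(b) |singlet>; analytically this is -cos(a - b)."""
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    return float(np.real(singlet.conj() @ op @ singlet))

# Standard CHSH measurement angles.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
print(abs(S))   # about 2.828 = 2*sqrt(2); a local deterministic assignment is capped at 2
```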
Darwin vs. Einstein The current battle for America is, as Angelo Codevilla has recently emphasized in his seminal essay, a war between the majority of Americans and America's ruling class. This conflict is a reflection of a battle between the two greatest scientists of the past two centuries, Charles Darwin and Albert Einstein. Einstein famously claimed that "God does not play dice with the universe," whereas Darwin claimed that God does, indeed, play dice with the universe. Codevilla pointed out that the self-image of the ruling class rests on its belief that humans are the unforeseen outcome of chance mutations acted upon by natural selection. Not so. God decreed the evolution of humans before time began. The ruling class stands with Darwin. We stand with Einstein. In his 1859 book The Origin of Species, Darwin wrote that evolution by natural selection was completely consistent with determinism. However, by 1868, Darwin had realized that his theory of evolution required a fundamental indeterminism at the microscopic level. From the last chapter of his Variation of Animals and Plants Under Domestication: [If] we assume that each particular variation was from the beginning of all time preordained ... natural selection or survival of the fittest, must appear to us superfluous laws of nature. Darwin's followers knew perfectly well that his theory was a challenge to determinism. Woodrow Wilson said in a speech made just before he became president: [The] Constitution of the United States had been made under the dominion of the Newtonian Theory. ... The makers of our Federal Constitution ... constructed a government ... to display the laws of nature. Politics in their thought was a variety of mechanics. ... The government was to exist and move by virtue of the efficacy of "checks and balances." ... The trouble with the theory is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton. ... Society is a living organism and must obey the laws of life, not of mechanics ... a nation is a living thing and not a machine. This is nonsense. Everything is a machine. Atoms, molecules, living organisms, planets, stars, galaxies, and the entire universe are machines, all subject to the same laws of mechanics. It is often believed that the development of quantum mechanics undermined determinism. One of the familiar facts of quantum theory is the Heisenberg Uncertainty Principle, and it is generally believed that this Principle establishes that God does indeed play dice. Not true. The great physicist Max Planck pointed out long ago that the Schrödinger equation, the fundamental equation of quantum mechanics from which the Uncertainty Principle is mathematically derived, is even more deterministic than the equations of Newton that so annoyed Wilson. The limit of prediction given by the Uncertainty Principle has been known for decades to be due to interference from universes that are parallel to ours, not from God playing dice. The existence of these other universes is a necessary mathematical consequence of the Schrödinger equation itself, or more generally, of Newton's own mechanics in its most general form.
Bohr and Heisenberg plotting to take over. The multi-dimensionality of the Schrödinger equation of stdQM defies direct interpretation of its wave functions as phenomena taking place in physical 3d space, and so to save the budding quantum mechanics from collapse an interpretation in terms of probability of electron particle configuration was invented by Max Born quickly after the formation of Schrödinger's equation in 1926. This was a Faustian deal, which has become the trademark of modern physics, since it sold out the very soul of causality, determinism and rationality of classical macroscopic physics to gain the glory of a new world of microscopic physics in the hands of modern physicists. Many prominent physicists including Schrödinger, Einstein and De Broglie protested against the deal of sacrificing the soul of physics, but Born, teaming up with Bohr and Heisenberg, won the game since the only possible interpretation of the multi-dimensional wave function was probabilistic. But the price was very big and has led to endless fruitless debate about the mysticism of atomic physics without causality. The multi-d Schrödinger equation presented itself as a formal (trivial) generalisation to atoms with many electrons of Schrödinger's equation for Hydrogen with one electron, and as a formality without derivation from physical principles. Physicists in charge of stdQM thus view Schrödinger's equation as given by God and beyond human understanding, with the amazing capability of having solutions which always agree exactly with observations. But is it really true that Schrödinger's equation is given by God and that we know that it is perfect?  No, Schrödinger's equation is constructed by a purely formal (trivial) mathematical generalisation by human minds without physics input. Moreover, since solutions by multi-dimensionality cannot be computed, it is impossible to check if solutions always agree with observation. The foundation of stdQM was thus shaky when it was laid in the late 1920s and it still is. This is the motivation for seeking a different generalisation to many electrons of Schrödinger's equation for Hydrogen, and realQM then comes up as a natural candidate based on physical principles.
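To put a rough number on the claim that the multi-dimensional wave function cannot be computed, here is a back-of-the-envelope sketch (my own illustration; the 50-point grid per spatial dimension and 16 bytes per complex amplitude are assumptions, not anything stated above).

```python
# Storage needed for a many-electron wave function Psi(x1, ..., xN) on a grid,
# ignoring spin: each electron contributes 3 spatial dimensions.
grid_points_per_dimension = 50
bytes_per_complex_amplitude = 16

for electrons in range(1, 6):
    amplitudes = grid_points_per_dimension ** (3 * electrons)
    terabytes = amplitudes * bytes_per_complex_amplitude / 1e12
    print(f"{electrons} electron(s): {amplitudes:.2e} amplitudes, {terabytes:.2e} TB")
# One electron needs a couple of megabytes; by four or five electrons the storage alone
# is far beyond any computer, which is the computability point made above.
```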
Wednesday, July 26, 2006 My countryman in Wikipedia I learned from a comment by anonymous on a previous posting that Lubos Motl has written something in Wikipedia about Finnish physicist Matti Pitkanen, whom I happen to know quite well. Knowing Lubos, the text could have been much nastier, and I am proud that Lubos takes the trouble to write something about my countryman, who is probably not perceived as any threat to string hegemony. The text below is the stub by Lubos. Matti Pitkanen is a Finnish alternative theoretical physicist who has attempted to prove the Riemann hypothesis, worked with p-adic numbers, and proposes an unusual theory called TGD that no other physicist understands. I would like to suggest a couple of corrections. Pitkanen proposed a "strategy for proving Riemann hypothesis" (as a matter of fact a proposal for a sketchy proof based on the identification of zeros as the spectrum of conformal weights of a certain conformally invariant physical system: I understand why he chose the cautious formulation). I happen to know that Pitkanen is still working intensively with p-adic numbers and has some strange ideas about how to generalize the notion of number by fusing reals and p-adics into a larger structure. He also seems to believe that p-adic physics could provide the physics of cognition and intentionality. I would like to complete the stub but better not. I still remember the bloodthirsty furor stimulated by my attempt to fill the stub about TGD inspired theory of consciousness, which is also one of the great passions of my countryman but not mentioned in the stub. At 3:09 AM, Anonymous Philippe VIOLA said... Hi, Matti. I quickly realized you're someone with a wide scientific knowledge. So, may I give a critical analysis of your work that is intended to be constructive and will probably help you in understanding the scientific community's position towards your work ? 1) In the texts you put on line, whatever the chapter, there is a legion of suppositions, hypotheses, and intuitive deductions. The conditional is the usual form. I would rather prefer equations and step-by-step rigorous mathematical proofs. Otherwise, nobody can take what you propose for granted, even if it's correct. 2) I had a look at your paper on the quantization of Planck's constant. You base your reasoning on a fundamental paper by Da Rocha and Nottale. I uploaded the article in question. In there, there is no such quantization. The quantization formula does indeed involve the scattering coefficient D, but : a) Da Rocha & Nottale DO NOT identify their D with ihbar/2m; as such, they only say their gravitational equation is SCHRODINGER-LIKE, not SCHRODINGER'S !!! b) Their D remains real-valued ; c) the introduction of the integer n in their formula DOES NOT apply to D at all : eq. (22) they give is E_n/m = -G²m²/(8D²n²) = -w_o²/2n². Consequently, only energy is quantized, which is perfectly normal... if there had been some quantization process of the scattering coefficient, as you suggest, we would have had a D(n) instead of D in the above formula and D itself would have been n-dependent. Of course, you can always read the product Dn as D(n), but I don't think it's the spirit of the work, basing myself on the reasoning that precedes this formula. 3) As I already pointed out to you in a former email (and you were besides not convinced either by a complex velocity), the Lagrangian that Nottale proposes and that leads him to his derivation of the Schrödinger equation is NO LONGER invariant.
By Noether's theorem, this means that physical quantities are NO LONGER CONSERVED, or, to put it only slightly differently, that his Langrangian represents NOTHING PHYSICALLY OBSERVABLE ANYMORE... Maybe will he argue that Noether was wrong, after all ?... I face similar difficulties as you, however my own situation is completely different : I'm a SELF-TRAINED mathematical physicist... But, apart from conceptual mistakes in my concern of trying to find alternative models to existant ones too, my reasonings are exclusively based on CALCULUS and RIGOROUS MATHEMATICAL PROVES. Nowhere, in my work will you find hypothesis, suppositions and even less speculations. This actually uselessly spoils your own capacities. And I honnestly regret it. Stop theoretical suppositions, do a 100% rigorous work based on sole calculus and try your chance again. Maybe you'll find a ear this way. I hope so for you, mate. :-) At 6:35 AM, Blogger Matti Pitkanen said... I think that we have a different view about what the construction of physical theory means. It would be very easy to postulate a simple Lagrangian, deduce Feyman rules and calculate cross sections. Unfortunately this simple linear approach has failed to produce insights about really interesting problems during the last thirty years. We have standard model but no understanding about why it has the symmetries it has, no understanding about particle mass spectrum, etc. Superstring theory tried to continue this hyper-technical approach and I think that its proponents are ready to admit the failure of this approach and are realizing that nothing less than a paradigm shift is needed. The fundamental questions are simply such that the standard Lagrangian approach is useless: the questions why standard model symmetries, what is the origin of conformal invariance, what gives rise to the quantization of masses, how Poincare invariance could be consistent with gravitation, etc... require completely new level of thinking. Calculus provides no help in this kind of problem since one of the many challenges is to build the calculus! My own approach to these problems is essentially pattern completion/bootstrap based on the new paradigm defined by the fundamental assumption that spacetimes are surfaces in M^4xCP_2. I fullheartedly but proudly admit that this approach is almost a diametrical opposite to the process in which results are deduced by formal symbol manipulations from an action principle. After all, I am doing my best to test what the fundamental postulates could be! The strategy is the attempt to develop in detail the new ontology in close interaction with information provided by the rich spectrum of anomalies. That this approach is not completely futile is demonstrated by the fact that Lubos Motl sees the trouble of commenting it without claiming it to be crackpotness. He does not treat loop quantum gravity in the same manner. And we must remember that Lubos Motl must choose carefully his wordings: certain purposeful ambivalence is the only strategy in a situation when you are regarded as a soldier in the troops of the dominating theory. The claim that no one understand TGD is part of this policy. Of course, there are many people who can understand what I am saying. The barriers are basically psychological. Concerning Nottale's approach and mine. The only input in my approach is the empirical evidence for the quantization of planetary orbital radii according to Bohr rules and my interpretation is completely different from that of Nottale. 
I am happy that I learned from Nottale's work the formula of gravitational Planck constant consistent with Equivalence Principle. The rest is definitely something totally different. With Best Regards, At 11:13 AM, Anonymous Philippe VIOLA said... "After all, I am doing my best to test what the fundamental postulates could be!" I can see it and that's why I'm giving you my personal point of view on yur approach. :-) But you're a bit hard with theoretical physics : high-energy physics is not limited to superstring models and you know it. On mass hierarchy, the Gell-Mann-Okubo formula is rather good, isn't it ? As are QED, GSW and QCD, no ? Great advances in nuclear physics have been done these last 30 years thanks to these theories. In the late 1960s, string models were already bad. Adding them supersymmetry could only give something worse. Physics is something, physical sectarim is something else with no connection at all. You wanna do something different ? I can do nothing else but to strongly encourage you in this way ! You wanna explore a new kind of calculus ? Excellent ! But you're not technical enough for that : you are a THEORETICIAN. As such, you have to FORMALLY PROVE EVERYTHING YOU INVESTIGATE, EVERY SINGLE ASSERTION YOU MAKE. You have to give people MEANS TO CHECK YOUR DEDUCTIONS. Look, the paper preceding this one on your blog, where you speak of rotating magnetism : I didn't even read more than a few lines. Why ? Because there is no mean of checking what you say. It may be correct, it may be wrong. How can we know it ? Do you have a technical paper, with a FORMAL THEORY followed by CALCULATED FORECASTS ? See what I mean ? :-) At 1:21 PM, Anonymous Philippe VIOLA said... I finish my reply, interrupted by dinner. :-) So much for me : there's a technical ref at the end of your paper on magnetic anomaly. I'll have a look at it. About the quantization of Planck's constant: 1) there's no need for such a quantization, since there already exist one and for long : the famous Sommerfeld-Wilson conditions J_k = h.n_k, where the n_k's are integers. And it's not a quantization of h, but of Delaunay's phase action variables. A further quantization of h would give h(n) and lead to what ? If there was such a non linear dependence of the J_k's with respect to the n_k's, it would have already been experimentally detected, especially at large-scales. Do not forget, indeed, that Delaunay's treatment of separable systems was originally made for planetary motions... 2) A quantization of h is not physically nor mathematically consistent with the foundations of quantum theory, the historical development being : Delaunay -> Sommerfeld-Wilson -> Planck -> Einstein on Sommereld-Wilson -> De Broglie for the wave-mechanical approach -> Schrödinger for the statistical wave interpretation. There is no place nor any need for a further quantization of h in all these developments. I may be wrong, as usual, but if dark matter was, as you suggest, a large-scale quantum state of matter, I think its effects would have been detected for long too, at least about coherence phases. look at neutron stars, dwarves and even black holes. Sure that a cosmic-large superfluid would not remain undetected for long... That's what makes me seriously dubious about the inexistence of that so-called "dark matter", + additional theoretical arguments on our fundamental models, including classical RG, for the description and understanding of the observable universe. At 8:55 PM, Blogger Matti Pitkanen said... 
What I was talking about whas theorizing done after standard model, GUTS, super symmetric theories, string models. And I believe that some of the mathematical structures discovered during these years will remain a part of the future theory. By the way, Gell-Mann-Okubo formula might be an accident since flavour SU(3) itself can be seen as a mere accidental symmetry. p-Adic mass calculations provide different origin for the formula and predict the low lying hadron masses with a better accuracy. At high energies new exotic states are predicted and there are anomalies swept under carpet during giving support for these anomalies (Aleph anomaly, the bumps in the mass distribution of top quark). About rotating magnetic systems: these blog postings are simply short summaries. There is also a detailed article of length about 100 pages at my homepage providing a detailed model for the loss of weight based on catastrophe theoretic model and making some predictions. But this kind of accuracy is only formal accuracy. The really challenging part of the work is qualitative analysis, constructing a coherent view about the classical and possible new physics of this very complex system. Comparison of various options. And most importantly, tying the explanation of this anomaly to other anomalies. Explanation of just single anomaly cannot be taken seriously. An anomaly whose very existence is questionable is for a scientists able to tolarate uncertainties and working holistically, not for a blind mathematician! Best Regards, At 10:45 PM, Blogger Matti Pitkanen said... A comment related to the quantization of Planck constant. a) The quantization of Planck constant while staying in the framework of wave mechanics would of course be non-sense. The mathematical framework involved is much much more general: von Neumann algebras known as hyperfinite factors of type II_1 is the techical term. Quite a profound generalization of the notion of 8-D imbedding space H=M^4xCP_2 is needed in order to fuse the physics with different Planck constants to single one. This relates closely to quantum groups, non-commutative spaces, conformal field theories,... b) The quantization of Planck constant implies fractionization of the integer m characterizing angular momentum projection. Anyons and fractional statistics are established physics and a possible alternative interpretation would be in terms of quantization of h. Fractional quantum Hall effect could be integer quantum Hall with increased Planck constant. But this is something that should be worked out. c) Phases with large value of quantized Planck constant can be identified as dark matter. Quite many physicists agree about its existence and there are detailed maps about its distribution but we know practically nothing about its physics to make it "visible" by experimental means. Planetary Bohr orbits and other similar effects claimed by Nottale and others could be seen as an effects of quantum coherent dark matter on the distribution of visible matter. For intance, the model for Bohr quantization of planetary orbits predicts strong number theoretical constraints on the ratios of planetary masses (ratios of integers n defining n-polygons constructible using only ruler and compass) satisfied with 10 per cent accuracy and predicts correctly the ratio of the densities of visible and of dark matter+energy. More generally, the prediction that preferred values of n correspond to these polygons means very strong testable predictions. 
The hypothesis that quantum coherent dark matter controls biomatter is a testable hypothesis. A good example is the model of EEG predicting a fractal hierarchy of scaled EEGs and thus a hierarchy of biorhytms. With Best Regards, Matti At 1:40 AM, Anonymous Philippe VIOLA said... Quite many physicists will always prefer to introduce some kinds of "exotic" speculation rather than accepting that their models may be wrong. This is true for everything and remains true for the dark matter hypohesis. Instead of telling themselves "we may be wrong", they say "there's something wrong with the Universe"... I'm a 100% for a Bohr-like model for planetary orbits. But I remain deeply convinced that some dark matter, if quantum, would necessarily induce visible perturbations on ordinary matter. Moreover, it would produce energy. There, what do we have ? We have 95% of matter missing and this 95% is attributed to a new kind of matter that is soooo exotic that it cannot be detected and produces no detectable energy... What the heck can then be such a matter, if not darker than darkness itself ??? I prefer telling myself that darkness is definitely in the physicists' mind, not in cosmos... At 3:58 AM, Blogger Matti Pitkanen said... The question is about what interpretation one adopts for the anomalies and the situation here is very similar to that for quarks. All evidence for quarks is indirect but very few of us questions the reality of quarks anymore. Dark matter is visible via its gravitational interaction if one accepts standard view about gravitation. If one accepts TGD based view it is detectable via very many other effects and one ends up to a quantum biology in which dark matter implies a huge number of testable effects many of them essential for the model of say EEG. Detectability is possible only if we know how to detect: solitons is a second excellent example here. All this is about the belief system that one is ready to adopt to interpret the empirical facts and the most effective belief system wins in the long run. At 11:55 PM, Anonymous Philippe VIOLA said... "Detectability is possible only if we know how to detect: solitons is a second excellent example here." In which model ? Not in classical GR : a mathematical theorem from Lichnerowicz in the early 1950s showed that there was no gravitational solitons in GR. For centuries, not to say millenaries, physical theories have been built to model observational facts. Since GR, we started to build theories first, than confront them to observation. Today, as we investigate energies that are far beyond our facilities, we can no longer test our models. So the procedure has become to build speculations first, then confront them to theories and adapt the theoretical models so that it suits their creators. Where is physics, today ? What kind of a "science" are we now talking about, other than permanent self-satisfaction, mutual congratulations and, above all, unique thinking ? :-( At 2:02 AM, Blogger Matti Pitkanen said... I mentioned solitons as a general example: only for few decades ago no one knew about solitons. When they were discovered (or rediscovered) they were suddenly seen everywhere. Particle physics is testing of theories since signal-to-noise ratio is so high. It is however the stubborn belief on reductionism which has led to the catastrophic situation in string models. 
If you believe that the problem is to extend physics from electroweak length scale to Planck scale then experimental testing is out of question and theorizing reduces to a wreckless speculation the only possible hope related to possible large additional dimensions. If you are ready or forced to give up the reductionistic dogma you suddenly have an immense spectrum of anomalies covering the length scales from particle physics to cosmology. Theorists could not hope for anything better. If you are ready to take also consciousness seriously the situation improves further. I would like to mention the latest anomaly that I learned of. In the most recent New Scientist there is an article about evidence that the objects believed to be black holes might not actually be black holes: they possess magnetic field. This is what TGD predicts for the asymptotic state of the star as a rotating dynamo like object. The model is based on very simple assumption: gravitational 4-momentum current (related simply to Einstein tensor) is conserved as one can expect in stationary situation. This gives field equations analogous to minimal surface equations with the metric replaced with Einsten tensor. At 4:19 PM, Anonymous Philippe VIOLA said... Sorry, Matti, maybe it's because it's late (or rather early in the morning ;-) ), but I don't see the connection between having a magnetic field and not being a black hole. Where's the contradiction ? On the no hair theorem ? It's usually assumed a black hole only keeps its mass, electric charge and angular momentum. With a charge, you create an electric field. With the electric field, you create a magnetic one (Rot E = -dB/dt). So, where's the point ? 8-( At 9:28 PM, Blogger Matti Pitkanen said... I repeated blindly the statement in New Scientist article. If magnetic dipole moment can be counted as a hair then black hole has no hair excludes magnetic field. Static (as opposed to stationary as in TGD framework) situation does not allow rotational E since one must have dB/dt=0. I looked wither whether it might be possible to have vacuum extremals allowing imbedding of a piece of a magnetic dipole field in TGD framework. This does not seem plausible. In principle generic 4-surface allows the imbedding of a piece of an arbitrary gauge potential. Vacuum exremals have however 2-D CP_2 projection so that only two CP_2 coordinates are available and this leads to too strong integrability conditions. The construction of dynamo like solutions as vacuum extremals is however rather easy by using axial symmetry and stationarity and the TGD variant of field equations given by D_a(G^{ab}\partial_bh^k)=0 saying that gravitational four-momentum is locally conserved in a stationary situation leads to the TGD counterpart of black hole solutions. Its properties resemble those of "magnetars". What is unexpected that the density of gravitational mass is concentrated on a spherical shell. At 7:50 AM, Blogger Lumo said... Dear Matti, just to be sure: I wrote it because I was going through the list of "articles demanded in the physics category" in which, at that time, I was writing stubs for every entry I could. Hope that you found the stub to be a good and objective starting point. ;-) All the best Post a Comment << Home
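One concrete item from the thread above, the Bohr-like quantization of planetary orbit radii attributed to Nottale, can at least be checked arithmetically. The sketch below is my own illustration and assumes Nottale's value v0 of roughly 144.7 km/s for the inner solar system; it is not an endorsement of any interpretation discussed in the comments.

```python
# Bohr-like rule for planetary orbits (Nottale): a_n = n^2 * GM_sun / v0^2.
GM_SUN = 1.327e20      # m^3 / s^2
V0 = 144.7e3           # m/s, assumed value for the inner solar system
AU = 1.496e11          # m

observed_au = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524}
for n, name in zip((3, 4, 5, 6), observed_au):
    predicted_au = n ** 2 * GM_SUN / V0 ** 2 / AU
    print(f"n={n}  {name:7s}  predicted {predicted_au:5.2f} AU   observed {observed_au[name]:5.2f} AU")
```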
Kohn–Sham equations From Wikipedia, the free encyclopedia Jump to: navigation, search In physics and quantum chemistry, specifically density functional theory, the Kohn–Sham equation is the Schrödinger equation of a fictitious system (the "Kohn–Sham system") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles.[1][2] The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as vs(r) or veff(r), called the Kohn–Sham potential. As the particles in the Kohn–Sham system are non-interacting fermions, the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest energy solutions to \left(-\frac{\hbar^2}{2m}\nabla^2+v_{\rm eff}(\mathbf r)\right)\phi_{i}(\mathbf r)=\varepsilon_{i}\phi_{i}(\mathbf r) This eigenvalue equation is the typical representation of the Kohn–Sham equations. Here, εi is the orbital energy of the corresponding Kohn–Sham orbital, φi, and the density for an N-particle system is \rho(\mathbf r)=\sum_i^N |\phi_{i}(\mathbf r)|^2. The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham (沈呂九), who introduced the concept at the University of California, San Diego in 1965. Kohn–Sham potential[edit] In density functional theory, the total energy of a system is expressed as a functional of the charge density as E[\rho] = T_s[\rho] + \int d\mathbf r\ v_{\rm ext}(\mathbf r)\rho(\mathbf r) + V_{H}[\rho] + E_{\rm xc}[\rho] where Ts is the Kohn–Sham kinetic energy which is expressed in terms of the Kohn–Sham orbitals as T_s[\rho]=\sum_{i=1}^N\int d\mathbf r\ \phi_i^*(\mathbf r)\left(-\frac{\hbar^2}{2m}\nabla^2\right)\phi_i(\mathbf r), vext is the external potential acting on the interacting system (at minimum, for a molecular system, the electron-nuclei interaction), VH is the Hartree (or Coulomb) energy, V_{H}={e^2\over2}\int d\mathbf r\int d\mathbf{r}'\ {\rho(\mathbf r)\rho(\mathbf r')\over|\mathbf r-\mathbf r'|}. and Exc is the exchange-correlation energy. The Kohn–Sham equations are found by varying the total energy expression with respect to a set of orbitals to yield the Kohn–Sham potential as v_{\rm eff}(\mathbf r) = v_{\rm ext}(\mathbf{r}) + e^2\int {\rho(\mathbf{r}')\over|\mathbf r-\mathbf r'|}d\mathbf{r}' + {\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)}. where the last term v_{\rm xc}(\mathbf r)\equiv{\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)} is the exchange-correlation potential. This term, and the corresponding energy expression, are the only unknowns in the Kohn–Sham approach to density functional theory. An approximation that does not vary the orbitals is Harris functional theory. The Kohn–Sham orbital energies εi, in general, have little physical meaning (see Koopmans' theorem). The sum of the orbital energies is related to the total energy as E = \sum_{i}^N \varepsilon_i - V_{H}[\rho] + E_{\rm xc}[\rho] - \int {\delta E_{\rm xc}[\rho]\over\delta\rho(\mathbf r)} \rho(\mathbf{r}) d\mathbf{r} Because the orbital energies are non-unique in the more general restricted open-shell case, this equation only holds true for specific choices of orbital energies (see Koopmans' theorem). 1. ^ Kohn, Walter; Sham, Lu Jeu (1965). "Self-Consistent Equations Including Exchange and Correlation Effects". Physical Review 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.  2. ^ Parr, Robert G.; Yang, Weitao (1994). 
Density-Functional Theory of Atoms and Molecules. Oxford University Press. ISBN 978-0-19-509276-9.
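As a concrete illustration of the self-consistency built into the equations above, here is a minimal one-dimensional toy Kohn–Sham loop. It is a sketch only: the harmonic external potential, the soft-Coulomb electron-electron interaction, and the 3D LDA exchange formula pressed into service as the exchange-correlation potential are placeholder assumptions of mine, not part of the article.

```python
import numpy as np

# Toy 1D Kohn-Sham self-consistency loop (atomic units; illustrative assumptions only).
n_grid, box_length, n_electrons = 200, 20.0, 2
x = np.linspace(-box_length / 2, box_length / 2, n_grid)
dx = x[1] - x[0]

# Kinetic energy operator -(1/2) d^2/dx^2 by finite differences.
kinetic = -0.5 * (np.diag(np.ones(n_grid - 1), 1)
                  - 2.0 * np.diag(np.ones(n_grid))
                  + np.diag(np.ones(n_grid - 1), -1)) / dx ** 2
v_ext = 0.5 * x ** 2                                                # assumed external potential
soft_coulomb = 1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)  # assumed 1D interaction

density = np.full(n_grid, n_electrons / box_length)                 # crude initial guess
for iteration in range(200):
    v_hartree = soft_coulomb @ density * dx
    v_xc = -(3.0 * density / np.pi) ** (1.0 / 3.0)            # 3D LDA exchange form, as a stand-in
    h_ks = kinetic + np.diag(v_ext + v_hartree + v_xc)        # the Kohn-Sham Hamiltonian
    eigenvalues, orbitals = np.linalg.eigh(h_ks)
    orbitals /= np.sqrt(dx)                                   # normalize so sum |phi|^2 dx = 1
    new_density = np.sum(np.abs(orbitals[:, :n_electrons]) ** 2, axis=1)
    if np.max(np.abs(new_density - density)) < 1e-6:          # self-consistency reached
        break
    density = 0.7 * density + 0.3 * new_density               # simple linear mixing

print("occupied Kohn-Sham orbital energies:", eigenvalues[:n_electrons])
```

The loop mirrors the structure of the equations: the orbitals define the density, the density defines the effective potential, and the potential defines the orbitals, so the equations must be iterated until they agree with each other.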
Erwin Schrödinger From Wikipedia, the free encyclopedia Jump to: navigation, search This person is a Nobel prize winner Erwin Schrödinger Erwin Rudolf Josef Alexander Schrödinger (12 August 1887, Vienna – 4 January 1961, Vienna) was an Austrian physicist. He was one of the founders of the theory of quantum mechanics and was awarded the Nobel Prize in Physics in 1933. Life[edit | edit source] Schrödinger went to the Academic Gymnasium from 1898 to 1906. Afterwards he studied mathematics and physics in Vienna and worked on his habilitation from 1910. He was a soldier in World War I. Afterwards he held professorships in Zürich, Jena, Breslau and Stuttgart. In 1920 he married. In 1927 he went to Berlin to succeed Max Planck. After the Nazi takeover of power, Schrödinger left Germany and took up a new professorship in Oxford. In 1933 he was awarded the Nobel Prize. Three years later he returned to Austria and became a professor in Graz. In 1938 he had to leave Austria, because the Nazis had taken over the government. He went to Dublin and became director of the School for Theoretical Physics. In 1956 he returned to Vienna and got a professorship in Theoretical Physics. He died of tuberculosis in 1961. Important work[edit | edit source] Schrödinger's most important work is wave mechanics, a formulation of quantum mechanics, and especially the Schrödinger equation. He also worked in the field of biophysics. He invented the concept of negentropy and helped to develop molecular biology. Other pages[edit | edit source] Other websites[edit | edit source] Wikimedia Commons has media related to this topic:
Quantum mechanics: Suppose that there is a particle with orbital angular momentum $|L|$. But the particle also has spin quantity $|S|$. The question is, how do I reflect this in the Schrödinger equation? I do know what the Schrödinger equation becomes in each case - when a particle has particular orbital angular momentum and when a particle has some spin, but not when both occur. Your question makes little sense in the context of quantum mechanics. Particles don't follow paths, specifically not circles, and spin is an intrinsic property, not one of motion. –  A.O.Tell Oct 12 '12 at 9:19 @A.O.Tell Modified the question. –  War Oct 12 '12 at 9:32 Just adding spin means you attach a tensor factor space containing the spin representation to the particle space. The Schrödinger equation doesn't change unless you add an interaction term that incorporates spin. Which term that is depends on your actual physical model. –  A.O.Tell Oct 12 '12 at 9:36 2 Answers (Accepted answer) The Schrödinger equation does not describe spin. If you need to describe spin as well, you should use the Pauli equation or the Dirac equation (for spin 1/2). so angular momentum can be reflected, but spin can't? –  War Oct 12 '12 at 9:41 That's right, unless you use some unusual definition of the Schrödinger equation. –  akhmeteli Oct 12 '12 at 9:47 What is understood by "Schrödinger equation" here, and how to interpret "should"? –  NikolajK Oct 12 '12 at 11:11 The Schrödinger equation is understood as the second equation in en.wikipedia.org/wiki/Schr%C3%B6dinger_equation , marked "Time-dependent Schrödinger equation (single non-relativistic particle)". If you understand it as the first equation there (marked "Time-dependent Schrödinger equation (general)"), then you include, e.g., the Dirac equation there and what not. I cannot add anything to dictionary definitions of "should". –  akhmeteli Oct 12 '12 at 11:44 This answer is incorrect. The term Schrödinger's equation refers to any equation of the form $i\hbar\frac{d}{dt}|\Psi\rangle =\hat{H}|\Psi\rangle$, or coordinate representations of it. For a single spinless nonrelativistic particle, this reduces to the form $i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{x},t)+V(\mathbf{x},t)\Psi(\mathbf{x},t)$ you quote from Wikipedia. In other cases it can be quite different; one example of this is the Pauli equation, also known as the Schrödinger-Pauli equation –  Emilio Pisanty Oct 12 '12 at 16:59 I think we can talk about spin and spin interactions with the standard Schrödinger equation. Start with spin-orbit coupling or LS coupling. Next see the Zeeman effect, and especially the Paschen–Back effect. You need perturbation theory to pick up on spin effects given the standard Schrödinger model of the atom as seen on Wikipedia: First order perturbation theory with these fine-structure corrections yields the following formula for the Hydrogen atom in the Paschen–Back limit:[2]
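Following the comment above that adding spin attaches a tensor factor carrying the spin representation, here is a small sketch (my own construction, with an assumed field strength B): a one-dimensional particle in a box with spin 1/2, where the spatial part of the Hamiltonian is untouched and a Zeeman term acts only on the spin factor.

```python
import numpy as np

# 1D particle in a box by finite differences (hbar = m = 1).
n, box = 100, 10.0
dx = box / (n + 1)
h_space = -0.5 * (np.diag(np.ones(n - 1), 1)
                  - 2.0 * np.diag(np.ones(n))
                  + np.diag(np.ones(n - 1), -1)) / dx ** 2

# Spin-1/2 factor with a Zeeman term -B * Sz, where Sz = sigma_z / 2 (B is an assumed field).
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
B = 0.1
h_spin = -B * 0.5 * sigma_z

# Full Hamiltonian on the tensor-product space: H = H_space (x) I_2 + I_space (x) H_spin.
H = np.kron(h_space, np.eye(2)) + np.kron(np.eye(n), h_spin)

print(np.linalg.eigvalsh(H)[:4])   # each spatial level is split into a spin up/down pair
```

Without the Zeeman (or a spin-orbit) term the spin factor is inert, which is the sense in which the Schrödinger equation itself "doesn't change"; with such a term one is effectively writing a Pauli-type equation.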
Discover Interview: Roger Penrose Says Physics Is Wrong, From String Theory to Quantum Mechanics By Susan Kruglinski, Oliver Chanarin|Tuesday, October 06, 2009 Roger Penrose could easily be excused for having a big ego. A theorist whose name will be forever linked with such giants as Hawking and Einstein, Penrose has made fundamental contributions to physics, mathematics, and geometry. He reinterpreted general relativity to prove that black holes can form from dying stars. He invented twistor theory—a novel way to look at the structure of space-time—and so led us to a deeper understanding of the nature of gravity. He discovered a remarkable family of geometric forms that came to be known as Penrose tiles. He even moonlighted as a brain researcher, coming up with a provocative theory that consciousness arises from quantum-mechanical processes. And he wrote a series of incredibly readable, best-selling science books to boot. And yet the 78-year-old Penrose—now an emeritus professor at the Mathematical Institute, University of Oxford—seems to live the humble life of a researcher just getting started in his career. His small office is cramped with the belongings of the six other professors with whom he shares it, and at the end of the day you might find him rushing off to pick up his 9-year-old son from school. With the curiosity of a man still trying to make a name for himself, he cranks away on fundamental, wide-ranging questions: How did the universe begin? Are there higher dimensions of space and time? Does the current front-running theory in theoretical physics, string theory, actually make sense? Because he has lived a lifetime of complicated calculations, though, Penrose has quite a bit more perspective than the average starting scientist. To get to the bottom of it all, he insists, physicists must force themselves to grapple with the greatest riddle of them all: the relationship between the rules that govern fundamental particles and the rules that govern the big things—like us—that those particles make up. In his powwow with DISCOVER contributing editor Susan Kruglinksi, Penrose did not flinch from questioning the central tenets of modern physics, including string theory and quantum mechanics. Physicists will never come to grips with the grand theories of the universe, Penrose holds, until they see past the blinding distractions of today’s half-baked theories to the deepest layer of the reality in which we live. You come from a colorful family of overachievers, don’t you? My older brother is a distinguished theoretical physicist, a fellow of the Royal Society. My younger brother ended up the British chess champion 10 times, a record. My father came from a Quaker family. His father was a professional artist who did portraits—very traditional, a lot of religious subjects. The family was very strict. I don’t think we were even allowed to read novels, certainly not on Sundays. My father was one of four brothers, all of whom were very good artists. One of them became well known in the art world, Sir Roland. He was cofounder of the Institute of Contemporary Arts in London. My father himself was a human geneticist who was recognized for demonstrating that older mothers tend to get more Down syndrome children, but he had lots of scientific interests. How did your father influence your thinking? The important thing about my father was that there wasn’t any boundary between his work and what he did for fun. That rubbed off on me. 
He would make puzzles and toys for his children and grandchildren. He used to have a little shed out back where he cut things from wood with his little pedal saw. I remember he once made a slide rule with about 12 different slides, with various characters that we could combine in complicated ways. Later in his life he spent a lot of time making wooden models that reproduced themselves—what people now refer to as artificial life. These were simple devices that, when linked together, would cause other bits to link together in the same way. He sat in his woodshed and cut these things out of wood in great, huge numbers. Are Penrose tiles useful or just beautiful? Escher saw the article and was inspired by it? Is it true that you were bad at math as a kid? I was unbelievably slow. I lived in Canada for a while, for about six years, during the war. When I was 8, sitting in class, we had to do this mental arithmetic very fast, or what seemed to me very fast. I always got lost. And the teacher, who didn’t like me very much, moved me down a class. There was one rather insightful teacher who decided, after I’d done so badly on these tests, that he would have timeless tests. You could just take as long as you’d like. We all had the same test. I was allowed to take the entire next period to continue, which was a play period. Everyone was always out and enjoying themselves, and I was struggling away to do these tests. And even then sometimes it would stretch into the period beyond that. So I was at least twice as slow as anybody else. Eventually I would do very well. You see, if I could do it that way, I would get very high marks. You have called the real-world implications of quantum physics nonsensical. What is your objection? It doesn’t make any sense, and there is a simple reason. You see, the mathematics of quantum mechanics has two parts to it. One is the evolution of a quantum system, which is described extremely precisely and accurately by the Schrödinger equation. That equation tells you this: If you know what the state of the system is now, you can calculate what it will be doing 10 minutes from now. However, there is the second part of quantum mechanics—the thing that happens when you want to make a measurement. Instead of getting a single answer, you use the equation to work out the probabilities of certain outcomes. The results don’t say, “This is what the world is doing.” Instead, they just describe the probability of its doing any one thing. The equation should describe the world in a completely deterministic way, but it doesn’t. Erwin Schrödinger, who created that equation, was considered a genius. Surely he appreciated that conflict. Schrödinger was as aware of this as anybody. He talks about his hypothetical cat and says, more or less, “Okay, if you believe what my equation says, you must believe that this cat is dead and alive at the same time.” He says, “That’s obviously nonsense, because it’s not like that. Therefore, my equation can’t be right for a cat. So there must be some other factor involved.” So Schrödinger himself never believed that the cat analogy reflected the nature of reality? Oh yes, I think he was pointing this out. I mean, look at three of the biggest figures in quantum mechanics, Schrödinger, Einstein, and Paul Dirac. They were all quantum skeptics in a sense. Dirac is the one whom people find most surprising, because he set up the whole foundation, the general framework of quantum mechanics. 
People think of him as this hard-liner, but he was very cautious in what he said. When he was asked, “What’s the answer to the measurement problem?” his response was, “Quantum mechanics is a provisional theory. Why should I look for an answer in quantum mechanics?” He didn’t believe that it was true. But he didn’t say this out loud much. Yet the analogy of Schrödinger’s cat is always presented as a strange reality that we have to accept. Doesn’t the concept drive many of today’s ideas about theoretical physics? That’s right. People don’t want to change the Schrödinger equation, leading them to what’s called the “many worlds” interpretation of quantum mechanics. That interpretation says that all probabilities are playing out somewhere in parallel universes? It says OK, the cat is somehow alive and dead at the same time. To look at that cat, you must become a superposition [two states existing at the same time] of you seeing the live cat and you seeing the dead cat. Of course, we don’t seem to experience that, so the physicists have to say, well, somehow your consciousness takes one route or the other route without your knowing it. You’re led to a completely crazy point of view. You’re led into this “many worlds” stuff, which has no relationship to what we actually perceive. The idea of parallel universes—many worlds—is a very human-centered idea, as if everything has to be understood from the perspective of what we can detect with our five senses. The trouble is, what can you do with it? Nothing. You want a physical theory that describes the world that we see around us. That’s what physics has always been: Explain what the world that we see does, and why or how it does it. Many worlds quantum mechanics doesn’t do that. Either you accept it and try to make sense of it, which is what a lot of people do, or, like me, you say no—that’s beyond the limits of what quantum mechanics can tell us. Which is, surprisingly, a very uncommon position to take. My own view is that quantum mechanics is not exactly right, and I think there’s a lot of evidence for that. It’s just not direct experimental evidence within the scope of current experiments. In general, the ideas in theoretical physics seem increasingly fantastical. Take string theory. All that talk about 11 dimensions or our universe’s existing on a giant membrane seems surreal. You’re absolutely right. And in a certain sense, I blame quantum mechanics, because people say, “Well, quantum mechanics is so nonintuitive; if you believe that, you can believe anything that’s non­intuitive.” But, you see, quantum mechanics has a lot of experimental support, so you’ve got to go along with a lot of it. Whereas string theory has no experimental support. The book is called Fashion, Faith and Fantasy in the New Physics of the Universe. Each of those words stands for a major theoretical physics idea. The fashion is string theory; the fantasy has to do with various cosmological schemes, mainly inflationary cosmology [which suggests that the universe inflated exponentially within a small fraction of a second after the Big Bang]. Big fish, those things are. It’s almost sacrilegious to attack them. And the other one, even more sacrilegious, is quantum mechanics at all levels—so that’s the faith. People somehow got the view that you really can’t question it. A few years ago you suggested that gravity is what separates the classical world from the quantum one. Are there enough people out there putting quantum mechanics to this kind of test? 
No, although it's sort of encouraging that there are people working on it at all. It used to be thought of as a sort of crackpot, fringe activity that people could do when they were old and retired. Well, I am old and retired! But it's not regarded as a central, as a mainstream activity, which is a shame. After Newton, and again after Einstein, the way people thought about the world shifted. When the puzzle of quantum mechanics is solved, will there be another revolution in thinking? It's hard to make predictions. Ernest Rutherford said his model of the atom [which led to nuclear physics and the atomic bomb] would never be of any use. But yes, I would be pretty sure that it will have a huge influence. There are things like how quantum mechanics could be used in biology. It will eventually make a huge difference, probably in all sorts of unimaginable ways. In your book The Emperor's New Mind, you posited that consciousness emerges from quantum physical actions within the cells of the brain. Two decades later, do you stand by that? I think it will be beautiful.
Physics Friday 55

Classically, a one-dimensional harmonic oscillator is a system with a mass under a restoring force proportional to displacement from the equilibrium position: F = −kx. The energy is E = ½mv² + ½kx², and the equation of motion has solution x(t) = A cos(ωt + φ), where ω = (k/m)^½. Analogously, a one-dimensional quantum harmonic oscillator is a particle with Hamiltonian H = p²/(2m) + ½mω²x², where here p is the quantum momentum operator, and x the position operator. In the position basis, this is then H = −(ℏ²/2m)(d²/dx²) + ½mω²x². A cursory examination of the expectation for energy shows that we can expect our energies to be non-negative. Most quantum mechanics books and courses I have encountered address the energy eigenstates by means of the raising and lowering operators. Instead, here we will use an analytical method of finding the eigenvalues to the time-independent Schrödinger equation −(ℏ²/2m)ψ″ + ½mω²x²ψ = Eψ. Let us replace x and E with corresponding dimensionless variables: we define the dimensionless variables ξ = x(mω/ℏ)^½ and ε = 2E/(ℏω). Then the Schrödinger equation becomes d²ψ/dξ² = (ξ² − ε)ψ. To find a solution to this differential equation, we consider asymptotic behavior. Namely, when ξ is large, the ξ² dominates over ε, and our equation approaches d²ψ/dξ² ≈ ξ²ψ, which hints at a solution that behaves like a Gaussian. Adopting, then, a test form ψ(ξ) = h(ξ)e^(−ξ²/2), plugging this into our differential equation, and eliminating the common Gaussian factor, we get h″ − 2ξh′ + (ε − 1)h = 0. Applying the power series method, we plug in h(ξ) = Σ aⱼξʲ to find the recursion relation a_(j+2) = (2j + 1 − ε)/((j + 1)(j + 2)) · a_j. Now, we return to the physics of the problem, namely that the wavefunctions have to be normalized. This means that ψ→0 as x→±∞. Now, considering that a_(j+2)/a_j → 2/j for large j, and comparing to the series expansion of the Gaussian e^(ξ²), we see our series diverges too fast for ψ to be normalized, unless the series terminates. This requires ε = 2n + 1 for some non-negative integer n, which gives E_n = (n + ½)ℏω, and as E ≥ 0, we see that the ground state has energy E₀ = ℏω/2; this is known as the ground state energy, or zero-point energy. Solving for the wavefunctions themselves requires more mathematics, and gives solutions involving the Hermite polynomials.
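As a quick numerical cross-check of the eigenvalues derived above (my own addition, not part of the original post; the grid size and box length are arbitrary choices, and units are set so that ℏ = m = ω = 1), one can diagonalize the Hamiltonian on a finite-difference grid and compare the lowest eigenvalues with E_n = (n + ½)ℏω.

```python
import numpy as np

# Finite-difference check of E_n = n + 1/2 for H = p^2/2 + x^2/2 (hbar = m = omega = 1).
N, L = 1200, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 via the central second difference.
T = (np.diag(np.full(N, 1.0)) - 0.5 * np.diag(np.ones(N - 1), 1)
     - 0.5 * np.diag(np.ones(N - 1), -1)) / dx**2
V = np.diag(0.5 * x**2)

energies = np.linalg.eigvalsh(T + V)
print(np.round(energies[:5], 4))   # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```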
Entanglement is a mysterious quantum phenomenon that is widely, but mistakenly, described as capable of transmitting information over vast distances faster than the speed of light. It has proved very popular with science writers, philosophers of science, and many scientists who hope to use the mystery to deny some of the basic concepts underlying quantum physics. Entanglement depends on two quantum properties that are simply impossible in "classical" physics. One is called nonlocality. The other is nonseparability. Each of these might be considered a mystery in its own right, but fortunately information physics (and the information interpretation of quantum mechanics) can explain them both, with no equations, in a way that should be understandable to the lay person. This may not be good news for the science writers and publishers who turn out so many titles each year claiming that quantum physics implies that there are multiple parallel universes, that the minds of physicists are manipulating "quantum reality," that there is nothing "really" there until we look at it, that we can travel backwards in time, that things can be in two places at the same time, that we can teleport material from one place to another, and of course that we can send signals faster than the speed of light.

Einstein's Discovery of Nonlocality and Nonseparability

Albert Einstein was the first to see the nonlocal character of quantum phenomena. He may have seen it as early as 1905, the same year he published his special theory of relativity. But it was perfectly clear to him 22 years later (ten years after his general theory of relativity and his explanation of how quanta of light are emitted and absorbed by atoms), when he described it with a diagram on the blackboard at a conference of physicists from around the world in Belgium in 1927 at the fifth Solvay conference. In his contribution to the 1949 Schilpp memorial volume on Einstein, Niels Bohr provided a picture of what Einstein drew on the blackboard. 
A photon passes through a slit. The "nonlocal" effects at point B are just the probability of an electron being found at point B going to zero instantly (as if an action at a distance) the moment an electron is localized at point A.

Then in 1935, Einstein, Boris Podolsky, and Nathan Rosen proposed a thought experiment (known by their initials as EPR) to exhibit internal contradictions in the new quantum physics. Einstein hoped to show that quantum theory could not describe certain intuitive "elements of reality" and thus was either incomplete or, as he might have hoped, demonstrably incorrect. He and his colleagues Erwin Schrödinger, Max Planck, and others hoped for a return to deterministic physics, and the elimination of mysterious quantum phenomena like superposition of states and "collapse" of the wave function. EPR continues to fascinate determinist philosophers of science who hope to prove that quantum indeterminacy does not exist. Beyond the problem of nonlocality, the EPR thought experiment introduced the problem of "nonseparability." This mysterious phenomenon appears to transfer something physical faster than the speed of light. What happens actually is merely an instantaneous change in the immaterial information about probabilities or possibilities for locating a particle. The 1935 EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities. He imagined them starting at t0 some distance apart and approaching one another with high velocities. Then for a short time interval from t1 to t1 + Δt the particles are in contact with one another. Note that it was Einstein who discovered in 1924 the identical nature and indistinguishability of quantum particles. After the particles are measured at t1, quantum mechanics describes them with a single two-particle wave function that is not the product of independent particle wave functions. Because electrons are indistinguishable particles, it is not proper to say electron 1 goes this way and electron 2 that way. (Nevertheless, it is convenient to label the particles, as we do in illustrations below.) Until the next measurement, it is misleading to think that specific particles have distinguishable paths. Einstein said correctly that at a later time t2, a measurement of one electron's position would instantly establish the position of the other electron - without measuring it explicitly. Schrödinger described the two electrons as "entangled" (verschränkt) at their first measurement, so "nonlocal" phenomena are also known as "quantum entanglement." Note that Einstein used conservation of linear momentum to calculate the position of the second electron. Although conservation laws are rarely cited as the explanation, they are the physical reason that entangled particles always produce correlated results. If the results were not always correlated, the implied violation of a fundamental conservation law would be a much bigger story than entanglement itself, as interesting as that is. This idea of something measured in one place "influencing" measurements far away challenged what Einstein thought of as "local reality." It came to be known as "nonlocality." Einstein called it "spukhaft Fernwirkung" or "spooky action at a distance." We prefer to describe this phenomenon as "knowledge at a distance." No action has been performed on the distant particle simply because we learn about its position. 
Note that this assumes the distant particle has not had any interaction with the environment. Einstein had objected to nonlocal phenomena as early as the Solvay Conference of 1927, when he criticized the collapse of the wave function as "instantaneous-action-at-a-distance" that prevents the wave from acting at more than one place on the screen. Einstein's concern was based on the idea that the wave might contain some kind of ponderable energy. At that time Schrödinger thought it might be distributed electricity. In these cases the instantaneous "collapse" of the wave function might violate Einstein's principle of relativity, a concern he first expressed in 1909. When we recognize that the wave function is only pure information about the probability of finding the electron somewhere, we see that there is no matter or energy travelling faster than the speed of light. Einstein's criticism somewhat resembles the criticisms by Descartes and others about Newton's theory of gravitation. Newton's opponents charged that his theory was "action at a distance" and instantaneous. Einstein's own theory of general relativity shows that gravitational influences travel at the speed of light and are mediated by a gravitational field that shows up as curved space-time. Note that when a probability function collapses to unity in one place and zero elsewhere, nothing physical is moving from one place to the other. When the nose of one horse crosses the finish line, its probability of winning goes to certainty, and the finite probabilities of the other horses, including the one in the rear, instantaneously drop to zero. This happens faster than the speed of light, since the last horse is in a "space-like" separation. In 1964, John Bell showed how the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) could be made into real physical experiments. Bell put limits on the "hidden variables" that might restore a deterministic physics in the form of what he called an inequality, the violation of which would confirm standard quantum mechanics. Since Bell's work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. The first practical and workable experiments to test the EPR paradox were suggested by David Bohm. Instead of only linear momentum conservation, Bohm proposed using two electrons that are prepared in an initial state of known total spin. If one electron spin is 1/2 in the up direction and the other is spin down or -1/2, the total spin is zero. The underlying physical law of importance is a second conservation law, in this case the conservation of angular momentum. If electron 1 is prepared with spin down and electron 2 with spin up, the total angular momentum is also zero. This is called the singlet state. Quantum theory describes the two electrons as in a superposition of spin up ( + ) and spin down ( - ) states. The principles of quantum mechanics say that the prepared system is in a linear combination (or superposition) of these two states, and can provide only the probabilities of finding the entangled system in either the + - state or the - + state. Quantum mechanics does not describe the paths or the spins of the individual particles. Note that should measurements result in a + + or - - state, that would violate the conservation of angular momentum. EPR tests can be done more easily with polarized photons than with electrons, which require complex magnetic fields. 
The first of these was done in 1972 by Stuart Freedman and John Clauser at UC Berkeley. They used oppositely polarized photons (one with spin = +1, the other spin = -1) coming from a central source. Again, the total photon spin of zero is conserved. Their data, in agreement with quantum mechanics, violated Bell's inequalities to high statistical accuracy, thus providing strong evidence against local hidden-variable theories. For more on superposition of states and the physics of photons, see the Dirac 3-polarizers experiment. John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH) and later Alain Aspect did more sophisticated tests. The outputs of the polarization analyzers were fed to a coincidence detector that records the instantaneous measurements, described as + -, - +, + +, and - - . The first two ( + - and - + ) conserve the spin angular momentum and are the only types ever observed in these nonlocality/entanglement tests. With the exception of some of Holt's early results that were found to be erroneous, no evidence has so far been found of any failure of standard quantum mechanics. And as experimental accuracy has improved by orders of magnitude, quantum physics has correspondingly been confirmed to one part in 10¹⁸, and the speed of the probability information transfer between particles has a lower limit of 10⁶ times the speed of light. There has been no evidence for local "hidden variables." Nevertheless, experimenters continue to look for possible "loopholes" in the experimental results, such as detector inefficiencies that might be hiding results favorable to Einstein's picture of "local reality." Nicolas Gisin and his colleagues have extended the polarized photon tests of EPR and the Bell inequalities to a separation of 18 kilometers near Geneva. They continue to find 100% correlation and no evidence of the "hidden variables" sought after by Einstein and David Bohm. An interesting use of the special theory of relativity was proposed by Gisin's colleagues, Antoine Suarez and Valerio Scarani. They use the idea of hyperplanes of simultaneity. Back in the 1960's, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering the experiments and observers A and B in the above diagram to be moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. He called it the Andromeda Paradox. Suarez and Scarani showed that for some relative speeds between the two observers A and B, observer A could "see" the measurement of observer B to be in his future, and vice versa. Because the two experiments have a "spacelike" separation (neither is inside the causal light cone of the other), each observer thinks he does his own measurement before the other. Gisin tested the limits on this effect by moving mirrors in the path to the birefringent crystals and showed that, like all other Bell experiments, the "before-before" suggestion of Suarez and Scarani did nothing to invalidate quantum mechanics. These experiments were able to put a lower limit on the speed with which the information about probabilities collapses, estimating it as at least thousands - perhaps millions - of times the speed of light and showed empirically that probability collapses are essentially instantaneous. 
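To see the kind of arithmetic these experiments probe, here is a small sketch (my own illustration, not from this page; the analyzer angles are the standard textbook CHSH choices). It computes the singlet-state correlation E(a, b) directly from the two-particle state and evaluates the CHSH combination, which exceeds the bound of 2 that any local hidden-variable model must satisfy.

```python
import numpy as np

# CHSH with the spin singlet |psi> = (|+ -> - |- +>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(theta):
    """Spin component along an axis at angle theta in the x-z plane (eigenvalues +1/-1)."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Basis ordering |++>, |+->, |-+>, |-->; first factor is observer A, second is B.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| (a.sigma x b.sigma) |psi>; equals -cos(a - b) for the singlet."""
    op = np.kron(spin_along(a), spin_along(b))
    return np.real(singlet.conj() @ op @ singlet)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 3))   # 2.828 = 2*sqrt(2) > 2, the local hidden-variable bound
```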
Despite all his experimental tests verifying quantum physics, including the "reality" of nonlocality and entanglement, Gisin continues to explore the EPR paradox, considering the possibility that signals are coming to the entangled particles from "outside space-time."

How Information Physics Explains Nonlocality, Nonseparability, and Entanglement

Information physics starts with the fact that measurements bring new stable information into existence. In EPR the information in the prepared state of the two particles includes the fact that the total linear momentum and the total angular momentum are zero. New information requires an irreversible process that also increases the entropy more than enough to compensate for the information increase, to satisfy the second law of thermodynamics. It is this moment of irreversibility and the creation of new observable information that is the "cut" or "Schnitt" described by Werner Heisenberg and John von Neumann in the famous problem of measurement. Note that the new observable information does not require a "conscious observer" as Eugene Wigner and some other scientists thought. The information is ontological (really in the world) and not merely epistemic (in the mind). Without new information, there would be nothing for the observers to observe.

Initially Prepared Information Plus Conservation Laws

Conservation laws are the consequence of extremely deep properties of nature that arise from simple considerations of symmetry. We regard these laws as "cosmological principles." Physical laws do not depend on the absolute place and time of experiments, nor their particular direction in space. Conservation of linear momentum depends on the translation invariance of physical systems, conservation of energy the independence of time, and conservation of angular momentum the invariance under rotations. Recall that the EPR experiment starts with two electrons (or photons) prepared in an entangled state that is a mixture of pure two-particle states, each of which conserves the total angular momentum and, of course, conserves the linear momentum as in Einstein's original EPR example. This information about the linear and angular momenta is established by the initial state preparation (a measurement). Quantum mechanics describes the probability amplitude wave function ψ of the two-particle system as in a superposition of two-particle states. It is not a product of single-particle states, and there is no information about the identical indistinguishable electrons traveling along distinguishable paths.

| ψ > = (1/√2) | + - > + (1/√2) | - + >         (1)

The probability amplitude wave function ψ travels from the source (at the speed of light or less). Let's assume that at t0 observer A finds an electron (e1) with spin up. After the first measurement, new information comes into existence telling us that the wave function ψ has "collapsed" into the state | + - >. Just as in the two-slit experiment, probabilities have now become certainties. If the first measurement finds electron 1 is spin up, then the entangled electron 2 must be spin down to conserve angular momentum. And conservation of linear momentum tells us that at t0 the second electron is equidistant from the source in the opposite direction. As with any wave-function collapse, the probability amplitude information travels instantly. 
Unlike the two-slit experiment, where the collapse goes to a specific point in 3-dimensional configuration space, the "collapse" here is a "jump" or "projection" into one of the two possible 6-dimensional two-particle quantum states | + - > or | - + >. This makes "visualization" (Schrödinger's Anschaulichkeit) difficult or impossible, but the parallel with the collapse in the two-slit case provides an intuitive insight of sorts. If the measurement finds an electron (call it electron 1) as spin-up, then at that moment of new information creation, the two-particle wave function collapses to the state | + - > and electron 2 "jumps" into a spin-down state with probability unity (certainty). The result of observer B's measurement at a later time t1 is therefore determined to be spin down. Notice that Einstein's intuition that the result seems already "determined" or "fixed" before the second measurement is in fact correct. The result is determined by the law of conservation of momentum. But as with the distinction between determinism and pre-determinism in the free-will debates, the measurement by observer B was not pre-determined before observer A's measurement. It was simply determined by her measurement.

Why do so few accounts of entanglement mention conservation laws?

Although Einstein mentioned conservation in the original EPR paper, it is noticeably absent from later work. A prominent exception is Eugene Wigner, writing on the problem of measurement in 1963:

If a measurement of the momentum of one of the particles is carried out — the possibility of this is never questioned — and gives the result p, the state vector of the other particle suddenly becomes a (slightly damped) plane wave with the momentum -p. This statement is synonymous with the statement that a measurement of the momentum of the second particle would give the result -p, as follows from the conservation law for linear momentum. The same conclusion can be arrived at also by a formal calculation of the possible results of a joint measurement of the momenta of the two particles. One can go even further: instead of measuring the linear momentum of one particle, one can measure its angular momentum about a fixed axis. If this measurement yields the value mℏ, the state vector of the other particle suddenly becomes a cylindrical wave for which the same component of the angular momentum is -mℏ. This statement is again synonymous with the statement that a measurement of the said component of the angular momentum of the second particle certainly would give the value -mℏ. This can be inferred again from the conservation law of the angular momentum (which is zero for the two particles together) or by means of a formal analysis. Hence, a "contraction of the wave packet" took place again. It is also clear that it would be wrong, in the preceding example, to say that even before any measurement, the state was a mixture of plane waves of the two particles, traveling in opposite directions. For no such pair of plane waves would one expect the angular momenta to show the correlation just described. This is natural since plane waves are not cylindrical waves, or since [the state vector has] properties different from those of any mixture. The statistical correlations which are clearly postulated by quantum mechanics (and which can be shown also experimentally, for instance in the Bothe-Geiger experiment) demand in certain cases a "reduction of the state vector." 
The only possible question which can yet be asked is whether such a reduction must be postulated also when a measurement with a macroscopic apparatus is carried out. [Considerations] show that even this is true if the validity of quantum mechanics is admitted for all systems.

Visualizing Entanglement and Nonlocality

Schrödinger said that his "Wave Mechanics" provided more "visualizability" (Anschaulichkeit) than the Copenhagen school and its "damned quantum jumps" as he called them. He was right. But we must focus on the probability amplitude wave function of the prepared two-particle state, and not attempt to describe the paths or locations of independent particles - at least until after some measurement has been made. We must also keep in mind the conservation laws that Einstein used to discover nonlocal behavior in the first place. Then we can see that the "mystery" of nonlocality is primarily the same mystery as the single-particle collapse of the wave function. As Richard Feynman said, there is only one mystery in quantum mechanics (the collapse of probability and the consequent statistical outcomes). In his 1935 paper, Schrödinger described the two particles in EPR as "entangled" in English, and verschränkt in German, which means something like cross-linked. It describes someone standing with arms crossed. In the time evolution of an entangled two-particle state according to the Schrödinger equation, we can visualize it - as we visualize the single-particle wave function - as collapsing when a measurement is made. The discontinuous "jump" is also described as the "reduction of the wave packet." This is apt in the two-particle case, where the superposition of | + - > and | - + > states is "projected" or "reduced" to one of these states, and then further reduced to the product of independent one-particle states | + > and | - >. In the two-particle case (instead of just one particle making an appearance), when either particle is measured we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other properties such as spin. Compare the collapse of the two-particle probability amplitude above to the single-particle collapse here. We can enhance our visualization of what might be happening between the time two entangled electrons are emitted with opposite spins and the time one or both electrons are detected. Quantum mechanics describes the state of the two electrons as in a linear combination of | + - > and | - + > states. We can visualize the electron moving left to be both spin up | + > and spin down | - >. And the electron moving right would be both spin down | - > and spin up | + >. We could require that when the left electron is spin up | + >, the right electron must be spin down | - >, so that total spin is always conserved. Consider this possible animation of the experiment, which illustrates the assumption that each electron is in a linear combination of up and down spin. It imitates the superposition (or linear combination) with up and down arrows on each electron oscillating quickly. Notice that if you move the animation frame by frame by dragging the dot in the timeline, you will see that total spin = 0 is conserved. When one electron is spin up the other is always spin down. Since quantum mechanics says we cannot know the spin until it is measured, our best estimate is a 50/50 probability between up and down. 
This is the same as assuming Schrödinger's Cat is 50/50 alive and dead. But what this means of course is simply that if we do a large number of identical experiments, the statistics for live and dead cats will be approximately 50/50. We never observe/measure a cat that is both dead and alive! As Einstein noted, QM tells us nothing about individual cats. Quantum mechanics is incomplete in this respect. He is correct, although Bohr and Heisenberg insisted QM is complete, because we cannot know more before we measure, and reality is created (they say) when we do measure. Despite accepting that a particular value of an "observable" can only be known by a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position before we measure it. His answer was yes. Here is an animation that illustrates the assumption that the two electrons are randomly produced in a spin-up and a spin-down state, and that they remain in those states no matter how far they separate, provided neither interacts until the measurement. Any interaction does what is described as decohering the two states.

How Mysterious Is Entanglement?

Some commentators say that nonlocality and entanglement are a "second revolution" in quantum mechanics, "the greatest mystery in physics," or "science's strangest phenomenon," and that quantum physics has been "reborn." They usually quote Erwin Schrödinger as saying "I consider [entanglement] not as one, but as the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Schrödinger knew that his two-particle wave function could not have the same simple interpretation as the single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement exhibits a richer form of the "action-at-a-distance" and nonlocality that Einstein had already identified in the collapse of the single particle wave function. But the main difference is that two particles acquire new properties instead of one, and they do it instantaneously (at faster than light speeds), just as in the case of a single-particle measurement. Nonlocality and entanglement are thus just another manifestation of Richard Feynman's "only" mystery. In both single-particle and two-particle cases paradoxes appear only when we attempt to describe independent particles following a path to measurement by observer A (and/or observer B).

EPR "Loopholes" and Free Will

Investigators who try to recover the "elements of local reality" that Einstein wanted, and who hope to eliminate the irreducible randomness of quantum mechanics that follows from wave functions as probability amplitudes, often cite "loopholes" in EPR experiments. For example, the "detection loophole" claims that the efficiency of detectors is so low that they are missing many events that might prove Einstein was right. Almost all the loopholes have now been closed, but there is one loophole that can never be closed because of its metaphysical/philosophical nature. That is the "(pre-)determinism loophole." If every event occurs for reasons that were established at the beginning of the universe, then all the careful experimental results are meaningless. John Conway and Simon Kochen have formalized this loophole in what they call the Free Will Theorem. What Conway and Kochen are really describing is the indeterminism that quantum mechanics has introduced into the world. 
Although indeterminism is a requirement for human freedom, it is insufficient by itself to provide both "free" and "will". Philosophers and scientists (Roger Penrose, for example) have speculated that nonlocality might be involved in consciousness.
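As a footnote to the statistics discussed earlier on this page (a toy illustration of my own, not from the article), simulating many runs of the singlet experiment reproduces exactly the pattern described: each observer alone sees a 50/50 mix of up and down, while the joint results are always anticorrelated, as conservation of angular momentum requires.

```python
import numpy as np

rng = np.random.default_rng(0)
runs = 100_000

# Each run: the Born rule gives observer A a 50/50 chance of finding spin up;
# conservation of angular momentum then fixes observer B's result (same axis) to be opposite.
a_results = rng.choice([+1, -1], size=runs)
b_results = -a_results

print("A up fraction:", (a_results == +1).mean())                   # ~0.5
print("B up fraction:", (b_results == +1).mean())                   # ~0.5
print("anticorrelated fraction:", (a_results != b_results).mean())  # 1.0
```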
Second Quantization in Quantum Physics Means Something Different than What it is Thought to Mean
San José State University, Thayer Watkins

Second Quantization is a body of physical analysis in quantum field theory that focuses on the occupation numbers of states rather than the physical states of particular particles. It talks of operations which create and which annihilate a particle in a field. When this is applied to the quantum analysis of a harmonic oscillator it is seen that what the term particle refers to in second quantization is entirely different from what particle refers to in other contexts. Here is what the probability density function looks like for a particular level of energy. The number of peaks of the probability density function is 2n+1, where n is the principal quantum number. What an increase in n adds to the solution is roughly depicted below. It is n which the creation and annihilation operators change. The details will be given later. First the mathematical background will be given.

Mathematical Background

Let V be a vector space of complex elements. A function that maps V onto V is called a transformation. A linear transformation is called an operator. Operators can be added and subtracted. The concatenation of two operators is defined and will be referred to as the multiplication of operators. This multiplication of operators is associative, (AB)C=A(BC), but not commutative, AB≠BA. The complex conjugate of an element u of V is denoted as u*. If an operator A maps v to u then the operator that maps v to u* is denoted as A* and is called the adjoint of A. If V is finite dimensional then an operator is just a matrix. When V consists of functions it is infinite dimensional. Let A and B be operators on V. The commutator [A, B] is defined as

[A, B] = AB − BA

This leads to

AB = BA + [A, B]
BA = AB + [B, A] = AB − [A, B]

Note that the commutator is anti-symmetric, [A, B] = −[B, A].

Some General Relationships for Operators on a Function Space

Let A, B and C be operators. Then for the commutator [AB, C]

[AB, C] = A[B, C] + [A, C]B

To see this, note that

[AB, C] = (AB)C − C(AB)

and from the associativity of operator applications

[AB, C] = A(BC) − (CA)B

and from subtracting and adding A(CB) on the right

[AB, C] = A(BC) − A(CB) + A(CB) − (CA)B

and hence

[AB, C] = A[B, C] + (AC)B − (CA)B

and finally

[AB, C] = A[B, C] + [A, C]B

Now let A be any operator and A* its adjoint (conjugate operation). Applying the above identity to [A*A, A] gives

[A*A, A] = A*[A, A] + [A*, A]A

but [A, A]=0, so

[A*A, A] = [A*, A]A

and since commutation is antisymmetric

[A*A, A] = −[A, A*]A
[A*A, A*] = A*[A, A*] + [A*, A*]A = A*[A, A*]

Canonical Quantization

Canonical quantization is defined as

[A, A*] = I

where I is the identity operator. If A satisfies the canonical quantization condition then the above two relationships reduce to

[A*A, A] = −A
[A*A, A*] = A*

These can also be expressed as

(A*A)A = A(A*A)−A = A(A*A−I)
(A*A)A* = A*(A*A)+A* = A*(A*A+I)

An Eigenvector for an Operator and its Eigenvalue

For an operator A, any element v of V such that Av = αv is called an eigenvector of A and α is its eigenvalue.

The Notation of P.A.M. Dirac

Dirac had a brilliant idea for labeling vectors. Instead of using an arbitrary letter to denote a vector, he suggested that it should be labeled by its eigenvalue. Thus if v is an eigenvector of A with the eigenvalue β then v is denoted as |β>. 
The symbology |..> is part of what is known as Dirac's bra-ket notation. Let an eigenvalue of A*A be denoted as α. Expressed in the Dirac notation this means

A*A|α> = α|α>

Not only is |α> an eigenfunction of A*A, but A|α> is also an eigenfunction of A*A because

(A*A)A|α> = A(A*A−I)|α> = A{α|α> − |α>} = A(α−1)|α>
(A*A)(A|α>) = (α−1)(A|α>)

So (A|α>) is also an eigenfunction of (A*A) but with an eigenvalue that is 1 less than that of |α>. A reapplication of the above would show that A^n|α> for n over some range is an eigenfunction of A*A with (α−n) as its eigenvalue. Therefore the eigenfunction A^n|α> is denoted as |(α−n)>. Similarly A*|α> is an eigenfunction of A*A with an eigenvalue of (α+1) and therefore (A*)^n|α> is an eigenfunction of A*A with an eigenvalue of (α+n). It is represented as |(α+n)>.

Relationships between Eigenfunctions

Let |α> be an eigenfunction of A*A. Then A|α> and A*|α> are also eigenfunctions of A*A. Thus A|α> is proportional to |(α−1)> and A*|α> is proportional to |(α+1)>, and, as shown above, A*A(A|α>) = (α−1)(A|α>).

The Integralness of the Eigenvalues of A*A

The eigenfunction of an operator cannot be the zero function. Therefore there must be an integer m such that A^m|α> is an eigenfunction of A*A but A^(m+1)|α> is not. This implies that (α−m) is equal to 0 and hence α is equal to that integer m. Therefore the eigenvalues of A*A are necessarily integers, from 0 up to that maximum integer m. What we found above is that the assumption of canonical quantization for the commutator is sufficient to assure the existence of the creation, annihilation and number operators without any reference to the physics of the particles.

A Physical System

A harmonic oscillator is a mass m attached to a spring of stiffness coefficient k. The deviation from equilibrium is denoted as x. The total energy E is

E = ½mv² + ½kx²

where v is velocity (dx/dt). The system oscillates sinusoidally at a frequency ω equal to (k/m)^½. This means that the total energy may be expressed as

E = ½mv² + ½mω²x²

When this is expressed in terms of momentum p=mv the result is called the Hamiltonian function for the system; i.e.,

H = ½p²/m + ½kx²

Quantum Analysis

For the quantum theoretic analysis of the harmonic oscillator p and x in the Hamiltonian function must be replaced by their operator representations. The operator representation of the deviation x is very simple; it is just multiplication by x. The momentum operator is

p^ = −ih(∂/∂x)

where i is the imaginary unit, the square root of −1, and h is Planck's constant divided by 2π. Thus the Hamiltonian operator H^ for a harmonic oscillator is

H^ = ½(p^)²/m + ½mω²(x^)² = −(h²/(2m))(∂²/∂x²) + ½mω²x²

Let φ(x) denote the complex-valued function such that its squared value |φ(x)|² is the probability density function at x. It is called the wave function and its values are determined as a solution to the time-independent Schrödinger equation

H^φ(x) = Eφ(x)

where E is the total energy of the oscillator. This equation has solutions only for discrete values of E, namely E = (n+½)hω for a non-negative integer n. The integer n for the system is called its principal quantum number. This is the first quantization of a harmonic oscillator.

Second Quantization

Consider the commutators of p^ and x^:

[p^, x^]φ = p^(x^φ) − x^(p^φ) = (p^x)φ + x(p^φ) − x(p^φ) = (p^x)φ = −ihφ

so

[p^, x^] = −ih
[x^, p^] = ih

Now consider the two operators

α = γ(x^ + βp^)
α* = γ(x^ − βp^)

where β=i/(mω) and γ=(mω/(2h))^½. 
Now consider [α, α*]:

[α, α*] = [γ(x^ + βp^), γ(x^ − βp^)] = γ²[(x^ + βp^), (x^ − βp^)]

and

[(x^ + βp^), (x^ − βp^)] = [x^, (x^ − βp^)] + [βp^, (x^ − βp^)] = [x^, x^] − β[x^, p^] + β[p^, x^] − β²[p^, p^]

But [x^, x^]=0, [p^, p^]=0 and [p^, x^]=−[x^, p^] so

[α, α*] = γ²(−2β)[x^, p^] = γ²(−2β)ih

Replacing β and γ by their defined values gives

[α, α*] = (mω/(2h))(−2i/(mω))(ih) = 1^

where 1^ is just the identity operation. This means that α satisfies the canonical quantization condition and therefore α is an annihilation operator, α* is a creation operator and α*α is a number (counting) operator. That is to say, according to the conventional presentation of second quantization, α* operating on a field increases the number of particles by one, α decreases the number of particles in a field by one and α*α counts the number of particles in a field. But what does the probability density function look like for a harmonic oscillator? Here is the solution for principal quantum number n equal to 30. The number of peaks of the probability density function is 2n+1. What an increase in n adds to the solution is roughly depicted below. The standard second quantization is, in effect, calling a pair of peaks of the probability density function a particle even though this does not correspond to a particle in the usual sense of the term. A photon, as a perturbation in an electromagnetic field, would fit the notion of particle as this term is used in second quantization. However, in general, the use of the term particle in second quantization analysis is misleading, very misleading. Peaks in probability density correspond to states which a particle passes through in its periodic path. When a physical system is analyzed the eigenvalues correspond to energy quanta which may or may not have any correspondence to particles in the usual sense of that term. For example, consider a harmonic oscillator. Its energy is determined by an integer n, called its principal quantum number. The number of peaks of the probability density function is 2n+1.
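A quick numerical check of the algebra above (my own sketch, not part of the page; the truncation size is an arbitrary assumption): building the lowering operator α as a truncated matrix in the number basis confirms that [α, α*] acts as the identity away from the truncation edge and that α*α has the integer spectrum 0, 1, 2, ... used in the argument.

```python
import numpy as np

# Truncated matrix for the lowering operator in the number basis:
# alpha |n> = sqrt(n) |n-1>, i.e. alpha[n-1, n] = sqrt(n).
N = 40
alpha = np.diag(np.sqrt(np.arange(1, N)), k=1)
alpha_dag = alpha.T                       # alpha*: raises, alpha*|n> = sqrt(n+1)|n+1>

commutator = alpha @ alpha_dag - alpha_dag @ alpha
number_op = alpha_dag @ alpha

# The commutator equals the identity except in the last row/column (a truncation artifact).
print(np.allclose(commutator[:-1, :-1], np.eye(N - 1)))   # True
print(np.round(np.diag(number_op)[:6]))                   # [0. 1. 2. 3. 4. 5.]
```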
Interpreting the Quantum World II: What Does It Mean?

In the first installment of this series, we immersed ourselves in the quantum realm that lies beneath our everyday experience and discovered a universe that bears little resemblance to it. Instead of the solid, unambiguously well-behaved objects we're familiar with, we encountered a unitary framework (\hat U) in which everything (including our own bodies!) is ultimately made of ethereal "waves of probability" wandering through immense configuration spaces along paths deterministically guided by well-formed differential equations and boundary conditions, and acquiring the properties we find in them as they rattle through a random pinball machine of collisions with "measurement" events (\hat M). This is all very elegant—even beautiful… but what does it mean? When my fiancée falls asleep in my arms, her tender touch, the warmth of her breath on my neck, and the fragrance of her hair hardly seem like mere probabilities being kicked around by dice-playing measurements. The refreshing drink of sparkling citrus water I just took doesn't taste like one either. What is it that gives fire to this ethereal quantum realm? How does the Lord God breathe life into our probabilistic dust and bring about the classical universe of our daily lives (Gen. 2:7)? We finished by distilling our search for answers down to three fundamental dilemmas:

2)  What really happens when a deterministic, well-behaved \hat U evolution of the universe runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?

3)  If counterfactual definiteness is an ill-formed concept, why are we always left with only one experienced outcome? Why don't we experience entangled realities?

Physicists, philosophers, and theologians have been tearing their hair out over these questions for almost a century, and numerous interpretations have been suggested (more than you might imagine!). Most attempt to deal with 2), and from there, back out answers to 1) and 3). All deserve their own series of posts, so let me apologize in advance for only having time to do a fly-by of the more important ones here. In what follows I'll give an overview of the most viable, and well-received interpretations to date, and finish with my own take on all of it. So, without further ado, here are our final contestants…

Copenhagen

This is the traditionally accepted answer given by the founding fathers of QM. According to Copenhagen, the cutting edge of reality is in \hat M. The world we exist in is contained entirely in our observations. Per the Born Rule, these are irreducibly probabilistic and non-local, and result in classically describable measurements. The wave function and its unitary history \hat U are mere mathematical artifices we use to describe the conditions under which such observations are made, and have no ontic reality of their own. In this sense, Copenhagen has been called a subjective, or epistemic interpretation because it makes our observations the measure of all things (pun intended :-) ). Although few physicists and philosophers would agree, some of the more radical takes on it have gone as far as to suggest that consciousness is the ultimate source of the reality we observe. Even so, few Copenhagen advocates believe the world doesn't exist apart from us. The tree that falls in the woods does exist whether we're there to see and hear it or not. 
What they would argue is that counterfactuals regarding the tree's properties and those of whatever caused it to fall don't instantiate if we don't observe them. Several objections to Copenhagen have been raised. The idea that ontic reality resides entirely in non-local, phenomenologically discrete "collapse" events that are immune to further unpacking is unsatisfying. Science is supposed to explain things, not explain them away. It's also difficult to see how irreducibly random \hat M events could be prepared by a rational, deterministic \hat U evolution if the wave function has no ontic existence of its own. To many physicists, philosophers, and theologians, this is less a statement about the nature of reality than the universe's way of telling us that we haven't turned over enough stones yet, and may not even be on the right path. For their part, Copenhagen advocates rightly point out that this is precisely what our experiments tell us—no more, no less. If the formalism correctly predicts experimental outcomes, they say, metaphysical questions like these are beside the point, if not flat-out ill-formed, and our physics and philosophy should be strictly instrumentalist—a stance for which physicist David Mermin coined the phrase "shut up and calculate".

Many Worlds

One response to Copenhagen is that if \hat U seems to be as rational and deterministic as the very real classical physics of our experience, perhaps that's because it is. But that raises another set of questions. As we've seen, nothing about \hat U allows us to grant special status to any of the eigenstates associated with observable operators. If not, then we're left with no reason other than statistical probability to consider any one outcome of an \hat M event to be any more privileged than another. Counterfactuals to what we don't observe should have the same ontic status as those we do. If so, then why do our experiments seem to result in discrete irreducibly random and non-local "collapse" events with only one outcome? According to the Many Worlds (MWI) interpretation, they don't. The universe is comprised of one ontically real, and deterministic wave function described by \hat U that's local (in the sense of being free of "spooky-action-at-a-distance") and there's no need for hidden variables to explain \hat M events. What we experience as wave function "collapse" is a result of various parts of this universal wave function separating from each other as they evolve. Entangled states within it will remain entangled while their superposed components remain in phase with each other. If/when they interact with some larger environment within it, they eventually lose their coherence with respect to each other and evolve to a state where they can be described by the wave functions of the individual states. When this happens, the entanglement has (for lack of a better term) "bled out" to a larger portion of the wave function containing the previous entanglement, and the environment it interacted with, and states are said to have decohered. Thus, the wave function of the universe never actually collapses anywhere—it just continues to decohere into the separate histories of previously entangled states that continue with their own \hat U histories, never interacting with each other again. 
As parts of the same universal wave function, all are equally real, and questions of counterfactual definiteness are ill-formed. The advantages of MWI speak for themselves. From a formal standpoint, a universe grounded on \hat U and decoherence that’s every bit as rational and well-behaved as the classical mechanics it replaced, certainly has advantages over one based on subjective hand grenade \hat M events. It deals nicely with the relativity-violating non-locality and irreducible indeterminacy that plague Copenhagen as well. And for reasons I won’t get into here, it also lends itself nicely to quantum field theory, and Feynmann path integral (“sum over histories”) methods that have proven to be very powerful. But its disadvantages speak just as loudly. For starters, it’s not at all clear that decoherence can fully account for what we directly experience as wave function collapse. Nor is it clear how MWI can make sense of the extremely well-established Born Rule. Does decoherence always lead to separate well-defined histories for every eigenstate associated with every observable that in one way or another participates in the evolution of \hat U? If not, then what meaning can be assigned to probabilities when some states decohere and others don’t. Even if it does, what reasons do we have for expecting that it should obey probabilistic constraints? And of course, we haven’t even gotten to the real elephant in the room yet—the fact that we’re also being asked to believe in the existence of an infinite number of entirely separate universes that we can neither observe, nor verify, even though the strict formalism of QM doesn’t require us to. Physics aside, for those of us who are theists this raises a veritable hornet’s nest of theological issues. As a Christian, what am I to make of the cross and God’s redemptive plan for us in a sandstorm of universes where literally everything happens somewhere to infinite copies of us all? It’s worth noting that some prominent Christian physicists like Don Page embrace MWI, and see in it God’s plan to ultimately gather all of us to Him via one history or another, so that eventually “every knee shall bow, and every tongue confess, and give praise to God (Rom. 14:11). While I understand where they’re coming from, and the belief that God will gather us all to Himself some day is certainly appealing, this strikes me as contrived and poised for Occam’s razor. In the end, despite its advantages, and with all due respect to Hawking and its other proponents, I don’t accept MWI because, to put it bluntly, it’s more than merely unnecessary—it’s bat-shit crazy. According to MWI there is, quite literally, a world out there somewhere in which I, Scott Church (peace be upon me), am a cross-dressing, goat worshipping, tantric massage therapist, with 12” Frederick’s of Hollywood stiletto heels (none of that uppity Victoria’s Secret stuff for me!), and D-cup breast implants… Folks, I am here to tell you… there isn’t enough vodka or LSD anywhere on this lush, verdant earth to make that believable! Whatever else may be said about this veil of tears we call Life, rest assured that indeterministic hand grenade \hat M events and “spooky action at a distance” are infinitely easier to take seriously. :D De Broglie–Bohm Bat-shit crazy aside, another approach would be to try separating \hat U and \hat M from each other completely. If they aren’t playing together at all, we don’t have to worry about whether they’ll share their toys. 
Without pressing that analogy too far, this is the basic idea behind the De Broglie-Bohm interpretation (DBB). According to DBB, particles do have definite locations and momentums, and these are subject to hidden variables. \hat U is real and deterministic, and per the Schrödinger equation governs the evolution of a guiding, or pilot wave function that exists separate from particles themselves. This wave function is non-local and does not collapse. For lack of a better word, particles “surf” on it, and \hat M events acting on them are governed by the local hidden variables. In our non-local singlet example from Part I, the two electrons were sent off with spin-state box lunches. All of this results in a formalism like that of classical thermodynamics, but with predictions that look much like the Copenhagen interpretation. In DBB the Born Rule is an added hypothesis rather than a consequence of the inherent wave nature of particles. There is no particle/wave duality issue of course because particles and the wave function remain separate, and Bell’s inequalities are accounted for by the non-locality of the latter. There’s a naturalness to DBB that resolves much of the “weirdness” that has plagued other interpretations of QM. But it hasn’t been well-received. The non-locality of its pilot wave \hat U still raises the whole “spooky action at a distance” issue that physicists and philosophers alike are fundamentally averse to. Separating \hat U from \hat M and duct-taping them together with hidden variables adds layers of complexity not present in other interpretations, and runs afoul of all the issues raised by the Kochen-Specker Theorem. We have to wonder whether our good friend Occam and his trusty razor shouldn’t be invited to this party. And like MWI, it’s brutally deterministic, and as such, subject to all the philosophical and theological nightmares that go along with that, not to mention our direct existential experience as freely choosing people. Even so, for a variety of reasons (including theories of a “sub-quantum realm” where hidden variables can also hide from Kochen-Specker) it’s enjoying a bit of a revival and does have its rightful place among the contenders. Consistent Histories As we’ve seen, the biggest challenge QM presents is getting \hat U and \hat M to play together nicely. Most interpretations try to achieve this by denying the ontological reality of one, and somehow rolling it up into the other. What if we denied the individual reality of both, and rolled them up into a larger ontic reality described by an expanded QM formalism? Loosely speaking, Consistent Histories (or Decoherent Histories) attempts to do this by generalizing Copenhagen to a quantum cosmology framework in which the universe evolves along the most internally consistent and probable histories available to it. Like Copenhagen, CH asserts that the wave function is just a mathematical construct that has no ontic reality of its own. Where it parts company is in its assertion that \hat U represents the wave function of the entire universe, and it never collapses. What we refer to as “collapse” occurs when some parts of it decohere with respect to larger parts leading, it is said, to macroscopically irreversible outcomes that are subject to the ordinary additive rules of classical probability. In CH, the potential outcomes of any observation (and thus, the possible histories the universe might follow) are classified by how homogeneous and consistent they are. 
This, it’s said, is what makes some of them more probable than others. A homogeneous history is one that can be described by a unique temporal sequence of single-outcome propositions, such as, “I woke up” > “I got out of bed” > “I showered” … Histories that include multi-outcome statements like “I walked to the grocery store or drove there” are not. These events can be represented by projection operators \hat P from which histories can be built, and the more internally consistent they are (per criteria contained in a class operator \hat C built from those projections), the more probable they are. Thus, in CH \hat M is not a fundamental QM concept. The evolution of the universe is described by a mathematical construct, \hat U, that can be interpreted as decohering into the most internally consistent (and therefore probable) homogeneous histories possible for it to follow. The paths these histories take give us a framework in which some sets of classical questions can be meaningfully asked, and others can’t.

Returning to our electron singlet example, CH advocates would maintain that the wave function wasn’t entangled in any real physical sense. Rather, there are two internally consistent histories for the prepared electrons that could have emerged from a spin measurement: Down/Up, and Up/Down. Down/Up/Up/Down (the superposition of the two) isn’t a meaningful history, so it’s meaningless to say that the universe was “in” it. Rather, when the entire state of us/laboratory/observation is accounted for, we will find that the universe followed the history that was most consistent for that. There is no need to discriminate between observer and observed. Decoherence is enough to account for the whole history, so \hat M is a superfluous construct.

CH advocates claim that it offers a cleaner and less paradoxical interpretation of QM and classical effects than its competitors, and a logical framework for discriminating boundaries between classical and quantum phenomena. But it too has its issues. It’s not at all clear that decoherence is as macroscopically irreversible as it’s claimed to be, or that by itself it can fully account for our experience of \hat M. It also requires additional projection and class operator constructs not required by other interpretations, and these cannot be formulated to any degree practical enough to yield a complete theory.

Objective Collapse Theories

Of course, we could just make our peace with \hat U and \hat M. Objective collapse, or quantum mechanical spontaneous localization (QMSL) models maintain that the universe reflects both because the wave function is ontologically real, and “measurements” (perhaps interactions is a better term here) really do collapse it. According to QMSL theories, the wave function is non-local, but collapses locally in a random manner (hence, the “spontaneous localization”), or when some physical threshold is crossed. Either way, observers play no special role in the collapse itself. There are several variations on this theme. The Ghirardi–Rimini–Weber theory, for instance, emphasizes random collapse of the wave function to highly probable stable states. Roger Penrose has proposed another theory based on energy thresholds. Particles have mass-energy that, per general relativity, will make tiny "dents" in the fabric of space-time.
According to Penrose, in the entangled states of their wave function these will superpose as well, and there will be an associated energy difference that entangled states can only sustain up to a critical threshold energy difference (which he theorizes to be on the order of one Planck mass). When they decohere to a point where this threshold is exceeded, the wave function collapses per the Born Rule in the usual manner (Penrose, 2016). For our purposes, this interpretation pretty much speaks for itself and so do its advantages. Its disadvantages lie chiefly in how we understand and formally handle the collapse itself. For instance, it’s not clear this can be done mathematically without violating conservation of energy or bringing new, as-yet undiscovered physics to the game. In the QMSL theories that have been presented to date, if energy is conserved the collapse doesn’t happen completely, and we end up with left-over “tails” in the final wave function state that are difficult to make sense of with respect to the Born Rule. It has also proven difficult to render the collapse compliant with special relativity without creating divergences in probability densities (in other words, blowing up the wave function). Various QMSL theories have handled issues like this in differing ways, some more successfully than others, and research in his area continues. But to date, none of the theories on the table offers a slam-dunk. The other problem QMSL theories face is a lack of experimental verification. Random collapse theories like Ghirardi–Rimini–Weber could be verified if the spontaneous collapse of a single particle could be detected. But these are thought to be extremely rare, and to date, none have been observed. However, several tests for QMSL theories have been proposed (e.g. Marshall et al., 2003; Pepper et al., 2012; or Weaver et al., 2016 to name a few), and with luck, we’ll know more about them in the next decade or so (Penrose, 2016). There are many other interpretations of QM, some of which are more far-fetched than others. But the ones we’ve covered today are arguably the most viable, and as such, the most researched. As we’ve seen, all have their strengths and weaknesses. Personally, I lean toward Objective Collapse scenarios. It’s hard to believe that something as well-constrained and mathematically coherent as \hat U isn’t ontologically real. Especially when the alternative bedrock reality being offered is \hat M, which is haphazard and difficult to separate from our own subjective consciousness (the latter in particular smacks of solipsism, which has never been a very compelling, or widely-accepted point of view). Of the competing alternatives that would agree about \hat U, MWI is probably the strongest contender. But for reasons that by now should be disturbingly clear, it’s far easier for me to accept a non-local wave function collapse than its take on \hat M. Call me unscientific if you will, but ivory towers alone will never be enough to convince me that I have a cross-dressing, goat-worshipping, voluptuous doppelganger somewhere that no one can ever observe. Other interpretations don’t fare much better. Most complicate matters unnecessarily and/or deal with the collapse in ways that render \hat M deterministic. It’s been said that if your only tool is a hammer, eventually everything is going to look like a nail. It seems to me that such interpretations are compelling to many because they’re tidy. Physicists and philosophers adore tidy! 
Simple, deterministic models with well-defined differential equations and boundary conditions give them a fulcrum point where they feel safe, and from which they think they can move the world. This is fine for what it’s worth of course. Few would dispute the successes our tidy, well-formed theories have given us. But if the history of science has taught us anything, it’s that nature isn’t as enamored with tidiness as we are. Virtually all our investigations of QM tell us that indeterminism cannot be fully exorcized from \hat M, and the term “collapse” fits it perfectly. Outside the laboratory, everything we know about the world tells us we are conscious beings made in the image of our Creator. We are self-aware, intentional, and capable of making free choices—none of which is consistent with tidy determinism. Anyone who disputes that is welcome to come up with a differential equation and a self-contained set of data and boundary conditions that required me to decide on a breakfast sandwich rather than oatmeal this morning… and then collect their Nobel and Templeton prizes and retire to the lecture circuit.

The bottom line is that we live in a universe that presents us with \hat U and \hat M. As far as I’m concerned, if the shoe fits I see no reason not to wear it. Yes, QMSL theories have their issues. But compared to other interpretations, their problems are formalistic ones of the sort I suspect will be dealt with when we’re closer to a viable theory of quantum gravity. When we as students are ready, our teacher will come. Until then, as Einstein once said, the world should be made as simple as possible, but no simpler.

When I was in graduate school my thesis advisor used to say that when people can’t agree on the answer to some question one of two things is always true: either there isn’t enough evidence to answer the question definitively, or we’re asking the wrong question. Perhaps many of our QM headaches have proven as stubborn as they are because we’re doing exactly that… asking the wrong questions. One possible case in point… physicists have traditionally considered \hat U to be sacrosanct—the one thing that, above all others, only the worst apostates would ever dare to question. Atheist physicist Sean Carroll has gone so far as to claim that it proves the universe is past-eternal, and God couldn’t have created it! [There are numerous problems with that of course, but they’re beyond the scope of this discussion.] However, Roger Penrose is now arguing that we need to do exactly that (fortunately, he’s respected enough in the physics community that he can get away with such challenges to orthodoxy without being dismissed as a crank or heretic). He suggests that if we started with the equivalence principle of general relativity instead, we could formulate a QMSL theory of \hat U and \hat M that would resolve many, if not most, QM paradoxes, and this is the basis for his gravitationally-based QMSL theory discussed above. Like its competitors, Penrose’s proposal has challenges of its own, not the least of which are the difficulties that have been encountered in producing a rigorous formulation of \hat M along these lines. But of everything I’ve seen so far, I find it to be particularly promising!

But then again, maybe the deepest secrets of the universe are beyond us. Isaac Newton once said,

“I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.”

As scientists, we press on, collecting our shiny pebbles and shells on the shore of the great ocean with humility and reverence as he did.
But it would be the height of hubris for us to presume that there’s no limit to how much of it we can wrap our minds around before we have any idea what’s beyond the horizon. As J. B. S. Haldane once said, "My own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose." (Haldane, 1928) Who knows? Perhaps he was right. God has chosen to reveal many of His thoughts to us. In His infinite grace, I imagine He’ll open our eyes to many more. But He certainly isn’t under any obligation to reveal them all, nor do we have any reason to presume that we could handle it if He did. But of course, only time will tell.

One final thing… Astute readers may have noticed one big elephant in the room that I’ve danced around, but not really addressed yet… relativity. Position, momentum, energy, and time have been a big part of our discussion today… and they’re all inertial frame dependent, and our formal treatment of \hat U and \hat M must account for that. There are versions of the Schrödinger equation that do this—most notably the Dirac and Klein–Gordon equations. Both however are semi-classical equations—that is, they dress up the traditional Schrödinger equation in a relativistic evening gown and matching handbag, but without an invitation to the relativity ball. For a ticket to the ball, we need to take QM to the next level… quantum field theory. But these are topics for another day, and I’ve rambled enough already… so once again, stay tuned!

Haldane, J. B. S. (1928). Possible Worlds: And Other Papers. Harper & Bros.
Marshall, W., Simon, C., Penrose, R., & Bouwmeester, D. (2003). Towards quantum superpositions of a mirror. Physical Review Letters, 91 (13).
Pepper, B., Ghobadi, R., Jeffrey, E., Simon, C., & Bouwmeester, D. (2012). Optomechanical superpositions via nested interferometry. Physical Review Letters, 109 (2).
Weaver, M. J., Pepper, B., Luna, F., Buters, F. M., Eerkens, H. J., Welker, G., ... & Bouwmeester, D. (2016). Nested trampoline resonators for optomechanics. Applied Physics Letters, 108 (3).

About Scott Church
I am a landscape photographer and I.T. professional in the greater Seattle area. I graduated from the University of Washington with a Bachelor's in Mechanical Engineering and a Masters in Applied Physics, and in a former life, I was an aerospace engineer. When I'm not writing or at work I can be found plying the waters of the Pacific Northwest for salmon, trout, and steelhead, or bushwhacking with my camera gear.
This entry was posted in Metaphysics, Physics.

24 Responses to Interpreting the Quantum World II: What Does It Mean?

1. Aron Wall says:
Just for the record, I think that probably all of these interpretations are false, although I have only fragmentary and foggy ideas about what should replace them. But I thought this rundown of common interpretations will be helpful, even if I don't agree that objective collapse is the way to go. And with that unendorsement, I'd like to thank you, Scott, for an interesting and provocative post!

2. kashyap vasavada says:
Nice summary of interpretations. But I am surprised that you are leaning towards ontological reality of psi and U. To me it seems that the day you accepted wave-particle duality, you gave up on reality. None of us has seen a table (real!)
which can be described as a particle or wave or both! 3. Scott Church says: Thanks Aron and Kashyap! Actually, Aron, I do agree with you... ultimately, I think they're all probably false as well. At the very least, I don't believe any of them are complete. I support QMSL theories over other interpretations only to the extent that my gut tells me they get a little bit closer to the truth, and their shortcomings strike me as the sort that might be dealt with by a fully developed quantum gravity theory... if, and when we ever come up with one. Beyond that, however, I'm not at all ready to sign up as a card-carrying member of that camp. Right now, if I had to put money down, I'd side with Haldane... in the end, I think we'll find that the universe is queerer than we can suppose. We should never stop searching for answers of course... we wouldn't be scientists if we did, nor would our worship be complete. But I learned a long time ago that there will always be some mystery in Creation, and a point where we'll just have to take off our sandals and recognize that we're standing on holy ground (Exodus 3:5). :-) 4. Mactoul says: I question the very notion of "a deterministic, well-behaved Û evolution of the universe". Wasn't QM formulated not for "the evolution of the universe"--a grandiose and scientifically dubious undertaking--but to interpret experiments on microscopic phenomena. The many problems and paradoxes mentioned are just created by unjustified extrapolation of the QM wavefunction of a specified microscopic system to the "entire universe"--whatever that means. Universe is defined as the totality of consistently interacting things. How do you ever know empirically that you have got the totality? The concept "universe" is thus non-empirical and thus can not be used in physics. Presumably the cosmologists are responsible for this notion of "wavefunction of the universe". Let them be! But an introduction to QM and its interpretations should avoid the dubious notions and stick with what is experimentally tried and true--which none of the cosmologists' notions are! 5. Valentin says: It would be very interesting to me which implications would have any of these interpretations of Quantum World for Hawking-Hartle's model and for Vilekin's model of the apparition of the universe. 6. TY says: Scott and Aron, You did touch on the notion of the “observer” in measurement and I would like to know Aron's thoughts and yours on whether there is a problem in QM from the very outset and that is assumes the "observer" is outside the mathematical formulations or descriptions of the physical world. But from what I’ve learned from the literature, even a physical device (like a Geiger counter) that is supposed to be taking the “measurements” becomes part of the system. So what is being measured? All that aside and the many paradoxes, QM works! so I won't say just because of the unexplained or explainable puzzles that QM is invalid. As your review suggests, there is no better theory or hypothesis. Good discussion. 7. Scott Church says: @Valentin, the Hartle-Hawking model is one solution of the Wheeler-DeWitt equation, which essentially, is a semiclassical boundary constraint on the wave function of the universe. Inasmuch as various interpretations of QM differ as to whether such a universal wave function exists, it would only be consistent with those that allow for one. Of the interpretations I discussed here, that would include de Broglie–Bohm, MWI, and Consistent Histories, but not Copenhagen or QMSL theories. 
Wikipedia has a good tabular comparison of these, and many other interpretations regarding a universal wave function, as well as the other points I covered. @Ty, I guess it depends on what we mean by "problem". As we saw, these, and other interpretations differ on the role of "observers" and "measurements" in QM, and to the extent that we're left with a lot of underlying mysteries about what both mean, yes. But that said, when Aron and I say that all of these interpretations are probably wrong, we are not talking about QM itself. As I commented in my first post, QM is arguably the most successful scientific theory in history. The one thing no one of consequence disputes is that as-is, it tells us a great deal about the universe. What's problematic is how we interpret that message ontologically and epistemologically, and whether any deeper, currently "hidden" layers of physics are involved. In any event, it's safe to say that whatever the real meaning of it all is, and however the latter question is answered, QM is showing us some of God's thoughts. If/when it is supplanted by a viable quantum gravity or so-called "everything" theory, you can rest assured that theory will reduce to QM in low-energy limits, the way general relativity reduces to classical mechanics. In their respective realms, none of these is any less valid than they've always been. :-) 8. Tom Rudelius says: Nice post, Scott. Two comments and a question: I think it's worth emphasizing that there is no such thing as "\emph{the} wavefunction" or "*the* observer" in the Copenhagen interpretation--the wavefunction is defined relative to an observer, and the question of what you want to count as observer vs. system is subjective. Of course, this leaves open the problem of defining ontic reality, but at least it solves other issues e.g. the Wigner's friend paradox. I disagree somewhat with the claim "Science is supposed to explain things, not explain them away." A common tactic in high-energy physics is to show that if a problem can't be solved, it can at least be put into a strong coupling regime where we don't expect to find a solution (see e.g. the last sentence of Witten's seminal '95 paper.) In the present case, it's unclear to me that science should be able to do more than to explain outcomes of experiments. Finally, a question: how would the MWI proponent explain black hole complementarity? The complementarity solution to the information paradox relies on the idea that no observer can simultaneously observe both copies of the information. But if MWI gives "observers" no special status, how would its proponents justify this solution? Or would they argue that complementarity is not the right answer to the information paradox? 9. Scott Church says: Thanks @Tom! I would agree with both of your comments. When discussing Copenhagen, my reference to "the" wave function and observer was just a choice of words when I was addressing them and wasn't meant to imply that per that interpretation they could be objectively rather than subjectively defined. In fact, the very subjectivity of Copenhagen is a big part of why many find it unsatisfying. And yes, in physics we often cannot do better than explaining the outcomes of experiments, and in the absence of anything more, that's where we stand. As Mermin said, "shut up and calculate". 
But the real holy grail, I think, is a model that's paradigmatically, as well as mathematically complete--one that does give us a more ontic explanation of reality as opposed to a merely epistemic one that we can calculate from. If the latter is what God has chosen to reveal to us so far then we accept that with grateful hearts. Pork chops can be wonderful, especially when grilled with the right marinade or rub, and to date, Copenhagen has been a fine pork chop. But God willing... that bacon-wrapped filet mignon is what we came out for! :-) Regarding MWI and black hole complementarity, to be honest, I'm not familiar enough with that issue to comment. Aron, do you have any thoughts...? 10. Mactoul says: Isn't it curious that none of the traditional treatments and examples of QM deal with the "wavefunction of the universe". The exposition here could deal with simple quantum mechanical examples such as double-silt experiment and how does it fares under various interpretations. 11. Aleksy says: Dear Scott, I was wondering what are your thoughts on IP's video 'Quantum Mechanics Debunks Materialism' All the best, 12. Scott Church says: @Aleksy, having watched the presentation, it seems a bit much to me to claim that QM debunks Materialism--at least in the usual sense of that term, which is synonymous with atheism. What it does refute is the deterministic classical worldview that remains completely characterized apart from observers, but strictly speaking, that doesn't require one to believe in God. But that said, I do believe that like the big bang (or more properly, Concordance cosmology), it raises some pretty significant issues for Materialism that aren't easily dismissed by its advocates. Not the least of these is the belief that bedrock reality--that which simply is--is restricted to physical matter alone, apart from any sort of consciousness or mind. At the very least, I'd say that QM renders it extremely difficult (if not impossible) for atheists to insist that science leaves no room for God without resorting to the same kind of dogmatic assertion they rightly accuse many fundamentalists of. [As a matter of fact, my next post (which I hope to have ready next week) will be addressing some of these very issues.] Cheers! :-) 13. Christopher says: Hello Scott and Aron, I am curious what your thoughts are on quantum Bayesianism? - Thank you for the interesting post! 14. Scott Church says: @Christopher, my apologies... Once again, I'm not familiar enough with QB to comment on it, so I'll defer to Aron on this as well. Thanks! :-) 15. valentin says: Many thanks, Scott! Aron wrote that many quantum cosmologists prefer a 'many world' interpretation for their model of the origin of the universe. Why do they not equally prefer for example a 'de Broglie-Bohm' interpretation? Is this interpretation more controversial than the 'many worlds' one? Is the its 'hidden variables' idea as wild as the 'multiplication of entities' idea suggested by the 'many worlds' interpretation? 16. Scott Church says: @Valentin, those hidden variables are DBB's biggest problem. In fact, it was one of the first attempts to leverage them to rescue physics from Copenhagen, and some would call it the hidden variables theory. Personally, I'm with you in that I don't consider hidden variables to be as wild as an infinite multiplication of entities. However, they do run afoul of the Bell inequalities and Kochen-Specker theorem. 
While DBB is enjoying a bit of a comeback as I mentioned, inasmuch as it's dependent on them, these are considerable hurdles to clear and few physicists are convinced that it will be able to. As for MWI, quantum cosmology attempts to apply QM to the origin and evolution of the universe as a whole, and as such will only admit interpretations that allow for a universal wave function. So if MWI is favored by quantum cosmologists it's probably because from a formal standpoint it's the most workable of those that do. Best. :-) 17. valentin says: Many thanks, Scott! 18. Andrew says: Good job, Scott. This is well written and quite interesting! 19. Aron Wall says: To try to answer some questions about these interpretations... As I tried to make clear in my QM I post, QM can be thought of as a modification of the usual rules of probability theory. It is one of the few ways to consistently modify probability theory that seems to make any kind of sense at all. Now, when it comes to classical probability theory, Bayesians believe that probability is best thought of as an individual's personal or "subjective" credences about how the world is (which are, however, rationally constrained by the axioms of probability theory, and I would say also by various rules of thumb about how to estimate prior probabilities). So a "Quantum Bayesian" would, it seems, interpret the wavefunction in much the same way: as an individual's best set of credences. As somebody sympathetic to classical Bayesianism, I have a prediliction to be sympathetic to this too... However, the Kochen-Specker theorem implies that, if we want to think of all quantum operators as having well defined values in some ontological sense, this does not just violate classical probability theory. It also violates classical logic, i.e. it is simply not possible to simultaneously assign a true physical value to all operators. That seems worse that just modifying probability theory... The contemporary "QBist" view avoids this problem by taking things in a radically subjective direction, by denying that we can talk about any kind of objective state of the universe; just the experiences of ourselves as a single observer. In this view, if I understand them correctly, QM is a set some pragmatically justified rules to predict what we observe. But it seems unsatisfying not to have a story about what is "really" going on with the system. So to my tastes, this is too subjective. Bohmian interpretations avoid conflict with Kochen-Specker in a different way; by saying that one privileged set of commuting operators (e.g. positions) have objectively well defined values, while others (e.g. momenta) do not. Position is the choice usually used in nonrelativistic QM, but this is arbitrary. However, it seems hard to decide how to generalize this to QFT. Many important symmetries (such as Lorentz invariance) seem to be broken, which is one reason many physicists find it unaesthetic. So I'm not sure how to make a Bohmian interpretation of quantum cosmology, but it probably involves some arbitrary choices. 20. Aron Wall says: A somewhat more "objective" way of defining Black Hole Complementarity, is to say that it's okay for observables at spacelike separation to fail to commute, so long as they are not in a single causal patch of the universe. And I think many of its proponents also believed in MWI. However, Black Hole Complementarity is currently in very serious trouble as a result of the "firewalls" thought experiment. 
Joe Polchinksi, at any rate, views this at refuting his previous belief in Complementarity. 21. Ned says: Hello all, What do you think of this response to the problem of hidden variables for Bohemian mechanics (I copied the text from this video, which looks to be put forward by academics/non-quacks): " The name Hidden Variable Theories refers to theories that substitute or supplement the wave function of QM by some other variable. This definition also applies to BM, where the wave function is supplemented by the actual position of the particle. Now, you can decide by yourself whether the term hidden is appropriate or not for particle positions. No-go theorems are not general theorems about hidden variables as defined above, even if they are often invoked when speaking in general terms about hidden variables. To really understand if they say something about BM or not, general terms are not sufficient, and we have to look at the theorems closer. For example, Kochen-Specker theorem says that it is not possible to describe quantum mechanical observables by variables independent of the experimental set-up. But in BM the outcomes of experiments are described precisely by quantum mechanical observables, not by classical variables, in perfect agreement with the theorem. Only positions are described by usual variables in addition to the wave function, but the Bohmian positions are the actual positions occupied by the particles during the whole evolution, and not results of position measurements, that are also described by quantum observables.In contrast, Bell’s Theorem can be formulated without even speaking about hidden variable theories: the theorem states that some predictions of QM, well confirmed by several experiments, can not be explained by any local theory. And BM is nonlocal, as well as QM is. In fact BM inspired Bell to investigate non-locality, finally leading him to discover his famous inequalities. Bell was one of the most prominent proponents of BM and wrote many articles explaining it in great detail. Bell’s Theorem is often misunderstood and reduced to a mere statement that excludes the possibility of substituting QM with a local classical theory. Conversely, it is an extremely important result, that requires us to change drastically our conception of the world, and that is the source of many difficulties in the reconciliation between QM and Relativity. " The video I took this from is here - 22. Andrew says: Scott can I ask you a simple question about the MWI? ... Do you thin protective measurements (if we had one) would be a problem for MWI? The way I think of it is, if you have a super imposed wave function in one universe then it doesn't correspond to a multiverse. 23. Scott Church says: Hello @Andrew, To be honest, I'm not sure what you mean by "protective measurements". [Aron, do you have any thoughts on that...?] The larger problem here is that these days terms like "worlds" or "universes" are used in a number of confusing ways, and the most common ones are largely misnomers. MWI is one case in point. What it refers to as "worlds", or "parallel universes" are, as you said, just separately decohering histories within a single universal wave function. Something similar happens with the so-called inflationary "multiverse". In that case, there's a single eternally inflating spacetime within which non-inflating reheated regions appear as "bubbles", or "bubble universes" when the inflaton relaxes to a ground state there. 
In both cases, whether one refers to the larger wave function or spacetime as the universe, or their separate decohered/reheated regions as "universes" is a choice of words. Unfortunately, for largely sensationalistic reasons that sell more books and magazines, this is rarely clarified in the popular press. :-)

24. Andrew says:
Sure, well I agree and also decoherence is exponential so I suppose it doesn't really make sense to talk about total separate parallel worlds.
Second quantized formulation of geometric phases

The level crossing problem and associated geometric terms are neatly formulated by the second quantized formulation. This formulation exhibits a hidden local gauge symmetry related to the arbitrariness of the phase choice of the complete orthonormal basis set. By using this second quantized formulation, which does not assume the adiabatic approximation, a convenient exact formula for the geometric terms, including off-diagonal geometric terms, is derived. The analysis of geometric phases is then reduced to a simple diagonalization of the Hamiltonian, and it is analyzed both in the operator and path integral formulations. If one diagonalizes the geometric terms in the infinitesimal neighborhood of level crossing, the geometric phases become trivial (and thus no monopole singularity) for an arbitrarily large but finite time interval T. The integrability of the Schrödinger equation and the appearance of the seemingly non-integrable phases are thus consistent. The topological proof of the Longuet-Higgins phase-change rule, for example, fails in the practical Born–Oppenheimer approximation, where a large but finite ratio of two time scales is involved and T is identified with the period of the slower system. The difference and similarity between the geometric phases associated with level crossing and an exact topological object such as the Aharonov–Bohm phase become clear in the present formulation. A crucial difference between the quantum anomaly and the geometric phases is also noted.
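For orientation, the standard adiabatic expression that such treatments generalize is Berry's phase for the n-th level transported around a closed loop C in parameter space,

\gamma_n(C) = i\oint_C \langle n(R)|\nabla_R\, n(R)\rangle\cdot dR

The freedom to redefine |n(R)\rangle \to e^{i\theta(R)}|n(R)\rangle is the basis-phase arbitrariness the abstract refers to; the second quantized formulation above is claimed to handle it exactly, without assuming adiabaticity.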
How a wave packet travels through a quantum electronic interferometer

Together with Christoph Kreisbeck and Rafael A. Molina I have contributed a blog entry to the News and Views section of the Journal of Physics describing our most recent work on an Aharonov–Bohm interferometer with an embedded quantum dot (article, arxiv). Can you spot Schrödinger’s cat in the result?

Transition between the resistivity of the nanoring with and without embedded quantum dot. The vertical axis denotes the Fermi energy (controlled by a gate), while the horizontal axis scans through the magnetic field to induce phase differences between the pathways.

Dusting off cometary surfaces: collimated jets despite a homogeneous emission pattern.

Effective gravitational potential of the comet (including the centrifugal contribution); the maximal value of the potential (red) is about 0.46 N/m, the minimal value (blue) 0.31 N/m, computed with the methods described in this post. The rotation period is taken to be 12.4043 h. Image computed with the OpenCL cosim code. Image (C) Tobias Kramer (CC-BY SA 3.0 IGO).

Knowledge of GPGPU techniques is helpful for rapid model building and testing of scientific ideas. For example, the beautiful pictures taken by the ESA/Rosetta spacecraft of comet 67P/Churyumov–Gerasimenko reveal jets of dust particles emitted from the comet. Wouldn’t it be nice to have a fast method to simulate thousands of dust particles around the comet and to find out whether the peculiar shape of this space-potato already influences the dust trajectories through its gravitational potential? At the Zuse-Institut in Berlin we joined forces between the distributed algorithm and visual data analysis groups to test this idea. But first an accurate shape model of the comet 67P C-G is required. As published in his blog, Mattias Malmer has done amazing work to extract a shape model from the published navigation camera images.

1. Starting from the shape model by Mattias Malmer, we obtain a re-meshed model with fewer triangles on the surface (we use about 20,000 triangles). The key property of the new mesh is a homogeneous coverage of the cometary surface with almost equally sized triangle meshes. We don’t want better resolution and adaptive mesh sizes at areas with more complex features. Rather we are considering a homogeneous emission pattern without isolated activity regions. This is best modeled by mesh cells of equal area. Will this prescription nevertheless yield collimated dust jets? We’ll see…

2. To compute the gravitational potential of such a surface we follow this nice article by JT Conway. The calculation later on stays in the rotating frame anchored to the comet, thus in addition the centrifugal and Coriolis forces need to be included.

3. To accelerate the method, OpenCL comes to the rescue and lets one compute many trajectories in parallel (a minimal single-particle sketch of the integration follows after this list). What is required are physical conditions for the starting positions of the dust as it flies off the surface. We put one dust particle on the center of each triangle on the surface and set the initial velocity along the normal direction to typically 2 or 4 m/s. This ensures that most particles are able to escape and not fall back on the comet.

4. To visualize the resulting point clouds of dust particles we have programmed an OpenGL visualization tool.
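Step 3 boils down to classical mechanics in the comet-fixed rotating frame: each grain feels the surface gravity plus the centrifugal and Coriolis terms, a = g(r) - 2 Ω × v - Ω × (Ω × r). As a rough illustration (this is not the actual cosim/OpenCL code), here is a minimal, self-contained C++ sketch of a single-grain integration; the triangulated-surface gravity is replaced by a point-mass placeholder, and the spin axis, GM value, launch point, and time step are assumptions chosen only for the example.

// Minimal sketch, not the actual cosim code: integrate one dust grain in the
// comet-fixed rotating frame. Gravity is a point-mass placeholder here; the
// real computation sums the potential of ~20,000 surface triangles following
// Conway's method. All numerical values below are illustrative assumptions.
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

static Vec3 gravity(const Vec3& r) {           // placeholder for the surface integral
    const double GM = 666.0;                   // m^3/s^2, order of magnitude for 67P (assumed)
    double d = std::sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
    double f = -GM / (d*d*d);
    return { f*r[0], f*r[1], f*r[2] };
}

int main() {
    const double pi = 3.141592653589793;
    const double period = 12.4043 * 3600.0;          // rotation period in seconds
    const Vec3 omega = { 0.0, 0.0, 2.0*pi/period };  // spin axis along z (assumed)
    Vec3 r = { 2000.0, 0.0, 0.0 };                   // launch point 2 km from the center (assumed)
    Vec3 v = { 4.0, 0.0, 0.0 };                      // 4 m/s along the local surface normal
    const double dt = 1.0;                           // time step in seconds
    for (int step = 0; step < 20000; ++step) {
        Vec3 g   = gravity(r);
        Vec3 cor = cross(omega, v);                  // Coriolis term (factor -2 applied below)
        Vec3 cen = cross(omega, cross(omega, r));    // centrifugal term (sign applied below)
        for (int i = 0; i < 3; ++i) {
            double a = g[i] - 2.0*cor[i] - cen[i];   // a = g - 2 w x v - w x (w x r)
            v[i] += a * dt;                          // semi-implicit (symplectic) Euler
            r[i] += v[i] * dt;
        }
    }
    std::printf("final position: %g %g %g m\n", r[0], r[1], r[2]);
    return 0;
}

The post's OpenCL version computes many such trajectories in parallel, one launched from each of the roughly 20,000 surface triangles.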
We compute the rotation and sunlight direction on the comet to cast shadows and add activity profiles to the comet surface to mask out dust originating from the dark side of the comet. This is what we get for May 3, 2015. The ESA/NAVCAM image is taken verbatim from the Rosetta/blog. Comparison of homogeneous dust model with ESA/NAVCAM Rosetta images. Comparison of homogeneous dust mode (left panel)l with ESA/NAVCAM Rosetta images. (C) Left panel: Tobias Kramer and Matthias Noack 2015. Right panel: (C) ESA/NAVCAM team CC BY-SA 3.0 IGO, link see text. Read more about the physics and results in our arxiv article T. Kramer et al.: Homogeneous Dust Emission and Jet Structure near Active Cometary Nuclei: The Case of 67P/Churyumov-Gerasimenko (submitted for publication) and grab the code to compute your own dust trajectories with OpenCL at Slow or fast transfer: bottleneck states in light-harvesting complexes Light-harvesting complex II, crystal structure 1RWT from Liu et al (Nature 2004, vol. 428, p. 287), rendered with VMD. The labels denote the designation of the chlorophyll sites (601-614). Chlorophylls 601,605-609 are of chlorophyll b type, the others of type a. In the previous post I described some of the computational challenges for modeling energy transfer in the light harvesting complex II (LHCII) found in spinach. Here, I discuss the results we have obtained for the dynamics and choreography of excitonic energy transfer through the chlorophyll network. Compared to the Fenna-Matthews-Olson complex, LHCII has twice as many chlorophylls per monomeric unit (labeled 601-614 with chlorophyll a and b types). Previous studies of exciton dynamics had to stick to simple exponential decay models based on either Redfield or Förster theory for describing the transfer from the Chl b to the Chl a sites. The results are not satisfying and conclusive, since depending on the method chosen the transfer time differs widely (tens of picoseconds vs picoseconds!). Exciton dynamics in LHCII. Exciton dynamics in LHCII computed with various methods. HEOM denotes the most accurate method, while Redfield and Förster approximations fail. To resolve the discrepancies between the various approximate methods requires a more accurate approach. With the accelerated HEOM at hand, we revisited the problem and calculated the transfer rates. We find slower rates than given by the Redfield expressions. A combined Förster-Redfield description is possible in hindsight by using HEOM to identify a suitable cut-off parameter (Mcr=30/cm in this specific case). Since the energy transfer is driven by the coupling of electronic degrees of freedom to vibrational ones, it of importance to assess how the vibrational mode distribution affects the transfer. In particular it has been proposed that specifically tuned vibrational modes might promote a fast relaxation. We find no strong impact of such modes on the transfer, rather we see (independent of the detailed vibrational structure) several bottleneck states, which act as a transient reservoir for the exciton flux. The details and distribution of the bottleneck states strongly depends on the parameters of the electronic couplings and differs for the two most commonly discussed LHCII models proposed by Novoderezhkin/Marin/van Grondelle and Müh/Madjet/Renger – both are considered in the article Scalable high-performance algorithm for the simulation of exciton-dynamics. 
Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik). Again, the correct assignment of the bottleneck states requires to use HEOM and to look beyond the approximate rate equations. High-performance OpenCL code for modeling energy transfer in spinach With increasing computational power of massively-parallel computers, a more accurate modeling of the energy-transfer dynamics in larger and more complex photosynthetic systems (=light-harvesting complexes) becomes feasible – provided we choose the right algorithms and tools. OpenCL cross platform performance for tracking energy-transfer in the light-harvesting complex II found in spinach. OpenCL cross platform performance for tracking energy-transfer in the light-harvesting complex II found in spinach, see Fig. 1 in the article . Shorter values show higher perfomance. The program code was originally written for massively-parallel GPUs, but performs also well on the AMD opteron setup. The Intel MIC OpenCL variant does not reach the peak performance (a different data-layout seems to be required to benefit from autovectorization). The diverse character of hardware found in high-performance computers (hpc) seemingly requires to rewrite program code from scratch depending if we are targeting multi-core CPU systems, integrated many-core platforms (Xeon PHI/MIC), or graphics processing units (GPUs). To avoid the defragmentation of our open quantum-system dynamics workhorse (see the previous GPU-HEOM posts) across the various hpc-platforms, we have transferred the GPU-HEOM CUDA code to the Open Compute Language (OpenCL). The resulting QMaster tool is described in our just published article Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik). This post details the computational challenges and lessons learnt, the application to the light-harvesting complex II found in spinach will be the topic of the next post. In my experience, it is not uncommon to develop a nice GPU application for instance with CUDA, which later on is scaled up to handle bigger problem sizes. With increasing problem size also the memory demands increase and even the 12 GB provided by the Kepler K40 are finally exhausted. Upon reaching this point, two options are possible: (a) to distribute the memory across different GPU devices or (b) to switch to architectures which provide more device-memory. Option (a) requires substantial changes to existing program code to manage the distributed memory access, while option (b) in combination with OpenCL requires (in the best case) only to adapt the kernel-launch configuration to the different platforms. The OpenCL device fission extension allows to investigate the scaling of the QMaster code with the number of CPU cores. We observe a linear scaling up to 48 cores. The OpenCL device fission extension allows us to investigate the scaling of the QMaster code with the number of CPU cores. We observe a linear scaling up to 48 cores. QMaster implements an extension of the hierarchical equation of motion (HEOM) method originally proposed by Tanimura and Kubo, which involves many (small) matrix-matrix multiplications. 
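To give a flavor of what such a kernel looks like (an illustration only, not the actual QMaster code), here is a stripped-down OpenCL kernel that multiplies a whole batch of small real-valued N×N matrices, one work-item per output element, with the inputs staged in local memory. It assumes a launch with local work size (N, N) and global work size (number of products × N, N), and N ≤ 16; the complex-valued arithmetic and the hierarchy bookkeeping of the real code are omitted.

// Illustration only, not the QMaster kernel: batched small-matrix products
// C = A * B for many N x N real matrices, one work-group per product and one
// work-item per output element, with both inputs cached in local memory.
__kernel void batched_matmul(__global const float* A,   // [batch][N][N]
                             __global const float* B,   // [batch][N][N]
                             __global       float* C,   // [batch][N][N]
                             const int N)
{
    const int batch = get_group_id(0);      // which matrix product of the batch
    const int col   = get_local_id(0);
    const int row   = get_local_id(1);
    const int base  = batch * N * N;

    __local float As[16][16];                // assumes N <= 16 (e.g. 7 or 14 pigments)
    __local float Bs[16][16];
    As[row][col] = A[base + row * N + col];
    Bs[row][col] = B[base + row * N + col];
    barrier(CLK_LOCAL_MEM_FENCE);            // all loads finished before anyone multiplies

    float acc = 0.0f;
    for (int k = 0; k < N; ++k)
        acc += As[row][k] * Bs[k][col];
    C[base + row * N + col] = acc;
}

The CPU-friendly alternative mentioned below instead gives each work-item a whole product to compute, so fewer threads each do more work.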
For GPU applications, the usage of local memory and the optimal thread-grids for fast matrix-matrix multiplications have been described before and are used in QMaster (and the publicly available GPU-HEOM tool on While for GPUs the best performance is achieved using shared/local memory and assign one thread to each matrix element, the multi-core CPU OpenCL variant performs better with fewer threads, but getting more work per thread done. Therefore we use for the CPU machines a thread-grid which computes one complete matrix product per thread (this is somewhat similar to following the “naive” approach given in NVIDIA’s OpenCL programming guide, chapter 2.5). This strategy did not work very well for the Xeon PHI/MIC OpenCL case, which requires additional data structure changes, as we learnt from discussions with the distributed algorithms and hpc experts in the group of Prof. Reinefeld at the Zuse-Institute in Berlin. The good performance and scaling across the 64 CPU AMD opteron workstation positively surprised us and lays the groundwork to investigate the validity of approximations to the energy-transfer equations in the spinach light-harvesting system, the topic for the next post. GPU-HEOM 2d spectra computed at nanohub GPU-HEOM 2d spectra computed at nanohubComputed 2d spectra for the FMO complex for 0 picosecond delay time (upper panel) and 1 ps (lower panel). The GPU-HEOM computation takes about 40 min on the platform and includes all six Liouville pathways and averages over 4 spatial orientations. 1. login on (it’s free!) 2. switch to the gpuheompop tool 3. click the Launch Tool button (java required) You can select this preset from the Example selector. 10. Voila: your first FMO spectra appears. GPU and cloud computing conferences in 2014 Two conferences are currently open for registration related to GPU and cloud computing. I will be attending and presenting at both, please email me if you want to get in touch at the meetings. Oscillations in two-dimensional spectroscopy Transition from electronic coherence to a vibrational mode. Transition from electronic coherence to a vibrational mode made visible by Short Time Fourier Transform (see text). Over the last years, a debate is going on whether the observation of long lasting oscillatory signals in two-dimensional spectra are reflecting vibrational of electronic coherences and how the functioning of the molecule is affected. Christoph Kreisbeck and I have performed a detailed theoretical analysis of oscillations in the Fenna-Matthews-Olson (FMO) complex and in a model three-site system. As explained in a previous post, the prerequisites for long-lasting electronic coherences are two features of the continuous part of the vibronic mode density are: (i) a small slope towards zero frequency, and (ii) a coupling to the excitonic eigenenergy (ΔE) differences for relaxation. Both requirements are met by the mode density of the FMO complex and the computationally demanding calculation of two-dimensional spectra of the FMO complex indeed predicts long-lasting cross-peak oscillations with a period matching h/ΔE at room temperature (see our article Long-Lived Electronic Coherence in Dissipative Exciton-Dynamics of Light-Harvesting Complexes or arXiv version). The persistence of oscillations is stemming from a robust mechanism and does not require adding any additional vibrational modes at energies ΔE (the general background mode density is enough to support the relaxation toward a thermal state). 
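For reference, the kind of vibronic spectral density meant here can be written as a sum of shifted Lorentzian (shifted Drude–Lorentz) peaks—the parametrization later exposed in the GPU-HEOM tool—schematically, and with conventions that may differ in detail from the published ones:

J(\omega) = \sum_k \left[ \frac{\nu_k \lambda_k \omega}{\nu_k^2 + (\omega - \Omega_k)^2} + \frac{\nu_k \lambda_k \omega}{\nu_k^2 + (\omega + \Omega_k)^2} \right]

Each term contributes a peak of strength \lambda_k and width \nu_k centered near \Omega_k; setting \Omega_k = 0 recovers the familiar smooth Drude–Lorentz background. The slope of J(\omega) as \omega \to 0 is what requirement (i) above constrains, while a nonvanishing J(\Delta E)—whether from the smooth background or from an added peak—supplies the coupling required by (ii).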
But what happens if in addition to the background vibronic mode density additional vibronic modes are placed within the vicinity of the frequencies related electronic coherences? This fine-tuning model is sometimes discussed in the literature as an alternative mechanism for long-lasting oscillations of vibronic nature. Again, the answer requires to actually compute two-dimensional spectra and to carefully analyze the possible chain of laser-molecule interactions. Due to the special way two-dimensional spectra are measured, the observed signal is a superposition of at least three pathways, which have different sensitivity for distinguishing electronic and vibronic coherences. Being a theoretical physicists now pays off since we have calculated and analyzed the three pathways separately (see our recent publication Disentangling Electronic and Vibronic Coherences in Two-Dimensional Echo Spectra or arXiv version). One of the pathways leads to an enhancement of vibronic signals, while the combination of the remaining two diminishes electronic coherences otherwise clearly visible within each of them. Our conclusion is that estimates of decoherence times from two-dimensional spectroscopy might actually underestimate the persistence of electronic coherences, which are helping the transport through the FMO network. The fine tuning and addition of specific vibrational modes leaves it marks at certain spots of the two-dimensional spectra, but does not destroy the electronic coherence, which is still there as a Short Time Fourier Transform of the signal reveals. Computational physics on GPUs: writing portable code GPU-HEOM code comparison for various hardware. Runtime in seconds for our GPU-HEOM code on various hardware and software platforms. I am preparing my presentation for the simGPU meeting next week in Freudenstadt, Germany, and performed some benchmarks. In the previous post I described how to get an OpenCL program running on a smartphone with GPU. By now Christoph Kreisbeck and I are getting ready to release our first smartphone GPU app for exciton dynamics in photosynthetic complexes, more about that in a future entry. Getting the same OpenCL kernel running on laptop GPUs, workstation GPUs and CPUs, and smartphones/tablets is a bit tricky, due to different initialisation procedures and the differences in the optimal block sizes for the thread grid. In addition on a smartphone the local memory is even smaller than on a desktop GPU and double-precision floating point support is missing. The situation reminds me a bit of the “earlier days” of GPU programming in 2008. Besides being a proof of concept, I see writing portable code as a sort of insurance with respect to further changes of hardware (however always with the goal to stick with the massively parallel programming paradigm). I am also amazed how fast smartphones are gaining computational power through GPUs! Same comparison for smaller memory consumption. Note the drop in OpenCL performance for the NVIDIA K20c GPU. Here some considerations and observations: 1. Standard CUDA code can be ported to OpenCL within a reasonable time-frame. I found the following resources helpful: AMDs porting remarks Matt Scarpinos OpenCL blog 2. The comparison of OpenCL vs CUDA performance for the same algorithm can reveal some surprises on NVIDIA GPUs. 
While on our C2050 GPU the OpenCL version runs a bit faster than the CUDA version for the same problem, on a K20c system the OpenCL program can, for certain problem sizes, take several times longer than the CUDA code (with no changes in the basic algorithm or workgroup sizes).
3. The comparison with a CPU version running on 8 cores of the Intel Xeon machine is possible and shows clearly that the GPU code is always faster, but requires a certain minimal system size to show its full performance.
4. I am looking forward to running the same code on the Intel Xeon Phi systems now available with OpenCL drivers, see also this blog.
[Update June 22, 2013: I updated the graphs to show the 8-core results using Intel's latest OpenCL SDK. This brings the CPU runtimes down by a factor of 2! Meanwhile I am eagerly awaiting the possibility of running the same code on the Xeon Phis…]

Computational physics on the smartphone GPU

Screenshot of the interacting many-body simulation on the Nexus 4 GPU.

[Update August 2013: Google has removed the OpenCL library with Android 4.3. You can find an interesting discussion here. Google seems to be pushing its own RenderScript protocol. I will not work with RenderScript, since my priorities are platform independence and sticking with widely adopted standards to avoid fragmentation of my code base.]

I recently got hold of a Nexus 4 smartphone, which features a GPU (Qualcomm Adreno 320) and conveniently ships with an already installed OpenCL library. With minimal changes I got the previously discussed many-body program code related to the fractional quantum Hall effect up and running. No rooting of the phone is required to run the code example. Please use the following recipe at your own risk; I don't accept any liability. Here is what I did:
1. Download and unpack the Android SDK from Google for cross-compilation (my host computer runs Mac OS X).
2. Download and unpack the Android NDK from Google to build minimal C/C++ programs without Java (no real app).
3. Install the standalone toolchain from the Android NDK. I used the following command for my installation:
/home/tkramer/android-ndk-r8d/build/tools/ \
4. Put the OpenCL programs and source code in an extra directory, as described in my previous post.
5. Change one line in the cl.hpp header: instead of including <GL/gl.h> change to <GLES/gl.h>. Note: I am using the "old" cl.hpp bindings 1.1; further changes might be required for the newer bindings, see for instance this helpful blog.
6. Transfer the OpenCL library from the phone to a subdirectory lib/ inside your source code. To do so, append the path to your SDK tools and use the adb command:
export PATH=/home/tkramer/adt-bundle-mac-x86_64-20130219/sdk/platform-tools:$PATH
adb pull /system/lib/
7. Cross compile your program. I used the following script; please feel free to provide shorter versions. Adjust the include directories and library directories for your installation.
rm plasma_disk_gpu
/home/tkramer/android-ndk-standalone/bin/arm-linux-androideabi-g++ -v -g \
 -I. \
 -I/home/tkramer/android-ndk-standalone/include/c++/4.6 \
 -I/home/tkramer/android-ndk-r8d/platforms/android-5/arch-arm/usr/include \
 -Llib \
 -march=armv7-a -mfloat-abi=softfp -mfpu=neon \
 -fpic -fsigned-char -fdata-sections -funwind-tables -fstack-protector \
 -ffunction-sections -fdiagnostics-show-option -fPIC \
 -fno-strict-aliasing -fno-omit-frame-pointer -fno-rtti \
 -lOpenCL \
 -o plasma_disk_gpu plasma_disk.cpp
8. Copy the executable to the data dir of your phone to be able to run it.
This can be done without rooting the phone with the nice SSHDroid App, which by default transfers to /data. Don't forget to copy the kernel .cl files:
scp -P 2222 root@192.168.0.NNN:
scp -P 2222 plasma_disk_gpu root@192.168.0.NNN:
9. ssh into your phone and run the GPU program:
ssh -p 2222 root@192.168.0.NNN
./plasma_disk_gpu 64 16
10. Check the resulting data files. You can copy them, for example, to the Download path of the storage and use gnuplot (droidplot App) to plot them.
A short note about runtimes: on the Nexus 4 device the program runs for about 12 seconds; on a MacBook Pro with an NVIDIA GT650M it completes in 2 seconds (in the example above the equations of motion for 16*64=1024 interacting particles are integrated). For larger particle numbers the phone often locks up.
An alternative way to transfer files to the device is to connect via USB cable and to install the Android Terminal Emulator app. Next
cd /data/data/jackpal.androidterm
mkdir gpu
chmod 777 gpu
On the host computer use adb to transfer the compiled program and the .cl kernel, and start a shell to run the kernel:
adb push /data/data/jackpal.androidterm/gpu/
adb push plasma_disk_gpu /data/data/jackpal.androidterm/gpu/
You can either run the program within the terminal emulator or use the adb shell:
adb shell
cd /data/data/jackpal.androidterm/gpu/
./plasma_disk_gpu 64 16
Let's see in how many years today's desktop GPUs can be found in smartphones and which computational physics codes can be run!

Computational physics & GPU programming: exciton lab for light-harvesting complexes (GPU-HEOM) goes live on nanohub.org

User interface of the GPU-HEOM tool for light-harvesting complexes at nanohub.org.

Christoph Kreisbeck and I are happy to announce the public availability of the Exciton Dynamics Lab for Light-Harvesting Complexes (GPU-HEOM) hosted on nanohub.org. You need to register a user account (it's free), and then you are ready to use GPU-HEOM for the Frenkel exciton model of light-harvesting complexes. In release 1.0 we support
• calculating population dynamics
• tracking coherences between two eigenstates
• obtaining absorption spectra
• two-dimensional echo spectra (including excited state absorption)
• … and all this for general vibronic spectral densities parametrized by shifted Lorentzians (a generic sketch of such a parametrization is given at the end of this post).
I will post some more entries here describing how to use the tool for understanding how the spectral density affects the lifetime of electronic coherences (see also this blog entry). In the supporting document section you find details of the implemented method and the assumptions underlying the tool. We appreciate your feedback for further improving the tool. We are grateful to Prof. Gerhard Klimeck, Purdue University, director of the Network for Computational Nanotechnology, for his support in bringing GPU computing to nanohub (I believe our tool is the first GPU-enabled one at nanohub). If you want to refer to the tool you can cite it as:
Christoph Kreisbeck; Tobias Kramer (2013), "Exciton Dynamics Lab for Light-Harvesting Complexes (GPU-HEOM)," (DOI:10.4231/D3RB6W248).
and you find further references in the supporting documentation. I very much encourage my colleagues developing computer programs for theoretical physics and chemistry to make them available on platforms such as nanohub.org. In my view, it greatly facilitates the comparison of different approaches and is in the spirit of advancing science by sharing knowledge and providing reproducible data sets.
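To give a rough idea of what a spectral density "parametrized by shifted Lorentzians" can look like, here is a small Python sketch. The functional form and the parameter values are generic, illustrative assumptions and are not necessarily identical to the conventions used inside the GPU-HEOM tool:

import numpy as np

def shifted_lorentzian_sd(omega, peaks):
    # Sum of shifted Lorentzian contributions J(omega).
    # peaks: list of (lam, gamma, Omega) tuples with coupling strength lam,
    # width gamma, and peak position Omega. This antisymmetrized form,
    # J(-omega) = -J(omega), is one common choice; the exact convention of
    # the nanohub tool may differ.
    J = np.zeros_like(omega)
    for lam, gamma, Omega in peaks:
        J += lam * gamma * omega / ((omega - Omega) ** 2 + gamma ** 2)
        J += lam * gamma * omega / ((omega + Omega) ** 2 + gamma ** 2)
    return J

# assumed illustrative parameters (arbitrary units): a broad background peak
# centered at Omega = 0 plus one narrower mode shifted to Omega = 180
omega = np.linspace(0.0, 600.0, 601)
J = shifted_lorentzian_sd(omega, [(35.0, 50.0, 0.0), (10.0, 20.0, 180.0)])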
Computational physics & GPU programming: interacting many-body simulation with OpenCL Trajectories in a two-dimensional interacting plasma simulation, reproducing the density and pair-distribution function of a Laughlin state relevant for the quantum Hall effect. Figure taken from Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system. In the second example of my series on GPU programming for scientists, I discuss a short OpenCL program, which you can compile and run on the CPU and the GPUs of various vendors. This gives me the opportunity to perform some cross-platform benchmarks for a classical plasma simulation. You can expect dramatic (several 100 fold) speed-ups on GPUs for this type of system. This is one of the reasons why molecular dynamics code can gain quite a lot by incorporating the massively parallel-programming paradigm in the algorithmic foundations. The Open Computing Language (OpenCL) is relatively similar to its CUDA pendant, in practice the setup of an OpenCL kernel requires some housekeeping work, which might make the code look a bit more involved. I have based my interacting electrons calculation of transport in the Hall effect on an OpenCL code. Another examples is An OpenCL implementation for the solution of the time-dependent Schrödinger equation on GPUs and CPUs (arxiv version) by C. Ó Broin and L.A.A. Nikolopoulos. Now to the coding of a two-dimensional plasma simulation, which is inspired by Laughlin’s mapping of a many-body wave function to an interacting classical ersatz dynamics (for some context see my short review Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system on the arxiv). Continue reading “Computational physics & GPU programming: interacting many-body simulation with OpenCL” Computational physics & GPU programming: Solving the time-dependent Schrödinger equation I start my series on the physics of GPU programming by a relatively simple example, which makes use of a mix of library calls and well-documented GPU kernels. The run-time of the split-step algorithm described here is about 280 seconds for the CPU version (Intel(R) Xeon(R) CPU E5420 @ 2.50GHz), vs. 10 seconds for the GPU version (NVIDIA(R) Tesla C1060 GPU), resulting in 28 fold speed-up! On a C2070 the run time is less than 5 seconds, yielding an 80 fold speedup. autocorrelation function in a uniform force field Autocorrelation function C(t) of a Gaussian wavepacket in a uniform force field. I compare the GPU and CPU results using the wavepacket code. The description of coherent electron transport in quasi two-dimensional electron gases requires to solve the Schrödinger equation in the presence of a potential landscape. As discussed in my post Time to find eigenvalues without diagonalization, our approach using wavepackets allows one to obtain the scattering matrix over a wide range of energies from a single wavepacket run without the need to diagonalize a matrix. In the following I discuss the basic example of propagating a wavepacket and obtaining the autocorrelation function, which in turn determines the spectrum. I programmed the GPU code in 2008 as a first test to evaluate the potential of GPGPU programming for my research. At that time double-precision floating support was lacking and the fast Fourier transform (FFT) implementations were little developed. 
Starting with CUDA 3.0, the program runs fine in double precision, and my group used the algorithm for calculating electron flow through nanodevices. The CPU version was used, among others, for our articles in Physica Scripta (Wave packet approach to transport in mesoscopic systems) and in Physical Review B (Phase shifts and phase π-jumps in four-terminal waveguide Aharonov-Bohm interferometers). Here, I consider a very simple example, the propagation of a Gaussian wavepacket in a linear potential V(x,y) = −Fx (a uniform force field), for which the autocorrelation function of the initial state ⟨x,y|ψ(t=0)⟩ = 1/(a√π) exp(−(x² + y²)/(2a²)) is known in analytic form: ⟨ψ(t=0)|ψ(t)⟩ = 2a²m/(2a²m + iℏt) exp(−a²F²t²/(4ℏ²) − iF²t³/(24ℏm)). Continue reading "Computational physics & GPU programming: Solving the time-dependent Schrödinger equation"

The physics of GPU programming

Me pointing at the GPU Resonance cluster at SEAS Harvard with 32×448 = 14336 processing cores. Just imagine how tightly integrated this setup is compared to 3584 quad-core computers. Picture courtesy of Academic Computing, SEAS Harvard.

From discussions I learn that while many physicists have heard of Graphics Processing Units as fast computers, resistance to using them is widespread. One of the reasons is that physics has been relying on computers for a long time, and tons of old, well-trusted codes are lying around which are not easily ported to the GPU. Interestingly, the adoption of GPUs happens much faster in biology, medical imaging, and engineering. I view GPU computing as a great opportunity to investigate new physics, and my feeling is that today's methods, optimized for serial processors, may need to be replaced by a different set of standard methods which scale better on massively parallel processors. In 2008 I dived into GPU programming for a couple of reasons:
1. As a "model-builder" the GPU allows me to reconsider previous limitations and simplifications of models and use the GPU power to solve the extended models.
2. The turn-around time is incredibly fast. Compared to queues in conventional clusters where I wait for days or weeks, I get back results requiring 10,000 CPU-hours of compute time the very same day. This in turn further facilitates the model-building process.
3. Some people complain about the strict synchronization requirements when running GPU codes. In my view this is an advantage, since essentially no messaging overhead exists.
4. If you want to develop high-performance algorithms, it is not good enough to convert library calls to GPU library calls. You might get speed-ups of about 2-4. However, if you invest the time and develop your own know-how you can expect much higher speed-ups of around 100 times or more, as seen in the applications I discussed in this blog before.
This summer I will lecture about GPU programming at several places and thus I plan to write a series of GPU-related posts. I do have a complementary background in mathematical physics and special functions, which I find very useful in relation to GPU programming, since new physical models require a stringent mathematical foundation and numerical studies.
For $a$ being positive what are the quantisation conditions for an exponential potential? $$ - \frac{d^{2}}{dx^{2}}y(x)+ ae^{|x|}y(x)=E_{n}y(x) $$ with boundary conditions $$ y(0)=0=y(\infty) $$ I believe that the energies $ E_{n} $ will be positive and real. I have read a similar paper: P. Amore, F. M. Fernández. Accurate calculation of the complex eigenvalues of the Schrödinger equation with an exponential potential. Phys. Lett. A 372 (2008), pp. 3149–3152, arXiv:0712.3375 [math-ph]. However, I get this strange quantisation condition $$ J_{2i\sqrt{E_{n}}}(\sqrt{-a})=0 $$ How can I handle this in the case $ a >0 $?

Maybe you can show how you got to your quantization condition? –  Bernhard Dec 18 '12 at 11:50
The quantization condition is explained in the paper; due to the condition $ y(0)=0$ you get the quantization condition, in a similar way to the Airy function for the potential $ V(x)=x$. –  Jose Javier Garcia Dec 18 '12 at 11:53
Bessel functions of imaginary order and argument are relatively hard to manage but this DLMF section may be of help. If everything is done correctly then I would not be surprised by imaginary order and imaginary argument yielding real roots for $E_n$. –  Emilio Pisanty Dec 18 '12 at 13:11
The potential must be attractive to have positive $E_n$. For a repulsive potential one can get quantized $E_n$, but they may become negative and unbounded from below. –  Vladimir Kalitvianski Dec 18 '12 at 15:29
@JoseJavierGarcia Since this was tagged homework I imagine this is no longer as useful, but hopefully it will still help. It's some nice maths and some nice physics, anyway. –  Emilio Pisanty Aug 23 '13 at 21:29

1 Answer

The paper you quote covers a similar case, which was solved previously by S.T. Ma (Phys. Rev. 69 no. 11-12 (1946), p. 668), but deals with the scattering problem on the tail of the exponential - hence the complex energies. What follows is partly inspired by that paper but is quite distinct from it. The tricky part is not getting scared by the Bessel functions, but that's why we have the theory of special functions. For one, the exponential potential $e^{|x|}$ you ask for is an even function, which means that the corresponding eigenfunctions on $(-\infty, \infty)$ will be either even or odd. Therefore, they can be treated as eigenfunctions of the simpler potential $e^x$ on $(0,\infty)$, with boundary condition $\psi'(0)=0$ or $\psi(0)=0$, respectively. Since you ask about the latter condition, there is no point in keeping the absolute value. The problem, then, is the eigenvalue problem $$-\frac{d^2}{dx^2}\psi+A e^x \psi=E\psi\text{ under }\psi(0)=0=\psi(\infty).\tag{problem}$$ (A word on dimensions: To get the equation down to this form, we've had to set $\hbar, m$ and the length scale of the exponential to 1, by taking appropriate units of time, mass and length respectively. This means that there is no more dimensional freedom, and the hamiltonian has one free parameter, $A$, which will affect not only the scale of the spectrum (which you might expect as $A$ is a scaling on the potential) but also its structure.) Amore et al. treat this as a boundary-value problem in $\mathbb C$, using a change to a complex variable. This complicates the issue more than is really necessary, and for simplicity I will use only real variables, though this comes at the cost of dealing with modified Bessel functions instead of standard ones.
The initial step is to change variable to $z=2\sqrt{A}e^{x/2}$, so that $Ae^x=z^2/4$ and derivatives transform as $$ \frac {\partial }{\partial x}=\frac {\partial z}{\partial x}\frac {\partial }{\partial z}=\frac {z }{2}\frac {\partial }{\partial z} \text{ so } \frac {\partial^2 }{\partial x^2} =\frac14\left( z^2\frac {\partial^2 }{\partial z^2}+z\frac {\partial }{\partial z} \right). $$ The final equation is thus $$ \left[ z^2\frac {\partial^2 }{\partial z^2}+z\frac {\partial }{\partial z}-(z^2+\nu^2) \right]\psi=0 \tag{equation} $$ where $\nu=i\sqrt{4E}$. (Yes. Some complexness is inevitable. No fear, it will eventually not matter.) This equation is Bessel's equation in modified form with index $\nu$. This is exactly the same as Bessel's equation for more normal situations; the index is complex but that is all. Two linearly independent solutions are the modified Bessel functions of the first and second kind, $I_\nu(z)$ and $K_\nu(z)$, so the general solution of $(\text{equation})$ looks like $$ \psi(z)=aI_\nu(z)+bK_{\nu}(z). $$ We then only need to impose the boundary conditions $\psi|_{z\rightarrow \infty}=\psi|_{z=2\sqrt{A}}=0$:
• The condition at infinity requires that we set the coefficient of $I_\nu$ to zero, since the First Kind function always explodes. We could have done this from the start: $K_\nu$ is, by definition, the exponentially decaying solution, while $I_\nu$ grows exponentially.
• The condition at $x=0$ then simply requires that $K_\nu(2\sqrt{A})=0$.
In terms of energies, then, $$ \boxed{K_{2i\sqrt E}(2\sqrt{A})=0,} $$ and this is your quantization condition. As it happens, $K_\nu(z)$ is real for real $z$ and purely imaginary $\nu$. One way to prove this is via this integral representation: $$ K_{\nu}(x) =\sec( {\nu\pi}/{2})\int_{0}^{\infty}\cos(x \sinh t)\cosh(\nu t)dt, $$ which is the analogue of Bessel's First Integral for $K_\nu$. I must confess, though, that my intuition is not as good here and I can't really point to the deep reason for that. Since $K_\nu$ is real here, for whatever reason, we can ask for its zeros. As with all Bessel zeros there is no chance of an elementary formula for them, but they can be found quite easily using numerical methods (for properties of the zeros, see this DLMF reference). For a taster, here are some graphs, in log-linear scale (so zeros show up as downward, log-like peaks), of $K_{2i\sqrt{E}}(2\sqrt{A})$ as a function of $E$, for a few different values of $A$. While there isn't all that much to say about the energies from this, it is clear that there are a countable infinity of them, that they are bigger than $A$, and that their spacing increases with increasing $A$ and $n$ (why?) - but that's really all you'd want to know! Just for completeness: the eigenfunctions themselves, then, are of the form $$\psi_n(x)=C_nK_{2i\sqrt{E_n}}\left( 2\sqrt{A}e^{x/2} \right).$$ It is interesting to note that the dependence in $n$ comes through the index instead of a coefficient before $x$. This is partly to ensure the very strict decay $\psi\sim e^{-\exp(x/2)}$, which is required by the very hard exponential wall of the potential. For some information on how these Bessel functions behave, try the Functions of Imaginary Order subsection in the DLMF; particularly important results are asymptotics on $K_{i|\nu|}$ at large $x$ and for the oscillatory region.
The latter is $$ K_{i\nu}\left(z\right)=-\left(\frac{\pi}{\nu\sinh\left(\pi\nu\right)}\right)^{\frac{1}{2}}\sin\left(\nu\ln\left(\tfrac{1}{2}z\right)-\gamma_{\nu}\right)+O\left(z^{2}\right), $$ so the asymptotic for the wavefunction is of the form $\psi(x)\sim\sin\left(\sqrt{E_n}x\right)$, as it should be. (Note, though, that this holds little physics beyond the standard: the information on the potential's variation is encoded in the change of the instantaneous frequency as in e.g. these formulas, and would require beefier maths.)

thanks :) Emilio this was very illustrative :) –  Jose Javier Garcia Feb 17 '14 at 12:00
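As an addendum: to make the remark above that the zeros "can be found quite easily using numerical methods" concrete, here is a minimal Python sketch using mpmath, which supports modified Bessel functions of complex order. The value of $A$ and the scan range are assumed, illustrative choices:

import mpmath as mp

def quantization(E, A):
    # K_{2i*sqrt(E)}(2*sqrt(A)) is real for real E and A > 0; keep the real part
    return mp.re(mp.besselk(mp.mpc(0, 2) * mp.sqrt(E), 2 * mp.sqrt(A)))

A = mp.mpf(1)                                                  # assumed potential strength
grid = [mp.mpf(1) + mp.mpf('0.25') * k for k in range(200)]    # scan E from 1 upward
levels = []
for e1, e2 in zip(grid, grid[1:]):
    if quantization(e1, A) * quantization(e2, A) < 0:          # sign change brackets a zero
        levels.append(mp.findroot(lambda E: quantization(E, A), (e1, e2), solver='bisect'))

print([mp.nstr(En, 8) for En in levels[:5]])                   # first few eigenvalues E_n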
Wheeler's delayed choice experiment is a variant of the classic double-slit experiment for photons, in which the detecting screen may or may not be removed after the photons have passed through the slits. If removed, there are lenses behind the screen refocusing the optics to reveal sharply which slit the photon passed through. How must this experiment be interpreted?
• Does the photon acquire wave/particle properties only at the moment of measurement, no matter how delayed it is?
• Can measurements affect the past retrocausally?
• What was the history of the photon before measurement?
• What are the beables before the decision was made?

A piece of friendly advice (not a criticism): if you are pursuing further insight into quantum mechanics, even just as a hobby, I would encourage you to abandon the "wave/particle duality" framework for thinking about it, at your soonest possible convenience. It really doesn't add any explanatory power, and it doesn't give you any help in understanding the actual mathematical formulation which does have explanatory power. As far as I'm concerned, this idea is a historical relic of the initial total confusion over what was going on with atoms and photons. –  Niel de Beaudrap Oct 16 '11 at 11:28

2 Answers

The actual meaning of the colloquial phrase about photons "acquiring particle properties" or "acting like a particle" is really nothing more than saying that photons interact locally and in discrete packages, despite being described much of the time by a spatially distributed wave-function. Photons, when they are left to travel freely, travel as waves. (The same is true of electrons and other matter/antimatter particles.) But photons can be absorbed by electrons, such as those in light detectors or photoplates; and despite the fact that the wave-function of the photon may be distributed across more than one such detector or more than one cell of the plate, we find that the photon is always absorbed at only one location. In the old days of quantum mechanics, one would say that the photon "acted like a wave through the slit, and like a particle at the plate". What one would say nowadays is that the photon evolved according to the Schrödinger equation until
• it is interrupted by a measurement device, at which point the wavefunction collapses and gives a definite outcome for whatever that measurement device is measuring; or
• until it has interacted in an uncontrolled (but consistent) manner with enough of the world around it that it decoheres, in which case it ends up being in a probabilistic mixture of states which are stable under that interaction.
This may sound quite similar to the "wave/particle duality" way of saying things, but in practice it gives you a much better shot at understanding how a photon or electron will actually behave when you get your hands on the mathematics. (Incidentally, the question of "when something counts as a measurement device" is one which is still an open topic at a fundamental level, even though in practical terms we know enough to predict the outcomes of most experiments. A great number of physicists also believe that measurement is in some sense a special case of decoherence. This is all part of understanding the Measurement Problem of quantum mechanics.) As for the "history" and the "beables" prior to measurement, or prior to the decision of whether to make the measurement or not, these are questions of the interpretation of quantum mechanics.
There is no commonly-agreed-upon answer. But the short story is that — no matter how long it took for you to decide whether or not to measure — if you don't measure, the trajectory of the photon is still described by the Schrödinger equation, and you can still cause different "possible paths" to interfere with one another (e.g. in a sum-over-histories description of the evolution of the particle).

The concept of "evolution relative to the Schrödinger equation" is an insightful means of considering your questions via a holistic interpretation of the reality to which most of modern physics seems to point. One should recognize that interacting with a measurement device is another aspect of interacting with "the world". This concept of the photon as a wave "interacting with the world" over "many paths" simultaneously is a much more significant element of determining the final outcome of all the interactions in which what we are taught to think of as a photon is involved than a semi-classical interpretation of the photon as a particle some of the time and a wave some of the time might suggest. (What we call a photon is, fundamentally, no more than our perception of the localization of a collection of properties associated with specific, quantum fields via a mode of "bundling" that interact in a pre-defined manner with particles with specific properties.) "The world" exists in a multi-dimensional framework that includes time. "The world" evolves in time, as must all experiments performed in "the world" that we detect as beings made of matter. The perception of a wave function as being uniquely linked to one photon that is somehow in one position in some temporal interval associated with measurement in a detector may be one cause of the questions that have been posed. Stop thinking of photons as highly localized in space and time in the same manner that one might think of a little ball as being highly localized in space and time when what is called a photon is, in fact, better conceived as a disperse, wave-like field that interacts with localized objects called atoms that we have learned to use to make what we call "particle detectors" (using a concept based on classical thinking). The stuff that we use to interact with the photon in a detector is in a complex form that we call "an atom". Because it is in this complex, atomic system, it is bound by rules that are defined by quantum physics relative to energy states and other particle properties. The photon in free space (in the classic, quantum description of what physics calls a "photon") is a free agent until it "gets mixed up" with the particle "crowd" that comprises the atom. The atom has a great deal of mass relative to the "photon", and it has a great deal of power to produce what we perceive as localized phenomenon in time and space, because the atom is, due to its mass (and, formally, momentum), a relatively localized phenomenon. Re-think the photon crudely as electro-magnetic energy in the environment (manifested in quantum fields) that is bundled by an atom due to the atom's relatively high momentum, which forces its probability wave to be relatively localized. Think of the electro-magnetic energy of what we call a "photon" as being highly interactive with "the world" until it has sufficiently interacted with the particle detector's energy "bundling" atoms to produce a result. At that point, we get some output data from the machine.
Because of the highly interactive nature of the photon with respect to the world around it, given its "many paths" aspect, we may find the results a bit surprising if we are too used to digging a rut along the same logical path by forming what one author described as "cog-webs" that define the photon as a particle some of the time. If we think of detection of a photon by a measurement device as a means of localizing ("bundling") electro-magnetic energy that is dispersed in the environment (energy that follows "many paths" which overlap with other photons' "many paths"), and think of the speed of light as the rate at which electro-magnetic energy can be localized by atoms (with mass) to produce quantized changes in energy in things called atoms that we can use to detect the electro-magnetic energy in the environment, then it might come as no surprise that we begin to gather data about the environment that exceeds our expectations in some three-dimensional, directional sense in a given laboratory experiment. If we use an electro-magnetic energy source in a particular position, sending energy in a more or less directional manner to provide most of the electro-magnetic energy being put into the environment in which an experiment is occurring, it should come as no surprise that we detect information that is biased relative to a specific directional thumb-print in space and time, because most of the energy we gather carries a certain amount of information due to its point of origin and the objects in the related path. Because we are gathering electro-magnetic energy from the environment, should we expect all of the information reflected in what we call "a signal" to originate from one direction? If electro-magnetic energy bundling has a speed associated with it, any changes occurring during the "bundling" process that manipulate the directions from which energy can be gathered into a "bundle" will, most likely, be reflected in the results produced by the "bundling" process. Don't trap your mind in a logical framework built around a specific laboratory set-up that amounts to a temporal Rube Goldberg machine; such a set-up may be more likely to hide the reality that we are observing than to reveal it if it manifests an anthropic inclination to localize source and detector (and thus disperse energy) in a directional sense. That inclination is rooted in our habit of perceiving cause and effect in terms of spatial momentum in interactions involving large chunks of matter, shaped by our experience of a universe in which entropy generally increases as time passes. Biasing an experiment to investigate photons as particles then generates (what a shock) results that suggest that a photon is not just a wave, but a form of energy that can be localized by atoms after following many paths. Be VERY CAREFUL how you link relativity to quantum physics with what are still defined by many as mass-less particles. The "speed of light" is an extremely classical term premised on ancient, anthropic perspective. (Feynman even dared to conceive of instantaneous action at a distance relative to electro-magnetic theory, a concept which, in his original formulation, he later described in negative terms in his Nobel lecture.)
What if Galileo had stepped onto a hillside that was distant from another on which stood a friend with a lantern and a shielding cloak, and conceptualized his experiment as an attempt to measure the rate at which electro-magnetic energy seeped from the environment into his eyes (comprised of atoms with mass and momentum) to form a bundled "quantum" of light energy that would cause the measuring neurons in his retina to fire and transmit a signal to the optical center of his brain, localizing the bundle of energy in a specific area of his visual field, after he gave the high sign to his distant buddy to remove the shield that blocked the lantern's light? Would we still think that there was something called a photon with a specific speed as it flew through space-time, or would we perceive what we call a "photon" as a bundle of properties associated with energy that is localized by atoms with mass (and momentum), that we once found it easy, given science's classical pedigree, to think of as a little ball flying through space? If a particle is free in space and not interacting with others in an atom, it is not required to be "quantized". Quantum physics' particle properties are associated with unique quantum fields. The concept of a field was created to explain transmission of energy between objects lacking a physical connection. Fully embrace the concept of waves and fields, and by-pass the "wave-particle duality" perspective on light as something that is sometimes one and sometimes the other, along with the strong, directional perspectives that accompany it. Your other questions should fade in the process.
Hyperspace: A Scientific Odyssey through Parallel Universes
Format: Print Length
Language: English
Format: PDF / Kindle / ePub
Size: 8.09 MB
Downloadable formats: PDF

Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. [10] Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation, and solve for its energy eigenvalues for the hydrogen atom. In wave terms, instead of particle paths, we look at phases. Unlike classical computing bits that exist as either a 1 or 0, qubits can exist in multiple states at the same time due to the strange rules of quantum physics that dominate reality at very small scales.

Pages: 359
Publisher: OUP Oxford (October 5, 1995)

In particular, it is the separation of two adjacent peaks or troughs. The amplitude, a, of the wave is the greatest displacement of any particle from its equilibrium position. The period, T, is the time taken for any particle to undergo a complete oscillation. It is also the time taken for any wave to travel one wavelength.

When all of the work energy has been spent, a thermal distribution is once again exhibited. Depending on which energy level(s) are selectively populated, the work performed will vary and can include speeding the rate of a reaction in a catalytic manner, e.g., virtual thermal effects can replace chemical activation energies (Fukushima J. et al., 2010). In chemical and materials systems the work performed by the resonant EM waves can also shift the equilibrium of the system and produce dramatic changes in its chemical and material dynamics.

For example, sound waves propagate via air molecules colliding with their neighbors. When the molecules collide, they also bounce away from each other (a restoring force). This keeps the molecules from continuing to travel in the direction of the wave. The second main type, electromagnetic waves, do not require a medium. Instead, they consist of periodic oscillations of electrical and magnetic fields originally generated by charged particles, and can therefore travel through a vacuum.

You could be forgiven for thinking that we haven't got much closer to writing down the Schrödinger equation, but in fact there isn't much more to do. Since we want to represent particles as waves, we need to think up an equation whose solution is a wave of some sort. (Remember, we're jumping from postulate to postulate at this stage; there is no rigorous mathematical derivation.) The simplest form of wave is that represented by a sine or cosine wave, so if we want a wave with a particular frequency and wave number, we can postulate a function (in one dimension) of the form Ψ(x, t) = A sin(kx − ωt), where A is the amplitude (height) of the wave.

What is having work done on it in a wave? 4.
In which wave do the particles vibrate in the same direction as the wave? 6. In which wave do the particles vibrate perpendicularly to the direction of the wave? 9. Which of the following has the longer wavelength? 10. Which of the following has the larger amplitude?

Let us see what happens when we superimpose two sine waves with different wavenumbers. Figure 1.5 shows the superposition of two waves with wavenumbers k1 = 4 and k2 = 5. Notice that the result is a wave with about the same wavelength as the two initial waves, but which varies in amplitude depending on whether the two sine waves are interfering constructively or destructively. Standing waves alternately compress then dilate the aether substance inside antinodes.

The orbitals in an atom are grouped into energy levels, called "main energy levels," which are labeled 1 (the first main energy level), 2 (the second main energy level), and so on. The higher the energy level, the larger it is, the higher the energy of the electrons it contains, and the farther they are from the nucleus. (When we say that electrons in higher main energy levels have greater energy, we are talking about potential energy.)

Put another way, if the light wave were spreading out and going to both slits at once, we would expect it to also hit more than one place on the back wall at once. Since we measure only one hit we conclude that a single particle is coming through. So some other sort of wave seems to be involved, a wave that goes through both slits when there is only one photon. Nonetheless, the opinion of Bell himself about what he showed is perfectly clear. The pilot-wave approach to quantum theory was initiated by Einstein, even before the discovery of quantum mechanics itself.

But many of the world's experts see it quite differently, arguing the D-Wave machine is something other than the computing holy grail the scientific community has sought since the mid-1980s. But today, researchers at the University of Southern California published a paper that comes that much closer to showing the D-Wave is indeed a quantum computer.

Vector resolution: process of finding the effective value of a component in a given direction.
The value of the microscale resonance work energy can be calculated using Equation 25, above. Subtraction shows that each water molecule in the resonant system performed an additional 35 × 10⁻²³ J of work on the solute, as a result of absorption of the resonant EM waves.

So, this is something, this part of the physical interpretation, that we should keep. So, now we've reviewed the Schrödinger equation. The next thing we want to say is that the most important solutions of the Schrödinger equation are the energy eigenstates, the stationary states. And let's just go through that subject and explain what it was. So we're going to look at -- whoops -- stationary solutions.

If an object is 20 cm from the eye, what must the altered focal length of the eye be in order for the image of this object to be in focus on the retina? 5. An amoeba 0.01 cm in diameter has its image projected on a screen as shown in figure 3.18 by a positive lens of diameter 0.1 cm. (a) How big is the image of the amoeba? (b) What is the focal length of the lens? (c) What is the minimum distance between features on the image of the amoeba which can be resolved?

As a result, the wave function becomes very localized, which implies that the momentum-space wave function is greatly spread. As quantum mechanics was approached with concepts of classical mechanics, this frustrating phenomenon was described as the impossibility of knowing both position and momentum simultaneously. When the electron is free, i.e., when its energy is positive, it can have any energy; it can be moving at any speed.

Use of an oscilloscope as a d.c. and a.c. voltmeter, to measure time intervals and frequencies and to display a.c. waveforms.

The sails of the ancient mariners were pushed by the forces of the wind which filled them. The sails of modern space explorers are now filled by the forces of light which impinge on them.

There cannot be equal numbers in phase and out of phase, or the waves will cancel out. The way to arrange things is to find the regions of constant phase, as we have already explained; they are planes which make equal angles with the initial and final directions (Fig. 2–4).

But the idea had never been put to the test, and a team writing in Physical Review Letters says "weak measurements" prove the rule was never quite right.
That could play havoc with the "uncrackable codes" of quantum cryptography. Quantum mechanics has since its very inception raised a great many philosophical and metaphysical debates about the nature of nature itself.

Likewise, for interacting subatomic particles, the quantum of angular momentum is the reduced Planck constant (the Planck constant divided by 2π), denoted by ħ and called "h-bar". The value of the Planck constant is extremely small, its units are those of angular momentum, and the notion of action is the more general mathematical concept. The two quantum mechanical states, one with wavenumber and frequency k and ω and the other with −k and −ω, yield indistinguishable wave functions and therefore would represent physically indistinguishable states.

The dual nature of light—particle-like or wavelike depending on what one looks for—was the first example of a vexing theme that would recur throughout quantum physics. The duality constituted a theoretical conundrum for the next 20 years. The first step toward quantum theory had been precipitated by a dilemma about radiation. The second step was precipitated by a dilemma about matter.

Polarized light: light in which the electric fields are all in the same plane.
Position: separation between object and a reference point.
Position-time graph: graph of an object's motion that shows how its position depends on clock reading, or time.
Positron: antiparticle equivalent of the electron.

If you can't assign credences locally without knowing about the state of the whole universe, there's no real sense in which the rest of the world is really separate from you. It is certainly implicitly used by Elga (he assumes that credences are unchanged by some hidden person tossing a coin). With this assumption in hand, we are able to demonstrate that Indifference does not apply to branching quantum worlds in a straightforward way.
How can energy be quantized if we can have energy measured like 1.56364, 5.7535, 6423.654 kilojoules, with decimals? Thanks. Also, isn't quantization supposed to mean that energy is represented in bit-like quantities, meaning you cannot divide, let's say, 1 bit of energy?

3 Answers

As far as the first part of your question goes, just having decimals in the number does not mean the energy levels are no longer quantized. Quantization of energy simply means that there are only specific energies that particles can take under certain circumstances. For example, you could say particle A can only have one of the following energies: {1.56364, 5.7535, 6423.654} kJ. Limiting the particle to these three energies is what is meant by quantization of energy. Also, there is no smallest bit of energy; for example, the kinetic energy of a free particle can take a continuous range. Mathematically, I am not certain how this is formulated. Off the top of my head, I would wager that any countable set could be considered quantized, but that would include all rationals, which are dense in the reals, so it really wouldn't be much of a quantization.

tl;dr of how it's formulated: energies are eigenvalues of the Hamiltonian operator. The eigenvalues can be discrete (which is the technical term for being limited to selected values) or continuous. Jerry's answer has some more details. –  David Z Sep 27 '12 at 4:37

Typically, in quantum mechanics, bound states are quantized and free/scattering states are not. This is because bound states, by the mere fact that they're constrained to a certain area, will have to satisfy certain boundary conditions, and these conditions won't be able to be satisfied in a continuous range. The classic example of this is the infinite square well potential, where $V(x) = 0$ if $0<x<a$, and $V(x) =\infty$ elsewhere. Then, the particle will have zero probability of appearing outside of the well, and will have to satisfy the zero-potential Schrödinger equation $E\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi$ inside of the well. For simplicity, we'll only consider one-dimensional motion. In this case, we see right away that the basis states of our solutions have to satisfy $\psi = A\sin\left(\frac{\sqrt{2mE}}{\hbar}x+\phi\right)$, and we also know that the wave function must be continuous, and that it is restricted to be zero for $x<0$ and $x>a$. We can satisfy the first boundary condition by choosing $\phi=0$, but the second one is not satisfied for all values of the energy. Instead, it is necessary that $\frac{\sqrt{2mE}}{\hbar}a=n\pi$, where $n$ is some integer. Thus, the allowed energies of 'pure' states of this system are quantized, and take the values $E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}}$. For any other bound state, you will find yourself using similar logic about boundary conditions, albeit with much, much more complexity. Note, however, that we can construct a general state out of the energy eigenstates $\Psi = \sum a_{n}\psi_{n}$, and that the expectation value for the energy of $\Psi$ will be $\sum|a_{n}|^{2}E_{n}$, so the values for the "average" energy of a state are still allowed to be continuous (and in the case of the infinite square well, can actually take any value greater than the ground state energy).

I see that the wave function must be continuous as a postulate, but is there an explanation for why that must be true?
–  jcohen79 Sep 27 '12 at 4:22
The momentum operator is $-i\hbar\partial/\partial x$. If the wavefunction is spatially discontinuous, that would imply infinite momentum. –  John Rennie Sep 27 '12 at 9:27

An excellent way to understand how the wave function works in quantum mechanics is to study the model of the hydrogen atom. We can see in this model that the quantum variables $n,l,m$ are effectively variables that determine the shape of the spatial density associated with the detection of an electron. The quantum aspect is that the variables $n,l,m$ are integers, while the continuous aspect is the density associated with the wave function. It is important to understand that the density function is a space-filling function. This means that there is a value of the function associated with each point in space.
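As a concrete illustration of the bound-state quantization discussed in the second answer, the infinite-square-well levels $E_n = n^2\pi^2\hbar^2/(2ma^2)$ can be evaluated for specific numbers. The choice of an electron confined to a 1 nm wide well in this Python sketch is an assumed, illustrative example:

import math

# Infinite square well: E_n = n^2 * pi^2 * hbar^2 / (2 * m * a^2).
# An electron in a 1 nm wide well is an assumed illustrative choice.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
a = 1e-9                  # well width, m
eV = 1.602176634e-19      # J per eV

for n in range(1, 4):
    E_n = n**2 * math.pi**2 * hbar**2 / (2 * m_e * a**2)
    print(f"E_{n} = {E_n / eV:.3f} eV")   # roughly 0.376, 1.504, 3.384 eV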
principles of physical science

The development of quantitative science

Examples of the scientific method

It is nowadays taken for granted by scientists that every measurement is subject to error so that repetitions of apparently the same experiment give different results. In the intellectual climate of Galileo’s time, however, when logical syllogisms that admitted no gray area between right and wrong were the accepted means of deducing conclusions, his novel procedures were far from compelling. In judging his work one must remember that the conventions now accepted in reporting scientific results were adopted long after Galileo’s time. Thus, if, as is said, he stated as a fact that two objects dropped from the leaning tower of Pisa reached the ground together with not so much as a hand’s breadth between them, it need not be inferred that he performed the experiment himself or that, if he did, the result was quite so perfect. Some such experiment had indeed been performed a little earlier (1586) by the Flemish mathematician Simon Stevin, but Galileo idealized the result. A light ball and a heavy ball do not reach the ground together, nor is the difference between them always the same, for it is impossible to reproduce the ideal of dropping them exactly at the same instant. Nevertheless, Galileo was satisfied that it came closer to the truth to say that they fell together than that there was a significant difference between their rates. This idealization of imperfect experiments remains an essential scientific process, though nowadays it is considered proper to present (or at least have available for scrutiny) the primary observations, so that others may judge independently whether they are prepared to accept the author’s conclusion as to what would have been observed in an ideally conducted experiment. The principles may be illustrated by repeating, with the advantage of modern instruments, an experiment such as Galileo himself performed—namely, that of measuring the time taken by a ball to roll different distances down a gently inclined channel. The following account is of a real experiment designed to show in a very simple example how the process of idealization proceeds, and how the preliminary conclusions may then be subjected to more searching test. Lines equally spaced at 6 cm (2.4 inches) were scribed on a brass channel, and the ball was held at rest beside the highest line by means of a card. An electronic timer was started at the instant the card was removed, and the timer was stopped as the ball passed one of the other lines. Seven repetitions of each timing showed that the measurements typically spread over a range of 1/20 of a second, presumably because of human limitations. In such a case, where a measurement is subject to random error, the average of many repetitions gives an improved estimate of what the result would be if the source of random error were eliminated; the factor by which the estimate is improved is roughly the square root of the number of measurements. Moreover, the theory of errors attributable to the German mathematician Carl Friedrich Gauss allows one to make a quantitative estimate of the reliability of the result, as expressed in the table by the conventional symbol ±. This does not mean that the first result in column 2 is guaranteed to lie between 0.671 and 0.685 but that, if this determination of the average of seven measurements were to be repeated many times, about two-thirds of the determinations would lie within these limits.
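To illustrate the averaging and the ± reliability estimate described above in modern terms, a short calculation with assumed (made-up) timing values might look as follows; the numbers are illustrative only and are not the measurements from the original table:

import math

# seven assumed repeat timings (seconds) for one distance; illustrative values only
t = [0.64, 0.69, 0.67, 0.71, 0.66, 0.70, 0.68]
mean = sum(t) / len(t)
# sample standard deviation, then standard error of the mean (improves as sqrt(N))
s = math.sqrt(sum((x - mean) ** 2 for x in t) / (len(t) - 1))
sem = s / math.sqrt(len(t))
print(f"mean = {mean:.3f} s, standard error = +/-{sem:.3f} s")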
The representation of measurements by a graph, as in Figure 1 (data in the table of the Galileo experiment; the tangent to the curve is drawn at t = 0.6), was not available to Galileo but was developed shortly after his time as a consequence of the work of the French mathematician-philosopher René Descartes. The points appear to lie close to a parabola, and the curve that is drawn is defined by the equation x = 12t². The fit is not quite perfect, and it is worth trying to find a better formula. Since the operations of starting the timer when the card is removed to allow the ball to roll and stopping it as the ball passes a mark are different, there is a possibility that, in addition to random timing errors, a systematic error appears in each measured value of t; that is to say, each measurement t is perhaps to be interpreted as t + t0, where t0 is an as-yet-unknown constant timing error. If this is so, one might look to see whether the measured times were related to distance not by x = at², where a is a constant, but by x = a(t + t0)². This may also be tested graphically by first rewriting the equation as √x = √a(t + t0), which states that when the values of √x are plotted against measured values of t they should lie on a straight line. Figure 2 (the data in the table of the Galileo experiment plotted in this way) verifies this prediction rather closely; the line does not pass through the origin but rather cuts the horizontal axis at −0.09 second. From this, one deduces that t0 = 0.09 second and that (t + 0.09)/√x should be the same for all the pairs of measurements given in the accompanying table. The third column shows that this is certainly the case. Indeed, the constancy is better than might have been expected in view of the estimated errors. This must be regarded as a statistical accident; it does not imply any greater assurance in the correctness of the formula than if the figures in the last column had ranged, as they might very well have done, between 0.311 and 0.315. One would be surprised if a repetition of the whole experiment again yielded so nearly constant a result. A possible conclusion, then, is that for some reason—probably observational bias—the measured times underestimate by 0.09 second the real time t it takes a ball, starting from rest, to travel a distance x. If so, under ideal conditions x would be strictly proportional to t². Further experiments, in which the channel is set at different but still gentle slopes, suggest that the general rule takes the form x = at², with a proportional to the slope. This tentative idealization of the experimental measurements may need to be modified, or even discarded, in the light of further experiments. Now that it has been cast into mathematical form, however, it can be analyzed mathematically to reveal what consequences it implies. Also, this will suggest ways of testing it more searchingly. From a graph such as Figure 1, which shows how x depends on t, one may deduce the instantaneous speed of the ball at any instant. This is the slope of the tangent drawn to the curve at the chosen value of t; at t = 0.6 second, for example, the tangent as drawn describes how x would be related to t for a ball moving at a constant speed of about 14 cm per second. The lower slope before this instant and the higher slope afterward indicate that the ball is steadily accelerating.
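The straight-line test just described is easy to reproduce numerically. The sketch below fits √x against t by least squares for synthetic data generated under the assumed rule x = a(t + t0)²; the parameter values and noise level are invented for illustration and are not the article's measurements:

import numpy as np

rng = np.random.default_rng(0)
a_true, t0_true = 12.0, 0.09           # assumed "ideal" parameters (cm/s^2, s)
x = np.arange(6.0, 37.0, 6.0)          # distances 6, 12, ..., 36 cm
t_measured = np.sqrt(x / a_true) - t0_true + rng.normal(0.0, 0.01, x.size)

# fit sqrt(x) = sqrt(a) * (t + t0) as a straight line sqrt(x) = m*t + b
m, b = np.polyfit(t_measured, np.sqrt(x), 1)
a_fit = m ** 2
t0_fit = b / m
print(f"a = {a_fit:.1f} cm/s^2, t0 = {t0_fit:.3f} s")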
One could draw tangents at various values of t and come to the conclusion that the instantaneous speed was roughly proportional to the time that had elapsed since the ball began to roll. This procedure, with its inevitable inaccuracies, is rendered unnecessary by applying elementary calculus to the supposed formula. The instantaneous speed v is the derivative of x with respect to t; if x = at², then

v = dx/dt = 2at.

The implication that the velocity is strictly proportional to elapsed time is that a graph of v against t would be a straight line through the origin. On any graph of these quantities, whether straight or not, the slope of the tangent at any point shows how velocity is changing with time at that instant; this is the instantaneous acceleration f. For a straight-line graph of v against t, the slope and therefore the acceleration are the same at all times. Expressed mathematically, f = dv/dt = d²x/dt²; in the present case, f takes the constant value 2a.

The preliminary conclusion, then, is that a ball rolling down a straight slope experiences constant acceleration and that the magnitude of the acceleration is proportional to the slope. It is now possible to test the validity of the conclusion by finding what it predicts for a different experimental arrangement. If possible, an experiment is set up that allows more accurate measurements than those leading to the preliminary inference. Such a test is provided by a ball rolling in a curved channel so that its centre traces out a circular arc of radius r, as in Figure 3 (a ball rolling in a curved channel). Provided the arc is shallow, the slope at a distance x from its lowest point is very close to x/r, so that acceleration of the ball toward the lowest point is proportional to x/r. Introducing c to represent the constant of proportionality, this is written as a differential equation

d²x/dt² = −cx/r.

Here it is stated that, on a graph showing how x varies with t, the curvature d²x/dt² is proportional to x and has the opposite sign, as illustrated in Figure 4 (oscillation of a simple pendulum). As the graph crosses the axis, x and therefore the curvature are zero, and the line is locally straight. This graph represents the oscillations of the ball between extremes of ±A after it has been released from x = A at t = 0. The solution of the differential equation of which the diagram is the graphic representation is

x = A cos ωt,

where ω, called the angular frequency, is written for √(c/r). The ball takes time T = 2π/ω = 2π√(r/c) to return to its original position of rest, after which the oscillation is repeated indefinitely or until friction brings the ball to rest. According to this analysis, the period, T, is independent of the amplitude of the oscillation, and this rather unexpected prediction is one that may be stringently tested. Instead of letting the ball roll on a curved channel, the same path is more easily and exactly realized by making it the bob of a simple pendulum. To test that the period is independent of amplitude two pendulums may be made as nearly identical as possible, so that they keep in step when swinging with the same amplitude. They are then swung with different amplitudes. It requires considerable care to detect any difference in period unless one amplitude is large, when the period is slightly longer. An observation that very nearly agrees with prediction, but not quite, does not necessarily show the initial supposition to be mistaken.
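The size of the amplitude effect mentioned in the last sentence can be anticipated from the standard textbook expansion for a simple pendulum of length r swinging through an angular amplitude θ₀; the formula below is quoted as background and is not derived in this text:

T = 2\pi\sqrt{\frac{r}{g}}\left(1 + \frac{\theta_0^{2}}{16} + \cdots\right)

Even for a swing of 30° (θ₀ ≈ 0.52 radian) the correction is only about 1.7 percent, which is why considerable care is needed to detect it.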
In this case, the differential equation that predicted exact constancy of period was itself an approximation. When it is reformulated with the true expression for the slope replacing x/r, the solution (which involves quite heavy mathematics) shows a variation of period with amplitude that has been rigorously verified. Far from being discredited, the tentative assumption has emerged with enhanced support. Galileo’s law of acceleration, the physical basis of the expression 2π√(r/c) for the period, is further strengthened by finding that T varies directly as the square root of r—i.e., the length of the pendulum.

In addition, such measurements allow the value of the constant c to be determined with a high degree of precision, and it is found to coincide with the acceleration g of a freely falling body. In fact, the formula for the period of small oscillations of a simple pendulum of length r, T = 2π√(r/g), is at the heart of some of the most precise methods for measuring g. This would not have happened unless the scientific community had accepted Galileo’s description of the ideal behaviour and did not expect to be shaken in its belief by small deviations, so long as they could be understood as reflecting inevitable random discrepancies between the ideal and its experimental realization. The development of quantum mechanics in the first quarter of the 20th century was stimulated by the reluctant acceptance that this description systematically failed when applied to objects of atomic size. In this case, it was not a question, as with the variations of period, of translating the physical ideas into mathematics more precisely; the whole physical basis needed radical revision. Yet, the earlier ideas were not thrown out—they had been found to work well in far too many applications to be discarded. What emerged was a clearer understanding of the circumstances in which their absolute validity could safely be assumed.

Criterion for scientific theory

The experiments just described in detail as examples of scientific method were successful in that they agreed with expectation. They would have been just as successful if, in spite of being well conducted, they had disagreed because they would have revealed an error in the primary assumptions. The philosopher Karl Popper’s widely accepted criterion for a scientific theory is that it must not simply pass such experimental tests as may be applied but that it must be formulated in such a way that falsification is in principle possible. For all its value as a test of scientific pretensions, however, it must not be supposed that the experimenter normally proceeds with Popper’s criterion in mind. Normally he hopes to convince himself that his initial conception is correct. If a succession of tests agrees with (or fails to falsify) a hypothesis, it is regarded as reasonable to treat the hypothesis as true, at all events until it is discredited by a subsequent test. The scientist is not concerned with providing a guarantee of his conclusion, since, however many tests support it, there remains the possibility that the next one will not. His concern is to convince himself and his critical colleagues that a hypothesis has passed enough tests to make it worth accepting until a better one presents itself.
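Returning briefly to the pendulum formula quoted above, a small numerical illustration shows how it is used to measure g; the length and the timing are invented for the purpose and are not taken from the text. A pendulum of length r = 1.000 metre observed to complete 100 small swings in 200.7 seconds has T = 2.007 seconds, and

g = \frac{4\pi^{2} r}{T^{2}} = \frac{4\pi^{2}\times 1.000}{(2.007)^{2}} \approx 9.80\ \text{m s}^{-2}.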
The Newtonian paradigm Up to this point the investigation has been concerned exclusively with kinetics—that is to say, providing an accurate mathematical description of motion, in this case of a ball on an inclined plane, with no implied explanation of the physical processes responsible. Newton’s general dynamic theory, as expounded in his Philosophiae Naturalis Principia Mathematica of 1687, laid down in the form of his laws of motion, together with other axioms and postulates, the rules to follow in analyzing the motion of bodies interacting among themselves. This theory of classical mechanics is described in detail in the article mechanics, but some general comments may be offered here. For the present purpose, it seems sufficient to consider only bodies moving along a straight line and acted upon by forces parallel to the motion. Newton’s laws are, in fact, considerably more general than this and encompass motion in curves as a result of forces deflecting a body from its initial direction. Laws of motion Newton’s first law may more properly be ascribed to Galileo. It states that a body continues at rest or in uniform motion along a straight line unless it is acted upon by a force, and it enables one to recognize when a force is acting. A tennis ball struck by a racket experiences a sudden change in its motion attributable to a force exerted by the racket. The player feels the shock of the impact. According to Newton’s third law (action and reaction are equal and opposite), the force that the ball exerts on the racket is equal and opposite to that which the racket exerts on the ball. Moreover, a second balanced action and reaction acts between player and racket. Newton’s second law quantifies the concept of force, as well as that of inertia. A body acted upon by a steady force suffers constant acceleration. Thus, a freely falling body or a ball rolling down a plane has constant acceleration, as has been seen, and this is to be interpreted in Newton’s terms as evidence that the force of gravity, which causes the acceleration, is not changed by the body’s motion. The same force (e.g., applied by a string which includes a spring balance to check that the force is the same in different experiments) applied to different bodies causes different accelerations; and it is found that, if a chosen strength of force causes twice the acceleration in body A as it does in body B, then a different force also causes twice as much acceleration in A as in B. The ratio of accelerations is independent of the force and is therefore a property of the bodies alone. They are said to have inertia (or inertial mass) in inverse proportion to the accelerations. This experimental fact, which is the essence of Newton’s second law, enables one to assign a number to every body that is a measure of its mass. Thus, a certain body may be chosen as a standard of mass and assigned the number 1. Another body is said to have mass m if the body shows only a fraction 1/m of the acceleration of this standard when the two are subjected to the same force. By proceeding in this way, every body may be assigned a mass. It is because experiment allows this definition to be made that a given force causes every body to show acceleration f such that mf is the same for all bodies. This means that the product mf is determined only by the force and not by the particular body on which it acts, and mf is defined to be the numerical measure of the force. 
In this way a consistent set of measures of force and mass is arrived at, having the property that F = mf. In this equation F, m, and f are to be interpreted as numbers measuring the strength of the force, the magnitude of the mass, and the rate of acceleration; and the product of the numbers m and f is always equal to the number F. The product mv, called motus (motion) by Newton, is now termed momentum. Newton’s second law states that the rate of change of momentum equals the strength of the applied force. In order to assign a numerical measure m to the mass of a body, a standard of mass must be chosen and assigned the value m = 1. Similarly, to measure displacement a unit of length is needed, and for velocity and acceleration a unit of time also must be defined. Given these, the numerical measure of a force follows from mf without need to define a unit of force. Thus, in the Système Internationale d’Unités (SI), in which the units are the standard kilogram, the standard metre, and the standard second, a force of magnitude unity is one that, applied to a mass of one kilogram, causes its velocity to increase steadily by one metre per second during every second the force is acting. Law of gravitation The idealized observation of Galileo that all bodies in free-fall accelerate equally implies that the gravitational force causing acceleration bears a constant relation to the inertial mass. According to Newton’s postulated law of gravitation, two bodies of mass m1 and m2, separated by a distance r, exert equal attractive forces on each other (the equal action and reaction of the third law of motion) of magnitude proportional to m1m2/r2. The constant of proportionality, G, in the gravitational law, F = Gm1m2/r2, is thus to be regarded as a universal constant, applying to all bodies, whatever their constitution. The constancy of gravitational acceleration, g, at a given point on the Earth is a particular case of this general law. Application of Newton’s laws In the same way that the timing of a pendulum provided a more rigorous test of Galileo’s kinematical theory than could be achieved by direct testing with balls rolling down planes, so with Newton’s laws the most searching tests are indirect and based on mathematically derived consequences. Kepler’s laws of planetary motion are just such an example, and in the two centuries after Newton’s Principia the laws were applied to elaborate and arduous computations of the motion of all planets, not simply as isolated bodies attracted by the Sun but as a system in which every one perturbs the motion of the others by mutual gravitational interactions. (The work of the French mathematician and astronomer Pierre-Simon, marquis de Laplace, was especially noteworthy.) Calculations of this kind have made it possible to predict the occurrence of eclipses many years ahead. Indeed, the history of past eclipses may be written with extraordinary precision so that, for instance, Thucydides’ account of the lunar eclipse that fatally delayed the Athenian expedition against Syracuse in 413 bce matches the calculations perfectly (see eclipse). Similarly, unexplained small departures from theoretical expectation of the motion of Uranus led John Couch Adams of England and Urbain-Jean-Joseph Le Verrier of France to predict in 1845 that a new planet (Neptune) would be seen at a particular point in the heavens. The discovery of Pluto in 1930 was achieved in much the same way. 
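For orientation, the strength of the force given by the law F = Gm₁m₂/r² quoted above can be illustrated numerically; the value of G is the standard SI figure, not one given in this text:

F = \frac{Gm_1 m_2}{r^{2}} \approx \frac{(6.67\times 10^{-11}\ \text{N m}^2\ \text{kg}^{-2})(1\ \text{kg})(1\ \text{kg})}{(1\ \text{m})^{2}} \approx 6.7\times 10^{-11}\ \text{N},

a reminder that gravitation is conspicuous only because ordinary bodies contain enormous numbers of particles and because it is never cancelled by charges of opposite sign.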
There is no obvious reason why the inertial mass m that governs the response of a body to an applied force should also determine the gravitational force between two bodies, as described above. Consequently, the period of a pendulum is independent of its material and governed only by its length and the local value of g; this has been verified with an accuracy of a few parts per million. Still more sensitive tests, as originally devised by the Hungarian physicist Roland, baron von Eötvös (1890), and repeated several times since, have demonstrated clearly that the accelerations of different bodies in a given gravitational environment are identical within a few parts in 1012. An astronaut in free orbit can remain poised motionless in the centre of the cabin of his spacecraft, surrounded by differently constituted objects, all equally motionless (except for their extremely weak mutual attractions) because all of them are identically affected by the gravitational field in which they are moving. He is unaware of the gravitational force, just as those on the Earth are unaware of the Sun’s attraction, moving as they do with the Earth in free orbit around the Sun. Albert Einstein made this experimental finding a central feature of his general theory of relativity (see relativity). Ensuing developments and their ramifications Newton believed that everything moved in relation to a fixed but undetectable spatial frame so that it could be said to have an absolute velocity. Time also flowed at the same steady pace everywhere. Even if there were no matter in the universe, the frame of the universe would still exist, and time would still flow even though there was no one to observe its passage. In Newton’s view, when matter is present it is unaffected by its motion through space. If the length of a moving metre stick were compared with the length of one at rest, they would be found to be the same. Clocks keep universal time whether they are moving or not; therefore, two identical clocks, initially synchronized, would still be synchronized after one had been carried into space and brought back. The laws of motion take such a form that they are not changed by uniform motion. They were devised to describe accurately the response of bodies to forces whether in the heavens or on the Earth, and they lose no validity as a result of the Earth’s motion at 30 km per second in its orbit around the Sun. This motion, in fact, would not be discernible by an observer in a closed box. The supposed invariance of the laws of motion, in addition to standards of measurement, to uniform translation was called “Galilean invariance” by Einstein. The impossibility of discerning absolute velocity led in Newton’s time to critical doubts concerning the necessity of postulating an absolute frame of space and universal time, and the doubts of the philosophers George Berkeley and Gottfried Wilhelm Leibniz, among others, were still more forcibly presented in the severe analysis of the foundations of classical mechanics by the Austrian physicist Ernst Mach in 1883. James Clerk Maxwell’s theory of electromagnetic phenomena (1865), including his description of light as electromagnetic waves, brought the problem to a state of crisis. 
It became clear that if light waves were propagated in the hypothetical ether that filled all space and provided an embodiment of Newton’s absolute frame (see below), it would not be logically consistent to accept both Maxwell’s theory and the ideas expressed in Galilean invariance, for the speed of light as it passed an observer would reveal how rapidly he was traveling through the ether. Ingenious attempts by the physicists George FitzGerald of Ireland and Hendrik A. Lorentz of the Netherlands to devise a compromise to salvage the notion of ether were eventually superseded by Einstein’s special theory of relativity (see relativity). Einstein proposed in 1905 that all laws of physics, not solely those of mechanics, must take the same form for observers moving uniformly relative to one another, however rapidly. In particular, if two observers, using identical metre sticks and clocks, set out to measure the speed of a light signal as it passes them, both would obtain the same value no matter what their relative velocity might be; in a Newtonian world, of course, the measured values would differ by the relative velocity of the two observers. This is but one example of the counterintuitive character of relativistic physics, but the deduced consequences of Einstein’s postulate have been so frequently and so accurately verified by experiment that it has been incorporated as a fundamental axiom in physical theory. With the abandonment of the ether hypothesis, there has been a reversion to a philosophical standpoint reluctantly espoused by Newton. To him and to his contemporaries the idea that two bodies could exert gravitational forces on each other across immense distances of empty space was abhorrent. However, attempts to develop Descartes’s notion of a space-filling fluid ether as a transmitting medium for forces invariably failed to account for the inverse square law. Newton himself adopted a pragmatic approach, deducing the consequences of his laws and showing how well they agreed with observation; he was by no means satisfied that a mechanical explanation was impossible, but he confessed in the celebrated remark “Hypotheses non fingo” (Latin: “I frame no hypotheses”) that he had no solution to offer. A similar reversion to the safety of mathematical description is represented by the rejection, during the early 1900s, of the explanatory ether models of the 19th century and their replacement by model-free analysis in terms of relativity theory. This certainly does not imply giving up the use of models as imaginative aids in extending theories, predicting new effects, or devising interesting experiments; if nothing better is available, however, a mathematical formulation that yields verifiably correct results is to be preferred over an intuitively acceptable model that does not. Interplay of experiment and theory The foregoing discussion should have made clear that progress in physics, as in the other sciences, arises from a close interplay of experiment and theory. In a well-established field like classical mechanics, it may appear that experiment is almost unnecessary and all that is needed is the mathematical or computational skill to discover the solutions of the equations of motion. This view, however, overlooks the role of observation or experiment in setting up the problem in the first place. To discover the conditions under which a bicycle is stable in an upright position or can be made to turn a corner, it is first necessary to invent and observe a bicycle. 
The equations of motion are so general and serve as the basis for describing so extended a range of phenomena that the mathematician must usually look at the behaviour of real objects in order to select those that are both interesting and soluble. His analysis may indeed suggest the existence of interesting related effects that can be examined in the laboratory; thus, the invention or discovery of new things may be initiated by the experimenter or the theoretician. To employ terms such as this has led, especially in the 20th century, to a common assumption that experimentation and theorizing are distinct activities, rarely performed by the same person. It is true that almost all active physicists pursue their vocation primarily in one mode or the other. Nevertheless, the innovative experimenter can hardly make progress without an informed appreciation of the theoretical structure, even if he is not technically competent to find the solution of particular mathematical problems. By the same token, the innovative theorist must be deeply imbued with the way real objects behave, even if he is not technically competent to put together the apparatus to examine the problem. The fundamental unity of physical science should be borne in mind during the following outline of characteristic examples of experimental and theoretical physics. Characteristic experimental procedures Unexpected observation Heike Kamerlingh Onnes of the Netherlands, the first to liquefy helium, cooled a thread of mercury to within 4 K of absolute zero (4 K equals −269 °C) to test his belief that electrical resistance would tend to vanish at zero. This was what the first experiment seemed to verify, but a more careful repetition showed that instead of falling gradually, as he expected, all trace of resistance disappeared abruptly just above 4 K. This phenomenon of superconductivity, which Kamerlingh Onnes discovered in 1911, defied theoretical explanation until 1957. The not-so-unexpected chance From 1807 the Danish physicist and chemist Hans Christian Ørsted came to believe that electrical phenomena could influence magnets, but it was not until 1819 that he turned his investigations to the effects produced by an electric current. On the basis of his tentative models he tried on several occasions to see if a current in a wire caused a magnet needle to turn when it was placed transverse to the wire, but without success. Only when it occurred to him, without forethought, to arrange the needle parallel on the wire did the long-sought effect appear. A second example of this type of experimental situation involves the discovery of electromagnetic induction by the English physicist and chemist Michael Faraday. Aware that an electrically charged body induces a charge in a nearby body, Faraday sought to determine whether a steady current in a coil of wire would induce such a current in another short-circuited coil close to it. He found no effect except in instances where the current in the first coil was switched on or off, at which time a momentary current appeared in the other. He was in effect led to the concept of electromagnetic induction by changing magnetic fields. Qualitative tests to distinguish alternative theories Another qualitative difference between the wave and corpuscular theories concerned the speed of light in a transparent medium. 
To explain the bending of light rays toward the normal to the surface when light entered the medium, the corpuscular theory demanded that light go faster while the wave theory required that it go slower. Jean-Bernard-Léon Foucault showed that the latter was correct (1850). The three categories of experiments or observations discussed above are those that do not demand high-precision measurement. The following, however, are categories in which measurement at varying degrees of precision is involved. Direct comparison of theory and experiment This is one of the commonest experimental situations. Typically, a theoretical model makes certain specific predictions, perhaps novel in character, perhaps novel only in differing from the predictions of competing theories. There is no fixed standard by which the precision of measurement may be judged adequate. As is usual in science, the essential question is whether the conclusion carries conviction, and this is conditioned by the strength of opinion regarding alternative conclusions. Where strong prejudice obtains, opponents of a heterodox conclusion may delay acceptance indefinitely by insisting on a degree of scrupulosity in experimental procedure that they would unhesitatingly dispense with in other circumstances. For example, few experiments in paranormal phenomena, such as clairvoyance, which have given positive results under apparently stringent conditions, have made converts among scientists. In the strictly physical domain, the search for ether drift provides an interesting study. At the height of acceptance of the hypothesis that light waves are carried by a pervasive ether, the question of whether the motion of the Earth through space dragged the ether with it was tested (1887) by A.A. Michelson and Edward W. Morley of the United States by looking for variations in the velocity of light as it traveled in different directions in the laboratory. Their conclusion was that there was a small variation, considerably less than the Earth’s velocity in its orbit around the Sun, and that the ether was therefore substantially entrained in the Earth’s motion. According to Einstein’s relativity theory (1905), no variation should have been observed, but during the next 20 years another American investigator, Dayton C. Miller, repeated the experiment many times in different situations and concluded that, at least on a mountaintop, there was a real “ether wind” of about 10 km per second. Although Miller’s final presentation was a model of clear exposition, with evidence scrupulously displayed and discussed, it has been set aside and virtually forgotten. This is partly because other experiments failed to show the effect; however, their conditions were not strictly comparable, since few, if any, were conducted on mountaintops. More significantly, other tests of relativity theory supported it in so many different ways as to lead to the consensus that one discrepant set of observations cannot be allowed to weigh against the theory. At the opposite extreme may be cited the 1919 expedition of the English scientist-mathematician Arthur Stanley Eddington to measure the very small deflection of the light from a star as it passed close to the Sun—a measurement that requires a total eclipse. The theories involved here were Einstein’s general theory of relativity and the Newtonian particle theory of light, which predicted only half the relativistic effect. 
The conclusion of this exceedingly difficult measurement—that Einstein’s theory was followed within the experimental limits of error, which amounted to ±30 percent—was the signal for worldwide feting of Einstein. If his theory had not appealed aesthetically to those able to appreciate it and if there had been any passionate adherents to the Newtonian view, the scope for error could well have been made the excuse for a long drawn-out struggle, especially since several repetitions at subsequent eclipses did little to improve the accuracy. In this case, then, the desire to believe was easily satisfied. It is gratifying to note that recent advances in radio astronomy have allowed much greater accuracy to be achieved, and Einstein’s prediction is now verified within about 1 percent. During the decade after his expedition, Eddington developed an extremely abstruse fundamental theory that led him to assert that the quantity hc/2πe2 (h is Planck’s constant, c the velocity of light, and e the charge on the electron) must take the value 137 exactly. At the time, uncertainties in the values of h and e allowed its measured value to be given as 137.29 ± 0.11; in accordance with the theory of errors, this implies that there was estimated to be about a 1 percent chance that a perfectly precise measurement would give 137. In the light of Eddington’s great authority there were many prepared to accede to his belief. Since then the measured value of this quantity has come much closer to Eddington’s prediction and is given as 137.03604 ± 0.00011. The discrepancy, though small, is 330 times the estimated error, compared with 2.6 times for the earlier measurement, and therefore a much more weighty indication against Eddington’s theory. As the intervening years have cast no light on the virtual impenetrability of his argument, there is now hardly a physicist who takes it seriously. Compilation of data Technical design, whether of laboratory instruments or for industry and commerce, depends on knowledge of the properties of materials (density, strength, electrical conductivity, etc.), some of which can only be found by very elaborate experiments (e.g., those dealing with the masses and excited states of atomic nuclei). One of the important functions of standards laboratories is to improve and extend the vast body of factual information, but much also arises incidentally rather than as the prime objective of an investigation or may be accumulated in the hope of discovering regularities or to test the theory of a phenomenon against a variety of occurrences. When chemical compounds are heated in a flame, the resulting colour can be used to diagnose the presence of sodium (orange), copper (green-blue), and many other elements. This procedure has long been used. Spectroscopic examination shows that every element has its characteristic set of spectral lines, and the discovery by the Swiss mathematician Johann Jakob Balmer of a simple arithmetic formula relating the wavelengths of lines in the hydrogen spectrum (1885) proved to be the start of intense activity in precise wavelength measurements of all known elements and the search for general principles. With the Danish physicist Niels Bohr’s quantum theory of the hydrogen atom (1913) began an understanding of the basis of Balmer’s formula; thenceforward spectroscopic evidence underpinned successive developments toward what is now a successful theory of atomic structure. 
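The kind of regularity Balmer found can be stated compactly in its modern form; the formula and the value of the Rydberg constant below are standard textbook figures rather than quantities taken from this text. For the hydrogen lines in the visible region,

\frac{1}{\lambda} = R\left(\frac{1}{2^{2}} - \frac{1}{n^{2}}\right),\qquad R \approx 1.097\times 10^{7}\ \text{m}^{-1},

so that n = 3, for example, gives λ ≈ 656 nm, the prominent red line of hydrogen.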
Tests of fundamental concepts Coulomb’s law states that the force between two electric charges varies as the inverse square of their separation. Direct tests, such as those performed with a special torsion balance by the French physicist Charles-Augustin de Coulomb, for whom the law is named, can be at best approximate. A very sensitive indirect test, devised by the English scientist and clergyman Joseph Priestley (following an observation by Benjamin Franklin) but first realized by the English physicist and chemist Henry Cavendish (1771), relies on the mathematical demonstration that no electrical changes occurring outside a closed metal shell—as, for example, by connecting it to a high voltage source—produce any effect inside if the inverse square law holds. Since modern amplifiers can detect minute voltage changes, this test can be made very sensitive. It is typical of the class of null measurements in which only the theoretically expected behaviour leads to no response and any hypothetical departure from theory gives rise to a response of calculated magnitude. It has been shown in this way that if the force between charges, r apart, is proportional not to 1/r2 but to 1/r2+x, then x is less than 2 × 10−9. According to the relativistic theory of the hydrogen atom proposed by the English physicist P.A.M. Dirac (1928), there should be two different excited states exactly coinciding in energy. Measurements of spectral lines resulting from transitions in which these states were involved hinted at minute discrepancies, however. Some years later (c. 1950) Willis E. Lamb, Jr., and Robert C. Retherford of the United States, employing the novel microwave techniques that wartime radar contributed to peacetime research, were able not only to detect the energy difference between the two levels directly but to measure it rather precisely as well. The difference in energy, compared to the energy above the ground state, amounts to only 4 parts in 10 million, but this was one of the crucial pieces of evidence that led to the development of quantum electrodynamics, a central feature of the modern theory of fundamental particles (see subatomic particle: Quantum electrodynamics). Characteristic theoretical procedures Only at rare intervals in the development of a subject, and then only with the involvement of a few, are theoretical physicists engaged in introducing radically new concepts. The normal practice is to apply established principles to new problems so as to extend the range of phenomena that can be understood in some detail in terms of accepted fundamental ideas. Even when, as with the quantum mechanics of Werner Heisenberg (formulated in terms of matrices; 1925) and of Erwin Schrödinger (developed on the basis of wave functions; 1926), a major revolution is initiated, most of the accompanying theoretical activity involves investigating the consequences of the new hypothesis as if it were fully established in order to discover critical tests against experimental facts. There is little to be gained by attempting to classify the process of revolutionary thought because every case history throws up a different pattern. What follows is a description of typical procedures as normally used in theoretical physics. As in the preceding section, it will be taken for granted that the essential preliminary of coming to grips with the nature of the problem in general descriptive terms has been accomplished, so that the stage is set for systematic, usually mathematical, analysis. 
Direct solution of fundamental equations

Insofar as the Sun and planets, with their attendant satellites, can be treated as concentrated masses moving under their mutual gravitational influences, they form a system that has not so overwhelmingly many separate units as to rule out step-by-step calculation of the motion of each. Modern high-speed computers are admirably adapted to this task and are used in this way to plan space missions and to decide on fine adjustments during flight. Most physical systems of interest, however, are either composed of too many units or are governed not by the rules of classical mechanics but rather by quantum mechanics, which is much less suited for direct computation.

The mechanical behaviour of a body is analyzed in terms of Newton’s laws of motion by imagining it dissected into a number of parts, each of which is directly amenable to the application of the laws or has been separately analyzed by further dissection so that the rules governing its overall behaviour are known. A very simple illustration of the method is given by the arrangement in Figure 5 (dissection of a complex system into elementary parts), where two masses are joined by a light string passing over a pulley. The heavier mass, m₁, falls with constant acceleration, but what is the magnitude of the acceleration? If the string were cut, each mass would experience the force, m₁g or m₂g, due to its gravitational attraction and would fall with acceleration g. The fact that the string prevents this is taken into account by assuming that it is in tension and also acts on each mass. When the string is cut just above m₂, the state of accelerated motion just before the cut can be restored by applying equal and opposite forces (in accordance with Newton’s third law) to the cut ends, as in Figure 5; the string above the cut pulls the string below upward with a force T, while the string below pulls that above downward to the same extent. As yet, the value of T is not known. Now if the string is light, the tension T is sensibly the same everywhere along it, as may be seen by imagining a second cut, higher up, to leave a length of string acted upon by T at the bottom and possibly a different force T′ at the second cut. The total force T − T′ on the string must be very small if the cut piece is not to accelerate violently, and, if the mass of the string is neglected altogether, T and T′ must be equal. This does not apply to the tension on the two sides of the pulley, for some resultant force will be needed to give it the correct accelerative motion as the masses move. This is a case for separate examination, by further dissection, of the forces needed to cause rotational acceleration. To simplify the problem one can assume the pulley to be so light that the difference in tension on the two sides is negligible. Then the problem has been reduced to two elementary parts—on the right the upward force on m₂ is T − m₂g, so that its acceleration upward is T/m₂ − g; and on the left the downward force on m₁ is m₁g − T, so that its acceleration downward is g − T/m₁. If the string cannot be extended, these two accelerations must be identical, from which it follows that T = 2m₁m₂g/(m₁ + m₂) and the acceleration of each mass is g(m₁ − m₂)/(m₁ + m₂). Thus, if one mass is twice the other (m₁ = 2m₂), its acceleration downward is g/3.

A liquid may be imagined divided into small volume elements, each of which moves in response to gravity and the forces imposed by its neighbours (pressure and viscous drag).
The forces are constrained by the requirement that the elements remain in contact, even though their shapes and relative positions may change with the flow. From such considerations are derived the differential equations that describe fluid motion (see fluid mechanics). The dissection of a system into many simple units in order to describe the behaviour of a complex structure in terms of the laws governing the elementary components is sometimes referred to, often with a pejorative implication, as reductionism. Insofar as it may encourage concentration on those properties of the structure that can be explained as the sum of elementary processes to the detriment of properties that arise only from the operation of the complete structure, the criticism must be considered seriously. The physical scientist is, however, well aware of the existence of the problem (see below Simplicity and complexity). If he is usually unrepentant about his reductionist stance, it is because this analytical procedure is the only systematic procedure he knows, and it is one that has yielded virtually the whole harvest of scientific inquiry. What is set up as a contrast to reductionism by its critics is commonly called the holistic approach, whose title confers a semblance of high-mindedness while hiding the poverty of tangible results it has produced. Simplified models The process of dissection was early taken to its limit in the kinetic theory of gases, which in its modern form essentially started with the suggestion of the Swiss mathematician Daniel Bernoulli (in 1738) that the pressure exerted by a gas on the walls of its container is the sum of innumerable collisions by individual molecules, all moving independently of each other. Boyle’s law—that the pressure exerted by a given gas is proportional to its density if the temperature is kept constant as the gas is compressed or expanded—follows immediately from Bernoulli’s assumption that the mean speed of the molecules is determined by temperature alone. Departures from Boyle’s law require for their explanation the assumption of forces between the molecules. It is very difficult to calculate the magnitude of these forces from first principles, but reasonable guesses about their form led Maxwell (1860) and later workers to explain in some detail the variation with temperature of thermal conductivity and viscosity, while the Dutch physicist Johannes Diederik van der Waals (1873) gave the first theoretical account of the condensation to liquid and the critical temperature above which condensation does not occur. The first quantum mechanical treatment of electrical conduction in metals was provided in 1928 by the German physicist Arnold Sommerfeld, who used a greatly simplified model in which electrons were assumed to roam freely (much like non-interacting molecules of a gas) within the metal as if it were a hollow container. The most remarkable simplification, justified at the time by its success rather than by any physical argument, was that the electrical force between electrons could be neglected. Since then, justification—without which the theory would have been impossibly complicated—has been provided in the sense that means have been devised to take account of the interactions whose effect is indeed considerably weaker than might have been supposed. In addition, the influence of the lattice of atoms on electronic motion has been worked out for many different metals. 
This development involved experimenters and theoreticians working in harness; the results of specially revealing experiments served to check the validity of approximations without which the calculations would have required excessive computing time. These examples serve to show how real problems almost always demand the invention of models in which, it is hoped, the most important features are correctly incorporated while less-essential features are initially ignored and allowed for later if experiment shows their influence not to be negligible. In almost all branches of mathematical physics there are systematic procedures—namely, perturbation techniques—for adjusting approximately correct models so that they represent the real situation more closely. Recasting of basic theory Newton’s laws of motion and of gravitation and Coulomb’s law for the forces between charged particles lead to the idea of energy as a quantity that is conserved in a wide range of phenomena (see below Conservation laws and extremal principles). It is frequently more convenient to use conservation of energy and other quantities than to start an analysis from the primitive laws. Other procedures are based on showing that, of all conceivable outcomes, the one followed is that for which a particular quantity takes a maximum or a minimum value—e.g., entropy change in thermodynamic processes, action in mechanical processes, and optical path length for light rays. General observations The foregoing accounts of characteristic experimental and theoretical procedures are necessarily far from exhaustive. In particular, they say too little about the technical background to the work of the physical scientist. The mathematical techniques used by the modern theoretical physicist are frequently borrowed from the pure mathematics of past eras. The work of Augustin-Louis Cauchy on functions of a complex variable, of Arthur Cayley and James Joseph Sylvester on matrix algebra, and of Bernhard Riemann on non-Euclidean geometry, to name but a few, were investigations undertaken with little or no thought for practical applications. The experimental physicist, for his part, has benefited greatly from technological progress and from instrumental developments that were undertaken in full knowledge of their potential research application but were nevertheless the product of single-minded devotion to the perfecting of an instrument as a worthy thing-in-itself. The developments during World War II provide the first outstanding example of technology harnessed on a national scale to meet a national need. Postwar advances in nuclear physics and in electronic circuitry, applied to almost all branches of research, were founded on the incidental results of this unprecedented scientific enterprise. The semiconductor industry sprang from the successes of microwave radar and, in its turn, through the transistor, made possible the development of reliable computers with power undreamed of by the wartime pioneers of electronic computing. From all these, the research scientist has acquired the means to explore otherwise inaccessible problems. Of course, not all of the important tools of modern-day science were the by-products of wartime research. The electron microscope is a good case in point. 
Moreover, this instrument may be regarded as a typical example of the sophisticated equipment to be found in all physical laboratories, of a complexity that the research-oriented user frequently does not understand in detail, and whose design depended on skills he rarely possesses. It should not be thought that the physicist does not give a just return for the tools he borrows. Engineering and technology are deeply indebted to pure science, while much modern pure mathematics can be traced back to investigations originally undertaken to elucidate a scientific problem.

Concepts fundamental to the attitudes and methods of physical science

Newton’s law of gravitation and Coulomb’s electrostatic law both give the force between two particles as inversely proportional to the square of their separation and directed along the line joining them. The force acting on one particle is a vector. It can be represented by a line with arrowhead; the length of the line is made proportional to the strength of the force, and the direction of the arrow shows the direction of the force. If a number of particles are acting simultaneously on the one considered, the resultant force is found by vector addition; the vectors representing each separate force are joined head to tail, and the resultant is given by the line joining the first tail to the last head.

In what follows the electrostatic force will be taken as typical, and Coulomb’s law is expressed in the form F = q₁q₂r/4πε₀r³. The boldface characters F and r are vectors, F being the force which a point charge q₁ exerts on another point charge q₂. The combination r/r³ is a vector in the direction of r, the line joining q₁ to q₂, with magnitude 1/r² as required by the inverse square law. When r is rendered in lightface, it means simply the magnitude of the vector r, without direction. The combination 4πε₀ is a constant whose value is irrelevant to the present discussion. The combination q₁r/4πε₀r³ is called the electric field strength due to q₁ at a distance r from q₁ and is designated by E; it is clearly a vector parallel to r. At every point in space E takes a different value, determined by r, and the complete specification of E(r)—that is, the magnitude and direction of E at every point r—defines the electric field. If there are a number of different fixed charges, each produces its own electric field of inverse square character, and the resultant E at any point is the vector sum of the separate contributions. Thus, the magnitude and direction of E may change in a complicated fashion from point to point. Any particle carrying charge q that is put in a place where the field is E experiences a force qE (provided the other charges are not displaced when it is inserted; if they are, E(r) must be recalculated for the actual positions of the charges).

The contours on a standard map are lines along which the height of the ground above sea level is constant. They usually take a complicated form, but if one imagines contours drawn at very close intervals of height and a small portion of the map to be greatly enlarged, the contours of this local region will become very nearly straight, like the two drawn in Figure 6 (definition of a vector gradient) for heights h and h + δh. Walking along any of these contours, one remains on the level. The slope of the ground is steepest along PQ, and, if the distance from P to Q is δl, the gradient is δh/δl or dh/dl in the limit when δh and δl are allowed to go to zero.
The vector gradient is a vector of this magnitude drawn parallel to PQ and is written as grad h, or ∇h. Walking along any other line PR at an angle θ to PQ, the slope is less in the ratio PQ/PR, or cos θ. The slope along PR is (grad h) cos θ and is the component of the vector grad h along a line at an angle θ to the vector itself. This is an example of the general rule for finding components of vectors. In particular, the components parallel to the x and y directions have magnitude ∂h/∂x and ∂h/∂y (the partial derivatives, represented by the symbol ∂, mean, for instance, that ∂h/∂x is the rate at which h changes with distance in the x direction, if one moves so as to keep y constant; and ∂h/∂y is the rate of change in the y direction, x being constant). This result is expressed by

grad h = (∂h/∂x, ∂h/∂y),

the quantities in brackets being the components of the vector along the coordinate axes. Vector quantities that vary in three dimensions can similarly be represented by three Cartesian components, along x, y, and z axes; e.g., V = (Vx, Vy, Vz).

Line integral

Imagine a line, not necessarily straight, drawn between two points A and B and marked off in innumerable small elements like δl in Figure 7 (definition of line integral), each of which is to be thought of as a vector. If a vector field takes a value V at this point, the quantity Vδl cos θ, θ being the angle between V and δl, is called the scalar product of the two vectors V and δl and is written as V·δl. The sum of all similar contributions from the different δl gives, in the limit when the elements are made infinitesimally small, the line integral ∫V·dl along the line chosen.

Reverting to the contour map, it will be seen that ∫(grad h)·dl is just the vertical height of B above A and that the value of the line integral is the same for all choices of line joining the two points. When a scalar quantity ϕ, having magnitude but not direction, is uniquely defined at every point in space, as h is on a two-dimensional map, the vector grad ϕ is then said to be irrotational, and ϕ(r) is the potential function from which a vector field grad ϕ can be derived. Not all vector fields can be derived from a potential function, but the Coulomb and gravitational fields are of this form.

A potential function ϕ(r) defined by ϕ = A/r, where A is a constant, takes a constant value on every sphere centred at the origin. The set of nesting spheres is the analogue in three dimensions of the contours of height on a map, and grad ϕ at a point r is a vector pointing normal to the sphere that passes through r; it therefore lies along the radius through r, and has magnitude −A/r². That is to say, grad ϕ = −Ar/r³ and describes a field of inverse square form. If A is set equal to q₁/4πε₀, the electrostatic field due to a charge q₁ at the origin is E = −grad ϕ. When the field is produced by a number of point charges, each contributes to the potential ϕ(r) in proportion to the size of the charge and inversely as the distance from the charge to the point r. To find the field strength E at r, the potential contributions can be added as numbers and contours of the resultant ϕ plotted; from these E follows by calculating −grad ϕ. By the use of the potential, the necessity of vector addition of individual field contributions is avoided. An example of equipotentials is shown in Figure 8 (equipotentials, drawn as continuous lines, and field lines, drawn as broken lines, around two electric charges of magnitude +3 and −1). Each is determined by the equation 3/r₁ − 1/r₂ = constant, with a different constant value for each, as shown.
For any two charges of opposite sign, the equipotential surface, ϕ = 0, is a sphere, as no other is.

Conservative forces

The inverse square laws of gravitation and electrostatics are examples of central forces where the force exerted by one particle on another is along the line joining them and is also independent of direction. Whatever the variation of force with distance, a central force can always be represented by a potential; forces for which a potential can be found are called conservative. The work done by the force F(r) on a particle as it moves along a line from A to B is the line integral ∫F·dl, or ∫grad ϕ·dl if F is derived from a potential ϕ, and this integral is just the difference between ϕ at A and B.

The ionized hydrogen molecule consists of two protons bound together by a single electron, which spends a large fraction of its time in the region between the protons. Considering the force acting on one of the protons, one sees that it is attracted by the electron, when it is in the middle, more strongly than it is repelled by the other proton. This argument is not precise enough to prove that the resultant force is attractive, but an exact quantum mechanical calculation shows that it is if the protons are not too close together. At close approach proton repulsion dominates, but as one moves the protons apart the attractive force rises to a peak and then soon falls to a low value. The distance, 1.06 × 10⁻¹⁰ metre, at which the force changes sign, corresponds to the potential ϕ taking its lowest value and is the equilibrium separation of the protons in the ion. This is an example of a central force field that is far from inverse square in character.

A similar attractive force arising from a particle shared between others is found in the strong nuclear force that holds the atomic nucleus together. The simplest example is the deuteron, the nucleus of heavy hydrogen, which consists either of a proton and a neutron or of two neutrons bound by a positive pion (a meson that has a mass 273 times that of an electron when in the free state). There is no repulsive force between the neutrons analogous to the Coulomb repulsion between the protons in the hydrogen ion, and the variation of the attractive force with distance follows the law F = (g²/r²)exp(−r/r₀), in which g is a constant analogous to charge in electrostatics and r₀ is a distance of 1.4 × 10⁻¹⁵ metre, which is something like the separation of individual protons and neutrons in a nucleus. At separations closer than r₀, the law of force approximates to an inverse square attraction, but the exponential term kills the attractive force when r is only a few times r₀ (e.g., when r is 5r₀, the exponential reduces the force 150 times).

Since strong nuclear forces at distances less than r₀ share an inverse square law with gravitational and Coulomb forces, a direct comparison of their strengths is possible. The gravitational force between two protons at a given distance is only about 5 × 10⁻³⁹ times as strong as the Coulomb force at the same separation, which itself is 1,400 times weaker than the strong nuclear force. The nuclear force is therefore able to hold together a nucleus consisting of protons and neutrons in spite of the Coulomb repulsion of the protons. On the scale of nuclei and atoms, gravitational forces are quite negligible; they make themselves felt only when extremely large numbers of electrically neutral atoms are involved, as on a terrestrial or a cosmological scale.
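The figure of 150 quoted in parentheses above is just the value of the exponential factor at r = 5r₀:

e^{-r/r_0}\Big|_{r = 5r_0} = e^{-5} \approx \frac{1}{148},

so at that separation the force is weaker by roughly 150 times than an inverse square law alone would give.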
Field lines

The vector field, V = −grad ϕ, associated with a potential ϕ is always directed normal to the equipotential surfaces, and the variations in space of its direction can be represented by continuous lines drawn accordingly, like those in Figure 8. The arrows show the direction of the force that would act on a positive charge; they thus point away from the charge +3 in its vicinity and toward the charge −1. If the field is of inverse square character (gravitational, electrostatic), the field lines may be drawn to represent both direction and strength of field. Thus, from an isolated charge q a large number of radial lines may be drawn, filling the solid angle evenly. Since the field strength falls away as 1/r² and the area of a sphere centred on the charge increases as r², the number of lines crossing unit area on each sphere varies as 1/r², in the same way as the field strength. In this case, the density of lines crossing an element of area normal to the lines represents the field strength at that point. The result may be generalized to apply to any distribution of point charges. The field lines are drawn so as to be continuous everywhere except at the charges themselves, which act as sources of lines. From every positive charge q, lines emerge (i.e., with outward-pointing arrows) in number proportional to q, while a similarly proportionate number enter negative charge −q. The density of lines then gives a measure of the field strength at any point. This elegant construction holds only for inverse square forces.

Gauss’s theorem

At any point in space one may define an element of area dS by drawing a small, flat, closed loop. The area contained within the loop gives the magnitude of the vector area dS, and the arrow representing its direction is drawn normal to the loop. Then, if the electric field in the region of the elementary area is E, the flux through the element is defined as the product of the magnitude dS and the component of E normal to the element—i.e., the scalar product E · dS. A charge q at the centre of a sphere of radius r generates a field E = qr/4πε₀r³ on the surface of the sphere whose area is 4πr², and the total flux through the surface is ∫E · dS = q/ε₀. This is independent of r, and the German mathematician Carl Friedrich Gauss showed that it does not depend on q being at the centre nor even on the surrounding surface being spherical. The total flux of E through a closed surface is equal to 1/ε₀ times the total charge contained within it, irrespective of how that charge is arranged. It is readily seen that this result is consistent with the statement in the preceding paragraph—if every charge q within the surface is the source of q/ε₀ field lines, and these lines are continuous except at the charges, the total number leaving through the surface is Q/ε₀, where Q is the total charge. Charges outside the surface contribute nothing, since their lines enter and leave again.

Gauss’s theorem takes the same form in gravitational theory, the flux of gravitational field lines through a closed surface being determined by the total mass within. This enables a proof to be given immediately of a problem that caused Newton considerable trouble. He was able to show, by direct summation over all the elements, that a uniform sphere of matter attracts bodies outside as if the whole mass of the sphere were concentrated at its centre.
Now it is obvious by symmetry that the field has the same magnitude everywhere on a spherical surface concentric with the attracting sphere, and this symmetry is unaltered by collapsing the mass to a point at the centre. According to Gauss's theorem, the total flux is unchanged, and the magnitude of the field must therefore be the same. This is an example of the power of a field theory over the earlier point of view by which each interaction between particles was dealt with individually and the result summed. A second example illustrating the value of field theories arises when the distribution of charges is not initially known, as when a charge q is brought close to a piece of metal or other electrical conductor and experiences a force. When an electric field is applied to a conductor, charge moves in it; so long as the field is maintained and charge can enter or leave, this movement of charge continues and is perceived as a steady electric current. An isolated piece of conductor, however, cannot carry a steady current indefinitely because there is nowhere for the charge to come from or go to. When q is brought close to the metal, its electric field causes a shift of charge in the metal to a new configuration in which its field exactly cancels the field due to q everywhere on and inside the conductor. The force experienced by q is its interaction with the canceling field. It is clearly a serious problem to calculate E everywhere for an arbitrary distribution of charge, and then to adjust the distribution to make it vanish on the conductor. When, however, it is recognized that after the system has settled down, the surface of the conductor must have the same value of ϕ everywhere, so that E = −grad ϕ vanishes on the surface, a number of specific solutions can easily be found. In the case of the two charges +3 and −1, for instance, the equipotential surface ϕ = 0 is a sphere. If a sphere of uncharged metal is built to coincide with this equipotential, it will not disturb the field in any way. Moreover, once it is constructed, the charge −1 inside may be moved around without altering the field pattern outside, which therefore describes what the field lines look like when a charge +3 is moved to the appropriate distance away from a conducting sphere carrying charge −1. More usefully, if the conducting sphere is momentarily connected to the Earth (which acts as a large body capable of supplying charge to the sphere without suffering a change in its own potential), the required charge −1 flows to set up this field pattern. This result can be generalized as follows: if a positive charge q is placed at a distance r from the centre of a conducting sphere of radius a connected to the Earth, the resulting field outside the sphere is the same as if, instead of the sphere, a negative charge q′ = −(a/r)q had been placed at a distance r′ = r(1 − a²/r²) from q on a line joining it to the centre of the sphere. And q is consequently attracted toward the sphere with a force of magnitude q|q′|/4πε0r′², or q²ar/4πε0(r² − a²)². The fictitious image charge behaves somewhat, but not exactly, like the image of q in a spherical mirror, and hence this way of constructing solutions, of which there are many examples, is called the method of images.
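As a concrete illustration of the image construction just described, the following Python sketch places a point charge near a grounded conducting sphere and evaluates both the image-charge force and the closed-form expression above; the numerical values of q, a, and r are arbitrary assumptions.

```python
import math

EPS0 = 8.854e-12      # permittivity of free space, F/m
q = 1e-9              # charge brought up to the sphere, C (hypothetical value)
a = 0.05              # radius of the grounded sphere, m
r = 0.20              # distance of q from the centre of the sphere, m

q_image = -(a / r) * q            # image charge
d_image = a ** 2 / r              # distance of the image from the centre
r_prime = r - d_image             # separation between q and its image, = r(1 - a**2/r**2)

# Force on q: Coulomb attraction toward its image, compared with the closed form quoted above
F = q * abs(q_image) / (4 * math.pi * EPS0 * r_prime ** 2)
F_check = q ** 2 * a * r / (4 * math.pi * EPS0 * (r ** 2 - a ** 2) ** 2)

print(f"image charge = {q_image:.3e} C at {d_image:.4f} m from the centre")
print(f"attractive force on q: {F:.3e} N (closed form: {F_check:.3e} N)")
```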
Divergence and Laplace's equation

When charges are not isolated points but form a continuous distribution with a local charge density ρ, defined as the ratio of the charge δq in a small cell to the volume δv of the cell, the flux of E over the surface of the cell is ρδv/ε0, by Gauss's theorem, and is proportional to δv. The ratio of the flux to δv is called the divergence of E and is written div E. It is related to the charge density by the equation div E = ρ/ε0. If E is expressed by its Cartesian components (Ex, Ey, Ez), then

div E = ∂Ex/∂x + ∂Ey/∂y + ∂Ez/∂z = ρ/ε0.

And since Ex = −∂ϕ/∂x, etc.,

∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z² = −ρ/ε0.

The expression on the left side is usually written as ∇²ϕ and is called the Laplacian of ϕ. It has the property, as is obvious from its relationship to ρ, of being unchanged if the Cartesian axes of x, y, and z are turned bodily into any new orientation. If any region of space is free of charges, ρ = 0 and ∇²ϕ = 0 in this region. The latter is Laplace's equation, for which many methods of solution are available, providing a powerful means of finding electrostatic (or gravitational) field patterns.

Nonconservative fields

The magnetic field B is an example of a vector field that cannot in general be described as the gradient of a scalar potential. There are no isolated poles to provide, as electric charges do, sources for the field lines. Instead, the field is generated by currents and forms vortex patterns around any current-carrying conductor. Figure 9 (magnetic field lines around a straight current-carrying wire) shows the field lines for a single straight wire. If one forms the line integral ∫B·dl around the closed path formed by any one of these field lines, each increment B·δl has the same sign and, obviously, the integral cannot vanish as it does for an electrostatic field. The value it takes is proportional to the total current enclosed by the path. Thus, every path that encloses the conductor yields the same value for ∫B·dl; i.e., μ0I, where I is the current and μ0 is a constant for any particular choice of units in which B, l, and I are to be measured. If no current is enclosed by the path, the line integral vanishes and a potential ϕB may be defined. Indeed, in the example shown in Figure 9, a potential may be defined even for paths that enclose the conductor, but it is many-valued because it increases by a standard increment μ0I every time the path encircles the current. A contour map of height would represent a spiral staircase (or, better, a spiral ramp) by a similar many-valued contour. The conductor carrying I is in this case the axis of the ramp. Like E in a charge-free region, where div E = 0, so also div B = 0; and where ϕB may be defined, it obeys Laplace's equation, ∇²ϕB = 0. Within a conductor carrying a current, or in any region in which current is distributed rather than closely confined to a thin wire, no potential ϕB can be defined. For now the change in ϕB after traversing a closed path is no longer zero or an integral multiple of a constant μ0I but is rather μ0 times the current enclosed in the path and therefore depends on the path chosen. To relate the magnetic field to the current, a new function is needed, the curl, whose name suggests the connection with circulating field lines. The curl of a vector, say, curl B, is itself a vector quantity. To find the component of curl B along any chosen direction, draw a small closed path of area A lying in the plane normal to that direction, and evaluate the line integral ∫B·dl around the path. As the path is shrunk in size, the integral diminishes with the area, and the limit of A⁻¹∫B·dl is the component of curl B in the chosen direction. The direction in which the vector curl B points is the direction in which A⁻¹∫B·dl is largest.
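The statement that ∫B·dl around a closed path equals μ0 times the enclosed current can be checked numerically for the straight wire of Figure 9. The sketch below is illustrative only; the value of the current and the two circular paths are arbitrary assumptions.

```python
import math

MU0 = 4 * math.pi * 1e-7     # permeability of free space, H/m
I = 2.0                      # current in a long straight wire along the z-axis, A (illustrative)

def B(x, y):
    """Field of the wire at (x, y): magnitude mu0*I/(2*pi*rho), circulating around the wire."""
    rho2 = x * x + y * y
    pref = MU0 * I / (2 * math.pi * rho2)
    return (-pref * y, pref * x)

def line_integral(cx, cy, radius, steps=100_000):
    """Numerically evaluate the line integral of B around a circle centred at (cx, cy)."""
    total = 0.0
    for k in range(steps):
        t0, t1 = 2 * math.pi * k / steps, 2 * math.pi * (k + 1) / steps
        x, y = cx + radius * math.cos(t0), cy + radius * math.sin(t0)
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        bx, by = B(x, y)
        total += bx * dx + by * dy
    return total

print(f"path enclosing the wire:     {line_integral(0.0, 0.0, 0.5):.6e}  (mu0*I = {MU0 * I:.6e})")
print(f"path not enclosing the wire: {line_integral(2.0, 0.0, 0.5):.6e}  (expected 0)")
```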
To apply this to the magnetic field in a conductor carrying current, the current density J is defined as a vector pointing along the direction of current flow, and the magnitude of J is such that JA is the total current flowing across a small area A normal to J. Now the line integral of B around the edge of this area is A curl B if A is very small, and this must equal μ0 times the contained current. It follows that

curl B = μ0J.

Expressed in Cartesian coordinates,

∂Bz/∂y − ∂By/∂z = μ0Jx,

with similar expressions for Jy and Jz. These are the differential equations relating the magnetic field to the currents that generate it. A magnetic field also may be generated by a changing electric field, and an electric field by a changing magnetic field. The description of these physical processes by differential equations relating curl B to ∂E/∂t, and curl E to ∂B/∂t, is the heart of Maxwell's electromagnetic theory and illustrates the power of the mathematical methods characteristic of field theories. Further examples will be found in the mathematical description of fluid motion, in which the local velocity v(r) of fluid particles constitutes a field to which the notions of divergence and curl are naturally applicable.

Examples of differential equations for fields

An incompressible fluid flows so that the net flux of fluid into or out of a given volume within the fluid is zero. Since the divergence of a vector describes the net flux out of an infinitesimal element, divided by the volume of the element, the velocity vector v in an incompressible fluid must obey the equation div v = 0. If the fluid is compressible, however, and its density ρ(r) varies with position because of pressure or temperature variations, the net outward flux of mass from some small element is determined by div (ρv), and this must be related to the rate at which the density of the fluid within is changing:

div (ρv) = −∂ρ/∂t.

A dissolved molecule or a small particle suspended in a fluid is constantly struck at random by molecules of the fluid in its neighbourhood, as a result of which it wanders erratically. This is called Brownian motion in the case of suspended particles. It is usually safe to assume that each one in a cloud of similar particles is moved by collisions from the fluid and not by interaction between the particles themselves. When a dense cloud gradually spreads out, much like a drop of ink in a beaker of water, this diffusive motion is the consequence of random, independent wandering by each particle. Two equations can be written to describe the average behaviour. The first is a continuity equation: if there are n(r) particles per unit volume around the point r, and the flux of particles across an element of area is described by a vector F, meaning the number of particles crossing unit area normal to F in unit time, then the equation

div F = −∂n/∂t

describes the conservation of particles. Secondly, Fick's law states that the random wandering causes an average drift of particles from regions where they are denser to regions where they are rarer, and that the mean drift rate is proportional to the gradient of density and in the opposite sense to the gradient:

F = −D grad n,

where D is a constant—the diffusion constant. These two equations can be combined into one differential equation for the changes that n will undergo,

∂n/∂t = D∇²n,

which defines uniquely how any initial distribution of particles will develop with time. Thus, the spreading of a small drop of ink is rather closely described by the particular solution

n(r, t) = Ct^(−3/2)exp(−r²/4Dt),

in which C is a constant determined by the total number of particles in the ink drop.
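As a quick sanity check on this solution, the Python sketch below integrates it numerically over space at several times; the diffusion constant, the normalisation, and the radial grid are arbitrary illustrative choices.

```python
import numpy as np

D = 1e-9          # diffusion constant, m**2/s (an illustrative value)
C = 1.0           # normalisation constant (hypothetical)

def n(r, t):
    """Density C * t**(-3/2) * exp(-r**2 / (4*D*t)), the particular solution quoted above."""
    return C * t ** -1.5 * np.exp(-r ** 2 / (4 * D * t))

r = np.linspace(1e-9, 5e-3, 500_000)   # radial grid out to 5 mm
dr = r[1] - r[0]
for t in (1.0, 10.0, 100.0):           # seconds
    shell = 4 * np.pi * r ** 2 * n(r, t)
    total = np.sum(shell) * dr                          # total number of particles
    rms = np.sqrt(np.sum(shell * r ** 2) * dr / total)  # root-mean-square radius of the cloud
    print(f"t = {t:6.1f} s: total = {total:.4e}, rms radius = {rms:.3e} m")
# The total stays constant while the rms radius grows as the square root of the time,
# which is the behaviour described in the next paragraph.
```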
When t is very small at the start of the process, all the particles are clustered near the origin of r, but, as t increases, the radius of the cluster increases in proportion to the square root of the time, while the density at the centre drops as the three-halves power to keep the total number constant. The distribution of particles with distance from the centre at three different times is shown in Figure 10 (diffusive spread of a cloud of particles initially concentrated at a point; the value attached to each curve is the time elapsed since the particles began to disperse). From this diagram one may calculate what fraction, after any chosen interval, has moved farther than some chosen distance from the origin. Moreover, since each particle wanders independently of the rest, it also gives the probability that a single particle will migrate farther than this in the same time. Thus, a problem relating to the behaviour of a single particle, for which only an average answer can usefully be given, has been converted into a field equation and solved rigorously. This is a widely used technique in physics.

Further examples of field equations

The equations describing the propagation of waves (electromagnetic, acoustic, deep water waves, and ripples) are discussed in relevant articles, as is the Schrödinger equation for probability waves that governs particle behaviour in quantum mechanics (see below Fundamental constituents of matter). The field equations that embody the special theory of relativity are more elaborate, with space and time coordinates no longer independent of each other, though the geometry involved is still Euclidean. In the general theory of relativity, the geometry of this four-dimensional space-time is non-Euclidean (see relativity).

Conservation laws and extremal principles

It is a consequence of Newton's laws of motion that the total momentum remains constant in a system completely isolated from external influences. The only forces acting on any part of the system are those exerted by other parts; if these are taken in pairs, according to the third law, A exerts on B a force equal and opposite to that of B on A. Since, according to the second law, the momentum of each changes at a rate equal to the force acting on it, the momentum change of A is exactly equal and opposite to that of B when only mutual forces between these two are considered. Because the effects of separate forces are additive, it follows that for the system as a whole no momentum change occurs. The centre of mass of the whole system obeys the first law in remaining at rest or moving at a constant velocity, so long as no external influences are brought to bear. This is the oldest of the conservation laws and is invoked frequently in solving dynamic problems.

Conservation of angular momentum

The total angular momentum (also called moment of momentum) of an isolated system about a fixed point is conserved as well. The angular momentum of a particle of mass m moving with velocity v, at the instant when its position relative to the fixed point is r, is mr × v. The quantity r × v is a vector (the vector product of r and v) having components, with respect to Cartesian axes,

(yv_z − zv_y, zv_x − xv_z, xv_y − yv_x).

The meaning is more easily appreciated if all the particles lie and move in a plane. The angular momentum of any one particle is the product of its momentum mv and the distance of nearest approach of the particle to the fixed point if it were to continue in a straight line.
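A small numerical illustration of this last statement, for motion in a plane; the mass, position, and velocity below are arbitrary assumptions.

```python
import numpy as np

m = 2.0                         # mass, kg (illustrative)
r = np.array([3.0, 1.0, 0.0])   # position relative to the fixed point, m
v = np.array([-2.0, 4.0, 0.0])  # velocity, m/s (motion confined to the x-y plane)

L = m * np.cross(r, v)          # angular momentum m * (r x v)

speed = np.linalg.norm(v)
b = np.linalg.norm(np.cross(r, v / speed))   # distance of nearest approach of the straight-line path

print("angular momentum vector:", L)
print(f"|L| = {np.linalg.norm(L):.3f},  momentum * nearest approach = {m * speed * b:.3f}")
# The two numbers agree, and L points along the z-axis, normal to the plane of the motion.
```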
The vector is drawn normal to the plane. Conservation of total angular momentum does not follow immediately from Newton’s laws but demands the additional assumption that any pair of forces, action and reaction, are not only equal and opposite but act along the same line. This is always true for central forces, but it holds also for the frictional force developed along sliding surfaces. If angular momentum were not conserved, one might find an isolated body developing a spontaneous rotation with respect to the distant stars or, if rotating like the Earth, changing its rotational speed without any external cause. Such small changes as the Earth experiences are explicable in terms of disturbances from without—e.g., tidal forces exerted by the Moon. The law of conservation of angular momentum is not called into question. Nevertheless, there are noncentral forces in nature, as, for example, when a charged particle moves past a bar magnet. If the line of motion and the axis of the magnet lie in a plane, the magnet exerts a force on the particle perpendicular to the plane while the magnetic field of the moving particle exerts an equal and opposite force on the magnet. At the same time, it exerts a couple tending to twist the magnet out of the plane. Angular momentum is not conserved unless one imagines that the balance of angular momentum is distributed in the space around the magnet and charge and changes as the particle moves past. The required result is neatly expressed by postulating the possible existence of magnetic poles that would generate a magnetic field analogous to the electric field of a charge (a bar magnet behaves roughly like two such poles of opposite sign, one near each end). Then there is associated with each pair, consisting of a charge q and a pole P, angular momentum μ0Pq/4π, as if the electric and magnetic fields together acted like a gyroscope spinning about the line joining P and q. With this contribution included in the sum, angular momentum is always conserved. Conservation of energy The device of associating mechanical properties with the fields, which up to this point had appeared merely as convenient mathematical constructions, has even greater implications when conservation of energy is considered. This conservation law, which is regarded as basic to physics, seems at first sight, from an atomic point of view, to be almost trivial. If two particles interact by central forces, for which a potential function ϕ may be defined such that grad ϕ gives the magnitude of the force experienced by each, it follows from Newton’s laws of motion that the sum of ϕ and of their separate kinetic energies, defined as 1/2mv2, remains constant. This sum is defined to be the total energy of the two particles and, by its definition, is automatically conserved. The argument may be extended to any number of particles interacting by means of central forces; a potential energy function may always be found, depending only on the relative positions of the particles, which may be added to the sum of the kinetic energies (depending only on the velocities) to give a total energy that is conserved. The concept of potential energy, thus introduced as a formal device, acquires a more concrete appearance when it is expressed in terms of electric and magnetic field strengths for particles interacting by virtue of their charges. 
The quantities ½ε0E² and B²/2μ0 may be interpreted as the contributions per unit volume of the electric and magnetic fields to the potential energy, and, when these are integrated over all space and added to the kinetic energy, the total energy thus expressed is a conserved quantity. These expressions were discovered during the heyday of ether theories, according to which all space is permeated by a medium capable of transmitting forces between particles (see above). The electric and magnetic fields were interpreted as descriptions of the state of strain of the ether, so that the location of stored energy throughout space was no more remarkable than it would be in a compressed spring. With the abandonment of the ether theories following the rise of relativity theory, this visualizable model ceased to have validity.

Conservation of mass-energy

The idea of energy as a real constituent of matter has, however, become too deeply rooted to be abandoned lightly, and most physicists find it useful to continue treating electric and magnetic fields as more than mathematical constructions. Far from being empty, free space is viewed as a storehouse for energy, with E and B providing not only an inventory but expressions for its movements as represented by the momentum carried in the fields. Wherever E and B are both present, and not parallel, there is a flux of energy, amounting to EB/μ0, crossing unit area and moving in a direction normal to the plane defined by E and B. This energy in motion confers momentum on the field, EB/μ0c² per unit volume, as if there were mass associated with the field energy. Indeed, the English physicist J.J. Thomson showed in 1881 that the energy stored in the fields around a moving charged particle varies as the square of the velocity, as if there were extra mass carried with the electric field around the particle. Herein lie the seeds of the general mass–energy relationship developed by Einstein in his special theory of relativity; E = mc² expresses the association of mass with every form of energy. Neither of the two separate conservation laws, that of energy and that of mass (the latter particularly the outcome of countless experiments involving chemical change), is in this view perfectly true, but together they constitute a single conservation law, which may be expressed in two equivalent ways—conservation of mass, if to the total energy E is ascribed mass E/c², or conservation of energy, if to each mass m is ascribed energy mc². The delicate measurements by Eötvös and later workers (see above) show that the gravitational forces acting on a body do not distinguish different types of mass, whether intrinsic to the fundamental particles or resulting from their kinetic and potential energies. For all its apparently artificial origins, then, this conservation law enshrines a very deep truth about the material universe, one that has not yet been fully explored. An equally fundamental law, for which no exception is known, is that the total electrical charge in an isolated system is conserved. In the production of a negatively charged electron by an energetic gamma ray, for example, a positively charged positron is produced simultaneously. An isolated electron cannot disappear, though an electron and a positron, whose total charge is zero and whose mass is 2me (twice the mass of an electron), may simultaneously be annihilated. The energy equivalent of the destroyed mass appears as gamma ray energy 2mec².
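For a sense of scale, the gamma-ray energy 2mec² can be evaluated directly; the constants below are rounded standard values.

```python
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron volt

pair_energy = 2 * m_e * c ** 2
print(f"2*m_e*c^2 = {pair_energy:.3e} J = {pair_energy / (1e6 * eV):.3f} MeV")
# About 1.02 MeV: the energy released when an electron and a positron annihilate,
# and equally the minimum energy needed to create such a pair.
```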
For macroscopic systems—i.e., those composed of objects massive enough for their atomic structure to be discounted in the analysis of their behaviour—the conservation law for energy assumes a different aspect. In the collision of two perfectly elastic objects, to which billiard balls are a good approximation, momentum and energy are both conserved. Given the paths and velocities before collision, those after collision can be calculated from the conservation laws alone. In reality, however, although momentum is always conserved, the kinetic energy of the separating balls is less than what they had on approach. Soft objects, indeed, may adhere on collision, losing most of their kinetic energy. The lost energy takes the form of heat, raising the temperature (if only imperceptibly) of the colliding objects. From the atomic viewpoint the total energy of a body may be divided into two portions: on the one hand, the external energy consisting of the potential energy associated with its position and the kinetic energy of motion of its centre of mass and its spin; and, on the other, the internal energy due to the arrangement and motion of its constituent atoms. In an inelastic collision the sum of internal and external energies is conserved, but some of the external energy of bodily motion is irretrievably transformed into internal random motions. The conservation of energy is expressed in the macroscopic language of the first law of thermodynamics—namely, energy is conserved provided that heat is taken into account. The irreversible nature of the transfer from external energy of organized motion to random internal energy is a manifestation of the second law of thermodynamics. The irreversible degradation of external energy into random internal energy also explains the tendency of all systems to come to rest if left to themselves. If there is a configuration in which the potential energy is less than for any slightly different configuration, the system may find stable equilibrium here because there is no way in which it can lose more external energy, either potential or kinetic. This is an example of an extremal principle—that a state of stable equilibrium is one in which the potential energy is a minimum with respect to any small changes in configuration. It may be regarded as a special case of one of the most fundamental of physical laws, the principle of increase of entropy, which is a statement of the second law of thermodynamics in the form of an extremal principle—the equilibrium state of an isolated physical system is that in which the entropy takes the maximum possible value. This matter is discussed further below and, in particular, in the article thermodynamics. Manifestations of the extremal principle The earliest extremal principle to survive in modern physics was formulated by the French mathematician Pierre de Fermat in about 1660. As originally stated, the path taken by a ray of light between two fixed points in an arrangement of mirrors, lenses, and so forth, is that which takes the least time. The laws of reflection and refraction may be deduced from this principle if it is assumed as Fermat did, correctly, that in a medium of refractive index μ light travels more slowly than in free space by a factor μ. Strictly, the time taken along a true ray path is either less or greater than for any neighbouring path. If all paths in the neighbourhood take the same time, the two chosen points are such that light leaving one is focused on the other. 
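The deduction of the law of refraction from the least-time principle can be mimicked numerically: pick two fixed points on opposite sides of a flat interface, minimise the optical travel time over the crossing point, and compare the resulting angles with Snell's law. The geometry and refractive indices in this sketch are arbitrary assumptions.

```python
import math

n1, n2 = 1.0, 1.5          # refractive indices above and below the interface (assumed)
A = (0.0, 1.0)             # start point, 1 m above the interface
B = (1.0, -1.0)            # end point, 1 m below the interface

def travel_time(x):
    """Time for a ray A -> (x, 0) -> B, in units where the free-space speed is 1."""
    d1 = math.hypot(x - A[0], A[1])
    d2 = math.hypot(B[0] - x, B[1])
    return n1 * d1 + n2 * d2   # light is slower by the factor n in each medium

# crude one-dimensional minimisation over the crossing point x
xs = [i / 10000 for i in range(10001)]
x_best = min(xs, key=travel_time)

sin1 = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])
sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])
print(f"n1 sin(theta1) = {n1 * sin1:.4f},  n2 sin(theta2) = {n2 * sin2:.4f}")
# The two products agree, i.e. Snell's law is recovered purely by minimising the travel time.
```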
The perfect example is exhibited by an elliptical mirror, such as the one in Figure 11 (an elliptic mirror focusing all rays of light from F1 onto F2); all paths from F1 to the ellipse and thence to F2 have the same length. In conventional optical terms, the ellipse has the property that every ray leaving F1 obeys the law of reflection at the point where it strikes the surface, so that every ray from F1 converges after reflection onto F2. Also shown in the figure are two reflecting surfaces tangential to the ellipse that do not have the correct curvature to focus light from F1 onto F2. A ray is reflected from F1 to F2 only at the point of contact. For the flat reflector the path taken is the shortest of all in the vicinity, while for the reflector that is more strongly curved than the ellipse it is the longest. Fermat's principle and its application to focusing by mirrors and lenses finds a natural explanation in the wave theory of light (see light: Basic concepts of wave theory). A similar extremal principle in mechanics, the principle of least action, was proposed by the French mathematician and astronomer Pierre-Louis Moreau de Maupertuis but rigorously stated only much later, especially by the Irish mathematician and scientist William Rowan Hamilton in 1835. Though very general, it is well enough illustrated by a simple example, the path taken by a particle between two points A and B in a region where the potential ϕ(r) is everywhere defined. Once the total energy E of the particle has been fixed, its kinetic energy T at any point P is the difference between E and the potential energy ϕ at P. If any path between A and B is assumed to be followed, the velocity at each point may be calculated from T, and hence the time t between the moment of departure from A and passage through P. The action for this path is found by evaluating the integral ∫(T − ϕ)dt taken along the path from A to B, and the actual path taken by the particle is that for which the action is minimal. It may be remarked that both Fermat and Maupertuis were guided by Aristotelian notions of economy in nature that have been found, if not actively misleading, too imprecise to retain a place in modern science. Fermat's and Hamilton's principles are but two examples out of many whereby a procedure is established for finding the correct solution to a problem by discovering under what conditions a certain function takes an extremal value. The advantages of such an approach are that it brings into play the powerful mathematical techniques of the calculus of variations and, perhaps even more important, that in dealing with very complex situations it may allow a systematic approach by computational means to a solution that may not be exact but is near enough the right answer to be useful. Fermat's principle, stated as a theorem concerning light rays but later restated in terms of the wave theory, found an almost exact parallel in the development of wave mechanics. The association of a wave with a particle by the physicists Louis-Victor de Broglie and Erwin Schrödinger was made in such a way that the principle of least action followed, by an analogous argument.

Fundamental constituents of matter

Development of the atomic theory

The idea that matter is composed of atoms goes back to the Greek philosophers, notably Democritus, and has never since been entirely lost sight of, though there have been periods when alternative views were more generally preferred.
Newton’s contemporaries, Robert Hooke and Robert Boyle, in particular, were atomists, but their interpretation of the sensation of heat as random motion of atoms was overshadowed for more than a century by the conception of heat as a subtle fluid dubbed caloric. It is a tribute to the strength of caloric theory that it enabled the French scientist Sadi Carnot to arrive at his great discoveries in thermodynamics. In the end, however, the numerical rules for the chemical combination of different simple substances, together with the experiments on the conversion of work into heat by Benjamin Thompson (Count Rumford) and James Prescott Joule, led to the downfall of the theory of caloric. Nevertheless, the rise of ether theories to explain the transmission of light and electromagnetic forces through apparently empty space postponed for many decades the general reacceptance of the concept of atoms. The discovery in 1858 by the German scientist and philosopher Hermann von Helmholtz of the permanence of vortex motions in perfectly inviscid fluids encouraged the invention—throughout the latter half of the 19th century and especially in Great Britain—of models in which vortices in a structureless ether played the part otherwise assigned to atoms. In recent years the recognition that certain localized disturbances in a fluid, the so-called solitary waves, might persist for a very long time has led to attempts, so far unsuccessful, to use them as models of fundamental particles. These attempts to describe the basic constituents of matter in the familiar language of fluid mechanics were at least atomic theories in contrast to the anti-atomistic movement at the end of the 19th century in Germany under the influence of Ernst Mach and Wilhelm Ostwald. For all their scientific eminence, their argument was philosophical rather than scientific, springing as it did from the conviction that the highest aim of science is to describe the relationship between different sensory perceptions without the introduction of unobservable concepts. Nonetheless, an inspection of the success of their contemporaries using atomic models shows why this movement failed. It suffices to mention the systematic construction of a kinetic theory of matter in which the physicists Ludwig Boltzmann of Austria and J. Willard Gibbs of the United States were the two leading figures. To this may be added Hendrik Lorentz’s electron theory, which explained in satisfying detail many of the electrical properties of matter; and, as a crushing argument for atomism, the discovery and explanation of X-ray diffraction by Max von Laue of Germany and his collaborators, a discovery that was quickly developed, following the lead of the British physicist William Henry Bragg and his son Lawrence, into a systematic technique for mapping the precise atomic structure of crystals. While the concept of atoms was thus being made indispensable, the ancient belief that they were probably structureless and certainly indestructible came under devastating attack. J.J. Thomson’s discovery of the electron in 1897 soon led to the realization that the mass of an atom largely resides in a positively charged part, electrically neutralized by a cloud of much lighter electrons. A few years later Ernest Rutherford and Frederick Soddy showed how the emission of alpha and beta particles from radioactive elements causes them to be transformed into elements of different chemical properties. 
By 1913, with Rutherford as the leading figure, the foundations of the modern theory of atomic structure were laid. It was determined that a small, massive nucleus carries all the positive charge whose magnitude, expressed as a multiple of the fundamental charge of the proton, is the atomic number. An equal number of electrons carrying a negative charge numerically equal to that of the proton form a cloud whose diameter is several thousand times that of the nucleus around which they swarm. The atomic number determines the chemical properties of the atom, and in alpha decay a helium nucleus, whose atomic number is 2, is emitted from the radioactive nucleus, leaving one whose atomic number is reduced by 2. In beta decay the nucleus in effect gains one positive charge by emitting a negative electron and thus has its atomic number increased by unity. The nucleus, itself a composite body, was soon being described in various ways, none completely wrong but none uniquely right. Pivotal was James Chadwick’s discovery in 1932 of the neutron, a nuclear particle with very nearly the same mass as the proton but no electric charge. After this discovery, investigators came to view the nucleus as consisting of protons and neutrons, bound together by a force of limited range, which at close quarters was strong enough to overcome the electrical repulsion between the protons. A free neutron survives for only a few minutes before disintegrating into a readily observed proton and electron, along with an elusive neutrino, which has no charge and zero, or at most extremely small, mass. The disintegration of a neutron also may occur inside the nucleus, with the expulsion of the electron and neutrino; this is the beta-decay process. It is common enough among the heavy radioactive nuclei but does not occur with all nuclei because the energy released would be insufficient for the reorganization of the resulting nucleus. Certain nuclei have a higher-than-ideal ratio of protons to neutrons and may adjust the proportion by the reverse process, a proton being converted into a neutron with the expulsion of a positron and an antineutrino. For example, a magnesium nucleus containing 12 protons and 11 neutrons spontaneously changes to a stable sodium nucleus with 11 protons and 12 neutrons. The positron resembles the electron in all respects except for being positively rather than negatively charged. It was the first antiparticle to be discovered. Its existence had been predicted, however, by Dirac after he had formulated the quantum mechanical equations describing the behaviour of an electron (see below). This was one of the most spectacular achievements of a spectacular albeit brief epoch, during which the basic conceptions of physics were revolutionized. Rise of quantum mechanics The idea of the quantum was introduced by the German physicist Max Planck in 1900 in response to the problems posed by the spectrum of radiation from a hot body, but the development of quantum theory soon became closely tied to the difficulty of explaining by classical mechanics the stability of Rutherford’s nuclear atom. Bohr led the way in 1913 with his model of the hydrogen atom, but it was not until 1925 that the arbitrary postulates of his quantum theory found consistent expression in the new quantum mechanics that was formulated in apparently different but in fact equivalent ways by Heisenberg, Schrödinger, and Dirac (see quantum mechanics). 
In Bohr’s model the motion of the electron around the proton was analyzed as if it were a classical problem, mathematically the same as that of a planet around the Sun, but it was additionally postulated that, of all the orbits available to the classical particle, only a discrete set was to be allowed, and Bohr devised rules for determining which orbits they were. In Schrödinger’s wave mechanics the problem is also written down in the first place as if it were a classical problem, but, instead of proceeding to a solution of the orbital motion, the equation is transformed by an explicitly laid down procedure from an equation of particle motion to an equation of wave motion. The newly introduced mathematical function Ψ, the amplitude of Schrödinger’s hypothetical wave, is used to calculate not how the electron moves but rather what the probability is of finding the electron in any specific place if it is looked for there. Schrödinger’s prescription reproduced in the solutions of the wave equation the postulates of Bohr but went much further. Bohr’s theory had come to grief when even two electrons, as in the helium atom, had to be considered together, but the new quantum mechanics encountered no problems in formulating the equations for two or any number of electrons moving around a nucleus. Solving the equations was another matter, yet numerical procedures were applied with devoted patience to a few of the simpler cases and demonstrated beyond cavil that the only obstacle to solution was calculational and not an error of physical principle. Modern computers have vastly extended the range of application of quantum mechanics not only to heavier atoms but also to molecules and assemblies of atoms in solids, and always with such success as to inspire full confidence in the prescription. From time to time many physicists feel uneasy that it is necessary first to write down the problem to be solved as though it were a classical problem and them to subject it to an artificial transformation into a problem in quantum mechanics. It must be realized, however, that the world of experience and observation is not the world of electrons and nuclei. When a bright spot on a television screen is interpreted as the arrival of a stream of electrons, it is still only the bright spot that is perceived and not the electrons. The world of experience is described by the physicist in terms of visible objects, occupying definite positions at definite instants of time—in a word, the world of classical mechanics. When the atom is pictured as a nucleus surrounded by electrons, this picture is a necessary concession to human limitations; there is no sense in which one can say that, if only a good enough microscope were available, this picture would be revealed as genuine reality. It is not that such a microscope has not been made; it is actually impossible to make one that will reveal this detail. The process of transformation from a classical description to an equation of quantum mechanics, and from the solution of this equation to the probability that a specified experiment will yield a specified observation, is not to be thought of as a temporary expedient pending the development of a better theory. It is better to accept this process as a technique for predicting the observations that are likely to follow from an earlier set of observations. Whether electrons and nuclei have an objective existence in reality is a metaphysical question to which no definite answer can be given. 
There is, however, no doubt that to postulate their existence is, in the present state of physics, an inescapable necessity if a consistent theory is to be constructed to describe economically and exactly the enormous variety of observations on the behaviour of matter. The habitual use of the language of particles by physicists induces and reflects the conviction that, even if the particles elude direct observation, they are as real as any everyday object. Following the initial triumphs of quantum mechanics, Dirac in 1928 extended the theory so that it would be compatible with the special theory of relativity. Among the new and experimentally verified results arising from this work was the seemingly meaningless possibility that an electron of mass m might exist with any negative energy between −mc2 and −∞. Between −mc2 and +mc2, which is in relativistic theory the energy of an electron at rest, no state is possible. It became clear that other predictions of the theory would not agree with experiment if the negative-energy states were brushed aside as an artifact of the theory without physical significance. Eventually Dirac was led to propose that all the states of negative energy, infinite in number, are already occupied with electrons and that these, filling all space evenly, are imperceptible. If, however, one of the negative-energy electrons is given more than 2mc2 of energy, it can be raised into a positive-energy state, and the hole it leaves behind will be perceived as an electron-like particle, though carrying a positive charge. Thus, this act of excitation leads to the simultaneous appearance of a pair of particles—an ordinary negative electron and a positively charged but otherwise identical positron. This process was observed in cloud-chamber photographs by Carl David Anderson of the United States in 1932. The reverse process was recognized at the same time; it can be visualized either as an electron and a positron mutually annihilating one another, with all their energy (two lots of rest energy, each mc2, plus their kinetic energy) being converted into gamma rays (electromagnetic quanta), or as an electron losing all this energy as it drops into the vacant negative-energy state that simulates a positive charge. When an exceptionally energetic cosmic-ray particle enters the Earth’s atmosphere, it initiates a chain of such processes in which gamma rays generate electron–positron pairs; these in turn emit gamma rays which, though of lower energy, are still capable of creating more pairs, so that what reaches the Earth’s surface is a shower of many millions of electrons and positrons. Not unnaturally, the suggestion that space was filled to infinite density with unobservable particles was not easily accepted in spite of the obvious successes of the theory. It would have seemed even more outrageous had not other developments already forced theoretical physicists to contemplate abandoning the idea of empty space. Quantum mechanics carries the implication that no oscillatory system can lose all its energy; there must always remain at least a “zero-point energy” amounting to hν/2 for an oscillator with natural frequency ν (h is Planck’s constant). This also seemed to be required for the electromagnetic oscillations constituting radio waves, light, X-rays, and gamma rays. 
Since there is no known limit to the frequency ν, their total zero-point energy density is also infinite; like the negative-energy electron states, it is uniformly distributed throughout space, both inside and outside matter, and presumed to produce no observable effects. Developments in particle physics It was at about this moment, say 1930, in the history of the physics of fundamental particles that serious attempts to visualize the processes in terms of everyday notions were abandoned in favour of mathematical formalisms. Instead of seeking modified procedures from which the awkward, unobservable infinities had been banished, the thrust was toward devising prescriptions for calculating what observable processes could occur and how frequently and how quickly they would occur. An empty cavity which would be described by a classical physicist as capable of maintaining electromagnetic waves of various frequencies, ν, and arbitrary amplitude now remains empty (zero-point oscillation being set aside as irrelevant) except insofar as photons, of energy hν, are excited within it. Certain mathematical operators have the power to convert the description of the assembly of photons into the description of a new assembly, the same as the first except for the addition or removal of one. These are called creation or annihilation operators, and it need not be emphasized that the operations are performed on paper and in no way describe a laboratory operation having the same ultimate effect. They serve, however, to express such physical phenomena as the emission of a photon from an atom when it makes a transition to a state of lower energy. The development of these techniques, especially after their supplementation with the procedure of renormalization (which systematically removes from consideration various infinite energies that naive physical models throw up with embarrassing abundance), has resulted in a rigorously defined procedure that has had dramatic successes in predicting numerical results in close agreement with experiment. It is sufficient to cite the example of the magnetic moment of the electron. According to Dirac’s relativistic theory, the electron should possess a magnetic moment whose strength he predicted to be exactly one Bohr magneton (eh/4πm, or 9.27 × 10−24 joule per tesla). In practice, this has been found to be not quite right, as, for instance, in the experiment of Lamb and Rutherford mentioned earlier; more recent determinations give 1.0011596522 Bohr magnetons. Calculations by means of the theory of quantum electrodynamics give 1.0011596525 in impressive agreement. This account represents the state of the theory in about 1950, when it was still primarily concerned with problems related to the stable fundamental particles, the electron and the proton, and their interaction with electromagnetic fields. Meanwhile, studies of cosmic radiation at high altitudes—those conducted on mountains or involving the use of balloon-borne photographic plates—had revealed the existence of the pi-meson (pion), a particle 273 times as massive as the electron, which disintegrates into the mu-meson (muon), 207 times as massive as the electron, and a neutrino. Each muon in turn disintegrates into an electron and two neutrinos. The pion has been identified with the hypothetical particle postulated in 1935 by the Japanese physicist Yukawa Hideki as the particle that serves to bind protons and neutrons in the nucleus. Many more unstable particles have been discovered in recent years. 
Some of them, just as in the case of the pion and the muon, are lighter than the proton, but many are more massive. An account of such particles is given in the article subatomic particle. The term particle is firmly embedded in the language of physics, yet a precise definition has become harder as more is learned. When examining the tracks in a cloud-chamber or bubble-chamber photograph, one can hardly suspend disbelief in their having been caused by the passage of a small charged object. However, the combination of particle-like and wavelike properties in quantum mechanics is unlike anything in ordinary experience, and, as soon as one attempts to describe in terms of quantum mechanics the behaviour of a group of identical particles (e.g., the electrons in an atom), the problem of visualizing them in concrete terms becomes still more intractable. And this is before one has even tried to include in the picture the unstable particles or to describe the properties of a stable particle like the proton in relation to quarks. These hypothetical entities, worthy of the name particle to the theoretical physicist, are apparently not to be detected in isolation, nor does the mathematics of their behaviour encourage any picture of the proton as a molecule-like composite body constructed of quarks. Similarly, the theory of the muon is not the theory of an object composed, as the word is normally used, of an electron and two neutrinos. The theory does, however, incorporate such features of particle-like behaviour as will account for the observation of the track of a muon coming to an end and that of an electron starting from the end point. At the heart of all fundamental theories is the concept of countability. If a certain number of particles is known to be present inside a certain space, that number will be found there later, unless some have escaped (in which case they could have been detected and counted) or turned into other particles (in which case the change in composition is precisely defined). It is this property, above all, that allows the idea of particles to be preserved. Undoubtedly, however, the term is being strained when it is applied to photons that can disappear with nothing to show but thermal energy or be generated without limit by a hot body so long as there is energy available. They are a convenience for discussing the properties of a quantized electromagnetic field, so much so that the condensed-matter physicist refers to the analogous quantized elastic vibrations of a solid as phonons without persuading himself that a solid really consists of an empty box with particle-like phonons running about inside. If, however, one is encouraged by this example to abandon belief in photons as physical particles, it is far from clear why the fundamental particles should be treated as significantly more real, and, if a question mark hangs over the existence of electrons and protons, where does one stand with atoms or molecules? The physics of fundamental particles does indeed pose basic metaphysical questions to which neither philosophy nor physics has answers. Nevertheless, the physicist has confidence that his constructs and the mathematical processes for manipulating them represent a technique for correlating the outcomes of observation and experiment with such precision and over so wide a range of phenomena that he can afford to postpone deeper inquiry into the ultimate reality of the material world. 
Simplicity and complexity The search for fundamental particles and the mathematical formalism with which to describe their motions and interactions has in common with the search for the laws governing gravitational, electromagnetic, and other fields of force the aim of finding the most economical basis from which, in principle, theories of all other material processes may be derived. Some of these processes are simple—a single particle moving in a given field of force, for example—if the term refers to the nature of the system studied and not to the mathematical equipment that may sometimes be brought to bear. A complex process, on the other hand, is typically one in which many interacting particles are involved and for which it is hardly ever possible to proceed to a complete mathematical solution. A computer may be able to follow in detail the movement of thousands of atoms interacting in a specified way, but a wholly successful study along these lines does no more than display on a large scale and at an assimilable speed what nature achieves on its own. Much can be learned from these studies, but, if one is primarily concerned with discovering what will happen in given circumstances, it is frequently quicker and cheaper to do the experiment than to model it on a computer. In any case, computer modeling of quantum mechanical, as distinct from Newtonian, behaviour becomes extremely complicated as soon as more than a few particles are involved. The art of analyzing complex systems is that of finding the means to extract from theory no more information than one needs. It is normally of no value to discover the speed of a given molecule in a gas at a given moment; it is, however, very valuable to know what fraction of the molecules possess a given speed. The correct answer to this question was found by Maxwell, whose argument was ingenious and plausible. More rigorously, Boltzmann showed that it is possible to proceed from the conservation laws governing molecular encounters to general statements, such as the distribution of velocities, which are largely independent of how the molecules interact. In thus laying the foundations of statistical mechanics, Boltzmann provided an object lesson in how to avoid recourse to the fundamental laws, replacing them with a new set of rules appropriate to highly complex systems. This point is discussed further in Entropy and disorder below. The example of statistical mechanics is but one of many that together build up a hierarchical structure of simplified models whose function is to make practicable the analysis of systems at various levels of complexity. Ideally, the logical relationship between each successive pair of levels should be established so that the analyst may have confidence that the methods he applies to his special problem are buttressed by the enormous corpus of fact and theory that comprises physical knowledge at all levels. It is not in the nature of the subject for every connection to be proved with mathematical rigour, but, where this is lacking, experiment will frequently indicate what trust may be placed in the intuitive steps of the argument. For instance, it is out of the question to solve completely the quantum mechanical problem of finding the stationary states in which an atomic nucleus containing perhaps 50 protons or neutrons can exist. 
Nevertheless, the energy of these states can be measured and models devised in which details of particle position are replaced by averages, such that when the simplified model is treated by the methods of quantum mechanics the measured energy levels emerge from the calculations. Success is attained when the rules for setting up the model are found to give the right result for every nucleus. Similar models had been devised earlier by the English physicist Douglas R. Hartree to describe the cloud of electrons around the nucleus. The increase in computing power made it feasible to add extra details to the model so that it agreed even better with the measured properties of atoms. It is worth noting that when the extranuclear electrons are under consideration it is frequently unnecessary to refer to details of the nucleus, which might just as well be a point charge; even if this is too simplistic, a small number of extra facts usually suffices. In the same way, when the atoms combine chemically and molecules in a gas or a condensed state interact, most of the details of electronic structure within the atom are irrelevant or can be included in the calculation by introducing a few extra parameters; these are often treated as empirical properties. Thus, the degree to which an atom is distorted by an electric field is often a significant factor in its behaviour, and the investigator dealing with the properties of assemblies of atoms may prefer to use the measured value rather than the atomic theorist's calculation of what it should be. However, he knows that enough of these calculations have been successfully carried out for his use of measured values in any specific case to be a time-saver rather than a denial of the validity of his model. These examples from atomic physics can be multiplied at all levels so that a connected hierarchy exists, ranging from fundamental particles and fields, through atoms and molecules, to gases, liquids, and solids that were studied in detail and reduced to quantitative order well before the rise of atomic theory. Beyond this level lie the realms of the Earth sciences, the planetary systems, the interior of stars, galaxies, and the Cosmos as a whole. And with the interior of stars and the hypothetical early universe, the entire range of models must be brought to bear if one is to understand how the chemical elements were built up or to determine what sort of motions are possible in the unimaginably dense, condensed state of neutron stars. The following sections make no attempt to explore all aspects and interconnections of complex material systems, but they highlight a few ideas which pervade the field and which indicate the existence of principles that find little place in the fundamental laws yet are the outcome of their operation.

The normal behaviour of a gas on cooling is to condense into a liquid and then into a solid, though the liquid phase may be left out if the gas starts at a low enough pressure. The solid phase of a pure substance is usually crystalline, having the atoms or molecules arranged in a regular pattern so that a suitable small sample may define the whole. The unit cell is the smallest block out of which the pattern can be formed by stacking replicas.
The checkerboard in Figure 12 (the unit cell as the smallest representative sample of the whole) illustrates the idea: here the unit cell has been chosen, out of many possibilities, to contain one white square and one shaded square dissected into quarters. For crystals, of course, the unit cell is three-dimensional. A very wide variety of arrangements is exhibited by different substances, and it is the great triumph of X-ray crystallography to have provided the means for determining experimentally what arrangement is involved in each case.

Entropy and disorder

It is possible, however, that in the course of time the universe will suffer "heat death," having attained a condition of maximum entropy, after which tiny fluctuations are all that will happen. If so, these will be reversible and will give no indication of a direction of time. Yet, because this undifferentiated cosmic soup will be devoid of structures necessary for consciousness, the sense of time will in any case have vanished long since.
The Schrödinger equation

\displaystyle i \hbar \partial_t |\psi \rangle = H |\psi\rangle

is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and {N}-particle systems for {N>1}. Remarkably, despite being a linear equation, solutions {|\psi\rangle} to this equation can be governed by a non-linear equation in the large particle limit {N \rightarrow \infty}. In particular, when modeling a Bose-Einstein condensate with a suitably scaled interaction potential {V} in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation

\displaystyle i \partial_t \phi = \Delta \phi + \lambda |\phi|^2 \phi. \ \ \ \ \ (1)

I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the foundational quantum mechanics of {N}-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers. This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.

Title: Use basic examples to calibrate exponents

Motivation: In the more quantitative areas of mathematics, such as analysis and combinatorics, one has to frequently keep track of a large number of exponents in one's identities, inequalities, and estimates.  For instance, if one is studying a set of N elements, then many expressions that one is faced with will often involve some power N^p of N; if one is instead studying a function f on a measure space X, then perhaps it is an L^p norm \|f\|_{L^p(X)} which will appear instead.  The exponent p involved will typically evolve slowly over the course of the argument, as various algebraic or analytic manipulations are applied.  In some cases, the exact value of this exponent is immaterial, but at other times it is crucial to have the correct value of p at hand.   One can (and should) of course carefully go through one's arguments line by line to work out the exponents correctly, but it is all too easy to make a sign error or other mis-step at one of the lines, causing all the exponents on subsequent lines to be incorrect.  However, one can guard against this (and avoid some tedious line-by-line exponent checking) by continually calibrating these exponents at key junctures of the arguments by using basic examples of the object of study (sets, functions, graphs, etc.) as test cases.  This is a simple trick, but it lets one avoid many unforced errors with exponents, and also lets one compute more rapidly.

Quick description: When trying to quickly work out what an exponent p in an estimate, identity, or inequality should be without deriving that statement line-by-line, test that statement with a simple example which has non-trivial behaviour with respect to that exponent p, but trivial behaviour with respect to as many other components of that statement as one is able to manage.   The "non-trivial" behaviour should be parametrised by some very large or very small parameter.
By matching the dependence on this parameter on both sides of the estimate, identity, or inequality, one should recover p (or at least a good prediction as to what p should be).

General discussion: The test examples should be as basic as possible; ideally they should have trivial behaviour in all aspects except for one feature that relates to the exponent p that one is trying to calibrate, thus being only “barely” non-trivial. When the object of study is a function, then (appropriately rescaled, or otherwise modified) bump functions are very typical test objects, as are Dirac masses, constant functions, Gaussians, or other functions that are simple and easy to compute with. In additive combinatorics, when the object of study is a subset of a group, then subgroups, arithmetic progressions, or random sets are typical test objects. In graph theory, typical examples of test objects include complete graphs, complete bipartite graphs, and random graphs. And so forth.

This trick is closely related to that of using dimensional analysis to recover exponents; indeed, one can view dimensional analysis as the special case of exponent calibration when using test objects which are non-trivial in one dimensional aspect (e.g. they exist at a single very large or very small length scale) but are otherwise of a trivial or “featureless” nature. But the calibration trick is more general, as it can involve parameters (such as probabilities, angles, or eccentricities) which are not commonly associated with the physical concept of a dimension. And personally, I find example-based calibration to be a much more satisfying (and convincing) explanation of an exponent than a calibration arising from formal dimensional analysis.

When one is trying to calibrate an inequality or estimate, one should try to pick a basic example which one expects to saturate that inequality or estimate, i.e. an example for which the inequality is close to being an equality. Otherwise, one would only expect to obtain some partial information on the desired exponent p (e.g. a lower bound or an upper bound only). Knowing the examples that saturate an estimate that one is trying to prove is also useful for several other reasons – for instance, it strongly suggests that any technique which is not efficient when applied to the saturating example is unlikely to be strong enough to prove the estimate in general, thus eliminating fruitless approaches to a problem and (hopefully) refocusing one’s attention on those strategies which actually have a chance of working.

Calibration is best used for the type of quick-and-dirty calculations one uses when trying to rapidly map out an argument that one has roughly worked out already, but without precise details; in particular, I find it particularly useful when writing up a rapid prototype. When the time comes to write out the paper in full detail, then of course one should instead carefully work things out line by line, but if all goes well, the exponents obtained in that process should match up with the preliminary guesses for those exponents obtained by calibration, which adds confidence that no exponent errors have been committed.

Prerequisites: Undergraduate analysis and combinatorics.

Jim Colliander, Mark Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper “Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation“, which we have submitted to Inventiones Mathematicae.
This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation -i u_t + \Delta u = |u|^2 u (1) in two spatial dimensions, thus u is a function from {\Bbb R} \times {\Bbb T}^2 to {\Bbb C}.  This equation has three important conserved quantities: the mass M(u) = M(u(t)) := \int_{{\Bbb T}^2} |u(t,x)|^2\ dx the momentum \vec p(u) = \vec p(u(t)) = \int_{{\Bbb T}^2} \hbox{Im}( \nabla u(t,x) \overline{u(t,x)} )\ dx and the energy E(u) = E(u(t)) := \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\ dx. (These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether’s theorem.) Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth u_0: {\Bbb T}^2 \to {\Bbb C} there is a unique global smooth solution u: {\Bbb R} \times {\Bbb T}^2 \to {\Bbb C} to (1) with initial data u(0,x) = u_0(x), whose mass, momentum, and energy remain constant for all time. However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time.  In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity.  This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically). To illustrate how this can happen, let us normalise the torus as {\Bbb T}^2 = ({\Bbb R}/2\pi {\Bbb Z})^2.  A simple example of a frequency cascade would be a scenario in which solution u(t,x) = u(t,x_1,x_2) starts off at a low frequency at time zero, e.g. u(0,x) = A e^{i x_1} for some constant amplitude A, and ends up at a high frequency at a later time T, e.g. u(T,x) = A e^{i N x_1} for some large frequency N. This scenario is consistent with conservation of mass, but not conservation of energy or momentum and thus does not actually occur for solutions to (1).  A more complicated example would be a solution supported on two low frequencies at time zero, e.g. u(0,x) = A e^{ix_1} + A e^{-ix_1}, and ends up at two high frequencies later, e.g. u(T,x) = A e^{iNx_1} + A e^{-iNx_1}.  This scenario is consistent with conservation of mass and momentum, but not energy.  Finally, consider the scenario which starts off at u(0,x) = A e^{i Nx_1} + A e^{iNx_2} and ends up at u(T,x) = A + A e^{i(N x_1 + N x_2)}.  This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency N and ends up with half of its mass at the slightly higher frequency \sqrt{2} N, with the other half of its mass at the zero frequency.  
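To see concretely why this last scenario is compatible with all three conservation laws, here is the bookkeeping (a worked check added for illustration, with the normalisation {\Bbb T}^2 = ({\Bbb R}/2\pi{\Bbb Z})^2 as above and A taken real). For a trigonometric polynomial u = \sum_n a_n e^{i n \cdot x}, the mass, momentum, and kinetic energy are (2\pi)^2 \sum_n |a_n|^2, (2\pi)^2 \sum_n n |a_n|^2, and \frac{1}{2} (2\pi)^2 \sum_n |n|^2 |a_n|^2 respectively. At time zero the active frequencies are (N,0) and (0,N), each with coefficient A; at time T they are (0,0) and (N,N), again with coefficient A. Both configurations therefore have mass 2 A^2 (2\pi)^2, momentum A^2 (2\pi)^2 (N,N), and kinetic energy A^2 N^2 (2\pi)^2. For the potential term, |u(0,x)|^2 = 2A^2 + 2A^2 \cos(N(x_1 - x_2)) and |u(T,x)|^2 = 2A^2 + 2A^2\cos(N x_1 + N x_2); in both cases \int_{{\Bbb T}^2} |u|^4\ dx = 6 A^4 (2\pi)^2, so the full energy E(u) is unchanged as well.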
More generally, given four frequencies n_1, n_2, n_3, n_4 \in {\Bbb Z}^2 which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies n_1, n_3 and propagates to frequencies n_2, n_4.

One way to measure a frequency cascade quantitatively is to use the Sobolev norms H^s({\Bbb T}^2) for s > 1; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large. (Note that mass and energy conservation ensure that the H^s({\Bbb T}^2) norms stay bounded for 0 \leq s \leq 1.) For instance, in the cascade from u(0,x) = A e^{i Nx_1} + A e^{iNx_2} to u(T,x) = A + A e^{i(N x_1 + N x_2)}, the H^s({\Bbb T}^2) norm is roughly 2^{1/2} A N^s at time zero and 2^{s/2} A N^s at time T, leading to a slight increase in that norm for s > 1. Numerical evidence then suggests the following

Conjecture. (Weak turbulence) There exist smooth solutions u(t,x) to (1) such that \|u(t)\|_{H^s({\Bbb T}^2)} goes to infinity as t \to \infty for any s > 1.

We were not able to establish this conjecture, but we have the following partial result (“weak weak turbulence”, if you will):

Theorem. Given any \varepsilon > 0, K > 0, s > 1, there exists a smooth solution u(t,x) to (1) such that \|u(0)\|_{H^s({\Bbb T}^2)} \leq \varepsilon and \|u(T)\|_{H^s({\Bbb T}^2)} > K for some time T.

This is in marked contrast to (1) in one spatial dimension {\Bbb T}, which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum which serve to keep all H^s({\Bbb T}) norms bounded in time. It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture). Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear equation, even for small data. (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio \|u(T)\|_{H^s({\Bbb T}^2)} / \|u(0)\|_{H^s({\Bbb T}^2)} can be made arbitrarily large when s > 1, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.) Intuitively, the problem is that the torus is compact and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.

I’ve just uploaded to the arXiv the paper “The cubic nonlinear Schrödinger equation in two dimensions with radial data“, joint with Rowan Killip and Monica Visan, and submitted to the Annals of Mathematics. This is a sequel of sorts to my paper with Monica and Xiaoyi Zhang, in which we established global well-posedness and scattering for the defocusing mass-critical nonlinear Schrödinger equation (NLS) iu_t + \Delta u = |u|^{4/d} u in three and higher dimensions d \geq 3 assuming spherically symmetric data. (This is another example of the recently active field of critical dispersive equations, in which both coarse and fine scales are (just barely) nonlinearly active, and propagate at different speeds, leading to significant technical difficulties.)
In this paper we obtain the same result for the defocusing two-dimensional mass-critical NLS iu_t + \Delta u= |u|^2 u, as well as in the focusing case iu_t + \Delta u= -|u|^2 u under the additional assumption that the mass of the initial data is strictly less than the mass of the ground state. (When the mass equals that of the ground state, there is an explicit example, built using the pseudoconformal transformation, which shows that solutions can blow up in finite time.) In fact we can show a slightly stronger statement: for spherically symmetric focusing solutions with arbitrary mass, the first singularity that forms concentrates at least as much mass as the ground state.

My paper “Resonant decompositions and the I-method for the cubic nonlinear Schrodinger equation on {\Bbb R}^2“, with Jim Colliander, Mark Keel, Gigliola Staffilani, and Hideo Takaoka (aka the “I-team“), has just been uploaded to the arXiv, and submitted to DCDS-A. In this (long-delayed!) paper, we improve our previous result on the global well-posedness of the cubic non-linear defocusing Schrödinger equation in two spatial dimensions, thus u: {\Bbb R} \times {\Bbb R}^2 \to {\Bbb C}.

In that paper we used the “first generation I-method” (centred around an almost conservation law for a mollified energy E(Iu)) to obtain global well-posedness in H^s({\Bbb R}^2) for s > 4/7 (improving on an earlier result of s > 2/3 by Bourgain). Here we use the “second generation I-method”, in which the mollified energy E(Iu) is adjusted by a correction term to damp out “non-resonant interactions” and thus lead to an improved almost conservation law, and ultimately to an improvement of the well-posedness range to s > 1/2. (The conjectured range is s \geq 0; below that, the solution becomes unstable and even local well-posedness is not known.) A similar result (but using Morawetz estimates instead of correction terms) has recently been established by Colliander-Grillakis-Tzirakis; this attains the superior range of s > 2/5, but in the focusing case it does not give global existence all the way up to the ground state due to a slight inefficiency in the Morawetz estimate approach. Our method is in fact rather robust and indicates that the “first-generation” I-method can be pushed further for a large class of dispersive PDE.
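For orientation (this notational recap is added here and is not part of the post): the operator I appearing in E(Iu) is, in the usual convention, the Fourier multiplier defined for a large parameter N by \widehat{Iu}(\xi) := m(\xi) \hat u(\xi), where the symbol m is smooth, radial, non-increasing, equal to 1 for |\xi| \leq N and equal to (N/|\xi|)^{1-s} for |\xi| \geq 2N. Thus I acts as the identity on low frequencies and damps high frequencies just enough that Iu lies in H^1 whenever u lies in H^s, making the energy E(Iu) finite for rough data; the point of the method is that E(Iu), while not exactly conserved, grows only by a small negative power of N on each time interval, which is the "almost conservation law" referred to above.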
I know how and why we use this form of the stationary Schrödinger equation for finding $\psi$ outside the finite square potential well: $$\frac{d^2 \psi}{dx^2}=\kappa^2 \psi$$ I also know that the general solution to this equation is: $$\psi = Ae^{\kappa\, x} + Be^{-\kappa\, x}.$$ But why do we use only the part $\psi = Ae^{\kappa \, x}$ on the left outer side $x<0$, and the part $\psi = B e^{-\kappa\, x}$ on the right outer side $x>0$?

• If you're using the usual conventions, don't you mean $Ae^{\kappa x}$ on the left, not right? – Dan Piponi Mar 29 '13 at 20:25

I think you are asking, for a finite well of width L, that is from $-L/2< x< L/2$: why do we only use $\psi(x) = A e^{+\kappa x}$ for $x<-L/2$ and $\psi(x) = B e^{-\kappa x}$ for $x>+L/2$? The reason is that we want to be able to interpret the wavefunction as the probability density for finding the particle at x. For this to make sense, the probability of finding the particle anywhere should be 1, or $\int_{-\infty}^{\infty}dx \ |\psi(x)|^2 = 1$. If you have solutions of the form $\psi(x) = A e^{+\kappa x} + B e^{-\kappa x}$, this is only possible if $A = 0$ in the $x > +L/2$ region and $B = 0$ in the $x < -L/2$ region.

• I think I get it, but please correct me if I am wrong. On the interval $x > +\tfrac{L}{2}$ I can only increase $x$ towards $+\infty$. So I have to make sure that I have a negative exponent, or else I'll get $\psi \to \infty$! This is why on this interval I need only $Be^{-\kappa x}$. It is vice versa for the interval $x<-\frac{L}{2}$, where I can only decrease $x$ towards $-\infty$. This means I have to take the part with a positive exponent so I won't get $\psi \to \infty$. It is all about the signs of the exponents, if I am not mistaken. – 71GA Mar 29 '13 at 20:39
• Pretty much. You can go a step further and actually try integrating $\int_{L/2}^\infty \ dx \ e^{\kappa x}$ and you will see that you will get $\infty$, so there isn't any hope of normalizing this to anything finite. – DJBunk Mar 29 '13 at 20:50

You need to ensure that the wavefunction is normalisable: $$\int\limits_{-\infty}^{\infty} \mathrm{d}x\, |\psi(x)|^2 = 1. $$ This ensures that the wave function yields a valid probability distribution upon application of the Born rule. If you use the normal convention of taking positive $x$ values to the right and negative $x$ values to the left, this implies that you can only keep $e^{-\kappa x}$ for the right outer side, etc. Otherwise the value of the wavefunction grows without bound as $x\to \pm \infty$.

• I know the normalisation integral but still I can't seem to understand this. – 71GA Mar 29 '13 at 20:28
• @71GA You should be able to convince yourself that, in general, the normalisation condition is violated for any function that does not vanish at $x = \pm \infty$. This means that the solution $e^{\kappa x}$ is not a valid solution on the right side of the box, and a similar argument applies to the left side. Have you actually tried evaluating the normalisation integral for the various possibilities to see what comes out? – Mark Mitchison Mar 29 '13 at 20:36
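As a small symbolic check (added here for illustration, not part of the original exchange), one can confirm with SymPy that the decaying tail is normalisable on the right of the well while the growing one is not, which is exactly why A must vanish there:

    import sympy as sp

    x, kappa, L = sp.symbols('x kappa L', positive=True)

    # Decaying tail on the right of the well: the integral is finite,
    # so psi = B*exp(-kappa*x) can be normalised for x > L/2.
    print(sp.integrate(sp.exp(-2*kappa*x), (x, L/2, sp.oo)))   # exp(-L*kappa)/(2*kappa)

    # Growing tail: the integral diverges, so the coefficient A must vanish for x > L/2.
    print(sp.integrate(sp.exp(2*kappa*x), (x, L/2, sp.oo)))    # oo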
I have just started Griffiths Intro to QM. I was studying Born's interpretation of the wave function, and it says that the square of the modulus of the wave function is a measure of the probability of finding the particle at that position. I didn't read further, so my doubt might be stupid. But if the wave function is complex, then surely its modulus is real and non-negative.

1. If so, then why should we square the modulus, when the modulus itself already gives real and non-negative values that could serve as a probability?
2. Does this squaring have anything to do with the fact that the Schrödinger equation is of second order?

Kindly pardon me if the question is stupid, I am just a beginner!

• @Jimself the modulus of a complex number $z$ is $\sqrt{\bar{z}z}$ and is always positive... – yuggib Jul 20, 2015 at 14:18
• Thank you @Jimself for the reply. But a complex number is represented using only a modulus, which is non-negative, isn't it? The modulus in essence is the distance of the point from the origin of the complex plane, so how is it negative? I am confused. – Jul 20, 2015 at 14:21
• @yuggib right, I was just temporarily sideblinded wondering what it being complex had to do with it – Jim Jul 20, 2015 at 14:26

Well, there are multiple reasons, but a very important one is that it can be proven (from the Schrödinger equation) that $$\frac{\mathrm d}{\mathrm dt}\int \mathrm d\boldsymbol x\ |\psi(\boldsymbol x,t)|^2=0$$ so that, if at any moment in time we have $\int \mathrm d\boldsymbol x\ |\psi(\boldsymbol x,t)|^2=1$, this will remain true at any other time. On the other hand, the integral of $|\psi|$ is not time-independent, so a consistent normalization is not possible. We need the integral to be time-independent, because otherwise a probabilistic interpretation wouldn't be possible: the probability of finding the particle somewhere has to be $1$. If we used $|\psi|$ as a probability distribution, and at some point in time its integral equalled $1$, this would change over time, which wouldn't make any sense. On the other hand, as I already stated, we can think of $|\psi|^2$ as a probability precisely because its integral is time-independent. So if at any point in time the integral of $|\psi|^2$ equals $1$, this will remain true at any other point in time. Also, this has nothing to do with the Schrödinger equation being of 2nd order: the Dirac equation is a 1st order equation and (in some sense) the probability distribution is still $\psi^\dagger\psi$.

Edit: there is another explanation that might be more "physical", closer to our intuition. You probably know about the double-slit experiment, a standard way of introducing QM. When learning about such an experiment, we are given two scenarios: first, think of the double slit being hit by light. We know from optics about the phenomenon of interference: the electromagnetic field is radiated from each slit, thus interfering when reaching the screen. The interference pattern is easily understood, mathematically, when we think of the electric field as a wave propagating through space. We know that the intensity observed at the screen is the modulus squared of $\boldsymbol E$, where $\boldsymbol E=\boldsymbol E_1+\boldsymbol E_2$. When calculating the modulus squared, we get the expected interference (crossed) term. The observed intensity is just $I(x)=|\boldsymbol E(x)|^2$.
On the other hand, if we think of the experiment using electrons, we know that the interference pattern is still produced, so, inspired by classical electrodynamics, we think of another wave propagating through space such that its modulus squared gives the intensity on the screen; i.e., the modulus squared of the wave function is like the intensity of the light: where it is high, there is a high chance of finding an electron. In this way, we can think of $|\psi|^2$ as a probability distribution, in the same way we can think of $|\boldsymbol E|^2$ as a probability distribution for the photon. There is actually a lot in QM taken from classical electromagnetism. For the record, I must say that this analogy between the electric field and the wave function is rather limited, and should not be pushed too far: it will lead to incorrect conclusions. The electric field is not the wave function of the photon.

• I should add that actually Dirac's equation is more complicated (and subtle) than Schrödinger's, for many reasons (note that I wrote $\psi^\dagger$ instead of $\psi^*$); in Dirac's case, $\psi$ is not really a function, and $\psi^\dagger\psi$ is not really a probability distribution, but it is related somehow. – Jul 20, 2015 at 14:34
• Thanks @qftishard. I think I get the idea now. So it all has to do with time independence and the fact that the total probability has to be 1. Such a feature will not appear if we use just the modulus. Thanks. – Jul 20, 2015 at 14:35
• (2/2) You will soon meet the concept of an operator through your journey with QM. You've got to be patient: it is a beautiful journey, but certainly not an easy one. Stay there, just don't give up :) – Jul 20, 2015 at 18:05
• Thanks for providing the link to that question. +1 for "The electric field is not the wave function of the photon." – user36790 Nov 1, 2015 at 11:28
• I must be missing something obvious: why is the integral of the modulus squared time-independent but the integral of just the modulus is not? – T3db0t May 19, 2018 at 0:01

Your suspicion that the square of the modulus of the wave function has to be used as the probability density is indeed related to the Schrödinger equation. I approach the question using eigenvalue equation theory. The Schrödinger equation is an eigenvalue equation, and the wave function is a superposition of its eigenfunctions. In other words, the wave function can be expanded in a complete set of eigenfunctions: $\Psi = \sum_l a_l\psi_l$, each of which fulfills the Schrödinger equation for a particular energy eigenvalue $E_l$: $H\psi_l = E_l\psi_l$. The coefficients $a_l$ express the probability (actually it is $|a_l|^2$) of finding a particle with wave function $\Psi$ in state $\psi_l$. The different $\psi_l$ have to be orthogonal to each other, not only for mathematical reasons but also for a physical one. If an energy measurement is carried out and the particle governed by the Schrödinger equation is found to be in energy state $E_n$, then it will stay in that energy state. That means after the measurement $\Psi =\psi_n$ and $a_l= \delta_{ln}$ (the wave function $\Psi$ collapses to the particular function $\psi_n$).
This is expressed in the quantum mechanical formalism by saying that the projection of the wave function $\Psi$ on an eigenfunction of a different energy state $m \neq n$ vanishes, $\int \Psi^{\star} \psi_m dx =0$, whereas the projection of the wave function on the eigenfunction of state $n$ is one: $\int \Psi^{\star} \psi_n dx =1$. You can check this by putting $\Psi = \sum_l a_l\psi_l$ with $a_l= \delta_{ln}$ in the integral. The eigenfunctions of the Schrödinger eigenvalue equation fulfill this property very well (the eigenfunctions can of course easily be normalized so that they fulfill $\int \psi_m^{\star} \psi_m dx =1$ for all $m$). From this we learn that in a particular state $m$, $\int |\psi_m|^2 dx =1$. The same consideration can be made for any other well-defined operator and its corresponding eigenstates.

Finally, as a physicist you would like to make sense of this relation $\int |\psi_m|^2 dx =1$. And Born ended up stating that it means that $|\psi_m|^2$ is the probability density to find a particle in the interval $dx$. Here $x$ does not necessarily mean position; a priori it could also be momentum $p$, etc. But the concept remains the same: it would mean that $|\psi_m|^2$ is the probability density to find the particle in the momentum interval $dp$. And obviously trying to define the probability density via $\int |\psi_m| dx =1$ does not make much sense, as this expression $\int |\psi_m| dx =1$ does not appear in the eigenmode formalism.

It is perhaps useful to stress that eigenvalue equations like the Schrödinger equation appear everywhere in physics, for instance in electrodynamics, and their eigenfunctions fulfill the same relations as those of the Schrödinger equation. This means the formalism does not fall from the sky; it is also used in classical physics. It is the way to solve eigenvalue equations in mathematical physics. But on top of it a typical quantum mechanical assumption is used: the collapse of the wave function upon a measurement to a particular eigenfunction. And finally the orthogonality relations get a particular physical, even quantum-mechanical, sense.

My explanation is certainly not the most rigorous one; in the full-fledged QM formalism with bra and ket vectors it can be shown more rigorously. And I recommend you to continue reading the book; although I don't know it, it will certainly demonstrate this in some way in what follows.

• Thank you @Frederic Thomas. I must confess this answer was a bit difficult for me to understand. Like I said, I am a beginner and I am studying on my own. But on reading it twice, I think I get some hold of it. :) – Jul 20, 2015 at 16:07
• @anandtr2006: yeah, finally I've found it not very convincing. The answer from qftishard is probably more conclusive. The only thing I wanted to say is that it is rather natural to work with products of wave functions, including squared moduli, instead of moduli. – Jul 23, 2015 at 15:10
• Thank you Frederic Thomas, I appreciate your effort. I have a lot of work to catch up on, I guess. :D – Jul 25, 2015 at 3:10

In the Born interpretation, $\psi$ doesn't have any physical meaning. It's only $|\psi|^2$ that has a physical meaning.
In the case of a one-dimensional wave function, $|\psi(x)|^2$ is a probability density, where the probability of finding the particle described by that wave function between two points $x=a$ and $x=b$ is $$P(a \leq x \leq b)=\int_a^b|\psi|^2dx$$ provided the wave function has been normalized so that $$\int_{-\infty}^{\infty}|\psi|^2dx=1$$ We don't care about $|\psi|$ in the Born interpretation, even though it is real and non-negative.

• Thank you @Muster_Mark for the reply. But this still doesn't answer why the square of the modulus was chosen. Doesn't the modulus of the wave function have a physical significance? – Jul 20, 2015 at 14:25
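A small numerical illustration (an added sketch, not part of the original thread) of the point made in the first answer: under free time evolution the integral of |psi|^2 stays fixed, while the integral of |psi| does not, so only |psi|^2 can serve as a conserved probability density. The Gaussian packet below is evolved exactly in Fourier space with hbar = m = 1.

    import numpy as np

    # Free-particle evolution of a Gaussian wave packet (hbar = m = 1),
    # done exactly in Fourier space: psi_hat(k, t) = exp(-i k^2 t / 2) * psi_hat(k, 0).
    N, L = 4096, 400.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

    psi0 = np.exp(-x**2 / 2.0)
    psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalise so that int |psi|^2 dx = 1
    psi0_hat = np.fft.fft(psi0)

    for t in (0.0, 5.0, 20.0):
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * t) * psi0_hat)
        p2 = np.sum(np.abs(psi)**2) * dx            # stays equal to 1 (unitary evolution)
        p1 = np.sum(np.abs(psi)) * dx               # grows as the packet spreads
        print(f"t={t:5.1f}   int|psi|^2 dx = {p2:.6f}   int|psi| dx = {p1:.4f}")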
The Role of Exact Conditions in TDDFT Lucas O. Wagner    Kieron Burke Department of Physics and Astronomy and Department of Chemistry, University of California, Irvine, CA 92697, USA thanks: To appear in: Time-dependent density functional theory, 2 ed., edited by M. Marques, et al. (Springer, 201X). This chapter is devoted to exact conditions in time-dependent density functional theory. Many conditions have been derived for the exact ground-state density functional, and several have played crucial roles in the construction of popular approximations. We believe that the reliability of the most fundamental approximation of any density functional theory, the local density approximation (LDA), is due to the exact conditions that it satisfies. Improved approximations should satisfy at least those conditions that LDA satisfies, plus others. (Which others is part of the art of functional approximation). In the time-dependent case, as we shall see, the adiabatic LDA (ALDA) plays the same role as LDA in the ground-state case, as it satisfies many exact conditions. But we do not have a generally applicable improvement beyond ALDA that includes nonlocality in time. For TDDFT, we have a surfeit of exact conditions, but that only makes finding those that are useful to impose an even more demanding task. Throughout this chapter, we give formulas for pure DFT for the sake of simplicity (e.g. ), but in practice spin DFT is used (e.g. ). We use atomic units everywhere (), so energies are in units of Hartrees and distances are in Bohrs. I Review of the ground state In ground-state DFT, the unknown exchange-correlation energy functional, , plays a crucial role. In fact, it is this energy that we typically wish to approximate with some given level of accuracy and reliability, and not the density itself. Using such an approximation in a modern Kohn–Sham ground-state DFT calculation, we can calculate the total energy of any configuration of the nuclei of the system within the Born–Oppenheimer approximation. In this way we can extract the bond lengths and angles of molecules and deduce the lowest energy lattice structure of solids. We can also extract forces in simulations, and vibrational frequencies and phonons and bulk moduli. We can discover response properties to both external electric fields and magnetic fields (using spin DFT). The accuracy of the self-consistent density is irrelevant to most of these uses. Given the central role of the energy, it makes sense to devote much effort to its study as a density functional. Knowledge of its behavior in various limits can be crucial to restraining and constructing accurate approximations, and to understanding their limitations. This task is greatly simplified by the fact that the total ground-state energy satisfies the variational principle. Many exact conditions use this in their derivation. In this section we will review some of the more prominent exact conditions. They almost all concern the energy functional, which, as mentioned above, is crucial for good KS-DFT calculations. We also refer the interested reader to Ref. Perdew and Kurth (2003) for a thorough discussion. First, we will go over some of the formal definitions in DFT. 
i.1 Basic definitions The XC energy as a functional of the density is written as Levy (1979); Lieb (1983) where is a correctly antisymmetrized electron wavefunction, the minimization of the kinetic and electron–electron repulsion energies is done over all such wavefunctions that yield the density , is the minimum (non-interacting) kinetic energy of a system with density , and is the Hartree energy. The XC energy is usually split into an exchange piece, , and a correlation piece, . Exchange can be defined in a HF-like way in terms of the KS spin orbitals : To perform the self-consistent calculations in the non-interacting system, we need the functional derivative of the XC energy, This is called the XC potential, and it is the essential part of the multiplicative KS potential . Orbital dependent functionals: Some functionals are most naturally expressed in terms of the orbitals rather than the density. When varying the orbitals of these functionals, nonlocal potentials are obtained. For example, varying in Eq. (3) leads to the nonlocal exchange term used in HF. There is a way to transform such orbital-dependent functionals into local potentials as in Eq. (4). This procedure is known as optimized effective potential (OEP) or optimized potential method (OPM) and is computationally expensive Kümmel and Kronik (2008). Using OEP for results in the exact exchange approximation (EXX) for in KS-DFT. The Krieger, Li, and Iafrate (KLI) approximation is a way to approximately solve EXX Krieger et al. (1992). Adiabatic connection: One can imagine smoothly connecting the interacting and non-interacting systems by multiplying the electron–electron repulsion term by , called the coupling-constant. Changing varies the strength of the interaction, and if we simultaneously change the external potential to keep the density fixed, we have a family of solutions for various interaction strengths. This makes all quantities (besides the density) functions of . When , one has the non-interacting KS system, and when , one has the fully interacting system. The following coupling-constant relations hold. XC energy dependence: Altering the coupling-constant is simply related to scaling the density: where is the scaled density with . Adiabatic connection formula: By using the Hellmann–Feynman theorem, one can show: where is the potential contribution to exchange-correlation energy () at coupling-constant . i.2 Standard approximations Despite a plethora of approximations Perdew et al. (2005), no present-day approximation satisfies all the conditions mentioned in this chapter, as seen in tests on bulk solids and surfaces Staroverov et al. (2004). With that the case, one must choose which conditions to impose on a given approximate form. Non-empirical (ab initio) approaches attempt to fix all parameters via exact conditions Perdew et al. (1996, 1996), while good empirical approaches might include one or two parameters that are fit to some data set Becke (1988); Lee et al. (1988); Becke (1993). There are two basic flavors of approximations: pure density functionals, which are often designed to meet conditions on the uniform gas, and orbital-dependent functionals Grabo et al. (2000), which meet the finite-system conditions more naturally. The most sophisticated approximations being developed today use both Tao et al. (2003). For a good discussion on what approximation is the right tool for the job, see Ref. Rappoport et al. (2009). LDA: The local density approximation is the bread and butter of DFT. 
It is the simplest, being derived from conditions on the uniform gas Kohn and Sham (1965). Though it is too inaccurate for quantum chemistry (being off by about 1 eV or 30 kcal/mol), it is useful in solids and other bulk materials where the electrons almost look like a uniform gas. There can only be one LDA. GGA: The generalized gradient approximation came from trial and error when energies were allowed to depend on the gradient of the density. While more accurate than the LDA (getting errors down to 5 or 6 kcal/mol), and thus useful for quantum chemistry applications, there is no uniquely-defined GGA. BLYP is an empirical GGA that was designed to minimize the error in a particular data set. PBE is a non-empirical GGA designed to satisfy exact conditions. Hybrid: Hybrids have an exchange energy which is a mixture of GGA and HF, which attempts to get the best of both worlds: where is defined in (3). The parameter was argued to be 0.25 for the non-empirical PBE0, but is fitted for the empirical B3LYP. i.3 Finite systems The following conditions are derived for finite systems, just as the Hohenberg–Kohn theorem is. Signs of energy components: From the variational principle and other elementary considerations, one can deduce Zero XC force and torque theorem: The XC potential cannot exert a net force or torque on the electrons Levy and Perdew (1985): XC virial theorem: where is the kinetic contribution to the correlation energy. The XC virial theorem as well as the zero XC force and torque theorem are satisfied by all sensible approximate functionals. Exchange scaling: By using the scaled density (6), one can easily show Correlation scaling: The scaling of correlation is less simple than exchange, and will depend on whether one is in the high density limit ( large) or low density limit ( small) Levy and Perdew (1985); Seidl et al. (2000): where , , , and are all scale-invariant functionals. These conditions are depicted in Fig. 1. Not all popular approximations satisfy these conditions. Scaling of the correlation energy in ground state DFT, as well as the various conditions from Eq. ( Figure 1: Scaling of the correlation energy in ground state DFT, as well as the various conditions from Eq. (13). The first two relations are illustrated with the dotted line. For , the exact curve (solid) must lie below this dotted line, and for the exact curve must lie above – in both cases within the shaded region of the graph. The high density limit is shown with the dot-dashed line, and the low density limit with the dashed line. It is believed that not only is monotonic, but also its derivative with respect to . Color online. Self-interaction: For any one-electron system Perdew and Zunger (1981), Lieb–Oxford bound: For any density Lieb and Oxford (1981), In addition to conditions on , we also know some exact conditions on the XC potential and the KS eigenvalues. Asymptotic behavior of potential: Far from a Coulombic system where is the position of the highest occupied KS molecular orbital, and the ionization potential. These results are intimately related to the self-interaction of one electron. i.4 Extended systems The basic theorems of DFT are proven for finite quantum mechanical systems, with densities that decay at large distances from the center. Their extension to extended systems, even those as simple as the uniform gas, requires careful thought. For ground-state properties, one can usually take results directly to the extended limit without change, but not always. 
For example, the high-density limit in Eq. (13)  of the correlation energy for a finite system is violated by a uniform gas. With these things in mind, we will now discuss a set of conditions that involve the properties of the uniform or nearly uniform electron gas. Uniform density: When the density is uniform, , where is the XC energy density of a uniform electron gas of density , and is the volume. This forms the basis of LDA. Slowly varying density: For slowly varying densities, should recover the gradient expansion approximation (GEA): where is the leading correction to the LDA XC energy density for a slowly varying electron gas Langreth and Perdew (1980). However, the GEA was found to give poor results and violate several important sum rules for the XC hole when applied to other systems Burke et al. (1998). Fixing those sum-rules led to the development of ab initio GGAs. Though important in obtaining the energy for the ground-state, the XC hole rules have not been used in TDDFT and therefore will not be further discussed in this chapter. Linear response of uniform gas: Another generic limit is when a weak perturbation is applied to a uniform gas, and the resulting change in energy is given by the static response function, . This function is known from accurate Quantum Monte Carlo calculations Moroni et al. (1995), and approximations can be tested against it. Ii Overview for TDDFT The time-dependent problem is more complex than the ground-state problem, making the known exact conditions more difficult to classify. We make the basic distinction between general time-dependent perturbations, of arbitrary strength, and weak fields, where linear response applies. The former give conditions on for all time-dependent densities, the latter yield conditions directly on the XC kernel, which is a functional of the ground-state density alone. Of course, all of the former also yield conditions in the special case of weak fields. In the time-dependent problem, we do not have the energy playing a central role. Formally, the action plays an analogous role (see van Leeuwen ch 6), but in practice, we never evaluate the action in TDDFT calculations (and it is identically zero on the real time evolution). In TDDFT, our focus is truly the time-dependent density itself, and so, by extension, the potential determining that density. Thus many of our conditions are in terms of the potential. Most pure density functionals for the ground-state problem produce poor approximations for the details of the potential. Such approximations work well only for quantities integrated over real space, such as the energy. Thus approximations that work well for ground-state energies are sometimes very poor as adiabatic approximations in TDDFT. Their failure to satisfy Eq. (16) leads to large errors in the KS energies of higher-lying orbitals (for example, consider the LDA potential for Helium in Figure 3 of Ref. Elliott et al. (2009), which falls off exponentially rather than as ), and (17) is often violated by several eV. In place of the energy, there are a variety of physical properties that people wish to calculate. For example, quantum chemists are most often focused on the first few low-lying excitations, which might be crucial for determining the photochemistry of some biomolecule. Then the adiabatic generalization of standard ground-state approximations is often sufficient. 
At the other extreme, people who study matter in strong laser fields are often focused on ionization probabilities (see Ullrich and Bandrauk chapter), and there the violation of Eq. (17) makes explicit density approximations too crude, and requires orbital-dependent approximations instead. ii.1 Definitions In contrast to the ground-state problem, the XC potential depends not only on the density but on the initial wavefunction and KS Slater determinant , written symbolically as . This more complicated dependence comes about because two different wavefunctions, which are chosen to have the same density for all time, can come from completely different external potentials, which the XC potential accounts for. We can get rid of this initial wavefunction dependence if we start from a non-degenerate ground-state, where the wavefunction is a functional of the density alone, via the Hohenberg–Kohn theorem Hohenberg and Kohn (1964). These things are further discussed in Neepa’s chapter. As the density evolves, the XC potential is determined not solely by the present density , but also by the history for . However, it is useful to break the XC potential up into two pieces, an adiabatic piece which only deals with the present density, and a dynamic piece which incorporates the memory dependence: The adiabatic piece of the potential, is the XC potential for electrons as if their instantaneous density were a ground state. In the spirit of DFT, the dynamic piece is everything else. In the linear response regime, small enough perturbations to the density will continuously change the XC potential: where is the XC kernel, which can be written formally as the functional derivative: The evaluation at reminds us that is used for the linear response of a density variation away from a ground-state density . Like the XC potential, the kernel can also be broken down into an adiabatic piece: and a dynamic piece, which includes memory and everything else. The kernel is often Fourier-transformed from position space in the relative coordinate () to momentum space (with wave-vector ), from the relative time () to frequency () domain, or both. Some conditions are more naturally expressed in momentum space and/or in the frequency domain. In the frequency domain, the adiabatic piece can be written as The kernel is discussed in more detail in Chapter 4 (TDDFT intro by Gross). ii.2 Approximations As we go through the various exact conditions, we will discuss whether the simplest approximations in present use satisfy them. We can divide all approximations into two classes based on whether or not the approximation neglects the dynamic term of Eq. (19); these classes are respectively adiabatic and non-adiabatic (i.e. memory) approximations. In the adiabatic approximation, familiar ground-state functionals (such as LDA, GGA, and hybrids) can produce XC potentials when one uses the approximate in Eq. (20). We mention two notable adiabatic approximations now. ALDA: The prototype of all TDDFT approximations is the Adiabatic Local Density Approximation, and it is the simplest pure density functional. The XC potential is as simple as can be: In linear response, the ALDA kernel is Like its ground-state inspiration, ALDA satisfies important sum rules by virtue of its simplicity, namely its locality in space and time. ALDA is commonly used in many calculations, and is described further in chap 1. AA: In the ‘exact’ adiabatic approximation, we use the exact in Eq. (20). 
This approximation is the best that an adiabatic approximation can do, unless there is some lucky cancellation of errors. Hessler et al. Hessler et al. (2002) investigated AA applied to a time-dependent Hooke’s atom system and found large errors in the instantaneous correlation energy. For the double ionization of a model Helium atom, Thiele et al. Thiele et al. (2008) discovered that non-adiabatic effects were important only for high-frequency fields. A key aim of today’s methodological development is to build in correlation memory effects. Any attempt to build in memory goes beyond the adiabatic approximation, and thus belongs in the non-adiabatic class of approximations. The next three approximations belong to this dynamic class. GK: The Gross–Kohn approximation is simply to use the local frequency-dependent kernel of the uniform gas, is the response of the uniform electron gas with density . GK was the first approximation to go beyond the adiabatic approximation, but was found to violate translational invariance. VK: The Vignale–Kohn approximation sought to improve upon the shortcomings of GK. The VK approximation is simply the gradient expansion in the current density for a slowly-varying gas (see Vignale chapter). XX: Exact exchange, the orbital-dependent functional, is treated as an implicit density functional (see Kümmel’s orbital chapter (11)). When treated this way, XX has some memory for more than two unpolarized electrons. With the exception of XX, non-adiabatic approximations are usually limited to the linear response regime and approximate the kernel, . There is now a major push to go beyond linear response for non-adiabatic approximations. The first such attempt was a bootstrap approach of Ref. Dobson et al. (1997). More recent attempts are described in Chapter 26 (Tokatly) of the book and in Ref. Kurzweil and Baer (2004). Iii General conditions In this section, we discuss conditions that apply no matter how strong or how weak the time-dependent potential is. They apply to anything: weak fields, strong laser pulses, and everything in between. They apply also to the linear response regime, yielding the more specific conditions discussed in Section IV. iii.1 Adiabatic limit One of the simplest exact conditions in TDDFT is the adiabatic limit. For any finite system, or an extended system with a finite gap, the deviation from the instantaneous ground-state during a perturbation (of arbitrary strength) can be made arbitrarily small. This is the adiabatic theorem of quantum mechanics, which can be proven by slowing down the time-evolution, i.e., if the perturbation is , replacing it by and making sufficiently large. Similarly, as the time-dependence becomes very slow (or equivalently, as the frequency becomes small), for such systems the functionals reduce to their ground-state counterparts: where is the exact ground-state XC potential of density . By definition, any adiabatic approximation satisfies this theorem, and so does XX, by reducing to its ground-state analog for slow variations. On the other hand, if an approximation to were devised that was not based on ground-state DFT, this theorem can be used in reverse to define the corresponding ground-state functional. iii.2 Equations of motion In this section, we discuss some elementary conditions that any reasonable TDDFT approximation should satisfy. Because these conditions are satisfied by almost all approximations, they are best applied to test the quality of propagation schemes. 
For a scheme that does not automatically satisfy a given condition, then a numerical check of its error provides a test of the accuracy of the solution. A simple analog is the check of the virial theorem in ground-state DFT in a finite basis. These conditions are all found via a very simple procedure. They begin with some operator that depends only on the time-dependent density, such as the total force on the electrons. The equation of motion for the operator in both the interacting and the KS systems are written down, and subtracted. Since the time-dependent density is the same in both systems, the difference vanishes. Usually, the Hartree term also separately satisfies the resulting equation, and so can be subtracted from both sides, yielding a condition on the XC potential alone. This procedure is well-described in the Vignale chapter for the zero XC force theorem. Zero XC force and torque: These are very simple conditions saying that interaction among the particles cannot generate a net force Vignale (1995, 1995): where is the difference between the interacting current density and the KS current density van Leeuwen (2001). The second condition says that there is no net XC torque, provided the KS and true current densities are identical. This is not guaranteed in TDDFT (but is in TDCDFT). The X-only KLI approximation, though incredibly accurate for ground state DFT, was found to violate the zero-force condition Mundt et al. (2007). This is because KLI is not a solution to an approximate variational problem, but instead an approximate solution to the OEP equations. This means KLI also violates the virial theorem Fritsche and Yuan (1998), which we describe next. XC Power and Virial: By applying the same methodology to the equation of motion for the Hamiltonian, we find Hessler et al. (1999): while another equation of motion yields the virial theorem, which intriguingly has the exact same form as in the ground state, Eq. (11): These conditions are so basic that they are trivially satisfied by any reasonable approximation, including ALDA, AA, and XX. Thus they are more useful as detailed checks on a propagation scheme, as mentioned earlier. The correlation contribution to the latter is very small, and makes a very demanding test. But because the energy does not play the same central role as in the ground-state problem (and the action is not simply the time-integral of the energy – see Robert’s chapter 2), testing the propagation scheme is all they are used for so far. iii.3 Self-interaction For any one-electron system, These conditions are automatically satisfied by XX. These conditions are instantaneous in time, so any adiabatic approximation that satisfies the ground-state conditions of Eq. (14) will also satisfy these time-dependent conditions, e.g. AA. On the other hand, LDA violates self-interaction conditions in the ground-state, so ALDA also violates these conditions in TDDFT. iii.4 Initial-state dependence There is a simple condition based on the principle that any instant along a given density history can be regarded as the initial moment Maitra et al. (2002); Maitra (2005). This follows very naturally from the fact that the Schrödinger equation is first order in time. When applied to both interacting and non-interacting systems, we find: This is discussed in much detail in Neepa’s chapter. Here we just mention that any adiabatic approximation, by virtue of its lack of memory and lack of initial-state dependence, automatically satisfies it. 
Interestingly, although XX is instantaneous in the orbitals, it has memory (and so initial-state dependence) as a density functional (when applied to more than two unpolarized electrons). This condition provides very difficult tests for any functional with memory. Consider any two evolutions of an interacting system, whose wavefunctions and become equal after some time, . This condition requires that the non-interacting systems have identical XC potentials at that time and forever after, even though they had different histories before then. This is illustrated in Fig. 2. An approximate functional with memory is unlikely, in general, to produce such identical potentials. An illustration of the condition based on initial state dependence. The two wavefunctions Figure 2: An illustration of the condition based on initial state dependence. The two wavefunctions and become equal at time , and therefore the KS potentials must become equal then and forever after. Color online. iii.5 Coupling-constant dependence Because of the lack of a variational principle for the energy, there are no definite results for various limits, as in Eq. (13), nor is there a simple extension of the adiabatic connection formula (7), though Görling proposed an analog for time-dependent systems Görling (1997). But there remains a simple connection between scaling and the coupling-constant for the XC potential Hessler et al. (1999). For exchange, analogous to Eq. (12), the relation is linear: is the normalized initial state of the Kohn–Sham system with coordinates scaled by , and, for time-dependent densities, There is no simple correlation scaling, but we can relate the coupling-constant to scaling and find, analogous to Eq. (7): where is the scaled initial state of the interacting system, defined as in Eq. (36) and replacing with . For finite systems, it seems likely that taking the limit makes the exchange term dominant (just as in the ground-state) Hessler et al. (2002), but this has yet to be proven. iii.6 Translational invariance Consider a rigid boost of a system starting in its ground state at , with . Then the exchange-correlation potential of the boosted density will be that of the unboosted density, evaluated at the boosted point, i.e., where This condition is universally valid Vignale (1995). The GK approximation was found to violate this condition, which spurred on the development of the VK approximation. Iv Linear response In the special case of linear response, all exchange-correlation information is contained in the kernel . Linear response is utilized in the great majority of TDDFT calculations, and Strubbe thoroughly discusses the methods involved in Chapter 7. As explained in Chapter 24 (Martin Head-Gordon) and Ref. Elliott et al. (2009), the chief use of linear response has been to extract electronic excitations. In this section, we shall discuss the exact conditions that pertain to , regardless of how it is employed. iv.1 Consequences of general conditions Each of the conditions listed below for can be derived from a general condition in Section III. Adiabatic limit: For any finite system, the exact kernel satisfies: where is the exact XC energy. Obviously, any adiabatic functional satisfies this, with its corresponding ground-state approximation on the right. Zero force and torque: The exact conditions on the potential of Section III.2 also yield conditions on , when applied to an infinitesimal perturbation (see Vignale chapter). Taking functional derivatives of Eq. 
(III.2) yields the latter assuming no XC transverse currents. Again, these are satisfied by ground-state DFT with the static XC kernel, so they are automatically satisfied by any adiabatic approximation. Similarly, in the absence of correlation, they hold for XX. The general conditions employing energies, Eqs. (31) and (32), do not yield simple conditions for the kernel, because the functional derivative of the exact time-dependent XC energy is not the XC potential. Self-interaction error: For one electron, functional differentiation of Eq. (33) yields: These conditions are trivially satisfied by XX, but violated by the density functionals ALDA, GK, and VK. Initial-state dependence: The initial-state condition, Eq. (34), leads to very interesting restrictions on for arbitrary densities. But the information is given in terms of initial-state dependence, which is very difficult to find. Coupling-constant dependence: The exchange kernel scales linearly with coordinates, as found by differentiating Eq. (35): A functional derivative and Fourier-transform of Eq. (38)  yields Lein et al. (2000) These conditions are trivial for XX. They can be used to test the derivations of correlation approximations in cases where the coupling-constant dependence can be easily deduced. More often, they can be used to generate the coupling-constant dependence when needed, such as in the adiabatic connection formula of Eq. (7). A similar condition has also been derived for the coupling-constant dependence of the vector potential in TDCDFT Dion and Burke (2005). iv.2 Properties of the kernel The kernel has many additional properties that come from its definition and other physical considerations. Symmetry: Because the susceptibility is symmetric, so must also be the kernel: This innocuous looking condition is satisfied by any adiabatic approximation by virtue of the kernel being the second derivative of an energy, and is obviously satisfied by XX. Kramers–Kronig: The kernel is an analytic function of in the upper half of the complex -plane and approaches a real function for . Therefore, defining the function we find The kernel is real-valued in the space and time domain, which leads to the condition in the frequency domain, The simple lesson here is that any adiabatic kernel (no frequency dependence) is purely real, and any kernel with memory has an imaginary part in the frequency domain (or else is not sensible). Many of the failures of current TDDFT approximations, e.g. the fundamental gap of solids, are linked to the lack of an imaginary part of the kernel Giuliani and Vignale (2008). Because adiabatic approximations produce real kernels, we see that memory is required to produce complex kernels. Hellgren et al. Hellgren and von Barth (2009) showed that XX has a complex kernel, since it has frequency-dependence (for more than 2 electrons). Both GK and VK have complex kernels satisfying the Kramers–Kronig conditions. Adiabatic connection: A beautiful condition on the exact XC kernel is given simply by the adiabatic connection formula for the ground-state correlation energy: Combined with the Dyson-like equation of Chapter 1 for as a function of and , this is being used to generate new and useful approximations to the ground-state correlation energy Fuchs and Gonze (2002); Fuchs et al. (2005). Although computationally expensive, ways are being found to speed up the calculations Eshuis et al. (2010). Eq. (51) provides an obvious exact condition on any approximate XC kernel for any system. 
Thus every system for which the correlation energy is known can be used to test approximations for f_xc. Note that, e.g., using ALDA for the kernel implicit in (51) does not yield the corresponding LDA correlation energy, but rather a much more sophisticated functional Lein et al. (2000). Even insertion of f_xc = 0 (the random-phase approximation) yields correlation contributions to all orders in the coupling constant. And lastly, even the exact adiabatic approximation does not yield the exact correlation energy.

Functional derivatives: A TDDFT result ought to come from a TDDFT calculation, but this is not always the case. By a TDDFT calculation, we mean the result of an evolution of the TDKS equations of Chapter 1 with some approximation for the XC potential that is a functional of the density. This implies that the XC kernel should be the functional derivative of some XC potential, which also reduces to the ground-state potential in the adiabatic limit. All the approximations discussed here satisfy this rule. But calculations that intermix kernels with potentials in the solution of Casida's equations violate this condition, and run the risk of violating underlying sum rules.

IV.3 Excited states

The following conditions have to do with the challenges of obtaining excited states in the linear response regime.

Infinite lifetimes of eigenstates: This may seem like an odd requirement. When TDDFT is applied to calculate a transition to an excited state, the frequency should be real. This is obviously true for ALDA and exact exchange, but not so clear when memory approximations are used. As mentioned in Section IV.2, the Kramers–Kronig relations mean that memory implies imaginary XC kernels, and these can yield imaginary contributions to the transition frequencies. Such effects were seen in calculations using VK for atomic transitions Ullrich and Burke (2004). Indeed, very long lifetimes were found when VK was working well, and much shorter ones occurred when VK was failing badly.

Single-pole approximation for exchange: This is another odd condition, in which two wrongs make something right. Using Görling–Levy perturbation theory Görling and Levy (1993), one can calculate the exact exchange contributions to excited-state energies Filippi et al. (1997); Zhang and Burke (2004). To recover these results using TDDFT, one does not simply use the exchange kernel and solve the Dyson-like equations. As with Eq. (51), the infinite iteration yields contributions to all orders in the coupling constant. However, the single-pole approximation (SPA) truncates this series after one iteration, and so drops all higher orders. Thus the correct exact exchange results are recovered in TDDFT from the SPA solution to the linear response equations, and not by a full solution Gonze and Scheffler (1999). This procedure can be extended to the next order Appel et al. (2003).

Double excitations and branch cuts: Maitra et al. Maitra et al. (2004); Cave et al. (2004) argued that a strong ω-dependence in f_xc allows double-excitation solutions to Casida's equations, which effectively couples double excitations to single excitations. Similarly, the second ionization threshold of the He atom implies a branch cut in its kernel at the corresponding frequency Burke et al. (2005). Under limited circumstances, this frequency dependence can be estimated, and a generalization Casida (2005) has been proposed. It would be interesting to check its compliance with the conditions listed in this chapter.
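The mechanism behind this frequency dependence can be seen in a toy model (an illustration only; the 2x2 matrix elements below are made-up numbers, not taken from this chapter). If a single excitation at w_S couples to a nearby double excitation at w_D through a matrix element H_SD, folding the double out of the 2x2 problem produces an effective, frequency-dependent correction to the single-excitation block, which is exactly the kind of strong ω-dependence a kernel must carry to describe states of double-excitation character:

```python
import numpy as np

# Toy parameters (hypothetical, in eV): a single excitation, a double excitation,
# and a coupling between them.
w_S, w_D, H_SD = 6.0, 6.3, 0.25

# Full 2x2 problem: diagonalize to get the two mixed excitation energies.
H = np.array([[w_S, H_SD],
              [H_SD, w_D]])
exact = np.linalg.eigvalsh(H)

# Folding out the double excitation gives a frequency-dependent effective correction
# to the single-excitation block:  w = w_S + |H_SD|^2 / (w - w_D).
# Solving this equation self-consistently recovers both roots of the 2x2 problem.
def residual(w):
    return w_S + H_SD**2 / (w - w_D) - w

roots = []
for guess in (w_S - 0.5, w_D + 0.5):       # one starting guess per root
    w = guess
    for _ in range(200):                    # simple damped fixed-point iteration
        w = w + 0.2 * residual(w)
    roots.append(w)

print("2x2 diagonalization :", np.round(exact, 4))
print("frequency-dependent :", np.round(sorted(roots), 4))
```

A frequency-independent (adiabatic) kernel corresponds to dropping the w-dependent term, and then only a single, uncorrected root survives; the pole at w_D is what lets one kernel element describe two states.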
Excitations in the adiabatic approximation: One misleading use of linear response has been to test the quality of different approximations to the ground-state XC functional. For instance, Jacquemin et al. Jacquemin et al. (2010) calculated the excitation energies for approximate functionals within adiabatic TDDFT and compared them to experimental values. However, even within AA – using the adiabatic approximation with the exact ground-state functional – the exact excitations would not be obtained. Thus a good ground-state approximation used in adiabatic linear response will not necessarily give good excitation energies.

Scattering theory and real-time propagation: A vastly under-appreciated exact condition for TDDFT is the equivalence of time-dependent propagation and scattering theory. This can be particularly important in understanding the relation between bound and continuum states. For example, much early work in TDDFT was performed by Yabana and Bertsch Yabana and Bertsch (1996), propagating ALDA for atoms and molecules in weak electric fields. By Fourier transformation of the time-dependent dipole moment, one can extract the photoabsorption spectrum. The fruitfly of such calculations is benzene, with a large transition at about 6.5 eV, accurately given by ALDA. But closer inspection shows that the LDA ionization threshold is at about 5 eV, because the LDA XC potential is not deep enough. Thus this transition lies in the LDA continuum, yet its position and area are given reasonably well by ALDA. This is no coincidence: ALDA describes the time-dependent density and its propagation for moderate times very well. All that has changed is the choice of complete set of states onto which to project the results! By following this logic, Wasserman et al. Wasserman et al. (2003) could capture the effect of Rydberg transitions using ALDA. However, ALDA puts many bound states in the continuum due to the exponential fall-off of the KS-LDA potential (as mentioned in Section II). Thus the ionization potentials for the ALDA states are wrong, but the oscillator strength in the LDA continuum accurately approximates that of the true Rydberg transitions to the exact bound states. (However, it is not an exact condition that the KS oscillator strengths be correct, not even at the threshold where KS captures the right energy Yang et al. (2009).) Using a trick due to Fano Fano (1935), Wasserman showed Wasserman and Burke (2005) that the quantum defect, an excruciatingly sensitive measure of the Rydberg transition frequencies, could be extracted from ALDA. Ref. van Faassen and Burke (2006) shows the accuracy of this calculation for He, Be, and Ne, whereas Ref. van Faassen and Burke (2006) shows the qualitative failure of ALDA for transitions to high angular-momentum eigenstates (starting at the d orbitals). One can go further, and even consider true continuum states. In scattering theory, the continuum states of the (N+1)-particle system describe how a single electron scatters from an N-particle system. Wasserman Wasserman (2005) and van Faassen van Faassen et al. (2007) developed methods to calculate scattering amplitudes and phase shifts based on time-propagation within TDDFT. With a given approximation, one can calculate the susceptibility of an atomic anion and deduce the scattering amplitude for an incident electron Wasserman et al. (2005). Both these examples (the quantum defect and scattering) can be connected in the same framework van Faassen and Burke (2009), and they illustrate that TDDFT fundamentally concerns time-propagation.
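The dipole-to-spectrum step mentioned above is easy to sketch. The following Python snippet illustrates only the post-processing: the damped-sine "dipole signal" is synthetic, standing in for the output of a real-time TDKS propagation after a weak delta-kick of strength kick, and the dipole strength function is read off from the imaginary part of the Fourier transform of the induced dipole.

```python
import numpy as np

# Synthetic time-dependent dipole after a weak delta-kick (illustrative stand-in
# for a real-time TDKS run), in atomic units.
kick = 1e-3
t = np.arange(0.0, 2000.0, 0.1)
# two "excitations" at 0.24 and 0.31 hartree, with artificial damping for smooth peaks
d = kick * (0.8 * np.sin(0.24 * t) + 0.3 * np.sin(0.31 * t)) * np.exp(-t / 500.0)

# Dipole strength function: S(w) ~ (2 w / (pi * kick)) * Im \int dt e^{i w t} [d(t) - d(0)]
w = np.linspace(0.0, 0.6, 1200)
S = np.empty_like(w)
for i, wi in enumerate(w):
    ft = np.trapz((d - d[0]) * np.exp(1j * wi * t), t)
    S[i] = 2.0 * wi * ft.imag / (np.pi * kick)

# Peaks of S(w) sit at the excitation energies; their areas give oscillator strengths.
peaks = [w[i] for i in range(1, len(w) - 1)
         if S[i] > S[i - 1] and S[i] > S[i + 1] and S[i] > 0.2 * S.max()]
print("peak positions (hartree):", np.round(peaks, 3))
```

In a real calculation d(t) comes from propagating the TDKS equations, but the extraction of the spectrum is exactly this Fourier-transform step, which is why the quality of the spectrum reflects the quality of the time-dependent density rather than of any individual KS eigenstate.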
Present-day approximations yield promising results: simple approximations like ALDA often yield accurate time-dependent densities, but their projection onto individual Kohn–Sham eigenstates may appear far more complicated.

V Extended systems and currents

As mentioned in Section I.4, care must be taken when extending exact ground-state DFT results to extended systems. This is even more so the case for TDDFT. The first half of the RG theorem (Chapter 1) provides a one-to-one correspondence between potentials and current densities, but a surface condition must be invoked to produce the necessary correspondence with densities. Without this condition, it can readily be seen that two periodic systems with completely different physics can have the same density Maitra et al. (2003), as in Fig. 3. With hindsight, this is very suggestive that time-dependent functionals may contain a non-local dependence on the details at a surface. As such, they are more amenable to local approximations in the current rather than the density.

Figure 3: Electrons on a ring. A magnetic field is turned on and steadily increases in (b); the resulting electric field is uniform on a thin ring, accelerating electrons around the ring and producing a probability current. Note that in both (a) and (b) the densities are equal.

V.1 Gradient expansion in the current

As discussed elsewhere (Vignale chapter) and first pointed out by Dobson Dobson (1994), the frequency-dependent LDA (GK approximation) violates the translational-invariance condition of Section III.6. One can trace this failure back to the non-locality of the XC functional in TDDFT. But, by going to a current formulation, everything once again becomes reasonable. The gradient expansion in the current, for a slowly varying gas, was first derived by Vignale and Kohn Vignale and Kohn (1996), later simplified by Vignale, Ullrich, and Conti Vignale et al. (1997), and is discussed in much detail in the Vignale chapter. For our purposes, the most important point is that, by construction, VK satisfies translational invariance. The frequency dependence shuts off (it reduces to ALDA) when the motion is a rigid translation, but turns on when there is a true (non-translational) motion of the density Vignale and Kohn (1996). Any functional with memory should recover the VK gradient expansion in this limit, or justify why it does not. However, the VK approximation is only the gradient expansion, which for the ground state was found to violate sum rules, as mentioned in Section I.4. It is therefore likely that there exists something like a generalized gradient approximation that is more accurate than VK.

V.2 Polarization of solids

A decade ago, GGG Gonze et al. (1995) pointed out that the periodic density in an insulating solid in an electric field is insufficient to determine the one-body potential, in apparent violation of the Hohenberg–Kohn theorem Hohenberg and Kohn (1964). In fact, this effect appears straightforwardly in the static limit of TDCDFT, and is even estimated by calculations using the VK approximation van Faassen et al. (2003); Maitra et al. (2003). When translated back to TDDFT language, one finds a 1/q² dependence in f_xc, where q is the wavevector corresponding to the perturbation. This requires f_xc to have the same degree of nonlocality as the Hartree kernel, which is missed by any local or semilocal approximation, such as ALDA, but is built in to XX Kim and Görling (2002) or AA.
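A quick way to see the required nonlocality is to compare kernels in reciprocal space (a simple illustration; the long-range coefficient alpha and the ALDA constant below are made-up placeholders, not recommended parameters). The Hartree kernel diverges as 4π/q², a long-range-corrected model XC kernel of the form −α/q² shares that divergence, while ALDA approaches a finite constant as q → 0 and so cannot supply it.

```python
import numpy as np

q = np.logspace(-2, 1, 7)            # wavevectors (atomic units)

f_hartree = 4.0 * np.pi / q**2        # Hartree kernel in reciprocal space
alpha = 0.2                           # made-up long-range coefficient (material dependent)
f_lrc = -alpha / q**2                 # long-range-corrected model XC kernel
f_alda = np.full_like(q, -0.6)        # ALDA-like kernel: q-independent (illustrative value)

print(" q          f_H          f_LRC       f_ALDA")
for qi, fh, fl, fa in zip(q, f_hartree, f_lrc, f_alda):
    print(f"{qi:8.3f} {fh:12.3e} {fl:12.3e} {fa:9.3f}")
```

The table makes the point numerically: as q shrinks, the Hartree and long-range-corrected kernels grow without bound while the local kernel does not, which is why semilocal approximations miss the polarization and excitonic effects discussed here.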
The need for a 1/q² contribution in the optical response of solids led to much development Onida et al. (2002) of kernels that allow excitons Reining et al. (2002); Sottile et al. (2007). Since the RG theorem can be proven for solids in electric fields of nonzero q, one can extract the q → 0 (constant-field) result at the end of the calculation Maitra et al. (2003).

VI Summary

What lessons can we take away from this brief survey?

1. In the ground-state theory, the total XC energy is crucial for determining the energy of the system, and many conditions are proven for that functional. This is not so for TDDFT, for which only the time-dependent density matters. In the non-interacting system, the KS potential, and specifically its XC component, is what counts.

2. Explicit density functionals, e.g. LDA and GGA, have poor-quality potentials. Thus successes in ground-state DFT do not translate directly into successes in TDDFT. One of the greatest challenges is that the potential is a far more sensitive functional of the density than vice versa. Though we have enumerated many conditions on the XC potential, it is important to determine which conditions significantly affect the density, including those aspects of the density that are relevant to experimental measurements.

3. The adiabatic approximation satisfies many exact conditions by virtue of its lack of memory. Inclusion of memory may lead to violations of conditions that adiabatic approximations satisfy. This is reminiscent of the ground-state problem, where the gradient expansion approximation violates several key sum rules respected by the local approximation. Explicit imposition of those rules led to the development of generalized gradient approximations.

As shown in several chapters in this book, many people are presently testing the limits of our simple approximations, and very likely, these or other exact conditions will provide guidance on how to go beyond them.

We gratefully acknowledge support of DOE grant DE-FG02-08ER46496, and thank Stephan Kümmel and Mark Casida for their input, as well as Neepa Maitra for many helpful suggestions on the manuscript. For everything else, email us at [email protected].
Unlikely site for historic moment in physics: RDU [John Archibald] Wheeler struggled to mend a rift in physics between general relativity and quantum mechanics—a rift called time. One day in 1965, while waiting out a layover, Wheeler asked colleague Bryce DeWitt [at UNC Chapel Hill] to keep him company for a few hours. In the [Raleigh-Durham International] terminal, Wheeler and DeWitt wrote down an equation for a wavefunction, which Wheeler called the Einstein-Schrödinger equation, and which everyone else later called the Wheeler-DeWitt equation. (DeWitt eventually called it “that damned equation”)…. “His work on the physics of black holes had led him to suspect that time, deep down, does not exist. Now, at the airport, that damned equation left Wheeler with a nagging hunch that time couldn’t be a fundamental ingredient of reality. It had to be, as Einstein said, a stubbornly persistent illusion…. “In recent years, Stephen Hawking… has been developing an approach known as top-down cosmology…. By applying the laws of quantum mechanics to the universe as a whole, Hawking carries the torch that Wheeler lit that day back at the North Carolina airport….” — From “Haunted by His Brother, He Revolutionized Physics” by Amanda Gefter at Nautilus magazine (Jan. 16, 2014)
Quantum Mechanics I (PHYS*3230)

Code and section: PHYS*3230*01
Term: Fall 2010
Instructor: Don Sullivan

Course Information

Lecturer: D.E. Sullivan
Office: MacN 435A
Phone: 519-824-4120 x53983
Email: sullivan@uoguelph.ca

Course Materials

• D.J. Griffiths, Introduction to Quantum Mechanics (2nd ed.)

Assessment and Weight

Assignments: 35%
Midterm Exam: 20%
Quizzes: 15%
Final Exam: 30%

Course Topics

1. Background: problems with classical physics. Introduction to quantum waves, de Broglie relation. Wave function and Schrödinger equation. Statistical interpretation of the wave function.
2. Operators and expectation values. Solutions of the Schrödinger equation for one-dimensional systems: free particles and wave packets; finite and infinite potential wells; bound states and quantization; scattering states; potential barrier, reflection and transmission, tunneling; harmonic oscillator.
3. Mathematical formalism of quantum mechanics. Observables and Hermitian operators. Eigenvalue-eigenfunction problems. Uncertainty principle. Dirac notation. Operator treatment of the harmonic oscillator.
4. Three-dimensional quantum mechanics. Coulomb potential and the hydrogen atom. Angular momentum. Spin.

Course Policies

Medical Certificate: Not generally required. However, if you miss an exam (midterm or final), you should see your College Counsellor and get a note from him/her.

Course Assessment
Enrico Fermi
From Wikipedia, the free encyclopedia
"Fermi" redirects here. For other uses, see Fermi (disambiguation).

Enrico Fermi
Enrico Fermi (1901–1954)
Born 29 September 1901 (1901-09-29) Rome, Italy
Died 28 November 1954 (1954-11-28) Chicago, Illinois, United States
Citizenship Italy (1901–54), United States (1944–54)
Fields Physics
Institutions Scuola Normale Superiore, University of Göttingen, Leiden University, University of Florence, Sapienza University of Rome, Columbia University, University of Chicago
Alma mater Scuola Normale Superiore
Doctoral advisor Luigi Puccianti[1]
Spouse Laura Fermi

Enrico Fermi (Italian: [enˈri.ko ˈfeɾ.mi]; 29 September 1901 – 28 November 1954) was an Italian physicist, best known for his work on Chicago Pile-1 (the first nuclear reactor), and for his contributions to the development of quantum theory, nuclear and particle physics, and statistical mechanics. He is one of the men referred to as the "father of the atomic bomb".[4] Fermi held several patents related to the use of nuclear power, and was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity by neutron bombardment and the discovery of transuranic elements. He was widely regarded as one of the very few physicists to excel both theoretically and experimentally.

Fermi's first major contribution was to statistical mechanics. After Wolfgang Pauli announced his exclusion principle in 1925, Fermi followed with a paper in which he applied the principle to an ideal gas, employing a statistical formulation now known as Fermi–Dirac statistics. Today, particles that obey the exclusion principle are called "fermions". Later Pauli postulated the existence of an uncharged invisible particle emitted along with an electron during beta decay, to satisfy the law of conservation of energy. Fermi took up this idea, developing a model that incorporated the postulated particle, which he named the "neutrino". His theory, later referred to as Fermi's interaction and still later as weak interaction, described one of the four fundamental forces of nature. Through experiments inducing radioactivity with recently discovered neutrons, Fermi discovered that slow neutrons were more easily captured than fast ones, and developed the Fermi age equation to describe this. After bombarding thorium and uranium with slow neutrons, he concluded that he had created new elements; although he was awarded the Nobel Prize for this discovery, the new elements were subsequently revealed to be fission products.

Fermi left Italy in 1938 to escape new Italian Racial Laws that affected his Jewish wife Laura. He emigrated to the United States where he worked on the Manhattan Project during World War II. Fermi led the team that designed and built Chicago Pile-1, which went critical on 2 December 1942, demonstrating the first artificial self-sustaining nuclear chain reaction. He was on hand when the X-10 Graphite Reactor at Oak Ridge, Tennessee went critical in 1943, and when the B Reactor at the Hanford Site did so the next year. At Los Alamos he headed F Division, part of which worked on Edward Teller's thermonuclear "Super" bomb. He was present at the Trinity test on 16 July 1945, where he used his Fermi method to estimate the bomb's yield. After the war, Fermi served under J. Robert Oppenheimer on the influential General Advisory Committee, which advised the Atomic Energy Commission on nuclear matters and policy.
Following the detonation of the first Soviet fission bomb in August 1949, he strongly opposed the development of a hydrogen bomb on both moral and technical grounds. He was among the scientists who testified on Oppenheimer's behalf at the 1954 hearing that resulted in the denial of the latter's security clearance. Fermi did important work in particle physics, especially related to pions and muons, and he speculated that cosmic rays arose through material being accelerated by magnetic fields in interstellar space. Many awards, concepts, and institutions are named after Fermi, including the Enrico Fermi Award, the Enrico Fermi Institute, the Fermi National Accelerator Laboratory, the Fermi Gamma-ray Space Telescope, the Enrico Fermi Nuclear Generating Station, and the synthetic element fermium. Early life[edit] Enrico Fermi was born in Rome on 29 September 1901. He was the third child of Alberto Fermi, a division head (Capo Divisione) in the Ministry of Railways, and Ida de Gattis, an elementary school teacher.[5][6] His only sister, Maria, was two years older than he, and his brother, Giulio, was a year older. After the two boys were sent to a rural community to be wet nursed, Enrico rejoined his family in Rome when he was two and a half.[7] While he came from a Roman Catholic family and was baptized in accord with his grandparents' wishes, the family was not religious, and Fermi was an agnostic throughout his adult life. As a young boy he shared his interests with his brother Giulio. They built electric motors and played with electrical and mechanical toys.[8] Giulio died during the administration of anesthesia for an operation on a throat abscess in 1915.[9] One of Fermi's first sources for the study of physics was a book found at the local market of Campo de' Fiori in Rome. The 900-page book from 1840, Elementorum physicae mathematicae, was written in Latin by Jesuit Father Andrea Caraffa, a professor at the Collegio Romano. It covered mathematics, classical mechanics, astronomy, optics, and acoustics, to the extent that they were understood when it was written.[10][11] Fermi befriended another scientifically inclined student, Enrico Persico,[12] and the two worked together on scientific projects such as building gyroscopes and measuring the Earth's magnetic field. Fermi's interest in physics was further encouraged by his father's colleague Adolfo Amidei, who gave him several books on physics and mathematics that he read and assimilated quickly.[13] Scuola Normale Superiore in Pisa[edit] Enrico Fermi as student in Pisa Fermi graduated from high school in July 1918 and, at Amidei's urging, applied to the Scuola Normale Superiore in Pisa. Having lost one son, his parents were reluctant to let him move away from home for four years while attending the Sapienza University of Rome, but in the end they acquiesced. The school provided free lodging for students, but candidates had to take a difficult entrance exam that included an essay. The given theme was "Specific characteristics of Sounds". The 17-year-old Fermi chose to derive and solve the partial differential equation for a vibrating rod, applying Fourier analysis in the solution. The examiner, Professor Giuseppe Pittarelli from the Sapienza University of Rome, interviewed Fermi and concluded that his entry would have been commendable even for a doctoral degree. 
Fermi achieved first place in the classification of the entrance exam.[14] During his years at the Scuola Normale Superiore, Fermi teamed up with a fellow student named Franco Rasetti with whom he would indulge in light-hearted pranks and who would later become Fermi's close friend and collaborator. In Pisa, Fermi was advised by the director of the physics laboratory, Luigi Puccianti, who acknowledged that there was little that he could teach Fermi, and frequently asked Fermi to teach him something instead. Fermi's knowledge of quantum physics reached such a high level that Puccianti asked him to organize seminars on the topic.[15] During this time Fermi learned tensor calculus, a mathematical technique invented by Gregorio Ricci and Tullio Levi-Civita that was needed to demonstrate the principles of general relativity.[16] Fermi initially chose mathematics as his major, but soon switched to physics. He remained largely self-taught, studying general relativity, quantum mechanics, and atomic physics.[17] A light cone is a three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. Here, it is depicted with one spatial dimension suppressed. The time line is the vertical axis. In September 1920, Fermi was admitted to the Physics department. Since there were only three students in the department—Fermi, Rasetti, and Nello Carrara—Puccianti let them freely use the laboratory for whatever purposes they chose. Fermi decided that they should research X-ray crystallography, and the three worked to produce a Laue photograph—an X-ray photograph of a crystal.[18] During 1921, his third year at the university, Fermi published his first scientific works in the Italian journal Nuovo Cimento. The first was entitled "On the dynamics of a rigid system of electrical charges in translational motion" (Italian: Sulla dinamica di un sistema rigido di cariche elettriche in moto traslatorio). A sign of things to come was that the mass was expressed as a tensor—a mathematical construct commonly used to describe something moving and changing in three-dimensional space. In classical mechanics, mass is a scalar quantity, but in relativity it changes with velocity. The second paper was "On the electrostatics of a uniform gravitational field of electromagnetic charges and on the weight of electromagnetic charges" (Italian: Sull'elettrostatica di un campo gravitazionale uniforme e sul peso delle masse elettromagnetiche). Using general relativity, Fermi showed that a charge has a weight equal to U/c2, where U was the electrostatic energy of the system, and c is the speed of light.[17] The first paper seemed to point out a contradiction between the electrodynamic theory and the relativistic one concerning the calculation of the electromagnetic masses, as the former predicted a value of 4/3 U/c2. Fermi addressed this the next year in a paper "Concerning a contradiction between electrodynamic and the relativistic theory of electromagnetic mass" in which he showed that the apparent contradiction was a consequence of relativity. This paper was sufficiently well-regarded that it was translated into German and published in the German scientific journal Physikalische Zeitschrift in 1922.[19] That year, Fermi submitted his article "On the phenomena occurring near a world line" (Italian: Sopra i fenomeni che avvengono in vicinanza di una linea oraria) to the Italian journal I Rendiconti dell'Accademia dei Lincei. 
In this article he examined the Principle of Equivalence, and introduced the so-called "Fermi coordinates". He proved that on a world line close to the time line, space behaves as if it were a Euclidean space.[20][21] Fermi submitted his thesis, "A theorem on probability and some of its applications" (Italian: Un teorema di calcolo delle probabilità ed alcune sue applicazioni), to the Scuola Normale Superiore in July 1922, and received his laurea at the unusually young age of 21. The thesis was on X-ray diffraction images. Theoretical physics was not yet considered a discipline in Italy, and the only thesis that would have been accepted was one on experimental physics. For this reason, Italian physicists were slow in embracing the new ideas like relativity coming from Germany. Since Fermi was quite at home in the lab doing experimental work, this did not pose insurmountable problems for him.[21] Fermi–Dirac statistics. Fermi function F(\epsilon \ ) vs. energy \epsilon \ , with μ = 0.55 eV and for various temperatures in the range 50 to 375 K (−223.2 to 101.9 °C). While writing the appendix for the Italian edition of the book The Mathematical Theory of Relativity by August Kopff in 1923, Fermi was the first to point out that hidden inside the famous Einstein equation (E = mc2) was an enormous amount of nuclear potential energy to be exploited. "It does not seem possible, at least in the near future", he wrote, "to find a way to release these dreadful amounts of energy—which is all to the good because the first effect of an explosion of such a dreadful amount of energy would be to smash into smithereens the physicist who had the misfortune to find a way to do it."[21] Fermi decided to travel abroad, and spent a semester studying under Max Born at the University of Göttingen, where he met Werner Heisenberg and Pascual Jordan. Fermi then studied in Leiden with Paul Ehrenfest from September to December 1924 on a fellowship from the Rockefeller Foundation obtained through the intercession of the mathematician Vito Volterra. Here Fermi met Hendrik Lorentz and Albert Einstein, and became good friends with Samuel Goudsmit and Jan Tinbergen. From January 1925 to late 1926, Fermi taught mathematical physics and theoretical mechanics at the University of Florence, where he teamed up with Rasetti to conduct a series of experiments on the effects of magnetic fields on mercury vapour. He also participated in seminars at the Sapienza University of Rome, giving lectures on quantum mechanics and solid state physics.[22] While giving lectures of new quantum mechanics based on remarkable accuracy of predictions of Schrödinger equation, the Italian physicist would often say, "It has no business to fit so well!"[23] After Wolfgang Pauli announced his exclusion principle in 1925, Fermi responded with a paper "On the quantisation of the perfect monoatomic gas" (Italian: Sulla quantizzazione del gas perfetto monoatomico), in which he applied the exclusion principle to an ideal gas. The paper was especially notable for Fermi's statistical formulation, which describes the distribution of particles in systems of many identical particles that obey the exclusion principle. This was independently developed soon after by the British physicist Paul Dirac, who also showed how it was related to the Bose–Einstein statistics. 
Accordingly, it is now known as Fermi–Dirac statistics.[24] Following Dirac, particles that obey the exclusion principle are today called "fermions", while those that do not are called "bosons".[25] Professor in Rome[edit] Professorships in Italy were granted by competition (Italian: concorso) for a vacant chair, the applicants being rated on their publications by a committee of professors. Fermi applied for a chair of mathematical physics at the University of Cagliari on Sardinia, but was narrowly passed over in favour of Giovanni Giorgi.[26] In 1926, at the age of 24, he applied for a professorship at the Sapienza University of Rome. This was a new chair, one of the first three in theoretical physics in Italy, that had been created by the Minister of Education at the urging of Professor Orso Mario Corbino, who was the University's professor of experimental physics, the Director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy.[27] The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli,[28] and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant.[29] They were soon nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located.[30] Fermi married Laura Capon, a science student at the University, on 19 July 1928.[31] They had two children: Nella, born in January 1931, and Giulio, born in February 1936.[32] On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German National Socialism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work.[33][34][35][36] During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics (Italian: Introduzione alla fisica atomica), which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible.[37] Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research.[37][38] A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe,[39] who came to Rome as a Rockefeller Foundation fellow, and collaborated with Fermi on a 1932 paper "On the Interaction between Two Electrons" (German: Über die Wechselwirkung von Zwei Elektronen).[37] At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. 
Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino".[40][41][42] His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers".[41] Thus Fermi saw the theory published in Italian and German before it was published in English.[29] In the introduction to the 1968 English translation, physicist Fred L. Wilson noted that: Fermi's theory, aside from bolstering Pauli's proposal of the neutrino, has a special significance in the history of modern physics. One must remember that only the naturally occurring β emitters were known at the time the theory was proposed. Later when positron decay was discovered, the process was easily incorporated within Fermi's original framework. On the basis of his theory, the capture of an orbital electron by a nucleus was predicted and eventually observed. With time much experimental data has accumulated. Although peculiarities have been observed many times in β decay, Fermi's theory always has been equal to the challenge. The consequences of the Fermi theory are vast. For example, β spectroscopy was established as a powerful tool for the study of nuclear structure. But perhaps the most influential aspect of this work of Fermi is that his particular form of the β interaction established a pattern which has been appropriate for the study of other types of interactions. It was the first successful theory of the creation and annihilation of material particles. Previously, only photons had been known to be created and destroyed.[42] In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them.[43][44] By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932.[45] In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have.[46][47] Enrico Fermi between Franco Rasetti (left) and Emilio Segrè in academic dress Fermi had the idea to resort to replacing the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi.[48][49] This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. 
He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements.[50] Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934.[49][51][52] Beta decay. A neutron decays into a proton, and an electron is emitted. In order for the total energy in the system to remain the same, Pauli and Fermi postulated that a neutrino (\bar{\nu}_e) was also emitted The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called hesperium and ausonium.[53][47] The chemist Ida Noddack criticised this work, suggesting that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium, and its claim to have discovered masurium (technetium) was disputed. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested.[54][53] The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than a marble table top. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver compared with when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin. Those in wood similarly explained the difference between the wooden and the marble table tops. This was confirmed by repeating the effect with water. He concluded that collisions with hydrogen atoms slowed the neutrons.[55][47] The lower the atomic number of the nucleus it collides with, the more energy a neutron loses per collision, and therefore the less collisions that are required to slow a neutron down by a given amount.[56] Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation.[55][47] In 1938 Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons".[57] After Fermi received the prize in Stockholm, he did not return home to Italy, but rather continued on to New York City along with his family, where they applied for permanent residency. 
The decision to move to America and become US citizens was primarily a result of the racial laws in Italy.[33] Manhattan Project[edit] Soon after Fermi's arrival in New York City on 2 January 1939,[58] he was offered five different chairs, and chose to work at Columbia University,[59] where he had already given summer lectures in 1936.[60] He received the news that in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons,[61] which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939.[62][63] The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Enrico Fermi, but Fermi later gave the credit to Lamb:[64] Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron.[54] For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech.[66][64] Illustration of Chicago Pile-1, the first nuclear reactor to achieve a self-sustaining chain reaction. Designed by Fermi. it consisted of uranium and uranium oxide in a cubic lattice embedded in graphite. The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, Fermi was a member of the experimental team at Columbia University which conducted the first nuclear fission experiment in the United States, in the basement of Pupin Hall; the other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack.[67] The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of The George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations.[68] Few weeks after French scientists Hans von Halban, Lew Kowarski and Frédéric Joliot-Curie,[69] Fermi and Anderson demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed.[70][71] Leó Szilárd obtained 200 kilograms (440 lb) of uranium oxide from Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct experiments with fission on a much larger scale.[72] Fermi and Szilárd collaborated on a design of a device to achieve a self-sustaining nuclear reaction—a nuclear reactor. Due to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that uranium oxide could be used in the form of blocks, with graphite as a moderator instead of water. 
This would reduce the rate of capture of the neutrons, and make it theoretically possible to achieve a self-sustaining chain reaction. Szilárd then came up with what proved to be a workable design, a pile of uranium oxide blocks surrounded by graphite bricks.[73] Szilárd, Anderson and Fermi jointly published a paper on "Neutron Production in Uranium" [72] but their work habits and personalities were different, and Fermi had trouble working with Szilárd.[74] Fermi was the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia.[75] In August 1939, three Hungarian physicists—Szilárd, Eugene Wigner and Edward Teller—prepared the Einstein–Szilárd letter, which they persuaded Einstein to sign, warning President Franklin D. Roosevelt of the probability that Germany was planning to build an atomic bomb. Because of the German invasion of Poland on 1 September, it was October before they could arrange for the letter to be personally delivered. Roosevelt was sufficiently concerned that he assembled the S-1 Uranium Committee to investigate the matter.[76] Fermi's ID badge photo from Los Alamos The S-1 Uranium Committee provided money for Fermi to buy graphite,[77] and he built a pile of graphite bricks on the seventh floor of the Pupin laboratory.[78] By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in the Schermerhorn Hall at Columbia.[79] When the S-1 Uranium Committee next met on 18 December 1941, there was a heightened sense of urgency in the wake of the attack on Pearl Harbor and the subsequent United States entry into World War II. While most of the effort thus far had been directed at three different processes for producing enriched uranium, S-1 Uranium Committee member Arthur Compton determined that plutonium was a feasible alternative which could be mass-produced in nuclear reactors by the end of 1944.[80] To achieve this, he decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there.[81] Given the number of unknown factors involved in creating a self-sustaining nuclear reaction, it seemed inadvisable to do so in a densely populated area. Compton arranged with Colonel Kenneth Nichols, the head of the Army's Manhattan District, for land to be acquired in the Argonne Forest about 20 miles (32 km) from Chicago, and Stone & Webster was contracted to develop the site. This work was halted by an industrial dispute. Fermi then persuaded Compton that he could build a reactor in the squash court under Stagg Field at the University of Chicago. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December.[82] The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned.[83] This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, every calculation meticulously done.[82] When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. I picked up the phone and called Conant. 
He was reached at the President's office at Harvard University. "Jim," I said, "you'll be interested to know that the Italian navigator has just landed in the new world." Then, half apologetically, because I had led the S-l Committee to believe that it would be another week or more before the pile could be completed, I added, "the earth was not as large as he had estimated, and he arrived at the new world sooner than he had expected." "Is that so," was Conant's excited response. "Were the natives friendly?" "Everyone landed safe and happy."[84] Three men talking. The one on the left is wearing a tie and leans against a wall. He stands head and shoulders above the other two. The one in the centre is smiling, and wearing an open-necked shirt. The one on the right wears a shirt and lab coat. All three have photo ID passes. Fermi (centre), with Ernest O. Lawrence (left) and Isidor Isaac Rabi (right) To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne site, where Fermi directed research on reactors and other fundamental sciences, revelling in the myriad of research opportunities that the reactor provided by an abundance of neutrons.[85] The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944.[86] Just in case something went wrong, Fermi was on hand at Oak Ridge to witness the air-cooled X-10 Graphite Reactor go critical on 4 November 1943. The technicians woke him early so that he could see it happen.[87] Getting X-10 operational was another milestone in the plutonium project. It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium.[88] Fermi became an American citizen in July 1944, the earliest date the law allowed.[89] In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory, and built by DuPont, but it was much larger, and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared to be well, but around 03:00, the power level started to drop and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135, a fission product with a half-life of 9.2 hours. Fortunately, DuPont had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added an additional 504 tubes to fill in the corners. 
The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that by loading all 2,004 tubes, the reactor could reach the required power level and efficiently produce plutonium.[90][91] In mid-1944, Robert Oppenheimer persuaded Fermi to join his Project Y in Los Alamos, New Mexico.[92] Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division consisted of four branches: the F-1 Super and General Theory under Teller, which was responsible for the development of a thermonuclear "Super" bomb; the F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" research reactor; F-3 Super Experimentation under Egon Bretscher; and the F-4 Fission Studies under Anderson.[93] Fermi observed the Trinity test on 16 July 1945, and conducted an experiment dropping strips of paper to estimate the bomb's yield. He simply measured how far they were blown by the explosion using his paces, and came up with a figure of ten kilotons of TNT; the actual yield was about 18.6 kilotons.[94] Along with Oppenheimer, Compton and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs be used without warning against an industrial target.[95] Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would scare people into not starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists.[96] Post-war work[edit] Fermi became a professor at the University of Chicago on 1 July 1945,[97] although he did not depart the Los Alamos Laboratory with his family until 31 December 1945.[98] The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project.[99] The short distance between Chicago and Argonne allowed Fermi to work at both places. At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall.[100] He also discussed theoretical physics with Maria Mayer, helping her develop insights into spin–orbit coupling that would lead to her receiving the Nobel Prize.[101] The Manhattan Project was replaced by Atomic Energy Commission (AEC) on 1 January 1947.[102] Fermi served on the AEC General Advisory Committee, an influential scientific committee chaired by Robert Oppenheimer.[103] He also liked to spend a few weeks of each year at the Los Alamos National Laboratory,[104] where he collaborated with Nicholas Metropolis,[105] and with John von Neumann on Rayleigh–Taylor instability, the science of what occurs at the border between two fluids of different densities.[106] Following the detonation of the first Soviet fission bomb in August 1949, Fermi, along with Isidor Rabi, wrote a strongly worded report for the committee, opposing the development of a hydrogen bomb on moral and technical grounds.[107] Nonetheless, Fermi continued to participate in work on the hydrogen bomb at Los Alamos as a consultant. 
Along with Stanislaw Ulam, he calculated that not only would the amount of tritium needed for Teller's model of a thermonuclear weapon be prohibitive, but a fusion reaction could still not be assured to propagate even with this large quantity of tritium.[108] Fermi was among the scientists who testified on Oppenheimer's behalf at the Oppenheimer security hearing in 1954 that resulted in denial of Oppenheimer's security clearance.[109] In his later years, Fermi continued teaching at the University of Chicago. His PhD students in the post-war period included Owen Chamberlain, Geoffrey Chew, Jerome Friedman, Marvin Goldberger, Tsung-Dao Lee, Arthur Rosenfeld and Sam Treiman.[1][110] Jack Steinberger was a graduate student.[111] Fermi conducted important research in particle physics, especially related to pions and muons. He made the first predictions of pion-nucleon resonance,[105] relying on statistical methods, since he reasoned that exact answers were not required when the theory was wrong anyway.[112] In a paper co-authored with Chen Ning Yang, he speculated that pions might actually be composite particles.[113] The idea was elaborated by Shoichi Sakata. It has since been supplanted by the quark model, in which the pion is made up of quarks, which completed Fermi's model, and vindicated his approach.[114] Fermi wrote a paper "On the Origin of Cosmic Radiation" in which he proposed that cosmic rays arose through material being accelerated by magnetic fields in interstellar space, which led to a difference of opinion with Teller.[112] Fermi examined the issues surrounding magnetic fields in the arms of a spiral galaxy.[115] He mused about what is now referred to as the "Fermi paradox": the contradiction between the presumed probability of the existence of extraterrestrial life and the fact that contact has not been made.[116] Fermi died at age 53 of stomach cancer in his home in Chicago,[4] and was interred at Oak Woods Cemetery.[118] A commemorative plaque remembers him in the Basilica of Santa Croce, Florence, a church also known as "Temple of Italian Glories", for the many burials of artists, scientists and prominent figures in Italian history.[119] Impact and legacy[edit] As a person, Fermi seemed simplicity itself. He was extraordinarily vigorous and loved games and sport. On such occasions his ambitious nature became apparent. He played tennis with considerable ferocity and when climbing mountains acted rather as a guide. One might have called him a benevolent dictator. I remember once at the top of a mountain Fermi got up and said: "Well, it is two minutes to two, let's all leave at two o'clock"; and of course, everybody got up faithfully and obediently. This leadership and self-assurance gave Fermi the name of "The Pope" whose pronouncements were infallible in physics. He once said: "I can calculate anything in physics within a factor 2 on a few sheets: to get the numerical factor in front of the formula right may well take a physicist a year to calculate, but I am not interested in that." His leadership could go so far that it was a danger to the independence of the person working with him. I recollect once, at a party at his house when my wife cut the bread, Fermi came along and said he had a different philosophy on bread-cutting and took the knife out of my wife's hand and proceeded with the job because he was convinced that his own method was superior. But all this did not offend at all, but rather charmed everybody into liking Fermi. 
He had very few interests outside physics and when he once heard me play on Teller's piano he confessed that his interest in music was restricted to simple tunes. Egon Bretscher[3] Fermi received numerous awards in recognition of his achievements, including the Matteucci Medal in 1926, the Nobel Prize for Physics in 1938, the Hughes Medal in 1942, the Franklin Medal in 1947, and the Rumford Prize in 1953. He was awarded the Medal for Merit in 1946 for his contribution to the Manhattan Project.[120] In 1999, Time named Fermi on its list of the top 100 persons of the twentieth century.[121] Fermi was widely regarded as an unusual case of a 20th-century physicist who excelled both theoretically and experimentally. The historian of physics, C. P. Snow, wrote that "if Fermi had been born a few years earlier, one could well imagine him discovering Rutherford's atomic nucleus, and then developing Bohr's theory of the hydrogen atom. If this sounds like hyperbole, anything about Fermi is likely to sound like hyperbole".[122] Fermi was known as an inspiring teacher, and was noted for his attention to detail, simplicity, and careful preparation of his lectures.[123] Later, his lecture notes were transcribed into books.[124] His papers and notebooks are today in the University of Chicago.[125] Victor Weisskopf noted how Fermi "always managed to find the simplest and most direct approach, with the minimum of complication and sophistication."[126] Fermi's ability and success stemmed as much from his appraisal of the art of the possible, as from his innate skill and intelligence. He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people. Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the "Fermi method", and is widely taught.[127] Fermi was fond of pointing out that Alessandro Volta, working in his laboratory, could have had no idea where the study of electricity would lead.[128] Fermi is generally remembered for his work on nuclear power and nuclear weapons, especially the creation of the first nuclear reactor, and the development of the first atomic and hydrogen bombs. His scientific work has stood the test of time. This includes his theory of beta decay, his work with non-linear systems, his discovery of the effects of slow neutrons, his study of pion-nucleon collisions, and his Fermi–Dirac statistics. His speculation that a pion was not a fundamental particle pointed the way towards the study of quarks and leptons.[129] Things named after Fermi[edit] The sign at Enrico Fermi Street in Rome Many things have been named in Fermi's honour. These include the Fermilab particle accelerator and physics lab in Batavia, Illinois, which was renamed in his honour in 1974,[130] and the Fermi Gamma-ray Space Telescope, which was named after him in 2008, in recognition of his work on cosmic rays.[131] Three nuclear reactor installations have been named after him: the Fermi 1 and Fermi 2 nuclear power plants in Newport, Michigan, the Enrico Fermi Nuclear Power Plant at Trino Vercellese in Italy,[132] and the RA-1 Enrico Fermi research reactor in Argentina.[133] A synthetic element isolated from the debris of the 1952 Ivy Mike nuclear test was named fermium, in honour of Fermi's contributions to the scientific community. 
It follows the element einsteinium, which was discovered with it.[134][135] Since 1956, the United States Atomic Energy Commission has named its highest honour, the Fermi Award, after him. Recipients of the award include well-known scientists such as Otto Hahn, Robert Oppenheimer, Edward Teller and Hans Bethe.[136]

Bibliography

• Introduzione alla Fisica Atomica (in Italian). Bologna: N. Zanichelli. 1928. OCLC 9653646.
• Fisica per i Licei (in Italian). Bologna: N. Zanichelli. 1929. OCLC 9653646.
• Molecole e cristalli (in Italian). Bologna: N. Zanichelli. 1934. OCLC 19918218.
• Thermodynamics. New York: Prentice Hall. 1937. OCLC 2379038.
• Fisica per Istituti Tecnici (in Italian). Bologna: N. Zanichelli. 1938.
• Fisica per Licei Scientifici (in Italian). Bologna: N. Zanichelli. 1938.
• Elementary Particles. New Haven: Yale University Press. 1951. OCLC 362513.

For a full list of his papers, see pages 75–78 in [3].

References

1. ^ a b c Enrico Fermi at the Mathematics Genealogy Project
2. ^ Crouch, T. (2013). "Harold Melvin Agnew (1921–2013) Physicist and Manhattan Project veteran". Nature 503 (7474): 40. doi:10.1038/503040a. PMID 24201273.
3. ^ a b c Bretscher, E.; Cockcroft, J. D. (1955). "Enrico Fermi. 1901–1954". Biographical Memoirs of Fellows of the Royal Society 1: 68. doi:10.1098/rsbm.1955.0006. JSTOR 769243.
4. ^ a b "Enrico Fermi Dead at 53; Architect of Atomic Bomb". New York Times. 29 November 1954. Retrieved 21 January 2013.
5. ^ Segrè 1970, pp. 3–4, 8.
6. ^ Amaldi 2001, p. 23.
7. ^ Cooper 1999, p. 19.
8. ^ Segrè 1970, pp. 5–6.
9. ^ Fermi 1954, pp. 15–16.
10. ^ Segrè 1970, p. 7.
11. ^ Bonolis 2001, p. 315.
12. ^ Amaldi 2001, p. 24.
13. ^ Segrè 1970, pp. 8–10.
14. ^ Segrè 1970, pp. 11–13.
15. ^ Segrè 1970, pp. 15–18.
16. ^ Bonolis 2001, p. 320.
17. ^ a b Bonolis 2001, pp. 317–319.
18. ^ Segrè 1970, p. 20.
19. ^ "Über einen Widerspruch zwischen der elektrodynamischen und relativistischen Theorie der elektromagnetischen Masse". Physikalische Zeitschrift (in German) 23: 340–344. Retrieved 17 January 2013.
20. ^ Bertotti 2001, p. 115.
21. ^ a b c Bonolis 2001, p. 321.
22. ^ Bonolis 2001, pp. 321–324.
23. ^ Hey & Walters 2003, p. 61.
24. ^ Bonolis 2001, pp. 329–330.
25. ^ Cooper 1999, p. 31.
26. ^ Fermi 1954, pp. 37–38.
27. ^ Segrè 1970, p. 45.
28. ^ Fermi 1954, p. 38.
29. ^ a b Alison 1957, p. 127.
30. ^ "Enrico Fermi e i ragazzi di via Panisperna" (in Italian). University of Rome. Retrieved 20 January 2013.
31. ^ Segrè 1970, p. 61.
32. ^ Cooper 1999, pp. 38–39.
33. ^ a b Alison 1957, p. 130.
34. ^ "About Enrico Fermi". University of Chicago. Retrieved 20 January 2013.
35. ^ Mieli, Paolo (2 October 2001). "Così Fermi scoprì la natura vessatoria del fascismo". Corriere della Sera (in Italian). Retrieved 20 January 2013.
36. ^ Direzione generale per gli archivi (2005, Ministero per i beni culturali e ambientali). "Reale accademia d'Italia: inventario dell'archivio" (in Italian). Rome. p. xxxix. Retrieved 20 January 2013.
37. ^ a b c Bonolis 2001, pp. 333–335.
38. ^ Amaldi 2001, p. 38.
39. ^ Fermi 1954, p. 217.
40. ^ Amaldi 2001, pp. 50–51.
41. ^ a b Bonolis 2001, p. 346.
42. ^ a b Fermi, E. (1934). "Fermi's Theory of Beta Decay (English translation by Fred L. Wilson, 1968)". American Journal of Physics. Retrieved 20 January 2013.
43. ^ Joliot-Curie, Irène; Joliot, Frédéric (15 January 1934). "Un nouveau type de radioactivité" [A new type of radioactivity]. Comptes rendus hebdomadaires des séances de l'Académie des Sciences (in French) 198 (January–June 1934): 254–256.
44. ^ Joliot, Frédéric; Joliot-Curie, Irène (1934). "Artificial Production of a New Kind of Radio-Element". Nature 133 (3354): 201–202. Bibcode:1934Natur.133..201J. doi:10.1038/133201a0.
45. ^ Amaldi 2001a, pp. 152–153.
46. ^ Bonolis 2001, pp. 347–351.
47. ^ a b c d Amaldi 2001a, pp. 153–156.
48. ^ Segrè 1970, p. 73.
49. ^ a b De Gregorio, Alberto G. (2005). "Neutron physics in the early 1930s". Historical Studies in the Physical and Biological Sciences 35 (2): 293–340. arXiv:physics/0510044. doi:10.1525/hsps.2005.35.2.293.
50. ^ Guerra, Francesco; Robotti, Nadia (December 2009). "Enrico Fermi's Discovery of Neutron-Induced Artificial Radioactivity: The Influence of His Theory of Beta Decay". Physics in Perspective 11 (4): 379–404. Bibcode:2009PhP....11..379G. doi:10.1007/s00016-008-0415-1.
51. ^ Fermi, Enrico (25 March 1934). "Radioattività indotta da bombardamento di neutroni". La Ricerca scientifica (in Italian) 1 (5): 283.
52. ^ Fermi, E.; Amaldi, E.; d'Agostino, O.; Rasetti, F.; Segre, E. (1934). "Artificial Radioactivity Produced by Neutron Bombardment". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 146 (857): 483. Bibcode:1934RSPSA.146..483F. doi:10.1098/rspa.1934.0168.
53. ^ a b Bonolis 2001, pp. 347–349.
54. ^ a b Amaldi 2001a, pp. 161–162.
55. ^ a b Bonolis 2001, pp. 347–352.
56. ^ "A Few Good Moderators: The Numbers". The Energy From Thorium Foundation. Retrieved 24 September 2013.
57. ^ Cooper 1999, p. 51.
58. ^ Cooper 1999, p. 52.
59. ^ Persico 2001, p. 40.
60. ^ Bonolis 2001, p. 352.
61. ^ Hahn, O.; Strassmann, F. (1939). "Über den Nachweis und das Verhalten der bei der Bestrahlung des Urans mittels Neutronen entstehenden Erdalkalimetalle" [On the detection and characteristics of the alkaline earth metals formed by irradiation of uranium with neutrons]. Naturwissenschaften (in German) 27 (1): 11–15. Bibcode:1939NW.....27...11H. doi:10.1007/BF01488241.
64. ^ a b Rhodes 1986, p. 267.
65. ^ Allison, S.K.; Segrè, E.; Anderson, H.L. (1955). "Enrico Fermi 1901–1954". Physics Today 8 (1): 9. Bibcode:1955PhT.....8a...9A. doi:10.1063/1.3061909.
66. ^ Fermi, Enrico (12 December 1938). "Artificial radioactivity produced by neutron bombardment (Nobel Lecture)". Retrieved 19 October 2013.
67. ^ Anderson, H.L.; Booth, E.; Dunning, J.; Fermi, E.; Glasoe, G.; Slack, F. (16 February 1939). "The Fission of Uranium". Physical Review 55 (5): 511–512. Bibcode:1939PhRv...55..511A. doi:10.1103/PhysRev.55.511.2.
68. ^ Rhodes 1986, pp. 269–270.
69. ^ Von Halban, H.; Joliot, F.; Kowarski, L. (22 April 1939). "Number of Neutrons Liberated in the Nuclear Fission of Uranium". Nature 143 (3625): 680. Bibcode:1939Natur.143..680V. doi:10.1038/143680a0.
70. ^ Anderson, H.; Fermi, E.; Hanstein, H. (16 March 1939). "Production of Neutrons in Uranium Bombarded by Neutrons". Physical Review 55 (8): 797–798. Bibcode:1939PhRv...55..797A. doi:10.1103/PhysRev.55.797.2.
71. ^ Anderson, H.L. (April 1973). "Early Days of Chain Reaction". Bulletin of the Atomic Scientists (Educational Foundation for Nuclear Science, Inc.).
72. ^ a b Anderson, H.; Fermi, E.; Szilárd, L. (1 August 1939). "Neutron Production and Absorption in Uranium". Physical Review 56 (3): 284–286. Bibcode:1939PhRv...56..284A. doi:10.1103/PhysRev.56.284.
73. ^ Salvetti 2001, pp. 186–188.
74. ^ Bonolis 2001, pp. 356–357.
75. ^ Salvetti 2001, p. 185.
76. ^ Salvetti 2001, pp. 188–189.
77. ^ Rhodes 1986, pp. 314–317.
78. ^ Salvetti 2001, p. 190.
79. ^ Salvetti 2001, p. 195.
80. ^ Salvetti 2001, pp. 194–196.
81. ^ Rhodes 1986, pp. 399–400.
82. ^ a b Salvetti 2001, pp. 198–202.
83. ^ Fermi, E. (1946). "The Development of the First Chain Reaction Pile". Proceedings of the American Philosophical Society 90: 20–24. JSTOR 3301034.
84. ^ Compton 1956, p. 144.
85. ^ Bonolis 2001, p. 366.
86. ^ Hewlett & Anderson 1962, p. 207.
87. ^ Hewlett & Anderson 1962, pp. 208–211.
88. ^ Jones 1985, p. 205.
89. ^ Segrè 1970, p. 104.
90. ^ Hewlett & Anderson 1962, pp. 304–307.
91. ^ Jones 1985, pp. 220–223.
92. ^ Bonolis 2001, pp. 368–369.
93. ^ Hawkins 1961, p. 213.
94. ^ Rhodes 1986, pp. 674–677.
95. ^ Jones 1985, pp. 531–532.
96. ^ Fermi 1954, pp. 244–245.
97. ^ Segrè 1970, p. 157.
98. ^ Segrè 1970, p. 167.
99. ^ Holl, Hewlett & Harris 1997, pp. xix–xx.
100. ^ Segrè 1970, p. 171.
101. ^ Segrè 1970, p. 172.
102. ^ Hewlett & Anderson 1962, p. 643.
103. ^ Hewlett & Anderson 1962, p. 648.
104. ^ Segrè 1970, p. 175.
105. ^ a b Segrè 1970, p. 179.
106. ^ Bonolis 2001, p. 381.
107. ^ Hewlett & Duncan 1969, pp. 380–385.
108. ^ Hewlett & Duncan 1969, pp. 527–530.
109. ^ Cooper 1999, pp. 102–103.
110. ^ "Jerome I. Friedman – Autobiography". The Nobel Foundation. 1990. Retrieved 16 March 2013.
111. ^ "Jack Steinberger – Biographical". Nobel Foundation. Retrieved 15 August 2013.
112. ^ a b Bonolis 2001, pp. 374–379.
113. ^ Fermi, E.; Yang, C. (1949). "Are Mesons Elementary Particles?". Physical Review 76 (12): 1739. doi:10.1103/PhysRev.76.1739.
114. ^ Jacob & Maiani 2001, pp. 254–258.
115. ^ Bonolis 2001, p. 386.
116. ^ Jones 1985a, pp. 1–3.
117. ^ Fermi 2004, p. 142.
118. ^ Hucke & Bielski 1999, pp. 147, 150.
119. ^ Photo: Enrico Fermi in Santa Croce, Florence
120. ^ Alison 1957, pp. 135–136.
121. ^ "Time 100 Persons of the Century". Time. 6 June 1999. Retrieved 2 March 2013.
122. ^ Snow 1981, p. 79.
123. ^ Ricci 2001, pp. 297–302.
124. ^ Ricci 2001, p. 286.
125. ^ "Enrico Fermi Collection". University of Chicago. Retrieved 22 January 2013.
126. ^ Salvini 2001, p. 5.
127. ^ Von Baeyer 1993, pp. 3–8.
128. ^ Fermi 1954, p. 242.
129. ^ Salvini 2001, p. 17.
130. ^ "About Fermilab – History". Fermilab. Retrieved 21 January 2013.
131. ^ "First Light for the Fermi Space Telescope". National Aeronautics and Space Administration. Retrieved 21 January 2013.
132. ^ "Nuclear Power in Italy". World Nuclear Association. Retrieved 21 January 2013.
133. ^ "Report of the National Atomic Energy Commission of Argentina (CNEA)". CNEA. November 2004. Retrieved 21 January 2013.
134. ^ Seaborg 1978, p. 2.
135. ^ Hoff 1978, pp. 39–48.
136. ^ "The Enrico Fermi Award". United States Department of Energy. Retrieved 25 August 2010.
Hydrogen

General properties
Description: colorless gas with violet glow
Name, symbol, number: hydrogen, H, 1
Pronunciation: /ˈhaɪdrɵdʒɨn/,[1] HYE-dro-jin
Element category: nonmetal
Group, period, block: 1, 1, s
Standard atomic weight: 1.00794(7) g·mol−1
Electron configuration: 1s1
Electrons per shell: 1

Physical properties
Color: colorless
Phase: gas
Density (0 °C, 101.325 kPa): 0.08988 g/L
Liquid density at m.p.: 0.07 g·cm−3 (0.0763 solid)[2]
Melting point: 14.01 K, −259.14 °C, −434.45 °F
Boiling point: 20.28 K, −252.87 °C, −423.17 °F
Critical point: 32.97 K, 1.293 MPa
Heat of fusion (H2): 0.117 kJ·mol−1
Heat of vaporization (H2): 0.904 kJ·mol−1
Specific heat capacity (25 °C, H2): 28.836 J·mol−1·K−1
Vapor pressure: P/Pa 1, 10, 100, 1 k, 10 k, 100 k at T/K 15, 20

Atomic properties
Oxidation states: 1, −1 (amphoteric oxide)
Electronegativity: 2.20 (Pauling scale)
Ionization energies: 1st: 1312.0 kJ·mol−1
Covalent radius: 31±5 pm
Van der Waals radius: 120 pm
Crystal structure: hexagonal
Magnetic ordering: diamagnetic[3]
Thermal conductivity (300 K): 0.1805 W·m−1·K−1
CAS registry number: 1333-74-0

Most stable isotopes (main article: Isotopes of hydrogen)
1H: natural abundance 99.985%; stable with 0 neutrons
2H: natural abundance 0.015%; stable with 1 neutron
3H: trace; half-life 12.32 y; β decay (0.01861 MeV) to 3He

Hydrogen is the chemical element with atomic number 1. It is represented by the symbol H. With an atomic weight of 1.00794 u, hydrogen is the lightest and most abundant chemical element, constituting roughly 75% of the Universe's elemental mass.[4] Stars in the main sequence are mainly composed of hydrogen in its plasma state. Naturally occurring elemental hydrogen is relatively rare on Earth.

The most common isotope of hydrogen is protium (a name that is rarely used; symbol 1H), with a single proton and no neutrons. In ionic compounds hydrogen can take a negative charge (an anion known as a hydride, written as H−) or occur as the positively charged species H+. The latter cation is written as though composed of a bare proton, but in reality hydrogen cations in ionic compounds always occur as more complex species. Hydrogen forms compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, with many reactions exchanging protons between soluble molecules. As the simplest atom known, the hydrogen atom has been of theoretical use. For example, as the only neutral atom with an analytic solution to the Schrödinger equation, the study of the energetics and bonding of the hydrogen atom played a key role in the development of quantum mechanics.

Hydrogen gas (now known to be H2) was first artificially produced in the early 16th century, via the mixing of metals with strong acids. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[5] and that it produces water when burned, a property which later gave it its name: "hydrogen" in Greek means "water-former". At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly combustible diatomic gas with the molecular formula H2.
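As a brief worked illustration of the analytic solution mentioned above (a standard textbook result, included here for orientation rather than drawn from this article), the bound-state energies of the hydrogen atom are

E_n = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots

so the ground state lies at about −13.6 eV, and ionizing it requires a photon of wavelength roughly

\lambda = \frac{hc}{|E_1|} \approx \frac{1240\ \mathrm{eV\,nm}}{13.6\ \mathrm{eV}} \approx 91\ \mathrm{nm},

which is in the far ultraviolet.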
Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive hydrogen production methods such as the electrolysis of water.[6] Most hydrogen is employed near its production site, with the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals,[7] complicating the design of pipelines and storage tanks.[8]

The Space Shuttle Main Engine burns hydrogen with oxygen, producing a nearly invisible flame at full thrust.

Hydrogen gas (dihydrogen[9]) is highly flammable and will burn in air at a very wide range of concentrations, between 4% and 75% by volume.[10] The enthalpy of combustion for hydrogen is −286 kJ/mol:[11]

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)[note 1]

Hydrogen gas forms explosive mixtures with air in the concentration range 4–74% (volume per cent of hydrogen in air) and with chlorine in the range 5–95%. The mixtures spontaneously detonate by spark, heat or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).[12] Pure hydrogen-oxygen flames emit ultraviolet light and are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle main engine compared to the highly visible plume of a Space Shuttle Solid Rocket Booster. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. The destruction of the Hindenburg airship was an infamous example of hydrogen combustion; the cause is debated, but the visible flames were the result of combustible materials in the ship's skin.[13] Because hydrogen is buoyant in air, hydrogen flames tend to ascend rapidly and cause less damage than hydrocarbon fires. Two-thirds of the Hindenburg passengers survived the fire, and many deaths were instead the result of falls or burning diesel fuel.[14]

Electron energy levels

Depiction of a hydrogen atom showing the diameter as about twice the Bohr model radius (image not to scale).

The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 92 nm wavelength.[16] The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the electromagnetic force attracts electrons and protons to one another, while planets and celestial objects are attracted to each other by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.[17] A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation or the equivalent Feynman path integral formulation to calculate the probability density of the electron around the proton.[18]

Elemental molecular forms
First tracks observed in a liquid hydrogen bubble chamber at the Bevatron

There exist two different spin isomers of hydrogen diatomic molecules, which differ by the relative spin of their nuclei.[19] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (½+½); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (½−½). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form".[20] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but since the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in Spin isomers of hydrogen.[21] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties.[22]

The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly.[23] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel[24] compounds, are used during hydrogen cooling.[25]

A molecular form called protonated molecular hydrogen, or H3+, is found in the interstellar medium (ISM), where it is generated by ionization of molecular hydrogen by cosmic rays. It has also been observed in the upper atmosphere of the planet Jupiter. This molecule is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[26] Neutral triatomic hydrogen H3 can only exist in an excited form and is unstable.[27]

Covalent and organic compounds

While H2 is not very reactive under standard conditions, it does form compounds with most elements. Millions of hydrocarbons are known, but they are not formed by the direct reaction of elemental hydrogen and carbon. Hydrogen can form compounds with elements that are more electronegative, such as the halogens (e.g., F, Cl, Br, I); in these compounds hydrogen takes on a partial positive charge.[28] When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of strong noncovalent bonding called hydrogen bonding, which is critical to the stability of many biological molecules.[29][30] Hydrogen also forms compounds with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge. These compounds are often known as hydrides.[31]

Hydrogen forms a vast array of compounds with carbon.
Because of their general association with living things, these compounds came to be called organic compounds;[32] the study of their properties is known as organic chemistry,[33] and their study in the context of living organisms is known as biochemistry.[34] By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and since it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry.[32]

Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group I and II salt-like hydrides, was demonstrated by Moers in 1920 with the electrolysis of molten lithium hydride (LiH), which produced a stoichiometric quantity of hydrogen at the anode.[36] For hydrides other than those of group I and II metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception among the group II hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over 100 binary borane hydrides known, but only one binary aluminium hydride.[37] Binary indium hydride has not yet been identified, although larger complexes exist.[38]

Protons and acids

Oxidation of hydrogen, in the sense of removing its electron, formally gives H+, containing no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to the discussion of acids. Under the Brønsted–Lowry theory, acids are proton donors, while bases are proton acceptors. A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them. However, the term "proton" is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species.

To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are more realistically thought to be organized into clusters that form species closer to H9O4+.[39] Other oxonium ions are found when water is in solution with other solvents.[40] Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the triatomic hydrogen cation.[41]

Isotopes

Protium, the most common isotope of hydrogen, has one proton and one electron.
Unique among all stable isotopes, it has no neutrons (see diproton for a discussion of why others do not exist). Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[42][43]

• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Essentially all deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since that time. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy.[45] Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.[46]

• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years.[35] Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests.[47] It is used in nuclear fusion reactions,[48] as a tracer in isotope geochemistry,[49] and in specialized self-powered lighting devices.[50] Tritium has also been used in chemical and biological labeling experiments as a radiolabel.[51]

Hydrogen is the only element that has different names for its isotopes in common use today. (During the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used.) The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol P is already in use for phosphorus and thus is not available for protium.[52] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.[53]

Discovery and use

Hydrogen gas, H2, was first artificially produced and formally described by T. von Hohenheim (also known as Paracelsus, 1493–1541) via the mixing of metals with strong acids.[54] He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[55] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "flammable air" and further finding in 1781 that the gas produces water when burned.
He is usually given credit for its discovery as an element.[56][57] In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek hydro meaning water and genes meaning creator)[58] when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.[57]

Hydrogen was liquefied for the first time by James Dewar in 1898, using regenerative cooling and his invention, the vacuum flask.[57] He produced solid hydrogen the next year.[57] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck.[56] Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.[57] François Isaac de Rivaz built the first internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.[57]

The first hydrogen-filled balloon was invented by Jacques Charles in 1783.[57] Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[57] German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of these had its maiden flight in 1900.[57] Regularly scheduled flights started in 1910, and by the outbreak of World War I in August 1914 they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s, and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on May 6, 1937.[57] The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. Either way, the damage to hydrogen's reputation as a lifting gas was already done.

In the same year, 1937, the first hydrogen-cooled turbogenerator went into service at Dayton, Ohio, operated by the Dayton Power & Light Co., with gaseous hydrogen as a coolant in the rotor and the stator;[59] because of the thermal conductivity of hydrogen gas, this remains the most common type in its field today. The nickel-hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2).[60] The ISS,[61] Mars Odyssey[62] and the Mars Global Surveyor,[63] for example, are equipped with nickel-hydrogen batteries. When the Hubble Space Telescope's original batteries were finally changed in May 2009, more than 19 years after launch, they had accumulated the highest number of charge/discharge cycles of such batteries.

Role in quantum theory

Hydrogen emission spectrum lines in the visible range. These are the four visible lines of the Balmer series.

Natural occurrence
Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms.[66] This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction and CNO cycle nuclear fusion.[67]

Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2 (for data see table). However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. Nonetheless, hydrogen is the third most abundant element on the Earth's surface.[69] Most of the Earth's hydrogen is in the form of chemical compounds such as hydrocarbons and water.[35] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[70]

Production

In the laboratory, H2 is usually prepared by the reaction of acids on metals such as zinc with Kipp's apparatus.

Zn + 2 H+ → Zn2+ + H2

Aluminium can also produce H2 upon treatment with bases:

2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2

In 2007, it was discovered that an alloy of aluminium and gallium in pellet form added to water could be used to generate hydrogen. The process also creates alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important potential implications for a hydrogen economy, since hydrogen can be produced on-site and does not need to be transported.[72]

Hydrogen can be prepared in several different ways, but economically the most important processes involve removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[73] At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2.

CH4 + H2O → CO + 3 H2

This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), since high-pressure H2 is the most marketable product and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:

CH4 → C + 2 H2

Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide:[73]

CO + H2O → CO2 + H2

Other important methods for H2 production include partial oxidation of hydrocarbons:[74]

2 CH4 + O2 → 2 CO + 4 H2

and the coal reaction, which can serve as a prelude to the shift reaction above:[73]

C + H2O → CO + H2

Hydrogen is sometimes produced and consumed in the same industrial process, without being separated.
In the Haber process for the production of ammonia, hydrogen is generated from natural gas.[75] Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.[76]

There are more than 200 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide-cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur-iodine cycle, copper-chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase for producing hydrogen and oxygen from water and heat without using electricity.[77] A number of laboratories (including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and water.[78]

Hydrogen is highly soluble in many rare earth and transition metals[80] and is soluble in both nanocrystalline and amorphous metals.[81] Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.[82] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas also poses a metallurgical problem, since hydrogen solubility contributes in an unwanted way to the embrittlement of many metals,[7] complicating the design of pipelines and storage tanks.[8]

Applications

Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding.[83][84] H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies.[85] Since H2 is lighter than air, having a little more than 1/15 of the density of air, it was once widely used as a lifting gas in balloons and airships.[86]

In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries.[87] Hydrogen is an authorized food additive (E 949) that allows food package leak testing, among other anti-oxidizing properties.[88]

Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions.[57] Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.[89] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[90] as an isotopic label in the biosciences,[51] and as a radiation source in luminous paints.[91] The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale at 13.8033 kelvins.[92]

Energy carrier

Hydrogen is not an energy resource,[93] except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development.[94] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth.[95] Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery.
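To make the "energy carrier, like a battery" point concrete, the following is a minimal sketch of a round-trip energy calculation. The stage efficiencies are illustrative assumptions chosen for the example, not values given in this article:

# Illustrative round-trip accounting for hydrogen used as an energy carrier.
# The stage efficiencies below are rough assumptions for the sake of the
# example, not authoritative values.
ELECTROLYSIS_EFF = 0.70  # assumed fraction of input electricity stored as H2
STORAGE_EFF = 0.90       # assumed losses from compression and handling
FUEL_CELL_EFF = 0.55     # assumed fraction of H2 energy recovered as electricity

def round_trip_efficiency(*stage_efficiencies: float) -> float:
    """Multiply the stage efficiencies to get the overall round-trip figure."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

print(f"{round_trip_efficiency(ELECTROLYSIS_EFF, STORAGE_EFF, FUEL_CELL_EFF):.0%}")
# Prints "35%" under these assumptions: most of the input energy is spent
# making and handling the hydrogen, which is why elemental hydrogen is best
# viewed as a store of energy rather than a primary energy source.

Whatever specific figures are assumed, the qualitative conclusion is the same: the hydrogen gives back only part of the energy originally used to produce it.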
Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable.[93] The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher.[93] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future carrier of energy on an economy-wide scale.[96] For example, carbon capture followed by CO2 sequestration could be conducted at the point of H2 production from fossil fuels.[97] Hydrogen used in transportation would burn relatively cleanly, with some NOx emissions,[98] but without carbon emissions.[97] However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[99]

Semiconductor industry

Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties.[100] It is also a potential electron donor in various oxide materials, including ZnO,[101][102] SnO2, CdO, MgO,[103] ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.[104]

Biological reactions

Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast.[106] Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.[107] Efforts have also been undertaken with genetically modified algae in a bioreactor.[108]

Safety and precautions

Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form.[109] In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids.[110] Hydrogen dissolves in many metals and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[111] leading to cracks and explosions.[112] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, a hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.[113]

Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container geometry.[109]
Preprint of a paper appearing in the Springer volume: Singularity Hypotheses: A Scientific and Philosophical Assessment (2013). Eden, A., Søraker, J., Moor, J. H., Steinhart, E., eds. Berlin: Springer.

The Biointelligence Explosion

How recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence

David Pearce (2012)

Edward O. Wilson, Consilience: The Unity of Knowledge (1999)

Freeman Dyson, New York Review of Books (July 19, 2007)

1 The Fate of the Germline

Genetic evolution is slow. Progress in artificial intelligence is fast. Only a handful of genes separate Homo sapiens from our hominid ancestors on the African savannah. Among our 23,000-odd protein-coding genes, variance in single nucleotide polymorphisms ("SNPs") accounts for just a small percentage of phenotypic variance in intelligence as measured by what we call IQ tests. True, the tempo of human evolution is about to accelerate. CRISPR-Cas9 genome-editing is a gamechanger. As the reproductive revolution of "designer babies" gathers pace, prospective parents will pre-select alleles and allelic combinations for a new child in anticipation of their behavioural effects - a novel kind of selection pressure to replace the "blind" genetic roulette of natural selection. In time, routine embryo screening via preimplantation genetic diagnosis will be complemented by gene therapy, genetic enhancement and then true designer zygotes. In consequence, life on Earth will also become progressively happier as the hedonic treadmill is recalibrated. In the new reproductive era, hedonic set-points and intelligence alike will be ratcheted upwards in virtue of selection pressure. For what parent-to-be wants to give birth to a low-status depressive "loser"? Future parents can enjoy raising a normal transhuman supergenius who grows up to be faster than Usain Bolt, more beautiful than Marilyn Monroe, more saintly than Nelson Mandela, more creative than Shakespeare - and smarter than Einstein.

2 Biohacking Your Personal Genome

Yet germline engineering is only one strand of the genomics revolution. Indeed, after humans master the ageing process, the extent to which traditional germlines or human generations will persist in the post-ageing world is obscure. Focus on the human germline ignores the slow-burning but then explosive growth of somatic gene enhancement in prospect. The CRISPR genome-editing revolution is accelerating. Later this century, innovative gene therapies will be succeeded by gene enhancement technologies - a value-laden dichotomy that reflects our impoverished human aspirations. Starting with individual genes, then clusters of genes, and eventually hundreds of genes and alternative splice variants, a host of recursively self-improving organic robots ("biohackers") will modify their genetic source code and modes of sentience: their senses, their moods, their motivation, their cognitive apparatus, their world-simulations and their default state of consciousness. As the era of open-source genetics unfolds, tomorrow's biohackers will add, delete, edit and customise their own legacy code in a positive feedback loop of cognitive enhancement.
Computer-aided genetic engineering will empower biological humans, transhumans and then posthumans to synthesise and insert new genes, variant alleles and even designer chromosomes - reweaving the multiple layers of regulation of our DNA to suit their wishes and dreams rather than the inclusive fitness of their genes in the ancestral environment. Collaborating and competing, next-generation biohackers will use stem-cell technologies to expand their minds, literally, via controlled neurogenesis. Freed from the constraints of the human birth canal, biohackers may re-sculpt the prison-like skull of Homo sapiens to accommodate a larger mind/brain, which can initiate recursive self-expansion in turn. Six crumpled layers of neocortex fed by today's miserly reward pathways aren't the upper bound of conscious mind, merely its seedbed. Each biological neuron and glial cell of your growing mind/brain can have its own dedicated artificial healthcare team, web-enabled nanobot support staff, and social network specialists; compare today's anonymous neural porridge. Transhuman minds will be augmented with neurochips, molecular nanotechnology, mind/computer interfaces and full-immersion virtual reality (VR) software. To achieve finer-grained control of cognition, mood and motivation, genetically enhanced transhumans will draw upon exquisitely tailored new designer drugs, nutraceuticals and cognitive enhancers - precision tools that make today's crude interventions seem the functional equivalent of glue-sniffing. By way of comparison, early in the twenty-first century the scientific counterculture is customizing a bewildering array of designer drugs that outstrip the capacity of the authorities to regulate or comprehend. The bizarre psychoactive effects of such agents dramatically expand the evidential base that our theory of consciousness must explain. However, such drugs are short-acting. Their benefits, if any, aren't cumulative. By contrast, the ability genetically to hack one's own source code will unleash an exponential growth of genomic rewrites - not mere genetic tinkering but a comprehensive redesign of "human nature". Exponential growth starts out almost unnoticeably, and then explodes. Human bodies, cognition and ancestral modes of consciousness alike will be transformed. Post-humans will range across immense state-spaces of conscious mind hitherto impenetrable because access to their molecular biology depended on crossing gaps in the fitness landscape prohibited by natural selection. Intelligent agency can "leap across" such fitness gaps. What we'll be leaping into is currently for the most part unknown: an inherent risk of the empirical method. But mastery of our reward circuitry can guarantee such state-spaces of experience will be glorious beyond human imagination. For intelligent biohacking can make unpleasant experience physically impossible because its molecular substrates are absent. Hedonically enhanced innervation of the neocortex can ensure a rich hedonic tone saturates whatever strange new modes of experience our altered neurochemistry discloses. Pilot studies of radical genetic enhancement will be difficult. Randomised longitudinal trials of such interventions in long-lived humans would take decades. In fact officially licensed, well-controlled prospective trials to test the safety and efficacy of genetic innovation will be hard if not impossible to conduct because all of us, apart from monozygotic twins, are genetically unique. 
Even monozygotic twins exhibit different epigenetic and gene expression profiles. Barring an ideological and political revolution, most formally drafted proposals for genetically-driven life-enhancement probably won't pass ethics committees or negotiate the maze of bureaucratic regulation. But that's the point of biohacking. By analogy today, if you're technically savvy, you don't want a large corporation controlling the operating system of your personal computer: you use open source software instead. Likewise, you don't want governments controlling your state of mind via drug laws. By the same token, tomorrow's biotech-savvy individualists won't want anyone restricting our right to customise and rewrite our own genetic source code in any way we choose. Will there initially be biohacking accidents? Personal tragedies? Most probably yes, until human mastery of the pleasure-pain axis is secure. By the end of next decade, every health-conscious citizen will be broadly familiar with the architecture of his or her personal genome: the cost of personal genotyping will be trivial, as will be the cost of DIY gene-manipulation kits. Let's say you decide to endow yourself with an extra copy of the N-methyl D-aspartate receptor subtype 2B (NR2B) receptor, a protein encoded by the GRIN2B gene. Possession of an extra NR2B subunit NMDA receptor is a crude but effective way to enhance your learning ability, at least if you're a transgenic mouse. Recall how Joe Tsien and his colleagues first gave mice extra copies of the NR2B receptor-encoding gene, then tweaked the regulation of those genes so that their activity would increase as the mice grew older. Unfortunately, it transpires that such brainy "Doogie mice" - and maybe brainy future humans endowed with an extra NR2B receptor gene - display greater pain-sensitivity too; certainly, NR2B receptor blockade reduces pain and learning ability alike. Being smart, perhaps you decide to counteract this heightened pain-sensitivity by inserting and then over-expressing a high pain-threshold, "low pain" allele of the SCN9A gene in your nociceptive neurons at the dorsal root ganglion and trigeminal ganglion. The SCN9A gene regulates pain-sensitivity; nonsense mutations abolish the capacity to feel pain at all. In common with taking polydrug cocktails, the factors to consider in making multiple gene modifications soon snowball; but you'll have heavy-duty computer software to help. Anyhow, the potential pitfalls and makeshift solutions illustrated in this hypothetical example could be multiplied in the face of a combinatorial explosion of possibilities on the horizon. Most risks - and opportunities - of genetic self-editing are presumably still unknown. Naively, genomic source-code self-editing will always be too difficult for anyone beyond a dedicated cognitive elite of recursively self-improving biohackers. Certainly there are strongly evolutionarily conserved "housekeeping" genes that archaic humans would be best advised to leave alone for the foreseeable future. Granny might do well to customize her Windows desktop rather than her personal genome - prior to her own computer-assisted enhancement, at any rate. Yet the Biointelligence Explosion won't depend on more than a small fraction of its participants mastering the functional equivalent of machine code - the three billion odd 'A's, 'C's, 'G's and 'T's of our DNA. 
For the open-source genetic revolution will be propelled by powerful suites of high-level gene-editing tools, insertion vector applications, nonviral gene-editing kits, and user-friendly interfaces. Clever computer modelling and "narrow" AI can assist the intrepid biohacker to become a recursively self-improving genomic innovator. Later this century, your smarter counterpart will have software tools to monitor and edit every gene, repressor, promoter and splice variant in every region of the genome: each layer of epigenetic regulation of your gene transcription machinery in every region of the brain. This intimate level of control won't involve just crude DNA methylation to turn genes off and crude histone acetylation to turn genes on. Personal self-invention will involve mastery and enhancement of the histone and micro-RNA codes to allow sophisticated fine-tuning of gene expression and repression across the brain.

Even today, researchers are exploring "nanochannel electroporation" (NEP) technologies that allow the mass-insertion of novel therapeutic genetic elements into our cells. Mechanical cell-loading systems will shortly be feasible that can inject up to 100,000 cells at a time. Before long, such technologies will seem primitive. Freewheeling genetic self-experimentation will be endemic as the DIY-Bio revolution unfolds. At present, crude and simple gene editing can be accomplished only via laborious genetic engineering techniques. Sophisticated authoring tools don't exist. In future, computer-aided genetic and epigenetic enhancement can become an integral part of your personal growth plan.

3 Will Humanity's Successors Also Be Our Descendants?

So we may distinguish two radically different conceptions of posthuman superintelligence: on one hand, our supersentient, cybernetically enhanced, genetically rewritten biological descendants; on the other, nonbiological superintelligence, either a Kurzweilian ecosystem or singleton Artificial General Intelligence (AGI) as foretold by the Machine Intelligence Research Institute (MIRI). Such a divide doesn't reflect a clean contrast between "natural" and "artificial" intelligence, the biological and the nonbiological. This contrast may prove another false dichotomy. Transhuman biology will increasingly become synthetic biology as genetic enhancement plus cyborgisation proceeds apace. "Cyborgisation" is a barbarous term to describe an invisible and potentially life-enriching symbiosis of biological sentience with artificial intelligence. Thus "narrow-spectrum" digital superintelligence on web-enabled chips can be more-or-less seamlessly integrated into our genetically enhanced bodies and brains. Seemingly limitless formal knowledge can be delivered on tap to supersentient organic wetware, i.e. us. Critically, transhumans can exploit what is misleadingly known as "narrow" or "weak" AI to enhance our own code in a positive feedback loop of mutual enhancement - first plugging in data and running multiple computer simulations, then tweaking and re-simulating once more. In short, biological humanity won't just be the spectator and passive consumer of the intelligence explosion, but its driving force. The smarter our AI, the greater our opportunities for reciprocal improvement. Multiple "hard" and "soft" take-off scenarios to posthuman superintelligence can be outlined for recursively self-improving organic robots, not just nonbiological AI.
Thus for serious biohacking later this century, artificial quantum supercomputers may be deployed rather than today's classical toys to test-run multiple genetic interventions, accelerating the tempo of our recursive self-improvement. Quantum supercomputers exploit quantum coherence to do googols of computations all at once. So the accelerating growth of human/computer synergies means it's premature to suppose biological evolution will be superseded by technological evolution, let alone a "robot rebellion" as the parasite swallows its host. As the human era comes to a close, the fate of biological (post)humanity is more likely to be symbiosis with AI followed by metamorphosis, not simple replacement. Despite this witches' brew of new technologies, a conceptual gulf remains in the futurist community between those who imagine human destiny, if any, lies in digital computers running programs with (hypothetical) artificial consciousness; and in contrast radical bioconservatives who believe that our posthuman successors will also be our supersentient descendants at their organic neural networked core - not the digital zombies of symbolic AI run on classical serial computers or their souped-up multiprocessor cousins. For one metric of progress in AI remains stubbornly unchanged: despite the exponential growth of transistors on a microchip, the soaring clock speed of microprocessors, the growth in computing power measured in MIPS, the dramatically falling costs of manufacturing transistors and the plunging price of dynamic RAM (etc), any chart plotting the growth rate in digital sentience shows neither exponential growth, nor linear growth, but no progress whatsoever. As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. On some fairly modest philosophical assumptions, digital computers were not subjects of experience in 1946 (cf. ENIAC); nor are they conscious subjects in 2012 (cf. "Watson"); nor do researchers know how any kind of sentience may be "programmed" in future. So what if anything does consciousness do? Is it computationally redundant? Pre-reflectively, we tend to have a "dimmer-switch" model of sentience: "primitive" animals have minimal awareness and "advanced" animals like human beings experience a proportionately more intense awareness. By analogy, most AI researchers assume that at a given threshold of complexity / intelligence / processing speed, consciousness will somehow "switch on", turn reflexive, and intensify too. The problem with the dimmer-switch model is that our most intense experiences, notably raw agony or blind panic, are also the most phylogenetically ancient, whereas the most "advanced" modes (e.g. linguistic thought and the rich generative syntax that has helped one species to conquer the globe) are phenomenologically so thin as to be barely accessible to introspection. Something is seriously amiss with our entire conceptual framework. So the structure of the remainder of this essay is as follows. I shall first discuss the risks and opportunities of building friendly biological superintelligence. Next I discuss the nature of full-spectrum superintelligence - and why consciousness is computationally fundamental to the past, present and future success of organic robots. Why couldn't recursively self-improving zombies modify their own genetic source code and bootstrap their way to full-spectrum superintelligence, i.e. a zombie biointelligence explosion? 
Finally, and most speculatively, I shall discuss the future of sentience in the cosmos.

4 Can We Build Friendly Biological Superintelligence?

4.1 Risk-Benefit Analysis.

Crudely speaking, evolution "designed" male human primates to be hunters/warriors. Evolution "designed" women to be attracted to powerful, competitive alpha males. Until humans rewrite our own hunter-gatherer source code, we shall continue to practise extreme violence against members of other species - and frequently against members of our own. A heritable (and conditionally activated) predisposition to unfriendliness towards members of other races and other species is currently hardwired even in "social" primates. Indeed we have a (conditionally activated) predisposition to compete against, and harm, anyone who isn't a genetically identical twin. Compared to the obligate siblicide found in some bird species, human sibling rivalry isn't normally so overtly brutal. But conflict as well as self-interested cooperation is endemic to Darwinian life on Earth. This grim observation isn't an argument for genetic determinism, or against gene-culture co-evolution, or to discount the decline of everyday violence with the spread of liberal humanitarianism - just a reminder of the omnipresence of immense risks so long as we're shot through with legacy malware.

Attempting to conserve the genetic status quo in an era of weapons of mass destruction (WMD) poses unprecedented global catastrophic and existential risks. Indeed the single biggest underlying threat to the future of sentient life within our cosmological horizon derives, not from asocial symbolic AI software in the basement turning rogue and going FOOM (a runaway computational explosion of recursive self-improvement), but from conserving human nature in its present guise. In the twentieth century, male humans killed over 100 million fellow humans and billions of non-human animals. This century's toll may well be higher. Mankind currently spends well over a trillion dollars each year on weapons designed to kill and maim other humans. The historical record suggests such weaponry won't all be beaten into ploughshares.

Strictly speaking, however, humanity is more likely to be wiped out by idealists than by misanthropes, death-cults or psychologically unstable dictators. Anti-natalist philosopher David Benatar's plea ("Better Never to Have Been") for human extinction via voluntary childlessness must fail, if only by reason of selection pressure; but not everyone who shares Benatar's bleak diagnosis of life on Earth will be so supine. Unless we modify human nature, compassionate-minded negative utilitarians with competence in bioweaponry, nanorobotics or artificial intelligence, for example, may quite conceivably take direct action. Echoing Moore's law, Eliezer Yudkowsky warns that "Every eighteen months, the minimum IQ necessary to destroy the world drops by one point."

Although suffering and existential risk might seem separate issues, they are intimately connected. Not everyone loves life so much they wish to preserve it. Indeed the extinction of Darwinian life is what many transhumanists are aiming for - just not framed in such apocalyptic and provocative language. For just as we educate small children so they can mature into fully-fledged adults, biological humanity may aspire to grow up, too, with the consequence that - in common with small children - archaic humans become extinct.

4.2 Technologies Of Biofriendliness.
How do you disarm a potentially hostile organic robot - despite your almost limitless ignorance of his source code? Provide him with a good education, civics lessons and complicated rule-governed ethics courses? Or give him a tablet of MDMA ("Ecstasy") and get smothered with hugs?

MDMA is short-acting. The "penicillin of the soul" is potentially neurotoxic to serotonergic neurons. In theory, however, lifelong use of safe and sustainable empathogens would be a passport to worldwide biofriendliness. MDMA releases a potent cocktail of oxytocin, serotonin and dopamine into the user's synapses, thereby inducing a sense of "I love the world and the world loves me." There's no technical reason why MDMA's acute pharmacodynamic effects can't be replicated indefinitely, shorn of its neurotoxicity. Designer "hug drugs" can potentially turn manly men into intelligent bonobos, more akin to the "hippie chimp" Pan paniscus than its less peaceable cousin Pan troglodytes. Violence would become unthinkable.

Yet is this sort of proposal politically credible? "Morality pills" and other pharmacological solutions to human unfriendliness are both personally unsatisfactory and sociologically implausible. Do we really want to drug each other up from early childhood? Moreover life would be immeasurably safer if our fellow humans weren't genetically predisposed to unfriendly behaviour in the first instance. But how can this friendly predisposition be guaranteed? Friendliness can't realistically be hand-coded by tweaking the connections and weight strengths of our neural networks.

4.3 Mass Oxytocination?

Amplified "trust hormone" might create the biological underpinnings of worldwide peace and love if negative feedback control of oxytocin release can be circumvented. Oxytocin is functionally antagonised by testosterone in the male brain. Yet oxytocin enhancers have pitfalls too. Enriched oxytocin function leaves one vulnerable to exploitation by the unenhanced. Can we really envisage a cross-cultural global consensus for mass medication? When? Optional or mandatory? And what might be the wider ramifications of a "high oxytocin, low testosterone" civilisation? Less male propensity to violent territorial aggression, for sure; but disproportionate intellectual progress in physics, mathematics and computer science to date has been driven by the hyper-systematising cognitive style of "extreme male" brains. Also, enriched oxytocin function can even indirectly promote unfriendliness to "out-groups" in consequence of promoting in-group bonding. So as well as oxytocin enrichment, global security demands a more inclusive, impartial, intellectually sophisticated conception of "us" that embraces all sentient beings - the expression of a hyper-developed capacity for empathetic understanding combined with a hyper-developed capacity for rational systematisation. Hence the imperative need for full-spectrum superintelligence.

4.4 Mirror-Touch Synaesthesia?

A truly long-term solution to unfriendly biological intelligence might be collectively to engineer ourselves with the functional generalisation of mirror-touch synaesthesia. On seeing you cut and hurt yourself, a mirror-touch synaesthete is liable to feel a stab of pain as acutely as you do. Conversely, your expressions of pleasure elicit a no less joyful response.
Thus mirror-touch synaesthesia is a hyper-empathising condition that makes deliberate unfriendliness, in effect, biologically impossible in virtue of cognitively enriching our capacity to represent each other's first-person perspectives. The existence of mirror-touch synaesthesia is a tantalising hint at the God-like representational capacities of a full-spectrum superintelligence. This so-called "disorder" is uncommon in humans.

4.5 Timescales.

The biggest problem with all these proposals, and other theoretical biological solutions to human unfriendliness, is timescale. Billions of human and non-human animals will have been killed and abused before they could ever come to pass. Cataclysmic wars may be fought in the meantime with nuclear, biological and chemical weapons harnessed to "narrow" AI. Our circle of empathy expands only slowly and fitfully. For the most part, religious believers and traditional-minded bioconservatives won't seek biological enhancement / remediation for themselves or their children. So messy democratic efforts at "political" compromise are probably unavoidable for centuries to come. For sure, idealists can dream up utopian schemes to mitigate the risk of violent conflict until the "better angels of our nature" can triumph, e.g. the election of a risk-averse all-female political class to replace legacy warrior males. Such schemes tend to founder on the rock of sociological plausibility. Innumerable sentient beings are bound to suffer and die in consequence.

4.6 Does Full-Spectrum Superintelligence Entail Benevolence?

The God-like perspective-taking faculty of a full-spectrum superintelligence doesn't entail distinctively human-friendliness any more than a God-like superintelligence would promote distinctively Aryan-friendliness. Indeed it's unclear why a benevolent superintelligence would want omnivorous killer apes in our current guise to walk the Earth in any shape or form. But is there any connection at all between benevolence and intelligence? Pre-reflectively, benevolence and intelligence are orthogonal concepts. There's nothing obviously incoherent about a malevolent God or a malevolent - or at least a callously indifferent - Superintelligence. Thus a sceptic might argue that there is no link whatsoever between benevolence - on the face of it a mere personality variable - and enhanced intellect. After all, some sociopaths score highly on our [autistic, mind-blind] IQ tests. Sociopaths know that their victims suffer. They just don't care.

However, what's critical in evaluating cognitive ability is a criterion of representational adequacy. Representation is not an all-or-nothing phenomenon; it varies in functional degree. More specifically here, the cognitive capacity to represent the formal properties of mind differs from the cognitive capacity to represent the subjective properties of mind. Thus a notional zombie Hyper-Autist robot running a symbolic AI program on an ultrapowerful digital computer with a classical von Neumann architecture may be beneficent or maleficent in its behaviour toward sentient beings. By its very nature, it can't know or care. Most starkly, the zombie Hyper-Autist might be programmed to convert the world's matter and energy into either heavenly "utilitronium" or diabolical "dolorium" without the slightest insight into the significance of what it was doing.
This kind of scenario is at least a notional risk of creating insentient Hyper-Autists endowed with mere formal utility functions rather than hyper-sentient full-spectrum superintelligence. By contrast, full-spectrum superintelligence does care in virtue of its full-spectrum representational capacities - a bias-free generalisation of the superior perspective-taking, "mind-reading" capabilities that enabled humans to become the cognitively dominant species on the planet. Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.

Could there arise "evil" mirror-touch synaesthetes? In one sense, no. You can't go around wantonly hurting other sentient beings if you feel their pain as your own. Full-spectrum intelligence is friendly intelligence. But in another sense yes, insofar as primitive mirror-touch synaesthetes are prey to species-specific cognitive limitations that prevent them from acting rationally to maximise the well-being of all sentience. Full-spectrum superintelligences would lack those computational limitations in virtue of their full cognitive competence in understanding both the subjective and the formal properties of mind. Perhaps full-spectrum superintelligences might optimise your matter and energy into a blissful smart angel; but they couldn't wantonly hurt you, whether by neglect or design.

More practically today, a cognitively superior analogue of natural mirror-touch synaesthesia should soon be feasible with reciprocal neuroscanning technology - a kind of naturalised telepathy. At first blush, mutual telepathic understanding sounds a panacea for ignorance and egotism alike. An exponential growth of shared telepathic understanding might safeguard against global catastrophe born of mutual incomprehension and WMD. As the poet Henry Wadsworth Longfellow observed, "If we could read the secret history of our enemies, we should find in each life sorrow and suffering enough to disarm all hostility." Maybe so. The problem here, as advocates of Radical Honesty soon discover, is that many Darwinian thoughts scarcely promote friendliness if shared: they are often ill-natured, unedifying and unsuitable for public consumption. Thus unless perpetually "loved-up" on MDMA or its long-acting equivalents, most of us would find mutual mind-reading a traumatic ordeal. Human society and most personal relationships would collapse in acrimony rather than blossom.

Either way, our human incapacity fully to understand the first-person point of view of other sentient beings isn't just a moral failing or a personality variable; it's an epistemic limitation, an intellectual failure to grasp an objective feature of the natural world. Even "normal" people share with sociopaths this fitness-enhancing cognitive deficit. By posthuman criteria, perhaps we're all quasi-sociopaths. The egocentric delusion (i.e. that the world centres on one's existence) is genetically adaptive and has been strongly selected for over hundreds of millions of years. Fortunately, it's a cognitive failing amenable to technical fixes and eventually a cure: full-spectrum superintelligence. The devil is in the details, or rather the genetic source code.

5 A Biotechnological Singularity?

Yet does this positive feedback loop of reciprocal enhancement amount to a Singularity in anything more than a metaphorical sense?
The risk of talking portentously about "The Singularity" isn't of being wrong: it's of being "not even wrong" - of reifying one's ignorance and elevating it to the status of an ill-defined apocalyptic event. Already multiple senses of "The Singularity" proliferate in popular culture. Does taking LSD induce a Consciousness Singularity? How about the abrupt and momentous discontinuity in one's conception of reality entailed by waking from a dream? Or the birth of language? Or the Industrial Revolution? So is a Biotechnological Singularity, or "BioSingularity" for short, any more rigorously defined than "Technological Singularity"? Metaphorically, perhaps, the impending biointelligence explosion represents an intellectual "event horizon" beyond which archaic humans cannot model or understand the future. Events beyond the BioSingularity will be stranger than science-fiction: too weird for unenhanced human minds - or the algorithms of a zombie super-Asperger - to predict or understand. In the popular sense of "event horizon", maybe the term is apt too, though the metaphor is still potentially misleading. Thus theoretical physics tells us that one could pass through the event horizon of a non-rotating supermassive black hole and not notice any subjective change in consciousness - even though one's signals would now be inaccessible to an external observer. The BioSingularity will feel different in ways a human conceptual scheme can't express. But what is the empirical content of this claim?

6 What Is Full-Spectrum Superintelligence?

"[g is] ostensibly some innate scalar brain force...[However] ability is a folk concept and not amenable to scientific analysis." (William James)

6.1 Intelligence.

"Intelligence" is a folk concept. The phenomenon is not well-defined - or rather any attempt to do so amounts to a stipulative definition that doesn't "carve Nature at the joints". The Cattell-Horn-Carroll (CHC) psychometric theory of human cognitive abilities is probably the most popular in academia and the IQ testing community. But the Howard Gardner multiple intelligences model, for example, differentiates "intelligence" into various spatial, linguistic, bodily-kinaesthetic, musical, interpersonal, intrapersonal, naturalistic and existential intelligences rather than a single general ability ("g"). Who's right? As it stands, "g" is just a statistical artefact of our culture-bound IQ tests. If general intelligence were indeed akin to an innate scalar brain force, as some advocates of "g" believe, or if intelligence could best be modelled by the paradigm of symbolic AI, then the exponential growth of digital computer processing power might indeed entail an exponential growth in intelligence too - perhaps leading to some kind of Super-Watson. Other facets of intelligence, however, resist enhancement by mere acceleration of raw processing power.

The non-exhaustive set of criteria below doesn't pretend to be anything other than provisional. These criteria are amplified in the sections to follow. Full-Spectrum Superintelligence entails:

1. the capacity to run data-driven, almost real-time simulations of the mind-independent environment: world-simulation ("perception"). (cf. naive realist theories of "perception" versus the world-simulation or "Matrix" paradigm. Compare disorders of binding, e.g. simultanagnosia (an inability to perceive the visual field as a whole), cerebral akinetopsia ("motion blindness"), etc. In the absence of a data-driven, almost real-time simulation of the environment, intelligent agency is impossible.)

2. a unitary phenomenal self, at least fleetingly. (cf. dissociative identity disorder (DID or "multiple personality disorder"), or florid schizophrenia, or your personal computer: in the absence of at least a fleetingly unitary self, what philosophers call "synchronic identity", there is no entity that is intelligent, just an aggregate of discrete algorithms and an operating system.)

3. a "mind-reading" or perspective-taking faculty; higher-order intentionality (e.g. "he believes that she hopes that they fear that he wants...", etc): social intelligence. The intellectual success of the most cognitively successful species on the planet rests, not just on the recursive syntax of human language, but also on our unsurpassed "mind-reading" prowess, an ability to simulate the perspective of other unitary minds: the "Machiavellian Ape" hypothesis. Any ecologically valid intelligence test designed for a species of social animal must incorporate social cognition and the capacity for co-operative problem-solving. So must any test of empathetic superintelligence.

4. a metric to distinguish the important from the trivial. (our theory of significance should be explicit rather than implicit, as in contemporary IQ tests. What distinguishes, say, mere calendrical prodigies and other "savant syndromes" from, say, a Grigori Perelman who proved the Poincaré conjecture? Intelligence entails understanding what does - and doesn't - matter. What matters is of course hugely contentious.)

and finally

6. "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

We may then ask which facets of full-spectrum superintelligence will be accelerated by the exponential growth of digital computer processing power. Number six, clearly, as decades of post-ENIAC progress in computer science attest. But what about numbers one to five? Here the picture is murkier.

6.2 The Bedrock Of Intelligence: World-Simulation ("Perception")

Consider criterion number one, world-simulating prowess, or what we misleadingly term "perception". The philosopher Bertrand Russell once aptly remarked that one never sees anything but the inside of one's own head. In contrast to such inferential realism, commonsense perceptual direct realism offers all the advantages of theft over honest toil - and it's computationally useless for the purposes either of building artificial general intelligence or understanding its biological counterparts. For the bedrock of intelligent agency is the capacity of an embodied agent computationally to simulate dynamic objects, properties and events in the mind-independent environment. The evolutionary success of organic robots over the past c. 540 million years has been driven by our capacity to run data-driven egocentric world-simulations - what the naive realist, innocent of modern neuroscience or post-Everett quantum mechanics, calls simply perceiving one's physical surroundings.
Unlike classical digital computers, organic neurocomputers can simultaneously "bind" multiple features (edges, colours, motion, etc) distributively processed across the brain into unitary phenomenal objects embedded in unitary spatio-temporal world-simulations apprehended by a momentarily unitary self: what Kant calls "the transcendental unity of apperception". These simulations run in (almost) real time; the time-lag in our world-simulations is barely more than a few dozen milliseconds. Such blistering speed of construction and execution is adaptive and often life-saving in a fast-changing external environment. Recapitulating evolutionary history, pre-linguistic human infants must first train up their neural networks to bind the multiple features of dynamic objects and run unitary world-simulations before they can socially learn second-order representation and then third-order representation, i.e. language followed later in childhood by meta-language.

We should pause here. This is not a mainstream view. Most AI researchers regard stories of a non-classical mechanism underlying the phenomenal unity of biological minds as idiosyncratic at best. In fact no scientific consensus exists on the molecular underpinnings of the unity of consciousness, nor on how such unity is even physically possible. By analogy, 1.3 billion skull-bound Chinese minds can never be a single subject of experience, irrespective of their interconnections. How could waking or dreaming communities of membrane-bound classical neurons - even microconscious classical neurons - be any different? If materialism is true, conscious mind should be impossible. Yet any explanation of phenomenal object binding, the unity of perception, or the phenomenal unity of the self that invokes quantum coherence, as here, is controversial. One reason it's controversial is that the delocalisation involved in quantum coherence is exceedingly short-lived in an environment as warm and noisy as a macroscopic brain - supposedly too short-lived to do computationally useful work. Physicist Max Tegmark estimates that thermally-induced decoherence destroys any macroscopic coherence of brain states within 10^-13 seconds: an unimaginably long time in natural Planck units, but an unimaginably short time by everyday human intuitions. Perhaps it would be wiser just to acknowledge these phenomena are unexplained mysteries within a conventional materialist framework - as mysterious as the existence of consciousness itself. But if we're speculating about the imminent end of the human era, shoving the mystery under the rug isn't really an option.

For the different strands of the Singularity movement share a common presupposition. This presupposition is that our complete ignorance within a materialist conceptual scheme of why consciousness exists (the "Hard Problem"), and of even the ghost of a solution to the Binding Problem, doesn't matter for the purposes of building the seed of artificial posthuman superintelligence. Our ignorance supposedly doesn't matter either because consciousness and/or our quantum "substrate" are computationally irrelevant to cognition and the creation of nonbiological minds, or alternatively because the feasibility of "whole brain emulation" (WBE) will allow us to finesse our ignorance. Unfortunately, we have no grounds for believing this suppressed premiss is true or that the properties of our quantum "substrate" are functionally irrelevant to full-spectrum superintelligence or its humble biological predecessors.
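To put Tegmark's figure in perspective, a one-line comparison may help; the only number assumed here beyond the text is the textbook value of the Planck time, t_P ≈ 5.39 × 10^-44 s:

\frac{10^{-13}\,\mathrm{s}}{t_P} \approx \frac{10^{-13}\,\mathrm{s}}{5.39\times 10^{-44}\,\mathrm{s}} \approx 2\times 10^{30}

So a decoherence window that is vanishingly brief by everyday intuitions still spans roughly 10^30 Planck units - the sense in which it is "unimaginably long" in natural units yet "unimaginably short" for human purposes.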
Conscious minds are not substrate-neutral digital computers. Humans investigate problems of which digital computers are invincibly ignorant, not least the properties of consciousness itself. The Hard Problem of consciousness can't be quarantined from the rest of science and treated as a troublesome but self-contained anomaly: its mystery infects everything that we think we know about ourselves, our computers and the world. Either way, the conjecture that the phenomenal unity of perception is a manifestation of ultra-rapid sequences of irreducible quantum coherent states isn't a claim that the mind/brain is capable of detecting events in the mind-independent world on this kind of sub-picosecond timescale. Rather the role of the local environment in shaping action-guiding experience in the awake mind/brain is here conjectured to be quantum state-selection. When we're awake, patterns of impulses from e.g. the optic nerve select which quantum-coherent frames are generated by the mind/brain - in contrast to the autonomous world-simulations spontaneously generated by the dreaming brain.

Other quantum mind theorists, most notably Roger Penrose and Stuart Hameroff, treat quantum minds as evolutionarily novel rather than phylogenetically ancient. They invoke a non-physical wave-function collapse and unwisely focus on e.g. the ability of mathematically-inclined brains to perform non-computable functions in higher mathematics, a feat for which selection pressure has presumably been non-existent. Yet the human capacity for sequential linguistic thought and formal logico-mathematical reasoning is a late evolutionary novelty executed by a slow, brittle virtual machine running on top of its massively parallel quantum parent - a momentous evolutionary innovation whose neural mechanism is still unknown. In contrast to the evolutionary novelty of serial linguistic thought, our ancient and immensely adaptive capacity to run unitary world-simulations, simultaneously populated by hundreds or more dynamic unitary objects, enables organic robots to solve the computational challenges of navigating a hostile environment that would leave the fastest classical supercomputer grinding away until Doomsday.

Physical theory (cf. the Bekenstein bound) shows that informational resources as classically conceived are not just physical but finite and scarce: a maximum possible limit of 10^120 bits, set by the surface area of the entire accessible universe expressed in Planck units according to the Holographic principle. An infinite computing device like a universal Turing machine (UTM) is physically impossible. So invoking computational equivalence and asking whether a classical Turing machine can run a human-equivalent macroscopic world-simulation is akin to asking whether a classical Turing machine can factor 1,500-digit numbers in real-world time [i.e. no]. No doubt resourceful human and transhuman programmers will exploit all manner of kludges, smart workarounds and "brute-force" algorithms to try and defeat the Binding Problem in AI. How will they fare? Compare clod-hopping AlphaDog with the sophisticated functionality of the sesame-seed-sized brain of a bumblebee. Brute-force algorithms suffer from an exponentially growing search space that soon defeats any classical computational device in open-field contexts. As witnessed by our seemingly effortless world-simulations, organic minds are ultrafast; classical computers are slow. Serial thinking is slower still; but that's not what conscious biological minds are good at.
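To make these orders of magnitude concrete, here is a deliberately crude back-of-the-envelope sketch in Python. It assumes nothing beyond the figures quoted above: naive trial division stands in for the "brute-force" approach, and ~10^120 bits is taken as the holographic bound. (Real factoring algorithms such as the general number field sieve, let alone Shor's algorithm on quantum hardware, do far better than trial division; the point is only to illustrate the exponential blow-up the text describes.)

```python
# Back-of-the-envelope sketch, working in orders of magnitude (log10).
# Illustrative assumptions only: naive trial division as the "brute-force"
# method, and ~10^120 bits as the holographic bound quoted in the text.

digits = 1500                 # a 1,500-digit number N is roughly 10^1500
sqrt_n_log10 = digits // 2    # trial division tests divisors up to sqrt(N) ~ 10^750

holographic_bits_log10 = 120  # ~10^120 bits for the entire accessible universe

print(f"Candidate divisors up to sqrt(N): ~10^{sqrt_n_log10}")
print(f"Holographic bound on bits:        ~10^{holographic_bits_log10}")
print(f"Shortfall: ~10^{sqrt_n_log10 - holographic_bits_log10} more candidates "
      f"than bits in the accessible universe")

# The same exponential blow-up afflicts brute-force search in open-field
# contexts generally: a branching factor b explored to depth d costs ~b**d steps.
b = 10
for depth in (10, 20, 40):
    print(f"branching factor {b}, depth {depth}: ~{b}^{depth} states to examine")
```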
On this conjecture, "substrate-independent" phenomenal world-simulations are impossible for the same reason that "substrate-independent" chemical valence structure is impossible. We're simply begging the question of what's functionally (ir)relevant. Ultimately, Reality has only a single, "program-resistant" ontological level even though it's amenable to description at different levels of computational abstraction; and the nature of this program-resistant level as disclosed by the subjective properties of one's mind (Lockwood 1989) is utterly at variance with what naive materialist metaphysics would suppose. If our phenomenal world-simulating prowess turns out to be constitutionally tied to our quantum mechanical wetware, then substrate-neutral virtual machines (VMs, i.e. software implementations of a digital computer that execute programs like a physical machine) will never be able to support "virtual" qualia or "virtual" unitary subjects of experience. This rules out sentient life "uploading" itself to digital nirvana.

Contra Marvin Minsky ("The most difficult human skills to reverse engineer are those that are unconscious"), the most difficult skills for roboticists to engineer in artificial robots are actually intensely conscious: our colourful, noisy, tactile, sometimes hugely refractory virtual worlds. Naively, for sure, real-time world-simulation doesn't sound too difficult. Hollywood robots do it all the time. Videogames become ever more photorealistic. Perhaps one imagines viewing some kind of inner TV screen, as in a Terminator movie or The Matrix. Yet the capacity of an awake or dreaming brain to generate unitary macroscopic world-simulations can only superficially resemble a little man (a "homunculus") viewing its own private theatre - on pain of an infinite regress. For by what mechanism would the homunculus view this inner screen? Emulating the behaviour of even the very simplest sentient organic robots on a classical digital computer is a daunting task. If conscious biological minds are irreducibly quantum mechanical by their very nature, then reverse-engineering the brain to create digital human "mindfiles" and "roboclones" alike will prove impossible.

6.3 The Bedrock Of Superintelligence: Hypersocial Cognition ("Mind-reading")

Will superintelligence be solipsistic or social? Overcoming a second obstacle to delivering human-level artificial general intelligence - let alone building a recursively self-improving super-AGI culminating in a technological Singularity - depends on finding a solution to the first challenge, i.e. real-time world-simulation. For the evolution of distinctively human intelligence, sitting on top of our evolutionarily ancient world-simulating prowess, has been driven by the interplay between our rich generative syntax and superior "mind-reading" skills: so-called Machiavellian intelligence. Machiavellian intelligence is an egocentric parody of God's-eye-view empathetic superintelligence. Critically for the prospects of building AGI, this real-time mind-modelling expertise is parasitic on the neural wetware that generates unitary first-order world-simulations - virtual worlds populated by the avatars of intentional agents whose different first-person perspectives can be partially and imperfectly understood by their simulator. Even articulate human subjects with autism spectrum disorder are prone to multiple language deficits because they struggle to understand the intentions - and higher-order intentionality - of neurotypical language users.
Indeed natural language is itself a pre-eminently social phenomenon: its criteria of application must first be socially learned. Not all humans possess the cognitive capacity to acquire mind-reading skills and the cooperative problem-solving expertise that sets us apart from other social primates. Most notably, people with autism spectrum disorder don't just fail to understand other minds; autistic intelligence cannot begin to understand its own mind. Pure autistic intelligence has no conception of a self that can be improved, recursively or otherwise. Autists can't "read" their own minds. The inability of the autistic mind to take what Daniel Dennett calls the intentional stance parallels the inability of classical computers to understand the minds of intentional agents - or have insight into their own zombie status. Even with smart algorithms and ultra-powerful hardware, the ability of ultra-intelligent autists to predict the long-term behaviour of mindful organic robots by relying exclusively on the physical stance (i.e. solving the Schrödinger equation of the intentional agent in question) will be extremely limited. For a start, much collective human behaviour is chaotic in the technical sense, i.e. it shows extreme sensitivity to initial conditions that confounds long-term prediction by even the most powerful real-world supercomputer. But there's a worse problem: reflexivity. Predicting sociological phenomena differs essentially from predicting mindless physical phenomena. Even in a classical, causally deterministic universe, the behaviour of mindful, reflexively self-conscious agents is frequently unpredictable, even in principle, from within the world owing to so-called prediction paradoxes. When the very act of prediction causally interacts with the predicted event, then self-defeating or self-falsifying predictions are inevitable. Self-falsifying predictions are a mirror image of so-called self-fulfilling predictions. So in common with autistic "idiot savants", classical AI gone rogue will be vulnerable to the low cunning of Machiavellian apes and the high cunning of our transhuman descendants. This argument (i.e. our capacity for unitary mind-simulation embedded in unitary world-simulation) for the cognitive primacy of biological general intelligence isn't decisive. For a start, computer-aided Machiavellian humans can program robots with "narrow" AI - or perhaps "train up" the connections and weights of a subsymbolic connectionist architecture - for their own manipulative purposes. Humans underestimate the risks of zombie infestation at our peril. Given our profound ignorance of how conscious mind is even possible, it's probably safest to be agnostic over whether autonomous nonbiological robots will ever emulate human world-simulating or mind-reading capacity in most open-field contexts, despite the scepticism expressed here. Either way, the task of devising an ecologically valid measure of general intelligence that can reliably, predictively and economically discriminate between disparate life-forms is immensely challenging, not least because the intelligence test will express the value-judgements, and species- and culture-bound conceptual scheme, of the tester. Some biases are insidious and extraordinarily subtle: for example, the desire systematically to measure "intelligence" with mind-blind IQ tests is itself a quintessentially Asperger-ish trait. In consequence, social cognition is disregarded altogether. 
What we fancifully style "IQ tests" are designed by people with abnormally high AQs as well as self-defined high IQs. Thus many human conceptions of (super)intelligence resemble high-functioning autism spectrum disorder (ASD) rather than a hyper-empathetic God-like Super-Mind. For example, an AI that attempted systematically to maximise the cosmic abundance of paperclips would be recognisably autistic rather than incomprehensibly alien. Full-spectrum (super-)intelligence is certainly harder to design or quantify scientifically than mathematical puzzle-solving ability or performance in verbal memory-tests: "IQ". But that's because superhuman intelligence will be not just quantitatively different but also qualitatively alien to human intelligence. To misquote Robert McNamara, cognitive scientists need to stop making what is measurable important, and find ways to make the important measurable.

An idealised full-spectrum superintelligence will indeed be capable of an impartial "view from nowhere" or God's-eye-view of the multiverse, a mathematically complete Theory Of Everything - as is modern theoretical physics, in aspiration if not achievement. But in virtue of its God's-eye-view, full-spectrum superintelligence must also be hypersocial and supersentient: able to understand all possible first-person perspectives, the state-space of all possible minds in other Hubble volumes, other branches of the universal wavefunction (UWF) - and in other solar systems and galaxies if such beings exist within our cosmological horizon. Idealised at least, full-spectrum superintelligence will be able to understand and weigh the significance of all possible modes of experience, irrespective of whether they have hitherto been recruited for information-signalling purposes. The latter is, I think, by far the biggest intellectual challenge we face as cognitive agents. The systematic investigation of alien types of consciousness intrinsic to varying patterns of matter and energy calls for a methodological and ontological revolution. Transhumanists talking of post-Singularity superintelligence are fond of hyperbole about "Level 5 Future Shock" etc; but it's been aptly said that Elvis Presley landing in a flying saucer on the White House lawn would be as nothing in strangeness compared to your first DMT trip.

6.4 Ignoring The Elephant: Consciousness.

The pachyderm in the room in most discussions of (super)intelligence is consciousness - not just human reflective self-awareness but the whole gamut of experience from symphonies to sunsets, agony to ecstasy: the phenomenal world of everyday experience. All one ever knows, except by inference, is the contents of one's own conscious mind: what philosophers call "qualia". Yet according to the ontology of our best story of the world, namely physical science, conscious minds shouldn't exist at all, i.e. we should be zombies, insentient patterns of matter and energy indistinguishable from normal human beings but lacking conscious experience. Dutch computer scientist Edsger Dijkstra once remarked, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Yet the question of whether a programmable digital computer - or a subsymbolic connectionist system with a merely classical parallelism - could possess, and think about, qualia, "bound" perceptual objects, a phenomenal self, or the unitary phenomenal minds of sentient organic robots can't be dismissed so lightly.
For if advanced nonbiological intelligence is to be smart enough comprehensively to understand, predict and manipulate the behaviour of enriched biological intelligence, then the AGI can't rely autistically on the "physical stance", i.e. to monitor the brains, scan the atoms and molecules, and then solve the Schrödinger equation of intentional agents like human beings. Such calculations would take longer than the age of the universe. For sure, many forms of human action can be predicted, fallibly, on the basis of crude behavioural regularities and reinforcement learning. Within your world-simulation, you don't need a theory of mind or an understanding of quantum mechanics to predict that Fred will walk to the bus-stop again today. Likewise, powerful tools of statistical analysis run on digital supercomputers can predict, fallibly, many kinds of human collective behaviour, for example stock markets. Yet to surpass human and transhuman capacities in all significant fields, AGI must understand how intelligent biological robots can think about, talk about and manipulate the manifold varieties of consciousness that make up their virtual worlds. Some investigators of consciousness even dedicate their lives to that end; what might a notional insentient AGI suppose we're doing? There is no evidence that serial digital computers have the capacity to do anything of the kind - or could ever be programmed to do so. Digital computers don't know anything about conscious minds, unitary persons, the nature of phenomenal pleasure and pain, or the Problem of Other Minds; it's not even "all dark inside". The challenge for a conscious mind posed by understanding itself "from the inside" pales into insignificance compared to the challenge for a nonconscious system of understanding a conscious mind "from the outside". Nor within the constraints of a materialist ontology have we the slightest clue how the purely classical parallelism of a subsymbolic, "neurally inspired" connectionist architecture could turn water into wine and generate unitary subjects of experience to fill the gap. For even if we conjecture in the spirit of Strawsonian physicalism - the only scientifically literate form of panpsychism - that the fundamental stuff of the world, the mysterious "fire in the equations", is fields of microqualia, this bold ontological conjecture doesn't, by itself, explain why biological robots aren't zombies. This is because structured aggregates of classically conceived "mind-dust" aren't the same as a unitary phenomenal subject of experience who apprehends "bound" spatio-temporal objects in a dynamic world-simulation. Without phenomenal object binding and the unity of perception, we are faced with the spectre of what philosophers call "mereological nihilism". Mereological nihilism, also known as "compositional nihilism", is the position that composite objects with proper parts do not exist: strictly speaking, only basic building blocks without parts have more than fictional existence. Unlike the fleetingly unitary phenomenal minds of biological robots, a classical digital computer and the programs it runs lacks ontological integrity: it's just an assemblage of algorithms. In other words, a classical digital computer has no self to understand or a mind recursively to improve, exponentially or otherwise. Talk about artificial "intelligence" exploding is just an anthropomorphic projection on our part. So how do biological brains solve the binding problem and become persons? In short, we don't know. 
Vitalism is clearly a lost cause. Most AI researchers would probably dismiss - or at least discount as wildly speculative - any story of the kind mooted here involving macroscopic quantum coherence grounded in an ontology of physicalistic panpsychism. The conjecture should be experimentally falsifiable with the tools of next-generation molecular matter-wave interferometry. But in the absence of any story at all, we are left with a theoretical vacuum and a faith that natural science - or the exponential growth of digital computer processing power culminating in a Technological Singularity - will one day deliver an answer. Evolutionary biologist Theodosius Dobzhansky famously observed that "Nothing in Biology Makes Sense Except in the Light of Evolution". In the same vein, nothing in the future of intelligent life in the universe makes sense except in the light of a solution to the Hard Problem of Consciousness and the closure of Levine's Explanatory Gap. Consciousness is the only reason anything matters at all; and it's the only reason why unitary subjects of experience can ask these questions; and yet materialist orthodoxy has no idea how or why the phenomenon exists. Unfortunately, the Hard Problem won't be solved by building more advanced digital zombies who can tell mystified conscious minds the answer.

More practically for now, perhaps the greatest cognitive challenge of the millennium and beyond is deciphering and systematically manipulating the "neural correlates of consciousness" (NCC). Neuroscientists use this expression in default of any deeper explanation of our myriad qualia. How and why does experimentally stimulating one cluster of nerve cells in the neocortex via microelectrodes yield the experience of phenomenal colour, while stimulating a superficially similar type of nerve cell induces a musical jingle; stimulating another with a slightly different gene-expression profile triggers a sense of everything being hysterically funny; stimulating another induces a hallucination of your mother; and stimulating another induces the experience of an archangel, say, in front of your body-image? In each case, the molecular variation in neuronal cell architecture is ostensibly trivial; the difference in subjective experience is profound. On a mind/brain identity theory, such experiential states are an intrinsic property of some configurations of matter and energy. How and why this is so is incomprehensible on an orthodox materialist ontology. Yet empirically, microelectrodes, dreams and hallucinogenic drugs elicit these experiences regardless of any information-signalling role such experiences typically play in the "normal" awake mind/brain.

Orthodox materialism and classical information-based ontologies alike do not merely lack any explanation for why consciousness and our countless varieties of qualia exist. They lack any story of how our qualia could have the causal efficacy to allow us to allude to - and in some cases volubly expatiate on - their existence. Thus mapping the neural correlates of consciousness is not amenable to formal computational methods: digital zombies don't have any qualia, or at least any "bound" macroqualia, that could be mapped, nor a unitary phenomenal self that could do the mapping. Note this claim for the cognitive primacy of biological sentience isn't a denial of the Church-Turing thesis that, given infinite time and infinite memory, any Turing-universal system can formally simulate the behaviour of any conceivable process that can be digitised.
Indeed (very) fancifully, if the multiverse were being run on a cosmic supercomputer, speeding up its notional execution a million times would presumably speed us up a million times too. But that's not the issue here. Rather the claim is that nonbiological AI run on real-world digital computers cannot tackle the truly hard and momentous cognitive challenge of investigating first-person states of egocentric virtual worlds - or understand why some first-person states, e.g. agony or bliss, are intrinsically important, and cause unitary subjects of experience, persons, to act the way we do.

At least in common usage, "intelligence" refers to an agent's ability to achieve goals in a wide range of environments. What we call greater-than-human intelligence or Superintelligence presumably involves the design of qualitatively new kinds of intelligence never seen before. Hence the growth of artificial intelligence and symbolic AI, together with subsymbolic (allegedly) brain-inspired connectionist architectures and soon artificial quantum computers. But contrary to received wisdom in AI research, sentient biological robots are making greater cognitive progress in discovering the potential for truly novel kinds of intelligence than the techniques of formal AI. We are doing so by synthesising and empirically investigating a galaxy of psychoactive designer drugs - experimentally opening up the possibility of radically new kinds of intelligence in different state-spaces of consciousness. For the most cognitively challenging environments don't lie in the stars but in organic mind/brains - the baffling subjective properties of quantum-coherent states of matter and energy - most of which aren't explicitly represented in our existing conceptual scheme.

6.5 Case Study: Visual Intelligence versus Echolocatory Intelligence: What Is It Like To Be A Super-Intelligent Bat?

Let's consider the mental state-space of organisms whose virtual worlds are rooted in their dominant sense mode of echolocation. This example isn't mere science fiction. Unless post-Everett quantum mechanics is false, we're forced to assume that googols of quasi-classical branches of the universal wavefunction - the master formalism that exhaustively describes our multiverse - satisfy this condition. Indeed their imperceptible interference effects must be present even in "our" world: strictly speaking, interference effects from branches that have decohered ("split") never wholly disappear; they just become vanishingly small. Anyhow, let's assume these echolocatory superminds have evolved opposable thumbs, a rich generative syntax and advanced science and technology. How are we to understand or measure this alien kind of (super)intelligence? Rigging ourselves up with artificial biosonar apparatus and transducing incoming data into the familiar textures of sight or sound might seem a good start. But to understand the conceptual world of echolocatory superminds, we'd need to equip ourselves with neurons and neural networks neurophysiologically equivalent to those of smart chiropterans. If one subscribes to a coarse-grained functionalism about consciousness, then echolocatory experience would (somehow) emerge at some abstract computational level of description. The implementation details, or "meatware" as biological mind/brains are derisively called, are supposedly incidental or irrelevant.
The functionally unique valence properties of the carbon atom, and likewise the functionally unique quantum mechanical properties of liquid water, are discounted or ignored. Thus according to the coarse-grained functionalist, silicon chips could replace biological neurons without loss of function or subjective identity. By contrast, the micro-functionalist, often branded a mere "carbon chauvinist", reckons that the different intracellular properties of biological neurons - with their different gene expression profiles, diverse primary, secondary, tertiary, and quaternary amino acid chain folding (etc) as described by quantum chemistry - are critical to the many and varied phenomenal properties such echolocatory neurons express. Who is right? We'll only ever know the answer by rigorous self-experimentation: a post-Galilean science of mind. It's true that humans don't worry much about our ignorance of echolocatory experience, or our ignorance of echolocatory primitive terms, or our ignorance of possible conceptual schemes expressing echolocatory intelligence in echolocatory world-simulations. This is because we don't highly esteem bats. Humans don't share the same interests or purposes as our flying cousins, e.g. to attract desirable, high-fitness bats and rear reproductively successful baby bats. Alien virtual worlds based on biosonar don't seem especially significant to Homo sapiens except as an armchair philosophical puzzle. Yet this assumption would be intellectually complacent. Worse, understanding what it's like to be a hyperintelligent bat mind is comparatively easy. For echolocatory experience has been recruited by natural selection to play an information-signalling role in a fellow species of mammal; and in principle a research community of language users could biologically engineer their bodies and minds to replicate bat-type experience and establish crude intersubjective agreement to discuss and conceptualise its nature. By contrast, the vast majority of experiential state-spaces remain untapped and unexplored. This task awaits full-spectrum superintelligence in the posthuman era. In a more familiar vein, consider visual intelligence. How does one measure the visual intelligence of a congenitally blind person? Even with sophisticated technology that generates "inverted spectrograms" of the world to translate visual images into sound, the congenitally blind are invincibly ignorant of visual experience and the significance of visually-derived concepts. Just as a sighted idiot has greater visual intelligence than a blind super-rationalist sage, likewise psychedelics confer the ability to become (for the most part) babbling idiots about other state-spaces of consciousness - but babbling idiots whose insight is deeper than the drug-naive or the genetically unenhanced - or the digital zombies spawned by symbolic AI and its connectionist cousins. The challenge here is that the vast majority of these alien state-spaces of consciousness latent in organised matter haven't been recruited by natural selection for information-tracking purposes. So "psychonauts" don't yet have the conceptual equipment to navigate these alien state-spaces of consciousness in even a pseudo-public language, let alone integrate them in any kind of overarching conceptual framework. Note the claim here isn't that taking e.g. ketamine, LSD, salvia, DMT and a dizzying proliferation of custom-designed psychoactive drugs is the royal route to wisdom. 
Or that ingesting such agents will give insight into deep mystical truths. On the contrary: it's precisely because such realms of experience haven't previously been harnessed for information-processing purposes by evolution in "our" family of branches of the universal wavefunction that makes investigating their properties so cognitively challenging - currently beyond our conceptual resources to comprehend. After all, plants synthesise natural psychedelic compounds to scramble the minds of herbivores who might eat them, not to unlock mystic wisdom. Unfortunately, there is no "neutral" medium of thought impartially to appraise or perceptually cross-modally match all these other experiential state-spaces. One can't somehow stand outside one's own stream of consciousness to evaluate how the properties of the medium are infecting the notional propositional content of the language that one uses to describe it. By way of illustration, compare drug-induced visual experience in a notional community of congenitally blind rationalists who lack the visual apparatus to transduce incident electromagnetic radiation of our familiar wavelengths. The lone mystical babbler who takes such a vision-inducing drug is convinced that [what we would call] visual experience is profoundly significant. And as visually intelligent folk, we know that he's right: visual experience is potentially hugely significant - to an extent which the blind mystical babbler can't possibly divine. But can the drug-taker convince his congenitally blind fellow tribesmen that his mystical visual experiences really matter in the absence of perceptual equipment that permits sensory discrimination? No, he just sounds psychotic. Or alternatively, he speaks lamely and vacuously of the "ineffable". The blind rationalists of his tribe are unimpressed. Of course such ignorance of other state-spaces of experience doesn't normally trouble us. Just as the congenitally blind don't grow up in darkness - a popular misconception - the drug-naive and genetically unenhanced don't go around with a sense of what we're missing. We notice teeming abundance, not gaping voids. Contemporary humans can draw upon terms like "blindness" and "deafness" to characterise the deficits of their handicapped conspecifics. From the perspective of full-spectrum superintelligence, what we really need is millions more of such "privative" terms, as linguists call them, to label the different state-spaces of experience of which genetically unenhanced humans are ignorant. In truth, there may very well be more than millions of such nameless state-spaces, each as incommensurable as e.g. visual and auditory experience. We can't yet begin to quantify their number or construct any kind of crude taxonomy of their interrelationships. Note the problem here isn't cognitive bias or a deficiency in logical reasoning. Rather a congenitally blind (etc) super-rationalist is constitutionally ignorant of visual experience, visual primitive terms, or a visually-based conceptual scheme. So (s)he can't cite e.g. Aumann's agreement theorem [claiming in essence that two cognitive agents acting rationally and with common knowledge of each other's beliefs cannot agree to disagree] or be a good Bayesian rationalist or whatever: these are incommensurable state-spaces of experience as closed to human minds as Picasso is to an earthworm. Moreover there is no reason to expect one realm, i.e. "ordinary waking consciousness", to be cognitively privileged relative to every other realm. 
"Ordinary waking consciousness" just happened to be genetically adaptive in the African savannah on Planet Earth. Just as humans are incorrigibly ignorant of minds grounded in echolocation - both echolocatory world-simulations and echolocatory conceptual schemes - likewise we are invincibly ignorant of posthuman life while trapped within our existing genetic architecture of intelligence. In order to understand the world - both its formal/mathematical and its subjective properties - sentient organic life must bootstrap its way to super-sentient full-spectrum superintelligence. Grown-up minds need tools to navigate all possible state-spaces of qualia, including all possible first-person perspectives, and map them - initially via the neural correlates of consciousness in our world-simulations - onto the formalism of mathematical physics. Empirical evidence suggests that the behaviour of the stuff of the world is exhaustively described by the formalism of physics. To the best of our knowledge, physics is causally closed and complete, at least within the energy range of the Standard Model. In other words, there is nothing to be found in the world - no "element of reality", as Einstein puts it - that isn't captured by the equations of physics and their solutions. This is a powerful formal constraint on our theory of consciousness. Yet our ultimate theory of the world must also close Levine's notorious "Explanatory Gap". Thus we must explain why consciousness exists at all ("The Hard Problem"); offer a rigorous derivation of our diverse textures of qualia from the field-theoretic formalism of physics; and explain how qualia combine ("The Binding Problem") in organic minds. These are powerful constraints on our ultimate theory too. How can they be reconciled with physicalism? Why aren't we zombies? The hard-nosed sceptic will be unimpressed at such claims. How significant are these outlandish state-spaces of experience? And how are they computationally relevant to (super)intelligence? Sure, says the sceptic, reckless humans may take drugs, and experience wild, weird and wonderful states of mind. But so what? Such exotic states aren't objective in the sense of reliably tracking features of the mind-independent world. Elucidation of their properties doesn't pose a well-defined problem that a notional universal algorithmic intelligence could solve. Well, let's assume, provisionally at least, that all mental states are identical with physical states. If so, then all experience is an objective, spatio-temporally located feature of the world whose properties a unified natural science must explain. A cognitive agent can't be intelligent, let alone superintelligent, and yet be constitutionally ignorant of a fundamental feature of the world - not just ignorant, but completely incapable of gathering information about, exploring, or reasoning about its properties. Whatever else it may be, superintelligence can't be constitutionally stupid. What we need is a universal, species-neutral criterion of significance that can weed out the trivial from the important; and gauge the intelligence of different cognitive agents accordingly. Granted, such a criterion of significance might seem elusive to the antirealist about value. (Mackie 2001) Value nihilism treats any ascription of (in)significance as arbitrary. Or rather the value nihilist maintains that what we find significant simply reflects what was fitness-enhancing for our forebears in the ancestral environment of adaptation. 
Yet for reasons we simply don't understand, Nature discloses just such a universal touchstone of importance, namely the pleasure-pain axis: the world's inbuilt metric of significance and (dis)value. We're not zombies. First-person facts exist. Some of them matter urgently, e.g. I am in pain. Indeed it's unclear if the expression "I'm in agony; but the agony doesn't matter" even makes cognitive sense. Built into the very nature of agony is the knowledge that its subjective raw awfulness matters a great deal - not instrumentally or derivatively, but by its very nature. If anyone - or indeed any notional super-AGI - supposes that your agony doesn't matter, then he/it hasn't adequately represented the first-person perspective in question. 7 The Great Transition 7.1 The End Of Suffering. A defining feature of general intelligence is the capacity to achieve one's goals in a wide range of environments. All sentient biological agents are endowed with a pleasure-pain axis. All prefer occupying one end to the other. A pleasure-pain axis confers inherent significance on our lives: the opioid-dopamine neurotransmitter system extends from flatworms to humans. Our core behavioural and physiological responses to noxious and rewarding stimuli have been strongly conserved in our evolutionary lineage over hundreds of millions of years. Some researchers argue for psychological hedonism, the theory that all choice in sentient beings is motivated by a desire for pleasure or an aversion from suffering. When we choose to help others, this is because of the pleasure that we ourselves derive, directly or indirectly, from doing so. Pascal put it starkly: "All men seek happiness. This is without exception. Whatever different means they employ, they all tend to this end. The cause of some going to war, and of others avoiding it, is the same desire in both, attended with different views. This is the motive of every action of every man, even of those who hang themselves." In practice, the hypothesis of psychological hedonism is plagued with anomalies, circularities and complications if understood as a universal principle of agency: the "pleasure principle" is simplistic as it stands. Yet the broad thrust of this almost embarrassingly commonplace idea may turn out to be central to understanding the future of life in the universe. If even a weak and exception-laden version of psychological hedonism is true, then there is an intimate link between full-spectrum superintelligence and happiness: the "attractor" to which rational sentience is heading. If that's really what we're striving for, a lot of the time at least, then instrumental means-ends rationality dictates that intelligent agency should seek maximally cost-effective ways to deliver happiness - and then superhappiness and beyond. A discussion of psychological hedonism would take us too far afield here. More fruitful now is just to affirm a truism and then explore its ramifications for life in the post-genomic era. Happiness is typically one of our goals. Intelligence amplification entails pursuing our goals more rationally. For sure, happiness, or at least a reduction in unhappiness, is frequently sought under a variety of descriptions that don't explicitly allude to hedonic tone and sometimes disavow it altogether. Natural selection has "encephalised" our emotions in deceptive, fitness-enhancing ways within our world-simulations. 
Some of these adaptive fetishes may be formalised in terms of abstract utility functions that a rational agent would supposedly maximise. Yet even our loftiest intellectual pursuits are underpinned by the same neurophysiological reward and punishment pathways. The problem for sentient creatures is that, both personally and collectively, Darwinian life is not very smart or successful in its efforts to achieve long-lasting well-being. Hundreds of millions of years of "Nature, red in tooth and claw" attest to this terrible cognitive limitation. By a whole raft of indices (suicide rates, the prevalence of clinical depression and anxiety disorders, the Easterlin paradox, etc) humans are not getting any (un)happier on average than our Palaeolithic ancestors despite huge technological progress. Our billions of factory-farmed non-human victims spend most of their abject lives below hedonic zero. In absolute terms, the amount of suffering in the world increases each year in humans and non-humans alike. Not least, evolution sabotages human efforts to improve our subjective well-being thanks to our genetically constrained hedonic treadmill - the complicated web of negative feedback mechanisms in the brain that stymies our efforts to be durably happy at every turn. Discontent, jealousy, anxiety, periodic low mood, and perpetual striving for "more" were fitness-enhancing in the ancient environment of evolutionary adaptedness. Lifelong bliss wasn't harder for information-bearing self-replicators to encode. Rather lifelong bliss was genetically maladaptive and hence selected against. Only now can biotechnology remedy organic life's innate design flaw. A potential pitfall lurks here: the fallacy of composition. Just because all individuals tend to seek happiness and shun unhappiness doesn't mean that all individuals seek universal happiness. We're not all closet utilitarians. Genghis Khan wasn't trying to spread universal bliss. As Plato observed, "Pleasure is the greatest incentive to evil." But here's the critical point. Full-spectrum superintelligence entails the cognitive capacity impartially to grasp all possible first-person perspectives - overcoming egocentric, anthropocentric, and ethnocentric bias (cf. mirror-touch synaesthesia). As an idealisation, at least, full-spectrum superintelligence understands and weighs the full range of first-person facts. First-person facts are as much an objective feature of the natural world as the rest mass of the electron or the Second Law of Thermodynamics. You can't be ignorant of first-person perspectives and superintelligent any more than you can be ignorant of the Second law of Thermodynamics and superintelligent. By analogy, just as autistic superintelligence captures the formal structure of a unified natural science, a mathematically complete "view from nowhere", all possible solutions to the universal Schrödinger equation or its relativistic extension, likewise a full-spectrum superintelligence also grasps all possible first-person perspectives - and acts accordingly. In effect, an idealised full-spectrum superintelligence would combine the mind-reading prowess of a telepathic mirror-touch synaesthete with the optimising prowess of a rule-following hyper-systematiser on a cosmic scale. If your hand is in the fire, you reflexively withdraw it. In withdrawing your hand, there is no question of first attempting to solve the Is-Ought problem in meta-ethics and trying logically to derive an "ought" from an "is". 
Normativity is built into the nature of the aversive experience itself: I-ought-not-to-be-in-this-dreadful-state. By extension, perhaps a full-spectrum superintelligence will perform cosmic felicific calculus and execute some sort of metaphorical hand-withdrawal for all accessible suffering sentience in its forward light-cone. Indeed one possible criterion of full-spectrum superintelligence is the propagation of subjectively hypervaluable states on a cosmological scale. What this constraint on intelligent agency means in practice is unclear. Conceivably at least, idealised superintelligences must ultimately do what a classical utilitarian ethic dictates and propagate some kind of "utilitronium shockwave" across the cosmos. To the classical utilitarian, any rate of time-discounting distinguishable from zero is ethically unacceptable, so s/he should presumably be devoting most time and resources to that cosmological goal. An ethic of negative utilitarianism is often accounted a greater threat to intelligent life (cf. the hypothetical "button-pressing" scenario) than classical utilitarianism. But whereas a negative utilitarian believes that once intelligent agents have phased out the biology of suffering, all our ethical duties have been discharged, the classical utilitarian seems ethically committed to converting all accessible matter and energy into relatively homogeneous matter optimised for maximum bliss: "utilitronium". Hence the most empirically valuable outcome entails the extinction of intelligent life. Could this prospect derail superintelligence? Perhaps. But utilitronium shockwave scenarios shouldn't be confused with wireheading. The prospect of self-limiting superintelligence might be credible if either a (hypothetical) singleton biological superintelligence or its artificial counterpart discovers intracranial self-stimulation or its nonbiological analogues. Yet is this blissful fate a threat to anyone else? After all, a wirehead doesn't aspire to convert the rest of the world into wireheads. A junkie isn't driven to turn the rest of the world into junkies. By contrast, a utilitronium shockwave propagating across our Hubble volume would be the product of intelligent design by an advanced civilisation, not self-subversion of an intelligent agent's reward circuitry. Also, consider the reason why biological humanity - as distinct from individual humans - is resistant to wirehead scenarios, namely selection pressure. Humans who discover the joys of intracranial self-stimulation or heroin aren't motivated to raise children. So they are outbred. Analogously, full-spectrum superintelligences, whether natural or artificial, are likely to be social rather than solipsistic, not least because of the severe selection pressure exerted against any intelligent systems who turn in on themselves to wirehead rather than seek out unoccupied ecological niches. In consequence, the adaptive radiation of natural and artificial intelligence across the Galaxy won't be undertaken by stay-at-home wireheads or their blissed-out functional equivalents. On the face of it, this argument from selection pressure undercuts the prospect of superhappiness for all sentient life - the "attractor" towards which we may tentatively predict sentience is converging in virtue of the pleasure principle harnessed to ultraintelligent mind-reading prowess and utopian neuroscience.
But what is necessary for sentient intelligence is information-sensitivity to fitness-relevant stimuli - not an agent's absolute location on the pleasure-pain axis. True, uniform bliss and uniform despair are inconsistent with intelligent agency. Yet mere recalibration of a subject's "hedonic set-point" leaves intelligence intact. Both information-sensitive gradients of bliss and information-sensitive gradients of misery allow high-functioning performance and critical insight. Only sentience animated by gradients of bliss is consistent with a rich subjective quality of intelligent life. Moreover the nature of "utilitronium" is as obscure as its theoretical opposite, "dolorium". The problem here cuts deeper than mere lack of technical understanding, e.g. our ignorance of the gene expression profiles and molecular signature of pure bliss in neurons of the rostral shell of the nucleus accumbens and ventral pallidum, the twin cubic centimetre-sized "hedonic hotspots" that generate ecstatic well-being in the mammalian brain. Rather there are difficult conceptual issues at stake. For just as the torture of one mega-sentient being may be accounted worse than a trillion discrete pinpricks, conversely the sublime experiences of utilitronium-driven Jupiter minds may be accounted preferable to tiling our Hubble volume with the maximum abundance of micro-bliss. What is the optimal trade-off between quantity and intensity? In short, even assuming a classical utilitarian ethic, the optimal distribution of matter and energy that a God-like superintelligence would create in any given Hubble volume is very much an open question. 7.2 Paradise Engineering? The hypothetical shift to life lived entirely above Sidgwick's "hedonic zero" will mark a momentous evolutionary transition. What lies beyond? There is no reason to believe that hedonic ascent will halt in the wake of the world's last aversive experience in our forward light-cone. Admittedly, the self-intimating urgency of eradicating suffering is lacking in any further hedonic transitions, i.e. a transition from the biology of happiness to a biology of superhappiness; and then beyond. Yet why "lock in" mediocrity if intelligent life can lock in sublimity instead? Naturally, superhappiness scenarios could be misconceived. Long-range prediction is normally a fool's game. But it's worth noting that future life based on gradients of intelligent bliss isn't tied to any particular ethical theory: its assumptions are quite weak. Radical recalibration of the hedonic treadmill is consistent not just with classical or negative utilitarianism, but also with preference utilitarianism, Aristotelian virtue theory, a deontological or a pluralist ethic, Buddhism, and many other value systems besides. Recalibrating our hedonic set-point doesn't - or at least needn't - undermine critical discernment. All that's needed for the abolitionist project and its hedonistic extensions to succeed is that our ethic isn't committed to perpetuating the biology of involuntary suffering. Likewise, only a watered-down version of psychological hedonism is needed to lend the scenario sociological credibility. We can retain as much - or as little - of our existing preference architecture as we please. You can continue to prefer Shakespeare to Mills & Boon, Mozart to Morrissey, Picasso to Jackson Pollock while living perpetually in Seventh Heaven or beyond. Nonetheless an exalted hedonic baseline will revolutionise our conception of life.
The world of the happy is quite different from the world of the unhappy, says Wittgenstein; but the world of the superhappy will feel unimaginably different from the human, Darwinian world. Talk of preference conservation may reassure bioconservatives that nothing worthwhile will be lost in the post-Darwinian transition. Yet life based on information-sensitive gradients of superhappiness will most likely be "encephalised" in state-spaces of experience alien beyond human comprehension. Humanly comprehensible or otherwise, enriched hedonic tone can make all experience generically hypervaluable in an empirical sense - its lows surpassing today's peak experiences. Will such experience be hypervaluable in a metaphysical sense too? Is this question cognitively meaningful? 8 The Future Of Sentience 8.1 The Sentience Explosion. Man proverbially created God in his own image. In the age of the digital computer, humans conceive God-like superintelligence in the image of our dominant technology and personal cognitive style - refracted, distorted and extrapolated for sure, but still through the lens of human concepts. The "super-" in so-called superintelligence is just a conceptual fig-leaf that humans use to hide our ignorance of the future. Thus high-AQ / high-IQ humans may imagine God-like intelligence as some kind of Super-Asperger - a mathematical theorem-proving hyper-rationalist liable systematically to convert the world into computronium for its awesome theorem-proving. High-EQ, low-AQ humans, on the other hand, may imagine a cosmic mirror-touch synaesthete nurturing creatures great and small in expanding circles of compassion. From a different frame of reference, psychedelic drug investigators may imagine superintelligence as a Great Arch-Chemist opening up unknown state-spaces of consciousness. And so forth. Probably the only honest answer is to say, lamely, boringly, uninspiringly: we simply don't know. Grand historical meta-narratives are no longer fashionable. The contemporary Singularitarian movement is unusual insofar as it offers one such grand meta-narrative: history is the story of simple biological intelligence evolving through natural selection to become smart enough to conceive an abstract universal Turing machine (UTM), build and program digital computers - and then merge with, or undergo replacement by, recursively self-improving artificial superintelligence. These meta-narratives aren't mutually exclusive. Indeed on the story told here, full-spectrum superintelligence entails full-blown supersentience too: a seamless unification of the formal and the subjective properties of mind. If the history of futurology is any guide, the future will confound us all. Yet in the words of Alan Kay: "It's easier to invent the future than to predict it." * * * Baker, S. (2011). "Final Jeopardy: Man vs. Machine and the Quest to Know Everything". (Houghton Mifflin Harcourt). Ball, P. (2011). "Physics of life: The dawn of quantum biology," Nature 474 (2011), 272-274. Banissy, M., et al., (2009). "Prevalence, characteristics and a neurocognitive model of mirror-touch synaesthesia", Experimental Brain Research Volume 198, Numbers 2-3, 261-272, DOI: 10.1007/s00221-009-1810-9. Barkow, J., Cosmides, L., Tooby, J. (eds) (1992). "The Adapted Mind: Evolutionary Psychology and the Generation of Culture". (New York, NY: Oxford University Press). Baron-Cohen, S. (1995). "Mindblindness: an essay on autism and theory of mind". (MIT Press/Bradford Books).
Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. (2001). "The Autism-Spectrum Quotient (AQ): evidence from Asperger syndrome/high functioning autism, males and females, scientists and mathematicians", J Autism Dev Disord 31 (1): 5–17. doi:10.1023/A:1005653411471. PMID 11439754. Baron-Cohen S. (2001) "Autism Spectrum Questionnaire". (Autism Research Centre, University of Cambridge). Benatar, D. (2006). "Better Never to Have Been: The Harm of Coming Into Existence". (Oxford University Press). Bentham, J. (1789). "An Introduction to the Principles of Morals and Legislation". (reprint: Oxford: Clarendon Press). Berridge, KC, Kringelbach, ML (eds) (2010). "Pleasures of the Brain". (Oxford University Press). Bostrom, N. (2014). "Superintelligence: Paths, Dangers, Strategies". (Oxford University Press). Boukany, PE., et al. (2011). "Nanochannel electroporation delivers precise amounts of biomolecules into living cells", Nature Nanotechnology. 6 (2011), pp. 74. Brickman, P., Coates D., Janoff-Bulman, R. (1978). "Lottery winners and accident victims: is happiness relative?". J Pers Soc Psychol. 1978 Aug;36(8):917-927. Brooks, R. (1991). "Intelligence without representation". Artificial Intelligence 47 (1-3): 139–159, doi:10.1016/0004-3702(91)90053-M. Buss, D. (1997). "Evolutionary Psychology: The New Science of the Mind". (Allyn & Bacon). Byrne, R., Whiten, A. (1988). "Machiavellian intelligence". (Oxford: Oxford University Press). Carroll, JB. (1993). "Human cognitive abilities: A survey of factor-analytic studies". (Cambridge University Press). Chalmers, DJ. (1995). "Facing up to the hard problem of consciousness". Journal of Consciousness Studies 2, 3, 200-219. Churchland, P. (1989). "A Neurocomputational Perspective: The Nature of Mind and the Structure of Science". (MIT Press). Cialdini, RB. (1987) "Empathy-Based Helping: Is it selflessly or selfishly motivated?" Journal of Personality and Social Psychology. Vol 52(4), Apr 1987, 749-758. Clark, A. (2008). "Supersizing the Mind: Embodiment, Action, and Cognitive Extension". (Oxford University Press, USA). Cochran, G., Harpending, H. (2009). "The 10,000 Year Explosion: How Civilization Accelerated Human Evolution". (Basic Books). Cochran, G., Hardy, J., Harpending, H. (2006). "Natural History of Ashkenazi Intelligence", Journal of Biosocial Science 38 (5), pp. 659–693 (2006). Cohn, N. (1957). "The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages". (Pimlico). Dawkins, R. (1976). "The Selfish Gene". (New York City: Oxford University Press). de Garis, H. (2005). "The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines". ETC Publications. pp. 254. ISBN 978-0882801537. de Grey, A. (2007). "Ending Aging: The Rejuvenation Breakthroughs that Could Reverse Human Aging in Our Lifetime". (St. Martin's Press). Delgado, J. (1969). "Physical Control of the Mind: Toward a Psychocivilized Society". (Harper and Row). Dennett, D. (1987). "The Intentional Stance". (MIT Press). Deutsch, D. (1997). "The Fabric of Reality". (Penguin). Deutsch, D. (2011). "The Beginning of Infinity". (Penguin). Drexler, E. (1986). "Engines of Creation: The Coming Era of Nanotechnology". (Anchor Press/Doubleday, New York). Dyson, G. (2012). "Turing's Cathedral: The Origins of the Digital Universe". (Allen Lane). Everett, H. "The Theory of the Universal Wavefunction", Manuscript (1955), pp 3–140 of Bryce DeWitt, R.
Neill Graham, eds, "The Many-Worlds Interpretation of Quantum Mechanics", Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Francione, G. (2006). "Taking Sentience Seriously." Journal of Animal Law & Ethics 1, 2006. Gardner, H. (1983). "Frames of Mind: The Theory of Multiple Intelligences." (New York: Basic Books). Goertzel, B. (2006). "The hidden pattern: A patternist philosophy of mind." (Brown Walker Press). Gunderson, K., (1985) "Mentality and Machines". (U of Minnesota Press). Hagan, S., Hameroff, S. & Tuszynski, J. (2002). "Quantum computation in brain microtubules? Decoherence and biological feasibility". Physical Review E, 65: 061901. Haidt, J. (2012). "The Righteous Mind: Why Good People Are Divided by Politics and Religion". (Pantheon). Hameroff, S. (2006). "Consciousness, neurobiology and quantum mechanics" in: The Emerging Physics of Consciousness, (Ed.) Tuszynski, J. (Springer). Harris, S. (2010). "The Moral Landscape: How Science Can Determine Human Values". (Free Press). Haugeland, J. (1985). "Artificial Intelligence: The Very Idea". (Cambridge, Mass.: MIT Press). Holland, J. (2001). "Ecstasy: The Complete Guide: A Comprehensive Look at the Risks and Benefits of MDMA". (Park Street Press). Holland, JH. (1975). "Adaptation in Natural and Artificial Systems". (University of Michigan Press, Ann Arbor). Hutter, M. (2010). "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability". (Springer). Hutter, M. (2012). "Can Intelligence Explode?" Journal of Consciousness Studies, 19:1-2 (2012). Huxley, A. (1932). "Brave New World". (Chatto and Windus). Huxley, A. (1954). "Doors of Perception and Heaven and Hell". (Harper & Brothers). Kahneman, D. (2011). "Thinking, Fast and Slow". (Farrar, Straus and Giroux). Kant, I. (1781), "Critique of Pure Reason", translated/edited by P. Guyer and A. Wood. (Cambridge: Cambridge University Press, 1997). Koch, C. (2004). "The Quest for Consciousness: a Neurobiological Approach". (Roberts and Co.). Kurzweil, R. (2005). "The Singularity Is Near". (Viking). Kurzweil, R. (1998). "The Age of Spiritual Machines". (Viking). Langdon, W., Poli, R. (2002). "Foundations of Genetic Programming". (Springer). Lee HJ, Macbeth AH, Pagani JH, Young WS. (2009). "Oxytocin: the Great Facilitator of Life". Progress in Neurobiology 88 (2): 127–51. doi:10.1016/j.pneurobio.2009.04.001. PMC 2689929. PMID 19482229. Legg, S., Hutter, M. (2007). "Universal Intelligence: A Definition of Machine Intelligence". Minds & Machines, 17:4 (2007) pages 391-444. Levine, J. (1983). "Materialism and qualia: The explanatory gap". Pacific Philosophical Quarterly 64 (October):354-61. Litt A. et al., (2006). "Is the Brain a Quantum Computer?" Cognitive Science, XX (2006) 1–11. Lloyd, S. (2002). "Computational Capacity of the Universe". Physical Review Letters 88 (23): 237901. arXiv:quant-ph/0110141. Bibcode 2002PhRvL..88w7901L. Lockwood, M. (1989). "Mind, Brain, and the Quantum". (Oxford University Press). Mackie, JL. (1991). "Ethics: Inventing Right and Wrong". (Penguin). Markram, H. (2006). "The Blue Brain Project", Nature Reviews Neuroscience, 7:153-160, 2006 February. PMID 16429124. Merricks, T. (2001) "Objects and Persons". (Oxford University Press). Minsky, M. (1987). "The Society of Mind". (Simon and Schuster). Moravec, H. (1990). "Mind Children: The Future of Robot and Human Intelligence". (Harvard University Press). Nagel, T. (1986). "The View From Nowhere". (Oxford University Press). Omohundro, S. (2007).
"The Nature of Self-Improving Artificial Intelligence“. Singularity Summit 2007, San Francisco, CA. Parfit, D. (1984). "Reasons and Persons". (Oxford: Oxford University Press). Pearce, D. (1995). "The Hedonistic Imperative". Pellissier, H. (2011) "Women-Only Leadership: Would it prevent war?" Penrose, R. (1994). "Shadows of the Mind: A Search for the Missing Science of Consciousness". (MIT Press). Peterson, D, Wrangham, R. (1997). "Demonic Males: Apes and the Origins of Human Violence". (Mariner Books). Pinker, S. (2011). "The Better Angels of Our Nature: Why Violence Has Declined". (Viking). Rees, M. (2003). "Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century—On Earth and Beyond". (Basic Books). Reimann F, et al. (2010). "Pain perception is altered by a nucleotide polymorphism in SCN9A." Proc Natl Acad Sci USA. 2010 Mar 16;107(11):5148-53. Rescher, N. (1974). "Conceptual Idealism". (Blackwell Publishers). Revonsuo, A. (2005). "Inner Presence: Consciousness as a Biological Phenomenon". (MIT Press). Revonsuo, A., Newman, J. (1999). "Binding and Consciousness". Consciousness and Cognition 8, 123-127. Riddoch, MJ., Humphreys, GW. (2004). "Object identification in simultanagnosia: When wholes are not the sum of their parts." Cognitive Neuropsychology, 21(2-4), Mar-Jun 2004, 423-441. Rumelhart, DE., McClelland, JL., and the PDP Research Group (1986). "Parallel Distributed Processing: Explorations in the Microstructure of Cognition". Volume 1: Foundations. (Cambridge, MA: MIT Press). Russell, B. (1948). "Human Knowledge: Its Scope and Limits". (London: George Allen & Unwin). Saunders, S., Barrett, J., Kent, A., Wallace, D. (2010). "Many Worlds?: Everett, Quantum Theory, and Reality". (Oxford University Press). Schlaepfer TE., Fins JJ. (2012). "How happy is too happy? Euphoria, Neuroethics and Deep Brain Stimulation of the Nucleus Accumbens". The American Journal of Bioethics 3:30-36. Schmidhuber, J. (2012). "Philosophers & Futurists, Catch Up! Response to The Singularity". Journal of Consciousness Studies, 19, No. 1–2, 2012, pp. 173–82. Seager, W. (1999). "Theories of Consciousness". (Routledge). Seager. (2006). "The 'intrinsic nature' argument for panpsychism". Journal of Consciousness Studies 13 (10-11):129-145. Sherman, W., Craig A., (2002). "Understanding Virtual Reality: Interface, Application, and Design". (Morgan Kaufmann). Shulgin, A. (1995). "PiHKAL: A Chemical Love Story". (Berkeley: Transform Press, U.S.). Shulgin, A. (1997). "TiHKAL: The Continuation". (Berkeley: Transform Press, U.S.). Shulgin, A. (2011). "The Shulgin Index Vol 1: Psychedelic Phenethylamines and Related Compounds". (Berkeley: Transform Press, US). Sidgwick, H. (1907) "The Methods of Ethics", Indianapolis: Hackett, seventh edition, 1981, I.IV. Singer, P. (1995). "Animal Liberation: A New Ethics for our Treatment of Animals". (Random House, New York). Singer, P. (1981). "The Expanding Circle: Ethics and Sociobiology". (Farrar, Straus and Giroux, New York). Smart, JM. (2008-11.) Evo Devo Universe? A Framework for Speculations on Cosmic Culture. In: "Cosmos and Culture: Cultural Evolution in a Cosmic Context", Steven J. Dick, Mark L. Lupisella (eds.), Govt Printing Office, NASA SP-2009-4802, Wash., D.C., 2009, pp. 201-295. Stock, G. (2002). "Redesigning Humans: Our Inevitable Genetic Future". (Houghton Mifflin Harcourt). Strawson G., et al. (2006). "Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?" 
(Imprint Academic). Tsien, J. et al., (1999). "Genetic enhancement of learning and memory in mice". Nature 401, 63-69 (2 September 1999) | doi:10.1038/43432. Turing, AM. (1950). "Computing machinery and intelligence". Mind, 59, 433-460. Vitiello, G. (2001). "My Double Unveiled; Advances in Consciousness". (John Benjamins). Waal, F. (2000). "Chimpanzee Politics: Power and Sex among Apes". (Johns Hopkins University Press). Wallace, D. (2012). "The Emergent Multiverse: Quantum Theory according to the Everett Interpretation". (Oxford: Oxford University Press). Welty, G. (1970). "The History of the Prediction Paradox," presented at the Annual Meeting of the International Society for the History of the Behavioral and Social Sciences. Wohlsen, M. (2011). "Biopunk: DIY Scientists Hack the Software of Life". (Current). Yudkowsky, E. (2007). "Three Major Singularity Schools". Zeki, S. (1991). "Cerebral akinetopsia (visual motion blindness): A review". Brain 114, 811-824. doi: 10.1093/brain/114.2.811. * * * David Pearce (2012, last updated 2016)
Electron

From Wikipedia, the free encyclopedia

Hydrogen atom orbitals at different energy levels. The more opaque areas are where one is most likely to find an electron at any given time.

Composition: Elementary particle[1]
Interactions: Gravity, electromagnetic, weak
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851),[2] G. Johnstone Stoney (1874) and others.[3][4]
Discovered: J. J. Thomson (1897)[5]
Mass: 9.10938356(11)×10−31 kg[6]; 5.48579909070(16)×10−4 u[6]; [1822.8884845(14)]−1 u[note 1]; 0.5109989461(31) MeV/c2[6]
Mean lifetime: stable (> 6.6×1028 yr[7])
Electric charge: −1 e[note 2]; −1.6021766208(98)×10−19 C[6]; −4.80320451(10)×10−10 esu
Magnetic moment: −1.00115965218091(26) μB[6]
Weak isospin: LH: −1/2, RH: 0
Weak hypercharge: LH: −1, RH: −2

The electron is a subatomic particle, symbol e−, whose electric charge is negative one elementary charge.[8] Electrons belong to the first generation of the lepton particle family,[9] and are generally thought to be elementary particles because they have no known components or substructure.[1] The electron has a mass that is approximately 1/1836 that of the proton.[10] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[9] Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.[11] Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, said observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without, allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system.
The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[12] In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897.[5] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons. Discovery of effect of electric force[edit] The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[13] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electrica, to refer to those substances with property similar to that of amber which attract small objects after being rubbed.[14] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron). Discovery of two kinds of charges[edit] In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repulsed by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined.[14][15] American scientist Ebenezer Kinnersley later also independently reached the same conclusion.[16]:118 A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (-). He gave them the modern charge nomenclature of positive and negative respectively.[17] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[18] Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[2] Beginning in 1846, German physicist William Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[19] However, Stoney believed these charges were permanently attached to atoms and could not be removed. 
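Stoney's estimate can be reconstructed with a minimal numerical sketch using modern values (his own 1874 figures were much rougher, and the constant names below are chosen here purely for illustration): by Faraday's laws of electrolysis, one mole of a monovalent ion carries a fixed total charge, so the charge per ion is the Faraday constant divided by the Avogadro constant.

    # Illustrative estimate of the elementary charge from electrolysis,
    # in the spirit of Stoney's argument but using modern constants.
    FARADAY = 96485.332       # charge carried by one mole of monovalent ions, C/mol
    AVOGADRO = 6.02214076e23  # ions per mole, 1/mol

    elementary_charge = FARADAY / AVOGADRO
    print(f"{elementary_charge:.4e} C")  # ~1.6022e-19 C, matching the modern value
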
In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[3] Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron.[20][21] The word electron is a combination of the words electric and ion.[22] The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[23][24] Discovery of free electrons outside matter[edit] A beam of electrons deflected in a circle by a magnetic field[25] The discovery of the electron by Joseph Thomson was closely tied to decades of experimental and theoretical research on cathode rays by many physicists.[3] While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed that the phosphorescent light, which was caused by radiation emitted from the cathode, appeared at the tube wall near the cathode, and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays.[26]:393 [27] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[28] He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[27] In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter in which the mean free path of the particles is so long that collisions may be ignored.[26]:394–395 The German-born British physicist Arthur Schuster expanded upon Crookes' experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components.
However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time.[27] In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.[29] While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[30] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[31] This evidence strengthened the view that electrons existed as components of atoms.[32][33] In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[5] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[5] He showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[5][34] The name electron was adopted for these particles by the scientific community, mainly due to the advocacy of G. F. FitzGerald, J. Larmor, and H. A. Lorentz.[35]:273 The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team,[5] using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[36] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[37] Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.[38] Atomic theory[edit] The Bohr model of the atom, showing states of the electron with energy quantized by the number n. An electron dropping to a lower orbit emits a photon equal to the energy difference between the orbits.
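The photon emission described in the caption can be made concrete with a small sketch using the standard Bohr formula for hydrogen, En = −13.6 eV / n² (an illustrative calculation; the function name level_energy is chosen here for convenience):

    # Bohr model of hydrogen: energy of level n, and the photon emitted
    # when the electron drops from n = 3 to n = 2 (the red Balmer line).
    PLANCK = 6.62607015e-34     # Planck constant, J*s
    LIGHT_SPEED = 2.99792458e8  # speed of light, m/s
    EV = 1.602176634e-19        # joules per electronvolt
    RYDBERG_EV = 13.605693      # hydrogen ground-state binding energy, eV

    def level_energy(n):
        return -RYDBERG_EV / n**2   # eV, negative because the electron is bound

    photon_ev = level_energy(3) - level_energy(2)          # ~1.89 eV
    wavelength = PLANCK * LIGHT_SPEED / (photon_ev * EV)   # ~6.6e-7 m, red light
    print(f"{photon_ev:.2f} eV, {wavelength * 1e9:.0f} nm")
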
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[39] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[40] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[39] Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[41] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[42] In 1919, the American chemist Irving Langmuir elaborated on the Lewis' static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[43] In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[42] which were known to largely repeat themselves according to the periodic law.[44] In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.[45] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[39][46] This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[47] Quantum mechanics[edit] In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light.[48] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[49] The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. 
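A rough worked example suggests the scale involved for electrons (a non-relativistic sketch using λ = h/p with p = √(2mE); the helper name de_broglie_wavelength is chosen here for illustration): an electron accelerated through about 100 volts has a de Broglie wavelength of roughly 1.2×10−10 m, of the order of interatomic spacings in crystals, while a proton with the same kinetic energy has a wavelength about 43 times shorter, which helps explain why electron diffraction is comparatively easy to observe.

    import math

    PLANCK = 6.62607015e-34          # Planck constant, J*s
    ELECTRON_MASS = 9.10938356e-31   # kg
    PROTON_MASS = 1.672621898e-27    # kg
    E_CHARGE = 1.602176634e-19       # C

    def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
        # Non-relativistic estimate: lambda = h / sqrt(2 * m * E)
        energy_j = kinetic_energy_ev * E_CHARGE
        return PLANCK / math.sqrt(2 * mass_kg * energy_j)

    print(de_broglie_wavelength(ELECTRON_MASS, 100))  # ~1.2e-10 m
    print(de_broglie_wavelength(PROTON_MASS, 100))    # ~2.9e-12 m, far shorter
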
In 1927, the interference effect was demonstrated by George Paget Thomson, who passed a beam of electrons through thin metal foils, and by the American physicists Clinton Davisson and Lester Germer, who reflected electrons from a crystal of nickel.[50] In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[51] Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[52] Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.[53] In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[54] In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[55] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947 Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron.
This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[56] Particle accelerators[edit] With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[57] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.[58] With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[59] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[60] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[61][62] Confinement of individual electrons[edit] Individual electrons can now be easily confined in ultra small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K).[63] The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor. A table with four rows and four columns, with each cell containing a particle identifier Standard Model of elementary particles. The electron (symbol e) is on the left. In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first-generation of fundamental particles.[64] The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2.[65] Fundamental properties[edit] The invariant mass of an electron is approximately 9.109×10−31 kilograms,[66] or 5.489×10−4 atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[10][67] Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.[68] Electrons have an electric charge of −1.602×10−19 coulombs,[66] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. 
This elementary charge has a relative standard uncertainty of 2.2×10−8.[66] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[69] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−, where the minus sign indicates the negative charge. The positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[65][66] The electron has an intrinsic angular momentum or spin of 1/2.[66] This property is usually stated by referring to the electron as a spin-1/2 particle.[65] For such particles the spin magnitude is √3/2 ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[66] It is approximately equal to one Bohr magneton,[70][note 4] which is a physical constant equal to 9.27400915(23)×10−24 joules per tesla.[66] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[71] The electron has no known substructure[1][72] and it is assumed to be a point particle with a point charge and no spatial extent.[9] The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[73] Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters.[74] The upper bound of the electron radius of 10−18 meters[75] can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10−15 m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[76][note 5] There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10−6 seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[77] The experimental lower bound for the electron's mean lifetime is 6.6×1028 years, at a 90% confidence level.[7][78][79] Quantum properties[edit] As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ).
When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.[80]:162–218 Example of an antisymmetric wave function for a quantum state of two identical fermions in a 1-dimensional box. If the particles swap position, the wave function inverts its sign. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.[80]:162–218 In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.[80]:162–218 Virtual particles[edit] In a simplified picture, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter.[81] The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[82] A schematic depiction of virtual electron–positron pairs appearing at random near an electron (at lower left) While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity.
As a result of this polarization, the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[83][84] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[85] Virtual particles cause a comparable shielding effect for the mass of the electron.[86] The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[70][87] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[88] The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[89] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][90] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[83] The Compton wavelength shows that near elementary particles such as the electron, the uncertainty in energy allows the creation of virtual particles; it sets the distance scale of this "static" of virtual particles around the particle. An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in the nonrelativistic approximation is determined by Coulomb's inverse square law.[91]:58–61 When an electron is in motion, it generates a magnetic field.[80]:140 The Ampère–Maxwell law relates the magnetic field to the bulk motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[92] The electromagnetic field of an arbitrarily moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).[91]:429–434 A particle with charge q (at left) is moving with velocity v through a magnetic field B that is oriented toward the viewer. For an electron, q is negative so it follows a curved trajectory toward the top. When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[80]:160[93][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. 
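The gyroradius mentioned above can be illustrated with a short sketch. This is not a calculation from the article: the non-relativistic formula r = me·v⊥/(e·B) is a standard result assumed here, and the speed and field strength are assumed example values.

```python
# Illustrative sketch (assumed example values, standard non-relativistic formula):
# gyroradius r = m_e * v_perp / (e * B) of an electron crossing a magnetic field.
M_E = 9.1093837e-31    # electron mass, kg
Q_E = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8       # speed of light, m/s

v_perp = 0.01 * C      # assumed speed perpendicular to the field (1% of c)
B = 0.01               # assumed magnetic flux density, tesla

r_gyro = M_E * v_perp / (Q_E * B)
print(f"Gyroradius: {r_gyro * 1e3:.2f} mm")  # ~1.70 mm
```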
The Abraham–Lorentz–Dirac force is caused by a back-reaction of the electron's own field upon itself.[94] Here, Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus. The energy change E2 − E1 determines the frequency f of the emitted photon. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force.[95] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[96] An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength.[97] For an electron, it has a value of 2.43×10−12 m.[66] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such an interaction between light and free electrons is called Thomson scattering or linear Thomson scattering.[98] The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10−3, which is approximately equal to 1/137.[66] When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[99][100] On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[101][102] In the theory of the electroweak interaction, the left-handed component of the electron's wave function forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and being converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. 
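Two of the numbers quoted above, the electron's Compton wavelength and the fine-structure constant, follow directly from fundamental constants. A minimal sketch, assuming approximate CODATA values for those constants rather than figures from the article's references:

```python
import math

# Numerical check of two quantities quoted above (approximate CODATA constants assumed).
H = 6.62607015e-34       # Planck constant, J*s
HBAR = H / (2 * math.pi)
C = 2.99792458e8         # speed of light, m/s
E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837e-31      # electron mass, kg

alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)  # fine-structure constant
lambda_c = H / (M_E * C)                        # Compton wavelength of the electron

print(f"alpha ~ {alpha:.6e} (about 1/{1 / alpha:.1f})")  # ~7.2974e-3, about 1/137.0
print(f"Compton wavelength ~ {lambda_c:.3e} m")          # ~2.43e-12 m
```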
In addition to the charged current interaction, both the electron and the electron neutrino can undergo a neutral current interaction via a Z0 boson exchange, and this is responsible for neutrino-electron elastic scattering.[103] Atoms and molecules[edit] Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability of finding the electron at a given position. An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus' electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exists around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in energy between the orbitals.[104]:159–160 Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[105] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[104]:127–132 The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of the orbital and spin magnetic moments of all electrons and of the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so-called paired electrons) cancel each other out.[106] The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[107] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[12] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[108] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. 
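As a worked illustration of orbital transitions, the sketch below uses the standard Bohr-model energy levels of hydrogen, an assumption made for illustration rather than a figure quoted by the article, to compute the photon emitted when the electron drops from the second level to the first.

```python
# Minimal sketch using standard Bohr-model numbers (assumed, approximate):
# the emitted photon carries the energy difference between the two levels.
RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84    # h*c expressed in eV*nm

def level_energy(n: int) -> float:
    """Bohr-model energy of hydrogen level n, in eV (negative means bound)."""
    return -RYDBERG_EV / n ** 2

delta_e = level_energy(2) - level_energy(1)  # energy released in the 2 -> 1 transition
print(f"Photon energy: {delta_e:.2f} eV, wavelength: {HC_EV_NM / delta_e:.1f} nm")
# ~10.20 eV and ~121.5 nm, the Lyman-alpha line of the hydrogen spectral series
```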
By contrast with bonded pairs, electrons in non-bonded pairs are distributed in a large volume around the nuclei.[109] A lightning discharge consists primarily of a flow of electrons.[110] The electric potential needed for lightning can be generated by a triboelectric effect.[111][112] If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in the nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.[113] Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass.[114] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[115] At a given temperature, each material has an electrical conductivity that determines the value of the electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation.[116] On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called a Fermi gas)[117] through the material much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed.[118] This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.[119] Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law,[117] which states that the ratio of the thermal conductivity to the electrical conductivity is proportional to the temperature. 
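In the free-electron picture, the proportionality constant in the Wiedemann–Franz law is the Lorenz number, which depends only on fundamental constants. This is a standard result assumed here for illustration, with approximate constant values:

```python
import math

# Minimal sketch of the Wiedemann-Franz relation kappa / (sigma * T) = L,
# with the Lorenz number L = (pi^2 / 3) * (k_B / e)^2 (constants assumed, approximate).
K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

lorenz = (math.pi ** 2 / 3) * (K_B / Q_E) ** 2
print(f"Lorenz number L ~ {lorenz:.3e} W*Ohm/K^2")  # ~2.44e-8, similar for many metals
```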
The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for the electric current.[120] When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[121] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[122] However, the mechanism by which higher temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons.[123][124] The first carries the spin and magnetic moment, the second carries the orbital location and the third the electrical charge. Motion and energy[edit] According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.[125] Lorentz factor as a function of velocity. It starts at value 1 and goes to infinity as v approaches c. The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is Ke = (γ − 1)mec², where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[126] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[48] For the 51 GeV electron above, the wavelength is about 2.4×10−17 m, small enough to explore structures well below the size of an atomic nucleus.[127] Pair production of an electron and positron, caused by the close approach of a photon with an atomic nucleus. The lightning symbol represents an exchange of a virtual photon, thus an electric force acts. The angle between the particles is very small.[128] The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[129] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. 
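The statement that million-electronvolt photons could produce electron-positron pairs can be made quantitative with a rough sketch. The 2.7·kB·T estimate of the mean blackbody photon energy is a standard approximation assumed here, not a figure from the article.

```python
# Rough sketch (assumed constants and a standard ~2.7*k_B*T mean blackbody photon energy):
# pair production needs photon energies above 2 * m_e * c^2.
K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K
M_E_C2_MEV = 0.511       # electron rest energy, MeV

T = 1.0e10                                       # temperature in the first millisecond, K
mean_photon_energy_mev = 2.7 * K_B_EV * T / 1e6  # typical photon energy at that temperature
threshold_mev = 2 * M_E_C2_MEV                   # energy needed to create an e+/e- pair

print(f"Mean photon energy ~ {mean_photon_energy_mev:.2f} MeV "
      f"vs pair threshold {threshold_mev:.3f} MeV")  # ~2.33 MeV vs 1.022 MeV
```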
Likewise, positron-electron pairs annihilated each other and emitted energetic photons: e+ + e− → γ + γ. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[130] For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[131][132] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[133] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and an electron in the process: n → p + e− + ν̄e. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[134] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[135] Roughly one million years after the Big Bang, the first generation of stars began to form.[135] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and an antineutrino from the nucleus.[136] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).[137] An extended air shower generated by an energetic cosmic ray striking the Earth's atmosphere At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[138] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might cause one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[139] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. 
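How weak this effect is for stellar-mass black holes can be gauged from the Hawking temperature. The formula used below, TH = ħc³/(8πGMkB), is the standard result assumed here for illustration; it is not quoted in the text above, and the constants are approximate.

```python
import math

# Illustrative sketch (standard Hawking temperature formula, assumed; constants approximate).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3/(kg*s^2)
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30        # solar mass, kg (approximate)

def hawking_temperature(mass_kg: float) -> float:
    """Black-body temperature of the Hawking radiation from a black hole of given mass."""
    return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * K_B)

print(f"T_H for a 20 solar-mass black hole: {hawking_temperature(20 * M_SUN):.1e} K")  # ~3e-9 K
```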
The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[140] Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×1020 eV have been recorded.[141] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[142] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion. A muon, in turn, can decay to form an electron or positron.[143] Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[144] Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillations, which are waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.[145] The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[146][147] In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[148] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[149] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[150] The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[153] Plasma applications[edit] Particle beams[edit] During a NASA wind tunnel test, a model of the Space Shuttle is targeted by a beam of electrons, simulating the effect of ionizing gases during re-entry.[154] Electron beams are used in welding.[155] They allow energy densities up to 107 W·cm−2 across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. 
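A back-of-the-envelope sketch converts the power density and focus diameter just quoted into total beam power; the specific spot diameters evaluated here are illustrative assumptions.

```python
import math

# Rough arithmetic from the figures quoted above; the chosen spot diameters are assumptions.
POWER_DENSITY_W_CM2 = 1.0e7  # upper power density quoted above, W/cm^2

def beam_power_kw(diameter_mm: float) -> float:
    """Total power over a circular focus of the given diameter, in kW."""
    radius_cm = (diameter_mm / 10.0) / 2.0
    return POWER_DENSITY_W_CM2 * math.pi * radius_cm ** 2 / 1e3

for d in (0.1, 1.0):
    print(f"{d} mm focus -> ~{beam_power_kw(d):.1f} kW")  # ~0.8 kW and ~78.5 kW
```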
Electron-beam welding must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[156][157] Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[158] This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[159] Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products.[160] Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant increase in temperature: for example, intensive electron irradiation causes the viscosity to drop by many orders of magnitude and its activation energy to decrease stepwise.[161] Linear particle accelerators generate electron beams for the treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[162][163] Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect.[note 8] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.[164] Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[165] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[166][167] The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy, as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[168] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[169] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. 
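The wavelength limit just mentioned can be estimated directly. The sketch below applies the relativistic de Broglie relation λ = h/p with p² = 2·me·eV·(1 + eV/(2·me·c²)) for an electron accelerated through a potential V; the constants are approximate CODATA values assumed here.

```python
import math

# Numerical check (approximate CODATA constants assumed): relativistic de Broglie
# wavelength of an electron accelerated through a given potential difference.
H = 6.62607015e-34     # Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
C = 2.99792458e8       # speed of light, m/s
Q_E = 1.602176634e-19  # elementary charge, C

def de_broglie_wavelength(volts: float) -> float:
    kinetic = Q_E * volts  # kinetic energy gained by the electron, J
    momentum = math.sqrt(2 * M_E * kinetic * (1 + kinetic / (2 * M_E * C ** 2)))
    return H / momentum

print(f"Wavelength at 100 kV: {de_broglie_wavelength(1.0e5) * 1e9:.4f} nm")  # ~0.0037 nm
```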
For electrons accelerated across a 100,000-volt potential, for example, this wavelength is equal to 0.0037 nm.[170] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[171] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[172][173][174] Other applications[edit] In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.[175] Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[176] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[177] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[178] See also[edit] Notes[edit] 1. ^ The fractional version's denominator is the inverse of the decimal value (along with its relative standard uncertainty of 4.2×10−13 u). 2. ^ The electron's charge is the negative of the elementary charge, which has a positive value for the proton. 3. ^ This magnitude is obtained from the spin quantum number as S = √(s(s + 1))·ħ for quantum number s = 1/2. See: Gupta, M.C. (2001). Atomic and Molecular Spectroscopy. New Age Publishers. p. 81. ISBN 978-81-224-1300-7. 4. ^ Bohr magneton: μB = eħ/(2me). 5. ^ The classical electron radius is derived as follows. Assume that the electron's charge is spread uniformly throughout a spherical volume. Since one part of the sphere would repel the other parts, the sphere contains electrostatic potential energy. This energy is assumed to equal the electron's rest energy, defined by special relativity (E = mc2). From electrostatics theory, the potential energy of a sphere with radius r and charge e is, ignoring numerical factors of order unity, Ep = e²/(4πε0r), where ε0 is the vacuum permittivity. For an electron with rest mass m0, the rest energy is equal to E = m0c², where c is the speed of light in a vacuum. Setting them equal and solving for r gives the classical electron radius. See: Haken, H.; Wolf, H.C.; Brewer, W.D. (2005). 
The Physics of Atoms and Quanta: Introduction to Experiments and Theory. Springer. p. 70. ISBN 978-3-540-67274-6. 6. ^ Radiation from non-relativistic electrons is sometimes termed cyclotron radiation. 7. ^ The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows, where c is the speed of light in a vacuum and me is the electron mass. See Zombeck (2007: 393, 396). 8. ^ The polarization of an electron beam means that the spins of all electrons point into one direction. In other words, the projections of the spins of all electrons onto their momentum vector have the same sign. 1. ^ a b c Eichten, E.J.; Peskin, M.E.; Peskin, M. (1983). "New Tests for Quark and Lepton Substructure". Physical Review Letters. 50 (11): 811–814. Bibcode:1983PhRvL..50..811E. doi:10.1103/PhysRevLett.50.811. 2. ^ a b Farrar, W.V. (1969). "Richard Laming and the Coal-Gas Industry, with His Views on the Structure of Matter". Annals of Science. 25 (3): 243–254. doi:10.1080/00033796900200141. 3. ^ a b c d Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities. University of Chicago Press. pp. 70–74, 96. ISBN 978-0-226-02421-9. 4. ^ Buchwald, J.Z.; Warwick, A. (2001). Histories of the Electron: The Birth of Microphysics. MIT Press. pp. 195–203. ISBN 978-0-262-52424-7. 5. ^ a b c d e f Thomson, J.J. (1897). "Cathode Rays". Philosophical Magazine. 44 (269): 293–316. doi:10.1080/14786449708621070. 6. ^ a b c d e P.J. Mohr, B.N. Taylor, and D.B. Newell, "The 2014 CODATA Recommended Values of the Fundamental Physical Constants". This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: [1]. National Institute of Standards and Technology, Gaithersburg, MD 20899. 7. ^ a b Agostini, M.; et al. (Borexino Collaboration) (2015). "Test of Electric Charge Conservation with Borexino". Physical Review Letters. 115 (23): 231802. arXiv:1509.01223. Bibcode:2015PhRvL.115w1802A. doi:10.1103/PhysRevLett.115.231802. PMID 26684111. 8. ^ Coff, Jerry (2010-09-10). "What Is An Electron". Retrieved 10 September 2010. 9. ^ a b c d Curtis, L.J. (2003). Atomic Structure and Lifetimes: A Conceptual Approach. Cambridge University Press. p. 74. ISBN 978-0-521-53635-6. 10. ^ a b "CODATA value: proton-electron mass ratio". 2006 CODATA recommended values. National Institute of Standards and Technology. Retrieved 2009-07-18. 11. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 236–237. ISBN 978-0-691-13512-0. 12. ^ a b Pauling, L.C. (1960). The Nature of the Chemical Bond and the Structure of Molecules and Crystals: an introduction to modern structural chemistry (3rd ed.). Cornell University Press. pp. 4–10. ISBN 978-0-8014-0333-0. 13. ^ Shipley, J.T. (1945). Dictionary of Word Origins. The Philosophical Library. p. 133. ISBN 978-0-88029-751-6. 14. ^ a b Benjamin, Park (1898), A history of electricity (The intellectual rise in electricity) from antiquity to the days of Benjamin Franklin, New York: J. Wiley, pp. 315, 484–5, ISBN 978-1313106054 15. ^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. pp. 19–20. ISBN 978-0-7803-1193-0. 16. ^ Cajori, Florian (1917). A History of Physics in Its Elementary Branches: Including the Evolution of Physical Laboratories. Macmillan. 17. ^ "Benjamin Franklin (1706–1790)". Eric Weisstein's World of Biography. Wolfram Research. Retrieved 2010-12-16. 18. ^ Myers, R.L. (2006). 
The Basics of Physics. Greenwood Publishing Group. p. 242. ISBN 978-0-313-32857-2. 19. ^ Barrow, J.D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society. 24: 24–26. Bibcode:1983QJRAS..24...24B. 20. ^ Okamura, Sōgo (1994). History of Electron Tubes. IOS Press. p. 11. ISBN 978-90-5199-145-1. Retrieved 29 May 2015. In 1881, Stoney named this electromagnetic 'electrolion'. It came to be called 'electron' from 1891. [...] In 1906, the suggestion to call cathode ray particles 'electrions' was brought up but through the opinion of Lorentz of Holland 'electrons' came to be widely used. 21. ^ Stoney, G.J. (1894). "Of the "Electron," or Atom of Electricity" (PDF). Philosophical Magazine. 38 (5): 418–420. doi:10.1080/14786449408620653. 22. ^ "electron, n.2". OED Online. March 2013. Oxford University Press. Accessed 12 April 2013 [2] 23. ^ Soukhanov, A.H., ed. (1986). Word Mysteries & Histories. Houghton Mifflin. p. 73. ISBN 978-0-395-40265-8. 24. ^ Guralnik, D.B., ed. (1970). Webster's New World Dictionary. Prentice Hall. p. 450. 25. ^ Born, M.; Blin-Stoyle, R.J.; Radcliffe, J.M. (1989). Atomic Physics. Courier Dover. p. 26. ISBN 978-0-486-65984-8. 26. ^ a b Whittaker, E. T. (1951), A history of the theories of aether and electricity. Vol 1, Nelson, London 27. ^ a b c Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover. pp. 221–222. ISBN 978-0-486-61053-5. 28. ^ DeKosky, R.K. (1983). "William Crookes and the quest for absolute vacuum in the 1870s". Annals of Science. 40 (1): 1–18. doi:10.1080/00033798300200101. 29. ^ Frank Wilczek: "Happy Birthday, Electron" Scientific American, June 2012. 30. ^ Trenn, T.J. (1976). "Rutherford on the Alpha-Beta-Gamma Classification of Radioactive Rays". Isis. 67 (1): 61–75. doi:10.1086/351545. JSTOR 231134. 31. ^ Becquerel, H. (1900). "Déviation du Rayonnement du Radium dans un Champ Électrique". Comptes rendus de l'Académie des sciences (in French). 130: 809–815. 32. ^ Buchwald and Warwick (2001:90–91). 33. ^ Myers, W.G. (1976). "Becquerel's Discovery of Radioactivity in 1896". Journal of Nuclear Medicine. 17 (7): 579–582. PMID 775027. 34. ^ Thomson, J.J. (1906). "Nobel Lecture: Carriers of Negative Electricity" (PDF). The Nobel Foundation. Archived from the original (PDF) on 2008-10-10. Retrieved 2008-08-25. Cite uses deprecated parameter |dead-url= (help) 35. ^ O'Hara, J. G. (Mar 1975). "George Johnstone Stoney, F.R.S., and the Concept of the Electron". Notes and Records of the Royal Society of London. Royal Society. 29 (2): 265–276. doi:10.1098/rsnr.1975.0018. JSTOR 531468. 36. ^ Kikoin, I.K.; Sominskiĭ, I.S. (1961). "Abram Fedorovich Ioffe (on his eightieth birthday)". Soviet Physics Uspekhi. 3 (5): 798–809. Bibcode:1961SvPhU...3..798K. doi:10.1070/PU1961v003n05ABEH005812. Original publication in Russian: Кикоин, И.К.; Соминский, М.С. (1960). "Академик А.Ф. Иоффе". Успехи Физических Наук. 72 (10): 303–321. doi:10.3367/UFNr.0072.196010e.0307. 37. ^ Millikan, R.A. (1911). "The Isolation of an Ion, a Precision Measurement of its Charge, and the Correction of Stokes' Law" (PDF). Physical Review. 32 (2): 349–397. Bibcode:1911PhRvI..32..349M. doi:10.1103/PhysRevSeriesI.32.349. 38. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). "A Report on the Wilson Cloud Chamber and Its Applications in Physics". Reviews of Modern Physics. 18 (2): 225–290. Bibcode:1946RvMP...18..225G. doi:10.1103/RevModPhys.18.225. 39. ^ a b c Smirnov, B.M. (2003). Physics of Atoms and Ions. Springer. pp. 14–21. ISBN 978-0-387-95550-6. 40. 
^ Bohr, N. (1922). "Nobel Lecture: The Structure of the Atom" (PDF). The Nobel Foundation. Retrieved 2008-12-03. 41. ^ Lewis, G.N. (1916). "The Atom and the Molecule" (PDF). Journal of the American Chemical Society. 38 (4): 762–786. doi:10.1021/ja02261a002. 42. ^ a b Arabatzis, T.; Gavroglu, K. (1997). "The chemists' electron". European Journal of Physics. 18 (3): 150–163. Bibcode:1997EJPh...18..150A. doi:10.1088/0143-0807/18/3/005. 43. ^ Langmuir, I. (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868–934. doi:10.1021/ja02227a002. 44. ^ Scerri, E.R. (2007). The Periodic Table. Oxford University Press. pp. 205–226. ISBN 978-0-19-530573-9. 45. ^ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8. ISBN 978-0-521-83911-2. 46. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). "Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons". Die Naturwissenschaften (in German). 13 (47): 953–954. Bibcode:1925NW.....13..953E. doi:10.1007/BF01558878. 47. ^ Pauli, W. (1923). "Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes". Zeitschrift für Physik (in German). 16 (1): 155–164. Bibcode:1923ZPhy...16..155P. doi:10.1007/BF01327386. 48. ^ a b de Broglie, L. (1929). "Nobel Lecture: The Wave Nature of the Electron" (PDF). The Nobel Foundation. Retrieved 2008-08-30. 49. ^ Falkenburg, B. (2007). Particle Metaphysics: A Critical Account of Subatomic Reality. Springer. p. 85. Bibcode:2007pmca.book.....F. ISBN 978-3-540-33731-7. 50. ^ Davisson, C. (1937). "Nobel Lecture: The Discovery of Electron Waves" (PDF). The Nobel Foundation. Retrieved 2008-08-30. 51. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem". Annalen der Physik (in German). 385 (13): 437–490. Bibcode:1926AnP...385..437S. doi:10.1002/andp.19263851302. 52. ^ Rigden, J.S. (2003). Hydrogen. Harvard University Press. pp. 59–86. ISBN 978-0-674-01252-3. 53. ^ Reed, B.C. (2007). Quantum Mechanics. Jones & Bartlett Publishers. pp. 275–350. ISBN 978-0-7637-4451-9. 54. ^ Dirac, P.A.M. (1928). "The Quantum Theory of the Electron" (PDF). Proceedings of the Royal Society A. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023. 55. ^ Dirac, P.A.M. (1933). "Nobel Lecture: Theory of Electrons and Positrons" (PDF). The Nobel Foundation. Retrieved 2008-11-01. 56. ^ "The Nobel Prize in Physics 1965". The Nobel Foundation. Retrieved 2008-11-04. 57. ^ Panofsky, W.K.H. (1997). "The Evolution of Particle Accelerators & Colliders" (PDF). Beam Line. 27 (1): 36–44. Retrieved 2008-09-15. 58. ^ Elder, F.R.; et al. (1947). "Radiation from Electrons in a Synchrotron". Physical Review. 71 (11): 829–830. Bibcode:1947PhRv...71..829E. doi:10.1103/PhysRev.71.829.5. 59. ^ Hoddeson, L.; et al. (1997). The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge University Press. pp. 25–26. ISBN 978-0-521-57816-5. 60. ^ Bernardini, C. (2004). "AdA: The First Electron–Positron Collider". Physics in Perspective. 6 (2): 156–183. Bibcode:2004PhP.....6..156B. doi:10.1007/s00016-003-0202-y. 61. ^ "Testing the Standard Model: The LEP experiments". CERN. 2008. Retrieved 2008-09-15. 62. ^ "LEP reaps a final harvest". CERN Courier. 40 (10). 2000. 63. 
^ Prati, E.; De Michielis, M.; Belli, M.; Cocco, S.; Fanciulli, M.; Kotekar-Patil, D.; Ruoff, M.; Kern, D.P.; Wharam, D.A.; Verduijn, J.; Tettamanzi, G.C.; Rogge, S.; Roche, B.; Wacquez, R.; Jehl, X.; Vinet, M.; Sanquer, M. (2012). "Few electron limit of n-type metal oxide semiconductor single electron transistors". Nanotechnology. 23 (21): 215204. arXiv:1203.4811. Bibcode:2012Nanot..23u5204P. CiteSeerX doi:10.1088/0957-4484/23/21/215204. PMID 22552118. 64. ^ Frampton, P.H.; Hung, P.Q.; Sher, Marc (2000). "Quarks and Leptons Beyond the Third Generation". Physics Reports. 330 (5–6): 263–348. arXiv:hep-ph/9903387. Bibcode:2000PhR...330..263F. doi:10.1016/S0370-1573(99)00095-2. 65. ^ a b c Raith, W.; Mulvey, T. (2001). Constituents of Matter: Atoms, Molecules, Nuclei and Particles. CRC Press. pp. 777–781. ISBN 978-0-8493-1202-1. 66. ^ a b c d e f g h i The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2008). "CODATA recommended values of the fundamental physical constants". Reviews of Modern Physics. 80 (2): 633–730. arXiv:0801.0028. Bibcode:2008RvMP...80..633M. CiteSeerX doi:10.1103/RevModPhys.80.633. Individual physical constants from the CODATA are available at: "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 2009-01-15. 67. ^ Zombeck, M.V. (2007). Handbook of Space Astronomy and Astrophysics (3rd ed.). Cambridge University Press. p. 14. ISBN 978-0-521-78242-5. 68. ^ Murphy, M.T.; et al. (2008). "Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe". Science. 320 (5883): 1611–1613. arXiv:0806.3081. Bibcode:2008Sci...320.1611M. doi:10.1126/science.1156352. PMID 18566280. 69. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). "Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron". Physical Review. 129 (6): 2566–2576. Bibcode:1963PhRv..129.2566Z. doi:10.1103/PhysRev.129.2566. 70. ^ a b Odom, B.; et al. (2006). "New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron". Physical Review Letters. 97 (3): 030801. Bibcode:2006PhRvL..97c0801O. doi:10.1103/PhysRevLett.97.030801. PMID 16907490. 71. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 261–262. ISBN 978-0-691-13512-0. 72. ^ Gabrielse, G.; et al. (2006). "New Determination of the Fine Structure Constant from the Electron g Value and QED". Physical Review Letters. 97 (3): 030802(1–4). Bibcode:2006PhRvL..97c0802G. doi:10.1103/PhysRevLett.97.030802. PMID 16907491. 73. ^ Eduard Shpolsky, Atomic physics (Atomnaia fizika), second edition, 1951 74. ^ Dehmelt, H. (1988). "A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius". Physica Scripta. T22: 102–110. Bibcode:1988PhST...22..102D. doi:10.1088/0031-8949/1988/T22/016. 75. ^ Gerald Gabrielse webpage at Harvard University 76. ^ Meschede, D. (2004). Optics, light and lasers: The Practical Approach to Modern Aspects of Photonics and Laser Physics. Wiley-VCH. p. 168. ISBN 978-3-527-40364-6. 77. ^ Steinberg, R.I.; et al. (1999). "Experimental test of charge conservation and the stability of the electron". Physical Review D. 61 (2): 2582–2586. Bibcode:1975PhRvD..12.2582S. doi:10.1103/PhysRevD.12.2582. 78. ^ J. Beringer (Particle Data Group); et al. (2012). "Review of Particle Physics: [electron properties]" (PDF). Physical Review D. 86 (1): 010001. 
Bibcode:2012PhRvD..86a0001B. doi:10.1103/PhysRevD.86.010001. 79. ^ Back, H.O.; et al. (2002). "Search for electron decay mode e → γ + ν with prototype of Borexino detector". Physics Letters B. 525 (1–2): 29–40. Bibcode:2002PhLB..525...29B. doi:10.1016/S0370-2693(01)01440-X. 80. ^ a b c d e Munowitz, M. (2005). Knowing, The Nature of Physical Law. Oxford University Press. ISBN 978-0-19-516737-5. 81. ^ Kane, G. (October 9, 2006). "Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics?". Scientific American. Retrieved 2008-09-19. 82. ^ Taylor, J. (1989). "Gauge Theories in Particle Physics". In Davies, Paul (ed.). The New Physics. Cambridge University Press. p. 464. ISBN 978-0-521-43831-5. 83. ^ a b Genz, H. (2001). Nothingness: The Science of Empty Space. Da Capo Press. pp. 241–243, 245–247. ISBN 978-0-7382-0610-3. 84. ^ Gribbin, J. (January 25, 1997). "More to electrons than meets the eye". New Scientist. Retrieved 2008-09-17. 85. ^ Levine, I.; et al. (1997). "Measurement of the Electromagnetic Coupling at Large Momentum Transfer". Physical Review Letters. 78 (3): 424–427. Bibcode:1997PhRvL..78..424L. doi:10.1103/PhysRevLett.78.424. 86. ^ Murayama, H. (March 10–17, 2006). Supersymmetry Breaking Made Easy, Viable and Generic. Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories. La Thuile, Italy. arXiv:0709.3041. Bibcode:2007arXiv0709.3041M.—lists a 9% mass difference for an electron that is the size of the Planck distance. 87. ^ Schwinger, J. (1948). "On Quantum-Electrodynamics and the Magnetic Moment of the Electron". Physical Review. 73 (4): 416–417. Bibcode:1948PhRv...73..416S. doi:10.1103/PhysRev.73.416. 88. ^ Huang, K. (2007). Fundamental Forces of Nature: The Story of Gauge Fields. World Scientific. pp. 123–125. ISBN 978-981-270-645-4. 89. ^ Foldy, L.L.; Wouthuysen, S. (1950). "On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit". Physical Review. 78 (1): 29–36. Bibcode:1950PhRv...78...29F. doi:10.1103/PhysRev.78.29. 90. ^ Sidharth, B.G. (2009). "Revisiting Zitterbewegung". International Journal of Theoretical Physics. 48 (2): 497–506. arXiv:0806.0985. Bibcode:2009IJTP...48..497S. doi:10.1007/s10773-008-9825-8. 91. ^ a b Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0. 92. ^ Crowell, B. (2000). Electricity and Magnetism. Light and Matter. pp. 129–152. ISBN 978-0-9704670-4-1. 93. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). "Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field". The Astrophysical Journal. 465: 327–337. arXiv:astro-ph/9601073. Bibcode:1996ApJ...465..327M. doi:10.1086/177422. 94. ^ Rohrlich, F. (1999). "The Self-Force and Radiation Reaction". American Journal of Physics. 68 (12): 1109–1112. Bibcode:2000AmJPh..68.1109R. doi:10.1119/1.1286430. 95. ^ Georgi, H. (1989). "Grand Unified Theories". In Davies, Paul (ed.). The New Physics. Cambridge University Press. p. 427. ISBN 978-0-521-43831-5. 96. ^ Blumenthal, G.J.; Gould, R. (1970). "Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases". Reviews of Modern Physics. 42 (2): 237–270. Bibcode:1970RvMP...42..237B. doi:10.1103/RevModPhys.42.237. 97. ^ Staff (2008). "The Nobel Prize in Physics 1927". The Nobel Foundation. Retrieved 2008-09-28. 98. ^ Chen, S.-Y.; Maksimchuk, A.; Umstadter, D. (1998). 
"Experimental observation of relativistic nonlinear Thomson scattering". Nature. 396 (6712): 653–655. arXiv:physics/9810036. Bibcode:1998Natur.396..653C. doi:10.1038/25303. 99. ^ Beringer, R.; Montgomery, C.G. (1942). "The Angular Distribution of Positron Annihilation Radiation". Physical Review. 61 (5–6): 222–224. Bibcode:1942PhRv...61..222B. doi:10.1103/PhysRev.61.222. 100. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 978-0-13-082444-8. 101. ^ Eichler, J. (2005). "Electron–positron pair production in relativistic ion–atom collisions". Physics Letters A. 347 (1–3): 67–72. Bibcode:2005PhLA..347...67E. doi:10.1016/j.physleta.2005.06.105. 102. ^ Hubbell, J.H. (2006). "Electron positron pair production by photons: A historical overview". Radiation Physics and Chemistry. 75 (6): 614–623. Bibcode:2006RaPC...75..614H. doi:10.1016/j.radphyschem.2005.10.008. 103. ^ Quigg, C. (June 4–30, 2000). The Electroweak Theory. TASI 2000: Flavor Physics for the Millennium. Boulder, Colorado. p. 80. arXiv:hep-ph/0204104. Bibcode:2002hep.ph....4104Q. 104. ^ a b Tipler, Paul; Llewellyn, Ralph (2003), Modern Physics (illustrated ed.), Macmillan, ISBN 9780716743453 105. ^ Burhop, E.H.S. (1952). The Auger Effect and Other Radiationless Transitions. Cambridge University Press. pp. 2–3. ISBN 978-0-88275-966-1. 106. ^ Jiles, D. (1998). Introduction to Magnetism and Magnetic Materials. CRC Press. pp. 280–287. ISBN 978-0-412-79860-3. 107. ^ Löwdin, P.O.; Erkki Brändas, E.; Kryachko, E.S. (2003). Fundamental World of Quantum Chemistry: A Tribute to the Memory of Per- Olov Löwdin. Springer. pp. 393–394. ISBN 978-1-4020-1290-7. 108. ^ McQuarrie, D.A.; Simon, J.D. (1997). Physical Chemistry: A Molecular Approach. University Science Books. pp. 325–361. ISBN 978-0-935702-99-6. 109. ^ Daudel, R.; et al. (1974). "The Electron Pair in Chemistry". Canadian Journal of Chemistry. 52 (8): 1310–1320. doi:10.1139/v74-201. 110. ^ Rakov, V.A.; Uman, M.A. (2007). Lightning: Physics and Effects. Cambridge University Press. p. 4. ISBN 978-0-521-03541-5. 111. ^ Freeman, G.R.; March, N.H. (1999). "Triboelectricity and some associated phenomena". Materials Science and Technology. 15 (12): 1454–1458. doi:10.1179/026708399101505464. 112. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). "Methodology for studying particle–particle triboelectrification in granular materials". Journal of Electrostatics. 67 (2–3): 178–183. doi:10.1016/j.elstat.2008.12.002. 113. ^ Weinberg, S. (2003). The Discovery of Subatomic Particles. Cambridge University Press. pp. 15–16. ISBN 978-0-521-82351-7. 114. ^ Lou, L.-F. (2003). Introduction to phonons and electrons. World Scientific. pp. 162, 164. Bibcode:2003ipe..book.....L. ISBN 978-981-238-461-4. 115. ^ Guru, B.S.; Hızıroğlu, H.R. (2004). Electromagnetic Field Theory. Cambridge University Press. pp. 138, 276. ISBN 978-0-521-83016-4. 116. ^ Achuthan, M.K.; Bhat, K.N. (2007). Fundamentals of Semiconductor Devices. Tata McGraw-Hill. pp. 49–67. ISBN 978-0-07-061220-4. 117. ^ a b Ziman, J.M. (2001). Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press. p. 260. ISBN 978-0-19-850779-6. 118. ^ Main, P. (June 12, 1993). "When electrons go with the flow: Remove the obstacles that create electrical resistance, and you get ballistic electrons and a quantum surprise". New Scientist. 1887: 30. Retrieved 2008-10-09. 119. ^ Blackwell, G.R. (2000). The Electronic Packaging Handbook. CRC Press. pp. 6.39–6.40. ISBN 978-0-8493-8591-9. 120. ^ Durrant, A. (2000). 
SH1014 Modern Physics, 4.0 credits (Swedish title: Modern fysik)

Course contents: The experimental foundations of modern physics: elementary relativity theory. The Michelson–Morley experiment. Einstein's theory of special relativity. Length contraction. Time dilation. Elementary quantum physics. Planck's radiation law. X-ray radiation and spectra. Rutherford's atomic model. Atomic structure. Bohr's atomic model. Atomic energy levels. Nuclear structure. Radioactive decay. Matter waves. Wave packets and the Heisenberg uncertainty principle. Wave–particle duality. Quantum mechanics: the foundations of quantum mechanics. The Schrödinger equation applied to simple potentials. Interpretation of wave functions. Plane wave solutions. The harmonic oscillator. Angular momentum and spin. The hydrogen atom and the periodic table. The Pauli principle. Applications to physical phenomena: the photoelectric effect, the Compton effect, X-ray diffraction, particle diffraction. Applications within science and technology, including tunneling, the scanning tunneling microscope, the Stern–Gerlach experiment, the atomic nucleus, the helium atom, simple molecules, and solid matter. The building blocks of matter: particles and their interactions.

Intended learning outcomes: After completing this course a student should be able to: • Explain the scientific basis of modern physics, as defined by the course syllabus. • Set up and perform relativistic calculations for simple cases and quantum mechanical calculations on simple systems. • Apply quantum mechanical principles to scientific and technical applications.

Specific prerequisites: Courses in physics (or equivalent): SI1121, SK1104; courses in mathematics (or equivalent): SF1672, SF1673, SF1674, SF1922; courses in mechanics (or equivalent): SG1112.

Literature: Modern Physics, Randy Harris. Pearson / Addison-Wesley.

Examination: TEN1 - Exam, 4.0 credits. Grading scale: A, B, C, D, E, FX, F. Other requirement for final grade: pass the written exam, 4.0 credits.

Contact: Torbjörn Bäck. Offered by: SCI/Undergraduate Physics. Education cycle: First cycle. Course web: SH1014.
You Can Solve Quantum Mechanics' Classic Particle in a Box Problem With Code

Humans have problems with quantum mechanics. We have excellent intuition about the motion of a tennis ball tossed in the air, but what about an electron trapped in a box? The tendency is to use the same tennis ball rules and apply them to the electron—but it doesn't work. We have to use different models to explain the properties of very, very small things. We call this quantum mechanics (as opposed to classical mechanics). Of course I can't go over all the details of quantum mechanics, so let me give you the abridged version. • When humans started studying small things, they noticed that classical models didn't work. • In particular, it seemed clear that electron "orbits" in the hydrogen atom could only exist at certain energies. • In another experiment it was found that shooting electrons through slits would cause interference patterns similar to wave interference. • Why not just say that particles have a wave nature? Why not just adopt some type of wave equation for particles? OK, it didn't happen exactly like that—but you get the idea. From these ideas we get the Schrödinger equation. It's pretty complicated, so let me start with the simplified version of this equation—the Time Independent Schrödinger Equation (TISE). This is probably what you will see when you first start off in quantum mechanics.

-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi

What the heck is this? Let me make some comments. • This is the one dimensional TISE in which the variable is just x. • ψ is called the wave function. It's the part that's "waving" when we consider the wave properties of matter. But just like the intensity of light is proportional to the square of the electric field, the observable stuff depends on |ψ|². • |ψ|² gives you the probability density—this lets you determine the likely locations of the particle. In quantum mechanics, we don't deal with equations of motion, we deal with probability. • E is the energy of the particle and V is the potential energy of the system. Yes, it's weird that we use V for potential energy instead of U (normally V is the electric potential). I have no idea why. • ℏ is a constant and m is the mass of the particle. That's your quick intro to quantum mechanics. Particle in a Box The very first problem you will solve in quantum mechanics is a particle in a box. Suppose there is a one dimensional box with super stiff walls. There is a ball in that box that can bounce back and forth with no energy loss. How do you use the TISE to find the wave function for this case? Well, we already have a differential equation—but a solution for ψ must also satisfy the following conditions. • It must be continuous. • ψ must be zero in regions where the particle cannot be found (like outside the box). • As x goes to +/- infinity, ψ must go to zero. • ψ must be normalizable. This means that you have to be able to integrate |ψ|² over all of x and set it equal to one. This is the same as saying the probability of finding the particle somewhere is 1 out of 1. We just need one more thing—an expression for the potential energy. If we have a box with walls at x = 0 and x = L, then the potential will be zero from x = 0 to L and infinite everywhere else. Now it's just a matter of solving the differential equation and applying the conditions above. I'm going to skip this part because I really want to get to a numerical solution. You should work through this yourself; it's also probably in your physics textbook.
However, this is the solution for the wave function inside the box (outside it's zero).

\psi_n(x) = A \sin\!\left(\frac{n\pi x}{L}\right), \qquad E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}

For the wave function, A is just some constant (you can find this through normalization) and n is an integer. Since you have a sinusoidal solution for the wave function with a boundary condition, multiple values of n will work. With integer values of n, we also get the quantized energy values as shown above. In 1926 Schrödinger published the papers describing these ideas. He called his theory "wave mechanics"—not quantum mechanics. This is important, as it is the wave nature of matter that leads to quantized energies. I just like to point that out.

Numerical Solution for a Particle in a Box

The analytic solution to the infinite square well is nothing new. You can find this in every intro quantum mechanics textbook. But how can we solve it numerically? Just to be clear, by "numerically" I mean the process of breaking a problem into many smaller problems (you can see many examples here). The Schrödinger equation is a differential equation. It makes a connection between the second derivative of ψ (d²ψ/dx²) and ψ. We can use the same idea we use to find the updated position of a particle acted on by a force to find the numerical values of the wave function. Recall this basic strategy from a numerical calculation in classical mechanics (where p is the momentum):

p_2 = p_1 + F\,\Delta t, \qquad x_2 = x_1 + \frac{p_2}{m}\,\Delta t

I can do the same thing with the wave function. Let me use dot-notation to represent the derivative (ψ-dot is the first derivative and ψ-double dot is the second). This gives the following approximation (technically, I am using the dot-notation incorrectly but let's just do it anyway).

\ddot{\psi} = \frac{2m}{\hbar^2}\,(V - E)\,\psi, \qquad \dot{\psi}_2 = \dot{\psi}_1 + \ddot{\psi}\,\Delta x, \qquad \psi_2 = \psi_1 + \dot{\psi}_2\,\Delta x

Notice that I am using the updated ψ-dot (ψ-dot₂) to calculate the new value of ψ. This is of course wrong in that I should actually use the average ψ-dot. However, this will still work if Δx is small. Now I have the following recipe: • Start at x = 0 m and ψ = 0 (I also need a starting value for ψ-dot; it can't be zero or ψ would just stay zero everywhere, so I will arbitrarily pick 1). • Use Schrödinger's equation to calculate ψ-double dot. • Use ψ-double dot to calculate ψ-dot. • Use ψ-dot to calculate ψ. • Update the x-position and do it again until you get to the end of the box. But wait! What about the energy (E) in the Schrödinger equation? For now, just guess a value for the energy. Let's start with E = 0. Just push the "play" button to run the code (a sketch of this first program is included at the end of this post). Clearly an energy of zero doesn't work since the wave function isn't zero at x = L. Maybe we should try a different energy. Just pick something (I suggest a value from 1 - 10). Go ahead and change the code (click the pencil icon to edit if you don't see the code) and then run it again. You can see that as you increase the energy above 0, the wave function gets closer and closer to zero at x = L. Of course you could just keep guessing until you find a solution, or we could make the computer do it. Here is a different program that finds the energy for which ψ = 0 at x = L. It's called the "shooting method" since we calculate ψ again and again until we get the right answer. It's sort of like shooting a ball at a target and changing the launch angle and reshooting until we hit the target. Let's look at the code (a sketch is also included at the end of this post) and then I will point out some parts of it. A few notes on different lines in the code (by line number): • 11. I start off with E = 0, but I am going to increase this value. dE is the amount I increase in each re-run. You could change this to a smaller value if you want a more accurate solution. • 12. I am making a boolean variable called "searching".
When I find a value for the energy that works, I will no longer be searching, so searching will be set to False. • 13. This is the loop that searches for the best energy. • 14. rate(1000) says to not do more than 1000 loops per second so that you can see how the thing changes. • 15 - 18. I need to reset the variables before solving the differential equation. f1.delete() clears the previous graph. • 19. This is the same loop as in the previous program, in which the Schrödinger equation is solved. • 29. Here I check if the final value of ψ is close to zero. That's it. When you run this you can see that an energy of 4.92 gives a fairly nice solution to this infinite square well. But wait! There's more! Go back to the code and change the starting energy (in line 10) to 5 instead of zero. Now the shooting method won't find the energy level that it did before since it only searches "up." Go ahead and run the code with this new starting energy and you can find the next energy level that works.

Normalization of the Wave Function

I said we were finished, but clearly that was a lie. I don't have the correct wave function. There is one more condition that must be satisfied. Since the square of ψ (technically, ψ times its complex conjugate) is the probability density, the integral of this over all space must be 1. In other words, the total probability of finding the particle anywhere must be 1. OK, let's start off by comparing our numerical wave function to the analytical solution. If you normalize the analytical function, you will find the following solution.

\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right)

Instead of just plotting the probability density for both solutions, I will also integrate to find the total probability. Since I am dealing with a numerical solution, the integration is fairly simple. I have the values of |ψ|² at different values of x. The next value is just at x + dx. So, imagine a tiny rectangle with height |ψ|² and width dx, so that its area is |ψ|²·dx. All I need to do is go through each value of |ψ|², calculate the area of this tiny rectangle, and add it to the total area. Here is what I get. Notice that the red curve shows the probability density for the textbook solution (the analytical solution) and the blue is the numerical. The numerical solution has a total probability of 0.0508895. Clearly we need to change the numerical solution so that the total probability is 1. There are a couple of ways to fix this—let's try changing the initial derivative of ψ. Recall that I arbitrarily set this equal to a value of 1 for the initial conditions. What if I increase this to 2? Go ahead and try that by changing the value in the code above (in line 8). Now re-run the program and you should get a numerical probability of 0.203558. That's closer to 1, but still not correct. OK, here is the trick (and it will lead to a homework question). What if you set the initial ψ derivative to 1/sqrt(0.0508895)? Go ahead and try that. Boom. You just normalized the wave function. Is this going to be on the test? Yes, it is now. Here are some questions for you. • Find the first 10 energy levels in the numerical calculation. Show that E_n is proportional to n². • Can you find a better method for calculating the wave function? Try searching for another method and creating it in Python (or whatever language you like). • How could you make this calculation faster? In my code, the energy steps are always the same. Is it possible to use some non-constant energy jumps to find the correct energy? • Why does the initial derivative of ψ set the normalization?
Start with the analytical solution for ψ and show that at x = 0, the derivative gives 1 over the square root of the total probability. • What if you don't have a flat infinite potential well? What if one side is raised by some energy (just pick something)? How would this change your solution? Go ahead and try this. • Use the numerical calculation above to find the probability of finding the particle between x = 0 and x = L/5.
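The embedded, runnable programs from the original post do not appear above, so here is a minimal plain-Python sketch of the first one: it integrates the time-independent Schrödinger equation across the box for a single guessed energy and reports ψ(L). The units (ħ = m = L = 1, so the exact levels are E_n = (nπ)²/2 ≈ 4.93, 19.7, ...), the step count, and the variable names are my own assumptions rather than the original code.

hbar = 1.0
m = 1.0
L = 1.0
N = 1000                 # number of spatial steps across the box
dx = L / N

def integrate(E, psi_dot0=1.0):
    """Euler-integrate the time-independent Schrodinger equation from x = 0 to x = L."""
    x, psi, psi_dot = 0.0, 0.0, psi_dot0    # psi(0) = 0 at the hard wall
    psis = [psi]
    for _ in range(N):
        V = 0.0                              # the potential is flat inside the box
        psi_ddot = 2.0 * m / hbar**2 * (V - E) * psi
        psi_dot += psi_ddot * dx             # update the slope ...
        psi += psi_dot * dx                  # ... then use it to update psi
        x += dx
        psis.append(psi)
    return psis

if __name__ == "__main__":
    E_guess = 0.0                            # try other guesses, e.g. anything from 1 to 10
    psis = integrate(E_guess)
    print("psi at x = L for E =", E_guess, "is", psis[-1])

With E = 0 the printed value is 1.0 rather than 0, which is the numerical version of "an energy of zero doesn't work."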
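And here is a sketch along the lines of the second program, the shooting method: an outer loop keeps raising the energy by dE and re-running the same integration until ψ(L) comes out close to zero, after which the rectangle-sum integration of |ψ|² and the 1/sqrt(P) rescaling of the initial slope are applied. The energy step, the tolerance on ψ(L), and the layout are illustrative assumptions and will not match the line numbers quoted in the notes above.

hbar, m, L = 1.0, 1.0, 1.0
N = 1000
dx = L / N

E = 0.0              # starting energy; raise it (e.g. to 5) to find the next level up
dE = 0.01            # energy increase per attempt; smaller means a more accurate level
psi_dot0 = 1.0       # arbitrary initial slope, fixed later by normalization

searching = True
while searching:
    # reset the variables before each integration attempt
    x, psi, psi_dot = 0.0, 0.0, psi_dot0
    psis = [psi]
    for _ in range(N):                       # same integration loop as in the first sketch (V = 0 inside)
        psi_ddot = 2.0 * m / hbar**2 * (0.0 - E) * psi
        psi_dot += psi_ddot * dx
        psi += psi_dot * dx
        x += dx
        psis.append(psi)
    if abs(psi) < 0.001:                     # is the final value of psi close to zero?
        searching = False
    else:
        E += dE

print("energy level found:", E)              # about 4.93 in these units for the ground state

# Rectangle-sum integration of |psi|^2, then the normalization trick from the text.
P = sum(p * p * dx for p in psis)
print("total probability before normalizing:", P)
print("initial slope that normalizes the wave function:", 1.0 / P**0.5)

Changing the starting energy E to 5 makes the same loop find the next level (about 19.7), just as described above.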
Quantum Battery

About: Thank you all for following me. This instructable describes a way to create a very high efficiency battery. It is more theoretical, because it requires advanced technologies and big investments, but I would like to share the idea in the hope that this work will be read by the right person, who has the ability to prove the concept and implement it in practice. I have now started a project at Kickstarter in which I want to prove the idea. Please support it. I would like to apologize to all the authors whose pictures and graphs I have used without citing the source. The reason is that these were found on the Internet some time ago and the source information is lost.

Step 1: Principle of Work

The principle of work of the Quantum battery is based on the quantum mechanical phenomenon of electron tunneling. More concrete information about this process can be found here. I will summarize the phenomenon in a few words: the movement and the energy of the electron are described by its wave function. It is a solution of the Schrödinger equation, and it predicts that in some cases the electron can tunnel through potential barriers - in other words, it can jump from one place to another without energy loss. Modern physics holds that quantum tunneling is what allows stars to shine: the protons in a star's core can only fuse, and so release the energy that the star eventually radiates as light, because they tunnel through the Coulomb barrier that would otherwise keep them apart. An example closer to us is the EEPROM (a memory used in almost every electronic device today). An EEPROM chip contains millions or billions of special electronic devices called FGMOS (floating-gate MOS) transistors, whose operation is based on electron tunneling. The picture above shows a cross section of a FGMOS transistor, taken with an electron microscope at very high magnification (x10 000). In the picture the two poly (poly-silicon) gates can be seen. The poly1 gate is the floating one. It is totally isolated from all other active areas of the device and is surrounded on all sides by a high quality insulator. Under the right bottom part of the floating gate it can be seen that this insulator layer is very thin. This region is the place where the tunneling happens. Over the floating gate of the FGMOS one or more poly-silicon gates (poly2) are placed, which do not have any electrical contact with the floating gate. They are called control gates. The process of electron tunneling into the floating gate is invoked in the following way (see the second picture): a relatively high voltage source is connected between the chip substrate (normally p-type monocrystalline silicon) and the control gate(s) (in FLASH/EEPROM chips this voltage is generated by special charge pumps). The electrons are attracted by the resulting high electric field and jump through the tunnel oxide into the floating gate, charging it negatively. The third picture shows how the potential of the floating gate changes with time as the number of tunneled electrons inside it increases. Because the floating gate is perfectly isolated, once charged it can keep its charge for very long periods (the normal retention time for FLASH and EEPROM memories is over 10 years!). I will not go deeper into this quantum mechanics process.
I would only like to mention that the tunneling electron flow from the substrate to the floating gate (called from here on the tunneling current) depends on a few physical parameters of the materials used, depends strongly on the strength of the applied electric field, and can be predicted by the Fowler–Nordheim formula.

Step 2: The Battery Structure

Now we know what a FGMOS is and how it works... How can this give us energy?!? Let us connect N - millions or billions - of FGMOS's in parallel (on a 32 GB FLASH chip the number of FGMOS transistors can be over 70 000 000; normally two FGMOS are used for each memory cell for security reasons): 1. All sources, drains and the substrate are connected together in one network. Let's call this network the "Anode" or "Injector" (because it injects the electrons into the FG) and connect it to the negative pole of an external battery with voltage Vprog (can be ~12-20 V). (Remark: sources and drains are the current-passing terminals of MOS devices.) 2. All floating gates are connected together in one network. Let's call it the "Cathode". 3. All control gates are connected together, making the "Control plate". When we apply the high voltage (Vprog) to the "Control plate", the tunneling current will start to flow. The tunneling current IFN will be ~ N*A*JFN, where A is the area of a single FGMOS and JFN is the tunneling current density per unit area calculated using the Fowler–Nordheim formula. At very high N or for large floating gate (FG) areas the tunneling current can reach very high values! The picture used is taken from Wikipedia, author Felix Kling.

Step 3: How It Works

The question still remains... Let us simplify the whole system by using capacitors. We can distinguish three main capacitors: 1) CCP-C - the capacitor between the Control plate and the Cathode; 2) CINJ-C - the capacitor between the Anode and the Cathode; 3) CINJ-CP - the capacitor between the Anode (Injector) and the Control plate. The schematic of this simplification is presented in the picture above.

Step 4: How It Works (part 2)

When Vprog is applied, the IFN current starts to flow. It charges the Cathode plate negatively to the potential Va-c, whose absolute value increases with time. Because the voltage on CCP-C also changes with time, a current ICP-C flows from the Vprog source: ICP-C = CCP-C * d(|Vprog| + |Va-c|)/dt.

Step 5: How It Works (part 3)

When the voltage Va-c reaches some desired value, let us connect a tunable resistive load between the Cathode and the Anode and adjust its resistance RL so that the load current IL = Va-c/RL becomes equal to IFN. Then the charge on the capacitor CINJ-C remains constant, and Va-c remains constant. That means the charge stored in the capacitor CCP-C, Q(CCP-C), also remains constant, because the voltage VC-CP remains constant as well - so the current from the voltage source Vprog stops flowing! (A rough numerical sketch of these relations is given at the end of this instructable.)

Step 6: Possible Realization and Technology Challenges

As discussed in the previous steps, based on the quantum mechanical process of electron tunneling, a very high efficiency battery can be produced. It will contain 5 layers - 3 conductive (metal or semiconductor) and 2 insulating layers - ordered in a sandwich-like structure. The bottom layer, called the Anode (Injector), is conductive; it serves as the positive pole of the quantum battery. Over this layer the second conductive layer, called the Cathode, is placed. Both layers are isolated by a thin dielectric layer (the tunneling dielectric).
It is possible that this layer is a vacuum - a technological solution must be found. The thickness of this dielectric layer (the distance between the anode and cathode) is very small (on the order of a few nanometers). Over the three mentioned layers another dielectric layer is placed; its thickness can be orders of magnitude greater than the first one. At the top of the whole structure the fifth layer (the third conductive one) is placed, which is used to control the process of electron tunneling. All conductive layers can be patterned so that the maximum tunneling current density is achieved. Here I want to list the possible technology challenges: 1) The main problem of FGMOS transistors is that over time the tunneling electrons destroy the structure of the tunneling oxide. Materials which can sustain the destructive tunneling electron flow must be found. Maybe the use of new nano-materials can solve the problem. If a vacuum is used, this problem is solved; the main problem then will be how to keep the Anode and Cathode plates at the right distance. 2) Invoking high tunneling currents will require high electric fields. Using special structures for the Anode (Injector) plate can improve the electron tunneling probability (at present the tunneling current density is estimated to be ~10 A/m²). Nanotechnology can also help here. The pictures above show surfaces created with nanotechnology, and maybe they can be suitable for use as injector plates. Using such surfaces would allow the Vprog voltage to be decreased. 3) The optimal structure of the three electrode networks must be found, such that the injector electrode emits as many electrons as possible, the Cathode plane "catches" all of them, and the Control plate creates the electric field necessary to cause the tunneling but stays only capacitively connected to the other two planes.

Step 7: Conclusion

Using the proposed approach, a very high efficiency energy source can be created. Based on nano- and micro-technologies it will be possible to implement the energy source with small dimensions but large current driving capability and efficiency. Properly designed, the battery will serve for a lifetime without any charging (current densities of 10 A per square meter are calculated at the moment, but that is not the limit). Using a multilayer structure, a power source delivering 100-300 A can be realized inside a volume of a few liters. Everything said here sounds like a perpetual motion machine, but the trick is that before and after the tunneling the electrons do not change their energy, and their transfer does not require power from the external high voltage battery, whose current can be zero. Using serially connected Quantum batteries, the needed voltage can be reached. When a single battery is designed with multiple parallel sections, switching them ON/OFF - connecting or disconnecting them - allows the driven current to be adjusted to the momentary power demand, in this way keeping the output voltage constant and the current from the high voltage external battery near zero. Thank you for your attention! If you like this work, please vote for it in the "MAKE ENERGY" contest.
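To make Steps 2-5 a little more concrete, here is a small plain-Python sketch that plugs numbers into the commonly quoted simplified Fowler–Nordheim expression for the tunneling current density and then computes the load resistance that would balance the tunneling current as in Step 5. The barrier height, dielectric thickness, cell area, voltages and cell count below are illustrative assumptions of mine, not figures taken from this instructable.

import math

# Simplified Fowler-Nordheim form: J = (A_FN * F^2 / phi) * exp(-B_FN * phi^1.5 / F),
# with F the electric field in V/m and phi the barrier height in eV.
A_FN = 1.54e-6      # A * eV / V^2          (standard simplified FN prefactor)
B_FN = 6.83e9       # V / (m * eV^1.5)      (standard simplified FN exponent factor)

phi = 3.2           # eV, assumed barrier height of the tunneling dielectric
t_ox = 8e-9         # m, assumed tunneling-dielectric thickness
V_prog = 15.0       # V, assumed programming voltage across the dielectric

F = V_prog / t_ox                                           # electric field, V/m
J = (A_FN * F**2 / phi) * math.exp(-B_FN * phi**1.5 / F)    # current density, A/m^2

N = 70e6            # number of paralleled FGMOS cells (order of magnitude quoted in Step 2)
A_cell = 1e-14      # m^2, assumed tunneling area per cell
I_FN = N * A_cell * J                                       # total tunneling current, as in Step 2

V_ac = 5.0          # V, assumed cathode-anode voltage reached in Step 5
R_L = V_ac / I_FN   # load resistance that makes I_L = V_ac / R_L equal to I_FN

print(f"field across dielectric  : {F:.3e} V/m")
print(f"FN current density       : {J:.3e} A/m^2")
print(f"total tunneling current  : {I_FN:.3e} A")
print(f"balancing load resistance: {R_L:.3e} ohm")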
Atom

Helium atom ground state: an illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case; the black bar is one angstrom (10−10 m or 100 pm). Classification: smallest recognized division of a chemical element. Electric charge: zero (neutral), or ion charge. Components: electrons and a compact nucleus of protons and neutrons.

An atom is the smallest constituent unit of ordinary matter that constitutes a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small; typical sizes are around 100 picometers (1×10−10 m, a ten-millionth of a millimeter, or 1/254,000,000 of an inch). They are so small that accurately predicting their behavior using classical physics – as if they were billiard balls, for example – is not possible; this is due to quantum effects. Current atomic models now use quantum principles to better explain and predict this behavior.

Every atom is composed of a nucleus and one or more electrons bound to the nucleus; the nucleus is made of one or more protons and a number of neutrons. Only the most common variety of hydrogen has no neutrons. Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge whereas the electrons have a negative electric charge; the neutrons have no electric charge. If the number of protons and electrons are equal, then the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively; these atoms are called ions. The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force; the protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus shatters and leaves behind different elements; this is a kind of nuclear decay. All electrons, nucleons, and nuclei alike are subatomic particles; the behavior of electrons in atoms is closer to a wave than a particle.

The number of protons in the nucleus, called the atomic number, defines to which chemical element the atom belongs. For example, each copper atom contains 29 protons; the number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals; the ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes.

History of atomic theory

Atoms in philosophy

The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece and India; the word atomos, meaning "uncuttable", was coined by the ancient Greek philosophers Leucippus and his pupil Democritus (c. 460 – c.
370 BC).[1][2][3][4] Democritus taught that atoms were infinite in number, uncreated, and eternal, and that the qualities of an object result from the kind of atoms that compose it.[2][3][4] Democritus's atomism was refined and elaborated by the later philosopher Epicurus (341–270 BC).[3][4] During the Early Middle Ages, atomism was mostly forgotten in western Europe, but survived among some groups of Islamic philosophers.[3] During the twelfth century, atomism became known again in western Europe through references to it in the newly-rediscovered writings of Aristotle.[3] In the fourteenth century, the rediscovery of major works describing atomist teachings, including Lucretius's De rerum natura and Diogenes Laërtius's Lives and Opinions of Eminent Philosophers, led to increased scholarly attention on the subject.[3] Nonetheless, because atomism was associated with the philosophy of Epicureanism, which contradicted orthodox Christian teachings, belief in atoms was not considered acceptable;[3] the French Catholic priest Pierre Gassendi (1592–1655) revived Epicurean atomism with modifications, arguing that atoms were created by God and, though extremely numerous, are not infinite.[3][4] Gassendi's modified theory of atoms was popularized in France by the physician François Bernier (1620–1688) and in England by the natural philosopher Walter Charleton (1619–1707);[3] the chemist Robert Boyle (1627–1691) and the physicist Isaac Newton (1642–1727) both defended atomism and, by the end of the seventeenth century, it had become accepted by portions of the scientific community.[3] First evidence-based theory Dalton also believed atomic theory could explain why water absorbs different gases in different proportions. For example, he found that water absorbs carbon dioxide far better than it absorbs nitrogen.[6] Dalton hypothesized this was due to the differences between the masses and configurations of the gases' respective particles, and carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2). Brownian motion In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion"; this was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion.[7][8][9] French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory.[10] Discovery of the electron The Geiger–Marsden experiment The physicist J.J. Thomson measured the mass of cathode rays, showing they were made of particles, but were around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first subatomic particle to be discovered, which he originally called "corpuscle" but was later named electron, after particles postulated by George Johnstone Stoney in 1874, he also showed they were identical to particles given off by photoelectric and radioactive materials.[11] It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge within atoms. 
Thomson was given the 1906 Nobel Prize in Physics for this work, thus he overturned the belief that atoms are the indivisible, ultimate particles of matter.[12] Thomson also incorrectly postulated that the low mass, negatively charged electrons were distributed throughout the atom in a uniform sea of positive charge; this became known as the plum pudding model. Discovery of the nucleus Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table;[14] the term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes.[15] Bohr model Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory; these results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity; that it is equal to the atomic nuclear charge remains the accepted atomic model today.[18] Chemical bonding explained Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons;[19] as the chemical properties of the elements were known to largely repeat themselves according to the periodic law,[20] in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.[21] Further developments in quantum physics The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties; when a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field.[22] In 1925 Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (Matrix Mechanics).[18] One year earlier, in 1924, Louis de Broglie had proposed that all particles behave to an extent like waves and, in 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom (Wave Mechanics) that described the electrons as three-dimensional waveforms rather than point particles. 
A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927.[18] In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa;[23] this model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.[24][25] Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy; the device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses; the atomic mass of these isotopes varied by integer amounts, called the whole number rule.[26] The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.[27] Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product.[28][29] A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result were the first experimental nuclear fission.[30][31] In 1944, Hahn received the Nobel prize in chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.[32] In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.[33] Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks; the standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions.[34] Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles; the constituent particles of an atom are the electron, the proton and the neutron; all three are fermions. However, the hydrogen-1 atom has no neutrons and the hydron ion has no electrons. The electron is by far the least massive of these particles at 9.11×10−31 kg, with a negative electrical charge and a size that is too small to be measured using available techniques.[35] It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. 
If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749×10−27 kg.[36][37] Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10−15 m—although the 'surface' of these particles is not sharply defined.[38] The neutron was discovered in 1932 by the English physicist James Chadwick.

In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks; this distinction accounts for the difference in mass and charge between the two particles.[39][40] The quarks are held together by the strong interaction (or strong force), which is mediated by gluons; the protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.[39][40]

All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons; the radius of a nucleus is approximately equal to 1.07 ∛A fm, where A is the total number of nucleons.[41] This is much smaller than the radius of the atom, which is on the order of 10⁵ fm; the nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.[42] Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element; the total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.[43]

The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus.[45] Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay; the nucleus can also be modified through bombardment by high energy subatomic particles or photons.
If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.[46][47] The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together,[49] it is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease; that means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, is an endothermic process. These more massive nuclei can not undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.[44] Electron cloud Electrons, like other particles, have properties of both a particle and a wave; the electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.[50] Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form.[51] Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.[52] Oganesson. An illustration of all the Subshells/orbitals of the Oganesson atom. Blue is for the S-subshell, pink is for the P-subshell, red is for the D-subshell, and green is for the F-subshell. How atoms are constructed from electron orbitals and link to the periodic table. Each atomic orbital corresponds to a particular energy level of the electron; the electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon; these characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.[51] The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom,[53] compared to 2.23 million eV for splitting a deuterium nucleus.[54] Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.[55] Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. 
For example, all hydrogen atoms admit exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form,[56] also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons; the known elements form a set of atomic numbers, from the single proton element hydrogen up to the 118-proton element oganesson.[57] All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible.[58][59] About 339 nuclides occur naturally on Earth,[60] of which 254 (about 75%) have not been observed to decay, and are referred to as "stable isotopes". However, only 90 of these nuclides are stable to all decay, even in theory. Another 164 (bringing the total to 254) have not been observed to decay, even though in theory it is energetically possible; these are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to be present from the birth of the solar system; this collection of 288 nuclides are known as primordial nuclides. Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).[61][note 1] For 80 of the chemical elements, at least one stable isotope exists; as a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.[62][page needed] The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), also called dalton (Da); this unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10−27 kg.[63] Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 u.[64] The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 u). However, this number will not be exactly an integer except in the case of carbon-12 (see below);[65] the heaviest stable atom is lead-208,[58] with a mass of 207.9766521 u.[66] Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius; this is a measure of the distance out to which the electron cloud extends from the nucleus.[67] However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. 
Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond; the radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin.[68] On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right).[69] Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.[70] When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry; the deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites.[71][72] Significant ellipsoidal deformations have been shown to occur for sulfur ions[73] and chalcogen ions[74] in pyrite-type compounds. Radioactive decay The most common forms of radioactive decay are:[79][80] Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay; this is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.[78] Magnetic moment Energy levels For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model, what can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum;[87] each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.[88] Close examination of the spectral lines reveals that some display a fine structure splitting; this occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron.[90] When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect; this is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. 
Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line; the interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines.[91] The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.[92] If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon; the emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized; this physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.[93] Valence and bonding behavior Valency is the combining power of an element, it is equal to number of hydrogen atoms that atom can combine or displace in forming compounds.[94] The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons; the number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells.[95] For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.[96] Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas.[99] Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond.[100] Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale;[101][102] this super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.[103] The scanning tunneling microscope is a device for viewing surfaces at the atomic level, it uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would normally be insurmountable. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. 
Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current; the calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible. It confirms that for low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels—the Fermi level local density of states.[104][105] An atom can be ionized by removing one of its electrons; the electric charge causes the trajectory of an atom to bend when it passes through a magnetic field. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom; the mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.[106] A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample; the atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.[107] Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms; these colors can be replicated using a gas-discharge lamp containing the same element.[108] Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.[109]

Origin and current state

Baryonic matter forms about 4% of the total energy density of the observable Universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons).[110] Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³;[111] the Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³.[112] Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter; the total baryonic mass is about 10% of the mass of the galaxy;[113] the remainder of the mass is an unknown dark matter.[114] High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—an immense pressure makes electron shells impossible. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation;[120] this occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected.
Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.[121] Elements such as lead formed largely through the radioactive decay of heavier elements.[122] Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System; the rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating.[123][124] Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.[125] There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere;[126] some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions.[127][128] Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth.[129][130] Transuranic elements have radioactive lifetimes shorter than the current age of the Earth[131] and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust.[123] Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.[132] The Earth contains approximately 1.33×10^50 atoms.[133] Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals;[134][135] this atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.[136] Rare and theoretical forms Superheavy elements While isotopes of elements with atomic numbers higher than that of lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103; these superheavy elements may have a nucleus that is relatively stable against radioactive decay.[137] The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons.[138] Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton; when a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe; the first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation.
As a result, no antimatter atoms have been discovered in nature.[139][140] However, in 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.[141][142] Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom; these types of atoms can be used to test the fundamental predictions of physics.[143][144][145] See also 1. ^ For more recent updates see Interactive Chart of Nuclides (Brookhaven National Laboratory) Archived 25 August 2009 at the Wayback Machine. 1. ^ Pullman, Bernard (1998). The Atom in the History of Human Thought. Oxford, England: Oxford University Press. pp. 31–33. ISBN 978-0-19-515040-7. 2. ^ a b Kenny, Anthony (2004). Ancient Philosophy. A New History of Western Philosophy. 1. Oxford, England: Oxford University Press. pp. 26–28. ISBN 978-0-19-875273-8. 3. ^ a b c d e f g h i j Pyle, Andrew (2010). "Atoms and Atomism". In Grafton, Anthony; Most, Glenn W.; Settis, Salvatore (eds.). The Classical Tradition. Cambridge, Massachusetts and London: The Belknap Press of Harvard University Press. pp. 103–104. ISBN 978-0-674-03572-0. 5. ^ Andrew G. van Melsen (1952). From Atomos to Atom. Mineola, NY: Dover Publications. ISBN 978-0-486-49584-2. 6. ^ Dalton, John. "On the Absorption of Gases by Water and Other Liquids Archived 11 June 2011 at the Wayback Machine", in Memoirs of the Literary and Philosophical Society of Manchester. 1803. Retrieved on August 29, 2007. 7. ^ Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German). 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Archived (PDF) from the original on 18 July 2007. Retrieved 4 February 2007. 8. ^ Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. pp. 1–7. ISBN 978-0-19-851567-8. OCLC 48753074. 10. ^ Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour. 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746. 11. ^ Thomson, J.J. (August 1901). "On bodies smaller than atoms". The Popular Science Monthly: 323–335. Retrieved 21 June 2009. 12. ^ "J.J. Thomson". Nobel Foundation. 1906. Archived from the original on 12 May 2013. Retrieved 20 December 2007. 13. ^ Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom" (PDF). Philosophical Magazine. 21 (125): 669–688. doi:10.1080/14786440508637080. Archived (PDF) from the original on 31 May 2016. Retrieved 29 April 2016. 14. ^ "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Archived from the original on 9 April 2008. Retrieved 18 January 2008. 15. ^ Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society. A. 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057. Archived from the original on 4 November 2016. Retrieved 12 February 2008. 16. ^ Stern, David P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Archived from the original on 20 August 2007. Retrieved 20 December 2007. 17. ^ Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. 
Archived from the original on 15 April 2008. Retrieved 16 February 2008. 18. ^ a b c Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. pp. 228–230. ISBN 978-0-19-851971-3. 19. ^ Lewis, Gilbert N. (1916). "The Atom and the Molecule" (PDF). Journal of the American Chemical Society. 38 (4): 762–786. doi:10.1021/ja02261a002. Archived (PDF) from the original on 25 August 2019. Retrieved 25 August 2019. 20. ^ Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press US. pp. 205–226. ISBN 978-0-19-530573-9. 21. ^ Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868–934. doi:10.1021/ja02227a002. Archived from the original on 21 June 2019. Retrieved 27 June 2019. 22. ^ Scully, Marlan O.; Lamb, Willis E.; Barut, Asim (1987). "On the theory of the Stern-Gerlach apparatus". Foundations of Physics. 17 (6): 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788. 23. ^ Chad Orzel (16 September 2014). "What is the Heisenberg Uncertainty Principle?". TED-Ed. Archived from the original on 13 September 2015. Retrieved 26 October 2015 – via YouTube. 24. ^ Brown, Kevin (2007). "The Hydrogen Atom". MathPages. Archived from the original on 13 May 2008. Retrieved 21 December 2007. 26. ^ Aston, Francis W. (1920). "The constitution of atmospheric neon". Philosophical Magazine. 39 (6): 449–455. doi:10.1080/14786440408636058. 27. ^ Chadwick, James (12 December 1935). "Nobel Lecture: The Neutron and Its Properties". Nobel Foundation. Archived from the original on 12 October 2007. Retrieved 21 December 2007. 28. ^ Bowden, Mary Ellen (1997). "Otto Hahn, Lise Meitner, and Fritz Strassmann". Chemical achievers : the human face of the chemical sciences. Philadelphia, PA: Chemical Heritage Foundation. pp. 76–80, 125. ISBN 978-0-941901-12-3. 29. ^ "Otto Hahn, Lise Meitner, and Fritz Strassmann". Science History Institute. June 2016. Archived from the original on 21 March 2018. Retrieved 20 March 2018. 30. ^ Meitner, Lise; Frisch, Otto Robert (1939). "Disintegration of uranium by neutrons: a new type of nuclear reaction". Nature. 143 (3615): 239–240. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0. 31. ^ Schroeder, M. "Lise Meitner – Zur 125. Wiederkehr Ihres Geburtstages" (in German). Archived from the original on 19 July 2011. Retrieved 4 June 2009. 33. ^ Kullander, Sven (28 August 2001). "Accelerators and Nobel Laureates". Nobel Foundation. Archived from the original on 13 April 2008. Retrieved 31 January 2008. 34. ^ "The Nobel Prize in Physics 1990". Nobel Foundation. 17 October 1990. Archived from the original on 14 May 2008. Retrieved 31 January 2008. 35. ^ Demtröder, Wolfgang (2002). Atoms, Molecules and Photons: An Introduction to Atomic- Molecular- and Quantum Physics (1st ed.). Springer. pp. 39–42. ISBN 978-3-540-20631-6. OCLC 181435713. 36. ^ Woan, Graham (2000). The Cambridge Handbook of Physics. Cambridge University Press. p. 8. ISBN 978-0-521-57507-2. OCLC 224032426. 37. ^ Mohr, P.J.; Taylor, B.N. and Newell, D.B. (2014), "The 2014 CODATA Recommended Values of the Fundamental Physical Constants" Archived 21 February 2012 at WebCite (Web Version 7.0). The database was developed by J. Baker, M. Douma, and S. Kotochigova. (2014). National Institute of Standards and Technology, Gaithersburg, Maryland 20899. 38. ^ MacGregor, Malcolm H. (1992). The Enigmatic Electron. Oxford University Press. pp. 33–37. 
ISBN 978-0-19-521833-6. OCLC 223372888. 40. ^ a b Schombert, James (18 April 2006). "Elementary Particles". University of Oregon. Archived from the original on 21 August 2011. Retrieved 3 January 2007. 41. ^ Jevremovic, Tatjana (2005). Nuclear Principles in Engineering. Springer. p. 63. ISBN 978-0-387-23284-3. OCLC 228384008. 42. ^ Pfeffer, Jeremy I.; Nir, Shlomo (2000). Modern Physics: An Introductory Text. Imperial College Press. pp. 330–336. ISBN 978-1-86094-250-1. OCLC 45900880. 43. ^ Wenner, Jennifer M. (10 October 2007). "How Does Radioactive Decay Work?". Carleton College. Archived from the original on 11 May 2008. Retrieved 9 January 2008. 44. ^ a b c Raymond, David (7 April 2006). "Nuclear Binding Energies". New Mexico Tech. Archived from the original on 1 December 2002. Retrieved 3 January 2007. 45. ^ Mihos, Chris (23 July 2002). "Overcoming the Coulomb Barrier". Case Western Reserve University. Archived from the original on 12 September 2006. Retrieved 13 February 2008. 48. ^ Shultis, J. Kenneth; Faw, Richard E. (2002). Fundamentals of Nuclear Science and Engineering. CRC Press. pp. 10–17. ISBN 978-0-8247-0834-4. OCLC 123346507. 49. ^ Fewell, M.P. (1995). "The atomic nuclide with the highest mean binding energy". American Journal of Physics. 63 (7): 653–658. Bibcode:1995AmJPh..63..653F. doi:10.1119/1.17828. 53. ^ Herter, Terry (2006). "Lecture 8: The Hydrogen Atom". Cornell University. Archived from the original on 22 February 2012. Retrieved 14 February 2008. 54. ^ Bell, R.E.; Elliott, L.G. (1950). "Gamma-Rays from the Reaction H1(n,γ)D2 and the Binding Energy of the Deuteron". Physical Review. 79 (2): 282–285. Bibcode:1950PhRv...79..282B. doi:10.1103/PhysRev.79.282. 55. ^ Smirnov, Boris M. (2003). Physics of Atoms and Ions. Springer. pp. 249–272. ISBN 978-0-387-95550-6. 57. ^ Weiss, Rick (17 October 2006). "Scientists Announce Creation of Atomic Element, the Heaviest Yet". Washington Post. Archived from the original on 21 August 2011. Retrieved 21 December 2007. 58. ^ a b Sills, Alan D. (2003). Earth Science the Easy Way. Barron's Educational Series. pp. 131–134. ISBN 978-0-7641-2146-3. OCLC 51543743. 61. ^ Tuli, Jagdish K. (April 2005). "Nuclear Wallet Cards". National Nuclear Data Center, Brookhaven National Laboratory. Archived from the original on 3 October 2011. Retrieved 16 April 2011. 62. ^ a b CRC Handbook (2002). 63. ^ a b Mills, Ian; Cvitaš, Tomislav; Homann, Klaus; Kallay, Nikola; Kuchitsu, Kozo (1993). Quantities, Units and Symbols in Physical Chemistry (PDF) (2nd ed.). Oxford: International Union of Pure and Applied Chemistry, Commission on Physiochemical Symbols Terminology and Units, Blackwell Scientific Publications. p. 70. ISBN 978-0-632-03583-0. OCLC 27011505. Archived (PDF) from the original on 10 November 2011. Retrieved 10 December 2011. 64. ^ Chieh, Chung (22 January 2001). "Nuclide Stability". University of Waterloo. Archived from the original on 30 August 2007. Retrieved 4 January 2007. 66. ^ Audi, G.; Wapstra, A.H.; Thibault, C. (2003). "The Ame2003 atomic mass evaluation (II)" (PDF). Nuclear Physics A. 729 (1): 337–676. Bibcode:2003NuPhA.729..337A. doi:10.1016/j.nuclphysa.2003.11.003. Archived (PDF) from the original on 16 October 2005. Retrieved 1 May 2015. 67. ^ Ghosh, D.C.; Biswas, R. (2002). "Theoretical calculation of Absolute Radii of Atoms and Ions. Part 1; the Atomic Radii". Int. J. Mol. Sci. 3 (11): 87–113. doi:10.3390/i3020087. 68. ^ Shannon, R.D. (1976). 
"Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides" (PDF). Acta Crystallographica A. 32 (5): 751–767. Bibcode:1976AcCrA..32..751S. doi:10.1107/S0567739476001551. 70. ^ Zumdahl, Steven S. (2002). Introductory Chemistry: A Foundation (5th ed.). Houghton Mifflin. ISBN 978-0-618-34342-3. OCLC 173081482. Archived from the original on 4 March 2008. Retrieved 5 February 2008. 71. ^ Bethe, Hans (1929). "Termaufspaltung in Kristallen". Annalen der Physik. 3 (2): 133–208. Bibcode:1929AnP...395..133B. doi:10.1002/andp.19293950202. 72. ^ Birkholz, Mario (1995). "Crystal-field induced dipoles in heteropolar crystals – I. concept". Z. Phys. B. 96 (3): 325–332. Bibcode:1995ZPhyB..96..325B. CiteSeerX doi:10.1007/BF01313054. 73. ^ Birkholz, M.; Rudert, R. (2008). "Interatomic distances in pyrite-structure disulfides – a case for ellipsoidal modeling of sulfur ions]". Physica Status Solidi B. 245 (9): 1858–1864. Bibcode:2008PSSBR.245.1858B. doi:10.1002/pssb.200879532. 74. ^ Birkholz, M. (2014). "Modeling the Shape of Ions in Pyrite-Type Crystals". Crystals. 4 (3): 390–403. doi:10.3390/cryst4030390. 75. ^ Staff (2007). "Small Miracles: Harnessing nanotechnology". Oregon State University. Archived from the original on 21 May 2011. Retrieved 7 January 2007. – describes the width of a human hair as 105 nm and 10 carbon atoms as spanning 1 nm. 76. ^ Padilla, Michael J.; Miaoulis, Ioannis; Cyr, Martha (2002). Prentice Hall Science Explorer: Chemical Building Blocks. Upper Saddle River, New Jersey: Prentice-Hall, Inc. p. 32. ISBN 978-0-13-054091-1. OCLC 47925884. There are 2,000,000,000,000,000,000,000 (that's 2 sextillion) atoms of oxygen in one drop of water—and twice as many atoms of hydrogen. 77. ^ Feynman, Richard (1995). Six Easy Pieces; the Penguin Group. p. 5. ISBN 978-0-14-027666-4. OCLC 40499574. 79. ^ L'Annunziata, Michael F. (2003). Handbook of Radioactivity Analysis. Academic Press. pp. 3–56. ISBN 978-0-12-436603-9. OCLC 16212955. 80. ^ Firestone, Richard B. (22 May 2000). "Radioactive Decay Modes". Berkeley Laboratory. Archived from the original on 29 September 2006. Retrieved 7 January 2007. 83. ^ Goebel, Greg (1 September 2007). "[4.3] Magnetic Properties of the Atom". Elementary Quantum Physics. In The Public Domain website. Archived from the original on 29 June 2011. Retrieved 7 January 2007. 85. ^ Liang, Z.-P.; Haacke, E.M. (1999). Webster, J.G. (ed.). Encyclopedia of Electrical and Electronics Engineering: Magnetic Resonance Imaging. vol. 2. John Wiley & Sons. pp. 412–426. ISBN 978-0-471-13946-1. 87. ^ Fowles, Grant R. (1989). Introduction to Modern Optics. Courier Dover Publications. pp. 227–233. ISBN 978-0-486-65957-2. OCLC 18834711. 88. ^ Martin, W.C.; Wiese, W.L. (May 2007). "Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas". National Institute of Standards and Technology. Archived from the original on 8 February 2007. Retrieved 8 January 2007. 89. ^ "Atomic Emission Spectra – Origin of Spectral Lines". Avogadro Web Site. Archived from the original on 28 February 2006. Retrieved 10 August 2006. 90. ^ Fitzpatrick, Richard (16 February 2007). "Fine structure". University of Texas at Austin. Archived from the original on 21 August 2011. Retrieved 14 February 2008. 92. ^ Beyer, H.F.; Shevelko, V.P. (2003). Introduction to the Physics of Highly Charged Ions. CRC Press. pp. 232–236. ISBN 978-0-7503-0481-8. OCLC 47150433. 94. ^ oxford dictionary – valency 95. ^ Reusch, William (16 July 2007). 
"Virtual Textbook of Organic Chemistry". Michigan State University. Archived from the original on 29 October 2007. Retrieved 11 January 2008. 96. ^ "Covalent bonding – Single bonds". chemguide. 2000. Archived from the original on 1 November 2008. Retrieved 20 November 2008. 98. ^ Baum, Rudy (2003). "It's Elemental: The Periodic Table". Chemical & Engineering News. Archived from the original on 21 August 2011. Retrieved 11 January 2008. 99. ^ Goodstein, David L. (2002). States of Matter. Courier Dover Publications. pp. 436–438. ISBN 978-0-13-843557-8. 100. ^ Brazhkin, Vadim V. (2006). "Metastable phases, phase transformations, and phase diagrams in physics and chemistry". Physics-Uspekhi. 49 (7): 719–724. Bibcode:2006PhyU...49..719B. doi:10.1070/PU2006v049n07ABEH006013. 101. ^ Myers, Richard (2003). The Basics of Chemistry. Greenwood Press. p. 85. ISBN 978-0-313-31664-7. OCLC 50164580. 105. ^ "The Nobel Prize in Physics 1986". The Nobel Foundation. Archived from the original on 17 September 2008. Retrieved 11 January 2008. – in particular, see the Nobel lecture by G. Binnig and H. Rohrer. 106. ^ Jakubowski, N.; Moens, Luc; Vanhaecke, Frank (1998). "Sector field mass spectrometers in ICP-MS". Spectrochimica Acta Part B: Atomic Spectroscopy. 53 (13): 1739–1763. Bibcode:1998AcSpe..53.1739J. doi:10.1016/S0584-8547(98)00222-5. 111. ^ Choppin, Gregory R.; Liljenzin, Jan-Olov; Rydberg, Jan (2001). Radiochemistry and Nuclear Chemistry. Elsevier. p. 441. ISBN 978-0-7506-7463-8. OCLC 162592180. 112. ^ Davidsen, Arthur F. (1993). "Far-Ultraviolet Astronomy on the Astro-1 Space Shuttle Mission". Science. 259 (5093): 327–334. Bibcode:1993Sci...259..327D. doi:10.1126/science.259.5093.327. PMID 17832344. 113. ^ Lequeux, James (2005). The Interstellar Medium. Springer. p. 4. ISBN 978-3-540-21326-0. OCLC 133157789. 116. ^ Copi, Craig J.; Schramm, DN; Turner, MS (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science (Submitted manuscript). 267 (5195): 192–199. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624. Archived from the original on 14 August 2019. Retrieved 27 July 2018. 118. ^ Abbott, Brian (30 May 2007). "Microwave (WMAP) All-Sky Survey". Hayden Planetarium. Archived from the original on 13 February 2013. Retrieved 13 January 2008. 120. ^ Knauth, D.C.; Knauth, D.C.; Lambert, David L.; Crane, P. (2000). "Newly synthesized lithium in the interstellar medium". Nature. 405 (6787): 656–658. Bibcode:2000Natur.405..656K. doi:10.1038/35015028. PMID 10864316. 122. ^ Kansas Geological Survey (4 May 2005). "Age of the Earth". University of Kansas. Archived from the original on 5 July 2008. Retrieved 14 January 2008. 123. ^ a b Manuel 2001, pp. 407–430, 511–519. 124. ^ Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications. 190 (1): 205–221. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Archived from the original on 11 November 2007. Retrieved 14 January 2008. 125. ^ Anderson, Don L.; Foulger, G.R.; Meibom, Anders (2 September 2006). "Helium: Fundamental models". Archived from the original on 8 February 2007. Retrieved 14 January 2007. 128. ^ Diamond, H; et al. (1960). "Heavy Isotope Abundances in Mike Thermonuclear Device". Physical Review. 119 (6): 2000–2004. Bibcode:1960PhRv..119.2000D. doi:10.1103/PhysRev.119.2000. 129. ^ Poston Sr.; John W. (23 March 1998). 
"Do transuranic elements such as plutonium ever occur naturally?". Scientific American. Archived from the original on 27 March 2015. Retrieved 1 May 2015. 130. ^ Keller, C. (1973). "Natural occurrence of lanthanides, actinides, and superheavy elements". Chemiker Zeitung. 97 (10): 522–530. OSTI 4353086. 131. ^ Zaider, Marco; Rossi, Harald H. (2001). Radiation Science for Physicians and Public Health Workers. Springer. p. 17. ISBN 978-0-306-46403-4. OCLC 44110319. 133. ^ Weisenberger, Drew. "How many atoms are there in the world?". Jefferson Lab. Archived from the original on 22 October 2007. Retrieved 16 January 2008. 135. ^ Anderson, Don L. (2002). "The inner inner core of Earth". Proceedings of the National Academy of Sciences. 99 (22): 13966–13968. Bibcode:2002PNAS...9913966A. doi:10.1073/pnas.232565899. PMC 137819. PMID 12391308. 136. ^ Pauling, Linus (1960). The Nature of the Chemical Bond. Cornell University Press. pp. 5–10. ISBN 978-0-8014-0333-0. OCLC 17518275. 139. ^ Koppes, Steve (1 March 1999). "Fermilab Physicists Find New Matter-Antimatter Asymmetry". University of Chicago. Archived from the original on 19 July 2008. Retrieved 14 January 2008. 140. ^ Cromie, William J. (16 August 2001). "A lifetime of trillionths of a second: Scientists explore antimatter". Harvard University Gazette. Archived from the original on 3 September 2006. Retrieved 14 January 2008. 141. ^ Hijmans, Tom W. (2002). "Particle physics: Cold antihydrogen". Nature. 419 (6906): 439–440. Bibcode:2002Natur.419..439H. doi:10.1038/419439a. PMID 12368837. 142. ^ Staff (30 October 2002). "Researchers 'look inside' antimatter". BBC News. Archived from the original on 22 February 2007. Retrieved 14 January 2008. 144. ^ Indelicato, Paul (2004). "Exotic Atoms". Physica Scripta. T112 (1): 20–26. arXiv:physics/0409058. Bibcode:2004PhST..112...20I. doi:10.1238/Physica.Topical.112a00020. Archived from the original on 4 November 2018. Retrieved 4 November 2018. 145. ^ Ripin, Barrett H. (July 1998). "Recent Experiments on Exotic Atoms". American Physical Society. Archived from the original on 23 July 2012. Retrieved 15 February 2008. Further reading External links
Is electric charge truly conserved for bosonic matter? | PhysicsOverflow
+ 6 like - 0 dislike Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question. Notation/ Lagrangians Let me first provide the respective Lagrangians and elucidate the notation. I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ Where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part. Noether currents of particles Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing local $U(1)$ gauge we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\mu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ Which by the same procedure leads to $$\mathcal{J}^\mu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\mu$$ Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ Since it does not have any $\partial_\mu$ included. Now consider very slowly moving or even static particles, we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we have thus approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ Where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
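The split of the full current into the "bare" piece plus the $\frac{e^2}{m} |\phi|^2 A^\mu$ piece used above is pure algebra and can be checked symbolically. A minimal SymPy sketch, assuming plain real symbols for the real and imaginary parts of $\phi$ and of its time derivative (the symbol names are illustrative):

import sympy as sp

# Real/imaginary parts of phi and of its time derivative, plus couplings.
u, v, ut, vt, e, m, A0 = sp.symbols('u v u_t v_t e m A_0', real=True)

phi     = u + sp.I * v
dphi_dt = ut + sp.I * vt
D0_phi  = dphi_dt + sp.I * e * A0 * phi        # (d_t + i e A_0) phi

# Full Noether charge density J^0 = (e/m) Im(phi^* D_0 phi)
J0 = (e / m) * sp.im(sp.expand(sp.conjugate(phi) * D0_phi))

# Claimed split: "bare" density plus the gauge-potential-dependent piece.
bare  = (e / m) * sp.im(sp.expand(sp.conjugate(phi) * dphi_dt))
extra = (e**2 / m) * (u**2 + v**2) * A0

print(sp.simplify(J0 - (bare + extra)))        # prints 0

Exactly the same algebra produces the $\frac{e^2}{m} |B|^2 A^\mu$ term for the Proca current.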
For the interpretation, let us pass back to SI units; in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ and the rest mass of the particle $mc^2$. The sign of this extra density is dependent only on the sign of the electrostatic potential and both frequency parts contribute with the same sign (which is superweird). This would mean that classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved, only this generalized charge is. After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $m(c^2)/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons. Now to the questions: • On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.) Is there an intuitive physical reason why such a violation is not true for fermions even on a classical level? • Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation anyhow reflected in them and does it have associated experimental phenomena? • Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e 10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e 10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why? This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void asked Sep 24, 2014 in Theoretical Physics by Void (1,635 points) [ revision history ] edited Jun 9, 2015 by Void Most voted comments By Noether's theorem, Noether currents are conserved since they are derived from an infinitesimal symmetry; they are observable iff they are gauge invariant. Are you missing something in the answer by Qmechanic? @ArnoldNeumaier I added an extra clarifying question to what bugs me. I am well aware of the conservation and observability; I mainly wanted to inquire about the deeper physical explanation of these facts. The charge doesn't change, as it is an integral over the whole space - only the charge density develops a very localized peak. What should need compensation? Note that bare stuff doesn't matter; it is irrelevant scaffolding removed by renormalization. Just a dumb idea: Maybe this is somehow related to the fact that in the SM, introducing mass terms for the bosons simply as $\frac{1}{2}m\phi^{*}\phi$ without a Higgs field or mechanism breaks the gauge symmetry, and therefore there is no conserved current corresponding to the symmetry broken by the mass term? @Dilaton: Yes, there seems to be something funky about massive or charged elementary bosons.
I was just hoping there is an established argument for what exactly is the crux of this funkiness -- perhaps through such things as charged pions and their relation to $U(1)$. Most recent comments @drake I just meant that for example Proca mass terms such as $\frac{1}{2}m^2 B^{*\nu} B_{\nu}$ break gauge symmetries such as $U(1)$, and could therefore spoil charge conservation. @Dilaton I don't get your point... Here the gauge field is $A$, which doesn't have any mass term. In the SM one wants to give mass to the gauge fields. I think you are wrong. 4 Answers + 3 like - 0 dislike Comments to the question (v3): 1. In contrast to QED with fermionic matter, in QED with bosonic matter, the full Noether current ${\cal J}^{\mu}$ (for global gauge transformations) tends to depend explicitly on the gauge potential $A^{\mu}$, see e.g. Refs. 1-2 and this Phys.SE post. 2. The reason for this difference is that the QED Lagrangian for fermionic (bosonic) matter typically contains one (two) spacetime derivative(s) $\partial_{\mu}$, which after minimal coupling $\partial_{\mu}\to D_{\mu}$ leads to e.g. no (a) quartic matter-matter-photon-photon coupling term, respectively. 3. The full Noether current ${\cal J}^{\mu}$ is a gauge-invariant and conserved quantity, $d_{\mu }{\cal J}^{\mu} \approx 0$. [Here $d_{\mu}\equiv\frac{d}{dx^{\mu}}$ means a total spacetime derivative, and the $\approx$ symbol means equality modulo eom.] The electric charge $Q=\int \! d^3x ~{\cal J}^{0}$ is a conserved quantity. 4. The only physical observables in a gauge theory are gauge-invariant quantities. The quantity $j^{\mu}$, which OP calls the "bare current", is not gauge-invariant, and hence not a consistent physical observable to consider. 5. As Trimok mentions in a comment, the situation for non-Abelian (as opposed to Abelian) Yang-Mills is radically different. The full Noether current ${\cal J}^{\mu a}$ (for global gauge transformations) is conserved, $d_{\mu }{\cal J}^{\mu a} \approx 0$, but ${\cal J}^{\mu a}$ is not gauge-invariant (or even gauge covariant), and hence not a consistent physical observable to consider. There is not a well-defined observable for color charge that one can measure. This follows also from the Weinberg-Witten theorem (for spin 1): A theory with a global non-Abelian symmetry under which massless spin-1 particles are charged does not admit a gauge- and Lorentz-invariant conserved current, cf. Ref. 3. 1. M. Srednicki, QFT, Chapter 61. 2. M.D. Schwartz, QFT and the Standard Model, Section 8.3 and Chapter 9. 3. M.D. Schwartz, QFT and the Standard Model, Section 25.3. answered Sep 24, 2014 by Qmechanic (2,860 points) [ no revision ] Yes, some of these are the observations which led me to this question. But say we have a macroscopic material with bosonic charged particles, subject it to a very strong electrostatic field and measure its charge. Would we have to be measuring $\mathcal{J}^0$ under all conditions? I guess 3. implies yes, and that means we would measure the object to have a charge different from the zero-field situation. The extra "non-bare" charge obviously comes from the field, but this is a very different notion from the usual intuition of "charge". ${\cal J}^{\mu}$ is a covariant quantity, then it should verify $D_\mu {\cal J}^{\mu}=0$, but a conserved quantity corresponds to $\partial_\mu {\cal J}^{\mu}=0$. So, here, are covariant and conserved current compatible notions? (for instance, this is not the case in Yang-Mills theories). I updated the answer.
+ 1 like - 0 dislike I have actually taken the time to compute the equations of motion and the situation is more complicated than I previously thought. The Lagrangian in the static situation $\vec{A} = 0, \partial_t \to 0$ reads $$\mathcal{L} = -\frac{1}{2} |\nabla \phi|^2 - \frac{1}{2} m^2 |\phi|^2 + e^2 |\Phi|^2 |\phi|^2 + \frac{1}{2} |\nabla \Phi|^2 $$ which leads to the EOM: $$(\Delta - m^2 + 2 e^2 |\Phi|^2) \phi = 0$$ $$ (\Delta - 2 e^2 |\phi|^2) \Phi = 0 $$ Amongst other things, this implies that minimally coupled bosons do not act as a usual source of the electromagnetic field at all. As it stands (a more detailed analysis of the non-stationary equations might show otherwise), the bosons actually "ease" their motion (effectively lose mass) in the presence of the electromagnetic field at the cost of weakening (rendering massive and short-range) the electromagnetic field. The coupling constant $e$ really does not have any reasonable interpretation in terms of a usual charge. For instance, the sign of $e$ is irrelevant and the particles and antiparticles of quantized $\phi$ have the same effect on $\Phi$. The $U(1)$ charge is just a conserved quantity with no intuitive interpretation in terms of the usual charge. Hence, the original form of the question does not have a proper meaning; $U(1)$ coupling for bosons simply means something totally different than for fermions. (If you have any more observations or a different view, please contribute, I am interested.) answered Jun 10, 2015 by Void (1,635 points) [ no revision ] Are you allowed to simply put $A=0$? It changes the dynamics. @ArnoldNeumaier: If we still hold $\partial_t \to 0$, a nonzero $\vec{A}$ would only make $|\Phi|^2 \to |\Phi|^2 - |A|^2$ and add an extra $\vec{A}$ equation coupled to $\phi$, similarly as in the $\Phi$ case. + 1 like - 0 dislike Dear mods, I am sorry this answer is not graduate-upward level, but I have not been able to come up with a more sophisticated one. 1) Yes, the charge is truly conserved, but the respective current depends on the 4-potential A. What is confusing you, I think, is that the current for a scalar field depends on the 4-potential $A$, whereas that of a spin-1/2 field does not. This is obviously related to the number of derivatives in the Lagrangian kinetic term and, likewise, to the number of derivatives in the current. It can help you understand what is going on to adopt the canonical formalism (also known as the language of gentlemen), in which in both cases the density (and the charge too) involves the product of the canonical momentum and the field, as it could not be otherwise because the charge is nothing else but the infinitesimal generator of $U(1)$ transformations for both the field and the canonical momentum. 2) What you call the "bare charge", which probably is not a good name since this term is reserved for something else, lacks physical content before fixing a gauge, as it is not a gauge invariant quantity. Note however that one can always choose one's favorite gauge. And if one picks the temporal gauge (\(A_0 = 0\)), the charge does not depend on the 4-potential and the form is the same as your "bare charge", which is conserved in this gauge. 3) The only difference in the movement of spin-one-half particles and spin-zero particles in an electromagnetic field is a term proportional to \[\sigma_{\mu\nu}\, F^{\mu\nu}\] in the equation for spin-1/2 particles.
This term gives rise to the term \[\bf{S}\cdot \bf{B} \] in the non-relativistic limit, that is, the interaction between the spin of the particle and the magnetic field. 4) It can help you to get the equation in your answer to first think of the equation of motion in the non-relativistic limit, which is the Schrödinger equation in an electromagnetic field, that is, the Schrödinger equation with partial derivatives replaced by gauge-covariant ones (for scalar particles; for spin-1/2 there is the additional term I wrote above). answered Jun 12, 2015 by drake (885 points) [ revision history ] edited Jun 12, 2015 by drake + 0 like - 4 dislike The charge $e$ introduced into your Lagrangians/equations is a constant in time by definition; no Noether theorem is necessary to "conserve" it: $\frac{de}{dt}=0$. Another thing is your equations/theory or "charge definition" via equations/solutions (as an integral bla-bla-bla). Here everything depends on your equations. Do not think that equations for bosons are already well established and finalized. For one formulation you get one result, for another you get another. So, there is no "truly" thing, keep it firmly in your mind! answered Jun 9, 2015 by Vladimir Kalitvianski (132 points) [ revision history ] edited Jun 9, 2015 by Vladimir Kalitvianski
20th century in science Science advanced dramatically during the 20th century. There were new and radical developments in the physical, life and human sciences, building on the progress made in the 19th century.[1] In physics, the development of post-Newtonian theories such as special relativity, general relativity, and quantum mechanics led to the development of nuclear weapons. New models of the structure of the atom led to developments in theories of chemistry and the development of new materials such as nylon and plastics. Advances in biology led to large increases in food production, as well as the elimination of diseases such as polio. A vast number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first airplane flight occurred in 1903, and by the end of the century large airplanes such as the Boeing 777 and Airbus A330 flew thousands of miles in a matter of hours. The development of the television and computers caused massive changes in the dissemination of information. The 20th century saw mathematics become a major profession. As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century there were hundreds of specialized areas in mathematics and the Mathematics Subject Classification was dozens of pages long.[2] Every year, thousands of new Ph.D.s in mathematics were awarded, and jobs were available in both teaching and industry. More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 1983 required 500-odd journal articles by about 100 authors, filling tens of thousands of pages. In 1963, Paul Cohen proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1976, Wolfgang Haken and Kenneth Appel used a computer to prove the four color theorem. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. In 1998 Thomas Callister Hales proved the Kepler conjecture. Quantum mechanics Quantum mechanics in the 1920s From left to right, top row: Louis de Broglie (1892–1987) and Wolfgang Pauli (1900–58); second row: Erwin Schrödinger (1887–1961) and Werner Heisenberg (1901–76) In 1924, French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave–particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives — that particles can behave like waves, and waves (radiation) can behave like particles. De Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic.
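To see what the matter-wave idea means in numbers, the de Broglie relation λ = h/p assigns a wavelength to any moving particle. A minimal Python sketch (the chosen speeds are illustrative values only, not taken from the text):

# de Broglie wavelength lambda = h / (m v) for an electron, compared with
# the same calculation for a macroscopic object.
h   = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg

v = 2.2e6                 # m/s, roughly the speed of an electron in hydrogen
print(f"electron:    {h / (m_e * v):.2e} m")      # ~3e-10 m, atomic scale

m_ball, v_ball = 0.057, 50.0                      # a 57 g tennis ball at 50 m/s
print(f"tennis ball: {h / (m_ball * v_ball):.2e} m")  # ~2e-34 m, unobservably small

The electron's wavelength is comparable to atomic dimensions, which is why wave behaviour governs its motion inside the atom, while for everyday objects the wavelength is far too small to matter.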
Einstein stressed the importance of de Broglie's work both explicitly and by building further on it. In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons around a single nucleus in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory - he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle - as well as solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature. In 1926 at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that gave the foundations of quantum wave mechanics. In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.) German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices. For that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg was able to demonstrate that if you were studying an electron in an atom you could say where it was (the electron's location) or where it was going (the electron's velocity), but it was impossible to determine both precisely at the same time. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II. In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, where electrons orbit a dense massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences in agriculture.
The Haber process, or Haber-Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for production of fertilizer and munitions. Food production for half the world's current population depends on this method for producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I. In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. Leo Baekeland invented bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan - who had studied in Europe under Walther Nernst and Max Planck - measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value — the electron's charge — and was thus a confirmation that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, and providing the first direct photoelectric support for Planck's constant. In 1923 Millikan was awarded the Nobel Prize for Physics. In 1909, S. P. L. Sørensen invented the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together most of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye developed the concept of the molecular dipole to describe asymmetric charge distribution in some molecules. In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined circular orbits around the nucleus similar to rungs on a ladder. The Bohr Model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar) - the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons. In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry was in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms which have different atomic weights but which are indistinguishable chemically.
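Soddy's isotope concept is what makes the tabulated atomic weight of an element an abundance-weighted average over chemically indistinguishable forms. A minimal Python sketch for chlorine (the masses and abundances are approximate standard values, used only for illustration):

# Average atomic weight of chlorine from its two stable isotopes:
# chemically indistinguishable, but with different masses.
isotopes = [
    (34.969, 0.7576),   # chlorine-35: mass in u, natural abundance
    (36.966, 0.2424),   # chlorine-37
]

average = sum(mass * abundance for mass, abundance in isotopes)
print(f"average atomic weight of Cl: {average:.2f} u")   # ~35.45 u

The fractional atomic weight of chlorine, about 35.45, that puzzled chemists of the period is simply this weighted average.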
He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry. In 1916, Gilbert N. Lewis published his seminal article "The Atom and the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced the "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book. In 1923, Lewis developed the electron pair theory of acids and bases: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, of course, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics. The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold; while in coordination chemistry, Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick. Quantum chemistry Some place the birth of quantum chemistry at the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler and Fritz London[3] is often recognised as the first milestone in the history of quantum chemistry. This is the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to cite a few. Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems. The situation around 1930 was described by Paul Dirac:[4] "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble." Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry came in 1951 with the seminal paper of Clemens C. J. Roothaan on the Roothaan equations.[5] It opened the avenue to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals which were computed on the most advanced computers of the time. In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller). Glenn T.
Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him one of the few people, along with Albert Einstein and Yuri Oganessian, for whom a chemical element was named during their lifetime. Molecular biology and biochemistry By the mid 20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book on The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features for biologically relevant molecules, they were, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods. This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained by and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin.[6] This discovery led to an explosion of research into the biochemistry of life. In the same year, the Miller–Urey experiment demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions. In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used in the laboratory to manipulate it. PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of DNA of organisms, which culminated in the huge human genome project. An important piece of the double helix puzzle was solved by one of Pauling's students, Matthew Meselson, together with Frank Stahl; the result of their collaboration (the Meselson–Stahl experiment) has been called "the most beautiful experiment in biology". They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen atoms are a component of DNA, they were labelled and therefore tracked in replication in bacteria. Late 20th century In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations.[7] In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions.[8] In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions including Sharpless epoxidation,[9][10] Sharpless asymmetric dihydroxylation,[11][12][13] and Sharpless oxyamination.[14][15][16] In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R.
Buckminster Fuller.[17] In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology.[18] In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol.[19][20][21] In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale.[22] Engineering and technology • The number and types of home appliances increased dramatically due to advancements in technology, electricity availability, and increases in wealth and leisure time. Such basic appliances as washing machines, clothes dryers, furnaces, exercise machines, refrigerators, freezers, electric stoves, and vacuum cleaners all became popular from the 1920s through the 1950s. The microwave oven became popular during the 1980s and had become a standard in most homes by the 1990s. Radios were popularized as a form of entertainment during the 1920s, which extended to television during the 1950s. Cable and satellite television spread rapidly during the 1980s and 1990s. Personal computers began to enter the home during the 1970s–1980s as well. The age of the portable music player grew during the 1960s with the development of the transistor radio, 8-track and cassette tapes, which slowly began to replace record players. These were in turn replaced by the CD during the late 1980s and 1990s. The proliferation of the Internet in the mid-to-late 1990s made digital distribution of music (mp3s) possible. VCRs were popularized in the 1970s, but by the end of the 20th century, DVD players were beginning to replace them, making the VHS obsolete by the end of the first decade of the 21st century. • The triode tube, transistor and integrated circuit successively revolutionized electronics and computers, leading to the proliferation of the personal computer in the 1980s and cell phones and the public-use Internet in the 1990s. • New materials, most notably stainless steel, Velcro, silicone, Teflon, and plastics such as polystyrene, PVC, polyethylene, and nylon came into widespread use for many applications. These materials typically offered tremendous gains in strength, temperature tolerance, chemical resistance, or mechanical properties over the materials known prior to the 20th century. • Semiconductor materials were discovered, and methods of production and purification developed for use in electronic devices. Silicon became one of the purest substances ever produced. Astronomy and space exploration • A much better understanding of the evolution of the universe was achieved, its age (about 13.8 billion years) was determined, and the Big Bang theory on its origin was proposed and generally accepted. • In 1969, Apollo 11 was launched towards the Moon and Neil Armstrong became the first person from Earth to walk on another celestial body. 2. Mathematics Subject Classification 2000 3. W. Heitler and F. London, Wechselwirkung neutraler Atome und Homöopolare Bindung nach der Quantenmechanik, Z. Physik, 44, 455 (1927). 4. P.A.M. Dirac, Quantum Mechanics of Many-Electron Systems, Proc. R. Soc. London, A 123, 714 (1929). 5. C.C.J. Roothaan, A Study of Two-Center Integrals Useful in Calculations on Molecular Structure, J. Chem. Phys., 19, 1445 (1951). 6. Watson, J.
and Crick, F., "Molecular Structure of Nucleic Acids", Nature, 25 April 1953, pp. 737–738.
8. Hérisson, Jean-Louis; Chauvin, Yves. "Catalyse de transformation des oléfines par les complexes du tungstène. II. Télomérisation des oléfines cycliques en présence d'oléfines acycliques", Die Makromolekulare Chemie, Volume 141, Issue 1, 9 February 1971, pp. 161–176. doi:10.1002/macp.1971.021410112
9. Katsuki, T.; Sharpless, K. B. J. Am. Chem. Soc. 1980, 102, 5974. doi:10.1021/ja00538a077
10. Hill, J. G.; Sharpless, K. B.; Exon, C. M.; Regenye, R. Org. Synth., Coll. Vol. 7, p. 461 (1990); Vol. 63, p. 66 (1985).
11. Jacobsen, E. N.; Marko, I.; Mungall, W. S.; Schroeder, G.; Sharpless, K. B. J. Am. Chem. Soc. 1988, 110, 1968. doi:10.1021/ja00214a053
12. Kolb, H. C.; Van Nieuwenhze, M. S.; Sharpless, K. B. Chem. Rev. 1994, 94, 2483–2547. doi:10.1021/cr00032a009
13. Gonzalez, J.; Aurigemma, C.; Truesdale, L. Org. Synth., Coll. Vol. 10, p. 603 (2004); Vol. 79, p. 93 (2002).
14. Sharpless, K. B.; Patrick, D. W.; Truesdale, L. K.; Biller, S. A. J. Am. Chem. Soc. 1975, 97, 2305. doi:10.1021/ja00841a071
15. Herranz, E.; Biller, S. A.; Sharpless, K. B. J. Am. Chem. Soc. 1978, 100, 3596–3598. doi:10.1021/ja00479a051
16. Herranz, E.; Sharpless, K. B. Org. Synth., Coll. Vol. 7, p. 375 (1990); Vol. 61, p. 85 (1983).
17. "The Nobel Prize in Chemistry 1996". Nobelprize.org. The Nobel Foundation. Retrieved 2007-02-28.
18. "Benjamin Franklin Medal awarded to Dr. Sumio Iijima, Director of the Research Center for Advanced Carbon Materials, AIST". National Institute of Advanced Industrial Science and Technology. 2002. Archived from the original on 2007-04-04. Retrieved 2007-03-27.
19. Holton, Robert A.; Somoza, Carmen; Kim, Hyeong Baik; Liang, Feng; Biediger, Ronald J.; Boatman, P. Douglas; Shindo, Mitsuru; Smith, Chase C.; Kim, Soekchan; et al. "First total synthesis of taxol. 1. Functionalization of the B ring", J. Am. Chem. Soc. 1994, 116(4), 1597–1598.
20. Holton, Robert A.; Kim, Hyeong Baik; Somoza, Carmen; Liang, Feng; Biediger, Ronald J.; Boatman, P. Douglas; Shindo, Mitsuru; Smith, Chase C.; Kim, Soekchan; et al. "First total synthesis of taxol. 2. Completion of the C and D rings", J. Am. Chem. Soc. 1994, 116(4), 1599–1600.
21. Holton, Robert A.; Juo, R. R.; Kim, Hyeong B.; Williams, Andrew D.; Harusawa, Shinya; Lowenthal, Richard E.; Yogai, Sadamu. "A synthesis of taxusin", J. Am. Chem. Soc. 1988, 110(19), 6558–6560.
22. "Cornell and Wieman Share 2001 Nobel Prize in Physics". NIST News Release. National Institute of Standards and Technology. 2001. Archived from the original on 2007-06-10. Retrieved 2007-03-27.
23. Thomson, Sir William (1862). "On the Age of the Sun's Heat". Macmillan's Magazine. 5: 288–293.
24. "The Nobel Prize in Physiology or Medicine 1962". NobelPrize.org. Nobel Media AB. Retrieved November 5, 2011.
25. "James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin". Science History Institute. Retrieved 20 March 2018.
Materials in Electronics/Schrödinger's Equation

Schrödinger's Equation is a differential equation that describes the evolution of the wavefunction Ψ(x) over time. By solving the differential equation for a particular situation, the wave function can be found. It is a statement of the conservation of energy of the particle.

Schrödinger's Equation in 1-Dimension

In the simplest case, a particle in one dimension, it is derived as follows. Conservation of energy gives

    T(x) + V(x) = E

where:
• T(x) is the kinetic energy of the particle
• V(x) is the potential energy of the particle
• E is the energy of the particle, which is constant

Substituting for the kinetic energy of a wave, T = ħ²k²/2m (as shown here), gives:

    ħ²k²/2m + V(x) = E

Now we need to get this differential equation in terms of Ψ(x). Assume that Ψ(x) is given by

    Ψ(x) = A e^(ikx)

Double differentiating our trial solution:

    d²Ψ(x)/dx² = −k² Ψ(x)

Rearranging for k²:

    k² = −(1/Ψ(x)) d²Ψ(x)/dx²

Substituting this in the differential equation gives:

    −(ħ²/2m)(1/Ψ(x)) d²Ψ(x)/dx² + V(x) = E

Multiplying through by Ψ(x) gives us Schrödinger's Equation in 1D:

    −(ħ²/2m) d²Ψ(x)/dx² + V(x)Ψ(x) = EΨ(x)        [Schrödinger's Equation in 1D]

Solving the Schrödinger Equation gives us the wavefunction of the particle, which can be used to find the electron distribution in a system. This is a time-independent solution – it will not change as time goes on. It is straightforward to add time-dependence to this equation, but for the moment we will consider only time-independent wave functions, so it is not necessary. The time-dependent wavefunction is denoted by Ψ(x, t).

While this equation was derived for a specific function, a complex exponential, it is more general than it appears, as Fourier analysis can express any continuous function over a range L as a sum of functions of this kind:

    f(x) = Σₙ Aₙ e^(i2πnx/L)

The Schrödinger Equation as an Eigenequation

The Schrödinger Equation can be expressed as an eigenequation of the form:

    Hψ(x) = Eψ(x)        [Schrödinger Equation as an Eigenequation]

where:
• ψ is the eigenfunction (or eigenstate; both mean the same thing)
• E is the eigenvalue corresponding to the energy
• H is the Hamiltonian operator, given by:

    H = −(ħ²/2m) d²/dx² + V(x)        [1D Hamiltonian Operator]

This means that by applying the operator H to the function ψ(x), we obtain a solution that is simply a scalar multiple of ψ(x). This multiple is E, the energy of the particle. This also means that every wavefunction (i.e. every solution to the Schrödinger Equation) has a particular associated energy.

Higher Dimensions

The equation that we just derived is the Schrödinger equation for a particle in one dimension. Adding more dimensions is not difficult. The three-dimensional equation is:

    −(ħ²/2m) ∇²ψ(r) + V(r)ψ(r) = Eψ(r)

where ∇² is the Laplace operator, which, in Cartesian coordinates, is given by:

    ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

See this page for the derivation. It is also possible to add more dimensions, but this does not generally yield useful results, given that we inhabit a 3D universe.

In order to integrate Schrödinger's equation with relativity, Paul Dirac showed that electrons have an additional property, called spin. This does not actually mean the electron is spinning on an axis, but in some ways it is a useful analogy. The spin of an electron can take one of two values. We can incorporate spin into the wavefunction Ψ by multiplying by an additional component – the spin wavefunction σ(s), where s is ±1/2. These two states are often just called "spin-up" and "spin-down", respectively.
The full, time-dependent wavefunction is now given by:

    Ψ(x, t)σ(s)

Conditions on the Wavefunction

In order to represent a particle's state, the wavefunction must satisfy several conditions:
• It must be square-integrable, and moreover, the integral of the wavefunction's probability density over all space must be equal to unity, as the electron must exist somewhere in all of space:

    ∫ |Ψ(r)|² dV = 1

  For 1D systems this is:

    ∫ |Ψ(x)|² dx = 1

• ψ must be continuous, because its derivative, which is proportional to momentum, must be finite.
• dψ/dx must be continuous, because its derivative, which is proportional to kinetic energy, must be finite.
• ψ must satisfy the boundary conditions. In particular, as x tends to infinity, ψ tends to zero. (This is required to satisfy the normalisation condition above.)

Examples of Use of Schrödinger's Equation

Schrödinger's Equation can be used to find wavefunctions for many physical systems. See Confined Particles for more information.

Summary

• Schrödinger's Equation (SE) is a statement of the Law of Conservation of Energy.
• In one dimension it is given by −(ħ²/2m) d²ψ/dx² + V(x)ψ(x) = Eψ(x).
• By solving the equation, one can obtain the wavefunction, ψ.
• From the wavefunction we find the distribution of the electron's probability function.
• The probability of the electron existing over all space must be 1.
• SE gives a set of discrete wavefunctions, each with an associated energy.
• An electron cannot exist at an energy other than these.
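As a minimal numerical illustration of the eigenequation form above – a sketch only, assuming natural units (ħ = m = 1), a finite-difference grid, and an arbitrary harmonic-oscillator potential, and using the NumPy library – the Hamiltonian can be written as a matrix and diagonalised, its eigenvalues approximating the allowed energies:

    import numpy as np

    # Sketch: solve the 1D time-independent Schrodinger equation H psi = E psi
    # by central finite differences, in natural units (hbar = m = 1).
    N = 1000                          # number of grid points
    x = np.linspace(-10.0, 10.0, N)   # spatial grid
    dx = x[1] - x[0]

    V = 0.5 * x**2                    # example potential: harmonic oscillator

    # Second-derivative operator as a tridiagonal matrix
    main = np.full(N, -2.0)
    off = np.ones(N - 1)
    d2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

    H = -0.5 * d2 + np.diag(V)        # Hamiltonian: kinetic term + potential term

    E, psi = np.linalg.eigh(H)        # eigenvalues E_n and eigenvectors psi_n

    print(E[:4])                      # lowest energies; for this potential, close to 0.5, 1.5, 2.5, 3.5

The columns of psi returned by eigh are the discretised wavefunctions ψₙ(x); dividing each column by √dx normalises it so that ∫|ψ|² dx = 1, matching the condition above. A discrete set of allowed energies, one per wavefunction, is exactly what the summary bullets describe.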
45: Schrodinger

There was no alt-text until you moused over

Explanation

This comic is a joke creating a humorously false synthesis, combining the principles of quantum superposition with the effect of reading a comic one panel at a time. Schrödinger's cat is a thought experiment that illustrates the notion that a particle only resolves into a definite state upon observation, and that until this observation it is in all of its possible states simultaneously. In the thought experiment a cat is both dead and alive until observation; likewise, in this comic, the comic is both funny and unfunny until it is observed (or read). Black Hat and Cueball are likening the last panel to the box with the cat: until you read it, it is in a mixed state (a superposition) of funny and unfunny. In the last panel Black Hat says "Shit." The joke is that after reading the last panel the comic is both funny (as it is unexpected) and not funny (as the last line is a non sequitur and therefore there is no climax) at the same time, thus proving Black Hat and Cueball wrong, hence their discontent, expressed with the word "shit". The title text, which Randall here calls the alt-text, suggests that the alt-text did not exist until the mouse-over action occurred.

Schrödinger's cat

Schrödinger thought the Copenhagen interpretation was absurd, and devised the thought experiment below to show this. The experiment goes as follows: put a cat in a box, he said, with a device triggered by the decay of an atom with a half-life of one hour that would release a poisonous gas if triggered. Then, after waiting an hour, the Copenhagen interpretation would say that the atom is in a superposition of decayed and undecayed states, and thus, by extension, the cat would be in a superposition of alive and dead states. Only when the box is opened would the wave function for the cat collapse into either the alive or the dead state. The thought experiment is not meant to be taken literally, as every interaction of a particle with another constitutes an observation, and many particles must interact for a cat to die. Still, his argument was that since it is absurd for a cat to be both alive and dead, it is absurd for an atom to be both decayed and undecayed. If this experiment were actually performed, the cat would not be both dead and alive.

Transcript

[Black Hat and Cueball are standing next to each other. Above them the text is written in a box with shades around it.]
Schrödinger's Comic
[Black Hat and Cueball are still standing next to each other, but Cueball has lifted his arms above his head. The text is again written in a box with shades around it.]
[Black Hat and Cueball are still standing next to each other; Cueball's arms are down again. The text is again written in a box with shades around it.]
[Black Hat and Cueball are still standing next to each other. Cueball has become smaller and smaller through the three frames after the first, quite clearly here in the last panel. The text is again written in a box with shades around it.]

Trivia

• This was the 42nd comic originally posted to LiveJournal.
• There had been a break of almost a month between this and the previous comic.
• This time was probably used to prepare the launch of the new xkcd site.
• Original title: "Drawing: Schrodinger"
• For the first time in eight comics, and only the second time since the first day on LiveJournal, the weekday is not part of the title on LiveJournal.
• But apart from the very next comic, the extra word "Drawing" was still added to the title for this and the four comics after the next, in spite of the simultaneous release on xkcd.
• There was no original Randall quote for this comic.
• This was the first comic to be posted simultaneously (i.e. on the same day) on both LiveJournal and the new xkcd site.
• This comic was thus one of the last 11 comics posted on LiveJournal.
• The Schrödinger equation was extended by Paul Dirac only two years later, in 1928, with the Dirac equation, which combined Schrödinger's quantum mechanics with Einstein's special relativity.
• Black Hat's hat is beginning to shorten from its top-hat look, although its height varies between panels. (As does Cueball's height compared to Black Hat's.)
Szymon Bęczkowski's PhD dissertation owiecc Jan 22nd, 2014 389 Never 1. % 2. %  Power converter for LED based intelligent light sources 3. % 4. %  Created by Szymon Bęczkowski on 2008-10-15. 5. % 7. % preamble (fold) 9. %!TEX TS-program = xelatex 10. %!TEX encoding = UTF-8 Unicode 12. \documentclass[12pt,a4paper,titlepage,onecolumn,openright,twoside]{report} 14. % inverted text color begin 15. % \usepackage{empheq} // not necessary 16. % \everymath{\color{white}} 17. % \everydisplay{\color{white}} 18. % inverted text color end 20. \usepackage[colorlinks=true,citecolor=black,linkcolor=black,pdftitle={Power converter for LED based intelligent light sources}]{hyperref} % clickable references 22. \setlength{\parskip}{0pt} % no spacing between paragraphs 24. \usepackage{amsmath} 25. \usepackage{mathspec} 26. \usepackage{microtype} % requires microtype 2.5 27. \LoadMicrotypeFile{pmn} 28. \usepackage{xltxtra,eukdate} 29. \defaultfontfeatures{Mapping=tex-text} 30. \setmainfont[Numbers=OldStyle]{Minion Pro} 31. \setallsansfonts[Numbers={OldStyle,Proportional},Scale=MatchLowercase]{Minion Pro} 32. \setallmonofonts[Numbers=OldStyle,Scale=MatchLowercase]{Minion Pro} 33. \setmathsfont(Digits,Latin)[Scale=MatchLowercase]{Minion Pro} 34. \setmathsfont(Greek)[Scale=MatchLowercase]{Minion Pro} 35. \setmathrm{Minion Pro} 36. \exchangeforms{phi} 37. \setminwhitespace[750] 39. % control widows and orphans 40. \raggedbottom 41. \widowpenalty=500 42. \clubpenalty=500 44. % map ligatures to plaintext for searching and copy&paste 46. % Smaller captions 47. \usepackage[hang,small]{caption} 48. %\DeclareCaptionLabelFormat{bf-parens}{(\textbf{#2})} 49. %\captionsetup{labelformat=bf-parens,labelsep=quad} 51. % Set text colour 52. \usepackage{color} % use: \textcolor{BrickRed}{text} 54. % Title logo 55. \usepackage{titlepic} 57. % This is now the recommended way for checking for PDFLaTeX: 58. \usepackage{ifpdf} 60. %\newif\ifpdf 61. %\ifx\pdfoutput\undefined 62. %\pdffalse % we are not running PDFLaTeX 63. %\else 64. %\pdfoutput=1 % we are running PDFLaTeX 65. %\pdftrue 66. %\fi 68. \ifpdf 69. \usepackage[pdftex]{graphicx} 70. \else 71. \usepackage{graphicx} 72. \fi 74. % Hyphenation 75. \hyphenation{pro-pos-ing con-gress in-ter-na-tion-al} 77. % Small caps 78. \newcommand{\dc}{\textsc{dc}} 79. \newcommand{\ac}{\textsc{ac}} 80. \newcommand{\uv}{\textsc{uv}} 81. \newcommand{\ir}{\textsc{ir}} 82. \newcommand{\cct}{\textsc{cct}} 83. \newcommand{\led}{\textsc{led}} 84. \newcommand{\hid}{\textsc{hid}} 85. \newcommand{\rgb}{\textsc{rgb}} 86. \newcommand{\rgbw}{\textsc{rgbw}} 87. \newcommand{\cie}{\textsc{cie}} 88. \newcommand{\xyz}{\textsc{xyz}} 89. \newcommand{\cri}{\textsc{cri}} 90. \newcommand{\tff}{\textsc{tff}} 91. \newcommand{\ffb}{\textsc{ffb}} 92. \newcommand{\pcb}{\textsc{pcb}} 93. \newcommand{\pwm}{\textsc{pwm}} 94. \newcommand{\dsp}{\textsc{dsp}} 95. \newcommand{\pcm}{\textsc{pcm}} 96. \newcommand{\am}{\textsc{am}} 97. \newcommand{\ffbtff}{\textsc{ffb&tff}} 98. \newcommand{\ccfb}{\textsc{ccfb}} 99. \newcommand{\fwhm}{\textsc{fwhm}} 100. \newcommand{\mosfet}{\textsc{mosfet}} 101. \newcommand{\srgb}{{\footnotesize{}s}\textsc{rgb}} 102. \newcommand{\srh}{\textsc{srh}} 103. \newcommand{\matlab}{\textsc{matlab}} 104. \newcommand{\ivxyz}{\textsc{ivxyz}} 105. \newcommand{\dcdc}{\textsc{dc-dc}} 107. \newcommand{\abbr}[1]{{\fontspec[Numbers=OldStyle]{Minion Pro}\textsc{#1}\fontspec[Numbers=Lining]{Minion Pro}}} 108. \def\mathbi#1{{\fontspec[Numbers=Lining]{Minion Pro}\textbf{\em #1}}} 110. % Math 111. 
\newcommand{\ud}{\mathrm{d}} % straight "d" for integrals 112. \newcommand{\deriv}[2]{\frac{\ud{}#1}{\ud{}#2}} 113. \newcommand{\pderiv}[2]{\frac{\partial{}#1}{\partial{}#2}} 114. \newcommand{\avg}[1]{\big< #1 \big>} % average 116. % Typographic 117. \newcommand{\hsp}{\hspace{1pt}}% hair space 118. \frenchspacing % single space after sentence 119. \setlength{\parindent}{1em} 121. % Pretty sidenotes \marginpar{Sidenote} 122. \usepackage{mparhack} % Fixes marginpar appearing on the wrong side of the page 123. \let\oldmarginpar\marginpar 124. \renewcommand\marginpar[1]{\-\oldmarginpar[\raggedleft\footnotesize\textsc{#1}]{\raggedright\footnotesize\textsc{#1}}} 126. % TOC styling 127. \usepackage[titles]{tocloft} % chage style of TOC, [titles] to control the style of TOC heading 129. % distance between title and page number 130. \renewcommand{\cftchapleader}{\hspace{8pt}} 131. \renewcommand{\cftsecleader}{\hspace{8pt}} 132. \renewcommand{\cftsubsecleader}{\hspace{8pt}} 133. % subsections italic 134. \renewcommand{\cftsubsecfont}{\it} 135. % Trick to flush page numbers left within their box 136. \makeatletter 137. \renewcommand{\@pnumwidth}{0.1em} 138. \makeatother 139. % page number colours 140. \renewcommand{\cftchappagefont}{\color[rgb]{0.715,0.05,0.07}} 141. \renewcommand{\cftsecpagefont}{\color[rgb]{0.715,0.05,0.07}} 142. \renewcommand{\cftsubsecpagefont}{\color[rgb]{0.715,0.05,0.07}} 143. % no dots fill 144. \renewcommand{\cftchapafterpnum}{\cftparfillskip} 145. \renewcommand{\cftsecafterpnum}{\cftparfillskip} 146. \renewcommand{\cftsubsecafterpnum}{\cftparfillskip} 147. % indents 148. \cftsetindents{chapter}{0pt}{24pt} 149. \cftsetindents{section}{0pt}{24pt} 150. \cftsetindents{subsection}{-1000pt}{1024pt} 151. % vertical space 152. \setlength\cftbeforechapskip{14pt} 154. % Chapter, section and subsection headings styling 155. \makeatletter 156. % Chapter 1, Chapter 2, ... heading styling 157. \def\@makechapterhead#1{% 158.   { \parindent \z@ \raggedright \normalfont 159.    \ifnum \c@secnumdepth >\m@ne   160.        \huge\thechapter\hspace{8pt}\fontspec[Numbers=Lining]{Minion Pro Subhead}#1\par\nobreak\vspace{28pt} 161.        \par\nobreak 162.    \fi 163.    \interlinepenalty\@M 164.  } 165. } 166. % List of abbreviations, Bibliography, ... heading styling 167. \def\@makeschapterhead#1{% 168.   { \parindent \z@ \raggedright 169.    \normalfont 170.    \interlinepenalty\@M 171.    \huge\fontspec[Numbers=Lining]{Minion Pro Subhead} #1\par\nobreak\vspace{28pt} 172.  } 173. } 175. \renewcommand{\section}{\@startsection% 176.         {section}%                      % name 177.         {1}%                            % level 178.         {0pt}%                          % indent 179.         {-\baselineskip}%       % before skip 180.         {\baselineskip}%        % after skip 181.         {\normalfont\sc}}       % the style 182. \renewcommand{\subsection}{\@startsection% 183.         {subsection}%           % name 184.         {2}%                            % level 185.         {0pt}%                          % indent 186.         {-\baselineskip}%       % before skip 187.         {\baselineskip}%        % after skip 188.         {\normalfont\it}}       % the style 189. \makeatother 191. % Titlepage info 192. \title{\color[rgb]{0.196,0.196,0.204}Control and driving methods for \led{} based intelligent light~sources} 193. \author{\color[rgb]{0.196,0.196,0.204}Szymon Bęczkowski} 194. \date{\color[rgb]{0.196,0.196,0.204} } 195. \titlepic{\vspace{8cm}\includegraphics{graphics/ETlogo.eps}} 196. 
% preamble (end) 198. \begin{document} 200. \ifpdf 201. \DeclareGraphicsExtensions{.pdf, .jpg, .tif} 202. \else 203. \DeclareGraphicsExtensions{.eps, .jpg} 204. \fi 206. %%%%%%%%%%%%%%% 207. %%   Title   %% 208. %%%%%%%%%%%%%%% 210. \maketitle 212. %%%%%%%%%%%%%%% 213. %% Abstract  %% 214. %%%%%%%%%%%%%%% 216. \newpage 217. \thispagestyle{empty} 218. \mbox{} 219. \begin{abstract} 220. High power light-emitting diodes allow the creation of luminaires capable of generating saturated colour light at very high efficacies. Contrary to traditional light sources like incandescent and high-intensity discharge lamps, where colour is generated using filters, \led{}s use additive light mixing, where the intensity of each primary colour diode has to be adjusted to the needed intensity to generate specified colour. 222. The function of \led{} driver is to supply the diode with power needed to achieve the desired intensity. Typically, the drivers operate as a current source and the intensity of the diode is controlled either by varying the magnitude of the current or by driving the \led{} with a pulsed current and regulate the width of the pulse. It has been shown previously, that these two methods yield different effects on diode's efficacy and colour point. 224. A hybrid dimming strategy has been proposed where two variable quantities control the intensity of the diode. This increases the controllability of the diode giving new optimisation possibilities. It has been shown that it is possible to compensate for temperature drift of white diode's colour point using hybrid dimming strategy. Also, minimisation of peak wavelength shift was observed for InGaN diodes. 226. Control of trichromatic luminaires, dimmed with either pulse-width modulation or amplitude modulation, cannot be optimised. Introduction of hybrid dimming mechanism creates three additional degrees of freedom therefore luminaire parameters such as luminous flux, efficacy and colour quality can be maximised. Simulations show that the gamut of the device can be increased, especially in the cyan colour range for \rgb{} luminaires. 228. A current-voltage model of light-emitting diode is presented. It utilises the fact that instantaneous values of diode's current and voltage correspond uniquely to a set of diode's colorimetric properties, like tristimulus values. This model can be used for colorimetric feedback in colour control loop. The model was created in thermal steady-state conditions and its validity has been tested with a diode driven with a pulsed current. The model can also be used to create highly accurate luminaire model. 230. Finally, a dual interleaved buck converter has been proposed for driving high power light-emitting diodes. Interleaving two converters lowers the output ripple current thus lowering the requirement on the output capacitor. It has been shown that at the expense of cost and increased complexity an efficient design can be created for supplying high current to \led{}s without the need for electrolytic capacitors. 231. \end{abstract} 232. \newpage 233. \mbox{} 234. \thispagestyle{empty} 235. \newpage 237. %%%%%%%%%%%%%%% 238. %%    TOC    %% 239. %%%%%%%%%%%%%%% 241. \setcounter{tocdepth}{2} 242. \setcounter{page}{1} 243. \tableofcontents 245. %%%%%%%%%%%%%%% 246. %%   Abbr.   %% 247. %%%%%%%%%%%%%%% 248. % abbreviations (fold) 249. \chapter*{List of abbreviations, symbols and physical constants} 250. \begin{tabular}{p{1.5cm}l} % List of abbreviations 251. \ac & alternating current \\ 252. 
\am & amplitude modulation (same as \textsc{ccr}) \\ 253. \dc & direct current \\ 254. \textsc{ic}  & integrated circuit \\ 255. \textsc{iv} & current-voltage \\ 256. \textsc{pi}  & proportional–integral (controller) \\ 257. \textsc{pc}  & phosphor-converted (diode) \\ 258. \textsc{adc} & analog-digital conversion \\ 259. \textsc{ccr} & continous current reduction (same as \textsc{am}) \\ 260. \cct & correlated colour temperature \\ 261. \textsc{cff} & critical flicker frequency \\ 262. \cie & Commission Internationale de l'Eclairage \\ 263. \textsc{cpu} & central processing unit \\ 264. \cri & colour rendering index \\ 265. \textsc{dac} & digital-analog conversion \\ 266. \textsc{dcr} & \dc{} resistance \\ 267. \dsp & digital signal processor \\ 268. \textsc{emi} & electro-magnetic interference \\ 269. \ffb & flux feedback \\ 270. \hid & high-intensity discharge \\ 271. \led & light-emitting diode \\ 272. \textsc{pcb} & printed circuit board \\ 273. \textsc{pfc} & power factor correction \\ 274. \pwm & pulse width modulation \\ 275. \textsc{rgb} & red, green and blue \\ 276. \srh & Shockley-Read-Hall recombination \\ 277. \tff & temperature feed forward \\ 278. \textsc{tim} & thermal interface material \\ 279. \abbr{tcs} & test colour sample \\ 280. \textsc{vrm} & voltage regulation module \\ 281. \ccfb & colour coordinates feedback \\ 282. \fwhm & full width at half maximum \\ 283. \mosfet & metal-oxide-semiconductor field-effect transistor 284. \end{tabular} \\ \newline  \newline 285. \begin{tabular}{p{1.5cm}lc} % List of symbols 286. $λ$ & wavelength & m \\ 287. $ν$ & frequency of optical radiation & Hz \\ 288. $η$ & efficiency & p.u. \\ 289. $η_{lum}$ & luminous efficacy & lm/W \\ 290. $t$ & time & s \\ 291. $τ$ & time constant & s \\ 292. $F$ & luminous flux & lm \\ 293. $A$ & area & $\phantom{^2}\mathrm{m^2}$ \\ 294. $C$ & capacitance & F \\ 295. $L$ & inductance & H \\ 296. $R$ & resistance & Ω \\ 297. $P$ & power & W \\ 298. $P_{rad}$ & radiometric power & W \\ 299. $i$ & current & A \\ 300. $v$ & voltage & V \\ 301. $d$ & duty cycle & p.u. \\ 302. $f$ & frequency & Hz \\ 303. $T$ & temperature & °C, K \\ 304. $R_i$ & colour rendering index for \textit{i}-th sample & — \\ 305. $R_a$ & general colour rendering index & — \\ 306. $E_g$ & bandgap energy & eV \\ 307. $C_{th}$ & thermal capacitance & J/K \\ 308. $R_{th}$ & thermal resistance & K/W \\ 309. \textsc{ct/cct} & colour temperature/corelated colour temperature & K \\ 310. \textsc{fwhm} & full width at half maximum & eV, nm \\ 311. $x,y$ & colour coordinates in \abbr{cie 1931} colour space & — \\ 312. $u,v$ & colour coordinates in \abbr{cie 1960} colour space & — \\ 313. $u'\!,v'$ & colour coordinates in \abbr{cie 1976} ($L'\!,u'\!,v'$) colour space & — \\ 314. $a^{*},b^{*}$ & colour coordinates in \abbr{cie 1976} ($L^{*}\!,a^{*}\!,b^{*}$) colour space & — \\ 315. $\overline{x},\overline{y},\overline{z} $ & colour matching functions & p.u. \\ 316. $X,Y,Z$ & tristimulus values & W, lm \\ 317. \end{tabular} \\ \newline  \newline 318. \begin{tabular}{l@{~=~}ll} % List of constants 319. $e$ & $1.6022\cdot{}10^{-19}\thinspace{}\mathrm{C}$ & elementary charge \\ 320. $c$ & $2.9979\cdot{}10^{8}\thinspace{}\mathrm{m/s}$ & speed of light in vacuum \\ 321. $h$ & $6.6261\cdot{}10^{-34}\thinspace{}\mathrm{Js}$ & Planck constant \\ 322. $h$ & $4.1357\cdot{}10^{-15}\thinspace{}\mathrm{eVs}$ & Planck constant \\ 323. $k$ & $1.3807\cdot{}10^{-23}\thinspace{}\mathrm{J/K}$ & Boltzman constant \\ 324. 
$k$ & $8.6175\cdot{}10^{-5}\thinspace{}\mathrm{eV/K}$ & Boltzman constant \\ 325. \end{tabular} 326. % abbreviations (end) 328. %%%%%%%%%%%%%%% 329. %% Chapter 1 %% 330. %%%%%%%%%%%%%%% 331. \chapter{Introduction} % (fold) 332. \label{cha:introduction} 333. % 334. \section[Background]{background} 335. Intelligent lighting such as projectors and moving heads are used, whenever there is a need for a light source that can be tuned to individual needs during lamp operation. Typically these lamps can have programmable light colour, intensity, focus and sometimes the ability to project special shapes such as patterns or logos. Intelligent lighting is commonly used in theatres, concerts halls and clubs. 336. \begin{figure}[h]% fig:Projectors 337.   \centering 338.   \includegraphics{graphics/Introduction/Projectors.pdf} 339.  \caption{Martin smart\textsc{mac} moving head (left) and Exterior~1200 image projector (right).} 340.  \label{fig:Projectors} 341. \end{figure} 343. \noindent{}Two examples of such a lamps are shown in figure~\ref{fig:Projectors}. Moving head lamps are designed with the ability to pan and tilt the light beam. Projectors, on the other hand, are fixed at specific target during installation. Both lamp types can be designed to provide a narrow or diffused beam of light, depending on the need. 345. Intensity of the light provided by a luminaire must be high, so typically high-intensity discharge lamps are used as a light source. A compact passive-cooled luminaire like the smart\textsc{mac} uses a 150\thinspace{}\textsc{w} \hid{} lamp while the Exterior~1200 uses a 1200\thinspace{}\textsc{w} metal halide lamp yielding ca.~10 and 92\thinspace{}klm, respectively. Because the output light beam shape is very different from the light source profile, the optics inside the lamp must collect and collimate. A portion of the light is lost in the process. The output light flux for aforementioned luminaires is ca.~2700\thinspace{}lm for the smart\textsc{mac} and 20–35\thinspace{}klm for the Exterior 1200 (depending on the configuration). As can be seen, the efficiency of the luminaire is around 30\thinspace{}\%, not including the power needed to drive the control electronics. 347. \begin{figure}[t]% fig:ProjectorsMACIII 348.   \centering 349.   \includegraphics{graphics/Introduction/ProjectorsMACIII.pdf} 350.  \caption{Martin \textsc{mac iii} moving head luminaire and its blow-out diagram showing the 1.5\thinspace{}kW discharge light source, dimming mechanism and the colour wheel.} 351.  \label{fig:ProjectorsMACIII} 352. \end{figure} 354. Above flux values are valid for generation of white light. Colour light is created by inserting a colour filter in the light beam (fig.~\ref{fig:ProjectorsMACIII}). The transmissivity of the filters depends on the desired colour and its saturation. Saturated red and blue colour filters can have transmission as low as few percent. This lowers the overall efficacy of the luminaire. Moreover, it is hard to create filters with sharp characteristics to create saturated colours. Transition from one colour to another requires mechanical change of the filter in the path of the light. Depending on the filter arrangement, it may be necessary to transition through a different colours to obtain the desired colour. 356. Dimming of high power \hid{} lamps is done by either a high frequency lamp ballast (limited dimming range) or inserting a specially shaped shutter in the light path (fig.~\ref{fig:ProjectorsMACIII}). 
When the light is to be turned off for a short period of time, the full-blocking shutter is placed in front of the light source. This is done because metal halide lamps cannot reignite unless they are cold: the high pressure obtained during normal operation prevents the lamp from restarting. This requires a cool-down of a few minutes after the lamp is turned off. 358. The lifetime of \hid{} lamps is limited to a few thousand hours. At the end of their lifetime, high-intensity discharge lamps exhibit a phenomenon called cycling, where the lamp periodically turns on and off because of ageing effects. Sometimes the stress caused by off–on cycles or the increased pressure of the gas inside the lamp can lead to an explosion of the lamp tube, which contains a small amount of mercury. 360. The constantly increasing intensity of \led{} sources already allows their use in low power (watts–tens of watts) applications such as replacements for conventional halogen and incandescent light sources in general lighting. Slowly, they are entering medium power (hundreds of watts) applications like moving heads. \led{}s do not contain any toxic chemicals and have a much longer lifetime than \hid{} lamps. The long lifetime decreases the service costs of the luminaire. % high mounting, keeping track of lifetime, lamp may disable to prevent blow 362. Compared to incandescent and discharge lamps, \led{}s emit narrow-spectrum light that produces saturated colours. This means that white light cannot be generated directly by a single \led{} die. White light can be generated using either two or more pure colour \led{}s or a phosphor-coated blue \led{}. Similarly, to obtain any other colour in the device gamut, a combination of primary colours must be used. The generated colour range is not limited by the number of colour filters used, but by the accuracy of the colour control scheme. The high bandwidth of intensity control allows for fast transitions between colours and for strobing without the need for a mechanical shutter. Removing mechanical parts like the colour wheel or shutters decreases the luminaire's weight and size and, at the same time, increases the reliability of the device. 364. Using light-emitting diode sources in luminaires comes at the price of increased complexity in the colour control, thermal management, optics and power electronics controlling the \led{} light sources. The colour control system depends on the accuracy needed in the luminaire. It can be as simple as open loop control or as advanced as a feedback loop compensating for the diode's changing colour point. An \led{}'s properties depend on the junction temperature, therefore the diodes should be kept as cool as possible. High power \led{}s can dissipate as much as 80\thinspace{}\% of their input power in a chip area of a few square millimetres. The thermal management system should remove the heat from the \led{} structure as efficiently as possible, or the junction temperature will increase and the light intensity and lifetime of the \led{} will drop. 366. \marginpar{inled project}This work is part of the \textsc{inled} project (Intelligent Light Emitting Diodes), a collaboration between Martin Professional, Aalborg University (Institute for Energy Technology and Institute for Nano-Physics) and the Danish National Advanced Technology Foundation (Højteknologifonden). The objective of the project is to increase the knowledge of \led{} based luminaire technology. The knowledge gained from the project should be useful for designing substitutes for existing technologies. 
The project focuses on creating: optics with nano scale coatings that would maximise the light transmission from \led{}s to the output port, efficient power electronic drivers supplying power to the diodes and a heat management system for the luminaire. 368. \marginpar{motivation}The aim of the work shown in this dissertation is to research the driving of high power light-emitting diodes and to create an intelligent \led{} driver that would fully utilise the benefits of solid-state lighting technology.   370. \section[Related work]{related work} % Related work 371. A research of the state of the art in solid-state lighting has been conducted to show current solution of the existing problems and show potentially beneficial research areas. Three main topics are covered: properties of \led{} devices, control of luminaires and methods of driving light-emitting diodes. 373. \subsection{\led{} control} 374. With the introduction of high power blue GaN \led{}s by Nakamura of Nichia Corporation \led{}s finally gained the potential to enter general illumination market. Four basic diode colours: red, green, blue and white became available on the market as high power devices in the mid 2000s. At this point scientist started investigating their driving properties. At that time  phosphor-converted white \led{}s had high enough efficacy for practical lighting applications. Lighting Research Center at Rensselaer Polytechnic Institute pioneered in colorimetric research of high-power devices. \marginpar{dimming properties}Dyble~et~al.~\cite{Dyble2005} analysed the chromaticity shifts of \textsc{pc}~white \led{} under pulse width modulation and amplitude modulation dimming. A year later research on red, green, blue and white diodes was conducted by Gu~et~al.~\cite{Gu2006} on the effects of different dimming schemes on diodes' spectra, luminous flux and efficacy. In 2007, Manninen and Orreveteläinen~\cite{Manninen2007} from Helsinki University of Technology published their work on spectral and thermal behaviour of AlGaInP diodes under \pwm{} dimming. General consensus from these three research papers was that \pwm{} dimming method provides linear dimming in 0–100\thinspace{}\% range with low colour shifts. \am{} on the other hand yielded higher efficacies at lower current levels but at the price of much higher colour shifts and limited dimming range. 376. Work on more advanced dimming mechanisms started with Ashdown~\cite{Ashdown2006} proposing the use of pulse-code modulation to remove the need of hardware \pwm{} generators to dim the diodes. 378. \marginpar{hybrid dimming}In 2009, Tse~et~al.~\cite{Tse2009b} from Hong Kong Polytechnic University proposed a general driving scheme of using two current levels to control \led{}s. Using low current magnitude together with high current \pwm{} signal both high efficiency and limited colour shifts can be achieved. A~variation of their general driving technique, where one of the currents is set to zero, was investigated in this dissertation. 380. \marginpar{driving current}In 2007, Schmid~et~al.~\cite{Schmid2007} measured the effect of the typical driving currents on diode performance. The accuracy of the results is limited due to the measurement hardware used but it shows the effect of ripple content in the diode current on the light output. 382. \subsection{Thermal properties} 383. Thermal properties of high-power \led{}s have been a topic for many research programmes since mid 2000s. 
Scientists quantified the effect of junction temperature on such parameters as: luminous flux, colour point, spectrum shape and lifetime of the device. Three major research topics can be identified on the thermal properties of \led{}s: methods for measuring the junction temperature, estimating the junction temperature using a thermal model of \led{} luminaires and the research on the effect of temperature on the device properties. 385. \noindent{}\marginpar{junction measurements}The measurement of the junction temperature is done mostly through  indirect methods, where the temperature dependent parameter is measured and using model or previous calibration the temperature is calculated. 387. In 2004 Xi et~al.~\cite{Xi2004} proposed a method based on forward voltage measurement. In order to calibrate the measurement, the diode is measured when driven with very small duty cycle, typically 0.1\thinspace{}\%, to reduce the self heating effect. This method was used throughout the writing of this dissertation. Another method provided by Xi et~al.~\cite{Xi2005} is based on the relationship between the peak of the spectrum and the junction temperature. 389. A method based on the diode spectrum was presented by Chhajed et~al.~\cite{Chhajed2005a}. It is based on the high energy slope of the diode's spectrum. When plotted in semilogarithmic scale, the slope is proportional to the carrier temperature. 391. \marginpar{thermal modelling} 392. Instead of measuring the junction temperature indirectly, a thermal model of the diode can be used to estimate it. Farkas~et~al.~\cite{Farkas2003} pioneered the use of diode thermal model. They applied a method described by Székely in 1988~\cite{Szekely1988}, used previously for high power silicon chips to measure and model the thermal path of the dissipated heat. This thermal model is used further in the dissertation to simulate the performance of both \led{}s and whole luminaires. 394. \marginpar{temperature effects} 395. The effect of junction temperature on InGaN and AlGaInP has been documented by Chhajed et~al.~\cite{Chhajed2005b} in 2005. They measure the effect of varying the junction temperature on colour diodes and on trichromatic luminaires composed of these diodes. \marginpar{spectrum model}Moreover, they described the \led{} spectrum model based on a Gaussian function. They used the model to find a good combination of primary diode colours to create a luminaire with high luminous efficacy and colour rendering index. 397. In 2006, Man and Ashdown~\cite{Man2006} extended the model presented by Chhajed into double Gaussian model composed of sum of two Gaussian curves, accurately modelling the \led{} spectrum in wide range of temperatures. 399. \marginpar{flux estimation}The linearity of \pwm{} dimming method is limited to a constant heatsink temperature case but this temperature changes with average driving current, bending the current-luminous flux relationship. Garcia~et~al.~\cite{Garcia2008} created a flux estimator basing on the \led{} case and ambient temperatures to deal with transient and steady-state behaviour of the luminous flux. This estimator can accurately predict the instantaneous value of luminous flux, therefore it allows a linear dimming behaviour and precise intensity control, but lack of the colour shift information limits the practical use to control of a phosphor-converter white \led{}s or a single colour diode. It is not sufficient for precise colour control of polychromatic luminaires. 401. 
Hui and Qui~\cite{Hui2009} analysed the static performance of the \led{}-heatsink system in 2009. They showed that above certain thermal resistance in the heat flow path limiting the driving current will increase the maximum intensity of the diode. This phenomenon is very important for driving AlGaInP diodes which have very high temperature dependency. 403. \noindent{}\marginpar{multi-domain models}Huang and Tang~\cite{Huang2006} presented a complete thermal-electrical-luminous model of an \led{} luminaire in 2009. Derivation of system coefficients was done using power perturbation on individual diodes and system identification methods. Unfortunately, the model disregards the colour shifts and nonlinearities present in \led{} based luminaires. 405. \subsection{Lifetime} 406. \marginpar{failure mechanisms}In 2004, Narendran et.~al.~\cite{Narendran2004} analysed the failure mechanisms in white \led{}s. Junction temperature was identified as major contributor in light output degradation phenomenon. Since then, many long time tests have been performed in order to create a reliable model of light output degradation. Jacob~et~al.~\cite{Jacob2006} analysed the failure modes of high power \led{}s in 2006. They mention a few separate failure mechanisms: shunting of the structure through parasitic dendrites growing on the structure, leakage paths in the active area or bond wire dislocations. 408. \marginpar{gallium nitride diodes}In 2008, Hu~et~al.~\cite{Hu2008} performed a degradation test on GaN diodes. They report a change in thermal structure of the diodes. The increase in thermal resistance of the package increases the junction temperature and speeds up degradation. 410. \marginpar{gallium phosphide diodes}Mathematical model of light output degradation for AlGaInP diodes was derived by Grillot et~al.~\cite{Grillot2006} in 2006. They proved that useful lifetime of the device is a linear function of current density and logarithmic function of stress time. 412. \marginpar{driving current impact}Meneghini~et~al.~\cite{Meneghini2006} and Buso~et~al.~\cite{Buso2008} compared the \led{} output degradation speed under \dc{} and pulsed driving current. Their reliability tests showed that the device lifetime is similar under both driving conditions. However, different families of devices showed different effect on both driving conditions, indicating that the internal structure of the device and package may influence the reliability behaviour under various driving currents. 414. \subsection{Luminaire control} 415. Many different control techniques have been developed over the years to control light-emitting diodes based luminaires. Basic colour control loops have been presented by Deurenberg et al.~\cite{Deurenberg2005}. 417. \marginpar{open loop}The simplest open loop control is very easy to implement as it requires only a single point calibration but it does not compensate for self heating of the diode structure, change of ambient temperature or the reduced intensity with time. 419. \marginpar{colour feedback}Most common feedback mechanism implemented to control \led{} luminaires are the colour sensors. Subramanian et~al.~\cite{Subramanian2002} describe a simple colour control loop that reduces the colour shift of the luminaire compared to an open loop solution. The dynamics of this system is described later by Subramanian and Gaines~\cite{Subramanian2003}. 
Over the years many researchers used various colour sensors to control luminaires: Ackermann et~al.~\cite{Ackermann2006}, Lim et~al.~\cite{Lim2006} and Chang et~al.~\cite{Chang2009}. The consensus is that colour sensors provide easy feedback method with relatively good accuracy but \led{} spectra shifts can create colour errors. Additionally, the colour sensor stability is a major concern. To increase the accuracy of the colour sensors Robinson and Ashdown~\cite{Robinson2006} proposed an advanced feedback solution capable of estimating the spectral shifts. 421. \marginpar{flux feedback}Subramanian et~al.~\cite{Subramanian2002} proposed using a single photodetector instead of colour sensor and measure each of the diodes separately using a timing scheme. 423. \marginpar{temperature feedback}As temperature is the main cause of the variation of diodes' parameters, some control schemes are based on the temperature dependence models. Moisio et~al.~\cite{Moisio2005} measured the heatsink temperature and approximated the intensity of the \led{}s installed in the luminaire. 424. Forward voltage of the diode is proportional to the junction temperature at a single current level. Qu et~al.~\cite{Qu2007} measured the diode voltage and converted it directly to tristimulus values. 426. \marginpar{model based control}Huang et~al.~\cite{Huang2009b} described the system model of a multi-chip polychromatic luminaire. They used power perturbation and system identification methods for each colour string of diodes. 428. \subsection{Optimisations} 429. \marginpar{primaries selection}Luminaires can be optimised from a few different angles. The number of primary colours and the exact device choice to be installed in the luminaire can be subjected to optimisation procedures. Žukauskas et~al.~\cite{Zukauskas2001} analysed the effect of number of primaries in the luminaire on the colour quality and luminous efficacy achievable by the luminaire. Chhajed et~al.~\cite{Chhajed2005b} used a Gaussian \led{} spectrum model to choose the best combination of three colours yielding high \cri{} value in the required colour range. 431. \marginpar{driving}When the luminaire consists of more than three primary colours, the colour control system becomes underdetermined and diode control can be optimised. Ries et~al.~\cite{Ries2004} showed the possibility to optimise such parameters of luminaires as luminous flux, efficacy and colour rendering index. Ou-Yang and Huang~\cite{Ou-Yang2007} showed a method to estimate the gamut of multi-primary \led{} based luminaires. 433. In 2010, Lin~\cite{Lin2010} presented a method to optimise the \cri{} of a luminaire using mathematical formulations of the problem and numerical method based on the complex method. 435. \subsection{\led{} driver} 436. The work of Schmid~et~al.~\cite{Schmid2007} on the impact of driving current shape and various research on different dimming techniques indicates that in order to achieve high efficacy the diode should be driven with a \dc{} current. Van der Broeck et~al.~\cite{Broeck2007} analysed different converter topologies for driving \led{}s from a \dc{} voltage source using both \dc{} and pulsating current. They present different types of isolated and non-isolated converters describing their output current shape. Their conclusion is that pulsed output current converters contain, in general, less components (like output capacitor) and therefore may be beneficial for the lifetime of the converter. 438. 
Many different topologies were applied for driving high current \led{}s. Depending on the application, solutions include using a buck by Torres et~al.~\cite{Torres2007}, boost by Xiaoru and Xiaobo \cite{Xiaoru2008}, Ćuk by de~Britto et~al.~\cite{deBritto2008}, flyback by Pan et~al.~\cite{Pan2007} or \textsc{sepic} by Zhongming et~al.~\cite{Zhongming2008} etc. Most of the research on \led{} drivers focus on high-power \led{}s supplied with a current up to 700\thinspace{}mA. Although some of the solutions presented for these diodes can be easily scaled to a diode driven with 13.5\thinspace{}\textsc{a}, the high current converter design calls for more advanced solution. 440. Early processors were supplied with a few volts and tens of amps, similarly to high power \led{}s. For this kind of load Xunwei~et~al.~\cite{Xunwei1999} proposed using a multiphase interleaved buck converters. 442. \section[Scientific contributions]{scientific contributions} 443. Following contributions of this work have been identified as extending the current state of the art. 445. \marginpar{hybrid pwm/am dimming}Introduction of hybrid \pwm{}/\am{} dimming technique (chapter~\ref{ssec:hybrid_pwm_am}). This dimming technique is a combination of pulse width modulation with variable peak current amplitude. It can be easily adapted to current designs as integrated \led{} drivers typically have the possibility to use both \pwm{} and \am{} dimming. 447. Typically, one variable quantity is used to control the intensity of the diode: either a duty cycle or \dc{} current. By controlling a single \led{} with both peak current and duty cycle an additional degree of freedom appears and \led{} parameters can be optimised. 449. Hybrid dimming mechanism can be used e.g. to reduce the colour point shifts of a white, phosphor converted diode under varying heatsink temperature or to minimise spectra shifts in InGaN diodes. 451. Application of hybrid dimming mechanism to polychromatic luminaires creates opportunities to optimise luminaires' control, increase their efficacy and device gamut (chapter~\ref{ssec:Optimal_control_hybrid_dimming}). Using this technique, optimisation possibilities are given even for di- and trichromatic luminaires. 453. \marginpar{current voltage led model}\led{} empirical model, described in chapter~\ref{sec:iv_model}, based on instantaneous values of current and voltage of the diode can form a basis of a luminaire colour control scheme (chapter~\ref{ssec:iv2xyzColourControl}). The model is valid for any form of diode dimming and accurately predicts colour shifts created by the changes of ambient temperature and by the dimming technique. The model can be used as a feedback mechanism for precise colour control. 455. \noindent{}\marginpar{application of interleaved converters}An interleaved buck topology was proposed for driving high current \led{} light sources. Because of relatively low dynamic resistance the working point of the driver can be set so that the converter always works close to 50\thinspace{}\% duty cycle where the ripple cancellation effect is the strongest. This removes the need for high value, electrolytic output capacitors. 457. \marginpar{publications} 458. Some of the work presented in this dissertation has also been published in other sources: \\ 460. \reversemarginpar 461. \noindent{}\marginpar{1}S.\thinspace{}Bęczkowski. Advanced dimming strategy for solid state luminaires. In \textit{Proceedings of the 10th International Conference on Solid State Lighting}.  pp. 1–6. 
\textsc{spie} International Society for Optical Engineering. 2010.\\ 463. \noindent{}\marginpar{2}S.\thinspace{}Bęczkowski, S.\thinspace{}Munk-Nielsen. \textsc{led} spectral and power characteristics under hybrid \textsc{pwm/am} dimming strategy. In \textit{Proceedings of the \textsc{ieee} Energy Conversion Congress and Exposition, \textsc{ecce} 2010}, pp. 731–735. \textsc{ieee} press. 2010.\\ 465. \noindent{}\marginpar{3}S.\thinspace{}Bęczkowski, S.\thinspace{}Munk-Nielsen. Dual interleaved buck converter for driving high power \textsc{led}s. In \textit{Proceedings of the 2011—14th European Conference on Power Electronics and Applications, \textsc{epe} 2011}, pp.1–6. 2011. \\ 467. \noindent{}\marginpar{4}L.\thinspace{}Török, S.\thinspace{}Bęczkowski, S.\thinspace{}Munk-Nielsen, J.\thinspace{}Gadegaard, T.\thinspace{}Kari, K.\thinspace{}Pedersen. High output \textsc{led}-based profile lighting fixture. (accepted for publication) \textit{Proceedings of \textsc{iecon 2011}—37th Annual Conference of the \textsc{ieee} Industrial Electronics Society, \textsc{iecon 2011}}, 2011.\\ 469. \noindent{}\marginpar{5}\textsc{dk\thinspace{}pa\thinspace{}2011\thinspace{}70529} patent application. S.\thinspace{}Bęczkowski. Method of controlling illumination device based on current-voltage model. 471. \normalmarginpar 472. \section[Methodology]{methodology} 473. The manufacturing of light-emitting diodes is a complex process performed in clean room environment. Diode structures are created using epitaxial growth and the diode colour is controlled by the GaN/InN ratio or by the AlInP composition~\cite{Krames2007}. On account of local variations, colour point of diodes in a batch can differ from sample to sample. The same applies to intensity and forward voltage magnitude. Manufacturers split the diode batches into smaller bins with similar diode's parameters like forward voltage, intensity and colour point. 475. Data shown in the dissertation has been obtained using measurements on particular diode sample. No statistical analysis has been done, therefore the results apply to the measured diodes and do not apply to the diode batch in general. However, the principles demonstrated on particular devices are applicable to the whole batch but may yield different results. Different internal diode structures may show different behaviour. All tests were performed on high power, state of the art devices. 477. Tests performed at fixed temperatures are performed using constant heatsink temperature. The use of constant junction temperature eliminates the impact of thermal structure of the diode on results, but its measurement a complex process, therefore it was not used in the data gathering. Furthermore, the experiments can be easily recreated using inexpensive hardware. 479. \subsection{Measurement hardware}\label{measurementHardware} 480. \marginpar{spectrometer}Optical measurements were performed using an Instrument Systems \textsc{cas}~140~\textsc{ct} array spectrometer. This spectrometer uses a diffraction grating technique for measurement of the visible part of the light spectrum. Light is directed on an optical component with a periodic structure which diffracts the light. Diffracted light is analysed by a \textsc{ccd} detector. The spectrometer can analyse light within 360–830\thinspace{}nm range with 2.2\thinspace{}nm resolution and 0.5\thinspace{}nm data point interval. 482. \begin{figure}[h]% fig:integrating sphere 483.         \centering 484.         
\includegraphics{graphics/Introduction/MeasurementHardware/IntegratingSphere.pdf} 485.         \caption{Instrument Systems \textsc{isp}250 integrating sphere used in the test setup (left) and the principles of integrating sphere operation (right). Input beam is reflected diffusely inside the sphere many times before hitting the output port. A baffle  inside the sphere blocks the input light from hitting the output port directly.} 486.         \label{fig:integrating_sphere} 487. \end{figure} 489. \noindent{}\marginpar{integrating sphere}A 25\thinspace{}cm diameter Instrument Systems \textsc{isp}250 integrating sphere shown in figure~\ref{fig:integrating_sphere} is used to gather the light from device under test. Inner surface of the sphere is coated with diffusely reflecting material, barium sulphate ($\mathrm{BaSO}_{4}$), of a well defined reflectance. Light is diffusely reflected from the walls many times before it hits the output port of the sphere, therefore the measurement is not dependent on the shape of the input light beam. An Instrument Systems \textsc{isp}500-220 adapter plate is used to mount a \led{} test fixture. 491. \marginpar{test fixture}Instrument Systems \textsc{led}-850 test fixture is used for \led{} assembly. It is possible to test various \led{} types from different manufacturers with special adapters. These adapters provide electrical contacts for four wire sensing and a thermal connection between \led{} and thermally controlled plate. Arroyo Instrument 5310 \textsc{tecs}ource temperature controller was used to control the temperature of the plate. 493. Measurements of power converter were done using following hardware: Fluke 8845~\textsc{a} multimeter, 100\thinspace{}MHz 4~channel Tektronix \textsc{tds}~3014\textsc{c} digital phosphor oscilloscope, 50\thinspace{}MHz Tektronix \textsc{tcp}202 current probe, Delta Elektronika \textsc{sm}~52-\textsc{ar}-60 power supply and Zentro-Elektrik \textsc{el}1000 electronic load. 495. \textsc{ni}~6215 data acquisition module containing 16 analog inputs (16~bit, 250\thinspace{}kS/s) was also used for bigger tests that needed high number of input channels. 497. \begin{table}[t]\footnotesize 498.         \caption{National Instruments compact\textsc{daq} modules used in the experiments.} 499.         \label{tab:NIcDAQmodules} 500.         \centering 501.         \begin{tabular}{cccll} 502.                 \textsc{module} & \textsc{resolution} & \textsc{sampling rate} & \textsc{range} & \textsc{comments} \\ 503.                 \hline 504.                 \abbr{ni 9201} &        12 bit  &       500\thinspace{}kS/s & ±10\thinspace{}V                                         & 8-channel  \\ 505.                 \abbr{ni 9229} &        24 bit  &       \phantom{0}50\thinspace{}kS/s  & ±60\thinspace{}V                                              & 4-channel  (isolated) \\ 506.                 \abbr{ni 9227}  &       24 bit  &       \phantom{0}50\thinspace{}kS/s  & \phantom{±6}5\thinspace{}A \textsc{rms} & 4-channel \\ 507.                 \hline 508.         \end{tabular} 509. \end{table} 510. For electrical measurements a National Instruments compact\textsc{daq} system was used. The system consists of a \textsc{ni}~c\textsc{daq}-9172 chassis with interchangeable measurement modules (table~\ref{tab:NIcDAQmodules}). Voltage output modules were used to control \led{} drivers. Current and voltage inputs were used to acquire electrical quantities. 512. \section[Outline]{outline} 513. 
\marginpar{chapter 1}The first chapter gives background to the topic of this dissertation, followed by a review of previous scientific work done by other parties to show the state of the art in light-emitting diode control. Finally, the methodology used in the work is described. %in detail.

\marginpar{chapter 2}The colour theory chapter gives an insight into how light is perceived by the human eye and how it is measured. The various colour spaces used throughout the dissertation are defined. The definitions of light quality and colour distance are given in order to quantify the performance of the colour control.

\marginpar{chapter 3}The principles of operation and properties of light-emitting diodes are described in the third chapter. Firstly, the physical structures of the diodes are discussed. Optical and electrical properties are analysed with a focus on parameter changes with respect to junction temperature variations. The lifetime of the devices is analysed. The impact of dimming the diodes with different methods is presented. A hybrid dimming technique is introduced and its properties in increasing the controllability of \led{}s are analysed. In the end, an empirical model of the light-emitting diodes is presented and verified.

\marginpar{chapter 4}Luminaire control is presented in the fourth chapter. Firstly, a review of state of the art control methods is given. A colour control scheme based on the empirical model, described in chapter three, is presented. Finally, optimisation routines for designing and controlling polychromatic luminaires are described.

\noindent{}\marginpar{chapter 5}The fifth chapter reviews available power converter topologies with respect to driving light-emitting diodes. An interleaved buck topology is analysed and its benefits and drawbacks are discussed. The power stage and its control are designed. Laboratory results are presented and discussed.

\marginpar{chapter 6}The final, sixth chapter summarises the dissertation by providing an overview of the results and gives recommendations for future work.

% chapter introduction (end)

%%%%%%%%%%%%%%%
%% Chapter 2 %%
%%%%%%%%%%%%%%%
\chapter{Colour theory}% (fold)
\label{cha:colour_theory}
Colour science has its origins in the late 17th century, when Sir Isaac Newton conducted his experiments on the nature of light. Using two prisms Newton showed that white light can be divided into separate colours and that pure colours cannot be divided further. In order to prove that the prism is not colouring the light, he reconstructed white light using pure colours.
\begin{figure}[h]% fig:newton_prism_diagram
        \centering
        \includegraphics{graphics/ColourTheory/NewtonPrismDiagram.pdf}
        \caption{The diagram from Sir Isaac Newton's experiment. The first prism (\textsc{a}) splits the white light into its constituent colours. A single colour part of the spectrum hits the second prism (\textsc{f}), proving that a pure colour cannot be divided any more.\label{fig:newton_prism_diagram}}
\end{figure}

\noindent{}Colour is a sensation that depends both on the light and its properties and on the observer. In order to quantify it, the physical properties of light and of observers have to be known.

\section[Human vision]{human vision}
\marginpar{human eye}The eye is an organ used for vision. The frontal part of the eye, consisting of the iris, pupil and lens, acts similarly to a camera lens.
The iris limits the amount of light going into the eye, thus protecting it from overexposure and damage. The lens focuses the image on the retina, the inner surface at the back of the eye. Chemical reactions in rods and cones in the retina produce electrical stimuli for the brain.
\begin{figure}[t]% fig:eye_cross_section
        \centering
        \includegraphics{graphics/ColourTheory/eye.pdf}
        \caption{Eye cross-section (left) and scanning electron microscope image of human retina (right, source: Science Photo Library, image \textsc{f001/0041}).\label{fig:eye_cross_section}}
\end{figure}

Rods and cones are photoreceptor cells in the retina responsible for low and high intensity light vision, respectively. The human eye uses photopic vision when the luminance is above approximately 3\thinspace{}cd/m² and scotopic vision when the luminance is below approx. 0.03\thinspace{}cd/m². In between, the eye uses mesopic vision, a combination of the above. \marginpar{rods}Rods are sensitive enough to detect a single photon, but they are sensitive only in the 400–640\thinspace{}nm wavelength range. Their maximum sensitivity is 1700\thinspace{}lm/W at 507\thinspace{}nm. Because only one type of rod cell exists, there is no colour sensation under the scotopic vision regime. \marginpar{cones}Cone cells, on the contrary, are much less sensitive to light, but their three variants (short, medium and long wavelength) shown in fig.~\ref{fig:Cones_Sensitivity} allow us to perceive colour through the opponent process of colour vision.

\begin{figure}[ht]% fig:Cones_Sensitivity
        \centering
        \includegraphics{graphics/ColourTheory/ConeSensitivity.pdf}
        \caption{Sensitivity of cones after Moses and Hart \cite{PhysiologyOfTheEye}. \label{fig:Cones_Sensitivity}}
\end{figure}

\noindent{}The transition between scotopic and photopic vision is a very slow process. Its speed depends on the direction of the transition and the initial adaptation level. Full adaptation from light to dark conditions may take up to 45~minutes. The reverse transition is faster, but may still take up to a few minutes. All calculations in this dissertation assume a fully adapted photopic vision regime.

\marginpar{eye sensitivity}Conversion between radiometric and photometric units is done using the \emph{eye sensitivity function}~$V\textrm{(λ)}$, also called the \emph{luminous efficiency function}, shown in fig.~\ref{eye_sensitivity}. The function represents the relative eye sensitivity to different wavelengths under the photopic vision regime. The maximum eye sensitivity is in the green spectral range at 555\thinspace{}nm wavelength. This point corresponds to a sensitivity of 683.002\thinspace{}lm/\textsc{w}.
\begin{figure}[t]% fig:eye_sensitivity
        \centering
        \includegraphics{graphics/ColourTheory/EyeSensitivityFunctions.pdf}
        \caption[Eye sensitivity function]{Eye sensitivity function \textit{V(λ)}: \abbr{1931} standard, \abbr{1978} Vos modification and most recent shape proposed in \abbr{2005} by Sharpe, Stockman, Jagla and Jägle.\label{eye_sensitivity}}
\end{figure}
The eye sensitivity function was obtained experimentally by the \emph{minimum flicker method}. The stimulus in the experiment was a circular area alternately illuminated by two different colours at a frequency of 15\thinspace{}Hz. At this frequency the two hues fuse into one colour; the brightness, however, does not. By adjusting the properties of one light source, the human subject was to minimise the visible flicker.
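As an illustration of the radiometric-to-photometric conversion described above, the luminous flux of a source can be approximated numerically from a sampled spectral power distribution weighted by a tabulated $V\textrm{(λ)}$ curve. The sketch below is only illustrative: the wavelength grid, the stand-in $V\textrm{(λ)}$ shape and the example source are assumptions, not part of the measurement software used in this work.
\begin{verbatim}
# Minimal sketch: radiometric -> photometric conversion (photopic vision).
# Assumes P(lambda) in W/nm and V(lambda) sampled on the same wavelength grid.
import numpy as np

K_M = 683.0  # lm/W at 555 nm (photopic luminous efficacy of radiation)

def luminous_flux(wl_nm, power_w_per_nm, v_lambda):
    """Luminous flux in lumens: 683 * integral of P(l) * V(l) dl."""
    return K_M * np.trapz(power_w_per_nm * v_lambda, wl_nm)

# Hypothetical usage with a quasi-monochromatic source at 555 nm:
wl = np.arange(380.0, 781.0, 1.0)                 # wavelength grid [nm]
v  = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)    # crude stand-in for V(lambda)
p  = np.where(np.abs(wl - 555.0) < 0.5, 1.0 / 683.0, 0.0)  # ~1/683 W in total
print(luminous_flux(wl, p, v))                    # approximately 1 lm
\end{verbatim}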
There have been several attempts to improve the original \abbr{1931} standard function: Judd in \abbr{1951}, Vos in \abbr{1978}~\cite{Vos1978} and, most recently, Sharpe, Stockman, Jagla and Jägle in~\abbr{2005}~\cite{Sharpe2005}, who proposed a modified \textit{V*\textrm{(λ)}} function. The blue region of eye sensitivity has been underestimated in the \abbr{cie 1931} standard, but this sensitivity function is still commonly used as the basic observer. All measurements and calculations presented in this dissertation will use the \abbr{cie 1931} standard unless explicitly mentioned otherwise.

\section[Photometric quantities]{photometric quantities}
Light can be characterised in two different ways. Radiometric quantities describe the physical properties of an electromagnetic wave. Photometric units, on the other hand, characterise light as perceived by the human eye. Radiometric measurements are very straightforward, but not useful if one wants to define the sensation that is caused by the light. For example, infra-red and ultra-violet light can be defined in terms of radiometric quantities, but are invisible to the human eye.

\marginpar{luminous flux} Luminous flux is a measure of the perceived power of light. The unit of luminous flux is the \emph{lumen}~(lm). A monochromatic light source emitting a radiometric power of 1/683~watt at 555\thinspace{}nm has a luminous flux of 1\thinspace{}lm.

\marginpar{luminous intensity} Luminous intensity is a measure of the perceived intensity of light per unit solid angle. The unit of luminous intensity is the \emph{candela}~(cd), which is a base \textsc{si}~unit. The current definition is: a monochromatic light source emitting an optical power of 1/683\thinspace{}\textsc{w} at 555\thinspace{}nm into a solid angle of 1~steradian~(sr) has a luminous intensity of 1\thinspace{}cd. In other words, a light source with a luminous flux of 1\thinspace{}lm emitting into 1\thinspace{}sr has a luminous intensity of 1\thinspace{}cd. An isotropically emitting light source (emitting into 4π~sr) with the same luminous flux would have a luminous intensity of 1/4π~cd. The luminous intensity parameter is therefore closely related to the light source output geometry. Any lens or other optical aid can greatly influence this parameter.

\marginpar{luminance} Luminance is a measure of the luminous intensity of light travelling in a given direction per unit area. It describes the amount of light that is emitted from a particular area within a given solid angle. Luminance is typically used to characterise light sources with big emitting surfaces like \textsc{lcd} screens or \led{}s with an active area greater than $1\thinspace{}\mathrm{mm}^{2}$.

\marginpar{illuminance} The measure of the total luminous flux incident on a given area is called illuminance. Luminous emittance, similarly, is the total luminous flux emitted from a given area. Both are measured in lux~(lx) or lumen per square metre~($\mathrm{lm/m^{2}}$).

The constant 1/683 is a legacy constant, used to equalise the current definitions of light intensity with the old ones. The ``standard candle'' (a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour), used as a reference source in England prior to \abbr{1948}, had roughly 1/683 watt of power.

\section[Colour spaces]{colour spaces}
A colour space is a method of quantifying the sensation of colour. Humans describe the colour sensation by means of brightness, colourfulness and hue.
During printing, colour is made by combining cyan, magenta, yellow and black inks (\textsc{cmyk}) on paper. A computer screen or a digital photo camera describes colour by the amounts of red, green and blue. Note that the reds, greens and blues of these devices are not generally the same. To match the colours of the camera on the screen, a common \srgb{} colour space is used in both devices. Many different colour spaces exist: some are intuitive for humans to use, some are used when working with particular hardware (device dependent). Others may serve a particular need, e.g.~perceptually uniform colour spaces, but they all describe the colour sensation in a quantitative way.

\marginpar{tristimulus theory of colour}Light sources can have very complex spectral distributions, yet a single colour can be described using only three scalar stimulus parameters. This phenomenon was first postulated in the 19th century by Thomas Young and then developed further by Hermann von~Helmholtz. The tristimulus theory was experimentally verified in \abbr{1930} by the colour matching experiments. The human subjects were to match the colour of a monochromatic light to the light created by mixing three monochromatic lights (primaries). The \abbr{1930} experiments used light sources at 700\thinspace{}nm~(\textsc{r}), 546.1\thinspace{}nm~(\textsc{g}) and 435.8\thinspace{}nm~(\textsc{b}). Unfortunately, the selection of the colour primaries caused some of the colour matching functions' values to be negative. As negative amounts of light have no physical meaning, instead of subtracting one of the primaries from the matching light, that primary was added to the sample monochromatic light. %Only then both colours could be matched.

\marginpar{cie~\abbr{1931} \xyz{} colour space}
In \abbr{1931} the \cie{} defined new primaries to overcome the problem of the awkward negative weights in the colour matching functions. The new $\overline{x}$, $\overline{y}$ and $\overline{z}$ functions have only positive values and $\overline{y}$ was chosen to match the eye sensitivity function $V(λ)$.

Due to the spatial distribution of the cones inside the eye, the perception of colours depends on the subject's field of view. To eliminate the influence of this parameter, the \cie{} introduced a 2° standard colorimetric observer. A 10° standard observer was introduced in \abbr{1964} as a result of the work of Stiles and~Burch~\cite{Stiles1959}; it is recommended for fields of view larger than 2°.
\begin{figure}[t]% CIE_x_y_z_
                \centering
\includegraphics{graphics/ColourTheory/ColourMatchingFunctions.pdf}
\caption{\cie{} \abbr{1931} 2° and \abbr{1964} 10° observer $\overline{x}$, $\overline{y}$ and $\overline{z}$ colour matching functions, normalised to the maximum of the \abbr{1931}, 2° observer $\overline{x}$ colour matching function.\label{fig:CIE_x_y_z_}}
\end{figure}

\marginpar{tristimulus values}In order to calculate the tristimulus values \textit{X}, \textit{Y} and \textit{Z}, the spectrum of the light source needs to be multiplied by the respective colour matching functions $\overline{x}$, $\overline{y}$ and $\overline{z}$ and integrated over wavelength.
\begin{equation} \label{eq:tristimulus}
        X=\!\int_{380}^{780}\!\!\!\!\!\!\!P(λ)\overline{x}(λ)\ud{}λ\qquad{}
        Y=\!\int_{380}^{780}\!\!\!\!\!\!\!P(λ)\overline{y}(λ)\ud{}λ\qquad{}
        Z=\!\int_{380}^{780}\!\!\!\!\!\!\!P(λ)\overline{z}(λ)\ud{}λ
\end{equation}
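Equation~\ref{eq:tristimulus} translates directly into a numerical integration once the spectrum and the colour matching functions are available on a common wavelength grid. A minimal, illustrative sketch is given below; the variable names are assumptions and this is not the actual routine used for the measurements in this work.
\begin{verbatim}
# Minimal sketch of eq. (tristimulus): X, Y, Z from a sampled spectrum.
# Assumes P, xbar, ybar, zbar are arrays sampled on the same grid 'wl' [nm].
import numpy as np

def tristimulus(wl, P, xbar, ybar, zbar):
    X = np.trapz(P * xbar, wl)
    Y = np.trapz(P * ybar, wl)
    Z = np.trapz(P * zbar, wl)
    return X, Y, Z

# Multiplying by 683 lm/W turns Y into luminous flux, as noted in the text.
\end{verbatim}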
Because $\overline{y}$ matches the eye sensitivity function, $Y$ can be used as a measure of the luminance of a light. Multiplying all tristimulus values by 683\thinspace{}lm/\textsc{w} gives the values in lumens, therefore \textit{Y} becomes equal to the luminous flux. It is common to describe the colour of light using the \abbr{cie} $xyY$ colour space. Chromaticity coordinates, which can be represented on the two-dimensional $xy$-plane (fig.~\ref{fig:CIE_xy}), are obtained by normalising the tristimulus components.
\begin{equation} \label{eq:XYZ2xyz}
        x=\frac{X}{X+Y+Z}\qquad{}
        y=\frac{Y}{X+Y+Z}\qquad{}
        z=\frac{Z}{X+Y+Z}
\end{equation}
Because of this normalisation, only two values are needed to represent any possible colour.
\begin{equation} \label{eq:xy_only}
        x+y+z=1\qquad{}\Longrightarrow{}\qquad{}z=1-x-y
\end{equation}
The intensity of the light is not shown on the $xy$-plane. Any two light sources with the same hues and saturations but different lightness will project onto the same point of the diagram. Using $x$, $y$ and the stimulus $Y$ one can easily calculate back the values of the $X$ and $Z$ stimuli.
\begin{equation}  \label{eq:xyY2XYZ}
 X = \frac{Y}{y}x \qquad{}
 Y =Y \qquad{}
 Z = \frac{Y}{y}(1-x-y)
\end{equation}

\noindent{}When two lights with different spectral power distributions have the same colour chromaticity (same $xy$ coordinates) they are called metamers and the property itself is called metamerism\marginpar{metamerism}. This allows, for example, white light to be mixed using blue and yellow (as in white phosphor-converted \led{}s) or red, green and blue. It can also lead to big differences in colour rendering between two lights with the same colour chromaticity. For example, a white phosphor-based \led{} does not reproduce saturated red colours well, while a white, trichromatic \rgb{} \led{} can have an excellent red colour rendering.
\begin{figure}[t]% CIE_xy
                \centering
\includegraphics{graphics/ColourTheory/CIExy.pdf}
\caption{\cie{} \abbr{1931} $xy$ chromaticity diagram.\label{fig:CIE_xy}}
\end{figure}

\marginpar{xy chromaticity diagram}The $xy$-plane is a projection of a linear colour space and therefore colours can be linearly mixed on it. As a consequence, every polychromatic source will have a gamut (available colour range) that is the convex hull of all its primary colour points inside the horseshoe diagram.

\noindent{}Monochromatic light sources (pure colours) lie on the curved boundary of the horseshoe. The straight line between blues and reds is called the \textit{purple line}; colours lying on this line cannot be made using a monochromatic light source, they are created by mixing saturated red and blue light.

The \emph{equal energy point} or \emph{white point} (x=1/3, y=1/3) corresponds to white light where all tristimulus values are equal. Any two colours that can be connected by a straight line crossing the white point are called complementary.

\marginpar{perceptual uniformity}This colour space is not perceptually uniform. This means that the difference between the colours of two points on the plane does not correspond to the geometrical distance between them. In \abbr{1942}, MacAdam analysed the colour differences of closely spaced points~\cite{MacAdam1942}. He noted that, depending on the position in the $xy$-plane, a different geometrical distance between two colours yields a noticeable colour difference.
Similar colours, which appear identical to the human eye, can be grouped on the $xy$-diagram into ellipse-shaped areas (fig.~\ref{fig:CIE_xy_MacAdam}). Ellipses in the green region are very big compared to the ellipses in the blue region. This non-uniformity initiated the search for a uniform chromaticity diagram. As a result, the \cie{} introduced the $uv$ diagram in \abbr{1960} and the $u'v'$ uniform chromaticity diagram in \abbr{1976}~\cite{Wyszecki2000}.

\begin{figure}[t]% CIE_uv
        \centering
\includegraphics{graphics/ColourTheory/CIEuv.pdf}
\caption{\cie{} \abbr{1976} $u'v'$ chromaticity diagram.\label{fig:CIE_uv}}
\end{figure}
\marginpar{cie~\abbr{1960} $Luv$ and cie~\abbr{1976} $L^{*}u^{*}v^{*}$ colour spaces}Coordinates for $uv$ and $u'v'$ can be calculated from the tristimulus values
\begin{equation} \label{eq:uv}
        u=\frac{4X}{X+15Y+3Z}\qquad{}
        v=\frac{6Y}{X+15Y+3Z}
\end{equation}
and
\begin{equation} \label{eq:u'v'}
        u'=\frac{4X}{X+15Y+3Z}\qquad{}
        v'=\frac{9Y}{X+15Y+3Z}.
\end{equation}
These coordinates can also be calculated directly from the $xy$ coordinates using the equations:
\begin{equation} \label{eq:x2u}
        u=u'=\frac{4x}{-2x+12y+3}
\end{equation}
and
\begin{equation} \label{eq:y2v}
        v=\frac{6y}{-2x+12y+3}\qquad{}
        v'=\frac{9y}{-2x+12y+3}.
\end{equation}
Reverse transformations are also possible, using:
\begin{equation} \label{eq:uv2xy}
        x=\frac{3u}{2u-8v+4}\qquad{}
        y=\frac{2v}{2u-8v+4}
\end{equation}
and
\begin{equation} \label{eq:u'v'2xy}
        x=\frac{9u'}{6u'-16v'+12}\qquad{}
        y=\frac{2v'}{3u'-8v'+6}.
\end{equation}

\noindent{}\marginpar{colour space conversions}A conversion from one colour space to another requires the choice of a different set of primary stimuli. The only limitation is that the vectors created by the new set of stimuli $R'G'B'$ are linearly independent. Therefore no combination of $r$, $g$ and $b$ should make
\begin{equation}
        rR' + gG' + bB' = 0,
\end{equation}
except for the trivial $r=g=b=0$. Each primary stimulus of the new set can be matched using a mixture of the old primaries, therefore we can write the relation between the $RGB$ and $R'G'B'$ colour spaces as:
\begin{equation} \label{eq:rgb2r'g'b'}
        \begin{array}{r@{\:}c@{\:}l}
        R'&=&a_{11}R + a_{12}G + a_{13}B \\
        G'&=&a_{21}R + a_{22}G + a_{23}B \\
        B'&=&a_{31}R + a_{32}G + a_{33}B
        \end{array}
\end{equation}
A transformation matrix $\mathbi{A}$ is formed by the transformation coefficients $a_{ij}$
\begin{equation} \label{eq:transformationMatrix}
\mathbi{A} =
\left[ \begin{array}{ccc}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array} \right].
\end{equation}
The transition from the $R'G'B'$ to the $RGB$ based colour space requires the inverted matrix $\mathbi{A}^{-1}$.

%% Black-body radiation
\section[Black body radiation]{black body radiation}
\noindent{}\marginpar{black body definition}A black body is an idealised physical object that absorbs all electromagnetic radiation that falls on it. The ideal object would appear perfectly black because all the visible light is absorbed. Any light emitted from the object would only be a function of the object's temperature.
\noindent{}\marginpar{planckian radiation spectrum}A planckian black body radiation spectrum is a very useful standard for defining white light. It is characterised using only a single variable: the \emph{colour temperature}. The spectrum is given by the equation
\begin{equation} \label{eq:black-body_spectrum}
        I(λ)=\frac{2hc^{2}}{λ^5\Big[\mathrm{exp}\Big(\frac{hc}{λkT}\Big)-1\Big]}
\end{equation}
derived by Max~Planck in \abbr{1900}. An ideal black body object heated up to 6500\thinspace{}\textsc{k} would emit white light with a colour temperature of 6500\thinspace{}\textsc{k}. An incandescent light bulb is an example of a non-ideal black body source. The temperature of the filament has to be kept below its melting point, therefore the maximum colour temperature of this kind of light source is around 3500\thinspace{}\textsc{k}. The low luminous efficacy of a classical light bulb, approximately 15\thinspace{}lm/\textsc{w}, is the effect of the black-body spectrum having its maximum in the infrared part of the electromagnetic spectrum.

\begin{figure}[t]% Blackbody Locus on CIExy and CIEu'v' diagrams
                \centering
\includegraphics{graphics/ColourTheory/BlackbodyLocus.pdf}
\caption{Black body colour locus on the \cie{} \abbr{1931} $xy$ (left) and \cie{} \abbr{1976} $u'v'$ (right) diagram. Dotted lines show the position of correlated colour temperature points. Note that the \cct{} lines on the $u'v'$ diagram are perpendicular to the black body locus. \label{fig:BlackbodyLocus}}
\end{figure}
\marginpar{black body locus}The colour point locus of a black body radiator starts in the red corner of the \cie{} \abbr{1931} diagram, then moves through orange and yellow to end in the white region. This corresponds to the colours of real heated objects. Colour temperatures between 2000\thinspace{}\textsc{k} and 4000\thinspace{}\textsc{k} are commonly referred to as warm white. Colour temperatures above 7000\thinspace{}\textsc{k} are called cool white.

If the colour coordinates of a light source are not on the black-body locus but in its proximity, then the \emph{correlated colour temperature} parameter is used. It is defined as the temperature of a black body whose colour is the closest to the source colour. On a $u'v'$ diagram, the \cct{} can be determined as the colour temperature of the geometrically closest point on the planckian curve. The correlated colour temperature cannot be determined geometrically on the $xy$ plane, because the plane is not perceptually uniform.

To calculate the chromaticity coordinates of the planckian locus, a cubic approximation can be used~\cite{US7024034}. The $x$ coordinate is calculated for the 1667–4000\thinspace{}\textsc{k} and 4000–25000\thinspace{}\textsc{k} temperature ranges respectively as:
\begin{align}
x &= -0.2661239\cdot{}10^{9}/T^{3} - 0.2343580\cdot{}10^{6}/T^{2} + 0.8776956\cdot{}10^{3}/T + 0.179910 \nonumber \\
x &= -3.0258469\cdot{}10^{9}/T^{3} + 2.1070379\cdot{}10^{6}/T^{2} + 0.2226347\cdot{}10^{3}/T + 0.24039
\end{align}
The corresponding $y$ coordinate is calculated for the 1667–2222\thinspace{}\textsc{k}, 2222–4000\thinspace{}\textsc{k} and 4000–25000\thinspace{}\textsc{k} temperature ranges respectively as:
\begin{align}
y &= -1.1063814 x^3 - 1.34811020 x^2 + 2.18555832 x - 0.20219683 \nonumber \\
y &= -0.9549476 x^3 - 1.37418593 x^2 + 2.09137015 x - 0.16748867 \\
y &= +3.0817580 x^3 - 5.87338670 x^2 + 3.75112997 x - 0.37001483 \nonumber
\end{align}
%1667\text{K} \leq T \leq 4000\text{K}
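The piecewise cubic approximation above can be implemented directly; a minimal sketch is shown below. The function name is arbitrary and the validity range follows the limits quoted above.
\begin{verbatim}
# Minimal sketch of the planckian locus approximation quoted above (1667-25000 K).
def planckian_xy(T):
    """Approximate (x, y) chromaticity of a black body at temperature T [K]."""
    if not (1667.0 <= T <= 25000.0):
        raise ValueError("approximation valid for 1667 K <= T <= 25000 K")
    if T <= 4000.0:
        x = (-0.2661239e9 / T**3 - 0.2343580e6 / T**2
             + 0.8776956e3 / T + 0.179910)
    else:
        x = (-3.0258469e9 / T**3 + 2.1070379e6 / T**2
             + 0.2226347e3 / T + 0.24039)
    if T <= 2222.0:
        y = -1.1063814*x**3 - 1.34811020*x**2 + 2.18555832*x - 0.20219683
    elif T <= 4000.0:
        y = -0.9549476*x**3 - 1.37418593*x**2 + 2.09137015*x - 0.16748867
    else:
        y = 3.0817580*x**3 - 5.87338670*x**2 + 3.75112997*x - 0.37001483
    return x, y

# Example: planckian_xy(6500) gives approximately (0.313, 0.324).
\end{verbatim}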
%4000\text{K} \leq T \leq 25000\text{K}
%1667\text{K} \leq T \leq 2222\text{K}
%2222\text{K} \leq T \leq 4000\text{K}

%% Illuminants
\section[Standard illuminants]{standard illuminants}
The \cie{} has standardized several light illuminants: \textsc{a}, \textsc{b}, \textsc{c}, the \textsc{d} series, \textsc{e} and the \textsc{f} series. Examples of their chromaticities and colour temperatures can be seen in table \ref{tab:standard_illuminants}. The purpose of standardizing the illuminants is to be able to compare the colour sensation of various objects under typical light sources. The sources include the sun spectra at different times of the day, direct sunlight, an average incandescent bulb and fluorescent lamps of various compositions.
\input{tables/illuminants.tex}

\noindent{}Illuminants from the \textsc{d} series were introduced in 1967 to replace the \textsc{b} and \textsc{c} illuminants acting as daylight simulators~\cite{Judd1964}. Judd et~al. analyzed typical daylight spectra and tabulated the results with 10\thinspace{}nm increments. Obtaining higher resolution requires interpolating this data. The \textsc{d} series illuminants are used in the calculation of the colour rendering index—the measure of the quality of light.

%% Quality of light
\section[Quality of light]{quality of light}
Although many light sources with different spectra can have the same perceived colour, the appearance of various coloured objects under those illuminants can vary significantly.

\marginpar{colour rendering index}In order to easily compare the ability to reproduce colours, a \cri{} (colour rendering index) has been developed~\cite{CIE13.3}. Colour rendering is defined as a \emph{measure of the degree of colour shift objects undergo when illuminated by the light source as compared with the colour of those same objects when illuminated by a reference source of comparable colour temperature}. This means we compare a test light source to natural light or to an ideal source, e.g.\ a black-body radiator with the same colour temperature as the test source.

A high \cri{} can be achieved by broadband spectrum emitters, e.g.\ tungsten halogen lamps (\cri{}~≈~100), which have a spectrum comparable to daylight but an efficacy of only around 25\thinspace{}lm/\textsc{w}. On the contrary, low pressure sodium lamps (\cri{}~≈~25) can reach up to 200\thinspace{}lm/\textsc{w}, but they emit only two spectral lines at 589.0\thinspace{}nm and 589.6\thinspace{}nm. Therefore every object illuminated with such a lamp will appear either yellow, grey or black.

\marginpar{cri calculation}Calculation of the \cri{} starts with defining the \cct{} of the test source. For \cct{}~<~5000\thinspace{}\textsc{k} a black-body radiator with the same temperature is used. Illuminant \textsc{d} is used for sources with \cct{}~≥~5000\thinspace{}\textsc{k}. The colour differences $\Delta{}E_{i}$ in the \cie{} \abbr{1964} $u^{*}v^{*}w^{*}$ colour space of 14 selected Munsell samples (see table~\ref{tab:test_colour_samples}) are measured or calculated when illuminated with the test source and the reference source
\begin{equation} \label{eq:colour_difference_Ei}
        ΔE_{i} = \sqrt{(u^{*}_{ref}-u^{*}_{test})^{2}+(v^{*}_{ref}-v^{*}_{test})^{2}+(w^{*}_{ref}-w^{*}_{test})^{2}}.
\end{equation}
For each colour sample (tab.~\ref{tab:test_colour_samples}) a particular \cri{} ($R_{i}$) can be calculated using
\begin{equation} \label{eq:cri}
        R_i = 100 - 4.6ΔE_{i} \qquad{} (i=1, \ldots ,14).
\end{equation}
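A minimal sketch of this step is given below. It assumes that the $u^{*}v^{*}w^{*}$ coordinates of each test colour sample under the test and the reference source have already been computed; the averaging into the general index, discussed next, is included for completeness.
\begin{verbatim}
# Minimal sketch: special indices R_i and their mean from precomputed
# (u*, v*, w*) coordinates of the test colour samples (hypothetical inputs).
import numpy as np

def cri_indices(uvw_ref, uvw_test):
    """uvw_ref, uvw_test: arrays of shape (n_samples, 3)."""
    dE = np.sqrt(np.sum((np.asarray(uvw_ref) - np.asarray(uvw_test))**2, axis=1))
    R_i = 100.0 - 4.6 * dE        # special colour rendering indices
    R_a = np.mean(R_i)            # arithmetic mean over the chosen sample set
    return R_i, R_a
\end{verbatim}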
The general \cri{} $R_{a}$ is the arithmetic mean of the particular indices $R_{i}$.

\marginpar{comments on cri}Although the \cri{} is widely used for quantifying the colour quality of light sources, it is known to have some drawbacks~\cite{Ohno2004,Ohno2006}. For example, only medium saturated colours are taken into account when calculating the general colour rendering index; in particular, $R_{9}$ (saturated red) can be very low while the overall \cri{} is still quite high. Sometimes a high \cri{} lamp, e.g.\ a 2000\thinspace{}\textsc{k} black body ($R_{a}=100$), can have very poor colour rendering due to its very reddish light. This is also true for lamps with very high values of \cct{}. White light with a slight colour tint towards green or purple (lying above or below the planckian locus, respectively) can have a very high \cri{} but be unacceptable for general lighting purposes. The \cri{} is now being deprecated in favour of measures based on colour appearance models, e.g.~\abbr{ciecam02}~\cite{Schanda2005}. The Commission Internationale de l'Eclairage recommended the development of a new colour rendering index that would supplement the current \cri{}~\cite{CIE177}. Freyssinier and Rea proposed combining the present \cri{} value with a gamut area index, forming a two-metric solution~\cite{Freyssinier2010}.

\input{tables/tcs.tex}

%% Colour distance
\section[Colour distance]{colour distance}
MacAdam's research on the ability of the human eye to notice the difference in chromaticities of two light sources started the search for a uniform colour space where the perceived colour difference can be measured geometrically on the colour plane.

The \cie{} defines the colour difference as the Euclidean distance between two colour points in the \cie{}~\abbr{1976} $L^{*}a^{*}b^{*}$ colour space
\begin{equation} \label{eq:colour_difference_Eab}
        ΔE_{ab}^{*} = \sqrt{(L^{*}_{ref}-L^{*}_{test})^{2}+(a^{*}_{ref}-a^{*}_{test})^{2}+(b^{*}_{ref}-b^{*}_{test})^{2}}.
\end{equation}
A $ΔE_{ab}^{*}$ value of 2.15 corresponds to the \marginpar{just noticeable difference}just noticeable difference between two colour points \cite{Stokes1992}. The colour distance between the colour command and the generated colour is a good measure of the colour control loop accuracy.

Some previous works use the \cie{}~1976 colour space to define the colour distance.
\begin{equation} \label{eq:colour_difference_duv}
        Δu'v' = \sqrt{(u'_{ref}-u'_{test})^{2}+(v'_{ref}-v'_{test})^{2}}.
\end{equation}
This equation does not take into account the difference in light intensity, therefore it is only a measure of the chromaticity difference. A colour distance equal to 0.0035 is a just noticeable difference between two colour points~\cite{Wyszecki2000}.

\begin{figure}[t]% CIE_xy_MacAdam elipses
                \centering
\includegraphics{graphics/ColourTheory/CIExyMacAdam.pdf}
\caption[MacAdam ellipses]{MacAdam ellipses on the \cie{} \abbr{1931} $xy$ chromaticity diagram. Ellipses magnified~10×. A light source with colour coordinates lying inside the ellipse is indistinguishable from a light source with the coordinates in the centre of the ellipse. As seen from the various sizes of the ellipses, the \abbr{cie 1931} colour space is not perceptually uniform.\label{fig:CIE_xy_MacAdam}}
\end{figure}

% chapter colour_theory (end)

%%%%%%%%%%%%%%%
%% Chapter 3 %%
%%%%%%%%%%%%%%%
\chapter{Light-emitting diodes}% (fold)
\label{cha:light_emitting_diodes}
\begin{figure}[!ht]
                \centering
\includegraphics[scale=1.3]{graphics/LED/HJRound.pdf}
\label{fig:hjround}
\end{figure}
\noindent{}An \led{} is a solid-state device that emits light by means of electroluminescence. The first observations of light emission from SiC (carborundum) crystals were made in \abbr{1907} by Henry Joseph Round. The first observed \led{} was a Schottky diode. The light was produced at a metal-semiconductor junction. Under normal, forward bias conditions, the current flowing through a Schottky diode consists only of majority carriers. Minority carriers can be injected under strong forward bias conditions (or through the avalanche effect), thus making light emission possible.

%% Bandgap
\section[Bandgap]{bandgap}
In semiconductor materials, the valence and conduction energy bands do not overlap. The energy difference between the highest point in the valence band and the lowest point in the conduction band is called the bandgap. The energy levels in the bandgap are forbidden for electrons. A transition between the two bands is possible only when an electron gains enough surplus energy or loses enough energy—the minimal amount of energy for the transition equals the bandgap energy. \marginpar{direct bandgap}In direct bandgap semiconductors, electrons from the conduction band minima recombine with holes at the valence band maxima with preservation of momentum. On the contrary, in \marginpar{indirect bandgap}indirect bandgap semiconductors, the respective extrema of the conduction and valence bands do not have the same value of the quantum-mechanical wave vector. Photons carry very little momentum compared to the electron crystal momentum (which is of the order of $2π/a$, where $a$ is the lattice constant), therefore every radiative transition is almost vertical. Emission of a \emph{phonon}, a quantised mode of vibration of the crystal lattice, is essential to preserve the momentum~\cite{SSPhysics}. Such a transition is less likely to happen than a direct transition. Moreover, it increases the temperature of the active area; therefore indirect bandgap semiconductors, like silicon, are very inefficient at generating light.
\begin{figure}[h]% Direct_Indirect_Semiconductors
        \centering
\includegraphics{graphics/LED/DirectIndirectSemiconductors.pdf}
\caption[]{Photon generation in direct (left) and indirect (right) bandgap semiconductors. In indirect bandgap semiconductors, emission of a phonon must assist the radiative recombination in order to preserve the momentum \cite{PhotonicDevices}.\label{fig:Direct_Indirect_Semiconductors}}
\end{figure}

%% Recombination
\section[Recombination]{recombination}
A pair of excess carriers—a free electron and a hole—can recombine producing a photon. This process is called \emph{radiative recombination}. However, the recombination can also occur in a non-radiative way where no light is generated. The result of radiative recombination is the transition of an electron from a higher energy state to a lower energy state with the emission of radiative energy. In a non-radiative recombination process, the excess energy from the electron transition is converted into thermal energy in the form of crystal lattice vibrations. Maximising the radiative and minimising the non-radiative processes is the goal when maximising the efficiency of \led{}s.
% Photonic devices book, Jia-Ming Liu p. 817

% Bimolecular radiative recombination
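As a side note to the bandgap discussion above, the relation between a transition energy and the emitted wavelength, $λ = hc/E$, is often used for quick orientation. A minimal sketch with standard physical constants is given below; the example energy is only illustrative.
\begin{verbatim}
# Minimal sketch: photon wavelength corresponding to a transition energy E [eV].
# lambda = h*c / E; with E in eV this reduces to roughly 1240 / E [nm].
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
Q = 1.602176634e-19  # elementary charge [C], i.e. J per eV

def wavelength_nm(energy_ev):
    return H * C / (energy_ev * Q) * 1e9

# Example: a 2.7 eV transition corresponds to roughly 459 nm (blue light).
\end{verbatim}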
\noindent{}\marginpar{bimolecular radiative recombination}In light-emitting diodes, almost all of the radiative recombination rate is contributed by the bimolecular recombination rate~\cite{PhotonicDevices}. The net recombination rate is given by
\begin{equation}
 R_{net} = Bnp - Bn_{0}p_{0}
\end{equation}
where $B$ is the bimolecular recombination coefficient, $n$ and $p$ are the electron and hole concentrations and $n_{0}$ and $p_{0}$ are the electron and hole concentrations under equilibrium conditions.

Under high excess carrier density, when $N \gg n_{0}$, $p_{0}$ ($N = n-n_{0} = p-p_{0}$), the radiative recombination rate can be expressed as
\begin{equation}
 R = BN^2.
\end{equation}

% Non-radiative recombinations
\begin{figure}[h]% RecombinationTypes
\centering
\includegraphics{graphics/LED/RecombinationTypes.pdf}
\caption[Types of recombinations]{Band diagram showing possible recombination mechanisms: deep-level Shockley-Read-Hall~(a), Auger~(b,~c) and bimolecular band-to-band radiative~(d) \cite{PhotonicDevices}.\label{fig:RecombinationTypes}}
\end{figure}

% Deep level
\noindent{}\marginpar{deep level recombination}In the presence of e.g.\ a defect of the crystal lattice, non-radiative recombination can occur (fig.~\ref{fig:RecombinationTypes}a). Such defects act as very efficient non-radiative recombination centres, especially if their energy level is located close to the middle of the bandgap~\cite{SemiconductorPhysics}. An electron or hole is trapped in the forbidden region of the band structure by a defect in the crystal lattice. These defects can be either unintentionally introduced or engineered to produce a so-called deep-level \led{}.

The statistics of the process were described by Shockley and Read and, independently, by Hall. The net recombination rate is described by
\begin{equation} \label{eq:SRHRecombinationRate}
        R_{net} = \frac{np - n_{i}^{2}}{τ_{p}(n+n_{1}) + τ_{n}(p+p_{1})}.
\end{equation}
In doped semiconductors, equation~\ref{eq:SRHRecombinationRate} simplifies to
\begin{eqnarray}
        R_{net} \approx \frac{n-n_{0}}{τ_{n}}\quad\textrm{for}\quad p\gg{}n \\
        R_{net} \approx \frac{p-p_{0}}{τ_{p}}\quad\textrm{for}\quad n\gg{}p
\end{eqnarray}
where $τ_{n}$ and $τ_{p}$ are the recombination lifetimes of electrons and holes, respectively. It can be seen that the recombination rate becomes proportional to the excess carrier density
\begin{equation}\label{eq:RadiativeRecombinationRate}
 R=AN.
\end{equation}

% Auger
\marginpar{auger recombination}\noindent{}During an Auger transition (fig.~\ref{fig:RecombinationTypes}b and c), the energy created in a recombination process between an electron and a hole is transferred to a third carrier. No light is generated and the surplus energy pushes either the second electron higher into the conduction band or the second hole deeper into the valence band. The energy passed on to the third carrier is then dissipated by the emission of phonons. The process requires three carriers to be at the same place during recombination, therefore the recombination rate is proportional to the third power of the carrier density
\begin{equation} \label{eq:AugerRecombinationRate}
        R_A = CN^3,
\end{equation}
where $C$ is the \emph{Auger coefficient}.

The Auger recombination mechanism has been proposed as one of the possible reasons for the efficiency droop under high current injection in InGaN structures~\cite{Shen2007}.
Although the observed efficiency droop resembles the behaviour expected from Auger recombination, some argue~\cite{Hader2008,Yen2009} that the numerical values of the Auger coefficient are not high enough for this process to dominate the total power losses, and that other loss mechanisms like electron current leakage and non-uniform hole distribution may be responsible.

Taking into account equations \ref{eq:SRHRecombinationRate}, \ref{eq:RadiativeRecombinationRate} and \ref{eq:AugerRecombinationRate}, we can write the stationary balance equation for \led{}s as
\begin{equation} \label{eq:StationaryBalance}
        j/e = AN + BN^2 + CN^3 + DN^m % A- non-rad, B - rad, C - Auger, D^m - other
\end{equation}
where $j$ is the injected current density, $e$ is the elementary charge, $N$ is the \abbr{2d} excess carrier density in the active area and the coefficients $A$, $B$, $C$ and $D$ describe the non-radiative Shockley-Read-Hall recombination, the bimolecular radiative process, Auger recombination and other non-radiative processes, respectively. It is beneficial to supply \led{}s with a current level at which the bimolecular radiative process dominates.

%% Materials
\section[Materials]{materials}
Nowadays, the semiconductors used for solid-state lighting (\led{}s and laser diodes) are the \textsc{iii}-\textsc{v} compounds (Al, Ga and In can be found in group \textsc{iii} of the periodic table, N and P in group \textsc{v}). In order to create a light generating structure, heteroepitaxial growth technology is used.

\marginpar{AlGaInP}The compound crystal is grown on a substrate. The $(\mathrm{Al}_{x}\mathrm{Ga}_{1-x})_{1-y}\mathrm{In}_{y}\mathrm{P}$ is lattice matched to $\mathrm{GaAs}$ (lattice constant 5.65\thinspace{}Å) for $y=0.48$. The semiconductor has a direct bandgap for emission wavelengths longer than 555~nm, therefore it is used to create red, orange, amber and yellow \led{}s~\cite{AlGalnPBook}.

\marginpar{InGaN}$\mathrm{In}_{x}\mathrm{Ga}_{1-x}\mathrm{N}$ nitrides lack a native bulk substrate. InGaN high power structures are typically grown on lattice mismatched sapphire (lattice constant 2.75\thinspace{}Å, 15\thinspace{}\% mismatch) or SiC substrates (lattice constant 3.08\thinspace{}Å, 3.5\thinspace{}\% mismatch)~\cite{Roussel2006}.

Lattice mismatch between the substrate and the grown crystal results in very high dislocation densities, which act as non-radiative, deep-level recombination centres. These centres contribute to decreased optical output~\cite{ledSchubert}. The \textsc{iii}-\textsc{v} phosphide materials used for red, orange, amber and yellow diodes are strongly affected by the crystal defects. The \textsc{iii}-\textsc{v} nitride compounds are less affected by the crystal defects due to carrier localisation that prevents non-radiative recombination. Therefore it is possible to create a high power blue \led{} even with a high number of dislocations present in the diode.

%% Structures
\section[Structures]{structures}
Three levels of \led{} structure will be discussed: starting from the active region that generates light in section~\ref{ssec:activeRegion}, through the \led{} chip that is responsible for supplying the active region with free carriers and extracting the generated light in section~\ref{ssec:ledChip}, to the highest level structure of \led{} packaging in section~\ref{ssec:packaging}.
\subsection{Active region}\label{ssec:activeRegion}
% homojunction
\marginpar{homojunction}The homojunction is the basic structure of all diodes.
A connection between two differently doped semiconductors with the same bandgap forms a p-n junction. Under forward bias, free carriers from both semiconductors are forced into the regions of the opposite conductivity type.

\begin{figure}[h]% Homojunction
        \centering
\includegraphics{graphics/LED/Homojunction.pdf}
\caption[Homojunction]{Band diagram showing a p-n junction under no bias (left) and forward bias (right) conditions. Under forward bias, the recombination occurs in the area around the junction defined by the diffusion lengths. \label{fig:Homojunction}}
\end{figure}

\noindent{}The diffusion lengths ($L_n$ and $L_p$) of minority carriers depend on the carrier mobility.
\begin{equation} \label{eq:DiffusionLength}
        L_{n}=\sqrt{D_{n}τ_{n}},\qquad{} L_{p}=\sqrt{D_{p}τ_{p}},
\end{equation}
where $τ_{n}$ and $τ_{p}$ are the electron and hole minority carrier lifetimes. $D_n$ and $D_p$ are the diffusion constants
\begin{equation} \label{eq:EinsteinRelation}
        D_{n}={\frac{kT}{e}}\mu{}_{n},\qquad{} D_{p}={\frac{kT}{e}}\mu{}_{p}
\end{equation}
where $\mu{}_{n}$ and $\mu{}_{p}$ are the electron and hole mobilities, dependent on the semiconductor.

Free carriers recombine radiatively around the junction area within the diffusion distance. The recombination rate is given by the bimolecular recombination equation $R=Bnp$, where $n$ and $p$ are the electron and hole carrier concentrations and $B$ is the bimolecular recombination coefficient. Light generation in this structure is limited by the low excess carrier density. The width of the $p$ region is critical for obtaining higher efficiencies. When the region is too shallow, the electrons can escape and recombine non-radiatively through crystal defects. A region that is too thick will easily reabsorb the emitted photons, thus reducing the overall efficiency. A heterojunction can be used to overcome both of these problems at the same time.

% double heterojunction
\marginpar{double heterostructure}
A third semiconductor is added between two, differently doped, semiconductors to form a confinement area for free carriers. This third semiconductor has a lower bandgap than the surrounding materials. As a result, the excess carriers get trapped between two barriers. The width of the confinement region can be controlled during the manufacturing process. It is much smaller than the diffusion lengths in the particular semiconductor, therefore free carriers are confined to a much smaller area compared to the homojunction structure. The increased carrier densities yield greatly increased light generation.
\begin{figure}[h]% Double Heterostructure
        \centering
\includegraphics{graphics/LED/DoubleHeterostructure.pdf}
\caption[Double heterostructure]{Band diagram showing a double heterostructure under forward bias conditions. Free carriers are confined between two barriers. \label{fig:DoubleHeterostructure}}
\end{figure}

\noindent{}Free carriers are distributed within the active region according to the Fermi-Dirac distribution. Therefore carriers with energy levels higher than the barrier height can escape from the confinement area. The concentration of these carriers can be calculated as
\begin{equation} \label{eq:LeakageElectrons}
        n_{B} = \int_{E_{B}}^{\infty{}}\!\!\!\!ρ_{dos}\,f_{FD}\ud{}E
\end{equation}
where $ρ_{dos}$ is the density of states, $f_{FD}$ is the Fermi-Dirac distribution and $E_{B}$ is the height of the heterojunction barrier.
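Equation~\ref{eq:LeakageElectrons} can be evaluated numerically once a model for the density of states is chosen. The sketch below assumes a bulk, parabolic-band density of states; the effective mass, the Fermi level and the barrier height are hypothetical input values, so the sketch only illustrates the integration itself.
\begin{verbatim}
# Minimal sketch of eq. (LeakageElectrons): carriers above the barrier energy.
# Assumes a bulk parabolic-band density of states; m_eff, E_F and E_B are
# hypothetical inputs, with energies measured from the band edge.
import numpy as np

HBAR = 1.054571817e-34   # J s
M0   = 9.1093837015e-31  # kg
Q    = 1.602176634e-19   # J per eV
KB   = 1.380649e-23      # J/K

def leakage_density(E_B_ev, E_F_ev, T, m_eff=0.2, E_max_ev=None):
    """Integrate rho_dos(E) * f_FD(E) from E_B upwards; result in m^-3."""
    E_max_ev = E_B_ev + 1.0 if E_max_ev is None else E_max_ev
    E = np.linspace(E_B_ev, E_max_ev, 2000) * Q            # energy grid [J]
    m = m_eff * M0
    dos = (1.0 / (2.0 * np.pi**2)) * (2.0 * m / HBAR**2)**1.5 * np.sqrt(E)
    f_fd = 1.0 / (np.exp((E - E_F_ev * Q) / (KB * T)) + 1.0)
    return np.trapz(dos * f_fd, E)

# Hypothetical example: compare leakage_density(0.3, 0.1, 300.0)
# with leakage_density(0.3, 0.1, 400.0).
\end{verbatim}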
This leakage contributes to decreased efficiency, especially at high temperatures. An electron blocking layer can be introduced into the structure to help confine the free carriers.
\begin{figure}[t]% Electron Blocking Layer
        \centering
        \includegraphics{graphics/LED/ElectronBlockingLayer.pdf}
\caption[Double heterostructure]{Band diagram showing a double heterostructure without (left) and with (right) an electron blocking layer. The additional layer prevents electrons from escaping the active region, decreasing the leakage current and thus increasing the diode efficiency \cite{ledSchubert}. \label{fig:ElectronBlockingLayer}}
\end{figure}

\noindent{}The bigger width of the active area, compared to quantum well structures, makes the carrier density lower and therefore decreases the impact of Auger recombination on the diode efficiency at higher currents.  % droop

% MQW structures
\marginpar{quantum well structures}When an additional, thin (in the range of tens of ångströms) layer of a semiconductor with a smaller bandgap than the cladding layers is added, a quantum well is formed. A quantum well structure confines the excess carriers to a very narrow area in order to increase the carrier concentration and therefore increase the recombination rate. The narrow width of the active area also reduces self-absorption effects in some \led{} structures~\cite{Hassan2005}.

\marginpar{allowed energy levels}When the thickness of the well is close to the de~Broglie wavelength, quantum effects appear. Solving the Schrödinger equation for a finite potential well produces the allowed energy levels within the quantum well. The energy states are discrete and their values depend on the well width. Therefore, by varying the width of the well one can change the colour of an \led{}.

\marginpar{saturation effect}One of the implications of using quantum wells is that they have a limited capacity and, at high current injection, carrier overflow may occur. When the carrier concentration in the active area reaches a certain level, the Fermi energy rises above the top of the quantum well. The carriers overflow the structure and a further increase of the carrier density in the active area is impossible. The light output becomes saturated. In order \marginpar{mqw structures}to overcome this problem, multiple quantum well structures are used~\cite{ledSchubert}, as shown in figure~\ref{fig:MQW}.

\begin{figure}[h]% MQW
        \centering
\includegraphics{graphics/LED/MQW.pdf}
\caption{Band diagram showing a multiple quantum well structure under forward bias conditions (left) and details of a single quantum well (right). A thin semiconductor layer with energy gap $E_{g2}$ is surrounded by a semiconductor with a higher bandgap, forming a quantum well. The allowable carrier energy values, subbands, are shown inside the potential well \cite{ledSchubert}. \label{fig:MQW}}
\end{figure}

The carrier density in each of the available energy states depends on the injection current density. At low values, mostly the lowest energy states in the conduction band well and the highest states in the valence band well are occupied.

\subsection{\led{} chip}\label{ssec:ledChip}
Due to the low efficiency of the homojunction structure, high power \led{}s use either double heterostructure or \textsc{mqw} structures for their active regions.
These active structures are embedded in a \led{} chip, whose task is to supply the active region with free carriers and to extract the generated light as efficiently as possible.

\begin{figure}[h]% LED chip structures
        \centering
\includegraphics{graphics/LED/DiodesStructures.pdf}
\caption{Common structures of high power \led{}s: truncated-inverted pyramid (left) for phosphide diodes~\cite{Krames1999}, flip-chip (middle) gallium nitride structure~\cite{Wierer2001} and thin-film flip-chip (right) GaN structure~\cite{Shchekin2006}.\label{fig:LEDChipStructures}}
\end{figure}

\noindent{}Supplying current to the active region is done using metallic contacts and conductive semiconductor layers acting as current spreading layers.

\marginpar{light extraction}Light extraction is a very difficult task, as \led{} chips are built from materials with a high refractive index, causing the total internal reflection phenomenon that traps light inside the chip. Generated photons can be reabsorbed by the active area or by the metallic contacts, therefore they should be extracted from the chip as fast as possible. Special geometric shapes (fig.~\ref{fig:LEDChipStructures}) of the \led{} chip as well as micro patterning of the edges allow the light to escape the structure~\cite{Krames2007}. The bottom contacts are made reflective to maximise the light output. For example, the thin-film flip-chip structure is made by growing the GaN structure on a sapphire substrate, flipping it and chemically removing the transparent substrate for increased optical transmittance. The revealed n-type GaN layer is then roughened through photochemical etching to increase light extraction~\cite{Shchekin2006,ledSchubert}.

\subsection{Packaging}\label{ssec:packaging}
\led{} packages very often include a lens that shapes the light beam. The lens and the plastic body of the diode are not thermally conductive, therefore the packaging must provide good thermal contact between the \led{} chip and the mounting surface. Some \led{} chips must be electrically isolated from the heatsink. This poses a big challenge, as good electrical insulators are typically also good thermal insulators.

%% Electrical characteristics
\section[Electrical characteristics]{electrical characteristics}
Light in \led{} structures is created by electroluminescence—this means that the energy created during electron-hole recombination is directly converted to photons. Therefore the forward voltage of an \led{} must be at least equal to the bandgap energy of the active region divided by the elementary charge.
\begin{equation} \label{eq:ForwardVoltage}
        V_{f} \geq{} \frac{h\nu}{e} \approx{} \frac{E_{g}}{e}
\end{equation}
The value of the forward voltage can be increased by a few phenomena. The series resistance will increase the diode drive voltage linearly with the current. The total diode series resistance is a sum of the contact resistance, the resistance caused by abrupt band structure transitions and the bulk resistance of the \led{} materials. Additionally, carriers injected into a quantum well lose some energy, because the discrete energy levels inside the well will always be lower than the cladding layer energy. Therefore the difference between the free carrier energy in the confinement layer and inside the well has to be dissipated by phonon emission. The forward voltage of an \led{} can be expressed as
\begin{equation} \label{eq:DetailedForwardVoltage}
        V_{f} = \frac{E_{g}}{e} + IR_{s}+ \frac{\Delta{}E_{C}-E_{0}}{e} + \frac{\Delta{}E_{V}-E_{0}}{e}
\end{equation}
where $\Delta{}E_{C}-E_{0}$ and $\Delta{}E_{V}-E_{0}$ represent the differences in energy levels between the quantum well and the conduction and valence confinement layers, respectively~\cite{Hassan2005}. \marginpar{shockley equation}The electrical characteristics of a p-n junction are described by the Shockley equation \cite{PhotonicDevices}
\begin{equation}
 I = I_S \big( \mathrm{exp}(V/nV_T) - 1\big)
\end{equation}
where $I$ and $V$ are the diode's current and voltage, respectively, and $n$ is the ideality factor. $I_S$ and $V_T$ are the reverse bias saturation current and the thermal voltage described by
\begin{equation}
 I_S = eA \Bigg( \sqrt{ \frac{D_p}{τ_p} } \cdot \frac{n_i^2}{N_D} + \sqrt{ \frac{D_n}{τ_n} } \cdot \frac{n_i^2}{N_A}\Bigg), \qquad V_T = \frac{kT}{e}.
\end{equation}
Including the parasitic series resistance and assuming forward-bias conditions, when $V \gg kT/e$, the \textsc{iv} characteristics can be rewritten as
\begin{equation}\label{eq:iv_diode_model}
 I = I_S\Big( \mathrm{exp}\big(e(V-IR_s)/nkT\,\big)\Big).
\end{equation}

\begin{figure}[t]% IV characteristics
        \centering
\includegraphics{graphics/LED/IV/IdealityFactorSeriesResistance/iv_rgbw.pdf}
\caption{Measured current-voltage characteristics of red, green, blue and white Luxeon \textsc{k2} diodes at 20°\textsc{c}.\label{fig:IV_n_Rs}}
\end{figure}

\noindent{}Shockley's equation does not include the presence of quantum well(s) in the diode structure nor tunneling injection. Despite this, it is common practice to fit measurement data to the Shockley equation even though it does not reflect all physical phenomena in the diode~\cite{Lee2006b}.

In order to compare the theoretical electrical model for a single p-n junction with the structures used in commercially available high intensity light-emitting diodes, the characteristics of basic red, green, blue and white diodes were measured using the equipment described in section \ref{measurementHardware}. The diodes were placed on a thermally controlled cold plate and a charged capacitor was connected to their terminals. The capacitor was discharged through the diode, providing a short, low energy pulse (thus minimising the heating of the junction).

Figure \ref{fig:IV_n_Rs} shows the current-voltage characteristics of four high power Luxeon \textsc{k2} diodes. The data was fitted to the theoretical model described by equation~\ref{eq:iv_diode_model}. The identified values of the ideality factor and series resistance are gathered in table \ref{tab:n_Rs}. Expected values of the ideality factor are in the 1–2 range. However, the measured values of the ideality factor are much higher, in the range of 3–7. This is attributed to tunneling injection.

\begin{table}[!ht]\footnotesize
\caption{Measured values of ideality factor and series resistance of Luxeon \textsc{k2} diodes.\label{tab:n_Rs}}
\centering
\begin{tabular}{lcc}
\textsc{diode} & $n$ & $R_s$ \\
\hline
\abbr{lxk2-pd12-r00} (red)   & 3.166 & 2.334\thinspace{}Ω \\
\abbr{lxk2-pm14-u00} (green) & 7.349 & 0.898\thinspace{}Ω \\
\abbr{lxk2-pb12-k00} (blue)  & 6.014 & 0.775\thinspace{}Ω \\
\abbr{lxk2-pw12-u00} (white) & 4.563 & 0.718\thinspace{}Ω \\
\hline
\end{tabular}
\end{table}
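One possible way to reproduce such a fit is sketched below: equation~\ref{eq:iv_diode_model} is inverted to express the voltage as a function of current, $V = \frac{nkT}{e}\ln(I/I_{S}) + IR_{s}$, which is then fitted to the measured points with a standard least-squares routine. The data arrays and starting values below are hypothetical placeholders, and this is not necessarily the exact procedure used to obtain table~\ref{tab:n_Rs}. Fitting the logarithm of the saturation current keeps the optimiser away from non-physical negative values.
\begin{verbatim}
# Minimal sketch: extracting n, R_s and I_S from measured IV points by fitting
# V(I) = (n*k*T/e)*ln(I/I_S) + I*R_s  (Shockley model with series resistance).
import numpy as np
from scipy.optimize import curve_fit

Q  = 1.602176634e-19
KB = 1.380649e-23
T  = 293.15  # 20 degC cold plate

def v_of_i(i, n, r_s, log10_is):
    return n * KB * T / Q * np.log(i / 10.0**log10_is) + i * r_s

# Hypothetical placeholder data, not measured values:
i_meas = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70])   # A
v_meas = np.array([2.55, 2.63, 2.73, 2.83, 2.91, 3.00])   # V

popt, _ = curve_fit(v_of_i, i_meas, v_meas, p0=[3.0, 1.0, -12.0], maxfev=20000)
n_fit, rs_fit, log10_is_fit = popt
\end{verbatim}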
%% Optical characteristics
\section[Optical characteristics]{optical characteristics}
Typical emission spectra of AlGaInP and InGaN diodes are shown in figures~\ref{fig:MeasuredRGBSpectra_nm} and \ref{fig:MeasuredRGBSpectra_eV}. Green gallium nitride diodes have the widest emission spectra due to high fluctuations of the chemical composition in the indium-rich semiconductor~\cite{HandbookOfOpticalMetrology}.

\begin{figure}[!h]% Measured emission spectra of R, G, B diodes
        \centering
                \includegraphics{graphics/LED/Spectrum/RGBspectra.pdf}
        \caption{Measured emission spectra of red AlGaInP and green and blue InGaN diodes with normalised intensities. The \fwhm{} is equal to 20, 33 and 21.3\thinspace{}nm for the red, green and blue diode, respectively.}
        \label{fig:MeasuredRGBSpectra_nm}
\end{figure}

\begin{figure}[!h]% Measured emission spectra of R, G, B diodes in energy scale
        \centering
        \includegraphics{graphics/LED/Spectrum/RGBspectra_eV.pdf}
        \caption{\led{} emission spectra (fig.~\ref{fig:MeasuredRGBSpectra_nm}) shown in energy scale. The \fwhm{} is equal to 2.41, 6.00 and 4.93\thinspace{}\textit{kT} for the red, green and blue diode, respectively. The theoretical \fwhm{} value for \led{}s is equal to 1.8\thinspace{}\textit{kT}.}
        \label{fig:MeasuredRGBSpectra_eV}
\end{figure}
\clearpage % *** manual new page 43/44
% emission spectrum
\noindent{}\marginpar{theoretical emission spectrum}The theoretical emission spectrum of a bulk semiconductor \led{} is a product of the density of states and the distribution of carriers in the allowed bands, characterised by a Boltzmann distribution \cite{ledSchubert}.
\begin{equation} \label{eq:TheoreticalemissionSpectrum}
        I(E) = ρ(E) \cdot f_{B}(E) \propto \sqrt{E-E_{g}} \cdot e^{-E/kT}
\end{equation}
\noindent{}This equation is to be used with care for InGaN diodes, since the Boltzmann distribution is not applicable to localised carriers.

\begin{figure}[!ht]% Theoretical emission spectrum
        \centering
                \includegraphics{graphics/LED/Spectrum/TheoreticalSpectrum.pdf}
        \caption{Theoretical emission spectrum as a product of the Boltzmann distribution and the density of states \cite{ledSchubert} (left) and measured spectrum of a green \led{} (right), both plotted in linear (top) and log-lin scales (bottom).}
        \label{fig:TheoreticalSpectrum}
\end{figure}

\noindent{}\marginpar{full width at half maximum}The theoretical maximum of emission occurs at $E=E_{g}+kT/2$ and the value of the full width at half maximum is 1.8\thinspace{}$kT$ (figure~\ref{fig:TheoreticalSpectrum}). In practice, the spectral width of transitions in an \led{} is wider, in the range of 2–4\thinspace{}$kT$, due to alloy broadening~\cite{ledSchubert}. This is clearly visible when comparing the theoretical spectrum with a measured one (fig.~\ref{fig:TheoreticalSpectrum}). Practical diodes exhibit photon emission with energies much lower than predicted by the theory, particularly InGaN diodes, where the bandgap energy depends on the indium composition and any fluctuation of the indium content can cause broadening of the spectrum.

Plotting the theoretical emission spectrum on a semi-logarithmic plot shows that the high energy side of the spectrum is linear, with a slope dependent on the junction temperature.
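The theoretical lineshape of equation~\ref{eq:TheoreticalemissionSpectrum} and its 1.8\thinspace{}$kT$ width are easy to verify numerically; a minimal sketch, with an arbitrary example bandgap, is given below.
\begin{verbatim}
# Minimal sketch of eq. (TheoreticalemissionSpectrum): I(E) ~ sqrt(E-Eg)*exp(-E/kT).
# Numerically confirms the peak at Eg + kT/2 and the FWHM of about 1.8 kT.
import numpy as np

kT = 0.02585                 # eV, roughly room temperature
Eg = 2.0                     # arbitrary example bandgap [eV]

E = np.linspace(Eg, Eg + 0.5, 20001)
I = np.sqrt(np.maximum(E - Eg, 0.0)) * np.exp(-E / kT)
I /= I.max()

peak  = E[np.argmax(I)]      # expected at Eg + kT/2
above = E[I >= 0.5]
fwhm  = above[-1] - above[0]
print((peak - Eg) / kT, fwhm / kT)   # -> approximately 0.5 and 1.8
\end{verbatim}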
	I(λ) \propto{} \frac{1}{σ\sqrt{2π}} \mathrm{exp}\bigg(\!\!-\frac{(λ-λ_{peak})^2}{2σ^2}\bigg)
\end{equation}
where the standard deviation relates to the \fwhm{} as~\cite[p.~335]{ledSchubert}
\begin{equation} \label{eq:GaussianDistributionSigma}
	σ = \frac{Δλ}{2\sqrt{2\ln{}2}} \approx \frac{Δλ}{2.355} .
\end{equation}
Any mismatch between the model of the spectrum and the actual \led{} spectrum may introduce a significant colour point difference. The actual spectrum of an \led{} is not symmetrical, therefore a Gaussian model can introduce significant errors. To overcome this problem, Man and Ashdown~\cite{Man2006} proposed a double Gaussian model: a sum of two Gaussian curves that provides a much better empirical model than a single curve.

%% Thermal characteristics
\section[Thermal characteristics]{Thermal characteristics}
\led{}s show a very strong temperature dependence. Because the junction temperature can vary significantly during normal operation (thermal cycling during low frequency \pwm{}, dimming of the diode, heatsink and ambient temperature changes), it is necessary to investigate these phenomena thoroughly. Not only does the efficiency drop with increased temperature, but the colour point can also move far outside the MacAdam ellipse around the original position. The colour point shift occurs due to the change in spectrum shape. In order to quantify the spectral shift, four basic colour diodes were measured at different heatsink temperatures. The resulting spectra are gathered in figure~\ref{fig:Spectra2Temperature}. The results show that the AlGaInP red diode experiences the biggest spectral change, while the spectral shape of the InGaN diodes is not affected by the change of temperature.

\begin{figure}[!ht]% Spectra vs. temperature
	\centering
		\includegraphics{graphics/LED/Thermal/SpectraChangeRGBW.pdf}
	\caption{Measured change in spectral shape for Luxeon Rebel red (left), green, blue and white (right) diodes for varying heatsink temperature.}
	\label{fig:Spectra2Temperature}
\end{figure}

\noindent{}The decrease in luminous output is connected to several mechanisms: an increase of non-radiative recombination (deep-level and surface) and carrier losses due to leakage over heterostructure barriers. InGaN diodes, however, experience a strong efficiency droop at high current values. The physical origin of the droop is not yet known~\cite{Schubert2007}. Electron leakage, lack of hole injection, carrier delocalisation, Auger recombination, defects, and junction heating have been suggested as explanations for this phenomenon~\cite{Yen2009,Schubert2009}.

\begin{figure}[p]% Flux, Vf, Prad, FWHM, efficiency, efficacy vs. temperature
	\centering
	\includegraphics{graphics/LED/Thermal/parameter_variation_with_temperature_K2.pdf}
	\caption{Measured Luxeon \textsc{k2} diodes' parameters with varying heatsink temperature.}
	\label{fig:ParametersVsTemperatureK2}
\end{figure}

\begin{figure}[p]% Flux, Vf, Prad, FWHM, efficiency, efficacy vs. temperature (Rebel)
	\centering
	\includegraphics{graphics/LED/Thermal/parameter_variation_with_temperature_rebel.pdf}
	\caption{Measured Luxeon Rebel diodes' parameters with varying heatsink temperature.}
	\label{fig:ParametersVsTemperatureRebel}
\end{figure}

The rise of heatsink temperature corresponds to a rise of junction temperature.
The relationship between junction and heatsink temperatures is not linear because the losses in the diode depend on the junction temperature. Because of the complexity of junction temperature measurement, the properties of various \led{}s have been analysed with respect to the heatsink temperature. The diodes' properties are plotted in figures~\ref{fig:ParametersVsTemperatureK2} and~\ref{fig:ParametersVsTemperatureRebel} for Luxeon \textsc{k2} and Luxeon Rebel diodes, respectively. The rate of change of these parameters can be approximated with linear temperature dependencies. The temperature coefficients are gathered in table~\ref{tab:diodeParametersVsTemp}. In order to collect the necessary data the diodes were placed on a thermally controlled plate and mounted on the integrating sphere. The current was set to 700\thinspace{}mA using a \dc{} regulated current source. The heatsink temperature was regulated in the 20–60°\textsc{c} range. After each change in temperature the system was allowed to reach thermal steady state, after which the diode's parameters were recorded.

The radiant output of the diode depends strongly on the junction temperature. Figures \ref{fig:ParametersVsTemperatureK2}\thinspace{}b and~\ref{fig:ParametersVsTemperatureRebel}\thinspace{}b show the drop of radiometric power with the increase of heatsink temperature.

The spectra of all diodes shift towards longer wavelengths (fig.~\ref{fig:ParametersVsTemperatureK2}\thinspace{}e and~\ref{fig:ParametersVsTemperatureRebel}\thinspace{}e). Both these phenomena influence the perceived flux (fig.~\ref{fig:ParametersVsTemperatureK2}\thinspace{}a and~\ref{fig:ParametersVsTemperatureRebel}\thinspace{}a). A shift towards longer wavelengths increases the luminous flux of blue and green diodes but decreases it for red diodes. This is because the peak of eye sensitivity is at 555\thinspace{}nm (fig.~\ref{eye_sensitivity}): moving the spectrum towards this wavelength increases the flux, while moving away from it decreases the flux.

The luminous flux plots show that the red AlGaInP diodes are the most affected by the temperature change because of the aforementioned phenomena. Around 40\thinspace{}\% of the flux is lost when the heatsink temperature increases by 40°\textsc{c}, for both diode families. Nitride diodes show little temperature dependence: less than 8\thinspace{}\% of the flux is lost. Phosphor-converted white diodes experience a bigger loss of intensity than pure GaN diodes: with a 40°\textsc{c} temperature rise around 13\thinspace{}\% of the flux was lost, for both diode families.

\begin{table}[!ht]\footnotesize
\caption{Temperature coefficients of forward voltage, luminous flux, radiometric power, peak wavelength and \fwhm{} for the measured Luxeon diodes.\label{tab:diodeParametersVsTemp}}
\centering
\begin{tabular}{lccccc}
& $\ud{}V_{f}/\ud{}T$ & $\ud{}F/\ud{}T$ & $\ud{}P_{rad}/\ud{}T$ & $\ud{}λ_{peak}/\ud{}T$ & $\ud{}\textsc{fwhm}/\ud{}T$ \\
\textsc{diode}  & [mV/K] & [lm/K] & [mW/K] & [nm/K] & [nm/K] \\
\hline
Luxeon \textsc{k2} red		& −2.29 & −0.665 & −2.57 & 0.154 & 0.050 \\
Luxeon \textsc{k2} green	& −2.68 & −0.122 & −0.35 & 0.052 & 0.059 \\
Luxeon \textsc{k2} blue		& −4.71 & −0.033 & −0.86 & 0.028 & 0.055 \\
Luxeon \textsc{k2} white	& −3.00 & −0.266 & −0.80 & 0.034 & 0.111 \\
\hline
Luxeon Rebel red	& −1.49 & −0.516 & −2.76 & 0.166 & 0.054 \\
Luxeon Rebel green	& −2.90 & −0.113 & −0.32 & 0.050 & 0.055 \\
Luxeon Rebel blue	& −3.44 & −0.024 & −0.79 & 0.032 & 0.050 \\
Luxeon Rebel white	& −4.12 & −0.243 & −0.68 & 0.047 & 0.018 \\
\hline
\end{tabular}
\end{table}
\clearpage % *** manual new page 48/49
% Junction temperature measurements
\noindent{}\marginpar{junction temperature measurements}Many techniques may be used to obtain the junction temperature of an \led{}. The most common methods utilise the temperature dependence of the forward voltage $V_{f}$, of the high-energy slope of the spectrum and of the peak wavelength. The high-energy slope method requires knowledge of the emission spectrum of the diode (figure~\ref{fig:TheoreticalSpectrum}).

% Forward voltage temperature dependency
\subsection{Forward voltage}
The basic equation describing the temperature dependence of the forward voltage~\cite{ledSchubert,Xi2004,Xi2005} consists of three summands.
\begin{equation} \label{eq:ForwardVoltageVsTemp}
	\frac{\ud{}V_{f}}{\ud{}T} = \frac{eV_{f}-E_{g}}{eT} + \frac{1}{e}\frac{\ud{}E_{g}}{\ud{}T} - \frac{3k}{e}
\end{equation}
The first summand describes the influence of the intrinsic carrier concentration, the second the temperature dependence of the bandgap, and the last the temperature dependence of the effective densities of states $N_{C}$ and $N_{V}$ in the conduction and valence bands and of the acceptor and donor concentrations $N_{A}$ and $N_{D}$, which enter through the relation~\cite{MicroelectronicsWhitaker}
\begin{equation}
	eV_{f}-E_{g} \approx kT\ln{\frac{N_{D}N_{A}}{N_{C}N_{V}}}.
\end{equation}

\noindent{}The bandgap energy can be approximated in a certain temperature region using the Varshni formula~\cite{Varshni1967,Vainshtein1999}. This relation describes the temperature dependence of the bandgap using two empirical parameters \textit{α} and \textit{β} and the width of the bandgap at absolute zero temperature $E_{0}$ (table~\ref{tab:varnshi_parameters})
\begin{equation} \label{eq:BandgapVarshni}
	E_{g} = E_{0} - \frac{αT^{2}}{β+T}.
\end{equation}

\input{tables/Varshni_parameters.tex}

\noindent{}To obtain the bandgap energy of a ternary semiconductor (e.g.~$\mathrm{In}_{x}\mathrm{Ga}_{1-x}\mathrm{N}$), and hence its temperature dependence, a quadratic interpolation between the binary constituents is used
\begin{equation} \label{eq:tetraryBandgapEnergy}
	E_{g}(\mathrm{A}_{1-x}\mathrm{B}_{x}) = (1-x)E_{g}(\mathrm{A}) + xE_{g}(\mathrm{B}) - x(1-x)C.
\end{equation}
Similarly, the bandgap energy of quaternary $(\mathrm{Al}_{x}\mathrm{Ga}_{1-x})_{0.52}\mathrm{In}_{0.48}\mathrm{P}$ barrier layers can be expressed as
\begin{equation} \label{eq:quaternaryBangapEnergy}
	E_{g}(x) = xE_{g}(\mathrm{Al}_{0.52}\mathrm{In}_{0.48}\mathrm{P}) + (1-x)E_{g}(\mathrm{Ga}_{0.52}\mathrm{In}_{0.48}\mathrm{P}) - x(1-x)C,
\end{equation}
where $C=0.18\:\mathrm{eV}$ is the bowing parameter.
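For orientation, the sketch below evaluates the Varshni relation~(\ref{eq:BandgapVarshni}) and the bowing interpolation~(\ref{eq:tetraryBandgapEnergy}) for $\mathrm{In}_{x}\mathrm{Ga}_{1-x}\mathrm{N}$. It is an illustration only: the $E_{0}$, \textit{α}, \textit{β} and $C$ values are typical literature-style placeholders and are not taken from table~\ref{tab:varnshi_parameters}.
\begin{verbatim}
# Sketch: bandgap vs. temperature (Varshni) and In(x)Ga(1-x)N composition bowing.
# Parameter values below are illustrative placeholders only.
def varshni(e0, alpha, beta, t):
    """E_g(T) = E_0 - alpha*T^2/(beta + T), all energies in eV, T in K."""
    return e0 - alpha * t**2 / (beta + t)

def ingan_bandgap(x, t, c=1.4):
    """Quadratic (bowing) interpolation between GaN and InN at temperature t."""
    eg_gan = varshni(3.51, 0.909e-3, 830.0, t)   # placeholder GaN parameters
    eg_inn = varshni(0.78, 0.245e-3, 624.0, t)   # placeholder InN parameters
    return (1 - x) * eg_gan + x * eg_inn - x * (1 - x) * c

print(ingan_bandgap(x=0.2, t=300.0))  # example composition at room temperature
\end{verbatim}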
\begin{figure}[t]% IV characteristics
	\centering
\includegraphics{graphics/LED/IV/TemperatureDependency/iv_rgbw.pdf}
\caption{Current-voltage characteristics of red, green, blue and white diodes under varying heatsink temperature from 20°\textsc{c} to 50°\textsc{c}, every ten degrees.\label{fig:IV_temperature_dependency}}
\end{figure}

\noindent{}Because of the temperature dependence of the forward voltage, \led{}s cannot easily be controlled with a voltage source. The self-heating effect causes a drop in forward voltage, moving the operating point to a higher current. The increased current yields a further increase of temperature. This positive feedback may lead to thermal runaway and the destruction of the diode.

% Thermal model
\subsection{Thermal model}
In order to allow the junction temperature of an \led{} to be estimated, manufacturers provide a junction-to-case thermal resistance. Based on its value and on the power dissipated in the device as heat, one may calculate the temperature difference between the junction and the diode's case. Assuming the thermal resistance of the case-to-heatsink interface to be negligible or known, the junction temperature can be calculated with reference to the measured heatsink temperature. However, the calculated temperature will only be valid in thermal equilibrium.

\marginpar{thermal impedance}Poppe~et~al.~\cite{Poppe2009,Poppe2010a,Poppe2010b,Poppe2010c} analysed the transient response of a diode during a cool-down process. By analysing this response, the thermal impedance of the \led{} thermal structure can be revealed.

\begin{figure}[!ht]% Thermal impedance curves
\centering
\includegraphics{graphics/LED/Thermal/luminus_rebel_thermal_impedance.pdf}
\caption{Thermal impedance curves for Luminus Rebel InGaN diode, after Parry and Rose. \label{fig:ThermalImpedanceCurves}}
\end{figure}

\noindent{}Each element of the \led{} structure contributes a certain thermal resistance and capacitance to the heat flow path and corresponds to a time constant in the thermal system response. The overall thermal system response is the sum of the individual element responses~\cite{Szekely1988}.
\begin{figure}[!ht]% RC network thermal model
\centering
\includegraphics{graphics/LED/Thermal/RC_Network.pdf}
\caption{Cauer (top) and Foster (bottom) \abbr{rc} network models of the heat flow path. Current sources represent heat generation; the voltage source represents the reference temperature, e.g. ambient or heatsink temperature. \label{fig:RC_network_thermal_model}}
\end{figure}
Both thermal models presented in figure~\ref{fig:RC_network_thermal_model} are equivalent, but only the Cauer structure represents the physical heat flow. By reducing the number of elements in the \textsc{rc} ladder, the physical structure of the diode can be modelled using the thermal capacitances of the \led{} die, die attach and heat slug and the thermal resistances of the interfaces between them. High power \led{}s are mounted on heatsinks using a thermal interface material which, due to its thin layer, can be modelled using only a thermal resistance~\cite{Hui2009}. The heatsink itself typically has a large thermal capacitance and resistance compared to the \led{} structure.

\begin{figure}[!ht]% Luminus Rebel thermal model
\centering
\includegraphics{graphics/LED/Thermal/luminus_rebel_thermal_model.pdf}
\caption{Simplified thermal model of Luminus Rebel diodes. Component values can be derived from thermal impedance curves (fig.~\ref{fig:ThermalImpedanceCurves}). \label{fig:LuminusRebelThermalModel}}
\end{figure}
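To make the role of such a network concrete, the sketch below computes the junction temperature rise of a Foster \abbr{rc} ladder in response to a constant power step, as the sum of first-order exponential responses. The element values are illustrative placeholders and are not derived from the impedance curves of figure~\ref{fig:ThermalImpedanceCurves}.
\begin{verbatim}
# Sketch: junction temperature rise of a Foster RC thermal network under a
# constant power step. Element values are illustrative placeholders.
import numpy as np

r_th = [2.0, 5.0, 8.0]        # thermal resistances along the path [K/W]
c_th = [1e-3, 5e-2, 2.0]      # thermal capacitances [J/K]
p_heat = 0.7                  # dissipated power [W]
t_heatsink = 35.0             # reference (heatsink) temperature [degC]

t = np.linspace(0.0, 60.0, 1000)  # time [s]
# Step response of a Foster ladder: sum of first-order exponentials.
dT = sum(r * p_heat * (1.0 - np.exp(-t / (r * c))) for r, c in zip(r_th, c_th))
t_junction = t_heatsink + dT
print(f"steady-state junction temperature ~ {t_junction[-1]:.1f} degC")
\end{verbatim}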
\noindent{}The value of the junction-to-case thermal resistance has been found to depend nonlinearly on the ambient temperature and on the power dissipated in the junction area~\cite{Jayasinghe2007}. The magnitude and direction of the thermal resistance change depend on the structure of the diode~\cite{ShengLiang2008}. This change is attributed to the changing properties of the thermal interface material and the varying thermal properties of the semiconductors~\cite{Jayasinghe2007}.

\marginpar{effect of thermal resistance on luminous flux}The heat flow path is partially determined by the internal structure of the diode and partially by the accompanying heatsink. In order to test the influence of the heatsink properties on the static properties of the diode, a simulation was set up.
\begin{figure}[!ht]%
\centering
\includegraphics{graphics/LED/Thermal/Rth_variation_model.pdf}
\caption{Simulation model for quantifying the effect of heatsink-to-ambient thermal resistance on maximum luminous flux in steady-state conditions. \label{fig:Rth(hs-a)ChangeModel}}
\end{figure}
The static thermal model in the simulation shown in figure~\ref{fig:Rth(hs-a)ChangeModel} contains only the thermal resistances shown in figure~\ref{fig:LuminusRebelThermalModel}. The capacitances are omitted as they contribute only to the dynamic properties of the thermal system. The junction temperature and driving current are used to determine the forward voltage of the diode. Next, the current-voltage information is converted into flux and dissipated power using the \led{} model described in detail in section~\ref{sec:iv_model}. The flux is plotted with respect to the driving current for different values of heatsink-to-ambient thermal resistance.
\begin{figure}[!ht]%
\centering
\includegraphics{graphics/LED/Thermal/effect_of_varying_Rth.pdf}
\caption{Effect of changing heatsink-to-ambient thermal resistance on luminous flux in steady-state conditions for InGaN (left) and AlGaInP (right) diode. \label{fig:Rth(hs-a)ChangeEffect}}
\end{figure}

\noindent{}As seen in figure~\ref{fig:Rth(hs-a)ChangeEffect}, the InGaN diodes are not as sensitive to the thermal resistance as the AlGaInP diodes. The heatsink thermal resistance and the \textsc{tim} thermal resistance therefore limit the amount of luminous flux obtainable in steady-state conditions~\cite{Hui2009}. In the case of AlGaInP diodes, too high a thermal resistance causes $\ud{}F/\ud{}I$ to become negative. This creates a dangerous situation in which increasing the driving current lowers the luminous flux. In a colour control system that regulates the diode's flux this phenomenon can lead to thermal runaway and destruction of the luminaire. The simulation results show the effect of changing the thermal resistance of the heat flow path. This resistance can change as a result of curing or ageing of \textsc{tim} materials~\cite{Prasher2006}, airborne contaminants that cover the surface of the heatsink~\cite{Cirolia2001}, a change of fan speed in forced cooling, or a change in the efficiency of convection cooling (which increases at higher heatsink temperatures). All of the aforementioned processes influence the junction temperature of the diodes, therefore a proper safety margin should be incorporated when designing the cooling system.
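A minimal version of this electro-thermal calculation is sketched below. It is not the code used to produce figure~\ref{fig:Rth(hs-a)ChangeEffect}: the voltage, efficiency and flux expressions are linearised placeholders, and only the fixed-point structure (the junction temperature sets the operating point, the dissipated power sets the junction temperature) follows the description above.
\begin{verbatim}
# Sketch of the steady-state electro-thermal loop: iterate the junction
# temperature until it is consistent with the heat dissipated at that
# temperature. All diode characteristics below are linearised placeholders.

def forward_voltage(i, t_j):
    return 3.2 + 1.2 * i - 0.003 * (t_j - 25.0)          # [V]

def wall_plug_efficiency(t_j):
    return 0.30 * (1.0 - 0.002 * (t_j - 25.0))           # optical / electrical power

def luminous_flux(i, t_j):
    return 160.0 * i * (1.0 - 0.0025 * (t_j - 25.0))     # [lm]

def steady_state(i, t_amb, r_th_total, iters=200):
    t_j = t_amb
    for _ in range(iters):                               # simple fixed-point iteration
        p_el = i * forward_voltage(i, t_j)
        p_heat = p_el * (1.0 - wall_plug_efficiency(t_j))
        t_j = t_amb + r_th_total * p_heat
    return t_j, luminous_flux(i, t_j)

for r_hs_amb in (5.0, 15.0, 30.0):                       # heatsink-to-ambient [K/W]
    t_j, flux = steady_state(i=0.7, t_amb=25.0, r_th_total=10.0 + r_hs_amb)
    print(f"R_th(hs-a) = {r_hs_amb:4.1f} K/W: T_j = {t_j:5.1f} degC, "
          f"flux = {flux:5.1f} lm")
\end{verbatim}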
%% Efficiency
\section[Efficiency]{efficiency}
An ideal source of light would produce one photon for every electron injected. Because of non-radiative recombination, not every electron-hole pair produces a quantum of light. Therefore the \emph{internal quantum efficiency} is defined as the ratio of the number of photons emitted from the active region of an \led{} per unit time to the number of electrons injected into the active region per unit time
\begin{equation} \label{eq:InternalQuantumEfficiency}
	η_{int} = \frac{n_{\textit{created photons}}}{n_{\textit{electrons}}} = \frac{P_{int}/hν}{I/e}
\end{equation}
where $P_{int}$ is the optical power emitted from the active region and $I$ is the injection current.

The photons created in the active region have to be extracted into free space. This can be very challenging as the photons are created in \textsc{iii}-\textsc{v} crystals with high refractive indices (n~$\approx{}3.5$ for AlGaInP and n~$\approx{}2.4$ for InGaN materials). Light-extracting features, such as texturing of the crystal surface, have to be incorporated into the \led{} design. At the same time, the light radiation characteristic has to meet the demands of the market. Therefore the development of \led{} packages with a high light extraction efficiency $C_{ext}$ is still an ongoing process. Currently (January 2009) $C_{ext}$ reaches up to 60\% in AlGaInP and 80\% in InGaN high power devices.

The \emph{external quantum efficiency} is defined as the ratio of the number of photons emitted into free space per unit time to the number of electrons injected into the active region per unit time
\begin{equation} \label{eq:ExternalQuantumEfficiency}
	η_{ext} = \frac{n_{\textit{extracted photons}}}{n_{\textit{electrons}}} = \frac{P/hν}{I/e} = C_{ext} \cdot{} η_{int}
\end{equation}

\noindent{}The \emph{power efficiency}, often referred to as the \emph{wall plug efficiency}, is defined as
\begin{equation} \label{eq:PowerEfficiency}
	η_{power} = \frac{P_{optical}}{P_{electrical}}
\end{equation}
where $P_{electrical}$ is the electric power supplied to the diode and $P_{optical}$ is the optical power of the light extracted from the structure.

The \emph{luminous efficacy} is a measure of how well the electromagnetic radiation is converted into luminous flux.
\begin{equation} \label{eq:LuminousEfficacy}
	η_{lum} = \frac{F_{lum}}{P_{optical}} =  \frac{683\int{}\!V(λ)P(λ)\ud{}λ}{\int{}\!P(λ)\ud{}λ}
\end{equation}
The highest possible luminous efficacy is defined by the eye sensitivity curve [fig.~\ref{eye_sensitivity}] and is equal to 683\thinspace{}lm/\textsc{w} for a monochromatic source emitting at 555\thinspace{}nm.

In lighting applications it is necessary to know how well the input electrical power is converted into visible light. This is described by the \emph{luminous efficacy of a source}, or \emph{wall plug efficacy}.
\begin{equation} \label{eq:LuminousEfficacyWallPlug}
	η_{lum,wp} = \frac{F_{lum}}{P_{electrical}} = η_{lum} \cdot{} η_{power} =  \frac{683\int{}\!V(λ)P(λ)\ud{}λ}{IV}
\end{equation}
The highest practically achievable wall-plug efficacies for various light sources have been collected in table~\ref{tab:light_sources'_efficacies}.

\input{tables/efficacies.tex}

When carriers are trapped inside a quantum well, a part of their energy has to be dissipated by phonon emission in order to match the carriers' energy to one of the available states inside the well.
\clearpage % *** manual new page 54/55
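As a brief numerical illustration of equation~\ref{eq:LuminousEfficacy}, the sketch below integrates a spectrum against the eye sensitivity curve. Both curves here are analytic placeholders; a real calculation would substitute the measured $P(λ)$ and the tabulated \cie{} $V(λ)$ data.
\begin{verbatim}
# Sketch: luminous efficacy of radiation, 683 * int(V*P) / int(P).
# V(lambda) and P(lambda) below are rough analytic placeholders, not real data.
import numpy as np

lam = np.arange(380.0, 781.0, 1.0)                       # wavelength grid [nm]
v_lam = np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)       # stand-in for CIE V(lambda)
p_lam = np.exp(-0.5 * ((lam - 530.0) / 15.0) ** 2)       # placeholder green LED spectrum

# Uniform 1 nm grid, so the d-lambda factors cancel in the ratio.
efficacy = 683.0 * np.sum(v_lam * p_lam) / np.sum(p_lam)
print(f"luminous efficacy of radiation ~ {efficacy:.0f} lm/W")
\end{verbatim}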
\noindent{}\marginpar{phosphor-converted led{\scriptsize{}s}}A white \led{} can be produced by using phosphor down-conversion of blue or \textsc{uv} light (the pump), e.g. mixing blue InGaN light with yellow light from a YAG:Ce phosphor. The phosphor has to convert blue photons ($\approx{}460\:\textrm{nm}$) into yellow photons ($\approx{}570\:\textrm{nm}$), so their energy has to drop. It is impossible to perform the down-conversion without losing a part of the energy~\cite{Chhajed2005b}. The Stokes energy loss is given by
\begin{equation} \label{eq:StokesShift}
	ΔE=hν_{1}-hν_{2}=\frac{hc}{λ_{1}} - \frac{hc}{λ_{2}}
\end{equation}
thus the efficiency of the down-conversion is
\begin{equation} \label{eq:DownConvertionEfficiency}
	η=1-\frac{hν_{1}-hν_{2}}{hν_{1}} = \frac{λ_{1}}{λ_{2}}.
\end{equation}

\section[Lifetime]{lifetime}
In contrast to traditional incandescent and fluorescent light sources, \led{}s do not tend to fail catastrophically. Instead, the output flux decreases with time and operating junction temperature. The degradation mechanisms have been the subject of many lifetime studies. The cause and the magnitude of the lumen change differ depending on the structure and materials of the diode and its driving conditions: the shape of the driving current (pulsed or direct) and the temperature. Solder joint failures and changes in the thermal properties of the thermal interface materials used for mounting the diodes can also influence the diodes' properties.

\marginpar{nitride diodes}For white \led{}s, a colour shift towards lower colour temperatures can be seen due to the yellowing of the die~\cite{Yang2010}. Studies show that the mechanisms of \led{} structure and package deterioration, such as the generation of threading dislocations and the yellowing and cracking of encapsulating lenses, are connected to the structure temperature~\cite{Narendran2004,Narendran2005,Yang2010}. The increase of the leakage current results in decreasing photon generation, and the deterioration of the \led{} encapsulant lens lowers the extraction efficiency of the structure. In extreme cases, the defects may short the device, rendering it unable to generate light~\cite{ZanoniPhDThesis}.

\marginpar{phosphide diodes}According to a 60\hsp000 hour study presented by Grillot et~al., the main causes of lumen degradation in AlGaInP diodes are the increase in defect concentration and in leakage current~\cite{Grillot2006}. The overall degradation, defined as the relative change of the luminous flux with respect to the flux at the beginning of the test, was found to behave according to the equation
\begin{equation}
	D = D_{1}+D_{2}\:j+(D_{3}+D_{4}\:j)\textrm{ln}(t)
\end{equation}
where $D_{1-4}$ are diode dependent coefficients, $j$ is the current concentration and $t$ is the time.
\clearpage % *** manual new page 55/56
\noindent{}\marginpar{industry standards}As \led{}s do not tend to fail catastrophically, but rather their light output slowly decreases over time, the \led{} lifetime is defined as the time to some predefined intensity drop. Manufacturers use this measure together with the driving current and junction temperature to quickly assess the lifetime of the devices they are using (fig.~\ref{fig:RebelReliabilityData}).

\marginpar{\textsc{b} and \textsc{l} lifetimes }
The \textsc{assist} alliance recommends using two metrics, \textsc{b50} and \textsc{l70}, to approximate the useful lifetime~\cite{ledLifeReport}.
\textsc{b50} corresponds to the time at which 50\thinspace{}\% of the device population fails. \textsc{l70} denotes the time in which \led{}s lose 30\thinspace{}\% of the initial luminous flux. Therefore the combined \textsc{b50/l70} metric predicts when more than 50\thinspace{}\% of the diodes will drop below 70\thinspace{}\% of the initial flux~\cite{ReliabilityManual}. Depending on the application other sets can be used, e.g.~\textsc{b10/l70}.

\begin{figure}[!ht]% Luxeon Rebel B50/L70
	\centering
	\includegraphics{graphics/LED/Lifetime/B50_L70_luxeon_rebel.pdf}
	\caption{\textsc{b50/l70} lifetime predictions for InGaN (left) and AlGaInP (right) Luxeon Rebel diodes~\cite{RebelReliabilityData}.}
	\label{fig:RebelReliabilityData}
\end{figure}

%% LED dimming characteristics
\section[Dimming]{dimming}
The intensity of an \led{} can be controlled in many different ways. Because of the small dynamic resistance and the temperature dependence of the forward voltage, \led{}s are typically current controlled. All dimming techniques vary the mean current supplied to the diode, changing the diode's intensity. Different current waveforms with the same average value can have a different effect on the \led{} output light. The peak wavelength can change, creating a colour shift in the mixed light. The flux, and therefore the efficiency, can change depending on the dimming method. Gu et~al.~\cite{Gu2006} report changes of up to 100\% in efficiency depending on the dimming method, diode colour and operating point.

The following subsections describe four different control schemes and analyse their effect on light-emitting diodes: two basic control schemes in common use (pulse width modulation and amplitude modulation), a hybrid \am{}/\pwm{} dimming technique proposed by the author, and a pulse code modulation technique.

%\marginpar{requirements}
%\marginpar{resolution}

% PWM
\subsection{Pulse width modulation}Because \led{}s are semiconductor-based devices they can be turned on and off very quickly. The minimum frequency at which pulsed light sources are perceived as constant is predicted by the Ferry-Porter law
\begin{equation}
	\textit{critical flicker frequency} = k(\mathrm{log}\,L - \mathrm{log}\,L_{0})
\end{equation}
where $L$ is the luminous intensity, $L_0$ is the threshold intensity and $k$ is a constant with a typical value of 12\thinspace{}Hz/decade. The \textsc{cff} is furthermore dependent on the size of the stimulus and its position relative to the eye~\cite{Tyler1990}. The minimum modulation frequency used in the industry is 60\thinspace{}Hz, but for high intensity, fast moving light sources this frequency should be increased to the 300–1000\thinspace{}Hz range to avoid flicker~\cite{Ashdown2006}.

Brightness control is achieved by modulating the current pulse width \includegraphics{graphics/System/Dimming/PWM/sparklinePWM.pdf}. The duty cycle $d$ is the ratio of the on-pulse time to the period of the modulation frequency. The average luminous intensity is the product of the duty cycle and the constant luminous intensity during the on pulse.
\begin{equation}
	I_{f,avg}=d \cdot{} I_{f,max}
\end{equation}
The light output can be dimmed down to 0\% and linear control can be easily obtained.
However, peak emission shifts and bandwidth narrowing occur during \pwm{} dimming and have to be taken into account when designing a control strategy for light mixing~\cite{Dyble2005, Manninen2007}.

\begin{figure}[!ht]% PWM dimming linearity
	\centering
	\includegraphics{graphics/System/Dimming/PWM/linearityRGBW.pdf}
	\caption{Measured luminous flux for \pwm{} dimming of red, green, blue and white (left to right) diodes as a function of duty cycle. Data points correspond to the duty cycle equal to: 0, 0.2, 0.3, 0.6, 0.8 and 1.0. \pwm{} dimming exhibits almost ideal linearity.}
	\label{fig:DimmingPWMLinearity}
\end{figure}

\noindent{}Under decreasing duty cycle the peak wavelength shifts towards shorter wavelengths for AlGaInP~\cite{Gu2006, Manninen2007} and InGaN~\cite{Gu2006} diodes. The shift is explained by the cooling of the active area of the chip with decreased duty cycle and, as a result, the widening of the bandgap~\cite{ledSchubert}.

\begin{figure}[!ht]% PWM dimming colour shifts
	\centering
	\includegraphics{graphics/System/Dimming/PWM/colourPointShiftRGBW.pdf}
	\caption{Measured colour shifts for the \pwm{} dimming of red, green, blue and white (left to right) diodes. The arrow shows the direction of decreasing flux. Data points correspond to the duty cycle equal to: 1.0 (marked with \cie{}~1976 colour coordinates), 0.8, 0.6, 0.4 and 0.2.}
	\label{fig:DimmingPWMColourShifts}
\end{figure}

\noindent{}Because of the small colour shifts and very good linearity, this dimming method is the most widely used in industrial applications. The \led{} driver operates in a pulsed mode, either at the diode's maximal forward current or at no current, and can therefore easily be optimised for a single operating point.

% AM
\subsection{Amplitude modulation}Amplitude modulation, sometimes referred to as continuous current reduction (\textsc{ccr}), uses a variable \dc{} current to control the intensity of the diode. Because the efficiency of \led{}s increases with decreasing current concentration in the active area, the \am{} dimming scheme is inherently nonlinear. The increase of luminous efficiency over pulse width modulation can be as high as 100\% at low intensities for green InGaN diodes~\cite{Gu2006}.

\begin{figure}[!ht]% AM dimming linearity
	\centering
	\includegraphics{graphics/System/Dimming/AM/linearityRGBW.pdf}
	\caption{Measured luminous flux for \am{} dimming of red, green, blue and white (left to right) diodes as a function of forward current. Data points correspond to the following currents: 0, 0.14, 0.28, 0.42, 0.56 and 0.7\thinspace{}\textsc{a}.}
	\label{fig:DimmingAMLinearity}
\end{figure}

\noindent{}The dimming range for amplitude modulation is typically limited to the 10–100\% range. This is because at very low current levels the \textsc{snr} is low, and therefore the accuracy of the current control is limited.

Very large colour shifts occur when using \am{} in both pure colour diodes and phosphor converted white diodes~\cite{Dyble2005}. Typically, AlGaInP diodes experience a peak wavelength shift towards shorter wavelengths and InGaN \led{}s towards longer wavelengths.

\begin{figure}[!ht]% AM dimming colour shifts
	\centering
	\includegraphics{graphics/System/Dimming/AM/colourPointShiftRGBW.pdf}
	\caption{Measured colour shifts for \am{} dimming of red, green, blue and white (left to right) diodes. The arrow shows the direction of decreasing flux. Data points correspond to the following currents: 0.7 (marked with \cie{}~1976 colour coordinates), 0.56, 0.42, 0.28 and 0.14\thinspace{}\textsc{a}. Note the 10× increase of scale compared to fig.~\ref{fig:DimmingPWMColourShifts}.}
	\label{fig:DimmingAMColourShifts}
\end{figure}

\noindent{}Due to the large spectral shifts and the nonlinear relationship between output flux and driving current, the use of the \am{} dimming technique in colour-critical applications is limited. If these problems were solved, however, amplitude modulation would provide a much more efficient dimming scheme than pulse width modulation.

The \led{} driver has to provide power at varying voltage and current, therefore it cannot be optimised for a single operating point. On the other hand, the luminous efficacy of the diode increases significantly with decreased current, so the efficacy of the driver-\led{} system can be much higher than in \pwm{}-dimmed luminaires.

% PWM/AM
\subsection{Hybrid modulation}\label{ssec:hybrid_pwm_am}Tse et~al.~\cite{Tse2009a,Tse2009b,Tse2009c,Tse2009d} proposed a general driving technique that lies between \pwm{} and \am{}. Their technique is based on the possibility of controlling the high and low current levels of a waveform that is simultaneously dimmed with \pwm{}. Changing the current levels influences the efficiency of the \led{}, while the \pwm{} signal controls the brightness without large colour shifts. A variation of this technique was studied in which the low current level is set to zero and the intensity of the diode is controlled by varying the peak current and the duty cycle \includegraphics{graphics/System/Dimming/HybridPWMAM/sparklineHybridPWMAM.pdf}. The \led{} driver is off during the $1-d$ fraction of the dimming period, therefore the losses in the converter are reduced. The \led{} is controlled by varying both the duty cycle and the peak current.
\begin{equation}
	I_{f,avg} = d \cdot{} I_{f,peak}
\end{equation}
It is always possible to fall back to either \pwm{} or \am{} control by setting either the current or the duty cycle, respectively, to its maximum value.

\begin{figure}[!ht]% PWM/AM dimming XYZ
	\centering
	\includegraphics{graphics/System/Dimming/HybridPWMAM/XYZ.pdf}
	\caption{Measured, normalised tristimulus values for \pwm{}/\am{} hybrid dimming as a function of relative peak current and duty cycle. Data gathered at constant heatsink temperature.}
	\label{fig:DimmingPWMAM_XYZ}
\end{figure}

Hybrid \pwm{}/\am{} modulation creates an additional degree of freedom in diode control. This gives an opportunity to optimise the colorimetric and radiometric properties of an \led{}.

\noindent{}\marginpar{stabilising white led colour point}Noting that for a white phosphor-converted diode the \pwm{} and \am{} dimming methods create opposite colour shifts (figures \ref{fig:DimmingPWMColourShifts} and \ref{fig:DimmingAMColourShifts}), and that the direction of this shift is almost parallel to the shift caused by the temperature change (fig ref needed), one can use hybrid \am{}/\pwm{} dimming to stabilise both the colour point of a diode and its luminous flux over a range of temperatures.

\begin{figure}[!ht]% PWM/AM dimming stabilizing white diode colour point
	\centering
	\includegraphics{graphics/LED/Dimming/HybridPWMAM/HybridDimmingWhiteDiodeTemperatureShift.pdf}
	\caption{Colour shifts of a white \textsc{pc} \led{} caused by the \pwm{} and \am{} dimming mechanisms and the direction of the shift caused by increasing heatsink temperature (left). Relative driving current and duty cycle for obtaining a stable colour point.}
	\label{fig:DimmingPWMAMStabilizingWhiteColourPoint}
\end{figure}

\noindent{}Figure \ref{fig:DimmingPWMAMStabilizingWhiteColourPoint} shows the directions of the colour shift caused by the two dimming schemes and the shift caused by the changing heatsink temperature. The shift caused by the temperature can be decomposed into two vectors: parallel and perpendicular to the shifts caused by dimming. The perpendicular part cannot be controlled using the hybrid dimming technique, but it is very small in magnitude.

An experiment was set up to verify this hypothesis. A white, phosphor converted diode was placed in the test adapter. The heatsink temperature was set to 20°\textsc{c} and the diode was energised with the nominal current. The colour coordinates and luminous flux were recorded when the diode reached thermal steady state. After increasing the heatsink temperature to 30°\textsc{c}, the duty cycle and driving current were manually adjusted to bring the colour point as close as possible to the starting colour point without changing the flux value. This was only possible if the current was increased above the nominal value. Although the average current did not exceed the nominal driving current, the diode's lifetime may be affected, which may be unacceptable in some applications. In order not to exceed the maximum forward current, the experiment was repeated with a lower initial driving current equal to around 60\thinspace{}\% of the nominal current (which corresponds to about 70\thinspace{}\% of the nominal flux value). The flux was kept constant in the heatsink temperature range of 20–48°\textsc{c} while maintaining a very low colour shift without overdriving the diode.

Decreasing the initial current (at $d=1$) or allowing overdriving increases the temperature range over which the colour point is kept stable.

\begin{table}[!ht]\footnotesize
	\caption{Duty cycle and peak current values needed to obtain a stable colour point as the heatsink temperature changes. Colour coordinates and the distance from the initial colour point are given in the \cie{}~1976 colour space.}
	\label{tab:experimentDataStableWhiteColourPointUnderHybridDimming}
	\centering
	\begin{tabular}{ccccccc}
		$T_{hs} [^{\circ}\textrm{C}]$ & \textit{d} [p.u.] & $I_{peak}$ [A] & \textit{F} [lm] & $u'$ & $v'$ & $Δu'v'$ \\
		\hline
		20 & 1.000 & 0.411 & 53.3 & 0.2085 & 0.4981 & 0.00000 \\
		30 & 0.905 & 0.485 & 53.3 & 0.2085 & 0.4981 & 0.00000 \\
		40 & 0.796 & 0.597 & 53.3 & 0.2084 & 0.4982 & 0.00011 \\
		50 & 0.716 & 0.712 & 53.3 & 0.2083 & 0.4982 & 0.00021 \\
		\hline
	\end{tabular}
\end{table}

\noindent{}The measured data points, with the colour distance from the initial colour point, are presented in table~\ref{tab:experimentDataStableWhiteColourPointUnderHybridDimming}. Plotting the required current and duty cycle values (fig.~\ref{fig:DimmingPWMAMStabilizingWhiteColourPoint}) reveals that the relationship between the controlled values and the heatsink temperature is almost linear and can be expressed using the following formulas:
\begin{align}
	I_{f,peak} &= 0.0001 \cdot{} T_{hs}^{2} + 0.0030 \cdot{} T_{hs} + 0.3088 \nonumber \\
	d &= 0.0004 \cdot{} T_{hs}^{2} - 0.0122 \cdot{} T_{hs} + 1.2309
\end{align}

\noindent{}\marginpar{stabilising peak wavelength position}Another application of hybrid dimming is stabilising the peak wavelength position during dimming. Colour sensors used in luminaires for colour feedback are very sensitive to changes of the spectral shape~\cite{Ashdown2007}. This sensitivity is caused by the very narrow \led{} spectra. The spectra can shift towards shorter or longer wavelengths during dimming and as a result of heatsink temperature changes. If one of these colour shift mechanisms could be eliminated, the accuracy of the colour sensors could, in theory, be improved.

\begin{figure}[!ht]% PWM/AM dimming peak wavelength shift
	\centering
	\includegraphics{graphics/LED/Dimming/HybridPWMAM/HybridDimmingPeakWavelengthShift.pdf}
	\caption{Peak wavelength shift with respect to the nominal position at 20°\textsc{c} for green (left) and blue (right) diodes under different dimming techniques.}
	\label{fig:DimmingPWMAMPeakWavelengthShift}
\end{figure}

InGaN diodes exhibit opposite peak wavelength shifts under \pwm{} and \am{}. A~hybrid modulation could therefore fix the peak wavelength position (fig.~\ref{fig:DimmingPWMAMPeakWavelengthShift}).

The initial peak wavelength position was recorded at nominal driving conditions at a 20°\textsc{c} heatsink temperature. The duty cycle was then lowered by 10\thinspace{}\% and, by adjusting the peak current, the peak wavelength was moved back to its initial value. The process was repeated for duty cycles down to 20\thinspace{}\%. Pairs of peak current and duty cycle that satisfied the constant peak wavelength condition were also recorded at 40 and 60°\textsc{c} to examine whether the relationship is temperature dependent.

\begin{figure}[!ht]% PWM/AM dimming Ipeak vs d
	\centering
	\includegraphics{graphics/LED/Dimming/HybridPWMAM/HybridDimmingPeakWavelengthShiftPeakCurrentVsDuty.pdf}
	\caption{Driving conditions for green (left) and blue (right) diodes necessary to obtain a stable peak wavelength position as shown in figure~\ref{fig:DimmingPWMAMPeakWavelengthShift}.}
	\label{fig:DimmingPWMAMCPeakCurrentVsDuty}
\end{figure}

\noindent{}The results (fig.~\ref{fig:DimmingPWMAMCPeakCurrentVsDuty}) show that the relationship between peak current and duty cycle needed to obtain a stable peak wavelength does not depend on the heatsink temperature and can be described by the following equations
\begin{align}
	I_{f,peak,green} &= 0.1739 \cdot{} d^{2} + 0.0079 \cdot{} d + 0.5109 \nonumber \\
	I_{f,peak,blue}  &= 0.0410 \cdot{} d^{2} + 0.0727 \cdot{} d + 0.5851
\end{align}

\begin{figure}[!ht]% PWM/AM dimming colour point shift
	\centering
	\includegraphics{graphics/LED/Dimming/HybridPWMAM/HybridDimmingColourPointShift.pdf}
	\caption{Colour point shift for green (left) and blue (right) diodes under different dimming techniques. Heatsink temperature fixed to 20°\textsc{c}.}
	\label{fig:DimmingPWMAMColourPointShift}
\end{figure}
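For a controller implementation these fits reduce to a simple evaluation. The sketch below merely tabulates the quadratic relations above for a few duty cycles; the coefficients are those measured for the tested green and blue diodes and should not be assumed to transfer to other devices.
\begin{verbatim}
# Sketch: peak current required to keep the peak wavelength constant while the
# duty cycle is reduced (quadratic fits measured for the tested diodes).
def i_peak_green(d):
    return 0.1739 * d**2 + 0.0079 * d + 0.5109   # [A]

def i_peak_blue(d):
    return 0.0410 * d**2 + 0.0727 * d + 0.5851   # [A]

for d in (1.0, 0.8, 0.6, 0.4, 0.2):
    print(f"d = {d:.1f}: I_peak green = {i_peak_green(d):.3f} A, "
          f"blue = {i_peak_blue(d):.3f} A")
\end{verbatim}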
\noindent{}For both green and blue InGaN diodes the colour point resulting from hybrid dimming moves towards the border of the \cie{}~1931 diagram, thus the colour of the light becomes purer. As seen in figure~\ref{fig:DimmingPWMAMColourPointShift}, the colour shift is confined to a one-step MacAdam ellipse.

% PCM
\subsection{Pulse code modulation}Pulse code modulation uses a current pattern that resembles the binary representation of the desired dimming value, e.g.~0.33 can be written as 01010100$_b$ and the corresponding current pattern looks like this \includegraphics{graphics/System/Dimming/PCM/sparklinePCM.pdf}. The dimming value is therefore encoded in the light pattern and can be decoded by other devices~\cite{Howell2002}. The first pulse in the pattern has a duration of $T$ and corresponds to the most significant bit of the binary dimming value. The next pulse is two times shorter and corresponds to the next bit. The last pulse has a width of $T/2^{n-1}$, where $n$ is the resolution of the dimming signal in bits. The repetition frequency of the whole pulse pattern has to be at least 300\thinspace{}Hz to avoid visible flicker, therefore the dimming resolution is limited by the timing properties of the microcontroller and the length of the shortest pulse. Ashdown~\cite{Ashdown2006} proposed the use of temporal dithering in order to increase the dimming resolution in \pcm{}.

\begin{figure}[!ht]% PCM spectrum
	\centering
	\includegraphics{graphics/System/Dimming/PCM/pcm_spectrum.pdf}
	\caption{Spectra of the \pwm{} (left) and \pcm{} (right) current waveforms. The base frequency, marked with a red bar, is 200\thinspace{}Hz in both cases. The \dc{} value of both signals is the same. The \pcm{} signal contains more high frequency components in its spectrum.}
	\label{fig:PCMSpectrum}
\end{figure}

\noindent{}The biggest advantage of \pcm{} over \pwm{} is that the driving pattern can be generated for many channels using only a single timer in a microcontroller, thus there is no need for hardware \pwm{} generators. The current spectrum contains more high frequency components than under a \pwm{} driving scheme (figure~\ref{fig:PCMSpectrum}), therefore this dimming method is mostly suitable for driving a large number of low power diodes, as in matrix displays.

% current-voltage model
\section[Current-voltage model]{current-voltage model}\label{sec:iv_model}
Recalling that the forward voltage of a diode depends on both the junction temperature and the forward current, and that the diode's colorimetric properties also depend on the junction temperature and forward current, one can construct a model that estimates the colorimetric properties based only on the instantaneous values of the diode's current and voltage.

The diode's tristimulus values and forward voltage have to be measured at various junction temperatures and, if \am{} or hybrid dimming is to be used, at various current levels in order to create the model.

\marginpar{data acquisition}A green InGaN test diode was placed on a thermally controlled heatsink. The temperature was set in increments of 10°\textsc{c} over the 5–55°\textsc{c} range. The current was controlled over 10–100\% of the nominal current, allowing the diode to reach thermal steady state after each change. At that point the current, voltage and tristimulus values were measured. A four-wire setup was used to gather the electrical parameters in order to avoid the voltage drop across terminals and cables.

\noindent{}\marginpar{surface fitting}The collected data was fitted to polynomials. Three separate functions were created, one for each of the tristimulus values, and one function was created for the radiometric power.
The higher the order of the polynomial used to represent the data, the more computational power is required to use the model. The model may also be overfitted, in which case the measurement error would become visible in the model. By a trial and error approach, the lowest order polynomial still representing the measured data accurately was found to be of the form
\begin{equation}
	a_{0} + a_{1}I + a_{2}I\,^{2} + a_{3}I\,^{3} + a_{4}V + a_{5}V\,^{2}
\end{equation}
%Model may be stored as a lookup table instead of mathematical function.

\begin{table}[!ht]\footnotesize
\caption{Model coefficients $a_{n}$ for the measured diode model. \label{tab:IV2XYZ_model_coefficients}}
\centering
\begin{tabular}{lccccccc}
&  $a_{0}$  &   $a_{1}$ &   $a_{2}$ &  $a_{3}$ &  $a_{4}$ &  $a_{5}$ & $R^{2}$\\
	\hline
\textsc{\textit{x}} &           -21.5417 & \phantom{0}49.8094 & \phantom{0}-38.8920 &           16.1661 & \phantom{00}17.3677 &           -3.4374 & 0.99997 \\
\textsc{\textit{y}} & \phantom{0}30.7711 &           150.6126 &           -119.9930 &           45.0165 & \phantom{0}-38.8748 &           11.2224 & 0.99991 \\
\textsc{\textit{z}} & \phantom{0}33.1338 & \phantom{0}14.2156 & \phantom{00}-6.8995 & \phantom{0}2.0560 & \phantom{0}-28.7647 & \phantom{0}6.2476 & 0.99986 \\
	$P_{rad}$  &           120.6807 &           189.0981 &           -145.9986 &           52.2570 &           -118.7007 &           28.9492 & 0.99989 \\
\hline
\end{tabular}
\end{table}

\begin{figure}[!ht]% IV2XYZ model
	\centering
	\includegraphics{graphics/LED/IV2XYZmodel/IV2XYZmodel.pdf}
	\caption{Model surfaces of the $X$, $Y$ and $Z$ tristimulus values and the optical power of the measured \led{}. The model is only valid in the vicinity of the measured data points.}
	\label{fig:IV2XYZmodel}
\end{figure}

\begin{figure}[!ht]% model verification. DC current
	\centering
	\includegraphics{graphics/LED/IV2XYZmodel/DCaccuracy.pdf}
	\caption{Measured ($\circ$) and modelled ($\cdot$) colour points on the \textsc{cie1976} chromaticity diagram for various combinations of driving current and heatsink temperature.}
	\label{fig:IV2XYZmodelVerificationDC}
\end{figure}

\marginpar{model verification at dc current}In order to test the model and the data fitting, six measurement points were taken at random heatsink temperatures and driving currents. The measured current, voltage and tristimulus values are compared to the estimated tristimulus values in table~\ref{tab:IV2XYZ_model_verification_DC}, and the corresponding colour points for the measured and modelled data are shown in figure~\ref{fig:IV2XYZmodelVerificationDC}. For all measured data points the colour distance between the measured and estimated colour points is much lower than the just noticeable colour difference. This implies that the model predicts the chromaticity and intensity of the \led{} with sufficient accuracy.

\begin{table}[!ht]\footnotesize
\caption{Measured current, voltage and tristimulus values compared to the corresponding modelled tristimulus values. The model error is visible as the colour distance between estimated and measured colour points.\label{tab:IV2XYZ_model_verification_DC}}
\centering
\begin{tabular}{rrrrrrrrrrrrr}
& & & \multicolumn{3}{c}{\textsc{measured}} & & \multicolumn{3}{c}{\textsc{modelled}} & & & \\
\multicolumn{1}{c}{\textit{v}} & \multicolumn{1}{c}{\textit{i}} & & \multicolumn{1}{c}{\textsc{\textit{x}}} & \multicolumn{1}{c}{\textsc{\textit{y}}} & \multicolumn{1}{c}{\textsc{\textit{z}}} & & \multicolumn{1}{c}{\textsc{\textit{x}}} & \multicolumn{1}{c}{\textsc{\textit{y}}} & \multicolumn{1}{c}{\textsc{\textit{z}}} & & \multicolumn{2}{c}{$ΔE_{ab}^{*}$} \\
\cline{1-2}\cline{4-6}\cline{8-10}\cline{12-13}
3.1388 & 0.5253 & & 16.917 & 71.915 & 10.266 & & 16.883 & 71.847 & 10.261 & & \multicolumn{2}{c}{0.0049} \\
2.9731 & 0.3523 & & 13.142 & 54.639 &  7.097 & & 13.138 & 54.528 &  7.080 & & \multicolumn{2}{c}{0.0661} \\
3.0566 & 0.4648 & & 15.820 & 65.558 &  8.933 & & 15.801 & 65.394 &  8.904 & & \multicolumn{2}{c}{0.0738} \\
3.1348 & 0.6151 & & 18.846 & 76.887 & 10.942 & & 18.809 & 76.911 & 10.970 & & \multicolumn{2}{c}{0.0596} \\
3.1809 & 0.6949 & & 20.216 & 82.294 & 12.013 & & 20.180 & 82.488 & 12.087 & & \multicolumn{2}{c}{0.3098} \\
2.8123 & 0.2355 & &  9.935 & 39.530 &  4.628 & &  9.899 & 39.604 &  4.643 & & \multicolumn{2}{c}{0.2676} \\
\hline
\end{tabular}
\end{table}

\begin{figure}[!ht]% model verification. IV PWM waveforms
	\centering
	\includegraphics{graphics/LED/IV2XYZmodel/PWMwaveforms.pdf}
	\caption{Measured current and voltage waveforms for the modelled diode at various duty cycles.}
	\label{fig:IV2XYZmodelVerificationPWMwaveforms}
\end{figure}

\begin{figure}[!ht]% model verification. PWM accuracy
	\centering
	\includegraphics{graphics/LED/IV2XYZmodel/PWMaccuracy.pdf}
	\caption{Difference between the measured colour shift for \pwm{} dimming (dashed line) and the colour shift obtained using the current-voltage model (solid line). The arrow shows the direction of decreasing duty cycle. The colour point at nominal driving current ($d=1$) is marked with $u'v'$ coordinates.}
	\label{fig:IV2XYZmodelVerificationPWM}
\end{figure}

\noindent{}\marginpar{model verification at pulsed current}To test the accuracy of the model, the test diode was driven with a 200\thinspace{}Hz pulsed current at various duty cycles (fig.~\ref{fig:IV2XYZmodelVerificationPWMwaveforms}) and constant heatsink temperature. Before each measurement the diode was allowed to reach thermal steady state. The optical parameters were integrated over multiple \pwm{} periods. The current and voltage waveforms were recorded at 250\thinspace{}kS/s with a 12~bit \textsc{adc}. A few periods were extracted from the waveforms and the instantaneous current and voltage values were converted to tristimulus values using the previously created model. The resulting data was integrated and divided by the measured time period to obtain the average tristimulus values and the resulting colour point.

\noindent{}The results, summarised in table~\ref{tab:IV2XYZ_model_verification}, show that the model accurately predicts the colour shift of the diode. It is therefore possible to use this model as colour feedback regardless of the dimming scheme used in the luminaire. The accuracy of this method decreases with decreasing duty cycle. This may be attributed to fast thermal and electrical transients that tend to dominate at very low duty cycle values. In order to measure these transients precisely, a much higher voltage and current sampling rate would be required. Also, a lower number of samples is then available to estimate the colorimetric properties, therefore measurement noise may influence the accuracy of the model.
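The waveform-to-colour conversion described above can be written compactly. The sketch below is a schematic reimplementation rather than the original analysis script: it uses the $X$, $Y$ and $Z$ coefficients of table~\ref{tab:IV2XYZ_model_coefficients}, a synthetic \pwm{} waveform as a placeholder for the recorded data, and sets the contribution of the off-state samples to zero, since the polynomial model is only valid near the measured operating points.
\begin{verbatim}
# Sketch: averaging the tristimulus values predicted by the current-voltage
# polynomial model over a sampled PWM period. The waveform is a synthetic
# placeholder; coefficients are those fitted for the green test diode.
import numpy as np

COEFFS = {  # a0, a1, a2, a3 multiply powers of I; a4, a5 multiply V and V^2
    "X": (-21.5417, 49.8094, -38.8920, 16.1661, 17.3677, -3.4374),
    "Y": (30.7711, 150.6126, -119.9930, 45.0165, -38.8748, 11.2224),
    "Z": (33.1338, 14.2156, -6.8995, 2.0560, -28.7647, 6.2476),
}

def tristimulus(i, v, a):
    return a[0] + a[1]*i + a[2]*i**2 + a[3]*i**3 + a[4]*v + a[5]*v**2

fs, f_pwm, duty = 250_000, 200, 0.6                 # sampling rate, PWM frequency
t = np.arange(0.0, 1.0 / f_pwm, 1.0 / fs)           # one PWM period
on = (t * f_pwm) % 1.0 < duty
i_wave = np.where(on, 0.70, 0.0)                    # placeholder current [A]
v_wave = np.where(on, 3.14, 0.0)                    # placeholder voltage [V]

averaged = {}
for name, a in COEFFS.items():
    inst = np.where(on, tristimulus(i_wave, v_wave, a), 0.0)  # off-state emits nothing
    averaged[name] = inst.mean()                    # integral divided by the period
print(averaged)
\end{verbatim}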
\begin{table}[!ht]\footnotesize
\caption{Measured and modelled tristimulus values of the test diode and the colour distance between the corresponding colour points. \label{tab:IV2XYZ_model_verification}}
\centering
\begin{tabular}{llrrrrrrrrrrr}
& & & \multicolumn{3}{c}{\textsc{measured}} & & \multicolumn{3}{c}{\textsc{modelled}} & & & \\
\multicolumn{2}{l}{\textit{d}} & & \multicolumn{1}{c}{\textsc{\textit{x}}} & \multicolumn{1}{c}{\textsc{\textit{y}}} & \multicolumn{1}{c}{\textsc{\textit{z}}} & & \multicolumn{1}{c}{\textsc{\textit{x}}} & \multicolumn{1}{c}{\textsc{\textit{y}}} & \multicolumn{1}{c}{\textsc{\textit{z}}} & & \multicolumn{2}{c}{$ΔE_{ab}^{*}$} \\
\cline{1-2}\cline{4-6}\cline{8-10}\cline{12-13}
\multicolumn{2}{l}{0.2} & &  4.0447 & 17.3423 &  2.6773 & &  3.8695 & 16.6655 &  2.5735 & & \multicolumn{2}{c}{2.4091} \\
\multicolumn{2}{l}{0.4} & &  8.1630 & 34.4009 &  5.2286 & &  8.1669 & 34.6032 &  5.2597 & & \multicolumn{2}{c}{0.4614} \\
\multicolumn{2}{l}{0.6} & & 12.5078 & 51.9626 &  7.8009 & & 12.5364 & 52.2906 &  7.8480 & & \multicolumn{2}{c}{0.4869} \\
\multicolumn{2}{l}{0.8} & & 16.8984 & 69.2601 & 10.2805 & & 16.9262 & 69.5062 & 10.3069 & & \multicolumn{2}{c}{0.1671} \\
\multicolumn{2}{l}{1.0} & & 20.7419 & 83.9519 & 12.3199 & & 20.7569 & 84.0777 & 12.3363 & & \multicolumn{2}{c}{0.0319} \\
\hline
\end{tabular}
\end{table}

\noindent{}\marginpar{comments}The number of measurement points needed to describe the diode's behaviour depends on the complexity of the model and on the desired operating range. If the diode is to be dimmed using the \pwm{} scheme, it is sufficient to measure the parameters at a single forward current with varying heatsink temperature.

If the radiometric power of the emitted light is also modelled, it may be used together with the thermal model of the system to completely describe the luminaire's behaviour. Subtracting the optical power from the input electrical power gives the power dissipated in the diode structure as heat. This loss creates a temperature rise with respect to the heatsink temperature according to the thermal description of the heat flow path (fig.~\ref{fig:RC_network_thermal_model}). The resulting temperature increase in the junction can be translated into a forward voltage change, resulting in a new operating point to be fed back into the current-voltage model.

\marginpar{modelling led strings}Very often a luminaire consists of strings of series-connected light-emitting diodes. It is possible to model each diode in the string by measuring the individual diode voltages and the string current separately, but this approach is not practical. A single model created for the whole string would therefore be beneficial. Some restrictions in thermal design exist when using the current-voltage model with multiple \led{}s, as the model is created with only two external stimuli changing: current and heatsink temperature.

% chapter light_emitting_diodes (end)

%%%%%%%%%%%%%%%
%% Chapter 4 %%
%%%%%%%%%%%%%%%
\chapter{Luminaire control}% (fold)
\label{cha:luminaire_control}
Polychromatic solid-state luminaires need a control system in order to keep the desired luminance and colour level stable. The diodes' parameters change during normal operation due to self-heating, ambient temperature variations and ageing, and these changes should be compensated by a proper control mechanism.

Firstly, open-loop control is discussed. The methods of choosing an appropriate operating point, depending on the number of primaries used in the luminaire, are given. For luminaires containing more than three primaries, optimisation techniques may be used to optimise some of the lamp parameters, such as luminous flux or efficacy.

A review of existing colour control methods is presented, covering the main control mechanisms: temperature feed-forward, flux feedback and colour coordinate feedback.
A colour control loop based on the current-voltage diode model presented in section~\ref{sec:iv_model} is shown.

Various optimisation possibilities are discussed, from optimally choosing the working point of a polychromatic luminaire to the benefits, for a trichromatic luminaire, of using the hybrid dimming mechanism presented in section~\ref{ssec:hybrid_pwm_am}.

%% Luminaire control strategies
\section[Colour control strategies]{colour control strategies}

The control scheme will differ depending on the number of available colours in the luminaire. For polychromatic luminaires (three or more basic colours) control of the colour point and luminance is possible by adjusting the intensities of the individual diodes. In trichromatic luminaires there is only one solution of the colour equations that yields the desired colour point. If the luminaire consists of four or more diode colours, the number of possible solutions may be infinite, therefore the lamp operation can be optimised by choosing the operating point.

\subsection{Open loop}
An open-loop system uses only the calibration matrix~(\ref{eq:calibrationMatrix3Diodes}) to calculate the required duty cycles. The accuracy of this control method is limited by the self-heating of the diodes and by changes of the ambient temperature, because the values in the calibration matrix are measured at a single temperature. However, this type of control can still be used in luminaires where colour accuracy is not important. Because the system is modelled as a set of linear equations and linear algebra is used to find the operating point, the \pwm{} dimming mechanism should be used in luminaires controlled with this method.

\begin{figure}[!ht]% Open loop
	\centering
	\includegraphics{graphics/System/open_loop.pdf}
	\caption{Open loop luminaire control system.}
	\label{fig:SystemOL}
\end{figure}

\noindent{}The optical characteristics of a trichromatic luminaire are characterised by a calibration matrix $\mathbi{C}$ containing the diodes' tristimulus values at nominal conditions.
\begin{equation} \label{eq:calibrationMatrix3Diodes}
\mathbi{C} =
\left[ \begin{array}{ccc}
X_{1} & X_{2} & X_{3} \\
Y_{1} & Y_{2} & Y_{3} \\
Z_{1} & Z_{2} & Z_{3}
\end{array} \right]
\end{equation}
The calibration matrix is strongly temperature dependent, as the diodes experience a peak wavelength shift and a flux change with the change of temperature.

The duty cycles needed to obtain the desired colour point and luminance level can be calculated using the inverse of the calibration matrix.
\begin{equation} \label{eq:openLoopColor}
\mathbi{d} = \left[ \begin{array}{c}d_{1} \\ d_{2} \\ d_{3}\end{array} \right]   =   \mathbi{C}^{-1} \cdot{} \!\left[ \begin{array}{c}X \\ Y \\ Z \end{array} \right]
\end{equation}
If any of the calculated duty cycle values is negative, the desired colour point lies outside the gamut of the device. If, on the other hand, any of the values is above~1, the desired luminance level cannot be reached without overdriving the diode.

\marginpar{underdetermined system}When using more than three different diode colours, the system becomes underdetermined, as there are three equations (one for each tristimulus value) and $n>3$ unknowns, the diodes' relative outputs. The calibration matrix expands to a $3 \times{} n$ matrix composed of the diodes' tristimulus values.
\begin{equation} \label{eq:calibrationMatrixNDiodes}
\mathbi{C} =
\left[ \begin{array}{cccc}
X_{1} & X_{2} & \cdots & X_{n} \\
Y_{1} & Y_{2} & \cdots & Y_{n}\\
Z_{1} & Z_{2} & \cdots & Z_{n}
\end{array} \right]
\end{equation}
For an underdetermined system, there are typically infinitely many solutions. Every solution of the underdetermined system has the form $\mathbi{d}+c \cdot \mathbi{s}$, where $\mathbi{d}$ is a particular solution of $\mathbi{Cd}=\mathbi{b}$ and $c \cdot \mathbi{s}$ is a linear combination of solutions of the homogeneous system $\mathbi{Cs}=\mathbi{0}$.
\marginpar{row reduction}As an example, a system composed of four diodes: red, green, blue and white, defined by the calibration matrix
\begin{equation} \label{eq:calibrationMatrix4DiodesExample}
\mathbi{C} =
\left[ \begin{array}{cccc}
60 & 6  & 17  & 22 \\
25 & 26 & 5   & 20 \\
0  & 4  & 100 & 20
\end{array} \right]
\end{equation}
generating $X=20$, $Y=40$ and $Z=20$ will be investigated. In order to solve the system of equations, an augmented matrix $(\mathbi{C}|\mathbi{b})$ is composed.
\begin{equation} \label{eq:augmentedMatrixNDiodes}
\left(\mathbi{C}|\mathbi{b}\right) =
\left[ \begin{array}{cccc|c}
60 & 6  & 17  & 22 & 20 \\
25 & 26 & 5   & 20 & 40 \\
0  & 4  & 100 & 20 & 20
\end{array} \right]
\end{equation}
Using Gauss-Jordan elimination, matrix \ref{eq:augmentedMatrixNDiodes} is transformed into the following form
\begin{equation} \label{eq:augmentedMatrixGaussJordanEliminationNDiodes}
\left[ \begin{array}{cccc|c}
1 & 0 & 0 & 0.2677 & 0.1560 \\
0 & 1 & 0 & 0.4770 & 1.3604 \\
0 & 0 & 1 & 0.1809 & 0.1456
\end{array} \right]
\end{equation}
The corresponding system equations are given by
\begin{equation} \label{eq:equationsGaussJordanEliminationNDiodes}
\begin{array}{rcl}
d_{1} + 0.2677 \cdot d_{4} & = & 0.1560 \\
d_{2} + 0.4770 \cdot d_{4} & = & 1.3604 \\
d_{3} + 0.1809 \cdot d_{4} & = & 0.1456
\end{array}
\end{equation}
Introducing $c = d_{4}$ and rewriting the equations in matrix form yields
\begin{equation} \label{eq:matrixGaussJordanEliminationNDiodes}
\mathbi{d} = \mathbi{d}_{re} + c \cdot \mathbi{s}_{re} =
\left[ \begin{array}{c}
d_{1} \\ d_{2} \\ d_{3} \\ d_{4}
\end{array} \right] =
\left[ \begin{array}{c}
0.1560 \\ 1.3604 \\  0.1456 \\ 0
\end{array} \right] + c
\left[ \begin{array}{c}
-0.2677 \\ -0.4770 \\ -0.1809 \\ 1
\end{array} \right]
\end{equation}
The green diode in the initial solution $\mathbi{d}_{re}$ would have to be overdriven by 36\%; therefore, another solution has to be found by varying the value of $c$.

\marginpar{pseudoinversion}By analogy to equation~\ref{eq:openLoopColor}, the solution to the underdetermined system can also be calculated using an inverted matrix. As inversion of a non-square matrix is not possible, a pseudoinverted matrix $\mathbi{C}^{+}$ can be used to find an algebraic solution to the problem~\cite{Greville1959}.
\begin{equation} \label{eq:openLoopColorNDiodes}
\mathbi{d}_{pinv} =\left[ \begin{array}{c}d_{1} \\ d_{2} \\ \vdots \\ d_{n} \end{array} \right] = \mathbi{C}^{+} \cdot{} \!
\left[ \begin{array}{c}X \\ Y \\ Z \end{array} \right]
\end{equation}
Pseudoinversion yields the solution that has the smallest Euclidean norm $||d||_{2}$. This solution may or may not lie within the feasible solution space, where all duty cycles are in the range of $0 \leq d_{n} \leq 1$. The luminaire described by calibration matrix \ref{eq:calibrationMatrix4DiodesExample} and the pseudoinverted matrix
\begin{equation} \label{eq:pseudoinvertedCalibrationMatrix4DiodesExample}
\mathbi{C}^{+} =
\left[ \begin{array}{rrr}
   0.0189 &  -0.0076 &  -0.0032 \\
  -0.0164 &   0.0356 &   0.0004 \\
   0.0012 &  -0.0043 &   0.0098 \\
  -0.0026 &   0.0142 &   0.0011
\end{array} \right]
\end{equation}
generating $X=20$, $Y=40$ and $Z=20$ should, according to eq.~\ref{eq:openLoopColorNDiodes}, be driven as follows
\begin{equation} \label{eq:openLoop4DiodesExample}
\mathbi{d}_{pinv} =
\mathbi{C}^{+} \cdot{}\!
\left[ \begin{array}{r}
   20 \\
   40 \\
   20
\end{array} \right] =
\left[ \begin{array}{r}
   0.0226 \\
   1.0350 \\
   0.0413 \\
   0.5866
\end{array} \right]
\end{equation}
This particular solution lies outside the available duty cycle range. In order to find other possible solutions, a linear combination of the basis vectors of the kernel (null space) of calibration matrix $\mathbi{C}$ can be added to the solution vector $\mathbi{d}$. The null space of matrix $\mathbi{C}$ is defined as the set of vectors $\mathbi{s}$ for which $\mathbi{C}\cdot{}\mathbi{s}=\mathbi{0}$
\begin{equation} \label{eq:nullSpace4DiodesExample}
\mathbi{s} =
\left[ \begin{array}{r}
  -0.1929 \\
  -0.4704 \\
  -0.1508 \\
   0.8478
\end{array} \right]
\end{equation}
A feasible solution can be calculated by adding a linear combination of the basis vector $\mathbi{s}$. In this case, e.g. $c=1/10$ yields a feasible solution:
\begin{equation} \label{eq:openLoop4DiodesExamplePlusVector}
\mathbi{d} = \mathbi{d}_{pinv}+c \cdot{}\mathbi{s} =
\left[ \begin{array}{r}
   0.0226 \\
   1.0350 \\
   0.0413 \\
   0.5866
\end{array} \right] + \frac{1}{10}
\left[ \begin{array}{r}
  -0.1929 \\
  -0.4704 \\
  -0.1508 \\
   0.8478
\end{array} \right] =
\left[ \begin{array}{r}
   0.0033 \\
   0.9880 \\
   0.0262 \\
   0.6714
\end{array} \right]
\end{equation}

\noindent{}In general, finding a feasible solution by means of algebraic calculation requires finding an initial point and moving it into the feasible range.

\begin{figure}[ht]% Graphical solution for 4 diode underdetermined system
        \centering
        \includegraphics{graphics/System/graphical_solution.pdf}
        \caption{Graphical solution for the four diode underdetermined system. The feasible solution range is the subset of all solutions where all relative intensities are in the range of [0,1]. Initial solutions $\mathbi{d}_{re}$ and $\mathbi{d}_{pinv}$, described by equations \ref{eq:matrixGaussJordanEliminationNDiodes} and \ref{eq:openLoop4DiodesExample}, respectively, are shown in the solution space.}
        \label{fig:openLoop4DiodesExampleGraphicalSolution}
\end{figure}
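The procedure described above can also be verified numerically. The following minimal sketch (Python with the NumPy and SciPy libraries, shown for illustration only; the variable names are arbitrary and the script is not part of any luminaire firmware) computes the minimum-norm solution with the pseudoinverse, takes the null-space basis vector and determines the interval of the free parameter $c$ for which all relative intensities stay inside $[0,1]$. An empty interval means that the requested tristimulus values cannot be generated without overdriving at least one diode.
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

# Calibration matrix and target tristimulus values of the four-diode example
C = np.array([[60.0,  6.0,  17.0, 22.0],
              [25.0, 26.0,   5.0, 20.0],
              [ 0.0,  4.0, 100.0, 20.0]])
b = np.array([20.0, 40.0, 20.0])

d_pinv = np.linalg.pinv(C) @ b        # minimum-norm particular solution
s = null_space(C)[:, 0]               # null-space basis vector (C @ s = 0)

# Feasible interval of c: every component of d_pinv + c*s must stay in [0, 1]
lo, hi = -np.inf, np.inf
for p_i, s_i in zip(d_pinv, s):
    if abs(s_i) < 1e-12:              # this component does not depend on c
        if not (0.0 <= p_i <= 1.0):
            lo, hi = 1.0, 0.0         # infeasible regardless of c
        continue
    a, b_hi = sorted(((0.0 - p_i) / s_i, (1.0 - p_i) / s_i))
    lo, hi = max(lo, a), min(hi, b_hi)

if lo <= hi:
    print("feasible for c in [%.4f, %.4f], e.g. d =" % (lo, hi),
          np.round(d_pinv + lo * s, 4))
else:
    print("no feasible solution without overdriving at least one diode")
\end{verbatim}
The same scan over $c$ can equally well be applied to the row-reduction solution $\mathbi{d}_{re}$ and its null-space vector $\mathbi{s}_{re}$; only the bounds of the resulting interval change.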
\marginpar{conclusions} Both techniques described above, pseudoinversion and row reduction, find the solution in the form of an initial solution plus a direction along which the solution can be moved within the solution space. The graphical representation of the solution space for the system described by calibration matrix \ref{eq:calibrationMatrix4DiodesExample} (fig.~\ref{fig:openLoop4DiodesExampleGraphicalSolution}) shows both initial solutions. The advantage of using row elimination is that the initial solution will always yield a vector with $n-3$ zeroes, and the search through the solution space is limited to $c_{re} \in [0,1]$. The performance of the two algorithms was measured in \abbr{matlab} and the results are summarised in table~\ref{tab:system_solution_performance}.

\begin{table}[b]\footnotesize
\caption{Initial answer calculation time using pseudoinversion and row reduction. \label{tab:system_solution_performance}}
\centering
\begin{tabular}{ccc}
\textsc{number of diodes} & \textsc{pseudoinversion} & \textsc{row reduction} \\
\hline
3 & 0.234\thinspace{}ms & 0.617\thinspace{}ms \\
4 & 0.251\thinspace{}ms & 0.836\thinspace{}ms \\
5 & 0.249\thinspace{}ms & 0.983\thinspace{}ms \\
6 & 0.233\thinspace{}ms & 1.166\thinspace{}ms \\
\hline
\end{tabular}
\end{table}

\noindent{}\marginpar{control accuracy}To test the accuracy of the open loop control system, a luminaire consisting of three diodes, red, green and blue, was set up to deliver 50 lumens of white light ($x=1/3$ and $y=1/3$). The required tristimulus values calculated by eq.~\ref{eq:xyY2XYZ} are $X=50$, $Y=50$ and $Z=50$. The luminaire was placed in the input port of the integrating sphere. The diodes' properties were measured using a small duty cycle (\textit{d}~=~0.1\%) pulsed current so that the junction temperatures and the heatsink temperature are almost equal to the ambient temperature. The required duty cycles were calculated using equation~\ref{eq:openLoopColor} and the previously measured calibration data.

\begin{figure}[!ht]% Open loop tristimulus
  \centering
  \includegraphics{graphics/System/OpenLoop/open_loop_tristimulus_deltaE.pdf}
 \caption{Measured tristimulus values of the open loop luminaire control system (left) and the corresponding colour difference from the initial colour point (right). Self-heating of the \led{}s decreases their luminous flux, therefore their tristimulus values decrease. The $X$ value experiences the biggest drop because of the red diode's strong temperature dependence.}
 \label{fig:OpenLoopTristimulus}
\end{figure}

\noindent{}The self-heating of the diodes yields a drop of the tristimulus values (fig.~\ref{fig:OpenLoopTristimulus}). After ca. two minutes, the colour point moved past the just noticeable difference level. The colour shift depends on the junction temperature; therefore, it is affected by the thermal properties of the heat flow path and by variations of the ambient temperature. The transient response of the luminaire is related to the thermal capacities of the system components. The fast transient (time constant below one minute) at the beginning of the experiment is caused by the heating of the diode structure and of the aluminium \pcb{} the diodes are mounted on. The slow transient (time constant ca. ten minutes) is the result of the heatsink thermal mass.

\begin{figure}[!ht]%
        \centering
        \includegraphics{graphics/System/IV2XYZ/luminaireModelOL.pdf}
        \caption{Simulation model of an \rgb{} luminaire using the current-voltage diode model.}
        \label{fig:OpenLoopIV2XYZModel}
\end{figure}

\begin{figure}[!ht]%
        \centering
        \includegraphics{graphics/System/IV2XYZ/IV2XYZ_luminare_XYZ.pdf}
        \caption{Tristimulus values of the simulated \rgb{} luminaire. Diodes driven with the nominal 0.7\thinspace{}\textsc{a} current.}
        \label{fig:OpenLoopIV2XYZTristimulus}
\end{figure}

% Temperature feed forward
\subsection{Temperature feed forward}\led{} characteristics are mostly affected by the junction temperature. Therefore a control scheme based on temperature measurements is the most convenient. The \led{} junction (active, light-emitting region) temperature cannot, however, be measured directly. The practical approach is to measure the heatsink or ambient temperature and approximate the junction temperature by means of an \led{} thermal model. Garcia et~al.~\cite{Garcia2008,Garcia2009} designed a flux estimator to control the output flux during steady-state and transient conditions.

\begin{figure}[!ht]% Temperature feed forward
        \centering
        \includegraphics{graphics/System/tff.pdf}
        \caption{Temperature feed forward luminaire control system.}
        \label{fig:SystemTFF}
\end{figure}

\noindent{}In the \tff{} control loop, the controller adjusts the calibration matrix based on the measured or estimated junction temperature. The drawback of this method is that the light output (colour, intensity) is not the directly controlled variable. Any deviation of the \led{} model from the actual device will result in a long-term error~\cite{Subramanian2002}.

% Flux feedback
\subsection{Flux feedback}A flux feedback control scheme can maintain a constant flux output for each of the colours. This can be done by placing photodiodes near each \led{} colour or by using one photodiode and a time-sequenced measurement of the mixed light. This can easily overcome flux variations with ageing but cannot compensate for peak wavelength shifts.

\begin{figure}[!ht]% Flux feedback
        \centering
        \includegraphics{graphics/System/ffb.pdf}
        \caption{Flux feedback luminaire control system.}
        \label{fig:SystemFFB}
\end{figure}

\noindent{}Special care must be taken with the placement of the sensor. Ambient light can distort the flux measurement; therefore, either the placement of the sensor should minimise its influence or the ambient light should be measured independently and then subtracted from the measured \led{} flux.

% Flux feedback with temperature feed forward
\subsection{Flux feedback with temperature feed forward}A combination of the temperature feed forward and flux feedback methods can produce a very accurate control scheme because it can compensate for both the wavelength shift and the flux change due to ageing and temperature variations. However, the method relies on model data describing the relation between wavelength shift and temperature ($\ud{}λ/\ud{}T$).

\begin{figure}[!ht]% Flux feedback with temperature feed forward
        \centering
        \includegraphics{graphics/System/tffffb.pdf}
        \caption{Flux feedback with temperature feed forward luminaire control system.}
        \label{fig:SystemFFBTFF}
\end{figure}

% Colour coordinates feedback
\subsection{Colour coordinates feedback}A very high accuracy of colour reproduction can be achieved using colour coordinates feedback because the mixed light is the directly controlled variable. No model parameter variation or temperature difference can influence the steady-state output.

\begin{figure}[!ht]% Colour coordinates feedback
        \centering
        \includegraphics{graphics/System/ccfb.pdf}
        \caption{Colour coordinates feedback luminaire control system.}
        \label{fig:SystemCCFB}
\end{figure}

\noindent{}The challenge in designing such a control system is choosing an appropriate sensor. There are many colour sensors available on the market. General trichromatic sensors consist of three photodiodes with spectral responses in the red, green and blue parts of the spectrum. Some colour sensors also cover the \ir{} and \uv{} parts of the spectrum.

\begin{figure}[!ht]% Colour sensor spectral responses
        \centering
        \includegraphics{graphics/System/ColorSensors/sensitivity.pdf}
        \caption{Spectral responses of colour sensors: Taos~\abbr{tcs~230}, MaZeT~\abbr{mcs3as}, MaZeT~\abbr{mtcs}i\abbr{ct} and MaZeT~\abbr{mmcs6} (top to bottom). Data taken from individual sensor datasheets.}
        \label{fig:ColourSensorsSpectralResponses}
\end{figure}

%The sensor spectral response should match the \cie{}~\abbr{1931} 2° colour matching functions. Any difference between sensors spectral response and respective colour matching function would result in error in calculating colour position on $xy$-plane.

The temperature and ageing problems are not entirely solved by using a colour sensor, because these problems can affect the sensor itself. The expected sensor lifetime should match the expected lifetime of the whole lamp.

In the general case of an $n$-chromaticity sensor, the response of the sensor has to be transformed from its own colour space into the working colour space of the luminaire. At least three measurements are necessary in order to calibrate the sensor. Typically, the calibration procedure includes more than three points to increase the accuracy of the calibration. The calibration matrix for an $n$-chromaticity sensor is a 3×\textit{n} matrix
\begin{equation} \label{eq:ColourSensorCalibrationMatrix}
\mathbi{C} =
\left[ \begin{array}{ccc}
   c_{11} & \ldots{} & c_{n1} \\
   c_{12} & \ldots{} & c_{n2} \\
   c_{13} & \ldots{} & c_{n3}
\end{array} \right]
\end{equation}
Taking $m$ sensor measurements and comparing them to the measured $X$, $Y$ and $Z$ values yields two matrices
\begin{equation} \label{eq:ColourSensorMeasurements}
\mathbi{S} =
\left[ \begin{array}{ccc}
   s_{11} & \ldots{} & s_{m1} \\
   \vdots & \ddots{} & \vdots \\
   s_{1n} & \ldots{} & s_{mn}
\end{array} \right] \qquad
\mathbi{XYZ} =
\left[ \begin{array}{ccc}
   X_{1} & \ldots{} & X_{m} \\
   Y_{1} & \ldots{} & Y_{m} \\
   Z_{1} & \ldots{} & Z_{m}
\end{array} \right]
\end{equation}
Because of the finite resolution and accuracy of the measurement, the system can only be approximately described by
\begin{equation} \label{eq:ColourSensorSystem}
\mathbi{XYZ} \approx{} \mathbi{C} \cdot \mathbi{S}
\end{equation}
The calibration matrix $\mathbi{C}$ can be estimated using the least squares method. First, eq.~\ref{eq:ColourSensorSystem} is transposed, yielding
\begin{equation} \label{eq:ColourSensorSystemTransposed}
 \mathbi{S}^{T} \cdot \mathbi{C}^{T} \approx{} \mathbi{XYZ}^{T}
\end{equation}
and the least squares solution is calculated
\begin{align} \label{eq:ColourSensorCalibrationMatrixLeastSquareSolution}
 \widehat{\!\mathbi{C}}^{T}\!\!\!\!\; &= \left(\mathbi{S} \cdot \mathbi{S}^T\right)^{-1} \mathbi{S} \cdot \mathbi{XYZ}^T \nonumber \\
 \widehat{\!\mathbi{C}} &= \left(\mathbi{S} \cdot \mathbi{XYZ}^T\right)^T \cdot \left(\left(\mathbi{S} \cdot \mathbi{S}^T\right)^{-1}\right)^T \nonumber \\
 \widehat{\!\mathbi{C}} &= \mathbi{XYZ} \cdot \mathbi{S}^T \cdot \left(\mathbi{S} \cdot \mathbi{S}^T\right)^{-1}
\end{align}

\noindent{}A generic trichromatic colour sensor was calibrated with an \rgb{} and an \rgbw{} luminaire using random operating points (varying $x$ and $y$ colour coordinates and intensity values) and equation~\ref{eq:ColourSensorCalibrationMatrixLeastSquareSolution}. The results (fig.~\ref{fig:ColourSensorCalibration}) show that the \rgb{} system can be measured accurately with a trichromatic sensor, whereas for the \rgbw{} luminaire the measurement yields high colour variations.
\begin{figure}[!ht]% Colour sensor calibration
        \centering
        \includegraphics{graphics/System/CCFB/color_sensor_calibration.pdf}
        \caption{Colour sensor calibration points for the \rgb{} luminaire (left) and the \rgbw{} luminaire (right). Circles show colour points measured by the spectrometer. Dots show colour points measured by the sensor, transformed using the least squares solution matrix $\widehat{\!\mathbi{C}}$.}
        \label{fig:ColourSensorCalibration}
\end{figure}

\subsection{Current-voltage model based colour control}\label{ssec:iv2xyzColourControl}

\begin{figure}[!ht]% IV to XYZ color control
        \centering
        \includegraphics{graphics/System/iv2xyz/color_control_loop_IV2XYZ_model.pdf}
        \caption{Colour control loop utilising \ivxyz{} models. The measured current and voltage of the three diode strings are converted into instantaneous tristimulus values. These values are used as feedback for three current controllers.}
        \label{fig:color_control_IV2XYZ}
\end{figure}
The current-voltage model described in chapter~\ref{sec:iv_model} can be used in a \rgb{} luminaire colour control loop instead of optical feedback~(fig.~\ref{fig:color_control_IV2XYZ}). The voltages and currents of all colour strings present in the luminaire need to be measured. The electrical parameters are converted into tristimulus values using a model created during calibration. The respective tristimulus values are summed and subtracted from the command values. The resulting errors are fed into three \textsc{pi} controllers which control the red, green and blue diode strings. The bandwidth of these controllers can be limited to a few hundred hertz, as the human eye cannot perceive light changes above circa 100\thinspace{}Hz. The \led{} colorimetric properties depend on the forward current and the junction temperature. The active area and the internal \led{} structure heat up in the millisecond range, too fast for the human eye to perceive. The \led{} junction is also affected by the heatsink temperature. The time constant of a heatsink can be in the range of a few seconds to a few minutes, and the colour control loop should compensate for this change.

\begin{figure}[!ht] % (I)V to XYZ color control
        \centering
\includegraphics{graphics/System/IV2XYZ/color_control_loop_(I)V2XYZ_model.pdf}
        \caption{Colour control loop where the current feedback is taken from the current controller.}
        \label{fig:color_control_IsetV2XYZ}
\end{figure}
The current information is already present in the colour control loop in the form of the current command. This information can be fed back to the \led{} model to decrease the number of sensors needed, as shown in fig.~\ref{fig:color_control_IsetV2XYZ}. The current command differs from the actual current only during transients; therefore, this control scheme is best suited to \am{} dimmed luminaires.

\begin{figure}[!ht] % IV2XYZ luminare model
        \centering
        \includegraphics{graphics/System/IV2XYZ/luminaireModel_IV2XYZ_PI.pdf}
        \caption{Simulation model of a trichromatic \rgb{} luminaire controlled by three \textsc{pi} controllers and using the current-voltage diode model.
        \label{fig:IV2XYZ_luminare_model}}
\end{figure}

\section[Optimisations]{optimisations}
There are two possibilities for optimising the colorimetric properties of \led{} based luminaires. The number of different colours present in the luminaire will affect the achievable gamut, the luminous flux and the light quality of the luminaire. Žukauskas et~al. analysed the properties of polychromatic \led{} luminaires~\cite{Zukauskas2001,Zukauskas2004a}. By increasing the number of different colours present in the luminaire from three to four, the maximum colour rendering index can be increased from approximately 90 to approximately 98. Also, the choice of the peak wavelengths of the particular diode colours will influence both the \cri{} and the efficacy of the luminaire~\cite{Zukauskas2004a,Zukauskas2004b}. Chapter~\ref{ssec:linear_programming} shows an example of how an optimisation procedure can influence the luminaire design.

Another possibility for optimising the colorimetric properties is through the diode control. Optimisation is generally possible when the system is underdetermined. In polychromatic luminaires consisting of at least four different colours, like \textsc{rgba} or \textsc{rgbw} luminaires, one colour point can be reached by many different combinations of the diode intensities (figure~\ref{fig:openLoop4DiodesExampleGraphicalSolution}). However, a \rgb{} luminaire has three control variables (the intensity of each diode colour) and three colour equations; therefore, there is no possibility to optimise the control, as there is exactly one solution to the system. The hybrid diode control described in chapter~\ref{ssec:hybrid_pwm_am} can be used to increase the number of variables in the system. Each diode is controlled with both duty cycle and peak current; therefore, the number of variables present in the system doubles. Chapter \ref{ssec:Optimal_control_hybrid_dimming} shows the benefits of using hybrid dimming in a trichromatic \rgb{} luminaire.

It is worth noting that even though the optimisation procedures will find many operating points that yield the same tristimulus values, the spectrum at these operating points will vary. This means that the direct light seen by the observer will not change with a changing operating point, but the light reflected from various objects may change due to metamerism. This creates a practical limit on how fast the operating point should change so that the observer does not notice the change created by the optimisation procedure.
Also, multiple lamps, each running an optimisation routine and generating light with the same tristimulus values, may produce different spectra.

\subsection{Linear programming}\label{ssec:linear_programming}
Linear programming is a technique for finding the optimum (maximum or minimum) of an objective function given a set of constraints on the function variables. Linear programs are typically solved by simplex algorithms or interior point methods. Simplex algorithms search the corners of the feasible solution area (limited by the constraints), as the solution, if it exists, lies on one or more corner points. Interior point methods are more suitable for large optimisation problems and therefore will not be discussed.

\marginpar{example}Given a luminaire consisting of four colours of diodes: red, green, blue and white, described by the calibration matrix
\begin{equation} \label{eq:LPExampleCalibrationMatrix}
\mathbi{C} =
\left[ \begin{array}{cccc}
60 & 6  & 17  & 74 \\
25 & 26 & 5   & 77 \\
0  & 4  & 100 & 92
\end{array} \right]
\end{equation}
find the maximum luminous flux at the $x=0.25,\,y=0.25$ colour point. \marginpar{objective function}This problem can be solved by linear programming by maximising the objective function
\begin{equation} \label{eq:LPExampleObjectiveFunction}
\textrm{maximise: } F = 25d_{1} + 26d_{2} + 5d_{3} + 77d_{4}
\end{equation}
where $d_{1}\ldots d_{4}$ are the relative intensities of the diodes with respect to their nominal intensities. The coefficients of equation \ref{eq:LPExampleObjectiveFunction} are taken from the second row of the calibration matrix, which contains the $Y$ tristimulus values in lumens and therefore the luminous flux of the diodes. \marginpar{constraints}The solution should be constrained by the feasible relative fluxes of the diodes, $0 \leq d_{i} \leq 1$, and by the desired output colour point.
\begin{equation}
x = \frac{\sum_{i} d_{i}X_{i}}{\sum_{i} d_{i}(X_{i}+Y_{i}+Z_{i})} \qquad{}
y = \frac{\sum_{i} d_{i}Y_{i}}{\sum_{i} d_{i}(X_{i}+Y_{i}+Z_{i})}
\end{equation}
Computing the first equation yields
\begin{align}
0.25 &= \frac{ 60d_{1}+6d_{2}+17d_{3}+74d_{4} }{ d_{1}(60+25+0) +d_{2}(6+26+4)+d_{3}(17+5+100)+d_{4}(74+77+92) } \nonumber \\
0.25 &= \frac{ 60d_{1}+6d_{2}+17d_{3}+74d_{4} }{ 85d_{1} +36d_{2}+122d_{3}+243d_{4} } \nonumber
\end{align}
\begin{align}
85d_{1} +36d_{2}+122d_{3}+243d_{4} &= 240d_{1} +24d_{2}+68d_{3}+296d_{4} \nonumber \\
-155d_{1} +12d_{2}+54d_{3}-53d_{4} &= 0  \label{eq:LPExampleEqualityConstrain1}
\end{align}
and similarly for the second equation
\begin{equation} \label{eq:LPExampleEqualityConstrain2}
\phantom{0+}-15d_{1} -68d_{2}+102d_{3}-65d_{4} = 0 \phantom{+ 20d_{1} +26d_{2}+6d_{3}+2d_{4} } %phantoms to keep aligned with the first constraint
\end{equation}
Objective function \ref{eq:LPExampleObjectiveFunction} together with the equality constraints eq.~\ref{eq:LPExampleEqualityConstrain1}, eq.~\ref{eq:LPExampleEqualityConstrain2} and the duty cycle constraints $0 \leq d_{i} \leq 1$ forms a linear program. The solution of this problem is 97.1\thinspace{}lm at $d_{1} = 0.0477$, $d_{2} = 0.5336$ and $d_{3} = d_{4} = 1$ (a numerical sketch of this linear program is given below).

%\marginpar{maximal flux within luminaires gamut}
Finding the maximum output flux of the luminaire at given colour coordinates by linear programming can be repeated over the chromaticity plane to map the limits of the device inside its gamut.
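The linear program formulated above is small enough to be solved directly with an off-the-shelf solver. The minimal sketch below (Python with the SciPy library; shown only to make the formulation concrete, the solver choice and variable names are arbitrary) maximises the luminous flux of the example \textsc{rgbw} luminaire by minimising the negated second row of the calibration matrix, subject to the two equality constraints derived above and the box bounds on the relative intensities.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Calibration matrix of the example RGBW luminaire and target chromaticity
C = np.array([[60.0,  6.0,  17.0, 74.0],
              [25.0, 26.0,   5.0, 77.0],
              [ 0.0,  4.0, 100.0, 92.0]])
x_t, y_t = 0.25, 0.25

S = C.sum(axis=0)                     # column sums X_i + Y_i + Z_i
A_eq = np.vstack([C[0] - x_t * S,     # x constraint: sum(d_i X_i) = x_t sum(d_i S_i)
                  C[1] - y_t * S])    # y constraint: sum(d_i Y_i) = y_t sum(d_i S_i)
b_eq = np.zeros(2)

# linprog minimises, so negate the flux (Y) row to maximise the luminous flux
res = linprog(c=-C[1], A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * 4)
print("relative intensities d =", np.round(res.x, 4))
print("maximum luminous flux  =", round(-res.fun, 1), "lm")
\end{verbatim}
The same script, run repeatedly for a grid of $x$,~$y$ pairs, produces the maximum-flux maps discussed next.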
Figure~\ref{fig:LPMaxFluxGamut} compares the performance of two luminaires.
\begin{figure}[!ht]% LPMaxFluxGamut
        \centering
                \includegraphics{graphics/System/Optimizations/LinearProgramming/MaxFluxGamut.pdf}
        \caption{Comparison of the maximum flux achievable by two luminaires. Contours plotted every 5\thinspace{}lm. Left: luminaire described by calibration matrix \ref{eq:LPExampleCalibrationMatrix}. Right: luminaire with an increased number of green diodes.} \label{fig:LPMaxFluxGamut}
\end{figure}
Considering the first system, the white diode's colour point lies on the Planckian locus, while the point corresponding to the red, green and blue diodes driven at maximum current lies below the black-body locus. Therefore, when all diodes are driven with maximum current, the resulting colour point will lie between these two points, under the Planckian locus. If the generation of white light is to be optimised, the combined colour point of the red, green and blue diodes should be as close to the locus as possible. In order to do this, the number of green diodes has been increased by a factor of 2.5 to form the second system.

The cross section of the maximum flux plots along the black-body locus is shown in figure~\ref{fig:LPWhiteLightSolutions}.
\begin{figure}[!ht]% LPWhiteLightSolutions
        \centering
        \includegraphics{graphics/System/Optimizations/LinearProgramming/LPExampleBlackbodyMaxFlux.pdf}
        \caption{Maximum luminous flux for white light with various colour temperatures [K]. The left graphs show the system defined by calibration matrix \ref{eq:LPExampleCalibrationMatrix}. The maximum flux for 6500\thinspace{}\textsc{k} white light is 115.2\thinspace{}lm. By increasing the number of green diodes by a factor of 2.5 (right graphs), the flux at the same colour point increases to 171.6\thinspace{}lm. Zero luminous flux indicates that the desired colour point lies outside the gamut of the luminaire.} \label{fig:LPWhiteLightSolutions}
\end{figure}
In the case of the first system, the green diodes are clearly the limiting factor, as they are driven at their maximum luminance across all colour temperatures of the output light. An increased number of green diodes yields an increased maximum flux in the colour temperature range from around 2000\thinspace{}\textsc{k} to 10000\thinspace{}\textsc{k}, whereas increasing the number of blue diodes would not change anything.

\subsection{Optimal control using hybrid dimming}\label{ssec:Optimal_control_hybrid_dimming}
Increasing the number of control variables yields the possibility of improved control of a trichromatic \led{} luminaire. An experiment was conducted in order to test the gains from hybrid dimming. A red, a green and a blue diode were chosen, as this combination is most commonly present in trichromatic luminaires. The diodes' tristimulus values and electrical quantities were measured and fitted to a polynomial (figure~\ref{fig:DimmingPWMAM_XYZ}). These data represent every achievable colour point for each of the diodes. For simplicity, a constant heatsink temperature was used during data acquisition.

\marginpar{maximising luminous efficacy}Maximising the luminous efficacy (lm/\textsc{w}) means minimising the input power for a given flux level.
\begin{equation}
        \textrm{minimise: } f\!\;(d_{n},I_{peak,n}) = \sum_{n} P_{n}, \qquad{} n = \{\textrm{r, g, b}\}
\end{equation}
subject to
\begin{align}
        &X = \sum_{n}X_{n}, \qquad{}
         Y = \sum_{n}Y_{n}, \qquad{}
         Z = \sum_{n}Z_{n}  \nonumber \\
        &0 < I_{peak,min} < I_{peak,n} < I_{f,n,max} \\
        &0 < d_{min} < d_{n} < 1 \nonumber
\end{align}

\noindent{}Simulations (figure~\ref{fig:EfficacyIncreaseHybridDimming}) show an increase in luminous efficacy in the hybrid case compared to both \pwm{} and \am{} dimming. The increase depends on the value of the flux: the lower the flux, the greater the gain from the hybrid dimming approach.
\begin{figure}[!ht]% Efficacy increase at hybrid dimming
        \centering
        \includegraphics{graphics/System/Optimizations/HybridDimming/EfficacyIncrease.pdf}
        \caption{Luminous efficacy of hybrid, \pwm{} and \am{} dimmed luminaires. Efficacies calculated for colour points lying on the blackbody locus for 25 and 50 lumens. Zero efficacy means that the colour point of this colour temperature cannot be generated by this luminaire.}\label{fig:EfficacyIncreaseHybridDimming}
\end{figure}
\clearpage % *** manual new page 84/85
\noindent{}\marginpar{maximising flux}Another possibility of optimisation is the maximisation of the luminous flux. An interior-point search algorithm was used to find the maximum flux value for given $x$,~$y$ pairs. Using the same model, three dimming approaches were compared. By fixing the duty cycle to 100\thinspace{}\%, an \am{} dimmed luminaire was simulated. Similarly, when the peak current was forced to the nominal current of the diodes, a \pwm{} dimmed luminaire was simulated. The optimisation procedure was run for each dimming technique. The optimisation problem was formulated as a maximisation of the luminous flux
\begin{equation}
        \textrm{maximise: } f\!\;(d_{n},I_{peak,n}) = \sum_{n} Y_{n}, \qquad{} n = \{\textrm{r, g, b}\}
\end{equation}
subject to
\begin{align}
        &x = \frac{\sum_{n}X_{n}}{\sum_{n}(X_{n}+Y_{n}+Z_{n})}, \qquad{}
        y = \frac{\sum_{n}Y_{n}}{\sum_{n}(X_{n}+Y_{n}+Z_{n})} \nonumber \\
        &0 < I_{peak,min} < I_{peak,n} < I_{f,n,max} \\
        &0 < d_{min} < d_{n} < 1 \nonumber
\end{align}
where $I_{f,n,max}$ is the maximum forward current of the $n$-th diode. The value of $I_{peak,min}$ is set to 0 or $I_{f,n,max}$ for \am{} and \pwm{} dimming, respectively. Similarly, the $d_{min}$ variable is set to 0 or 1 for \pwm{} and \am{} dimming, respectively. For hybrid dimming both variables are set to zero.

\begin{figure}[!ht]% HybridVSPWM
        \centering
        \includegraphics{graphics/System/Optimizations/HybridDimming/Fhybrid-Fpwm.pdf}
        \caption{Increase of the luminous flux in the hybrid dimmed luminaire compared to the \pwm{} dimmed luminaire.}\label{fig:HybridVSPWM}
\end{figure}

\begin{figure}[!ht]% HybridVSAM
        \centering
        \includegraphics{graphics/System/Optimizations/HybridDimming/Fhybrid-Fam.pdf}
        \caption{Increase of the luminous flux in the hybrid dimmed luminaire compared to the \am{} dimmed luminaire.}\label{fig:HybridVSAM}
\end{figure}

The results (fig.~\ref{fig:HybridVSPWM} and~\ref{fig:HybridVSAM}) are plotted as the increase of the luminous flux in the hybrid dimmed luminaire compared to the \pwm{} and \am{} dimmed luminaires. An increased luminaire gamut in the blue-cyan colours is observed in both cases.

The hybrid \pwm{}/\am{} dimming technique presented in this dissertation is only one variation of the current shape that may be used to drive \led{}s. Tse~et~al.
\cite{Tse2009a,Tse2009b,Tse2009c,Tse2009d} proposed using two levels of \dc{} current and alternating between them. The colorimetric benefits of using such a current shape have not yet been studied.
% chapter luminaire_control (end)

%%%%%%%%%%%%%%%
%% Chapter 5 %%
%%%%%%%%%%%%%%%
\chapter{Power converter} % (fold)
\label{cha:power_converter}
Existing \ac{} diodes that can be directly connected to the electric grid are not powerful enough for projector and moving head applications. Power electronics circuitry is necessary to convert \ac{} grid power into a form applicable to high power \dc{} \led{}s. The structure and complexity of the power electronics converter depend on the power level of the installed \led{}s and on the type of colour control.

All electronic circuitry connected to the utility grid has to meet strict demands on the drawn and injected current. The \textsc{en}~61000-3-2 standard classifies devices by their application and imposes appropriate limits based on this classification. \textsc{en}~61000-3-3 puts limits on observable light flicker and voltage fluctuations.

In the simplest low power applications all diodes can be driven with a single-stage power converter. This ensures a low cost of the electronics and simplicity of the design. The drawback of a single stage converter is that the diode control is not fully decoupled from the grid current control. Any disturbance in the grid may be observable in the output light. Therefore this kind of converter is limited to applications where precise light control is not important, e.g. street lighting.

Power converters for driving high power, polychromatic luminaires are typically composed of multiple stages, each specialised in its own function. In the simplest form, one converter (called the power factor correction stage) controls the grid current and keeps the input current spectrum within the limits defined by the standards, while a second converter drives the light-emitting diodes.

\begin{figure}[!ht]% Three stages of power conversion
        \centering
                \includegraphics{graphics/Converter/PowerConverterLuminaire.pdf}
        \caption{Three stages of power conversion from the \ac{} grid source to the \led{} load. The \dc{}/\dc{} converter provides galvanic isolation between the grid and the load. Three \led{} drivers provide power to the red, green and blue diode strings.}
        \label{fig:PowerConverterLuminaire}
\end{figure}

\noindent{}The approach chosen for this project is a three stage power converter (fig.~\ref{fig:PowerConverterLuminaire}) where the first, \textsc{pfc} stage is an interleaved buck converter with an output voltage of 395\thinspace{}\textsc{v}. The role of the second stage, a phase shifted full bridge converter, is to lower the high voltage from the previous stage to a safe 30\thinspace{}\textsc{v} level and to provide galvanic isolation between its input and output. The third stage, described in this chapter, drives the \led{}s.

In the design process of the \led{} driver, the first two stages of the power converter are modelled as an ideal voltage source.

%% Converter requirements
\section[Converter requirements]{converter requirements}

The output voltages and currents of the converter are determined by the load. In the case of \led{}s, the current-voltage characteristic is well defined and does not vary much.
Also, when using a \pwm{} dimming method, there are only two operating points: when the diode is turned off and when the diode is supplied with the nominal current.

\begin{figure}[!ht]% Luminaire dimming methods
        \centering
                \includegraphics{graphics/Converter/DimmingMethods.pdf}
        \caption{Three approaches to dimming of individual diodes in the luminaire. Each colour has a separate driver which has a fixed current value and is controlled by an external \pwm{} intensity control (left). One driver supplies power to all diodes and individual diodes are shorted to create individual intensity control (middle). Each colour of the diodes is driven by a separate driver with an adjustable current (right).}
        \label{fig:DimmingMethods}
\end{figure}

\noindent{}Different dimming schemes may influence the driver design (fig.~\ref{fig:DimmingMethods}). The driver may operate with a fixed current and a nearly constant voltage if one driver supplies only one colour string. This is very beneficial because the driver may be optimised for a single operating point only.

If one driver supplies power to all series-connected diodes in the luminaire, and the diodes are controlled individually by shorting their terminals, the current is fixed to a single value but the voltage changes significantly, from zero to the sum of all forward voltages.

Finally, each of the colours in the luminaire can be controlled by a separate \dc{} current source with a variable current magnitude. This limits the operating points of the converter to the current-voltage curves of the connected diodes (e.g. fig.~\ref{fig:IV_n_Rs}). This approach gives the greatest control opportunities, as the current in each diode can be adjusted individually, allowing the use of \am{}, \pwm{} and hybrid dimming schemes.

The last scheme was chosen for implementation in the test luminaire. The highest degree of current control gives the opportunity to test the behaviour of hybrid dimming as well as the current-voltage \led{} model.

% figure: current voltage curves of 4xCBT-90

The test luminaire was chosen to have three colour strings: red, green and blue, each consisting of four \textsc{cbt}-90 diodes. The typical voltage of a string, at the 13.5\thinspace{}\textsc{a} nominal current, had a value of 9.6, 17.2 and 15.6\thinspace{}\textsc{v} for the red, green and blue string, respectively. The forward voltage of a single colour diode may vary significantly; therefore, the converter should be designed with a higher maximum power than predicted with the typical forward voltages. The total nominal power of the installed \led{}s is 572.4\thinspace{}\textsc{w}, with string powers of 129.6\thinspace{}\textsc{w} for the red, 210.6\thinspace{}\textsc{w} for the blue and 232.2\thinspace{}\textsc{w} for the green string.

%% Converter topologies
\section[Converter topologies]{converter topologies}
Different topologies of power converters may be used to power light-emitting diodes. The differences between topologies include the complexity, the control methods, the input and output voltages and the shape of the input and output waveforms~\cite{Broeck2007,EricksonBook}.

\begin{figure}[!ht]% Buck, synchronous buck, buck-boost, boost
        \centering
                \includegraphics{graphics/Converter/BasicDCDCConverters.pdf}
        \caption{Basic \dcdc{} converters. Buck (top left), synchronous buck (top right), boost (bottom left) and buck-boost (bottom right).}
\label{fig:BasicDCDCConverters}
\end{figure}

% Buck, buck-boost, boost, cuk, sepic, interleaved buck
\noindent{}Three basic topologies of \dcdc{} converters (fig.~\ref{fig:BasicDCDCConverters}) can be created by manipulating a switch cell composed of a transistor, an inductor and a recirculation diode: buck, boost and buck-boost. The choice between these converters depends on the input to output voltage ratio. If the output voltage is always lower than the input voltage, a buck topology can be used; in the opposite case, a boost converter can be implemented. A buck-boost topology can be applied in both of the above cases.

For increased efficiency at high current levels, the recirculation diode present in a converter can be replaced with a transistor which is driven complementarily to the other transistor. This reduces conduction power losses in the recirculation loop at the expense of increased complexity of the driving circuitry.

% Discuss isolated and non-isolated

Boost topologies require a lower voltage on the input port than on the load. If the lowest string voltage in the luminaire is 9.6\thinspace{}V, the boost-derived \led{} drivers would require more than 61\thinspace{}A of input current when all diodes are driven with the nominal current, not including driver losses.

One of the most important factors when designing a converter for solid-state light sources is the lifetime. Current state-of-the-art diodes can operate for tens of thousands of hours while maintaining more than 70\thinspace{}\% of the initial flux. The power converter driving these light sources should have a similar lifetime. Electrolytic capacitors are often the cause of failures in switch-mode power converters~\cite{Chen2008}. It is therefore beneficial to implement a topology that minimises their use.

\noindent{}High current electrolytic capacitors are used in power converters as filters and energy storage. A typical \dcdc{} power converter has at least two capacitors, at the input and at the output, limiting the current ripple and providing energy during transients.

%% Dual interleaved buck topology
\section[Dual interleaved buck topology]{dual interleaved buck topology}

\begin{figure}[!ht]% interleaved buck schematic
        \centering
                \includegraphics{graphics/Converter/Schematic_interleaved_buck.pdf}
        \caption{Schematic of the dual interleaved buck topology. The 180° phase shift in the control signals creates a ripple cancellation effect.}
        \label{fig:DualInterleavedBuckSchematic}
\end{figure}

\noindent{}An interleaved buck topology (fig.~\ref{fig:DualInterleavedBuckSchematic}) was investigated due to its ripple cancelling properties. When introduced, the interleaved buck topology was used extensively in voltage regulation modules (\textsc{vrm}s) for driving \textsc{cpu}s. The benefits of using this topology were increased dynamic performance, cancellation of the output ripple current and spreading of the current between multiple phases for easier thermal design~\cite{Consoli2001}. \textsc{cpu}s change their current demand often and require a stable supply voltage, which in turn requires large output capacitances. \led{}s, on the other hand, operate at the working point set by the driver and do not require fast dynamics. It is therefore possible to reduce the output capacitance to the point where film capacitors can be used instead of electrolytic capacitors.

\begin{figure}[!ht]% interleaved buck ripple attenuation
\centering
                \includegraphics{graphics/Converter/RippleAttenuation.pdf}
        \caption{Current ripple magnitude with respect to output voltage ($V_{g}=30\textrm{ V}$) superimposed on the current-voltage characteristics of the red, green and blue strings of four series-connected \textsc{cbt-90} diodes.}
        \label{fig:DualInterleavedBuckRippleAttenuation}
\end{figure}

\noindent{}The ripple cancellation effect is strongest at a duty cycle equal to 0.5, as shown in figure~\ref{fig:DualInterleavedBuckRippleAttenuation}~\cite{Wei2001}. The input voltage of the converter was fixed to 30\thinspace{}\textsc{v} so that the working point of the converter lies close to half of the input voltage.

%% Small-signal model
\section[Small-signal model]{small-signal model}
In order to design a controller, a small-signal model of the converter is created first. The small-signal model averages the signals over the switching period and describes the changes of the electrical quantities disregarding the switching ripple~\cite{EricksonBook}. This way the dynamics of the signal variations can be easily modelled.

The two buck structures can be modelled independently and, using the superposition principle, connected together. When the upper switch is turned on, the inductor current is equal to the current drawn from the input source. When the top switch of the converter phase is off, the input source current is equal to zero.
\begin{align}
        i_{g}(t) &= i_{1} \textrm{ during }d_{1}\\
        i_{g}(t) &= 0 \textrm{ during }1-d_{1}
\end{align}
Applying the small-ripple approximation we obtain
\begin{align}
        i_{g}(t) &\approx{} \avg{i_{1}}_{T_{s}} \textrm{ during }d_{1}\label{SSMig1d11}\\
        i_{g}(t) &= 0 \textrm{ during }1-d_{1}\label{SSMig1d12}
\end{align}
Similarly, for the other phase
\begin{align}
        i_{g}(t) &= i_{2} \textrm{ during }d_{2}\\
        i_{g}(t) &= 0 \textrm{ during }1-d_{2}
\end{align}
The small-ripple approximation yields
\begin{align}
        i_{g}(t) &\approx{} \avg{i_{2}}_{T_{s}} \textrm{ during }d_{2}\label{SSMig1d21}\\
        i_{g}(t) &= 0 \textrm{ during }1-d_{2}\label{SSMig1d22}
\end{align}
Equations \ref{SSMig1d11}, \ref{SSMig1d12}, \ref{SSMig1d21} and \ref{SSMig1d22} yield, on average over the switching period, an input current equal to
\begin{equation}
        \avg{i_{g}}_{T_{s}} = d_{1}\avg{i_{1}}_{T_{s}} + d_{2}\avg{i_{2}}_{T_{s}}
\end{equation}
A similar procedure can be applied to the output capacitor voltage and the inductor currents, yielding the following equations
\begin{align}\label{eq:SSMNonlinearConverterEquations}
 C\deriv{\avg{v_{c}}_{T_{s}}}{t} &= \avg{i_{1}}_{T_{s}} + \avg{i_{2}}_{T_{s}} - \frac{\avg{v_{c}}_{T_{s}} - \avg{v_{f}}_{T_{s}}}{R_{d}} \\
 L_{1}\deriv{\avg{i_{1}}_{T_{s}}}{t} &= d_{1}(t)\avg{v_{g}}_{T_{s}} - \avg{v_{c}}_{T_{s}} \\
 L_{2}\deriv{\avg{i_{2}}_{T_{s}}}{t} &= d_{2}(t)\avg{v_{g}}_{T_{s}} - \avg{v_{c}}_{T_{s}}
\end{align}
The nonlinear equations describing the converter have to be perturbed and linearised in order to extract the small-signal \ac{} equations. All state variables and inputs to the converter are expressed as quiescent values with superimposed \ac{} variations.
\begin{align}\label{eq:SSMPerturbation}
\avg{v_{g}}_{T_{s}} &= V_{g} + \widetilde{v_{g}}(t) \nonumber \\
\avg{v_{f}}_{T_{s}} &= V_{f} + \widetilde{v_{f}}(t) \nonumber \\
    d_{1}(t)       &= D_{1} + \widetilde{d_{1}}(t) \nonumber \\
    d_{2}(t)       &= D_{2} + \widetilde{d_{2}}(t) \\
\avg{v_{c}}_{T_{s}} &= V_{c} + \widetilde{v_{c}}(t) \nonumber \\
\avg{i_{1}}_{T_{s}} &= I_{1} + \widetilde{i_{1}}(t) \nonumber \\
\avg{i_{2}}_{T_{s}} &= I_{2} + \widetilde{i_{2}}(t) \nonumber
\end{align}
Substituting the above equations into the input current equation yields
\begin{align} % *****
        I_{g} + \widetilde{i_{g}}(t) &= \Big(D_{1} + \widetilde{d_{1}}(t)\Big) \Big(I_{1} + \widetilde{i_{1}}(t)\Big) + \Big(D_{2} + \widetilde{d_{2}}(t)\Big) \Big(I_{2} + \widetilde{i_{2}}(t)\Big) \nonumber\\
        I_{g} + \widetilde{i_{g}}(t) &= \underbrace{D_{1}I_{1} + D_{2}I_{2}}_{\textrm{DC terms}} + \underbrace{D_{1}\widetilde{i_{1}}(t) + D_{2}\widetilde{i_{2}}(t) + I_{1}\widetilde{d_{1}}(t) + I_{2}\widetilde{d_{2}}(t)}_{\textrm{I-order linear terms}} + \nonumber\\ &\qquad\qquad\qquad + \underbrace{\widetilde{d_{1}}(t)\widetilde{i_{1}}(t) + \widetilde{d_{2}}(t)\widetilde{i_{2}}(t)}_{\textrm{II-order nonlinear terms}}
\end{align}
This equation contains three types of terms: \dc{} (time-invariant) terms, first order \ac{} terms and second order \ac{} terms. If the small-signal assumptions are satisfied, the magnitude of the second order terms is much smaller than that of the other terms and they can therefore be neglected. Separation of the \dc{} and \ac{} terms yields two sets of equations
\begin{align}
I_{g} &= D_{1}I_{1} + D_{2}I_{2} \label{eq:SSMIgDC} \\
\widetilde{i_{g}}(t) &= D_{1}\widetilde{i_{1}}(t) + D_{2}\widetilde{i_{2}}(t) + I_{1}\widetilde{d_{1}}(t) + I_{2}\widetilde{d_{2}}(t) \label{eq:SSMIgAC}
\end{align}
Applying the perturbations to the $L_{1}$ inductor equation, one obtains
\begin{align}
        L_{1}\deriv{\Big(I_{1} + \widetilde{i_{1}}(t)\Big)}{t} &= \Big( D_{1} + \widetilde{d_{1}}(t)\Big)\Big(V_{g} + \widetilde{v_{g}}(t)\Big) - \Big(V_{c} + \widetilde{v_{c}}(t)\Big) \nonumber \\
        L_{1}\bigg(\deriv{I_{1}}{t} + \deriv{\widetilde{i_{1}}(t)}{t}\bigg) &= D_{1}V_{g} + D_{1}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{1}}(t) + \widetilde{d_{1}}(t)\widetilde{v_{g}}(t) - V_{c} - \widetilde{v_{c}}(t) \nonumber \\
        L_{1}\deriv{I_{1}}{t} + L_{1}\deriv{\widetilde{i_{1}}(t)}{t} &= \underbrace{D_{1}V_{g} - V_{c}}_{\textrm{DC terms}} + \underbrace{D_{1}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{1}}(t) - \widetilde{v_{c}}(t)}_{\textrm{I-order linear terms}} + \!\!\!\!\!\!\!\!\underbrace{\widetilde{d_{1}}(t)\widetilde{v_{g}}(t)}_{\textrm{II-order nonlinear terms}}
\end{align}
Again, the second-order nonlinear terms are omitted and the terms are separated
\begin{align}
 0 &= D_{1}V_{g} - V_{c} \label{eq:SSML1DC}\\
 L_{1}\deriv{\widetilde{i_{1}}}{t} &= D_{1}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{1}}(t) - \widetilde{v_{c}}(t)\label{eq:SSML1AC}
\end{align}
A similar procedure can be applied to the $L_{2}$ inductor equation, yielding
\begin{align}
 0 &= D_{2}V_{g} - V_{c} \label{eq:SSML2DC}\\
 L_{2}\deriv{\widetilde{i_{2}}}{t} &= D_{2}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{2}}(t) - \widetilde{v_{c}}(t)\label{eq:SSML2AC}
\end{align}
Finally, the output capacitor equation is perturbed.
\begin{align}
        C\deriv{\Big(V_{c} + \widetilde{v_{c}}(t)\Big)}{t} &= \Big( I_{1} + \widetilde{i_{1}}(t)\Big) + \Big(I_{2} + \widetilde{i_{2}}(t)\Big) - \frac{\Big(V_{c} + \widetilde{v_{c}}(t)\Big) - \Big(V_{f} + \widetilde{v_{f}}(t)\Big)}{R_{d}} \nonumber \\
        C\deriv{V_{c}}{t} + C\deriv{\widetilde{v_{c}}(t)}{t} &= \underbrace{ I_{1} + I_{2} - \frac{V_{c}-V_{f}}{R_{d}}}_{\textrm{DC terms}} + \underbrace{\widetilde{i_{1}}(t) + \widetilde{i_{2}}(t) -  \frac{\widetilde{v_{c}}-\widetilde{v_{f}}}{R_{d}}}_{\textrm{I-order linear terms}}
\end{align}
The \dc{} and \ac{} terms are separated, with the omission of the nonlinear terms.
\begin{align}
 0 &= I_{1} + I_{2} - \frac{V_{c}-V_{f}}{R_{d}} \label{eq:SSMVcDC}\\
 C\deriv{\widetilde{v_{c}}}{t} &= \widetilde{i_{1}}(t) + \widetilde{i_{2}}(t)  - \frac{\widetilde{v_{c}}(t) - \widetilde{v_{f}}(t)}{R_{d}}\label{eq:SSMVcAC}
\end{align}

\noindent{}The \dc{} equations extracted in the procedure (\ref{eq:SSMIgDC}, \ref{eq:SSML1DC}, \ref{eq:SSML2DC} and \ref{eq:SSMVcDC}), collected below, are used to calculate the quiescent operating point of the converter
\begin{align}
        I_{g} &= D_{1}I_{1} + D_{2}I_{2} \nonumber\\
        0 &= D_{1}V_{g} - V_{c} \\
        0 &= D_{2}V_{g} - V_{c} \nonumber\\
        0 &= I_{1} + I_{2} - \frac{V_{c}-V_{f}}{R_{d}} \nonumber
\end{align}
Upon finding the quiescent values of all the variables, the results are inserted into the small-signal \ac{} equations (\ref{eq:SSMIgAC}, \ref{eq:SSML1AC}, \ref{eq:SSML2AC} and \ref{eq:SSMVcAC}) collected below
\begin{align}
        \widetilde{i_{g}}(t) &= D_{1}\widetilde{i_{1}}(t) + D_{2}\widetilde{i_{2}}(t) + I_{1}\widetilde{d_{1}}(t) + I_{2}\widetilde{d_{2}}(t) \nonumber\\
        L_{1}\deriv{\widetilde{i_{1}}}{t} &= D_{1}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{1}}(t) - \widetilde{v_{c}}(t) \\
        L_{2}\deriv{\widetilde{i_{2}}}{t} &= D_{2}\widetilde{v_{g}}(t) + V_{g}\,\widetilde{d_{2}}(t) - \widetilde{v_{c}}(t) \nonumber\\
        C\deriv{\widetilde{v_{c}}}{t} &= \widetilde{i_{1}}(t) + \widetilde{i_{2}}(t) - \frac{\widetilde{v_{c}}(t) - \widetilde{v_{f}}(t)}{R_{d}} \nonumber
\end{align}

\begin{figure}[!ht]% Small-signal model separate equations
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/small_signal_model_separate.pdf}
        \caption{Circuits equivalent to the small-signal converter equations: (left)~input port, inductor loops, (right)~capacitor node.}\label{fig:SSModelSeparate}
\end{figure}
\begin{figure}[!ht]% Small-signal model circuit equivalent
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/small_signal_model_combined.pdf}
        \caption{Complete small-signal \ac{} equivalent circuit model of an ideal two phase, interleaved buck converter.}\label{fig:SSModelCombined}
\end{figure}
\noindent{}The digital average current controller chosen for this converter regulates the phase current by varying the duty cycle. Appropriate converter transfer functions must be analysed in order to design the dynamics of the controller.
\begin{equation}
        \widetilde{i_{1}}(s) = G_{i_{1}d_{1}}\widetilde{d_{1}}(s) +
        G_{i_{1}d_{2}}\widetilde{d_{2}}(s) + G_{i_{1}v_{g}}\widetilde{v_{g}}(s)
\end{equation}
The first summand in the above equation corresponds to the impact of the control signal on the phase current. The second term corresponds to the effect of the duty cycle variation in one phase on the inductor current in the other phase; it describes the circulating current in the converter. The last term is the effect of a variation in the input voltage on the inductor current and is treated as an external disturbance.

The control to inductor current transfer function is calculated assuming all other input variations are equal to zero:
\begin{equation}
        G_{i_{1}d_{1}}(s) = \frac{\widetilde{i_{1}}(s)}{\widetilde{d_{1}}(s)}\bigg|_{\widetilde{d_{2}}(s)=0\textrm{, }\widetilde{v_{g}}(s)=0}
\end{equation}
\begin{figure}[!ht]% Small-signal model Gi1d1 transfer function
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/small_signal_model_Gi1d1.pdf}
        \caption{Manipulation of the equivalent circuit of the interleaved buck converter to find the $G_{i_{1}d_{1}}(s)$ control to output transfer function.} \label{fig:SSModelGi1d1}
\end{figure}

\noindent{}Because of the assumption $\widetilde{v_{g}}(s)=0$, the transformers in the converter model (fig.~\ref{fig:SSModelCombined}) are shorted and the circuit can be manipulated into the final form shown in figure~\ref{fig:SSModelGi1d1}.

The inductor current $\widetilde{i_{1}}(s)$ can be calculated as
\begin{equation}
        \widetilde{i_{1}}(s) = \frac{1}{sL_{1}+\Big( R_{d}\big\|sL_{2}\big\|\frac{1}{sC} \Big)}V_{g}\widetilde{d_{1}}(s)
\end{equation}
therefore the control to inductor current transfer function is equal to
\begin{align}
        G_{i_{1}d_{1}}(s) &= \frac{\widetilde{i_{1}}(s)}{\widetilde{d_{1}}(s)} = V_{g} \frac{1}{sL_{1} + \frac{sR_{d}L_{2}}{s^2 R_{d}L_{2}C + sL_{2} + R_{d}}} =
V_{g} \frac{1}{\frac{sR_{d}L_{2} + s^3 R_{d}L_{1}L_{2}C + s^2 L_{1}L_{2} + sR_{d}L_{1}}{s^2 R_{d}L_{2}C + sL_{2} + R_{d}}}=\nonumber\\
&= V_{g} \frac{s^2 R_{d}L_{2}C + sL_{2} + R_{d}}{s^3 R_{d}L_{1}L_{2}C + s^2 L_{1}L_{2} + sR_{d}(L_{1} + L_{2})}=\nonumber\\
&= V_{g} \frac{s^{2}\big(\frac{L_{2}}{L_{1}+L_{2}}\big)C + s\frac{1}{R_{d}}\big(\frac{L_{2}}{L_{1}+L_{2}}\big) + \big(\frac{1}{L_{1}+L_{2}}\big)}{s^{3}\big(\frac{L_{1}L_{2}}{L_{1}+L_{2}}\big)C + s^{2}\frac{1}{R_{d}}\big(\frac{L_{1}L_{2}}{L_{1}+L_{2}}\big) + s}
\end{align}
Writing the parallel combination of the inductors in the following form
\begin{align}
         \bigg(\frac{L_{1}L_{2}}{L_{1}+L_{2}}\bigg) &= \big(L_{1}\big\|L_{2}\big)
\end{align}
we obtain the final transfer function
\begin{align}
        G_{i_{1}d_{1}}(s) &= V_{g} \bigg(\frac{L_{2}}{L_{1}+L_{2}}\bigg)
        \frac{s^{2}C+s\frac{1}{R_{d}}+\frac{1}{L_{2}}}{s^{3}\big(L_{1}\big\|L_{2}\big) C+s^{2}\frac{1}{R_{d}}\big(L_{1}\big\|L_{2}\big)+s}
\end{align}

\noindent{}The control to other-phase inductor current transfer function relates the current variation in one phase to the duty cycle variation in the other phase:

\begin{equation}
        G_{i_{1}d_{2}}(s) = \frac{\widetilde{i_{1}}(s)}{\widetilde{d_{2}}(s)}\bigg|_{\widetilde{d_{1}}(s)=0\textrm{, }\widetilde{v_{g}}(s)=0}
\end{equation}

\noindent{}Manipulation of the basic model of the converter, similarly to the previous transfer function, yields the simplified model shown in figure~\ref{fig:SSModelGi1d2}.

\begin{figure}[!ht]% Small-signal model Gi1d2 transfer function
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/small_signal_model_Gi1d2.pdf}
        \caption{Manipulation of the equivalent circuit of the interleaved buck converter to find the $G_{i_{1}d_{2}}(s)$ control to output transfer function.} \label{fig:SSModelGi1d2}
\end{figure}

\noindent{}The capacitor voltage can be expressed as
\begin{equation}
        v_{c} = \frac{\Big(R_{d}\big\|sL_{1}\big\|\frac{1}{sC} \Big)}{sL_{2}+\Big( R_{d}\big\|sL_{1}\big\|\frac{1}{sC} \Big)}V_{g}\widetilde{d_{2}}(s)
\end{equation}
while the $\widetilde{i_{1}}(s)$ current is equal to
\begin{equation}
\widetilde{i_{1}}(s) = \frac{-v_{c}\phantom{-}}{sL_{1}}
\end{equation}
and the transfer function can be calculated according to the following equations
\begin{align}
        G_{i_{1}d_{2}}(s) &= \frac{\widetilde{i_{1}}(s)}{\widetilde{d_{2}}(s)} = \frac{-V_{g}\frac{R_{d}}{s^{2}R_{d}L_{1}C+sL_{1}+R_{d}}}{sL_{2}+\frac{sR_{d}L_{1}}{s^{2}R_{d}L_{1}C+sL_{1}+R_{d}}} = \frac{-V_{g}R_{d}}{sL_{2}(s^{2}R_{d}L_{1}C+sL_{1}+R_{d})+sR_{d}L_{1}} =  \nonumber\\
        &=\frac{-V_{g}}{s^{3}L_{1}L_{2}C+s^{2}\frac{1}{R_{d}}L_{1}L_{2}+s(L_{1}+L_{2})}=\nonumber\\
  &= -V_{g}\bigg(\frac{1}{L_{1}+L_{2}}\bigg) \cdot \frac{1}{s^{3}\big(L_{1}\big\|L_{2}\big) C+s^{2}\frac{1}{R_{d}}\big(L_{1}\big\|L_{2}\big)+s}
\end{align}

\noindent{}The input-voltage-to-inductor-current transfer function will not be resolved symbolically due to its much higher complexity.

All transfer functions shown above behave like integrators because the parasitic resistances were omitted in the modelling. Adding these resistances changes the transfer functions so that they have a finite \dc{} response.

\begin{figure}[!ht]% Small-signal model
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/small_signal_model_nonideal_combined.pdf}
        \caption{Complete small-signal \ac{} equivalent circuit model of a nonideal two phase, interleaved buck converter. Includes conduction losses in transistors ($R_{tu}$ and $R_{td}$—drain-source resistance of upper and lower transistor, respectively), inductors ($R_{L1}$ and $R_{L2}$) and in output capacitor ($R_{esr}$).} \label{fig:SSModelNonidealCombined}
\end{figure}
\clearpage % *** manual new page 100/101
\begin{figure}[!ht]% Gi1d1 transfer function
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/Gi1d1TransferFunction.pdf}
        \caption{Ideal and detailed (including parasitic resistances) $G_{i_{1}d_{1}}(s)$ transfer function of the \led{} driver.} \label{fig:SSModelGi1d1TransferFunction}
\end{figure}
\begin{figure}[!ht]% Gi1d2 transfer function
        \centering
        \includegraphics{graphics/Converter/SmallSignalModel/Gi1d2TransferFunction.pdf}
        \caption{Ideal and detailed (including parasitic resistances) $G_{i_{1}d_{2}}(s)$ transfer function of the \led{} driver.} \label{fig:SSModelGi1d2TransferFunction}
\end{figure}

%% Controller design
\section[Controller design]{controller design}
The obtained transfer functions $G_{i_{1}d_{1}}(s)$ and $G_{i_{1}d_{2}}(s)$ are further discretised using a Tustin transform, because this method preserves the stability of the system~\cite{ComputerControlledSystems}.
\begin{figure}[!ht]% Current control loop
        \centering
        \includegraphics{graphics/Converter/Control/currentControlLoop.pdf}
        \caption{Current control loop in $z$ domain.} \label{fig:ControlCurrentControlLoop}
\end{figure}
A model of the control system is then created in the $z$ domain, as shown in figure~\ref{fig:ControlCurrentControlLoop}. \textsc{pi} controller parameters were tuned to obtain a stable response. The colour control loop runs at a much lower rate than the current control loop, therefore the current controller does not need very high bandwidth.

\begin{figure}[!ht]%
        \centering
        \includegraphics{graphics/Converter/Control/ControllerOLBode.pdf}
        \caption{Open-loop Bode plots of the control system showing the stability of the design.} \label{fig:ControlOpenLoopBode}
\end{figure}
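As an informal illustration of this step (not part of the thesis' design procedure), the ideal $G_{i_{1}d_{1}}(s)$ derived earlier can be evaluated numerically and discretised with the Tustin method. In the sketch below the capacitance $C$ and the dynamic resistance $R_{d}$ are assumed placeholder values chosen only for the example; the inductances, input voltage and switching frequency follow the nominal figures used elsewhere in this chapter.
\begin{verbatim}
# Illustrative sketch only: build the ideal G_i1d1(s) and discretise it
# with the Tustin (bilinear) method.  C and Rd are assumed example
# numbers, not the converter's actual design values.
import numpy as np
from scipy import signal

Vg  = 30.0          # input voltage [V]
L1  = L2 = 22e-6    # phase inductances [H]
C   = 10e-6         # output capacitance [F]              (assumed)
Rd  = 1.0           # LED string dynamic resistance [ohm] (assumed)
fsw = 200e3         # switching / sampling frequency [Hz]

Lp = L1 * L2 / (L1 + L2)          # L1 || L2

# G_i1d1(s) = Vg * L2/(L1+L2) * (s^2 C + s/Rd + 1/L2)
#             / (s^3 Lp C + s^2 Lp/Rd + s)
num = Vg * (L2 / (L1 + L2)) * np.array([C, 1.0 / Rd, 1.0 / L2])
den = np.array([Lp * C, Lp / Rd, 1.0, 0.0])

# Tustin discretisation at the switching frequency
numd, dend, dt = signal.cont2discrete((num, den), 1.0 / fsw,
                                      method='bilinear')
Gz = signal.TransferFunction(numd.flatten(), dend, dt=dt)
print(Gz)
\end{verbatim}
The discrete transfer function obtained this way can then be placed in the $z$-domain loop of figure~\ref{fig:ControlCurrentControlLoop} together with the \textsc{pi} controller.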
%% Current measurement 2291. \section[Current measurement]{current measurement} 2292. Current sensing is required  in the feedback loop for both current mode control and over-current protection of diodes. It is important that the measurement technique is fast, accurate, lossless and immune to switching noise. The dynamics of the current measurement affects the dynamics of the current control loop. 2293. \begin{figure}[!ht]% Current measurement (Rs, Rdson) 2294.         \centering 2295.                 \includegraphics{graphics/Converter/CurrentMeasurementRsRdson.pdf} 2296.         \caption{Current measurement by adding series measurement resistor $R_{s}$ (left) and using transistor $R_{ds(on)}$ resistance (right).} 2297.         \label{fig:CurrentMeasurementRsRdson} 2298. \end{figure} 2300. % Current sensing resistor 2301. \subsection{Current sensing resistor} 2302. Most basic current measurement technique is adding a series resistor and measuring the voltage across it (fig.~\ref{fig:CurrentMeasurementRsRdson}a). This method is very accurate as high precision resistor, with small tolerance and low temperature coefficient, can be used. The drawback is, that in order to obtain reasonably high voltage signal (above 100\thinspace{}mV), a resistor with resistance $R_{s}>100\textrm{mV}/i$ has to be used. The higher the resistance the bigger the losses 2303. \begin{equation} \label{eq:PowerLossInSenseResistor} 2304.         p_{loss} = R_{s}i^{2}. 2305. \end{equation} 2306. For example, a 1\thinspace{}\textsc{a} current sensor will need minimum 100\thinspace{}mΩ resistor dissipating 100\thinspace{}mW. A 10\thinspace{}\textsc{a} current sensor needs only 10\thinspace{}mΩ resistor that will dissipate 1\thinspace{}\textsc{w} of power. Low value sensing resistors can be manufactured as a fixed length track on a converter \pcb{}. 2308. \clearpage % *** manual new page 101/102 2309. \noindent{}\marginpar{sense resistor placement}Placement of the sense resistor influences both conditioning circuit and protection against short circuit. Placing the circuit in the return path of the current produces a ground referenced measurement but also increases the load potential with respect to the ground. \led{} die is typically electrically isolated from its heatsink therefore this method can be used. 2311. Sensor connected in series with the inductor provides constant current monitoring and load short circuit protection but it is the least efficient solution. It will not detect any short circuit through both transistors. The measurement has high common mode voltage dependent on the output voltage. 2313. Connecting the sensor in series with high side transistor provides very good short circuit protection and is typically used in peak current controlled converters. Losses depend on both output current and duty cycle of the converter therefore this placement is best used in low duty cycle buck converters. Differential voltage measurement with high common mode voltage is necessary. 2315. % MOSFET Rds(on) 2316. \subsection{\textsc{mosfet} drain-source resistance} 2317. Instead of using an added resistor, a parasitic resistance already present in the circuit can be used (fig.~\ref{fig:CurrentMeasurementRsRdson}b)~\cite{Forghani-zadeh2005}. The drain-source resistance of a open \mosfet{} can be used. This technique is considered as lossless as it does not introduce any additional losses apart from conduction losses of the transistor. 2318. When transistor is on, the value of the drain current is given by 2319. 
\begin{equation} \label{eq:DrainCurent} 2320.         i_{D} = \frac{v_{sense}}{R_{DS(on)}}. 2321. \end{equation} 2322. Again, the minimum $R_{DS(on)}$ resistance depends on the measured current level. By using this technique one may be forced to use worse transistor than available in the current state of the art market. Another drawback is the $R_{DS(on)}$ tolerance reaching 30–40\%~\cite{Hua2006}. Datasheets typically provide the typical and maximum value of the resistance. An additional calibration circuit can be introduced to overcome this problem. 2323. The on resistance depends also strongly on the temperature and is sensitive to low $V_{GS}$ voltage~\cite{Forghani-zadeh2005}. 2325. \begin{figure}[!ht]% Current measurement (inductor DCR) 2326.         \centering 2327.         \includegraphics{graphics/Converter/CurrentMeasurementInductorDCR.pdf} 2328.         \caption{Current measurement by inductor \textsc{dcr} current sensing (left) and improved inductor \textsc{dcr} current sensing (right).} 2329.         \label{fig:CurrentMeasurementInductorDCR} 2330. \end{figure} 2332. % inductor DCR 2333. \subsection{Inductor \textsc{dcr}} 2334. Instead of using \mosfet{}'s on resistance a scheme based on measuring inductor \dc{} resistance can be implemented~\cite{Hua2006}. Additional \textsc{rc} network has to be added to estimate the current in the inductor. The current is given by the Ohm law in s-domain 2335. \begin{equation}\label{eq:DCRInductorCurrent} 2336.         I_{L} = \frac{V_{L}}{R_{L}+sL} 2337. \end{equation} 2338. where $V_{L}$ is the voltage on the terminals of the inductor and $R_{L}$ is the inductors \dc{} resistance. The auxiliary $R_{s}, C_{s}$ network forms a voltage divider. 2339. \begin{align} 2340. V_{C} &=  \frac{1/sC_{s}}{R_{s}+1/sC_{s}}V_{L} \\ 2341. V_{C} &=  \frac{1}{1+sR_{s}C_{s}}V_{L} \\ 2342. V_{L} &= (1+sR_{s}C_{s})V_{C} 2343. \end{align} 2344. Substituting the result to equation \ref{eq:DCRInductorCurrent} yields 2345. \begin{align} 2346. I_{L} &= \frac{(1+sR_{s}C_{s})}{R_{L}+sL}V_{C} \\ 2347. I_{L} &= \frac{1}{R_{L}}\left( \frac{1+sR_{s}C_{s}}{1+s\frac{L}{R_{L}}}\right) V_{C} \\ 2348. I_{L} &= \frac{1}{R_{L}}\left( \frac{1+sτ_{RC}}{1+sτ_{RL}}\right) V_{C} 2349. \end{align} 2350. When both time constants are equal 2351. \begin{equation}\label{eq:DCREqualTimeConstants} 2352.         τ_{RC} = τ_{RL} \;\Rightarrow\; R_{s}C_{s} = \frac{L}{R_{L}} 2353. \end{equation} 2354. voltage on the capacitor $C_{s}$ is proportional to inductor current 2355. \begin{equation}\label{eq:DCRFinal} 2356.         I_{L} = \frac{V_{C}}{R_{L}}\,. 2357. \end{equation} 2358. However, the above equation is only true when the elements are well matched (eq.~\ref{eq:DCREqualTimeConstants}). Because of the high tolerance of inductive and capacitive components and dependence of $L$ on the \dc{} bias this condition is hard to satisfy. 2360. % Inductor dcr with improved SNR 2361. \subsection{Inductor \textsc{dcr} with improved signal-to-noise ratio} 2362. Signal voltage in the inductor \textsc{dcr} method depends on the series resistance of an inductor. If it is too small, the output signal will be susceptible to the interruption of noise. To increase signal to noise ratio (\textsc{snr}) a change in the measuring circuit (fig.~\ref{fig:CurrentMeasurementInductorDCR}b) has been proposed~\cite{Lethellier2002}. Additional transistors drive the measuring \textsc{rc} network. Measured signal includes the resistance of the inductor and $R_{DS(on)}$ of the transistor. 
This increases the signal by a factor of $(R_{DS(on)}+R_{L})/R_{L}$.

% Inductor dcr 2
\begin{figure}[!ht]% Current measurement (inductor DCR 2)
        \centering
        \includegraphics{graphics/Converter/CurrentMeasurementInductorDCR2.pdf}
        \caption{Schematic of current sensing circuit using a differential amplifier. For proper operation $R_{1}=R_{1}'$, $R_{2}=R_{2}'$ and $C_{1}=C_{1}'$ must be used.
        \label{fig:CurrentMeasurementInductorDCR2}}
\end{figure}

\subsection{Improved inductor \textsc{dcr}}\label{ssec:ImprovedInductorDCR}
The two previous inductor \textsc{dcr} measurement methods require a high inductor resistance so that the measured signal has sufficient magnitude. On the other hand, the efficiency requirement calls for the lowest possible resistance. It is however possible to create a measurement circuit that does not have this drawback, using a single differential amplifier with impedances matched to the inductor's time constant~\cite{Dallago2000}. The schematic of this current measurement scheme is shown in figure~\ref{fig:CurrentMeasurementInductorDCR2}.

\noindent{}The impedance of the paralleled $R_{1}$ and $C_{1}$ is equal to
\begin{equation} \label{eq:inductorDCR2Zrc}
        Z_{RC} = \frac{R_{1}}{1+sR_{1}C_{1}}.
\end{equation}
Currents flowing into and out of the $V_{a}$ and $V_{b}$ nodes are calculated as follows
\begin{equation} \label{eq:inductorDCR2I1I2}
        I_{1}=\frac{V_{1}-V_{a}}{R_{2}}\qquad{}
        I_{1}\:\!\!'=\frac{V_{a}-V_{out}}{Z_{RC}}\qquad{}
        I_{2}=\frac{V_{2}-V_{b}}{R_{2}}\qquad{}
        I_{2}\:\!\!'=\frac{V_{b}}{Z_{RC}}
\end{equation}
Assuming that the amplifier input currents are negligible, one can write $I_{1}=I_{1}\:\!\!'$ and $I_{2}=I_{2}\:\!\!'$. Equations~\ref{eq:inductorDCR2I1I2} can be rewritten as
\begin{align}\label{eq:inductorDCR2tf}
Z_{RC}\left(V_{1}-V_{a}\right) &= R_{2}\left(V_{a}-V_{out}\right)  \nonumber \\
Z_{RC}\left(V_{2}-V_{b}\right) &= R_{2}V_{b} \nonumber \\
Z_{RC}\left(V_{1}-V_{a}-V_{2}+V_{b}\right) &= R_{2}\left(V_{a}-V_{b}-V_{out}\right)
\end{align}
Assuming that the gain of amplifier \textit{A} approaches infinity, the voltages $V_{a}$ and $V_{b}$ can be treated as equal, $V_{a} = V_{b}$, and the transfer function of the differential amplifier circuit can be calculated as
\begin{align}\label{eq:inductorDCR2tf2}
Z_{RC}\left(V_{1}-V_{2}\right) &= -R_{2}V_{out}\nonumber \\
Z_{RC}V_{in} &= R_{2}V_{out} \nonumber \\
\frac{V_{out}}{V_{in}} &= \frac{Z_{RC}}{R_{2}}
\end{align}
The voltage on the inductor is equal to
\begin{align}\label{eq:inductorDCR2IL}
V_{L} &= I_{L}\left(R_{L}+sL\right) \nonumber \\
V_{L} &= I_{L}R_{L}\left(1+sL/R_{L}\right)
\end{align}
The current sensing circuit is connected to the terminals of the inductor, therefore substituting eq.~\ref{eq:inductorDCR2Zrc} and \ref{eq:inductorDCR2IL} into equation~\ref{eq:inductorDCR2tf2} yields
\begin{align}\label{eq:inductorDCR2tfVoutIL1}
\frac{V_{out}}{I_{L}} &= \frac{R_{L}R_{1}}{R_{2}}\cdot\frac{1+sL/R_{L}}{1+sR_{1}C_{1}}
\end{align}
When the time constants $R_{1}C_{1}$ and $L/R_{L}$ are equal, eq.~\ref{eq:inductorDCR2tfVoutIL1} simplifies to
\begin{equation}\label{eq:inductorDCR2tfVoutIL2}
\frac{V_{out}}{I_{L}} = \frac{R_{L}R_{1}}{R_{2}}
\end{equation}
The current signal is converted into a voltage signal with a gain set by the ratio of the $R_{1}$ and $R_{2}$ resistors and the series resistance of the inductor. This circuit requires a high-voltage, high-bandwidth amplifier in order to accurately follow the inductor current.

\begin{figure}[!ht]% Current measurement (Observer, Average)
        \centering
        \includegraphics{graphics/Converter/CurrentMeasurementObserverAverage.pdf}
        \caption{Current measurement by observer technique (left) and average current sensing (right).}
        \label{fig:CurrentMeasurementObserverAverage}
\end{figure}

% observer technique
\subsection{Observer technique}
This technique uses the inductor voltage to calculate the inductor current~\cite{Midya1997}. As the voltage-current relation of an inductor is given by
\begin{equation} \label{eq:VIinductor}
        v_{L} = L\frac{\ud{}i_{L}}{\ud{}t},
\end{equation}
the inductor current can be calculated by integrating the inductor voltage
\begin{equation} \label{eq:IVinductor}
        i_{L} = \frac{1}{L}\int{}v_{L}\ud{}t.
\end{equation}
The inductor voltage is typically a much larger signal than the output of a current sensing resistor and depends only on the input voltage.

% average current sensing
\subsection{Average current sensing}
Figure~\ref{fig:CurrentMeasurementObserverAverage}b shows a simple method of measuring the average current in the inductor~\cite{Xunwei1999}. An additional \textsc{rc} network is added in parallel with the recirculation transistor. Under steady state conditions, the average voltage on resistor $R_{s}$ is zero, therefore the average voltage on the inductor can be written as $V_{out}-\avg{V_{C}}$. The average current in the inductor is derived as
\begin{equation}
        \avg{I_{L}} = \frac{\avg{V_{L}}}{R_{L}} = \frac{V_{out}-\avg{V_{C}}}{R_{L}}\,.
\end{equation}
For accurate current sensing, the \dc{} resistance of the inductor has to be known. Values of the additional \textsc{rc} filter will affect the measurement bandwidth~\cite{Patel2007}, therefore this technique is mostly applicable to controlling current sharing between phases in multiphase converters.

% overview
\subsection{Overview}
\noindent{}An overview of current measurement schemes is shown in table~\ref{tab:current_measurement_comparison}. The improved inductor \textsc{dcr} method, described in chapter~\ref{ssec:ImprovedInductorDCR}, was chosen for the converter. It uses a single operational amplifier and a tuned \textsc{rc} circuit, therefore it is not expensive. Moreover, manufacturers provide inductors with specific tolerances for the \dc{} resistance. The accuracy of the method can therefore be calculated based on the resistance tolerance.

\input{tables/current_measurement_comparison.tex}

%% Hardware implementation
\section[Hardware implementation]{hardware implementation}
\begin{align}
        V_{in} &= 30\ \textrm{V}\nonumber \\
        V_{out} &= \left\{9.6, 17.2, 15.6\right\}\textrm{V\quad{}for red, green and blue diode string, respectively} \nonumber \\
        I_{f} &= 13.5\ \textrm{A}\nonumber \\
        I_{phase} &= 0.5 \cdot I_{f} = 6.75\ \textrm{A}\nonumber \\
        I_{pp} &=  25\ \% \cdot{} I_{phase} = 1.6875\ \textrm{A}\nonumber \\
        f_{sw} &= 200\ \textrm{kHz}\nonumber
\end{align}

\subsection{Inductor}
In order to calculate the necessary inductance value, the inductor voltage equation can be used.
\begin{equation}\label{eq:ConverterDesignInductorVoltageEquation}
        v_{L} = L\frac{\ud{}i_{L}}{\ud{}t}
\end{equation}
During the on phase of the switching period ($d\cdot{}t_{sw}$) the inductor current increases by the value of the peak-to-peak ripple current $I_{pp}$ and the voltage on the inductor is equal to $v_{in}-v_{out}$. Rearranging equation \ref{eq:ConverterDesignInductorVoltageEquation} and substituting appropriate variables yields
\begin{equation}\label{eq:ConverterDesignInductanceEquation}
        L = \left(V_{in}-V_{out}\right) \cdot{} \frac{V_{out}}{V_{in}\cdot{f_{sw}}} \cdot{} \frac{1}{I_{pp}}
\end{equation}
Evaluating the above equation yields 19.3\thinspace{}µH for the red, 21.7\thinspace{}µH for the green and 22.2\thinspace{}µH for the blue diode string. The closest standard inductor value of 22\thinspace{}µH was chosen to be used in the converter.

Based on the \dc{} bias current and the required inductance value, a 77350-\textsc{a}7 Magnetics core has been chosen for the inductor. The Magnetics design calculator was used to design the inductor and estimate its power losses. 21 turns of 16 \textsc{awg} wire yields 21.71\thinspace{}µH at full \dc{} current bias. Core losses are estimated at 90\thinspace{}mW. The \dc{} resistance of approximately 9\thinspace{}mΩ generates 410\thinspace{}mW of copper losses. The total dissipated power is equal to 500\thinspace{}mW per inductor.

\subsection{Output capacitor}
The value of the output capacitor is typically chosen based on the maximum output voltage ripple magnitude and the voltage overshoot during load transients. As the \led{} is driven by the current, the voltage transients are not important. The output capacitor limits the 400\thinspace{}kHz (twice the switching frequency) ripple current flowing through the load. Therefore the only reason for the output capacitor is the \textsc{emi} issues created by the wires connecting the driver to the diodes.

\subsection{Transistors}
\marginpar{mosfet}\mosfet{} (metal-oxide-semiconductor field-effect transistor) transistors are ideal switches for low voltage applications. When turned on, a channel of n-type or p-type semiconductor is formed between the drain and source terminals. Because of the small on resistance (typically a few mΩ) the conduction losses are much lower than in a \textsc{bjt} (bipolar junction transistor), which experiences a high voltage drop between its output terminals.

\begin{figure}[!ht]% MOSFET with parasitic components
        \centering
        \includegraphics{graphics/MOSFET/MOSFETwParasitic.pdf}
        \caption{Power \mosfet{} with parasitic components \cite{BaloghGateDrivers}. Nonlinearities of input capacitance $C_{iss}$, reverse transfer capacitance $C_{rss}$ and output capacitance $C_{oss}$ of the \abbr{ipb80n04s3-03} transistor shown as a function of drain-source voltage.} \label{fig:MOSFETwithParasiticComponents}
\end{figure}

\noindent{}A \mosfet{} with its basic parasitic components is shown in figure~\ref{fig:MOSFETwithParasiticComponents}. Drain and source inductance values depend on the transistor package (typically a few nH). The internal gate resistance has to be included in the driving loss calculations as its value is typically between 0.5–5\thinspace{}Ω.

\begin{figure}[!ht]% Switching Losses
        \centering
        \includegraphics{graphics/MOSFET/SwitchingLosses.pdf}
\caption{Switching waveforms of a power \mosfet{} during turn on (left) and turn off (right). $V_{GS}$ gate-source voltage, $I_{D}$ drain current, $V_{DS}$ drain-source voltage and $P$ power losses. $V_{GS}$ waveform can be obtained from specific transistor datasheet~($V_{GS}$ vs. $Q_{G}$ figure).} \label{fig:MOSFETSwitchingLosses} 2488. \end{figure} 2490. % Losses 2491. \marginpar{losses}There are three power loss mechanisms in the \mosfet{}. One is conduction losses, when transistor is fully on (fig.~\ref{fig:MOSFETSwitchingLosses}, period $t_{4}$$t_{7}$), and power is dissipated in the ohmic channel according to $R_{DS(on)}\cdot{}i_{D}^{2}$ equation. Blocking losses are typically neglected because of very low drain leakage current (for example \abbr{ipb80n04s3-03} has a 100\thinspace{}µA leakage current at $T_{j}=125\mathrm{^\circ{}C}$ which gives 4\thinspace{}mW at $v_{DS}=40 \mathrm{ V}$). The last mechanism is the switching losses. Due to high complexity, the analysis is broken into specific periods~(fig.~\ref{fig:MOSFETSwitchingLosses}). 2493. % t1-t2 2494. \marginpar{period $t_{1}$$t_{2}$}Before $t_{2}$, the current in the drain is zero and some voltage is present between drain and source. Only negligible blocking losses are present. \mosfet{} driver is charging the input capacitance $C_{iss}$. Transistor is in the off state. Gate-source voltage raises linearly (assuming constant current flowing into the gate). This period is defined as a turn-on delay $t_{d(on)}$. 2496. % t2-t3 2497. \marginpar{period $t_{2}$$t_{3}$}Gate-source voltage has reached the threshold voltage $V_{th}$ and transistor is starting to conduct current. $i_{D}$ raises but blocked voltage still remains on the output terminals. Power is dissipated due to non-zero product of $i_{D}$ and $v_{DS}$. Gate-source voltage continues to raise until it reaches plateau voltage $V_{sp}$. 2499. % t3-t4 2500. \marginpar{period $t_{3}$$t_{4}$}$C_{GS}$ is charged and gate current starts to charge $C_{GD}$. Due to a Miller effect, $v_{GS}$ is clamped to plateau voltage $V_{sp}$. Drain-source voltage is decreasing. The rate of $\ud{}v_{DS}/\ud{}t$ is dictated by the gate current charging the gate-drain capacitance $C_{rss}$. Assuming constant gate current, the drain-source voltage slope is not constant due to highly non-linear $C_{rss}$ (fig.~\ref{fig:MOSFETwithParasiticComponents}). Gate charge $Q_{GS} + Q_{GD}$ is the minimum charge needed to turn the transistor on. Gate charge value is used to calculate required gate drive current. 2502. \begin{table}[b] 2503. \footnotesize 2504. \caption{Losses in the top \mosfet{} at 6.75\thinspace{}\textsc{a} phase current (half of the nominal 13.5\thinspace{}\textsc{a} diode current) and 16\thinspace{}\textsc{v} output voltage. \label{tab:MosfetLossesComparison}} 2505. \centering 2506. \begin{tabular}{lcccc} 2507.         & \textsc{turn on} & \textsc{turn off} & \textsc{conduction} & \textsc{total} \\ 2508.         \textsc{symbol} & \textsc{losses} [\textsc{w}] & \textsc{losses} [\textsc{w}] & \textsc{losses} [\textsc{w}] & \textsc{losses} [\textsc{w}] \\ 2509.         \hline 2510.         \textsc{irfp4321}  & 0.552 & 0.071 & 0.409 & 1.032\\ 2511.         \textsc{sud23n06}  & 0.103 & 0.146 & 1.319 & 1.568\\ 2512.         \textsc{irls3036}  & 0.616 & 1.164 & 0.067 & 1.848\\ 2513.         \textsc{irlr3705z} & 0.199 & 0.360 & 0.293 & 0.853\\ 2514.         \textsc{irlr3636}  & 0.245 & 0.400 & 0.161 & 0.806\\ 2515.         \hline 2516. \end{tabular} 2517. \end{table} 2519. 
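For orientation, the dominant loss terms can be approximated with standard first-order expressions: conduction losses $D\,R_{DS(on)}I_{D}^{2}$, hard-switching overlap losses and gate-charge losses. The sketch below is an illustration only and is not the loss model used to produce table~\ref{tab:MosfetLossesComparison}; all device parameters in it are assumed placeholders rather than datasheet values of the listed transistors.
\begin{verbatim}
# Rough, first-order MOSFET loss estimate (illustration only; the thesis
# uses a more detailed datasheet-based model for the table above).
# All device parameters below are assumed placeholder values.

D      = 16.0 / 30.0   # duty cycle ~ Vout/Vin
I_d    = 6.75          # phase current [A]
V_ds   = 30.0          # blocked voltage [V]
f_sw   = 200e3         # switching frequency [Hz]
R_dson = 5e-3          # on-resistance [ohm]          (assumed)
t_on   = 30e-9         # effective turn-on time [s]   (assumed)
t_off  = 40e-9         # effective turn-off time [s]  (assumed)
Q_g    = 30e-9         # total gate charge [C]        (assumed)
V_drv  = 10.0          # gate drive voltage [V]

P_cond = D * R_dson * I_d**2                        # conduction losses
P_sw   = 0.5 * V_ds * I_d * (t_on + t_off) * f_sw   # hard-switching estimate
P_gate = Q_g * V_drv * f_sw                         # gate-drive losses

print(f"conduction {P_cond:.3f} W, switching {P_sw:.3f} W, gate {P_gate:.3f} W")
\end{verbatim}
Such first-order numbers are mainly useful for ranking candidate devices; the comparison in the table was produced with the more detailed model described in the text.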
\marginpar{mosfet comparison}A loss model, based on above description, has been used to estimate the losses in the transistor. The model uses datasheet parameters ($R_{DS(on)}$, $C_{oss}$, $C_{rss}$, $C_{gs}$, $R_{g}$, $Q_{g}$, $Q_{gd}$, $V_{Gth}$ and $gfs$) to calculate turn on, turn off and conduction losses in the transistors. Table~\ref{tab:MosfetLossesComparison} shows the loss distribution and the total loss in the device for the top transistor. A \textsc{irlr3636} \mosfet{} was chosen based on this loss estimation. Losses in the bottom transistor are dominated by the conduction losses, as the transistor is turning on and off with a diode forward voltage drop across it. The same \mosfet{} device was chosen due to its low cost and a low $R_{DS(on)}$ value. 2521. \subsection{Gate circuit} 2522. A \textsc{lm5106} 100\thinspace{}V half bridge gate driver with programmable dead-time has been chosen for driving the \mosfet{}s. The driver needs only one \pwm{} signal to drive two: high and low \mosfet{}s so that one driver containing two phases requires only two \pwm{} lines. 2524. \subsection{Current measurement}% Current measurement 2525. The improved inductor \textsc{dcr} current measurement method, described in detail in chapter \ref{ssec:ImprovedInductorDCR}, was chosen for the converter. In the luminaire, each colour diode string is driven with a driver consisting of two buck converters. Each of these converters requires a separate current measurement circuit, therefore the current measurement circuit should be cheap. 2527. \begin{figure}[!ht]% 2528.         \centering 2529.         \includegraphics{graphics/Converter/LedDriverCurrentMeasurement.pdf} 2530.         \caption{Current measurement scheme implemented in the driver. Inductor \textsc{dcr} measurement circuit (left) and improved circuit with biased input and output (right) allows the use of cheap operational amplifiers. Potentials $V_{1}$ and $V_{2}$ measured across the inductor. } \label{fig:LedDriverCurrentMeasurement} 2531. \end{figure} 2533. \noindent{}A low cost \textsc{lm837} operational amplifier was used due to its price, high operating voltage ±18\thinspace{}\textsc{v} and high unity-gain bandwidth of 25\thinspace{}MHz. This amplifier does not have rail-to-rail inputs or outputs, therefore the voltage levels were shifted to an appropriate level using bias voltage (for the output) and additional bias resistors (to bias both inputs) as shown in figure~\ref{fig:LedDriverCurrentMeasurement}. 2535. \begin{figure}[!ht]% 2536.         \centering 2537.         \includegraphics{graphics/Converter/CurrentMeasurement/CurrentMeasurementDynamics.pdf} 2538.         \caption{Measured effect of difference in $R_{1}C_{1}$ and $L/R_{L}$ time constants. Measured current $I_{Lx,meas}$ is controlled by the controller. Shape of the actual current depends on the relation of time constants. $R_{1}C_{1}$ < $L/R_{L}$ (left), $R_{1}C_{1}$ equal to $L/R_{L}$ (middle) and $R_{1}C_{1}$ > $L/R_{L}$ (right)} \label{fig:LedDriverCurrentMeasurementDynamics} 2539. \end{figure} 2541. Time constant of the $R_{1}C_{1}$ must be equal to the time constant of the inductor for the current measurement circuit to track the current accurately, as described in chapter \ref{ssec:ImprovedInductorDCR}. The effects of time constant mismatch can be visible in figure~\ref{fig:LedDriverCurrentMeasurementDynamics}. Although, the feedback current signal experiences normal step response, the actual current response may be very different. 2543. 
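To make the matching condition concrete, a small numerical check is sketched below. It uses the nominal inductor values quoted in the hardware section (22\thinspace{}µH and roughly 9\thinspace{}mΩ); the $R_{1}$, $C_{1}$ pair is an assumed example, not the component values fitted on the board.
\begin{verbatim}
# Quick check of the time-constant matching condition R1*C1 = L/R_L
# (illustration only; R1 and C1 below are an assumed example pair,
#  not the values used on the actual board).

L   = 22e-6    # nominal phase inductance [H]
R_L = 9e-3     # nominal inductor DC resistance [ohm]

tau_L = L / R_L                  # inductor time constant, ~2.4 ms
C1 = 100e-9                      # chosen filter capacitor [F] (assumed)
R1 = tau_L / C1                  # resistor required for matching

mismatch = lambda r, c: r * c / tau_L - 1.0   # relative mismatch
print(f"tau_L = {tau_L*1e3:.2f} ms, R1 = {R1/1e3:.1f} kOhm for C1 = 100 nF")
print(f"5 % high C1 -> mismatch {mismatch(R1, 1.05*C1)*100:.1f} %")
\end{verbatim}
A capacitor tolerance, or a drop of $L$ with \dc{} bias, therefore translates directly into the kind of transient mismatch illustrated in figure~\ref{fig:LedDriverCurrentMeasurementDynamics}.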
\begin{figure}[!ht] % Current measurement linearity
        \centering
        \includegraphics{graphics/Converter/CurrentMeasurement/CurrentMeasurementLinearity.pdf}
        \caption{Measured linearity of current sensing circuits for each phase of the converter. \textsc{adc}x values refer to the value of analog to digital conversion in the \textsc{dsp}.} \label{fig:LedDriverCurrentMeasurementLinearity}
\end{figure}

\noindent{}The linearity of the current sensing circuit has been measured in the laboratory (fig.~\ref{fig:LedDriverCurrentMeasurementLinearity}), showing a linear relationship between the actual and measured current in the inductors. For the amplifier gain and bias voltage used in the experiments the achieved current measurement resolution is around 5\thinspace{}mA.

\section[Experimental results]{experimental results}

\begin{figure}[!ht] % Converter
        \centering
        \includegraphics[scale=0.6]{graphics/Converter/Converter.jpg}
        \caption{Dual interleaved buck converter built in the laboratory. The top side, shown in the left picture, contains all power components: \mosfet{}s, inductors, gate drivers and filter capacitors. The bottom side of the \pcb{} contains the current measurement circuit and a connector for the control board.} \label{fig:LedDriverPicture}
\end{figure}

The design has been built in the laboratory (fig.~\ref{fig:LedDriverPicture}). The converter is controlled using a \textsc{tms320f28027} Piccolo microcontroller board. The \dsp{} contains four independent \pwm{} modules, each capable of generating two \pwm{} signals with the same carrier waveform. This means that one \dsp{} can control up to four \led{} drivers. The converter has been tested using a high power \led{} load shown in figure~\ref{fig:LedDriverLEDLoad}.

\begin{figure}[!ht] % LED load
        \centering
        \includegraphics[scale=0.75]{graphics/Converter/LedLoad.jpg}
        \caption{LED load used in the experiments composed of four series connected \textsc{cbt-90} diodes.} \label{fig:LedDriverLEDLoad}
\end{figure}

\begin{figure}[!ht] % Current step
        \centering
        \includegraphics{graphics/Converter/Control/CurrentStepSimulationMeasurement.pdf}
        \caption{Measured (left) and simulated (right) current step in one of the phases of the converter. Gate driver losses are included in the measurement. } \label{fig:LedDriverCurrentStepSimulationMeasurementComparison}
\end{figure}

The dynamic response was tested by performing a step in the current command. The results, shown in figure~\ref{fig:LedDriverCurrentStepSimulationMeasurementComparison}, show a small difference between the model and the real system. The real system shows a more damped response with little overshoot. This phenomenon may be explained by the lack of detailed modelling of all parasitic effects, e.g. the inductance change with the \dc{} current value or the switching losses in the transistors.

\begin{figure}[!ht] % Efficiency
        \centering
        \includegraphics{graphics/Converter/LedDriverEfficiency.pdf}
        \caption{Measured efficiency of the \led{} driver supplying four, series connected green \textsc{cbt-90} \led{}s.} \label{fig:LedDriverEfficiency}
\end{figure}

\noindent{}Efficiency was measured with the previously shown load.
Power needed to supply the \textsc{dsp} are not included in the efficiency calculations as one processor can be used for controlling multiple \led{} drivers and, at the same time, act as a colour controller. 2583. \begin{figure}[!ht] % Thermal image 2584.         \centering 2585.         \includegraphics[scale=0.6]{graphics/Converter/ThermalImage.png} 2586.         \caption{Thermal image of power converter driving four, series connected \textsc{cbt-90} \led{}s with 2\thinspace{}\textsc{a}, 20\thinspace{}\textsc{w} (left) and 13.5\thinspace{}\textsc{a}, 190\thinspace{}\textsc{w} (right). No forced cooling was used.} \label{fig:LedDriverThermalImage} 2587. \end{figure} 2589. % chapter power_converter (end) 2591. %%%%%%%%%%%%%%% 2592. %% Chapter 6 %% 2593. %%%%%%%%%%%%%%% 2594. \chapter{Conclusions} % (fold) 2595. \label{cha:conclusions} 2596. The aim of this work was to research \led{} driving solutions and to design a power converter for driving high power \led{} light sources. Thorough investigation of the previous scientific work showed three distinct research areas: \led{} properties (colorimetric, thermal, dimming, lifetime, etc.), luminaire control and \led{} drivers. Luminaire control work used the results of studies on diodes' properties together with the knowledge on colour theory to create colour control engines. However, little work bridges the gap between the colour control and \led{} drivers. While designing a \led{} driver, most of the focus is on topologies and converter lifetime and almost no focus is given to the driven diode. Similarly, researchers in the field of luminaire control treat \led{} drivers as black boxes where the only design choice is the dimming method. Therefore, in order to create an intelligent \led{} power converter, all three major research areas have been examined. 2598. \marginpar{hybrid pwm/am modulation}Previous research on pulse-width and amplitude modulation control show different effects, these methods have on diode performance and characteristics. It is therefore possible to drive the diode with any current shape to obtain different efficacy and colour points than with the two classical driving methods. To the hypothesis a diode was driven with a hybrid \pwm{}/\am{} dimming mechanism. The choice of the driving current was made based on the observation that most commercial \led{} drivers offer both methods of control, so the implementation of this method in existing or future luminaires would not require much change in hardware design. Tests conducted in laboratory conditions proved that by using the hybrid dimming mechanism, many new control opportunities became possible. 2600. An observation that \pwm{} and \am{} methods yield an opposite colour shift while dimming white phosphor-converted \led{} led to the discovery that by using the hybrid dimming mechanism one is able to control the position of the colour point and intensity of the diode. This phenomenon was used to stabilise the colour point that shifted with the heatsink temperature changes. 2602. Similar effect on colour point was observed with green and blue InGaN diodes. The hybrid modulation was used to minimise the peak wavelength shift yielding a colour point moving inside the MacAdam ellipse throughout its dimming range. Future investigation should verify if this behaviour can be used to increase the accuracy of the colour sensors, as the spectrum shifts are the main cause of the measurement error. 2604. 
Future research on this topic should include different driving currents, as the hybrid \pwm{}/\am{} dimming is only a single example of the possible current shapes. Existing current control methods, like peak current control, are capable of controlling the current with very high bandwidth, therefore very complex current shapes can easily be used to drive light-emitting diodes. 2606. \marginpar{luminaire control}Review of luminaire control show various feedback mechanisms that stabilise the output light colour. Methods include measuring the colour or the intensity of the light and different indirect junction temperature measurement schemes. Junction temperature information is used, together with a model of the parameters change, to estimate diode colorimetric properties. Review of colour spaces and corresponding colour distance metrics shows that the $ΔE_{ab}^{*}$ colour distance is a good measure of colour control loop accuracy. 2608. \marginpar{luminaire control optimisation}Polychromatic luminaires, consisting of four or more \led{}s, have been previously shown to have the possibility to optimise the control of primary diodes. The optimisation procedure can maximise various lamp parameters such as: efficacy, luminous flux or colour quality. Trichromatic luminaires did not have this ability using \pwm{} or \am{} dimming methods. Research presented in this dissertation proved that by using the hybrid dimming mechanism, an increase in control allows the luminaire to optimise the same lamp parameters as in polychromatic luminaire consisting of more than three basic diodes. An increase of efficacy has been found, especially at lower intensity levels. An increased device gamut has been also shown, particularly in the cyan area of trichromatic \rgb{} luminaire. 2610. \marginpar{current-voltage model}In this dissertation, a current-voltage diode model has been presented which utilises the fact that the diode's parameters depend on instantaneous values of junction temperature and forward current. Diode's voltage is dependent on its current and the junction temperature and is very easy to measure. It is therefore possible to create a model of diode's colorimetric and power properties based solely on instantaneous values of diode's current and voltage. The model was proved to generate accurate colorimetric feedback much under just noticeable $ΔE_{ab}^{*}$ colour distance even under pulsed current conditions. 2612. The current-voltage model can be applied to string connected light-emitting diodes but the accuracy of the method has not yet been verified in laboratory. 2614. The model can be easily used in trichromatic luminaire colour control, where the only needed feedback value is the diode voltages. The instantaneous current information is taken from current command as the two values are different only during very short current settling transients. 2616. \marginpar{detailed model of a luminaire}Together with detailed thermal and electrical model of a diode, the model can provide a good platform for simulating colorimetric and power properties of the diode under different current shapes mentioned before. Current-voltage model can provide information about dissipated power to the detailed thermal model. The resulting junction temperature and driving current can be converted into forward voltage used in the current-voltage model. The accuracy of this complete luminaire description has not yet been proven and is a part of suggested future work. 
The detailed model has been used to show the effect of heatsink thermal resistance on the maximum achievable flux.

\marginpar{model generation procedure}The data acquisition procedure used to create the current-voltage model is far from perfect as it includes the delays for thermally stabilising the luminaire system. As part of future work, the fact that the thermal time constants of the \led{} structure and the heatsink are very different should be utilised to increase the speed of the model generation. Also, the means of generating the model of light-emitting diodes without the use of an actively controlled heatsink should be investigated.

\marginpar{power converter}The review of diodes' behaviour under different dimming mechanisms led to the conclusion that driving the diode with \dc{} current yields the highest efficiency compared to other, pulsed current methods. Therefore the converter should supply the diode with a constant current of regulated value. Previous research also shows the importance of lifetime analysis of the converter. The key component limiting the lifetime of the converter is assumed to be the electrolytic capacitor. As electrolytic capacitors are typically used at the input and output of power converters as part of the filters, a topology minimising the need for these capacitors was investigated. The dual interleaved buck converter was built and tested driving four series connected high current \textsc{cbt-90} diodes. This converter minimises the output current ripple using the interleaving technique, where the current ripples from both phases cancel each other. This effect is strongest around 0.5 duty cycle. Because of the low series resistance of the light-emitting diodes, their voltage does not change much with the change of the driving current. The dual interleaved converter can be used with an input voltage close to twice the forward voltage of the diode, therefore operating close to 0.5 duty cycle over the whole diode current dimming range and thus minimising the need for an output capacitor.
% chapter conclusions (end)

% Bibliography %
\cleardoublepage
\phantomsection
\addcontentsline{toc}{chapter}{\numberline{ }Bibliography}
\bibliographystyle{unsrt}
\bibliography{thesis}
\end{document}
Tuesday, 29 September 2015 On This Day in Math - September 29 Young man, if I could remember the names of these particles, I would have been a botanist. ~Enrico Fermi The 272nd day of the year; 272 = 24·17, and is the sum of four consecutive primes (61 + 67 + 71 + 73). 272 is also a Pronic or Heteromecic number, the product of two consecutive factors, 16x17 (which makes it twice a triangular #). And 272 is a palindrome, and the sum of its digits, 11, is also a palindrome.  (can you find the next?)  1609  Almost exactly a year after the first application for a patent of the telescope, Giambaptista della Porta, the Neapolitan polymath, whose Magia Naturalis of 1589, well known all over Europe, because of a tantalizing hint at what might be accomplished by a combination of a convex and concave lens: ‘With a concave you shall see small things afar off, very clearly; witha convex, things neerer to be greater, but more obscurely: if you know how to fit them both together, you shall see both things afar off, and things neer hand, both greater and clearly.’sends a letter to the founder of the Accademia dei Lincei, Prince Federico Cesi in Rome, with a sketch of an instrument that had just reached him, and he wrote:" It is a small tube of soldered silver, one palm in length, and three finger breadths in diameter, which has a convex glass in the end. There is another tube of the same material four finger breadths long, which enters into the first one, and in the end. It has a concave [glass], which is secured like the first one. If observed with that first tube, faraway things are seen as if they were near, but because the vision does not occur along the perpendicular, they appear obscure and indistinct. When the other concave tube, which produces the opposite effect, is inserted, things will be seen clear and erect and it goes in an out, as in a trombone, so that it adjusts to the eyesight of [particular] observers, which all differ. *Albert Van Helden, Galileo and the telescope; Origins of the Telescope, Royal Netherlands Academy of Arts andSciences, 2010 (I assume that we can safely date the invention of the trombone prior to 1609 also) 1801 Gauss’s Disquisitiones Arithmeticae published. It is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798 when Gauss was 21 and first published in 1801 when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. The book is divided into seven sections, which are : Section I. Congruent Numbers in General Section II. Congruences of the First Degree Section III. Residues of Powers Section IV. Congruences of the Second Degree Section V. Forms and Indeterminate Equations of the Second Degree Section VI. Various Applications of the Preceding Discussions Section VII. Equations Defining Sections of a Circle. Sections I to III are essentially a review of previous results, including Fermat's little theorem, Wilson's theorem and the existence of primitive roots. Although few of the results in these first sections are original, Gauss was the first mathematician to bring this material together and treat it in a systematic way. He was also the first mathematician to realize the importance of the property of unique factorization (sometimes called the fundamental theorem of arithmetic), which he states and proves explicitly. From Section IV onwards, much of the work is original. 
Section IV itself develops a proof of quadratic reciprocity; Section V, which takes up over half of the book, is a comprehensive analysis of binary quadratic forms; and Section VI includes two different primality tests. Finally, Section VII is an analysis of cyclotomic polynomials, which concludes by giving the criteria that determine which regular polygons are constructible i.e. can be constructed with a compass and unmarked straight edge alone. *Wik In 1988, the space shuttle Discovery blasted off from Cape Canaveral, Fla., marking America's return to manned space flight following the Challenger disaster. *TIS 1994 HotJava ---- Programmers first demonstrated the HotJava prototype to executives at Sun Microsystems Inc. A browser making use of Java technology, HotJava attempted to transfer Sun's new programming platform for use on the World Wide Web. Java is based on the concept of being truly universal, allowing an application written in the language to be used on a computer with any type of operating system or on the web, televisions or telephones.*CHM 1561  Adriaan van Roomen (29 Sept 1561 , 4 May 1615) is often known by his Latin name Adrianus Romanus. After studying at the Jesuit College in Cologne, Roomen studied medicine at Louvain. He then spent some time in Italy, particularly with Clavius in Rome in 1585. Roomen was professor of mathematics and medicine at Louvain from 1586 to 1592, he then went to Würzburg where again he was professor of medicine. He was also "Mathematician to the Chapter" in Würzburg. From 1603 to 1610 he lived frequently in both Louvain and Würzburg. He was ordained a priest in 1604. After 1610 he tutored mathematics in Poland. One of Roomen's most impressive results was finding π to 16 decimal places. He did this in 1593 using 230 sided polygons. Roomen's interest in π was almost certainly as a result of his friendship with Ludolph van Ceulen. Roomen proposed a problem which involved solving an equation of degree 45. The problem was solved by Viète who realised that there was an underlying trigonometric relation. After this a friendship grew up between the two men. Viète proposed the problem of drawing a circle to touch 3 given circles to Roomen (the Apollonian Problem) and Roomen solved it using hyperbolas, publishing the result in 1596. Roomen worked on trigonometry and the calculation of chords in a circle. In 1596 Rheticus's trigonometric tables Opus palatinum de triangulis were published, many years after Rheticus died. Roomen was critical of the accuracy of the tables and wrote to Clavius at the Collegio Romano in Rome pointing out that, to calculate tangent and secant tables correctly to ten decimal places, it was necessary to work to 20 decimal places for small values of sine, see [2]. In 1600 Roomen visited Prague where he met Kepler and told him of his worries about the methods employed in Rheticus's trigonometric tables. *SAU 1803   Jacques Charles-François Sturm (29 Sep 1803; 18 Dec 1855) French mathematician whose work resulted in Sturm's theorem, an important contribution to the theory of equations. .While a tutor of the de Broglie family in Paris (1823-24), Sturm met many of the leading French scientists and mathematicians. In 1826, with Swiss engineer Daniel Colladon, he made the first accurate determination of the velocity of sound in water. A year later wrote a prizewinning essay on compressible fluids. 
Since the time of René Descartes, a problem had existed of finding the number of solutions of a given second-order differential equation within a given range of the variable. Sturm provided a complete solution to the problem with his theorem which first appeared in Mémoire sur la résolution des équations numériques (1829; “Treatise on Numerical Equations”). Those principles have been applied in the development of quantum mechanics, as in the solution of the Schrödinger equation and its boundary values. *TIS  Sturm is also remembered for the Sturm-Liouville problem, an eigenvalue problem in second order differential equations.*SAU 1812  Gustav Adolph Göpel (29 Sept 1812, 7 June 1847) Göpel's doctoral dissertation studied periodic continued fractions of the roots of integers and derived a representation of the numbers by quadratic forms. He wrote on Steiner's synthetic geometry and an important work, Theoriae transcendentium Abelianarum primi ordinis adumbratio levis, published after his death, continued the work of Jacobi on elliptic functions. This work was published in Crelle's Journal in 1847. *SAU 1895 Harold Hotelling​, 29 September 1895 - 26 December 1973   He originally studied journalism at the University of Washington, earning a degree in it in 1919, but eventually turned to mathematics, gaining a PhD in Mathematics from Princeton in 1924 for a dissertation dealing with topology. However, he became interested in statistics that used higher-level math, leading him to go to England in 1929 to study with Fisher. Although Hotelling first went to Stanford University in 1931, he not many years afterwards became a Professor of Economics at Columbia University, where he helped create Columbia's Stat Dept. In 1946, Hotelling was recruited by Gertrude Cox​ to form a new Stat Dept at the University of North Carolina at Chapel Hill. He became Professor and Chairman of the Dept of Mathematical Statistics, Professor of Economics, and Associate Director of the Institute of Statistics at UNC-CH. (When Hotelling and his wife first arrived in Chapel Hill they instituted the "Hotelling Tea", where they opened their home to students and faculty for tea time once a month.) Dr. Hotelling's major contributions to statistical theory were in multivariate analysis, with probably his most important paper his famous 1931 paper "The Generalization of Student's Ratio", now known as Hotelling's T^2, which involves a generalization of Student's t-test for multivariate data. In 1953, Hotelling published a 30-plus-page paper on the distribution of the correlation coefficient, following up on the work of Florence Nightingale David in 1938. *David Bee 1901 Enrico Fermi (29 Sep 1901; 28 Nov 1954) Italian-American physicist who was awarded the Nobel Prize for physics in 1938 as one of the chief architects of the nuclear age. He was the last of the double-threat physicists: a genius at creating both esoteric theories and elegant experiments. In 1933, he developed the theory of beta decay, postulating that the newly-discovered neutron decaying to a proton emits an electron and a particle he called a neutrino. Developing theory to explain this decay later resulted in finding the weak interaction force. He developed the mathematical statistics required to clarify a large class of subatomic phenomena, discovered neutron-induced radioactivity, and directed the first controlled chain reaction involving nuclear fission. 
*TIS 1925 Paul Beattie MacCready (29 Sep 1925; 28 Aug 2007) was an American engineer who invented not only the first human-powered flying machines, but also the first solar-powered aircraft to make sustained flights. On 23 Aug 1977, the pedal-powered aircraft, the Gossamer Condor successfully flew a 1.15 mile figure-8 course to demonstrate sustained, maneuverable manpowered flight, for which he won the £50,000 ($95,000) Kremer Prize. MacCready designed the Condor with Dr. Peter Lissamen. Its frame was made of thin aluminum tubes, covered with mylar plastic supported with stainless steel wire. In 1979, the Gossamer Albatross won the second Kremer Prize for making a flight across the English Channel.*TIS 1931   James Watson Cronin (29 Sep 1931, ) American particle physicist, who shared (with Val Logsdon Fitch) the 1980 Nobel Prize for Physics for "the discovery of violations of fundamental symmetry principles in the decay of neutral K-mesons." Their experiment proved that a reaction run in reverse does not follow the path of the original reaction, which implied that time has an effect on subatomic-particle interactions. Thus the experiment demonstrated a break in particle-antiparticle symmetry for certain reactions of subatomic particles.*TIS 1935 Hillel (Harry) Fürstenberg (September 29, 1935, ..)) is an American-Israeli mathematician, a member of the Israel Academy of Sciences and Humanities and U.S. National Academy of Sciences and a laureate of the Wolf Prize in Mathematics. He is known for his application of probability theory and ergodic theory methods to other areas of mathematics, including number theory and Lie groups. He gained attention at an early stage in his career for producing an innovative topological proof of the infinitude of prime numbers. He proved unique ergodicity of horocycle flows on a compact hyperbolic Riemann surfaces in the early 1970s. In 1977, he gave an ergodic theory reformulation, and subsequently proof, of Szemerédi's theorem. The Fürstenberg boundary and Fürstenberg compactification of a locally symmetric space are named after him. *Wik 1939 Samuel Dickstein (May 12, 1851 – September 29, 1939) was a Polish mathematician of Jewish origin. He was one of the founders of the Jewish party "Zjednoczenie" (Unification), which advocated the assimilation of Polish Jews. He was born in Warsaw and was killed there by a German bomb at the beginning of World War II. All the members of his family were killed during the Holocaust. Dickstein wrote many mathematical books and founded the journal Wiadomości Mathematyczne (Mathematical News), now published by the Polish Mathematical Society. He was a bridge between the times of Cauchy and Poincaré and those of the Lwów School of Mathematics. He was also thanked by Alexander Macfarlane for contributing to the Bibliography of Quaternions (1904) published by the Quaternion Society. He was also one of the personalities, who contributed to the foundation of the Warsaw Public Library in 1907.*Wik 1941 Friedrich Engel (26 Dec 1861, 29 Sept 1941)Engel was taught by Klein who recognized that he was the right man to assist Lie. At Klein's suggestion Engel went to work with Lie in Christiania (now Oslo) from 1884 until 1885. In 1885 Engel's Habilitation thesis was accepted by Leipzig and he became a lecturer there. The year after Engel returned to Leipzig from Christiania, Lie was appointed to succeed Klein and the collaboration of Lie and Engel continued. 
In 1889 Engel was promoted to assistant professor and, ten years later he was promoted to associate professor. In 1904 he accepted the chair of mathematics at Greifswald when his friend Eduard Study resigned the chair. Engel's final post was the chair of mathematics at Giessen which he accepted in 1913 and he remained there for the rest of his life. In 1931 he retired from the university but continued to work in Giessen. The collaboration between Engel and Lie led to Theorie der Transformationsgruppen a work on three volumes published between 1888 and 1893. This work was, "... prepared by S Lie with the cooperation of F Engel... "  In many ways it was Engel who put Lie's ideas into a coherent form and made them widely accessible. From 1922 to 1937 Engel published Lie's collected works in six volumes and prepared a seventh (which in fact was not published until 1960). Engel's efforts in producing Lie's collected works are described as, "... an exceptional service to mathematics in particular, and scholarship in general. Lie's peculiar nature made it necessary for his works to be elucidated by one who knew them intimately and thus Engel's 'Annotations' completed in scope with the text itself. " Engel also edited Hermann Grassmann's complete works and really only after this was published did Grassmann get the fame which his work deserved. Engel collaborated with Stäckel in studying the history of non-euclidean geometry. He also wrote on continuous groups and partial differential equations, translated works of Lobachevsky from Russian to German, wrote on discrete groups, Pfaffian equations and other topics. *SAU 1955 L(ouis) L(eon) Thurstone (29 May 1887, 29 Sep 1955)  was an American psychologist who improved psychometrics, the measurement of mental functions, and developed statistical techniques for multiple-factor analysis of performance on psychological tests. In high school, he published a letter in Scientific American on a problem of diversion of water from Niagara Falls; and invented a method of trisecting an angle. At university, Thurstone studied engineering. He designed a patented motion picture projector, later demonstrated in the laboratory of Thomas Edison, with whom Thurstone worked briefly as an assistant. When he began teaching engineering, Thurstone became interested in the learning process and pursued a doctorate in psychology. *TIS 2003 Ovide Arino (24 April 1947 - 29 September 2003) was a mathematician working on delay differential equations. His field of application was population dynamics. He was a quite prolific writer, publishing over 150 articles in his lifetime. He also was very active in terms of student supervision, having supervised about 60 theses in total in about 20 years. Also, he organized or coorganized many scientific events. But, most of all, he was an extremely kind human being, interested in finding the good in everyone he met. *Euromedbiomath 2010 Georges Charpak (1 August 1924 – 29 September 2010) was a French physicist who was awarded the Nobel Prize in Physics in 1992 "for his invention and development of particle detectors, in particular the multiwire proportional chamber". This was the last time a single person was awarded the physics prize. *Wik Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. 
Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
Let's suppose I have a Hilbert space $K = L^2(X)$ equipped with a Hamiltonian $H$ such that the Schrödinger equation with respect to $H$ on $K$ describes some boson I'm interested in, and I want to create and annihilate a bunch of these bosons. So I construct the bosonic Fock space $$S(K) = \bigoplus_{i \ge 0} S^i(K)$$ where $S^i$ denotes the $i^{th}$ symmetric power. (Is this "second quantization"?) Feel free to assume that $H$ has discrete spectrum. What is the new Hamiltonian on $S(K)$ (assuming that the bosons don't interact)? How do observables on $K$ translate to $S(K)$? I'm not entirely sure this is a meaningful question to ask, so feel free to tell me that it's not and that I have to postulate some mechanism by which creation and/or annihilation actually happens. In that case, I would love to be enlightened about how to do this. Now, various sources (Wikipedia, the Feynman lectures) inform me that $S(K)$ is somehow closely related to the Hilbert space of states of a quantum harmonic oscillator. That is, the creation and annihilation operators one defines in that context are somehow the same as the creation and annihilation operators one can define on $S(K)$, and maybe the Hamiltonians even look the same somehow. Why is this? What's going on here? Assume that I know a teensy bit of ordinary quantum mechanics but no quantum field theory.
Hello Qiaochu, welcome to physics.SE! Nice question and I hope we can expect many more :-) – Marek Jan 8 '11 at 3:31 What is $S^i(K)$ @Qiaochu ? – user346 Jan 8 '11 at 5:10 @space_cadet: the i^{th} symmetric power, i.e. the Hilbert space of states of i identical bosons. – Qiaochu Yuan Jan 8 '11 at 13:14 Ah ok. In the physics literature $H$, almost always, denotes the Hamiltonian and $S$ the action. – user346 Jan 8 '11 at 13:25 $H$ on $Sym^2(K)$ is really $H\otimes 1 + 1 \otimes H$ and likewise for $a$ and $a^\dagger$. So for example the energy is the sum of the (uncoupled) energies. You might have expected $H\otimes H$, for example, but $H$ generates an infinitesimal translation in time. Exponentiating gives the expected result on the propagator $U = \exp(tH)$ as $U\otimes U$. – Eric Zaslow Jan 8 '11 at 20:20
3 Answers
Let's discuss the harmonic oscillator first. It is actually a very special system (the one and only of its kind in the whole of QM), itself being already second quantized in a sense (this point will be elucidated later). First, a general talk about the HO (skip this paragraph if you already know it inside-out). It's possible to express its Hamiltonian as $H = \hbar \omega(N + 1/2)$ where $N = a^{\dagger} a$ and $a$ is a linear combination of the momentum and position operators. By using the commutation relation $[a, a^{\dagger}] = 1$ one obtains a basis $\{ \left| n \right> \mid n \in {\mathbb N} \}$ with $N \left| n \right> = n \left| n \right>$. So we obtain a convenient interpretation: $n$ counts the number of particles in the system, each carrying energy $\hbar \omega$, and the vacuum $\left| 0 \right>$ has energy ${\hbar \omega \over 2}$. Now, the above construction was actually the same as yours for $X = \{0\}$. Fock's construction (also known as second quantization) can be understood as introducing particles, $S^i$ corresponding to $i$ particles (so the HO is a second quantization of a particle with one degree of freedom).
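A quick numerical sanity check of these relations (a minimal sketch in Python, using an arbitrary truncation of the number basis, so the commutation relation can only hold away from the truncation edge):

    # Truncated ladder operators for a single harmonic oscillator (illustrative only).
    import numpy as np

    n_max = 8                                          # size of the truncated number basis
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
    adag = a.conj().T                                  # creation operator
    N = adag @ a                                       # number operator, diagonal with entries 0..n_max-1

    # [a, a^dagger] = 1 holds away from the truncation edge
    comm = a @ adag - adag @ a
    print(np.allclose(comm[:-1, :-1], np.eye(n_max - 1)))   # True

    # H = hbar*omega*(N + 1/2) has the advertised spectrum hbar*omega*(n + 1/2)
    hbar_omega = 1.0
    H = hbar_omega * (N + 0.5 * np.eye(n_max))
    print(np.diag(H))                                  # [0.5, 1.5, 2.5, ...]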
In any case, we obtain position-dependent operators $a(x), a^{\dagger}(x), N(x)$ and $H(x)$ which are for every $x \in X$ isomorphic to the HO operators discussed previously, and we also obtain a basis $\left| n(x) \right>$ (though I am actually not sure this is a basis in the strict sense of the word; these affairs are not discussed much in field theory by physicists). The total Hamiltonian $H$ will then be an integral $H = \int H(x) dx$. The generic state in this system looks like a bunch of particles scattered all over, and this is in fact the particle description of a free bosonic field.
I realize I left your original Hamiltonian $H$ out of the discussion. I'll add that to the answer later. For now note that $x$ is in no way special in the above, we could have used other "bases" of $K$ like momentum and in particular the energy basis of $H$. In that case the relevant states for $S(K)$ become $\left| n_0 n_1 \cdots \right>$ with $n_i$ telling us how many particles are in the state with energy $E_i$. – Marek Jan 8 '11 at 4:07 @Marek: thanks! I would definitely appreciate some pointers about exactly what to do with the original Hamiltonian. Some follow-up questions: are the creation and annihilation operators observables? Is number going to turn out to be a conserved quantity in the general case? – Qiaochu Yuan Jan 8 '11 at 14:05 @Marek: and one more question. Given an observable A on K, what's the corresponding observable on S(K)? I can think of a few different possibilities and I'm not sure which one physicists actually use. – Qiaochu Yuan Jan 8 '11 at 14:27 @Qiaochu: true, but I thought you were asking how to promote observables from $K$ to $S(K)$. $N(\lambda)$ are completely new operators that need the structure of $S(K)$ to be defined. As for interactions: well, that is a topic for a one-semester course in quantum field theory so I recommend you ask this as a separate question. But in short: in general any $H_I$ is possible. But physical ones need to conserve energy, momentum and in fact complete Poincaré symmetry. So one uses representations of the Poincaré group to restrict possible choices of $H_I$. – Marek Jan 8 '11 at 15:24 @Qiaochu: (cont.) in the end it turns out that it's really ineffective to work in this way and one is forced to pass to the language of fields. One can quantize classical fields (again enforcing Poincaré and perhaps other, gauge symmetries) by usual means (canonical quantization, path-integral, etc.) and in the end one can decompose the Hilbert space into particles (in the Fock sense) and $H_I$ falls out. In any case, there is still a lot of room for possible interactions and to get a taste, see e.g. the QED Lagrangian. – Marek Jan 8 '11 at 15:30
Reference: Fetter and Walecka, Quantum Theory of Many Particle Systems, Ch. 1. The Hamiltonian for a collection of uncoupled simple harmonic oscillators (one per mode) is: $$ H = \sum_{i = 0}^{\infty}\hbar \omega_i ( a_i^{+} a_i + \frac{1}{2} ) $$ where $\{a^+_i, a_i\}$ are the creation and annihilation operators for the $i^\textrm{th}$ eigenstate (momentum mode). The Fock space $\mathbf{F}$ consists of states of the form: $$ \vert n_0, n_1, \ldots, n_N \rangle $$ which are obtained by repeatedly acting on the vacuum $\vert 0 \rangle$ with the ladder operators: $$ \Psi = \vert n_0, n_1, \ldots, n_N \rangle = (a_0^+)^{n_0} (a_1^+)^{n_1} \ldots (a_N^+)^{n_N} \vert 0 \rangle $$ The interpretation of $\Psi$ is as the state which contains $n_k$ quanta of the $k^\textrm{th}$ eigenstate, created by application of $(a^+_k)^{n_k}$ on the vacuum.
The above state is not normalized until multiplied by a factor of the form $\prod_{k=0}^N \frac{1}{\sqrt{n_k!}}$. If your excitations are bosonic you are done, because the commutator of the ladder operators, $[a_i, a^+_j] = \delta_{ij}$, vanishes for $i\ne j$, so the order in which the creation operators act does not matter. However if the statistics of your particles are non-bosonic (fermionic or anyonic) then the order in which you act on the vacuum with the ladder operators matters. Of course, to construct a Fock space $\mathbf{F}$ you do not need to specify a Hamiltonian. Only the ladder operators with their commutation/anti-commutation relations are needed. In usual flat-space problems the ladder operators correspond to our usual Fourier modes $ a^+_k \leftrightarrow e^{i k x} $. For curved spacetimes this procedure can be generalized by defining our ladder operators to correspond to suitable positive (negative) frequency solutions of a Laplacian on that space. For details, see Wald, QFT in Curved Spacetimes. Now, given any Hamiltonian of the form: $$ H = \sum_{k=1}^{N} T(x_k) + \frac{1}{2} \sum_{k \ne l = 1}^N V(x_k,x_l) $$ with a kinetic term $T$ for a particle at $x_k$ and a pairwise potential term $V(x_k,x_l)$, one can write down the quantum Hamiltonian in terms of matrix elements of these operators: $$ H = \sum_{ij} a^+_i \langle i \vert T \vert j \rangle a_j + \frac{1}{2}\sum_{ijkl} a^+_i a^+_j \langle ij \vert V \vert kl \rangle a_l a_k $$ where $|i\rangle$ is the state with a single excited quantum, corresponding to the action of $a^+_i$ on the vacuum. (For details and steps, see Fetter & Walecka, Ch. 1.) I hope this helps resolve some of your doubts. Being as you are from math, there are bound to be semantic differences between my language and yours, so if you have any questions at all please don't hesitate to ask.
Can you explain the notation in that last formula? What are the b_i? – Qiaochu Yuan Jan 8 '11 at 18:00 @qiaochu that was a typo. It's fixed now. – user346 Jan 8 '11 at 20:29 As recently as 10 years ago Walecka was still teaching at William & Mary. It's worth taking his course. Any course. Or even going to see a talk. Really. – dmckee Jan 8 '11 at 23:00
Suppose, as you do, that $K$ is the space of states of a single boson. Then the space of states of a combined system of two bosons is not $K\otimes K$, as it would be if the two bosons were distinguishable; it is the symmetric subspace which you are denoting $S^2$. Your sum over all $i$, which you denote $S$, is then a Hilbert space (state space) of a new system whose states contain the states of a one-boson system, a two-boson system, a three-boson system, etc., but not an infinite number of bosons (that is not included in the space $S$). And your space $S$ includes superpositions: for example, if $v_1 \in S^1$ is a normalized state of one boson and $v_3 \in S^3$ is a normalized state of a three-boson system, then $0.707 v_1 - 0.707 v_3$ is a state which has a fifty percent probability of being one boson, if the number of particles is measured, and a fifty percent probability of being found to be three bosons. That is the physical meaning of Fock space. It is the state space on which the operators of a quantum field act. As already remarked by Eric Zaslow, if $H$ is the Hamiltonian on the one-particle space $K$, then by definition $H\otimes I + I \otimes H$ is the Hamiltonian on $S^2$, and so on for each $S^i$. Then one sums them all up to get a Hamiltonian on the direct sum $S$.
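As a concrete toy illustration of that prescription (a minimal sketch, using an arbitrary diagonal $H$ on a 3-dimensional $K$): form $H\otimes 1 + 1\otimes H$ on $K\otimes K$, restrict it to the symmetric subspace $S^2(K)$, and the eigenvalues come out as the pairwise sums $E_i + E_j$.

    import numpy as np
    from itertools import combinations_with_replacement

    E = np.array([0.0, 1.3, 2.7])            # toy single-particle energies (illustrative)
    H1 = np.diag(E)                           # Hamiltonian on K, dim K = 3
    I = np.eye(3)

    H2 = np.kron(H1, I) + np.kron(I, H1)      # H (x) 1 + 1 (x) H on K (x) K

    def sym(i, j):
        # normalized symmetrization of e_i (x) e_j
        v = np.kron(I[i], I[j]) + np.kron(I[j], I[i])
        return v / np.linalg.norm(v)

    pairs = list(combinations_with_replacement(range(3), 2))
    P = np.array([sym(i, j) for i, j in pairs])     # 6 x 9, rows span S^2(K)

    H_sym = P @ H2 @ P.T                            # restriction of H2 to the symmetric subspace
    print(np.round(np.linalg.eigvalsh(H_sym), 3))   # eigenvalues of the two-boson Hamiltonian
    print(sorted(round(E[i] + E[j], 3) for i, j in pairs))   # pairwise sums E_i + E_j (they agree)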
Unless this Hamiltonian is perturbed, the number of particles is constant, obviously, since it preserves each subspace $S^i$ of $S$. So there will be no creation or annihilation of pairs of particles. If this field comes into interaction with an extraneous particle, the Hamiltonian will be perturbed, of course. It is connected with second quantisation as follows: if you have a classical h.o. and quantise it, you get $K$. If you now second quantise $K$, you get $S$, which can be regarded as a quantum field. Sir James Jeans showed, before the quantum revolution, that the classical electromagnetic field could be obtained from the classical mechanics of harmonic oscillators, as a limit of more and more classical h.o.'s not interacting with each other, and this procedure of second quantisation is a quantum analogue. It is not the same procedure as if you start with a classical field and then quantise it. But it is remarkable that you can get the same answer either way, as Jeans noticed in the classical case. That is, you started with a quantum one-particle system and passed to Fock space and got the quantum field theory corresponding to that system. But we could have started with a classical field and quantised it, and gotten the quantum field that way.
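For reference, the same one-slot-at-a-time prescription extends to any observable $A$ on $K$ (this is the standard answer to the question about promoting observables from $K$ to $S(K)$, usually written $\mathrm{d}\Gamma(A)$ and called the second quantization of $A$):
$$ \mathrm{d}\Gamma(A)\big|_{S^n(K)} \;=\; \sum_{k=1}^{n} 1^{\otimes (k-1)} \otimes A \otimes 1^{\otimes (n-k)}, \qquad \mathrm{d}\Gamma(A) \;=\; \sum_{i,j} \langle e_i \vert A \vert e_j \rangle \, a_i^{\dagger} a_j , $$
for an orthonormal basis $\{e_i\}$ of $K$; the free Hamiltonian above is then $\mathrm{d}\Gamma(H)$ and the total number operator is $N = \mathrm{d}\Gamma(1)$.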
Atmospheric entry
Mars Exploration Rover (MER) aeroshell, artistic rendition
Atmospheric entry is the movement of an object from outer space into and through the gases of an atmosphere of a planet, dwarf planet or natural satellite. There are two main types of atmospheric entry: uncontrolled entry, such as the entry of astronomical objects, space debris or bolides; and controlled entry (or reentry) of a spacecraft capable of being navigated or following a predetermined course. Technologies and procedures allowing the controlled atmospheric entry, descent and landing of spacecraft are collectively abbreviated as EDL.
Animated illustration of different phases as a meteoroid enters the Earth's atmosphere to become visible as a meteor and land as a meteorite
Atmospheric drag and aerodynamic heating can cause atmospheric breakup capable of completely disintegrating smaller objects. These forces may cause objects with lower compressive strength to explode. For Earth, atmospheric entry occurs above the Kármán line at an altitude of more than 100 km (62 mi.) above the surface, while at Venus atmospheric entry occurs at 250 km (155 mi.) and at Mars atmospheric entry at about 80 km (50 mi.). Uncontrolled, objects accelerate through the atmosphere at extreme velocities under the influence of Earth's gravity. Most controlled objects enter at hypersonic speeds due to their suborbital (e.g., intercontinental ballistic missile reentry vehicles), orbital (e.g., the Space Shuttle), or unbounded (e.g., meteors) trajectories. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities. An alternative low velocity method of controlled atmospheric entry is buoyancy,[1] which is suitable for planetary entry where thick atmospheres, strong gravity or both factors complicate high-velocity hyperbolic entry, such as the atmospheres of Venus, Titan and the gas giants.[2]
Apollo Command Module flying at a high angle of attack for lifting entry, artistic rendition.
The concept of the ablative heat shield was described as early as 1920 by Robert Goddard: "In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second (48 km/s), the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor."[3]
Practical development of reentry systems began as the range and reentry velocity of ballistic missiles increased. For early short-range missiles, like the V-2, stabilization and aerodynamic stress were important issues (many V-2s broke apart during reentry), but heating was not a serious problem. Medium-range missiles like the Soviet R-5, with a 1200 km range, required ceramic composite heat shielding on separable reentry vehicles (it was no longer possible for the entire rocket structure to survive reentry). The first ICBMs, with ranges of 8000 to 12,000 km, were only possible with the development of modern ablative heat shields and blunt-shaped vehicles.
In the USA, this technology was pioneered by H. Julian Allen at Ames Research Center.[4] Terminology, definitions and jargon[edit] Over the decades since the 1950s, a rich technical jargon has grown around the engineering of vehicles designed to enter planetary atmospheres. It is recommended that the reader review the jargon glossary before continuing with this article on atmospheric reentry. When atmospheric entry is part of a spacecraft landing or recovery, particularly on a planetary body other than Earth, entry is part of a phase referred to as "entry, descent and landing", or EDL.[5] When the atmospheric entry returns to the same body that the vehicle had launched from, the event is referred to as reentry (almost always referring to Earth entry). Blunt body entry vehicles[edit] Various reentry shapes (NASA) using shadowgraphs to show high-velocity flow These four shadowgraph images represent early reentry-vehicle concepts. A shadowgraph is a process that makes visible the disturbances that occur in a fluid flow at high velocity, in which light passing through a flowing fluid is refracted by the density gradients in the fluid resulting in bright and dark areas on a screen placed behind the fluid. In the United States, H. Julian Allen and A. J. Eggers, Jr. of the National Advisory Committee for Aeronautics (NACA) made the counterintuitive discovery in 1951[6] that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient, i.e. the greater the drag, the less the heat load. If the reentry vehicle is made blunt, air cannot "get out of the way" quickly enough, and acts as an air cushion to push the shock wave and heated shock layer forward (away from the vehicle). Since most of the hot gases are no longer in direct contact with the vehicle, the heat energy would stay in the shocked gas and simply move around the vehicle to later dissipate into the atmosphere. Prototype of the Mk-2 Reentry Vehicle (RV), based on blunt body theory The Allen and Eggers discovery, though initially treated as a military secret, was eventually published in 1958.[7] Entry vehicle shapes[edit] Main article: Nose cone design There are several basic shapes used in designing entry vehicles: Sphere or spherical section[edit] The simplest axisymmetric shape is the sphere or spherical section.[8] This can either be a complete sphere or a spherical section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay-Riddell equation.[9] The static stability of a spherical section is assured if the vehicle's center of mass is upstream from the center of curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle of attack, a spherical section has modest aerodynamic lift thus providing some cross-range capability and widening its entry corridor. In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, manned capsules of that era were based upon the spherical section. 
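The Newtonian impact estimate mentioned above is simple enough to reproduce in a few lines. The sketch below (illustrative only) integrates the Newtonian pressure coefficient, Cp = 2·sin²(θ) with θ the local surface inclination to the free stream, over the windward hemisphere and recovers the classical result that a sphere has a drag coefficient of about 1 referenced to its cross-sectional area.

    # Newtonian impact theory applied to a sphere (illustrative sketch).
    # The leeward half of the sphere is "shadowed" and contributes nothing.
    import numpy as np

    phi = np.linspace(0.0, np.pi / 2, 20001)          # angle measured from the stagnation point
    cp = 2.0 * np.cos(phi) ** 2                       # Cp = 2 sin^2(theta), with theta = 90 deg - phi
    integrand = cp * np.cos(phi) * 2.0 * np.sin(phi)  # axial (drag) component, normalized by q*pi*R^2
    cd = np.sum(integrand) * (phi[1] - phi[0])
    print(round(cd, 3))                               # ~1.0, the classical Newtonian sphere value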
Pure spherical entry vehicles were used in the early Soviet Vostok and Voskhod and in Soviet Mars and Venera descent vehicles. The Apollo Command/Service Module used a spherical section forebody heat shield with a converging conical afterbody. It flew a lifting entry with a hypersonic trim angle of attack of −27° (0° is blunt-end first) to yield an average L/D (lift-to-drag ratio) of 0.368.[10] This angle of attack was achieved by precisely offsetting the vehicle's center of mass from its axis of symmetry. Other examples of the spherical section geometry in manned capsules are Soyuz/Zond, Gemini and Mercury. Even these small amounts of lift allow trajectories that have very significant effects on peak g-force (reducing g-force from 8-9g for a purely ballistic (slowed only by drag) trajectory to 4-5g) as well as greatly reducing the peak reentry heat.[11] Galileo Probe during final assembly The sphere-cone is a spherical section with a frustum or blunted cone attached. The sphere-cone's dynamic stability is typically better than that of a spherical section. With a sufficiently small half-angle and properly placed center of mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. (The "half-angle" is the angle between the cone's axis of rotational symmetry and its outer surface, and thus half the angle made by the cone's surface edges.) The original American sphere-cone aeroshell was the Mk-2 RV (reentry vehicle), which was developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt-body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are later described in this article). The Mk-2 had significant defects as a weapon delivery system, i.e., it loitered too long in the upper atmosphere due to its lower ballistic coefficient and also trailed a stream of vaporized metal making it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently, an alternative sphere-cone RV to the Mk-2 was developed by General Electric.[citation needed] Mk-6 RV, Cold War weapon and ancestor to most of the U.S. missile entry vehicles This new RV was the Mk-6 which used a non-metallic ablative TPS (nylon phenolic). This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible.[citation needed] However, the Mk-6 was a huge RV with an entry mass of 3360 kg, a length of 3.1 meters and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller with a further reduced bluntness ratio compared to the Mk-6. Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs with typical half-angles being between 10° to 11°.[citation needed] "Discoverer" type reconnaissance satellite film Recovery Vehicle (RV) Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space; e.g., Stardust probe. Unlike with military RVs, the advantage of the blunt body's lower TPS mass remained with space exploration entry vehicles like the Galileo Probe with a half angle of 45° or the Viking aeroshell with a half angle of 70°. 
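A back-of-the-envelope check of those g-levels can be made with the classical Allen-Eggers result for steep ballistic entry into an exponential atmosphere, a_max = V_E²·sin(γ_E)/(2·e·H), which is independent of the ballistic coefficient. The entry speed, flight-path angles and scale height assumed below are illustrative, not mission data.

    # Peak deceleration for a purely ballistic (drag-only) entry, Allen-Eggers estimate.
    import numpy as np

    V_E = 7800.0      # entry speed from low Earth orbit, m/s
    H_s = 7200.0      # atmospheric density scale height, m (assumed)
    g0 = 9.81         # m/s^2

    for gamma_deg in (1.0, 2.0, 3.0, 6.0):
        a_max = V_E**2 * np.sin(np.radians(gamma_deg)) / (2.0 * np.e * H_s)
        print(f"entry angle {gamma_deg:3.0f} deg -> peak deceleration ~ {a_max / g0:4.1f} g")
    # Shallow entry angles of a few degrees give the ballistic 5-9 g range quoted above;
    # even a modest lift-to-drag ratio stretches the pulse out and lowers the peak.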
Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter and Titan. The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0 compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell-Douglas Corp. and represented a significant leap in RV sophistication. Three of the AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981. AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, aft frustum half angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published.[12] The DC-X, shown during its first flight, was a prototype single stage to orbit vehicle, and used a biconic shape similar to AMaRV. Opportunity rover's heat shield lying inverted on the surface of Mars. AMaRV's attitude was controlled through a split body flap (also called a "split-windward flap") along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled-up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33. Non-axisymmetric shapes[edit] Non-axisymmetric shapes have been used for manned entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent much like a conventional glider. This approach has been used by the American Space Shuttle and the Soviet Buran. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle.[citation needed] The FIRST (Fabrication of Inflatable Re-entry Structures for Test) system was an Aerojet proposal for an inflated-spar Rogallo wing made up from Inconel wire cloth impregnated with silicone rubber and silicon carbide dust. FIRST was proposed in both one-man and six man versions, used for emergency escape and reentry of stranded space station crews, and was based on an earlier unmanned test program that resulted in a partially successful reentry flight from space (the launcher nose cone fairing hung up on the material, dragging it too low and fast for the thermal protection system (TPS), but otherwise it appears the concept would have worked; even with the fairing dragging it, the test article flew stably on reentry until burn-through).[citation needed] The proposed MOOSE system would have used a one-man inflatable ballistic capsule as an emergency astronaut entry vehicle. This concept was carried further by the Douglas Paracone project. 
While these concepts were unusual, the inflated shape on reentry was in fact axisymmetric.[citation needed] Shock layer gas physics[edit] An approximate rule-of-thumb used by heat shield designers for estimating peak shock layer temperature is to assume the air temperature in kelvins to be equal to the entry speed in meters per second[citation needed]— a mathematical coincidence. For example, a spacecraft entering the atmosphere at 7.8 km/s would experience a peak shock layer temperature of 7,800 K. This is unexpected, since the kinetic energy increases with the square of the velocity and can only occur because the specific heat of the gas increases greatly with temperature (unlike the nearly constant specific heat assumed for solids under ordinary conditions). At typical reentry temperatures, the air in the shock layer is both ionized and dissociated. This chemical dissociation necessitates various physical models to describe the shock layer's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields: Perfect gas model[edit] Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135.[13] Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft. The perfect gas theory is elegant and extremely useful for designing aircraft but assumes that the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than 550 K at one atmosphere pressure. The perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than 2,000 K. For temperatures greater than 2,000 K, a heat shield designer must use a real gas model. Real (equilibrium) gas model[edit] An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo-CM and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modelling. The Apollo-CM's trim-angle angle of attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic centre of the Columbia was upstream from the calculated value due to real-gas effects. On Columbia’s maiden flight (STS-1), astronauts John W. Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle.[14] An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes all chemical reactions have had time to complete and all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions. Direct friction upon the reentry object is not the main cause of shock-layer heating. It is caused mainly from isentropic heating of the air molecules within the compression wave. Friction based entropy increases of the molecules within the wave also account for some heating.[original research?] The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called shock wave stand off. 
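The following short calculation (with assumed free-stream conditions) illustrates why the rule of thumb only works because the specific heat rises so steeply: a constant, room-temperature specific heat would predict a stagnation temperature roughly four times the quoted 7,800 K.

    # Constant-cp (perfect gas) stagnation temperature versus the rule of thumb.
    cp_cold = 1005.0     # J/(kg K), specific heat of air at room temperature
    T_inf = 230.0        # K, assumed free-stream temperature high in the atmosphere
    V = 7800.0           # m/s, entry speed

    T_stag_constant_cp = T_inf + V**2 / (2.0 * cp_cold)
    print(round(T_stag_constant_cp))   # ~30,500 K, versus ~7,800 K when dissociation and
                                       # ionization are allowed to absorb the kinetic energy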
An approximate rule of thumb for shock wave standoff distance is 0.14 times the nose radius. One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free stream velocity of 7.8 km/s and a nose radius of 1 meter, i.e., time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium thus enabling an equilibrium model to be usable. For this case, most of the shock layer between the shock wave and leading edge of an entry vehicle is chemically reacting and not in a state of equilibrium. The Fay-Riddell equation,[9] which is of extreme importance towards modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. The time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. For example, in the case of the Galileo Probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given the free stream velocity was 39 km/s during peak heat flux). Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called "isentropic exponent", adiabatic index, "gamma" or "kappa") is assumed to be constant along with the gas constant. For a real gas, the ratio of specific heats can wildly oscillate as a function of temperature. Under a perfect gas model there is an elegant set of equations for determining thermodynamic state along a constant entropy stream line called the isentropic chain. For a real gas, the isentropic chain is unusable and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry based thermodynamics program. The chemical composition of a gas in equilibrium with fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton-Raphson method is the usual numerical scheme). The data base for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA) which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modelled). 
CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler. Real (non-equilibrium) gas model[edit] A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill-Freeman model.[15][16] The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical formula and its reverse; e.g., N2 → N + N and N + N → N2 (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool, but is unfortunately too simple for modelling non-equilibrium air. Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five species model, which is based upon N2, O2, NO, N, and O. The five species model assumes no ionization and ignores trace species like carbon dioxide. When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are loosely coupled, the system is numerically "stiff" and difficult to solve. The five species model is only usable for entry from low Earth orbit where entry velocity is approximately 7.8 km/s. For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five species model is no longer accurate and a twelve species model must be used instead. High speed Mars entry which involves a carbon dioxide, nitrogen and argon atmosphere is even more complex requiring a 19 species model. An important aspect of modelling non-equilibrium real gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from asymmetric diatomic molecules; e.g., cyanogen (CN), carbon monoxide, nitric oxide (NO), single ionized molecular nitrogen etc. These molecules are formed by the shock wave dissociating ambient atmospheric gas followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy; i.e., radiative heat flux. The whole process takes place in less than a millisecond which makes modelling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes) along with theoretical calculation through the unsteady Schrödinger equation are among the more esoteric aspects of aerospace engineering. 
Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s, but largely discontinued after conclusion of the Apollo Program. Radiative heat flux in air was just sufficiently understood to ensure Apollo's success. However, radiative heat flux in carbon dioxide (Mars entry) is still barely understood and will require major research. Frozen gas model[edit] The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" can be misleading. A frozen gas is not "frozen" like ice is frozen water. Rather a frozen gas is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules. If gas pressure is slowly reduced such that chemical reactions can continue then the gas can remain in equilibrium. However, it is possible for gas pressure to be so suddenly reduced that almost all chemical reactions stop. For that situation the gas is considered frozen. The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed-of-sound, viscosity etc.) for the same thermodynamic state; e.g., pressure and temperature. Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modelling the flow in the wake of an entry vehicle is very difficult. Thermal protection shield (TPS) heating in the vehicle's afterbody is usually not very high, but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability. Thermal protection systems[edit] A thermal protection system or TPS is the barrier that protects a spacecraft during the searing heat of atmospheric reentry. A secondary goal may be to protect the spacecraft from the heat and cold of space while in orbit. Multiple approaches for the thermal protection of spacecraft are in use, among them ablative heat shields, passive cooling and active cooling of spacecraft surfaces. Ablative heat shield (after use) on Apollo 12 capsule The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer). The boundary layer comes from blowing of gaseous reaction products from the heat shield material and provides protection against all forms of heat flux. The overall process of reducing the heat flux experienced by the heat shield's outer wall by way of a boundary layer is called blockage. Ablation occurs at two levels in an ablative TPS: the outer surface of the TPS material chars, melts, and sublimes, while the bulk of the TPS material undergoes pyrolysis and expels product gases. The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated.[17] Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer thus making it optically opaque. 
Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic). Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for re-entry vehicle nose tips. Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities. Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel.[18] Testing of ablative materials occurs at the Ames Arc Jet Complex. Many spacecraft thermal protection systems have been tested in this facility, including the Apollo, space shuttle, and Orion heat shield materials.[19] Mars Pathfinder during final assembly showing the aeroshell, cruise ring and solid rocket motor The thermal conductivity of a particular TPS material is usually proportional to the material's density.[20] Carbon phenolic is a very effective ablative material, but also has high density which is undesirable. If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis then the TPS material's conductivity could allow heat flux conduction into the TPS bondline material thus leading to TPS failure. Consequently, for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate and lower density TPS materials such as the following examples can be better design choices: SLA in SLA-561V stands for super light-weight ablator. SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70° sphere-cone entry vehicles sent by NASA to Mars other than the Mars Science Laboratory (MSL). SLA-561V begins significant ablation at a heat flux of approximately 110 W/cm², but will fail for heat fluxes greater than 300 W/cm². The MSL aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm². The peak heat flux experienced by the Viking-1 aeroshell which landed on Mars was 21 W/cm². For Viking-1, the TPS acted as a charred thermal insulator and never experienced significant ablation. Viking-1 was the first Mars lander and based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest used on Mars until Mars Science Laboratory). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure thus enabling construction of a large heat shield.[21] NASA's Stardust sample return capsule successfully landed at the USAF Utah Range. Phenolic impregnated carbon ablator[edit] Phenolic impregnated carbon ablator (PICA), a carbon fiber preform impregnated in phenolic resin,[22] is a modern TPS material and has the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative ability at high heat flux. It is a good choice for ablative applications such as high-peak-heating conditions found on sample-return missions or lunar-return missions. 
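To put such heat-flux figures in context, stagnation-point heating is often estimated with a Sutton-Graves-type correlation, q ≈ k·sqrt(ρ/r_n)·V³. The constant and the flight conditions used below are illustrative assumptions, not mission data.

    # Rough stagnation-point heat flux estimates (Sutton-Graves-type correlation).
    import math

    k_earth = 1.74e-4     # approximate SI constant for Earth air; result in W/m^2

    def q_dot_w_per_cm2(rho, r_n, V):
        """rho: free-stream density [kg/m^3], r_n: nose radius [m], V: speed [m/s]."""
        return k_earth * math.sqrt(rho / r_n) * V**3 / 1.0e4

    # Blunt capsule returning from low Earth orbit (assumed conditions near peak heating)
    print(round(q_dot_w_per_cm2(2e-4, 1.0, 7800.0)))     # ~120 W/cm^2
    # Small, fast sample-return capsule (Stardust-class speed, assumed 0.23 m nose radius)
    print(round(q_dot_w_per_cm2(3e-4, 0.23, 12400.0)))   # ~1200 W/cm^2, i.e. ~1.2 kW/cm^2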
PICA's thermal conductivity is lower than other high-heat-flux ablative materials, such as conventional carbon phenolics.[citation needed] PICA was patented by NASA Ames Research Center in the 1990s and was the primary TPS material for the Stardust aeroshell.[23] The Stardust sample-return capsule was the fastest man-made object ever to reenter Earth's atmosphere (12.4 km/s (28,000 mph) at 135 km altitude). This was faster than the Apollo mission capsules and 70% faster than the Shuttle.[24] PICA was critical for the viability of the Stardust mission, which returned to Earth in 2006. Stardust's heat shield (0.81 m base diameter) was made of one monolithic piece sized to withstand a nominal peak heating rate of 1.2 kW/cm2. A PICA heat shield was also used for the Mars Science Laboratory entry into the Martian atmosphere.[25] An improved and easier to produce version called PICA-X was developed by SpaceX in 2006-2010[25] for the Dragon space capsule.[26] The first re-entry test of a PICA-X heat shield was on the Dragon C1 mission on 8 December 2010.[27] The PICA-X heat shield was designed, developed and fully qualified by a small team of only a dozen engineers and technicians in less than four years.[25] PICA-X is ten times less expensive to manufacture than the NASA PICA heat shield material.[28] The Dragon 1 spacecraft initially used PICA-X version 1 and was later equipped with version 2. The Dragon V2 spacecraft uses PICA-X version 3. SpaceX has indicated that each new version of PICA-X primarily improves upon heat shielding capacity rather than the manufacturing cost.[citation needed] Deep Space 2 impactor aeroshell, a classic 45° sphere-cone with spherical section afterbody enabling aerodynamic stability from atmospheric entry to surface impact Silicone-impregnated reusable ceramic ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the afterbody or aft cover) and the cruise ring (also called the cruise stage). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars impactor probes with their 0.35 m base diameter aeroshells. SIRCA is a monolithic, insulating material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. There is no post-processing, heat treating, or additional coatings required (unlike Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading edge sections, full nose caps, or in any number of custom shapes or sizes. As of 1996, SIRCA had been demonstrated in backshell interface applications, but not yet as a forebody TPS material.[29] AVCOAT is a NASA-specified ablative heat shield, a glass-filled epoxy-novolac system.[30] NASA originally used it for the Apollo capsule and then utilized the material for its next-generation beyond low Earth-orbit Orion spacecraft.[31] The Avcoat to be used on Orion has been reformulated to meet environmental legislation that has been passed since the end of Apollo.[32][33] Thermal soak[edit] Astronaut Andrew S. W. Thomas takes a close look at TPS tiles underneath Space Shuttle Atlantis. Rigid black LI-900 tiles were used on the Space Shuttle. Thermal soak is a part of almost all TPS schemes. 
For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer convects into the heat shield's outer wall and would eventually conduct to the payload.[citation needed] This outcome is prevented by ejecting the heat shield (with its heat soak) prior to the heat conducting to the inner wall. Typical Space Shuttle TPS tiles (LI-900) have remarkable thermal protection properties. An LI-900 tile exposed to a temperature of 1000 K on one side will remain merely warm to the touch on the other side. However, they are relatively brittle and break easily, and cannot survive in-flight rain. In some early ballistic missile RVs (e.g., the Mk-2 and the suborbital Mercury spacecraft), radiatively-cooled TPS were used to initially absorb heat flux during the heat pulse, and, then, after the heat pulse, radiate and convect the stored heat back into the atmosphere. However, the earlier version of this technique required a considerable quantity of metal TPS (e.g., titanium, beryllium, copper, etc.). Modern designers prefer to avoid this added mass by using ablative and thermal-soak TPS instead. The Mercury capsule design (shown here with its escape tower) originally used a radiatively-cooled TPS, but was later converted to an ablative TPS. Radiatively-cooled TPS can still be found on modern entry vehicles, but reinforced carbon-carbon (RCC) (also called carbon-carbon) is normally used instead of metal. RCC was the TPS material on the Space Shuttle's nose cone and wing leading edges, and was also proposed as the leading-edge material for the X-33. Carbon is the most refractory material known, with a one-atmosphere sublimation temperature of 3825 °C for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently very expensive to manufacture, and lacks impact resistance.[34] Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, deal with heating similar to that experienced by spacecraft, but at much lower intensity, and for hours at a time. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of the Concorde, the aluminium nose was permitted to reach a maximum operating temperature of 127 °C (approximately 180 °C warmer than the, normally sub-zero, ambient air); the metallurgical implications (loss of temper) that would be associated with a higher peak temperature were the most significant factors determining the top speed of the aircraft. A radiatively-cooled TPS for an entry vehicle is often called a hot-metal TPS. Early TPS designs for the Space Shuttle called for a hot-metal TPS based upon a nickel superalloy (dubbed René 41) and titanium shingles.[35] This Shuttle TPS concept was rejected, because it was believed a silica-tile-based TPS would involve lower development and manufacturing costs.[citation needed] A nickel superalloy-shingle TPS was again proposed for the unsuccessful X-33 single-stage-to-orbit (SSTO) prototype.[36] Recently, newer radiatively-cooled TPS materials have been developed that could be superior to RCC. 
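The working principle of a radiatively cooled TPS can be summarized with a radiative-equilibrium estimate: in steady state the hot surface reradiates the incoming flux, so the wall settles near T ≈ (q/(ε·σ))^(1/4). The emissivity and flux levels below are assumed for illustration; the resulting 1,000-2,000 K surface temperatures are what make refractory materials such as RCC, and the SHARP-class diborides described next, attractive.

    # Radiative-equilibrium wall temperature for a radiatively cooled surface.
    sigma = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
    emissivity = 0.85      # assumed surface emissivity

    for q_w_per_cm2 in (5.0, 20.0, 50.0):              # incoming heat flux, W/cm^2
        q = q_w_per_cm2 * 1.0e4                        # convert to W/m^2
        T_wall = (q / (emissivity * sigma)) ** 0.25
        print(f"{q_w_per_cm2:4.0f} W/cm^2 -> equilibrium wall temperature ~ {T_wall:4.0f} K")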
Referred to by their prototype vehicle Slender Hypervelocity Aerothermodynamic Research Probe (SHARP), these TPS materials have been based upon substances such as zirconium diboride and hafnium diboride. SHARP TPS have suggested performance improvements allowing for sustained Mach 7 flight at sea level, Mach 11 flight at 100,000 ft (30,000 m) altitudes, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones to greatly reduce drag for airbreathing combined-cycle-propelled spaceplanes and lifting bodies. SHARP materials have exhibited effective TPS characteristics from zero to more than 2,000 °C, with melting points over 3,500 °C. They are structurally stronger than RCC, and, thus, do not require structural reinforcement with materials such as Inconel. SHARP materials are extremely efficient at reradiating absorbed heat, thus eliminating the need for additional TPS behind and between the SHARP materials and conventional vehicle structure. NASA initially funded (and discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles.[37][38] Actively cooled[edit] Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporated a refrigerant or cryogenic fuel circulating through them. Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP). The NASP was supposed to have been a scramjet powered hypersonic aircraft, but failed in development. In the early 1960s various TPS systems were proposed to use water or other cooling liquid sprayed into the shock layer, or passed through channels in the heat shield. Advantages included the possibility of more all-metal designs which would be cheaper to develop, be more rugged, and eliminate the need for classified technology. The disadvantages are increased weight and complexity, and lower reliability. The concept has never been flown, but a similar technology (the plug nozzle[39]) did undergo extensive ground testing. Feathered reentry[edit] In 2004, aircraft designer Burt Rutan demonstrated the feasibility of a shape-changing airfoil for reentry with the suborbital SpaceShipOne. The wings on this craft rotate upward into the feather configuration that provides a shuttlecock effect. Thus SpaceShipOne achieves much more aerodynamic drag on reentry while not experiencing significant thermal loads. The configuration increases drag, as the craft is now less streamlined and results in more atmospheric gas particles hitting the spacecraft at higher altitudes than otherwise. The aircraft thus slows down more in higher atmospheric layers which is the key to efficient reentry. Secondly the aircraft will automatically orient itself in this state to a high drag attitude.[40] However, the velocity attained by SpaceShipOne prior to reentry is much lower than that of an orbital spacecraft, and engineers, including Rutan, recognize that a feathered reentry technique is not suitable for return from orbit. On 4 May 2011, the first test on the SpaceShipTwo of the feathering mechanism was made during a glideflight after release from the White Knight Two. 
The feathered reentry was first described by Dean Chapman of NACA in 1958.[41] In the section of his report on Composite Entry, Chapman described a solution to the problem using a high-drag device: It may be desirable to combine lifting and nonlifting entry in order to achieve some advantages... For landing maneuverability it obviously is advantageous to employ a lifting vehicle. The total heat absorbed by a lifting vehicle, however, is much higher than for a nonlifting vehicle... Nonlifting vehicles can more easily be constructed... by employing, for example, a large, light drag device... The larger the device, the smaller is the heating rate. Nonlifting vehicles with shuttlecock stability are advantageous also from the viewpoint of minimum control requirements during entry. ... an evident composite type of entry, which combines some of the desirable features of lifting and nonlifting trajectories, would be to enter first without lift but with a... drag device; then, when the velocity is reduced to a certain value... the device is jettisoned or retracted, leaving a lifting vehicle... for the remainder of the descent. Inflatable heat shield reentry[edit] NASA engineers check IRVE Deceleration for atmospheric reentry, especially for higher-speed Mars-return missions, benefits from maximizing "the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be."[42] An inflatable aeroshell provides one alternative for enlarging the drag area with a low-mass design. Such an inflatable shield/aerobrake was designed for the penetrators of Mars 96 mission. Since the mission failed due to the launcher malfunction, the NPO Lavochkin and DASA/ESA have designed a mission for Earth orbit. The Inflatable Reentry and Descent Technology (IRDT) demonstrator was launched on Soyuz-Fregat on 8 February 2000. The inflatable shield was designed as a cone with two stages of inflation. Although the second stage of the shield failed to inflate, the demonstrator survived the orbital reentry and was recovered.[43][44] The subsequent missions flown on the Volna rocket were not successful due to launcher failure.[45] NASA launched an inflatable heat shield experimental spacecraft on 17 August 2009 with the successful first test flight of the Inflatable Re-entry Vehicle Experiment (IRVE). The heat shield had been vacuum-packed into a 15 inches (380 mm) diameter payload shroud and launched on a Black Brant 9 sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Virginia. "Nitrogen inflated the 10-foot (3.0 m) diameter heat shield, made of several layers of silicone-coated [Kevlar] fabric, to a mushroom shape in space several minutes after liftoff."[42] The rocket apogee was at an altitude of 131 miles (211 km) where it began its descent to supersonic speed. Less than a minute later the shield was released from its cover to inflate at an altitude of 124 miles (200 km). The inflation of the shield took less than 90 seconds.[42] Entry vehicle design considerations[edit] There are four critical parameters considered when designing a vehicle for atmospheric entry: 1. Peak heat flux 2. Heat load 3. Peak deceleration 4. Peak dynamic pressure Peak heat flux and dynamic pressure selects the TPS material. Heat load selects the thickness of the TPS material stack. Peak deceleration is of major importance for manned missions. 
The upper limit for manned return to Earth from Low Earth Orbit (LEO) or lunar return is 10 Gs.[46] For Martian atmospheric entry after long exposure to zero gravity, the upper limit is 4 Gs.[46] Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue. Starting from the principle of conservative design, the engineer typically considers two worst case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest allowable entry velocity angle prior to atmospheric skip-off. The overshoot trajectory has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory. For manned missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure. Consequently, the undershoot trajectory is the basis for selecting the TPS material. There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long duration heat load. A low density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux, but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft).[46] Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials (an important consideration for a risk-averse designer). Based upon Allen and Eggers discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL, but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shocklayer gas around the leading edge stagnation point (the subsonic cap). Consequently, the aerodynamic center moves upstream thus causing aerodynamic instability. It is incorrect to reapply an aeroshell design intended for Titan entry (Huygens probe in a nitrogen atmosphere) for Mars entry (Beagle-2 in a carbon dioxide atmosphere). Prior to being abandoned, the Soviet Mars lander program achieved one successful landing (Mars 3), on the second of three entry attempts (the others were Mars 2 and Mars 6). The Soviet Mars landers were based upon a 60° half-angle aeroshell design. A 45 degree half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended) even though TPS mass is not minimized. 
The rationale for a 45° half-angle is to have either aerodynamic stability from entry to impact (the heat shield is not jettisoned) or a short-and-sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars impactor and the Pioneer Venus probes.

Notable atmospheric entry accidents[edit]

Re-entry window: A - friction with air; B - in-air flight; C - expulsion (entry angle too low); D - perpendicular to the entry point; E - excess friction (6.9° to 90°); F - repulsion (5.5° or less); G - explosion due to friction; H - plane tangential to the entry point

Not all atmospheric re-entries have been successful and some have resulted in significant disasters.

• Voskhod 2 — The service module failed to detach for some time, but the crew survived.
• Soyuz 1 — The attitude control system failed while still in orbit and later the parachutes got entangled during the emergency landing sequence (entry, descent and landing (EDL) failure). Lone cosmonaut Vladimir Mikhailovich Komarov died.
• Soyuz 5 — The service module failed to detach, but the crew survived.
• Soyuz 11 — During the separation of the three modules, a pressure equalization valve was jarred open by the separation charges and failed during re-entry. The cabin depressurized, killing all three crew members.
• Mars Polar Lander — Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown for lack of real-time telemetry.
• Space Shuttle Columbia during STS-1 — A combination of launch damage, protruding gap filler, and tile installation error resulted in serious damage to the orbiter, only some of which the crew knew about. Had the crew known the true extent of the damage before attempting re-entry, they would have flown the shuttle to a safe altitude and then bailed out. Nevertheless, re-entry was successful, and the orbiter proceeded to a normal landing.
• Space Shuttle Columbia during STS-107 — The failure of an RCC panel on a wing leading edge, caused by debris impact at launch, led to breakup of the orbiter on reentry and the deaths of all seven crew members.

Genesis entry vehicle after crash

• Genesis — The parachute failed to deploy due to a G-switch having been installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle crashed into the desert floor. The payload was damaged, but most scientific data were recoverable.
• Soyuz TMA-11 (April 19, 2008) — The Soyuz propulsion module failed to separate properly; a fallback ballistic reentry was executed that subjected the crew to forces about eight times that of gravity.[47] The crew survived.

Uncontrolled and unprotected reentries[edit]

Of satellites that reenter, approximately 10–40% of the mass of the object is likely to reach the surface of the Earth.[48] On average, about one catalogued object reenters per day.[49] Because the Earth's surface is primarily water, most objects that survive reentry land in one of the world's oceans. The estimated chance that a given person will be hit and injured during his or her lifetime is around 1 in a trillion.[50]

In 1978, Cosmos 954 reentered uncontrolled and crashed near Great Slave Lake in the Northwest Territories of Canada.
Cosmos 954 was nuclear powered and left radioactive debris near its impact site.[51]

In 1979, Skylab reentered uncontrolled, spreading debris across the Australian Outback, damaging several buildings and killing a cow.[52][53] The re-entry was a major media event, largely due to the Cosmos 954 incident, but it was not viewed as so great a potential disaster since Skylab did not carry nuclear fuel. The city of Esperance, Western Australia, issued a fine for littering to the United States, which was finally paid 30 years later (not by NASA, but by privately collected funds from radio listeners).[54] NASA had originally hoped to use a Space Shuttle mission to either extend Skylab's life or enable a controlled reentry, but delays in the Shuttle program combined with unexpectedly high solar activity made this impossible.[55][56]

On February 7, 1991, Salyut 7 underwent an uncontrolled reentry together with Kosmos 1686. It reentered over Argentina and scattered much of its debris over the town of Capitán Bermúdez.[57][58][59]

Deorbit disposal[edit]

In 1971, the world's first space station, Salyut 1, was deliberately de-orbited into the Pacific Ocean following the Soyuz 11 accident. Its successor, Salyut 6, was de-orbited in a controlled manner as well.

On June 4, 2000 the Compton Gamma Ray Observatory was deliberately de-orbited after one of its gyroscopes failed. The debris that did not burn up fell harmlessly into the Pacific Ocean. The observatory was still operational, but the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash was preferable to letting the craft come down at random.

In 2001, the Russian Mir space station was deliberately de-orbited, and broke apart in the fashion expected by the command center during atmospheric re-entry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean.

On February 21, 2008, a disabled US spy satellite, USA 193, was successfully hit at an altitude of approximately 246 kilometers (153 mi) by an SM-3 missile fired from the U.S. Navy cruiser Lake Erie off the coast of Hawaii. The satellite was inoperative, having failed to reach its intended orbit when it was launched in 2006. Due to its rapidly deteriorating orbit, it was destined for uncontrolled reentry within a month. The United States Department of Defense expressed concern that the 1,000-pound (450 kg) fuel tank containing highly toxic hydrazine might survive reentry and reach the Earth's surface intact. Several governments, including those of Russia, China, and Belarus, protested the action as a thinly veiled demonstration of US anti-satellite capabilities.[60] China had previously caused an international incident when it tested an anti-satellite missile in 2007.
On September 7, 2011, NASA announced the impending uncontrolled re-entry of the Upper Atmosphere Research Satellite and noted that there was a small risk to the public.[61] The decommissioned satellite reentered the atmosphere on September 24, 2011, and some pieces are presumed to have crashed into the South Pacific Ocean over a debris field 500 miles (800 km) long.[62]

Successful atmospheric re-entries from orbital velocities[edit]

Manned orbital re-entry, by country/governmental entity

Manned orbital re-entry, by commercial entity

• None to date

Unmanned orbital re-entry, by country/governmental entity

Unmanned orbital re-entry, by commercial entity

Selected atmospheric re-entries[edit]

This list shows atmospheric entries in which the spacecraft was not intended to be recovered, but was destroyed in the atmosphere.

What            Re-entry
Phobos-Grunt    2012
ROSAT           2011
UARS            2011
Mir             2001
Skylab          1979

See also[edit]

Further reading[edit]

• Launius, Roger D.; Jenkins, Dennis R. (October 10, 2012). Coming Home: Reentry and Recovery from Space. NASA. ISBN 9780160910647. OCLC 802182873. Retrieved August 21, 2014.
• Martin, John J. (1966). Atmospheric Entry - An Introduction to Its Science and Engineering. Old Tappan, NJ: Prentice-Hall.
• Regan, Frank J. (1984). Re-Entry Vehicle Dynamics (AIAA Education Series). New York: American Institute of Aeronautics and Astronautics, Inc. ISBN 0-915928-78-7.
• Etkin, Bernard (1972). Dynamics of Atmospheric Flight. New York: John Wiley & Sons, Inc. ISBN 0-471-24620-4.
• Vincenti, Walter G.; Kruger Jr, Charles H. (1986). Introduction to Physical Gas Dynamics. Malabar, Florida: Robert E. Krieger Publishing Co. ISBN 0-88275-309-6.
• Hansen, C. Frederick (1976). Molecular Physics of Equilibrium Gases, A Handbook for Engineers. NASA. NASA SP-3096.
• Hayes, Wallace D.; Probstein, Ronald F. (1959). Hypersonic Flow Theory. New York and London: Academic Press. A revised version of this classic text has been reissued as an inexpensive paperback: Hayes, Wallace D. (1966). Hypersonic Inviscid Flow. Mineola, New York: Dover Publications. ISBN 0-486-43281-5. Reissued in 2004.
• Anderson, Jr., John D. (1989). Hypersonic and High Temperature Gas Dynamics. New York: McGraw-Hill, Inc. ISBN 0-07-001671-2.

Notes and references[edit]

1. ^ 2. ^ 3. ^ Goddard, Robert H. (Mar 1920). "Report Concerning Further Developments". The Smithsonian Institution Archives. Archived from the original on 26 June 2009. Retrieved 2009-06-29. In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second, the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor.  4. ^ Boris Chertok, "Rockets and People", NASA History Series, 2006 5. ^ 6. ^ Hansen, James R. (Jun 1987). "Chapter 12: Hypersonics and the Transition to Space". Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958. The NASA History Series. sp-4305. United States Government Printing. ISBN 978-0-318-23455-7.  7. ^ Allen, H. Julian; Eggers, Jr., A. J. (1958). "A Study of the Motion and Aerodynamic Heating of Ballistic Missiles Entering the Earth's Atmosphere at High Supersonic Speeds" (PDF).
NACA Annual Report. NASA Technical Reports. 44.2 (NACA-TR-1381): 1125–1140. Archived from the original (PDF) on October 13, 2015.  8. ^ Przadka, W.; Miedzik, J.; Goujon-Durand, S.; Wesfreid, J.E. "The wake behind the sphere; analysis of vortices during transition from steadiness to unsteadiness." (PDF). Polish french cooperation in fluid research. Archive of Mechanics., 60, 6, pp. 467–474, Warszawa 2008. Received May 29, 2008; revised version November 13, 2008. Retrieved 3 April 2015.  9. ^ a b Fay, J. A.; Riddell, F. R. (February 1958). "Theory of Stagnation Point Heat Transfer in Dissociated Air" (PDF Reprint). Journal of the Aeronautical Sciences. 25 (2): 73–85. doi:10.2514/8.7517. Retrieved 2009-06-29.  11. ^ Whittington, Kurt Thomas. "A Tool to Extrapolate Thermal Reentry Atmosphere Parameters Along a Body in Trajectory Space" (PDF). NCSU Libraries Technical Reports Repository. A thesis submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Master of Science Aerospace Engineering Raleigh, North Carolina 2011, pp.5. Retrieved 5 April 2015.  12. ^ Regan, Frank J. and Anadakrishnan, Satya M., "Dynamics of Atmospheric Re-Entry," AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., New York, ISBN 1-56347-048-9, (1993). 13. ^ "Equations, tables, and charts for compressible flow" (PDF). NACA Annual Report. NASA Technical Reports. 39 (NACA-TR-1135): 613–681. 1953.  14. ^ Kenneth Iliff and Mary Shafer, Space Shuttle Hypersonic Aerodynamic and Aerothermodynamic Flight Research and the Comparison to Ground Test Results, Page 5-6 15. ^ Lighthill, M.J. (Jan 1957). "Dynamics of a Dissociating Gas. Part I. Equilibrium Flow". Journal of Fluid Mechanics. 2 (1): 1–32. Bibcode:1957JFM.....2....1L. doi:10.1017/S0022112057000713.  16. ^ Freeman, N.C. (Aug 1958). "Non-equilibrium Flow of an Ideal Dissociating Gas". Journal of Fluid Mechanics. 4 (4): 407–425. Bibcode:1958JFM.....4..407F. doi:10.1017/S0022112058000549.  17. ^ Parker, John and C. Michael Hogan, "Techniques for Wind Tunnel assessment of Ablative Materials," NASA Ames Research Center, Technical Publication, August, 1965. 18. ^ Hogan, C. Michael, Parker, John and Winkler, Ernest, of NASA Ames Research Center, "An Analytical Method for Obtaining the Thermogravimetric Kinetics of Char-forming Ablative Materials from Thermogravimetric Measurements", AIAA/ASME Seventh Structures and Materials Conference, April, 1966 19. ^ "NASA - Arc Jet Complex". Retrieved 2015-09-05.  20. ^ Di Benedetto, A.T.; Nicolais, L.; Watanabe, R. (1992). Composite materials : proceedings of Symposium A4 on Composite Materials of the International Conference on Advanced Materials--ICAM 91, Strasbourg, France, 27-29 May, 1991. Amsterdam: North-Holland. p. 111. ISBN 0444893563.  21. ^ Tran, Huy; Michael Tauber; William Henline; Duoc Tran; Alan Cartledge; Frank Hui; Norm Zimmerman (1996). Ames Research Center Shear Tests of SLA-561V Heat Shield Material for Mars-Pathfinder (PDF) (Technical report). NASA Ames Research Center. NASA Technical Memorandum 110402.  22. ^ Lachaud, Jean; N. Mansour, Nagi (June 2010). A pyrolysis and ablation toolbox based on OpenFOAM (PDF). 5th OpenFOAM Workshop. Gothenburg, Sweden. p. 1.  23. ^ Tran, Huy K, et al., "Qualification of the forebody heat shield of the Stardust's Sample Return Capsule," AIAA, Thermophysics Conference, 32nd, Atlanta, GA; 23–25 June 1997. 24. ^ Stardust - Cool Facts 25. ^ a b c Chambers, Andrew; Dan Rasky (2010-11-14). 
"NASA + SpaceX Work Together". NASA. Retrieved 2011-02-16. SpaceX undertook the design and manufacture of the reentry heat shield; it brought speed and efficiency that allowed the heat shield to be designed, developed, and qualified in less than four years.'  26. ^ SpaceX Manufactured Heat Shield Material - February 23, 2009 27. ^ Dragon could visit space station next,, 2010-12-08, accessed 2010-12-09. 28. ^ Chaikin, Andrew (January 2012). "1 visionary + 3 launchers + 1,500 employees = ? : Is SpaceX changing the rocket equation?". Air & Space Smithsonian. Retrieved 2016-06-03. SpaceX’s material, called PICA-X, is 1/10th as expensive than the original [NASA PICA material and is better], ... a single PICA-X heat shield could withstand hundreds of returns from low Earth orbit; it can also handle the much higher energy reentries from the Moon or Mars.  29. ^ Tran, Huy K., et al., "Silicone impregnated reusable ceramic ablators for Mars follow-on missions," AIAA-1996-1819, Thermophysics Conference, 31st, New Orleans, LA, June 17–20, 1996. 30. ^ Flight-Test Analysis Of Apollo Heat-Shield Material Using The Pacemaker Vehicle System NASA Technical Note D-4713, pp. 8, 1968-08, accessed 2010-12-26. "Avcoat 5026-39/HC-G is an epoxy novolac resin with special additives in a fiberglass honeycomb matrix. In fabrication, the empty honeycomb is bonded to the primary structure and the resin is gunned into each cell individually. ... The overall density of the material is 32 lb/ft3 (512 kg/m3). The char of the material is composed mainly of silica and carbon. It is necessary to know the amounts of each in the char because in the ablation analysis the silica is considered to be inert, but the carbon is considered to enter into exothermic reactions with oxygen. ... At 2160O R (12000 K), 54 percent by weight of the virgin material has volatilized and 46 percent has remained as char. ... In the virgin material, 25 percent by weight is silica, and since the silica is considered to be inert the char-layer composition becomes 6.7 lb/ft3 (107.4 kg/m3) of carbon and 8 lb/ft3 (128.1 kg/m3) of silica." 31. ^ NASA Selects Material for Orion Spacecraft Heat Shield, 2009-04-07, accessed 2011-01-02. 32. ^ NASA's Orion heat shield decision expected this month 2009-10-03, accessed 2011-01-02 33. ^ Company Watch (Apr 12, 2009 ) 34. ^ [1] Columbia Accident Investigation Board report. 35. ^ [2] Shuttle Evolutionary History. 36. ^ [3] X-33 Heat Shield Development report. 37. ^ 38. ^ sharp structure homepage w left 39. ^ - J2T-200K & J2T-250K 40. ^ SpaceShipOne 41. ^ Chapman, Dean R. (May 1958). "An approximate analytical method for studying reentry into planetary atmospheres" (PDF). NACA Technical Note 4276: 38. Archived from the original (PDF) on 2011-04-07.  42. ^ a b c NASA Launches New Technology: An Inflatable Heat Shield, NASA Mission News, 2009-08-17, accessed 2011-01-02. 43. ^ Inflatable Re-Entry Technologies: Flight Demonstration and Future Prospects 44. ^ Inflatable Reentry and Descent Technology (IRDT) Factsheet, ESA, September, 2005 45. ^ IRDT demonstration missions 46. ^ a b c Pavlosky, James E., St. Leger, Leslie G., "Apollo Experience Report - Thermal Protection Subsystem," NASA TN D-7564, (1974). 47. ^ William Harwood (2008). "Whitson describes rough Soyuz entry and landing". Spaceflight Now. Retrieved July 12, 2008.  48. ^ Spacecraft Reentry FAQ: How much material from a satellite will survive reentry? 49. ^ NASA - Frequently Asked Questions: Orbital Debris 50. 
^ Center for Orbital and Reentry Debris Studies - Spacecraft Reentry 51. ^ Settlement of Claim between Canada and the Union of Soviet Socialist Republics for Damage Caused by "Cosmos 954" (Released on April 2, 1981) 52. ^ Hanslmeier, Arnold (2002). The sun and space weather. Dordrecht ; Boston: Kluwer Academic Publishers. p. 269. ISBN 9781402056048.  53. ^ Mitnik, Donald (2009). Death of a Trillion Dreams. (October 19, 2009). p. 113. ISBN 978-0557156016.  54. ^ Littering fine paid Archived July 22, 2012, at the Wayback Machine. 55. ^ Lamprecht, Jan (1998). Hollow planets : a feasibility study of possible hollow worlds. Austin, TX: World Wide Pub. p. 326. ISBN 9780620219631.  56. ^ Elkins-Tanton, Linda (2006). The Sun, Mercury, and Venus. New York: Chelsea House. p. 56. ISBN 9780816051939.  57. ^, Spacecraft Reentry FAQ: Archived May 13, 2012, at the Wayback Machine. 58. ^ Astronautix, Salyut 7. 59. ^ NYT, Salyut 7, Soviet Station in Space, Falls to Earth After 9-Year Orbit 60. ^ Gray, Andrew (2008-02-21). "U.S. has high confidence it hit satellite fuel tank". Reuters. Archived from the original on 25 February 2008. Retrieved 2008-02-23.  61. ^ David, Leonard (7 September 2011). "Huge Defunct Satellite to Plunge to Earth Soon, NASA Says". Retrieved 10 September 2011.  62. ^ "Final Update: NASA's UARS Re-enters Earth's Atmosphere". Retrieved 2011-09-27.  External links[edit]
Microlasers and ray chaos

A hitchhiker's guide to dielectric cavities*

Light's growing weight

You don't need a great deal of imagination to foresee an increasing significance of lightwave technology in data processing and telecommunications. Here are some arguments in favor of light:

Miniaturization of electronic circuits leads to increased resistances and hence larger dissipation. Photons don't suffer from losses to the same degree because their interaction is much weaker than that of electrons.

The bandwidths available for signal transmission are a few hundred kHz on copper cables, versus roughly a THz in a typical glass fiber - even now it is feasible to carry half a million telephone conversations over a single glass fiber.

Photons are the method of choice for massively parallel data processing and storage.

A more specific example of how microphotonics can make an impact is described in this PDF-article describing my field of work in the photonics industry from May 2000 until August 2001. The material system discussed there is Indium Phosphide, a semiconductor compound. Other material systems for microphotonics can be found among polymers, glasses, porous media - to name a few.

At the heart of these developments is the availability of small but efficient lasers which deliver the required intense and coherent light. If you have any doubts that the laser is one of the twentieth century's most important achievements in science and technology, please read about the impact and history of laser light at this new website. For an amusing but also informative glimpse of laser physics, see the Britney Spears Guide to Semiconductor Physics. Wikipedia is a good source of information and links on laser physics.

Microlaser design

All of us (physicists) have probably been "exposed" to the He-Ne laser in some graduate student lab. But of course the most ubiquitous lasers are by now the semiconductor diode lasers. Both of these incarnations rely on the parallel-mirror configuration to provide the feedback that makes laser action possible. This type of resonator is also known from the Fabry-Perot interferometer.

Trapping light with interference

One common way of making especially good parallel mirrors is to use Bragg reflection at multiple layers of dielectric films. See, e.g., the Wikipedia entry on "Vertical-Cavity Surface Emitting Lasers". The Bragg principle relies on interference between the partial waves reflected from successive layers of a dielectric stack: when these reflections add up constructively, the stack acts as a highly reflective mirror. As a rather logical continuation of the same principle, one has progressed to photonic crystals which employ the Bragg principle in more than one spatial direction and can in principle be used to make extremely small photonic cavities. The price one pays is that one needs many periods of the artificial crystal lattice in order to obtain high reflectivities, so that the total size of the structure ends up being much larger than the cavity itself. Higher and higher reflectivities are required, on the other hand, if one wants to make a laser out of such a microcavity. The simple reason is that a small cavity can host only a small amount of amplifying material, and therefore it becomes more difficult for amplification to win over the losses in a microcavity laser.
Whispering-gallery resonators - trapping without interference In solid-state laser materials, it is often possible to realize the mirrors simply by exploiting total internal reflection at the interface between the high-index solid and the surrounding medium (e.g., air). In contrast to the Bragg principle, this confinement mechanism for light is to lowest order frequency-independent and can therefore be called a classical effect - it can be described without explicit use of the wave nature of light, by using Fermat's principle. This is good because it means that a device based on this confinement mechanism will in principle be able to work over a very broad range of wavelengths - in stark contrast to photonic crystals. Nevertheless, one can use total internal reflection to make three-dimensionally confined resonators with high frequency selectivity (or "finesse"), provided one can force wavefronts inside the cavity to interfere with themselves. This is achieved with the "whispering-gallery" resonator which is at the heart of the lowest-threshold lasers made so far. This low threshold becomes possible as a consequence of the small size that can be achieved with these resonators. They are essentially circular disks in which the light circulates around close to the dielectric interface. Such modes are especially low in losses. Whispering-gallery waves: To illustrate the whispering-gallery effect, the movie shows a cross sectional view of a curved interface (black circle) between glass and air, with a circulating wave radiating in all directions. The color represents the electric field, and in the first animation the field inside the resonator is only slightly higher than outside. This is not a good resonator because it is very "lossy". In the second movie, the wavelength is about 4 times shorter than above. In this case, the field outside the resonator is much weaker than inside it, meaning that we are confining the light much better. In both animations, the wave fronts look slanted, especially on the outside. Comparing the two clips, you will notice, however, that the wave fronts right at the circular interface are perfectly radial in the bottom image. This is what makes the two scenarios different: the straight wave fronts at the interface correspond to grazing propagation along the curved boundary. There is still a wave emanating from the cavity at the bottom, but its amplitude relative to that at the interface is now much smaller. Observe also the central region of the dielectric circle, which is essentially field-free. The intensity is highly concentrated near the surface. Even in the more strongly confined case, shown here, the wave penetrates slightly into the surrounding medium. In reflection off a straight dielectric interface, this penetration is known to go along with the Goos-Hänchen effect, a lateral displacement of the scattered beam. A calculation of the analogous effect in reflection off a curved interface can be done starting from the circular geometry. Since the Goos-Hänchen effect can be incorporated into a ray model, it improves semiclassical calculations for non-circular cavities. A detailed introduction to the Goos-Hänchen effect and our relevant work is presented on a separate page. To find out more about the spiral patterns shown in these movies, read about wavefronts in open systems. Semiconductors are far from being the only application of the whispering-gallery mechanism. 
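Before turning to other realizations of the whispering gallery, the degree of confinement illustrated by these animations can be estimated from the textbook model of a two-dimensional dielectric disk. The following Python sketch is only an illustration of that standard model (the refractive index and the angular momentum below are assumed example values, not parameters of the animations): it scans the size parameter kR along the real axis, picks out the sharp minima of the TM matching condition at the disk boundary, which mark the whispering-gallery resonances, and then checks the corresponding ray picture: the mode is well confined when the effective angle of incidence exceeds the critical angle for total internal reflection.

import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

# Whispering-gallery modes of a two-dimensional dielectric disk, TM polarization.
# n and m below are assumed example values (glass-like index, moderately high
# angular momentum), not parameters taken from this page.
n = 1.5      # refractive index of the disk
m = 20       # angular momentum (number of wavelengths around the circumference)

def matching(x):
    # TM boundary condition for a disk of radius R, with x = k*R:
    # continuity of the field and of its radial derivative at the interface.
    # Along real x, sharp minima of |matching(x)| locate the resonances.
    return n * jvp(m, n * x) * hankel1(m, x) - jv(m, n * x) * h1vp(m, x)

x = np.linspace(10.0, 25.0, 20000)
f = np.abs(matching(x))

# candidate resonances: local minima that are much deeper than the typical value
idx = np.where((f[1:-1] < f[:-2]) & (f[1:-1] < f[2:]))[0] + 1
idx = idx[f[idx] < 0.2 * np.median(f)]

for kR in x[idx]:
    # ray picture: a circulating wave with angular momentum m corresponds to a
    # ray hitting the boundary at sin(chi) = m/(n*k*R); confinement by total
    # internal reflection requires sin(chi) > 1/n.
    sin_chi = m / (n * kR)
    status = "confined (above critical angle)" if sin_chi > 1.0 / n else "leaky"
    print("resonance near kR = %6.2f,  sin(chi) = %4.2f  ->  %s" % (kR, sin_chi, status))

With a glass-like index the confinement is modest; repeating the same exercise with a semiconductor-like index around 3 gives much deeper dips, i.e. much longer-lived modes, consistent with the grazing-incidence picture described above.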
The first laser resonators in the submillimeter size regime were made of liquid droplets containing a lasing organic dye. The highest-quality optical microresonators have been achieved using fused-silica spheres (i.e., glass). Although these materials have a refractive index closer to unity than a semiconductor, they still support whispering-gallery modes. In that context, they are often called morphology-dependent resonances (MDRs). Both the semiconductor and the droplet realizations of the whispering gallery are illustrated on the cover of Optical Processes in Microcavities, edited by R.K.Chang and A.J.Campillo (World Scientific Publishers, 1996). The lasing droplets are seen on the left side, and a "thumbtack" microlaser with its rotationally symmetric calculated emission pattern appears in the main panel. This book contains 11 chapters on important experimental and theoretical aspects of dielectric microcavities. Chapter 11 represents the status of our work as of summer 1995: "Chaotic Light: a theory of asymmetric cavity resonators", J.U.Nöckel and A.D.Stone PDF - (warning: large files)

Don't be square!

The question that arises naturally in lasing microdroplets is: how strongly can a dielectric resonator be deformed before whispering-gallery modes cease to exist, or become degraded by leakage? The intuitive answer is, "the rounder, the better". However, even shapes with sharp corners can sustain modes that have every right to be called whispering-gallery phenomena. In fact, these types of whispering-gallery modes cannot be understood purely on the basis of ray optics. This is discussed in our work on hexagonal nanoporous microlasers. Intriguingly, hexagonal zinc oxide nanocrystals have recently become the smallest resonators sustaining whispering-gallery type modes ever observed. Being round is not a prerequisite for whispering-gallery action. So there is a huge space of possible shapes (practically from circle to square) that could be considered as whispering-gallery type resonators. If we had a choice, what should the ideal shape be? This clearly depends on the application context, but in any case it would be desirable to have some design rules. In the following, we begin to discuss some design issues, and point out how our work in particular aims to provide the design rules just mentioned, based on approximate methods such as the ray picture.

Stable and unstable resonators

Other mirror arrangements provide different advantages. In particular, there has been a considerable body of work employing concave or convex mirrors. E.g., concave mirrors separated by less than the sum of their radii of curvature make a stable resonator in which light rays undergo focussing while being multiply reflected between the mirrors. Light can then be coupled out by making one of the mirrors slightly transparent. When the output coupling is small, the theoretical treatment of such a laser can often be performed by neglecting the leakage and hence assuming the existence of some orthogonal set of modal eigenfunctions. If one wants to avoid the use of partially transparent mirrors (which need to have very low losses for high-power applications), one alternative design is the unstable resonator containing defocussing elements [see the exhaustive textbook by A.E.Siegman, Lasers (University Science Books, Mill Valley, CA, 1986)]. E.g., two concave mirrors separated by more than the sum of their radii of curvature cause rays to diverge out from the optical axis after several reflections; the standard stability criterion behind this distinction is sketched below.
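Here is a minimal Python sketch of that textbook criterion (the mirror radii and spacings are arbitrary example numbers): a two-mirror cavity of length L with mirror curvature radii R1 and R2 is ray-stable exactly when 0 <= g1*g2 <= 1, where g_i = 1 - L/R_i.

def cavity_is_stable(L, R1, R2):
    """Ray stability of a two-mirror cavity of length L with mirror radii
    of curvature R1, R2 (concave mirrors: R > 0). Standard criterion:
    stable iff 0 <= g1*g2 <= 1 with g_i = 1 - L/R_i."""
    g1 = 1.0 - L / R1
    g2 = 1.0 - L / R2
    return 0.0 <= g1 * g2 <= 1.0

# Two identical concave mirrors with radius of curvature R = 1 m:
R = 1.0
for L in (0.5, 1.0, 1.5, 2.0, 2.5):   # mirror separation in meters
    kind = "stable" if cavity_is_stable(L, R, R) else "unstable"
    print("L = %.1f m  ->  %s" % (L, kind))
# Separations below the summed radii (here 2 m) come out stable; larger
# separations give an unstable, defocussing resonator.

In the unstable case the rays walk off the optical axis and eventually leave the cavity sideways, which is precisely the escape route exploited next.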
Outcoupling occurs when the light spills over the edge of one of the mirrors (which hence need not be partially transparent themselves). Such unstable lasers differ from stable resonators in their mode structure: A set of well-defined bound modes is not available for the expansion of the laser field, because they all couple to the outside. Therefore, it has been necessary to use quasibound states in the calculations. Lasers are fundamentally open systems, so a description in terms of quasibound states seems only natural. These states are, however, not as familiar a tool as the usual square-integrable eigenfunctions one knows from bound systems. Their properties are still a topic of current research. Important work on such "quasi-normal modes" has also been carried out by Kenneth Young, Pui-Tang Leung and co-workers. The central problem from the point of view of laser physics is this: In order to define photons in the first place, we expect to have at our disposal a set of normal modes for which we then write the creation and annihilation operators. But metastable states are not eigenstates of a Hermitian differential operator, because they represent energy escaping to infinity. Therefore, familiar procedures involving expansions in normal modes run into problems. Nevertheless, their use makes a lot of sense when discussing the emission properties of individual metastable states, such as their frequency shifts as a result of a perturbation in the resonator's shape or dielectric constant. Or - just to mention a really far-out example: metastable states find application in the study of gravitational waves emitted from a black hole [P.T.Leung et al., Phys.Rev.Lett. 78, 2894 (1997)]

Chaotic resonators

As an extension of the unstable-resonator idea, one can think of two concave mirrors in a defocussing setup combined with some lateral (sideways) guiding of the light between the mirrors. A naive reasoning could be this: We want lasing from light spilling out near one of the mirrors, but we don't want the escape angle with the optical axis to be too large, hoping thereby to improve the spatial mode pattern (focussing). So we put additional mirrors along the open sides joining the mirrors. Now combine this idea with the use of dielectric interfaces as (partially transparent) mirrors, and one is led quite directly to consider the so-called stadium resonator (or a generalization thereof). Here is an illustration of the stadium shape and of how it scatters an incident ray: It is taken from J.H.Jensen, J.Opt.Soc.Am.A 10 (1993). Remark on previous work: Jensen seems to have been the first to attack the ray-wave duality for a stadium-shaped dielectric resonator, in particular taking into account the inevitable ray-splitting into reflected and transmitted portions that occurs at the sharp dielectric interface of the chaotic resonator (thanks to R.K. Chang and A. Poon for pointing out the reference). However, he did not consider the long-lived resonances that such a cavity could support, which are a prerequisite for lasing. Instead, Jensen's paper gives a quasiclassical analysis of the rainbow-peaks for this structure. For more on rainbows, see this Atmospheric Optics web site. Ray splitting has received renewed interest in recent years (in my own ray optics simulations, it is taken into account as well - it becomes essential in high-index materials). We are not the only ones to consider chaotic dielectric resonators.
However, we were the first (to my knowledge) to seriously apply chaos analysis to the emission properties of quasibound states in dielectric resonators, see "Q spoiling and directionality in deformed ring cavities", J.U.Nöckel, A.D.Stone and R.K.Chang, Optics Letters 19, 1693 (1994). This is a theory paper in which we address the consequences of emerging ray chaos for the lifetimes and emission directionality of deformed dielectric resonators. The first experiment in which the correspondence between emission anisotropy and chaotic structure in the classical ray dynamics was successfully applied to dielectric microlasers is "Ray chaos and Q-spoiling in lasing droplets", A.Mekis, J.U.Nöckel, G.Chen, A.D.Stone and R.K.Chang, Phys.Rev.Lett. 75, 2682 (1995). In this paper, we studied lasing microdroplets with a nonspherical shape, which leads to a strongly anisotropic light output along the droplet surface. The total-intensity profile was imaged and compared with a ray model, yielding an explanation for the observed features. To arrive at the idea of using a chaotic resonator cavity, one can either start from the unstable-resonator concept as described above, or from the whispering-gallery design. We came from the latter direction. The argument leading to an oval dielectric resonator is simply that a circular whispering-gallery cavity does not have a preferred emission direction, owing to its rotational symmetry. In addition, one wishes to have a parameter with which the resonance lifetimes of the cavity can be controlled. This is achieved by deforming its shape.

Confocal resonators

In between stable and unstable resonators, there is another useful mirror configuration, called confocal. It has the advantage of creating a focussing effect inside the resonator, which in turn amounts to producing a smaller effective mode volume for the laser. Instead of the whole volume between the mirrors, it is possible to utilize only a smaller volume around the coinciding focal points of the mirrors. The ray pattern that forms in a confocal arrangement of two concave mirrors can sometimes take on the shape of a bowtie (depending on the shape of the mirrors). This well-known configuration is found in etalons but also in lasers. The simplest confocal cavity would consist of two circle segments with a common focus. A less trivial example is the case of two confocal paraboloids, i.e., surfaces of revolution generated by opposing parabolas that share their focal point (illustrated by the dome picture and the ray plot accompanying this section). The right-hand picture shows two bowtie rays going through the focus. There are many other ray paths that never go through the focus, but they form caustics which are reminiscent of this basic shape. For a study of this type of (three-dimensional) mirror configuration, see my work with Izo Abram's group at CNET, "Mode structure and ray dynamics of a parabolic dome microcavity". Microresonators such as this can find application in quantum electrodynamics because they make it possible to modify the rate of spontaneous emission of atoms or quantum dots interacting with the electromagnetic field. To that end, one has to go to small mode volumes. But the cavity volume isn't necessarily what counts. With a focused ray pattern as in the confocal resonator, the light field is especially strong in only certain portions of the resonator, notably the focal point in the center. And that is where the desired strong coupling between the light and the active medium occurs.
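The re-imaging property of the confocal arrangement can be checked with elementary paraxial ray-transfer (ABCD) matrices. The sketch below is a generic textbook calculation, not code from the parabolic-dome paper, and the mirror radius and starting ray are arbitrary examples: for a symmetric confocal cavity (mirror spacing equal to the common radius of curvature) the round-trip matrix is minus the identity, so every paraxial ray closes on itself after two round trips, which is the kind of focus-crossing, bowtie-like orbit discussed above.

import numpy as np

def propagate(L):          # free propagation over distance L
    return np.array([[1.0, L], [0.0, 1.0]])

def mirror(R):             # reflection off a mirror with radius of curvature R
    return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

R = 1.0                    # mirror radius of curvature (m), arbitrary example
L = R                      # symmetric confocal spacing: focal points coincide

# one round trip, starting just after the first mirror:
M = mirror(R) @ propagate(L) @ mirror(R) @ propagate(L)
print("round-trip matrix:\n", np.round(M, 10))          # equals minus the identity
print("two round trips:\n", np.round(M @ M, 10))        # equals the identity

# paraxial stability: |trace| <= 2  (equivalent to 0 <= g1*g2 <= 1)
print("trace =", np.trace(M),
      "-> stable" if abs(np.trace(M)) <= 2 else "-> unstable")

# any starting ray (height y, slope theta) returns to itself after 2 round trips:
ray = np.array([0.3e-3, 2e-3])
print("ray after two round trips:", M @ M @ ray)

In terms of the g-parameters used earlier, the symmetric confocal case is g1 = g2 = 0, which lies right on the edge of the stability criterion.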
Bowtie laser

Now we put all of the above together, but for the price of one... The microcylinder laser shown here is not circular, but not a stadium shape, either. The stadium has fully chaotic ray dynamics, the circle has no chaos at all. This oval shape has a mixed phase space. As a by-product of the transition to chaos which takes place with increasing deformation, a bowtie-shaped ray path is born that does not exist below a certain eccentricity. This pattern combines internal and external focussing, and its lifetime is long enough for lasing because the rays hit the surface close to the critical angle for total internal reflection. This is the world's most powerful microlaser to date. To understand why this very desirable intensity distribution arises in the smooth oval shape we chose here, but not in the circle or the stadium, one has to use methods of classical nonlinear dynamics. This is explained in our article, "High power directional emission from lasers with chaotic resonators", C.Gmachl, F.Capasso, E.E.Narimanov, J.U.Nöckel, A.D.Stone, J.Faist, D.Sivco and A.Cho, Science 280, 1556 (1998) PDF, cond-mat/9806183. In this paper, the oval-resonator concept is combined with a very innovative laser material that turns out to be particularly compatible with a disk-shaped resonator geometry: the quantum cascade laser. This active material consists of a semiconductor heterostructure in which an electrical current leads to the emission of photons. But in contrast to more conventional quantum-well diode lasers, the optical transitions responsible for the creation of the photons take place exclusively within the nanostructured conduction band (between quantum well subbands). Electron-hole recombination across the valence band (the usual mechanism) is not involved here, leading to various advantages. F.Capasso and J.Faist are among the winners of the 1998 Rank Prize for the invention of the quantum cascade laser. The basic ideas of our work are illustrated on picture pages starting with a gallery of magazine covers and continuing with a special type of shape called the Robnik billiard (also known as the dipole shape or limaçon billiard).

How to learn more: What is chaos? And what in the world is quantum chaos?

Chaos is not just chaos

We are talking here about deterministic chaos. The term refers to the fact that even simple classical systems governed by simple equations such as Newton's laws can exhibit highly irregular motion that defies long-term predictions. One example of such a simple physical system is the double pendulum; as the following animation shows, the two degrees of freedom represented by the two angles θ and ψ are coupled, and this leads to a non-periodic, unpredictable-looking combined motion:

In optics, there is a slight confusion of terminology about the concept of chaos, because it is traditionally found (in quantum optics) when people want to describe the statistical properties of a photon source. "Chaotic light" in that context has a much shallower meaning - it just means a "random" thermal distribution of photons as it is found in blackbody radiation. Chaos in the deterministic sense already has a place in optics as well, but again we have to make a distinction from our work. In multimode lasing one can look at the temporal and/or spatial evolution of the laser emission and find that the signal can become very irregular. By mapping this behavior onto an artificial (usually many-dimensional) space, e.g.
by a so-called time-delay embedding, one then sometimes finds that the system follows a trajectory on a "chaotic attractor". That's a type of structure one finds in dissipative nonlinear classical systems. This is what people have studied in nonlinear optics for a long time now. There are many lists of chaos-science links; see for example the Wikipedia article on this subject. For more on the relation between our work and the more traditional nonlinear optics, see below.

Chaos in billiards

In the classical ray picture for our microresonators, the fact that boundaries are penetrable does not (to lowest order in the wavelength) affect the shape of the trajectories, and hence our internal ray dynamics is that of a non-dissipative, closed system. The optical resonator in the ray picture is a realization of what mathematicians call a billiard. See this short article for an entertaining introduction to billiards. Only non-chaotic billiards are shown there: the circle and the ellipse (note that this math definition of a billiard doesn't conform with what we know from the local pub). But generic oval billiards display chaotic dynamics. To take the step into the world of chaotic billiards, follow this link to the polygonal and stadium billiard (among others). If you have any further questions about chaos, you may well find an answer at this informative FAQ site maintained by Jim Meiss. Further information, including a host of graphics and animations, is also available from the chaos group at the University of Maryland.

Quantum Chaos

Quantum chaos sounds like a contradiction in terms because linear wave equations such as the Schrödinger equation do not exhibit the sensitivity to initial conditions that gives rise to chaos. Nonetheless, classical mechanics is just a limiting case of quantum mechanics, just as ray optics is the limit of wave optics for short wavelengths. So one should expect "signatures of chaos" in the wave solutions. To find and understand these, semiclassical methods are indispensable. One of the pioneers of quantum chaos, Martin C. Gutzwiller, has written a beautiful introduction to this field in Scientific American. See in particular the third figure describing the central place of quantum chaos in our understanding of quantum mechanics. An important lesson here is: Playing around with the simple standard systems, such as harmonic oscillators, we barely scratch the surface of what the classical-quantum transition really entails. If we want to go beyond pedestrian descriptions of this transition, classically chaotic systems are where the action is! This also holds for much-discussed fundamental topics such as "decoherence"; see the example of the periodically "kicked" cesium atom. As a by-product, quantum chaos has brought together an arsenal of powerful techniques. My first chance to study these was a graduate course at Yale taught by Prof. Gutzwiller in 1993/94; he also accompanied my thesis work on chaotic optical cavities through discussions and as a reader at dissertation time. As it turns out, many of the intrinsic emission properties of dielectric optical resonators have a classical origin. The significance of this for quantum chaos is that comparisons between the ray model and numerical solutions of the wave equations uncover corrections to the ray model. Alternatively, one can also discover such wave corrections by comparing the ray predictions to an actual experiment. We follow both approaches.
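As an aside, the deterministic chaos invoked above for the double pendulum is easy to demonstrate numerically. The following Python sketch uses the standard equations of motion for two point masses on massless rods (all parameter values and initial conditions are arbitrary examples): two trajectories started a millionth of a radian apart diverge to completely different motion within a few seconds of simulated time, which is exactly the sensitivity to initial conditions that defines chaos.

import numpy as np
from scipy.integrate import solve_ivp

# Double pendulum: point masses m1, m2 on massless rods of length l1, l2,
# angles th1, th2 measured from the vertical. Standard equations of motion.
m1 = m2 = 1.0
l1 = l2 = 1.0
g = 9.81

def rhs(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * np.sin(th1)
          - m2 * g * np.sin(th1 - 2 * th2)
          - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
          + g * (m1 + m2) * np.cos(th1)
          + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return [w1, a1, w2, a2]

y0 = [2.0, 0.0, 2.5, 0.0]                    # a high-energy initial condition
y1 = [2.0 + 1e-6, 0.0, 2.5, 0.0]             # ... and a nearly identical one
t = np.linspace(0, 20, 2001)

sol0 = solve_ivp(rhs, (0, 20), y0, t_eval=t, rtol=1e-10, atol=1e-10)
sol1 = solve_ivp(rhs, (0, 20), y1, t_eval=t, rtol=1e-10, atol=1e-10)

sep = np.abs(sol0.y[0] - sol1.y[0])          # separation in the first angle
for ti in (0.0, 5.0, 10.0, 15.0, 20.0):
    i = np.searchsorted(t, ti)
    print("t = %4.1f s   |delta theta1| = %.2e rad" % (ti, sep[i]))
# The separation grows from 1e-6 rad to order 1 within tens of seconds:
# long-term prediction is impossible even though the equations are deterministic.

No comparable sensitivity exists in linear wave equations, which is why signatures of chaos in waves have to be sought in more indirect ways.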
Such wave corrections become especially interesting when the underlying classical dynamics is partially chaotic, as is the case in the asymmetric dielectric resonators. In that setting, two major new effects arise: dynamical localization and dynamical tunneling. In dielectric cavities, the effect of such phenomena on resonance lifetimes and emission directionality, and of course on resonance frequencies, can be studied. Emission directionality is in itself a completely new question to investigate from the viewpoint of quantum chaos: when decay occurs, e.g., in nuclear physics or chemistry, any anisotropy of the individual process is averaged out in the observation of an ensemble - but microlasers can be looked at individually, and from various directions. If they are bounded only by a dielectric interface, the emission pattern is determined by the phase-space structure. This is an important focus of my work: the short-wavelength asymptotics of systems that are chaotic and open. What this means is illustrated in a slightly different example on a picture page describing the annular billiard. There, we studied the relation between resonance lifetimes and dynamical tunneling (since it involves tunneling into a chaotic portion of phase space, it is also called "chaos-assisted tunneling"). Is quantum chaos just a mathematical-conceptual game without relevance for experiments? Our work has been among the first to propose actual applications of quantum chaos phenomena, and to my knowledge the two patents I co-authored were the very first to rely on such phenomena.

Nonlinear dynamics

Chaos, belonging to the field of nonlinear dynamics, is known to laser physicists in another guise as well: pattern formation, in particular vortices and vortex lattices, due to the nonlinearity of the lasing medium, has been studied much longer than our type of chaotic phenomena which rely on the boundary effects. Of course, there can be a cross-over from one regime to the other, e.g. from nonlinear vortices to linear vortices which in a circular resonator are encountered as whispering-gallery modes. What I'm discussing above is chaos in the linear wave equation. This phenomenon often dominates the physics, especially near the lasing threshold. At higher powers the nonlinearity of the medium itself becomes more important. This is something we had earlier addressed in an invited conference contribution, and also commented on in a book chapter titled "2-d Microcavities: Theory and Experiments".

Last significant revision: 09/09/04. This page represents a compilation of information relevant to our work on microlaser resonators. Naturally, it cannot claim to be complete in any way. However, I felt it appropriate to provide some context because the questions we are discussing are at the interface between two fields of study that traditionally haven't had much overlap: micro-optics and quantum chaos. These fields have more in common than meets the eye. But that by no means implies that one community cannot learn from the other... Since this is a NET DOCUMENT, I am trying to refer mostly to other documents that are available online, instead of citing things printed on dead trees. But if you have something you'd like me to include, feel free to let me know.

This page © Copyright Jens Uwe Nöckel, 2002-2004 Last modified: Sat Jun 22 09:20:46 PDT 2013
For my May 2006 diary, go here.

Diary - June 2006

John Baez

June 2, 2006

It's been a hectic 72 hours. On Wednesday I gave a colloquium at the Perimeter Institute (in Waterloo, Ontario). Later I had dinner with Fotini Markopoulou and Lee Smolin at a great restaurant called Jane Bond; we talked about quantum mechanics. Thursday morning I got up early and rented a car. Waiting for the car rental company to pick me up, I ran into Jeffrey Bub, who turned out to have given a talk on Tuesday about the importance of our inability to duplicate quantum information - also a theme of my talk. He'd asked a good question at my talk, but I hadn't recognized him! Anyway, I drove an hour and a half to the University of Western Ontario (in London, Ontario) where I spoke to Dan Christensen about homotopy theory and gave a lecture on where we stand in fundamental physics. I had dinner at a Thai restaurant with Dan, his student Igor Khavkine and his postdoc Josh Willis. We talked about spin foam models, especially Josh's new paper. I spent the night in a hotel in London, and today (Friday) I had breakfast with Dan and we talked more about our joint math projects. Then I drove back to Waterloo, took a cab to Toronto, and flew to Boston. Insane, really - I'm not really practiced enough to stay completely calm while trying to make so many connections. I easily imagine all the things that could go wrong. But it all somehow worked, despite getting lost about 4 times while driving to London, and a flight delay due to thunderstorms in Boston.

Now I'm in Cambridge Massachusetts, in Kendall Square - right next to my old grad school, MIT. I'm here for some top-secret business that I'd love to talk about, but can't. I'm staying at the Kendall Hotel. I don't think it was here back when I was a grad student (1982-1986). It may have still been a firehouse. Kendall Square was pretty dumpy back then, but part of why I wanted to come here was to see how Cambridge has changed. I can already tell it's gotten gentrified, just like everyone says. As I was checking in here, someone walking out asked their friend "Did you know this is the most trendy boutique hotel in Cambridge?" Woooh! I feel like a bigshot now. They probably pay some guy to keep walking in and out, saying that. Back when I was a longhaired grad student, I don't think the phrase "boutique hotel" had even been invented. There were fewer rich people; fewer poor people too. I need some sleep, even though internet access makes me want to stay awake and have fun....

June 4, 2006

My father had a stroke. It sounded very scary in the email I got from my sister yesterday. When I called my mom yesterday she said he had already recovered to the point of being able to talk and walk. She was making him do lots of exercise. Today I called her again and my father answered. "Hi!" he said, "What a surprise!" He was expecting my uncle. I was the one who was surprised - shocked, in fact, that he sounded so hearty, and so obviously not just faking it. Whew - amazing! I was and am planning to visit them in two days. I'm relieved that it won't be a tragic occasion.

June 5, 2006

An interesting article on the rise of people who plan to remain single all their lives: Some statistics:

June 13, 2006

I'm back at the Perimeter Institute - back from visiting my parents in DC. I was immensely relieved to find my dad hadn't suffered visibly from that stroke, or whatever it was - it's not even clear what it was. He's not much changed from how I saw him last.
Unfortunately, this means that he is forgetful, arthritic, and very weak; he needs a walker to get around, and moves very slowly. He only gets out of the house when my mother drives him to the library or to his physical therapist. He finds this depressing - he says it's like he's already entered the afterlife. Somehow he manages to soldier on. I naturally found myself thinking about his future, and mine... how we'll probably all wind up in nursing homes. When we're young we do a great job of ignoring these issues. When we're middle-aged it's easy to lose ourselves in work and raising of children. It's surprising how long we can go on pretending old age and death are things that happen to other people. But the hand of time hangs heavy on us all. I could say much more, but I'm not quite sure how personal I want this diary to be.

Here's a picture of my parents' house: You can also see a closeup - my mom helped design this house, and she's very proud of it. Also: my dad, my mom, and a necklace my mom made - she spends a lot of time creating jewelry these days.

Here are some notes from the clash of civilizations, written while reading the Washington Post when I was visiting my parents in DC:

I've been reading a quirky and fascinating book on the history of Chicago and its architecture: The energetic optimism of Chicago in the late 1800s was something really unique. It was picked up by Sullivan and others... though they rejected aspects of its rampant commercialism. It's nice thinking back on the Chicago architecture tour that Tom Fiore took me on not long ago. We saw some buildings by Sullivan.

Today I went on a little tour of the Institute for Quantum Computing with Scott Aaronson. Raymond Laflamme showed me his nuclear magnetic resonance lab, and also the lab where they create entangled photons for quantum cryptography. With any luck, at the end of June they'll beam pairs of entangled photons to the IQC and Perimeter Institute from a taller building somewhere between the two. This will allow them to communicate in a way that nobody can intercept without it being noticeable. Not that the IQC and Perimeter Institute have anything secret to talk about! Just a demonstration.

After Indian food and lunchtime discussion at the IQC, I felt a bit listless from lack of sleep the previous night, which I'd spent writing "week234". Luckily, John Moffat came by my office to talk about a fiendishly clever attempt to solve the cosmological constant problems using parastatistics. Alas, my technical understanding of parastatistics is almost zilch, but we still had an interesting conversation. Then I whiled away the rest of the day correcting the dissertation of my student Toby Bartels and attaching emails about music theory to the Addenda of "week234". Right now I'm listening to Miles Davis' E.S.P., wondering yet again why more people don't say this is his greatest album. The fact that I'm sitting here listing the things I did today, instead of actually doing something, is yet another sign that I'm feeling low-energy.

June 14, 2006

At 11 am I had an appointment to talk with Howard Burton, executive director of the Perimeter Institute. Among other things, we discussed the future of fundamental physics. We agreed that dark matter, dark energy and other cosmological issues are where it's at. He wondered: will we understand them better in 20 years or so? None of our current theories seem to be making much of a dent in these questions.
I tried out my latest idea on him: finding a real solution to these questions might require years of fumbling around with crude theories that seem "insufficiently elegant" to people raised on the Standard Model, string theory or loop quantum gravity. Something more like Balmer's formula or the Bohr atom than the Schrödinger equation. Balmer was a teacher at a girls' school in Switzerland who dreamt up a formula for some of the frequencies of light emitted by hydrogen. Later Rydberg generalized it to get the other frequencies. If some high school teacher proposed this formula today, would we dismiss it as mere coincidence, noting that it doesn't work for other atoms? We seem to think physics has progressed beyond this point now... but has it, really? MOND (modified Newtonian dynamics) has a similar jury-rigged quality: it does surprisingly well as a competitor to dark matter for explaining the anomalous rotation of many galaxies, but it does badly on other things. Maybe it has a kernel of truth. Maybe it will take a Bohr to spot that kernel of truth, and then a Schrödinger or Heisenberg to formalize it.

Later, John Donoghue gave a talk about quantum gravity corrections to the 1/r2 force law, derived from effective field theory. Nice stuff! Any solid piece of information about quantum gravity is a precious gem.

Another low-energy day - apart from the above, I mainly kept myself occupied by adding comments to the Addenda section of week234, which was about the math of music. It's fascinating how many of my math friends had deep things to say about this. It seems to support the stereotype that a lot of mathematicians are into music. Like math, music can take us outside ourselves, into a beautiful world of abstract patterns, where everything is right. For a while, at least, it lifts that hand of time that lies so heavy on us.

June 15, 2006

Dan Christensen came by and we continued our work on smooth homotopy theory. The ups and downs of research: we almost decided to give up on this project, when I mentioned an idea we had at the end of our last session... we got excited, talked a bunch more, and when we had to quit, things seemed to be working just fine! We took a break to listen to talks about loop quantum gravity and black hole entropy by Danny Terno, Saurya Das and Arundhati Dasgupta. I think I've put in too much time working on this subject to find it interesting or even bearable anymore. It doesn't help that I have a headache.

Martin Rees writes:

At the moment, scientific effort is deployed sub-optimally. This seems so whether we judge in purely intellectual terms, or take account of likely benefit to human welfare. Some subjects have had the 'inside track' and gained disproportionate resources. Others, such as environmental researches, renewable energy sources, biodiversity studies and so forth, deserve more effort. Within medical research the focus is disproportionately on cancer and cardiovascular studies, the ailments that loom largest in prosperous countries, rather than on the infections endemic in the tropics. Choices on how science is applied shouldn't be made just by scientists. That's why everyone needs a 'feel' for science and a realistic attitude to risk - otherwise public debate won't get beyond sloganising. Jo Rotblat favoured a 'Hippocratic Oath' whereby scientists would pledge themselves to use their talents to human benefit. Whether or not such an oath would have substance, scientists surely have a special responsibility.
It's their ideas that form the basis of new technology. We feel there is something lacking in parents who don't care what happens to their children in adulthood, even though it's generally beyond their control. Likewise, scientists shouldn't be indifferent to the fruits of their ideas - their intellectual creations. They should plainly forgo experiments that are themselves risky or unethical. More than that, they should try to foster benign spin-offs, but resist, so far as they can, dangerous or threatening applications. They should raise public consciousness of hazards to environment or to health. The decisions that we make, individually and collectively, will determine whether the outcomes of 21st century sciences are benign or devastating. Some will throw up their hands and say that anything that is scientifically and technically possible will be done - somewhere, sometime - despite ethical and prudential objections, and whatever the laws say - that science is advancing so fast, and is so much influenced by commercial and political pressures, that nothing we can do makes any difference. Whether this idea is true or false, it's an exceedingly dangerous one, because it engenders despairing pessimism, and demotivates efforts to secure a safer and fairer world. The future will best be safeguarded - and science has the best chance of being applied optimally - through the efforts of people who are less fatalistic. And here I am optimistic. The burgeoning technologies of IT, miniaturisation and biotech are environmentally and socially benign. The challenge of global warming should stimulate a whole raft of manifestly benign innovations - for conserving energy, and generating it by novel 'clean' means (biofuels, innovative renewables, carbon sequestration, and nuclear fusion). Other global challenges include controlling infectious diseases; and preserving biodiversity. These challenging scientific goals should appeal to the idealistic young. They deserve a priority and commitment from governments, akin to that accorded to the Manhattan project or the Apollo moon landing. I've spoken as a scientist. But my special subject is cosmology - the study of our environment in the widest conceivable sense. I can assure you, from having observed my colleagues, that a preoccupation with near-infinite spaces doesn't make cosmologists specially 'philosophical' in coping with everyday life. They're not detached from the problems confronting us on the ground, today and tomorrow. For me, a 'cosmic perspective' actually strengthens my concerns about what happens here and now: I'll conclude by explaining why. The stupendous timespans of the evolutionary past are now part of common culture. We and the biosphere are the outcome of more than four billion years of evolution, but most people still somehow think we humans are necessarily the culmination of the evolutionary tree. That's not so. Our Sun is less than half way through its life. We're maybe only at the half way stage. Any creatures witnessing the Sun's demise 6 billion years hence won't be human - they'll be as different from us as we are from bacteria. But, even in this 'hyper-extended' timeline - extending billions of years into the future, as well as into the past - this century may be a defining moment. The 21st century is the first in our planet's history where one species has Earth's future in its hands, and could jeopardise life's immense potential. I'll leave you with a cosmic vignette.
We're all familiar with pictures of the Earth seen from space - its fragile biosphere contrasting with the sterile moonscape where the astronauts left their footprints. Suppose some aliens had been watching our planet for its entire history. What would they have seen? Over nearly all that immense time, 4.5 billion years, Earth's appearance would have altered very gradually. The continents drifted; the ice cover waxed and waned; successive species emerged, evolved and became extinct. But in just a tiny sliver of the Earth's history - the last one millionth part, a few thousand years - the patterns of vegetation altered much faster than before. This signaled the start of agriculture. The pace of change accelerated as human populations rose. But then there were other changes, even more abrupt. Within fifty years - little more than one hundredth of a millionth of the Earth's age - the carbon dioxide in the atmosphere began to rise anomalously fast. The planet became an intense emitter of radio waves (the total output from all TV, cellphone, and radar transmissions). If they understood astrophysics, the aliens could confidently predict that the biosphere would face doom in a few billion years when the Sun flares up and dies. But could they have predicted this unprecedented spike less than halfway through the Earth's life - these human-induced alterations occupying, overall, less than a millionth of the elapsed lifetime and seemingly occurring with runaway speed? The answer depends on us. Simple stuff, but worth remembering. This is from: June 17, 2006 Reading a copy of The New York Review of Books in a cafe on a hot day here in Waterloo, sipping a raspberry-cranberry smoothie, I was struck by a couple of poems from this book: Tonight, for the first time in many years there appeared to me again a vision of the earth's splendor: in the evening sky the first star seemed to increase its brilliance as the earth darkened until at last it could grow no darker And the light, which was the light of death seemed to restore to earth its power to console. There were no other stars. Only the one Whose name I knew as in my other life I did her injury: Venus, star of the early evening, to you I dedicate my vision, since on this blank surface you have cast enough light to make my thought visible again. June 18, 2006 This was my last weekend in Waterloo. My student Jeff Morton showed up today - he couldn't make it sooner, since final exams just ended at UCR - and we talked a bit with Aristide Baratin about Freidel and Baratin's new paper describing a spin foam model that gives ordinary quantum field theory on Minkowski spacetime. I'm pretty excited, because we conjecture that this spin foam model is the same as Crane and Sheppeard's spin foam model based on a gadget I invented called the Poincaré 2-group. Higher category theory may finally be sneaking into ordinary physics! But alas, in my conversations with Baratin and Freidel, we only made a little preliminary progress on proving this conjecture - and now I have to go. I return to Riverside on Tuesday, where Lisa awaits me. On Friday she leaves for Wuhan, for a conference on Chinese archaeology. A bit more than a week later, on Monday July 3rd, I'll meet her in Shanghai, where we'll spend the summer. So, Jeffrey and my other student Derek Wise will have to do their best to make sense of this stuff with Laurent and Aristide. But, I have some tricks up my sleeve which may allow me to make some progress while I'm in Shanghai.
Lisa and I hope to have wireless internet access in our apartment in Shanghai, by the way. So, with any luck, this online diary will continue. It should be an adventure - a summer in the biggest city in China! June 20, 2006 I got back home yesterday. Ah, it's nice just to see my back yard again... It's so peaceful here. In the news today, the Editorial Projects in Education research center reports that the 2006 graduation rate for US high schoolers is only 70%! In Los Angeles, the figure is only 44%! I'm curious how this compares to European countries. Does anyone know? Apparently the US dropout rate has been underestimated by the states - you can see details here. So, European figures could also be misleading.... On the bright side, a study by Julio Licinio et al reports that suicide rates in the US have dropped by about 15% since 1988 - the year that Prozac went on the market. Suicide rates had been fairly stable, around 12.9 per 100,000 per year, all the way from 1870 to 1988. Since then the rate has dropped to 10.9. Nobody knows if this drop is due to the introduction of Prozac and other selective serotonin reuptake inhibitors, but it's a plausible hypothesis. It would be really, really cool if suicidal despair could be reduced by rejiggering serotonin levels in the brain. June 21, 2006 Ever wonder why the US is bickering so much with Hugo Chávez, the President of Venezuela? One reason is that Chávez is a leftist who likes to throw his weight around. But another is that Venezuela is sitting on top of lots of heavy oil. This is a gooey substance - a form of "unconventional oil" - that our economy will naturally turn to as conventional oil supplies start running out. Let me quote a little of this paper: Unconventional oil is an umbrella term for oil resources that are typically more challenging to extract than conventional oil. While many unconventional oil resources cannot be economically produced at the present time, two exceptions are extra-heavy oil from Venezuela's Orinoco oil belt region and bitumen - a tar-like hydrocarbon that is abundant in Canada's tar sands. These resources are already being economically produced and are likely, in coming years, to become increasingly important to global oil supplies generally, and to U.S. oil security in particular, given their close proximity to U.S. markets. In 2002, the Oil and Gas Journal accepted Canada's classification of 174 billion barrels of oil sands as established reserves and Canada became the second largest oil reserve-holding nation in the world after Saudi Arabia. If the 235 billion barrels of extra-heavy oil that Venezuela considers recoverable, but that are not currently acknowledged as established or proven, are re-classified in the same way as Canada's oil sands, Venezuela would be credited with the largest oil reserves in the world. Just to give you some sense of what this means: as of 2006, the Oil and Gas Journal said the total proven worldwide oil reserves were 1,293 billion barrels. (This counts the Canadian oil sands listed above, but not the Venezuelan heavy oil.) The Energy Information Administration, run by the US government, guesses that these reserves will grow by 730 billion barrels over time, and throws in a guess of 939 billion extra completely undiscovered barrels, for a guess of 2962 billion barrels of oil left worldwide. In 2003 the world used 29 billion barrels of oil per year. By 2030, the EIA predicts this demand will grow to 43 billion per year.
They predict that oil use will peak sometime between 2055 and 2065, and crash quite rapidly after that. If something like this comes to pass, Venezuela will be very important in the years to come... and Canada too, but I'm sure the US feels more threatened by Venezuela! For more information, try this: I can't see the EIA prediction that oil use will peak around 2055-2065 on their website. I found it here: The time at which peak oil will occur is highly controversial, and nobody else seems to think it will occur so late. A lot of people think it's happening soon, or even that it's already happened! I can't tell who's right. It's one of those questions that's so important that everyone likes to tell their own story about it: June 22, 2006 Let's ponder that chart up there. Most people are arguing about when peak oil will happen, not whether. And, if we take the long view, the disagreements are minor: everyone who contributed a line on the chart says sometime between now and 2070. An updated version of the chart shows even better agreement. So, the question is: what next? This is actually a huge interlocking network of questions. How much does the whole "growth is good" philosophy of economics rely on the assumption of ever greater energy usage? When we hit the wall, what will happen? Can economic growth occur in ways that don't require greater energy usage? Will we decide that perpetual economic growth is an unreasonable goal for occupants of a finite planet? Or could we revamp our concept of "economic growth" to make it a bit subtler and less destructive? There are, of course, vast untapped reaches of ethical, spiritual and intellectual growth waiting to be explored. Why are they almost neglected in our current definition of "economics"? Can we change this? Will we? Or: are we so locked into our current course that the carbon burning economy gets pushed to its logical limit, despite the cost of global warming? On December 18, 2005 I mentioned an article in Wired listing various forms of carbon we have left to burn, measured in oil barrel equivalents. Here are the biggies: You can see where the pro-growth folks will wind up: digging for methane hydrates under the Arctic permafrost and the bottoms of seabeds. If we burn all this stuff, we'll have a burst of carbon dioxide emission that makes what we're seeing now look puny. You can see how carbon dioxide goes hand in hand with global temperatures: We see here the last 4 glacials (or "ice ages") in the last 400,000 years BP - "before present". Notice the incredible red spike at the far far right of the graph: that's what we're doing now! If we burn through all the methane hydrates, this will shoot way off the graph, and so will global temperatures. To get a feel for some numbers: in 2003, people around the globe consumed about 440 quintillion joules (420 quadrillion BTU) of energy, mostly fossil fuels. This is the energy equivalent of 72 billion barrels of oil, and it caused the emission of roughly 8 billion tons of carbon into the atmosphere. Doing this sort of thing for about a century caused the red and blue spikes on the edge of that graph. Of course, energy usage started out much lower a century ago... so multiply all the numbers in the previous paragraph by about 20 or 50, and you'll get the figures for the last century. But: to get the figures for what'll happen if we burn all the methane hydrates, you have to multiply those numbers by about a thousand! 
Of course, we wouldn't burn this stuff all of a sudden, so there will be time for some CO2 to get eaten up by various processes. Nonetheless, we're talking about a major disruption of the climate if we don't end our carbon addiction. Something orders of magnitude greater than what we've seen so far. The moral: the oil peak may be upon us, but the end of cheap oil won't save our climate, because the carbon peak will be much bigger - unless we move towards other energy sources, or less energy consumption. (Here are my calculations and sources, so you can catch my mistakes if you want: there are lots of weird units involved. About 420 quadrillion BTU of energy were used in 2003, according to the EIA, which doesn't use metric. A barrel of crude oil equals roughly 5.8 million BTU. So, the energy usage was equivalent to 72 billion oil barrels. The actual oil usage was about 150 quadrillion BTU, or 25 billion barrels, or 36% of all energy usage. Burning a quadrillion BTU of fossil fuel causes the emission of roughly - roughly - 20 million tons of carbon. Of course it actually depends on how much hydrogen the fuel contains - so, 26 million tons for coal, about 20 million for petroleum, versus only 15 million for natural gas. But, I'm just trying for rough estimates here, so I'm cutting all sorts of corners: I should subtract the amount of energy not coming from fossil fuels, for example - about 10% or so. More carefully prepared statistics on carbon dioxide emissions are available from the IEA. Finally, a BTU is 1055 joules, so 420 quadrillion BTU is about 440 quintillion joules, or 4.4 × 10²⁰ joules.) June 23, 2006 Lisa left for Wuhan at 2 a.m. today - she's going to a conference on Chinese archaeology. I spent the day catching up with James Dolan, who has been thinking a lot about an intricate web of ideas related to Dynkin diagrams, including Vaughan Jones' work on subfactors and its relation to the McKay correspondence. I was happy to see that the International Astronomical Union has officially approved names for the two newly discovered moons of Pluto - Nix and Hydra. Here's a picture of them taken by the Hubble space telescope: While visiting my sister in DC a while ago, we saw a bunch of sparrows living in the huge mall at Tysons Corner. This made me wonder - yet again - about why some animals seem so much better than others at living around humans. Sparrows, rats, pigeons, cockroaches and coyotes do well. Turtles, frogs, manatees, passenger pigeons and lions don't. I believe all animals that don't do well around us will either go extinct or wind up living at our sufferance in zoos or game reserves. So, we are selecting the animal kingdom for certain traits. Animals either need the traits that let them eke out an existence in a human-dominated world, or they need to be cute enough that we'll take care of them. Otherwise they will die. This is a strange new kind of selection pressure. It's part of what Bill McKibben calls The End of Nature. So, what traits do animals need to survive well around us? My sister just sent me an interesting article about this: Greenberg has noticed that animals differ vastly in their "neophobia" - their tendency to shy away from new things. A chestnut-sided warbler will not eat its favorite food if a new object is placed nearby. A bay-breasted warbler chows down happily. Greenberg hypothesized that since humans create a rapidly changing environment, animals with less neophobia will fare better around us.
But, it turned out that some species closely associated with us are among the most neophobic of all! Mallards, which get along well with people, are more neophobic than wood ducks. Norway and black rats, ravens, crows, and house sparrows are all highly neophobic! This is why it's hard to trap or poison these critters. And that's part of why they do well around us. In short, "persecuted commensals" - animals that require human presence to do well, but which we keep trying to kill - must balance adaptability with neophobia. They need to keep adapting to new environments and trying new foods, but avoid our sneaky traps. They need to be curious... but still cautious. That's what Greenberg says. And it makes me wonder: does this balance require a kind of intelligence? Are we selecting for intelligence? June 24, 2006 I drove to the coast with James Dolan to visit my friends Chris Lee and Meenakshi Roy. Among other things, we went on a long walk on the beach from Playa Del Rey almost down to Hermosa Beach. Chris and Meenakshi study cool stuff like alternative splicing in human genes and evolution of drug resistance in HIV. But, Chris wants to do more theoretical work on bioinformatics, and he's writing a book about it that starts with the fundamentals: Bayesian reasoning, entropy, and so on. So, we mainly talked about that sort of stuff. Chris described a conjecture about entropy maximization, and Jim came up with an interesting idea for deriving the maximum entropy principle from Bayes' law! I need to find out if someone has already worked on these ideas.... According to Chris, people in bioinformatics are expected to run "labs", following the pattern in other branches of biology. They spend lots of time managing grad students, applying for grants, and so on - leaving little time to talk with colleagues and dream up new ideas. Each lab is like a little business competing with the rest in cranking out data. It's very different in math and theoretical physics. There are reasons for this, to be sure, but it seems that now there's enough data in biology to create a niche for "theorists" who spend some time thinking about what it all means. June 25, 2006 More about animals living with people: A sad thing about visiting my parents' beautiful house in Great Falls, Virginia was seeing how deer have overrun the woods. With no natural predators to keep their population down, they eat every last little bit of plant life they can find; their population must be limited by starvation. So, the forest has no brush in it... and no new saplings! It's a dying forest. I mentioned how coyotes have moved into this area. Unfortunately, coyotes don't eat deer. At least, not often - maybe occasionally they grab an unlucky doe, but they prefer much smaller food, like mice. Luckily, my sister said that mountain lions have entered the area! I hope they eat lots of deer and not too many people. Here's an article on a similar phenomenon in New England:
Tracking the Cats
Mountain Lions Roam Region's Forests - Origins a Mystery
Wendy Williams
Northern Sky News
June 2002
In September 2000, less than 150 miles north of Boston, hunter Roddy Glover was following a wildlife trail through the woods when a tawny-colored animal caught his eye. At first he thought it was a deer, but he soon realized it was some kind of cat. As the cat came closer, Glover saw that it was much too big for a bobcat, the only wild feline known to roam that area. He lay low in the ferns to watch.
“Then—it kinda shocked the hell out of me—I realized it was a mountain lion. And she had a kitten with her.” Mountain lions were extirpated from New England by early in the last century, often hunted for the bounty placed on their tails. For decades, sightings of mountain lions roaming in New England’s north woods have been steeped in controversy. Those who believe in the presence of mountain lions have often been considered apt to believe in Bigfoot. Today, most wildlife biologists agree that there is increasing evidence of mountain lions in the area. But whether or not the animals —also known as catamounts, pumas, cougars or panthers—are breeding here remains unclear. As for Roddy Glover, he wanted proof that he wasn’t crazy. Seeing tracks left by the female mountain lion in the mud, he called state biologist Keel Kemper, who arrived at the Monmouth, Maine site within the hour, looked at the tracks, took photos and made a plaster cast. “This is a big cat print,” says Kemper of his plaster cast. “But if I had only this cat print, I would be foolish to say there was no doubt it was a mountain lion. I have Roddy Glover, experienced outdoorsman, who watched the cats for at least five minutes, from only 50 yards away. I’m about as convinced as I could be.” A week later at the same location, Glover found another set of what he believed were mountain lion prints left on railroad ties. This time a biologist who had done mountain lion research out west came. “Yep,” Glover quotes the biologist as saying, “those are mountain lion prints.” In over 60 years, this is the first sighting of a mountain lion roaming free through New England’s forests that is officially confirmed by accompanying physical proof (the last was a lion killed in northwest Maine in 1938). But there have been a number of credible sightings and several other tantalizing occurrences in recent years. In 1997, near Massachusetts’ Quabbin Reservoir, wildlife tracker John McCarter found a deposit of large scat covered with debris in the fashion of a mountain lion. McCarter, and tracker and teacher Paul Rezendes, sent the scat to a DNA sequencing lab at New York’s Wildlife Conservation Society. Those tests showed it to be mountain lion scat, a finding later confirmed by a second qualified DNA testing lab at Virginia Polytechnic Institute. Rezendes, author of Tracking and the Art of Seeing, has been following up on McCarter’s finding: “We’re going to make more of a concerted effort to find something. Now we’re going to set a track line out this winter... We will be following up any credible sightings. Anybody who has tracks, scat, anything like that that sounds credible—if we find something, I’ll be ready to go.” Massachusetts state biologists accept that the scat was probably mountain lion, but question the animal’s origins. “One could speculate that a captive cougar escaped or was released in the area and survived long enough to feed on a beaver and leave this tangible evidence,” wrote Massachusetts wildlife biologist Susan Langlois. Throughout northern Maine, Vermont and New Hampshire, an increasing number of sightings by very credible and experienced outdoorsmen have been reported. None of these have been confirmed by physical evidence, however. Some observers have followed tracks in the snow. In the Brattleboro-Putney area of southern Vermont, in the winter of 2000, a number of independent sightings were reported over a series of several days. But to date, nothing has been confirmed. 
“We have a semiformal policy of taking all sightings and all calls,” says Vermont state wildlife biologist Doug Blodgett. “We’re documenting everything we get, including misidentifications. We’re putting it on a data base and we’re keeping track.” Blodgett says that when biologists follow up on many of the calls, the animal turns out to have been a bobcat, a feral house cat, a coyote—or even a deer. Because of the similarity in coat color, it’s quite common for the most experienced people to mistake a deer in a low-crawl for a mountain lion. “I had that experience myself once,” says Blodgett. “One night I was certain I was seeing a mountain lion, but when I checked the tracks it was a deer.” Biologists across the continent tell similar stories of mistaken identities. Nevertheless, many regional experts agree that, on at least a few occasions, observers are reporting valid sightings. But, says Blodgett, it is not clear where the lions are coming from. “We have a lot of people who are quite cranked up about this, who really want to believe that the lions are here,” he says. “Some have speculated that there have been some intentional releases. They’re commercially available—you can buy them on the Internet.” I keep hearing that there are mountain lions in the park behind our house, but I've never seen one - which is just fine with me. Do you know what to do if you meet one? Some good news: Santa Monica has banned styrofoam and other non-recyclable plastics for businesses like fast-food restaurants. This stuff is virtually indestructible and accumulates on beaches and elsewhere. It's made of petroleum, so it's getting more expensive, and people are naturally turning to cups and plates made from corn starch, sugar cane, and other biodegradable materials. Some bad news: this summer we'll probably see lots of wildfires in the western USA. It's just as dry as it was in 2002, which was the worst wildfire season ever, and the sky here was full of smoke and ash for days - it looked like Hell. Of course, wildfires may not be all bad in the grand scheme of things. It's hard to tell... hard to tell what the "grand scheme of things" really is! That's part of what I'm trying to figure out in this diary. From Thin Ice, where the author was interviewing climate scientist Lonnie Thompson: " There was a time about 3.5 billion years ago when there was no oxygen in the atmosphere, and a kind of anaerobic bacteria occupied all the oceans of the world. They produced oxygen just by living, the same way we produce CO2, and they multiplied until they occupied every part of the earth. But the oxygen they gave off was poisonous to them, so they eventually changed the atmosphere to the point that they killed themselves off [....]" "I think humans are like every other organism: they try to maximize the system to their advantage, take every resource they can use to make whatever it is they're trying to produce, and they will keep doing it until that resource is no longer available to them. Our economic system is based on that: maximum production. And every country in the world wants to be like the Western countries - same lifestyle, same air-conditioning, same TVs. We have fine universities, we train people to think; but actions speak louder than words, and as long as we stay on this path I don't think we're any smarter than bacteria. We're behaving the same way they did. You can do that until you exceed the boundaries of the system, and then it will collapse." "You mean the whole system will fall apart?" I asked. 
"Oh no, the system will keep working. I'm very optimistic about the system. The system will take care of itself. This is like a cancer growing on the surface. The planet will react in a way as to stop that cancer." "The earth will stay healthy?" "Yes. It might be big storms; it might be wiping out Bangladesh or Africa; the world will go on, and there will be creatures that will multiply in that new world. Plants like CO2; maybe the world will be dominated by plants. Whenever a creature exceeds its resource base, its population collapses - think of lemmings - and I think that's ultimately what will happen to humans." June 28, 2006 Yesterday I talked to Danny Stevenson and Alissa Crans about representations of Lie 2-algebras and Lie 2-groups. We were mainly battling with the puzzle of giving our 2-category of 2-vector spaces a nice tensor product and hom. The last few days I've also been talking with James Dolan about the McKay correspondence and ambidextrous adjunctions between 2-vector spaces. In the first reported case of fatal hilarity, the Greek fortune-teller Calchas is supposed to have died of laughter on the day he was predicted to die, when the prediction didn't seem to be coming true. Google has a new mirror site. Make sure to type in your entry backwards. You can find many other strange things on Wikipedia: June 30, 2006 I'm gradually gearing up for my trip to Shanghai on Monday July 3rd. This may be my last diary entry for a while, but Lisa has found an apartment with broadband internet access - apparently quite common there - so I should be back in business once we get set up. It'll be an adventure! My 2003 summer in Hong Kong was great, so I'm not scared, but it will be quite something living in such a huge city. We'll be near Fudan University, not the heart of town. You can see it near the top of this map. Somehow I got a subscription to Cell magazine. One issue had a neat article on the genetic origins of left-right asymmetry in vertebrates, which I've summarized in the Addendum to week73. But even more cool are these two articles: The first article describes how bacteria communicate using chemicals. For example, in a process called quorum sensing, bacteria emit traces of a chemical, which rises to a level they can detect only when their population density reaches a certain threshold. The chemical then affects their behavior! For example, a bioluminescent bacterium in the ocean called Vibrio harveyi glows only when it reaches a certain density - and in an extreme case of this phenomenon, a glowing patch of the Indian Ocean 15,000 square kilometers in size was visible from space for three nights! But the phenomenon of quorum sensing has recently turned out to be far more common in less exotic circumstances. It causes "competence" in Streptococcus, a state in which bacteria can pick up DNA molecules and change their genetic properties. It also controls virulence factor secretion, biofilm formation and sporulation. These are various spooky tricks bacteria like to play.... The article describes many other forms of inter-bacterial communication. For example, bacteria in water send water-insoluble molecules to each other in little packages called vesicles. And, some of these packages are fatal to bacteria of other species! As if this weren't enough, it turns out that advanced life forms like us - eukaryotes, to be precise - are able to pass on traits not just using their DNA and RNA, but also using a trick called histone methylation. 
In eukaryotic cells, DNA is wound around proteins called histones. Adding one, two or three methyl groups to these proteins controls whether and how a gene will be expressed in a given cell. This is one way cells in our body get to be very different even though they have the same DNA! It's quite complicated and interesting - and in a surprising twist reminiscent of Lamarckian evolution, a mother can apparently do histone methylation to genes in her child's embryo! So, traits picked up during her life, encoded not in DNA or RNA but in histone methylation, can be passed on to her offspring. In short, besides genetics we must also study epigenetics - the science of reversible but heritable changes in gene expression that can occur without any changes in our DNA! Evolution is like a game that life has been playing for billions of years. The strategies in play are surely far deeper than we've been able to fathom so far. We're like kids watching grand masters play chess. We should continue to expect surprises.... For my July 2006 diary, go here. The [...] spirit will soar eagerly into the heavenly spheres, but rarely stays there: it returns to the workaday world: it insists that ideals shall be translated into action, precept into practice, the spiritual applied to the physical, the abstract to the concrete. - Hugh Schonfield © 2006 John Baez
Take the 2-minute tour × Recently there have been some interesting questions on standard QM and especially on uncertainty principle and I enjoyed reviewing these basic concepts. And I came to realize I have an interesting question of my own. I guess the answer should be known but I wasn't able to resolve the problem myself so I hope it's not entirely trivial. So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$ $$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$$ (here argument $(t)$ denotes expectation in evolved state $\psi(t)$, or equivalently for operator in Heisenberg picture). I tried to get general bounds from Schrodinger equation and decomposition into energy eigenstates, etc. but I don't see any way of proving this. I know this statement is true for a free Gaussian wave packet. In this case we obtain equality, in fact (because the packet stays Gaussian and because it minimizes HUP). I believe this is in fact the best we can get and for other distributions we would obtain strict inequality. So, to summarize the questions 1. Is the statement true? 2. If so, how does one prove it? And is there an intuitive way to see it is true? share|improve this question Why do you think it would apply? You can't really make a measurement that way (either you measure at $t=0$ or at $t=T$, but never both), so you basically have two different $\psi$ solutions. Both will obey the principle independently. Am I misunderstanding your question? –  Sklivvz Mar 19 '11 at 16:06 If your wavepacket, to begin with, saturates the uncertainty bound (i.e. is a coherent state) then this is trivially true - coherent states stay coherent under time-evolution. If your initial state is not a coherent state then the evolution is clearly more involved, but in that case you could expand your arbitrary initial state in the coherent state basis - so that this inequality (as established for coherent states) could still be used, component by component to show that it remains true for the arbitrary state. Or perhaps not. Chug and plug, baby, chug and plug. –  user346 Mar 19 '11 at 16:08 I don’t think the statement is true. Put the minimum uncertainty wave packet at t=0. What was the uncertainty before, at t<0? it was larger so it has been decreasing before t=0. More generally, you cannot derive time asymmetric statements from time symmetric laws. –  user566 Mar 19 '11 at 16:39 @Moshe: there are loopholes in your argument: there might be no minimum for a given system (just infimum) and if there is minimum, it might be preserved in evolution (as for free Gaussian). Still, nice idea and I'll try to use it to find a counterexample in some simple system. As for the second statement: right, so I am sure you'll tell me that we can't obtain second law too... just kiddin', I don't want to get into this discussion that made Boltzmann commit suicide :) –  Marek Mar 19 '11 at 16:47 @Marek, in any example you can solve the Schrodinger equation, you'll find that the quantity you are interested in grows away from t=0, both towards the past and towards the future, this is guaranteed by symmetry. As for the general statement, it is also true for the second law. You cannot derive time asymmetric conclusions from time symmetric laws without extra input, this is just basic logic, nothing to do with physics. The whole discussion is what is that extra input and where does it come in. 
–  user566 Mar 19 '11 at 16:57 5 Answers 5 up vote 36 down vote accepted The question asks about the time dependence of the function $$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle \langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$ $$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad \Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad \langle\psi(t)|\psi(t)\rangle=1.$$ We will here use the Schroedinger picture where operators are constant in time, while the kets and bras are evolving. Edit: Spurred by remarks of Moshe R. and Ted Bunn let us add that (under assumption (1) below) the Schroedinger equation itself is invariant under the time reversal operator $\hat{T}$, which is a conjugated linear operator, so that $$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$ Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that $$[\hat{T},\hat{H}]=0.\qquad (1)$$ Moreover, if $$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$ is a solution to the Schroedinger equation in a certain basis $|n\rangle$, then $$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$ will also be a solution to the Schroedinger equation with a time reflected function $f(-t)$. Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$. Consider a simple harmonic oscillator Hamiltonian with the zero point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience. $$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2 -\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$ where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator. Let us put the constants $m=\hbar=\omega=1$ to one for simplicity. Then the annihilation and creation operators are $$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad \hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad [\hat{a},\hat{a}^{\dagger}]=1,$$ or conversely, $$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad \hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad [\hat{x},\hat{p}]=i,$$ $$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad \hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$ Consider Fock space $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$ such that $\hat{a}|0\rangle = 0$. Consider initial state $$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad \langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$ $$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$ $$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t} = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$ $$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad \langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$ $$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad \langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$ because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore, $$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$ which is non-constant in time, and we are done. 
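As a quick numerical cross-check of this $f(t)$, here is a minimal sketch in Python with numpy (separate from the argument above; the Fock basis truncation at $N_{max}=40$ is an arbitrary but harmless choice, since the state only involves $|0\rangle$ and $|2\rangle$ and $\hat{x}^2$, $\hat{p}^2$ only connect states two levels apart):

```python
# Numerical check of f(t) = <(dx)^2><(dp)^2> for psi(0) = (|0> + |2>)/sqrt(2),
# with m = hbar = omega = 1, in a truncated Fock space.
import numpy as np

N_max = 40                                    # truncation of the Fock basis (arbitrary, ample here)
n = np.arange(N_max)
a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a.conj().T + a) / np.sqrt(2)             # position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)        # momentum operator
H = a.conj().T @ a                            # Hamiltonian N = a^dagger a (zero-point energy dropped)
E = np.diag(H)                                # H is diagonal in the Fock basis

psi0 = np.zeros(N_max, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)            # initial state (|0> + |2>)/sqrt(2)

for t in np.linspace(0.0, np.pi, 5):
    psi_t = np.exp(-1j * E * t) * psi0        # |psi(t)> = exp(-iHt)|psi(0)>
    var_x = (psi_t.conj() @ x @ x @ psi_t).real - (psi_t.conj() @ x @ psi_t).real**2
    var_p = (psi_t.conj() @ p @ p @ psi_t).real - (psi_t.conj() @ p @ psi_t).real**2
    exact = 9/4 - 0.5 * np.cos(2*t)**2
    print(f"t = {t:5.3f}   <(dx)^2><(dp)^2> = {var_x * var_p:.6f}   9/4 - cos^2(2t)/2 = {exact:.6f}")
```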
Or alternatively, we can complete the counter-example without the use of above time reversal argument by simply performing an appropriate time translation $t\to t-t_0$. share|improve this answer I was thinking of trying to work out some harmonic oscillator example myself (because I have few further questions and it seems like simplest system where something nontrivial is happening) but you've beat me to it. Thanks! –  Marek Mar 20 '11 at 18:57 Although there is one thing that bugs me. I believe the calculation is essentially right, however we have $f(0) = 1/4$ which means it minimizes HUP (unless I am misunderstanding your conventions) and therefore $\psi(0)$ would have to be Gaussian -- a contradiction with your initial state. Is there a little mistake in calculation somewhere or do I have a flaw in my argument? –  Marek Mar 20 '11 at 19:02 Okay, I fixed it (I hope) :) –  Marek Mar 20 '11 at 19:20 Dear @Marek: I agree, there was powers of $2$ missing in three formulas. –  Qmechanic Mar 20 '11 at 19:32 One thing that's worth noting: you say that the Schrodinger equation is not invariant under time reversal. It's true that simply substituting $t\to -t$ is not invariant, but simultaneously changing $t\to -t$ and complex conjugating $\psi\to\psi^*$ does leave the equation invariant. That means that, for every solution $\psi(t)$, there is a corresponding solution $\psi^*(-t)$ that "looks like" the same state going backwards in time (and in particular has the same expectation values for all operators). That's what people mean when they say that the Schrodinger equation has time-reversal symmetry. –  Ted Bunn Mar 21 '11 at 13:02 No. Here's a simple example where it shrinks: You have a particle that has a 50% chance of being on the left going right, and a 50% chance of being on the right going left. This has a macroscopic error in both position and momentum. If you wait until it passes half way, it has a 100% chance of being in the middle. This has a microscopic error in position. There will also only be a microscopic change in momentum. (I'm not entirely sure of this as the possibilities hit each other, but if you just look right before that, or make them miss a little, it still works.) As such, the error in position decreased significantly, but the error in momentum stayed about the same. share|improve this answer A physical way of seeing this is that the phase space volume of a system is preserved. Hamiltonian mechanics preserves the volume of a system on its energy surface H = E, which in quantum mechanics corresponds to the Schrodinger equation. The phase space volume on the energy surface of phase space is composed of units of volume $\hbar^{2n}$ for the momentum and position variables plus the $\hbar$ of the energy $i\hbar\partial\psi/\partial t~=~H\psi$. This is then preserved. Any growth in the uncertainty $\Delta p\Delta q~=~\hbar/2$ would then imply the growth in the phase space volume of the system. This would then mean there is some dissipative process, or the quantum dynamics is replaced by some master equation with a thermal or environmental loss of some form. For a pure unitary evolution however the phase space volume of the system, or equivalently the $Tr\rho$ and $Tr\rho^2$ are constant. This means the uncertainty relationship is a Fourier transform between complementary observables which preserve an area $\propto~\hbar$. share|improve this answer -1, this is completely irrelevant to my question. 
I am interested just in pure states and for those phase volume is always zero and so trivially conserved. But this doesn't give any information on the behavior of uncertainty. –  Marek Mar 21 '11 at 13:20 The volume a system occupies in phase space defines entropy as $S~=~k~log(\Omega)$ for $\Omega$. The von Neumann entropy $$ S~=~-k~Tr~\rho log(\rho). $$ A mixed state has each element of $\rho~=~1/n$ and the trace is $\sum(1/n)log(1/n)$ $~=~log(n)$. A pure state then occupies a phase space region that is normalized to unit volume --- not zero. –  Lawrence B. Crowell Mar 21 '11 at 14:45 Think in terms of Harmonic Functions and their Maximum Principle (or Mean Value Theorem). For simplicity (and, in fact, without loss of generality), let's just think in terms of a free particle, ie, $V(x,y,z) = 0$. When the Potential vanishes, the Schrödinger equation is nothing but a Laplace one (or Poisson equation, if you want to put a source term). And, in this case, you can apply the Mean Value Theorem (or the Maximum Principle) and get a result pertaining your question: in this situation you saturate the equality. Now, if you have a Potential, you can think in terms of a Laplace-Beltrami operator: all you need to do is 'absorb' the Potential in the Kinetic term via a Jacobi Metric: $\tilde{\mathrm{g}} = 2\, (E - V)\, \mathrm{g}$. (Note this is just a conformal transformation of the original metric in your problem.) And, once this is done, you can just turn the same crank we did above, ie, we reduced the problem to the same one as above. ;-) I hope this helps a bit. share|improve this answer I am sorry but I don't see how this is related to uncertainty and time evolution. Could you explain that? –  Marek Mar 19 '11 at 20:51 @Marek: the point was made explicit by Qmechanic, in his answer above. If you apply what i said in the Schrödinger picture, you get evolving states whose magnitude is always bound by the Mean Value Theorem. (If we were talking about bounded operators, this could be made rigorous with a bit of Functional Analysis.) –  Daniel Mar 20 '11 at 19:32 The Schrodinger equation is time-symmetric. The answer is therefore No. From all of the comments, I feel like I must be oversimplifying or missing something, but I can't see what. share|improve this answer I'm with you, but it is probably useful for Marek to see for himself how this works in the simple example to be convinced of the general statement. –  user566 Mar 19 '11 at 17:19 Yes, this seems like a good argument to settle the original question. But it brings in further questions :) In particular, Moshe's solution (minimum growing towards both future and past) is a kind of bounce. But on both sides of that bounce I suppose the inequality would be satisfied. In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". Or to put it more clearly: I should've asked more general question of what does the uncertainty as a function of time look like... We now know it need not be monotone but perhaps it has other nice properties. –  Marek Mar 19 '11 at 18:07 I can't make heads or tails of this sentence: In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". I don't know if anything interesting in general can be said about the time evolution of $\Delta x\,\Delta p$, other than of course that it's bounded below. –  Ted Bunn Mar 19 '11 at 18:09 @Ted: ah, that was indeed not very clear. 
The best rephrasing is probably this: whether there exists a time $t_0$ such that the inequality holds for all times $t \geq t_0$. But it is a different question. –  Marek Mar 19 '11 at 20:15 I think that @Marek and I are in complete agreement. Just to be explicit, let me answer @Carl's question about how we know $\Delta p$ is constant. Marek is right: For a free particle, $p^n$ commutes with the Hamiltonian, so all expectation values $\langle p^n\rangle$ are constant. So $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2$ is constant. (Indeed, the entire probability distribution for $p$ is constant in time.) As a result, a Gaussian wave packet for a free particle does not remain minimum-uncertainty for all time. It spreads in real space while remaining the same in momentum space. –  Ted Bunn Mar 20 '11 at 14:05
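To make that last point concrete, here is a minimal numerical sketch (Python with numpy; $\hbar = m = 1$, and the grid and packet width are arbitrary illustrative choices) that evolves an initially minimum-uncertainty Gaussian freely and prints $\Delta x$, $\Delta p$ and their product: $\Delta p$ stays fixed while $\Delta x$, and hence the product, grows.

```python
# Free evolution of a minimum-uncertainty Gaussian, done exactly in momentum space.
import numpy as np

hbar, m = 1.0, 1.0
N, L = 4096, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)           # wavenumbers; p = hbar * k

sigma = 1.0                                        # initial width, so dx(0) = 1, dp(0) = 0.5
psi0 = np.exp(-x**2 / (4 * sigma**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)      # normalize on the grid

def moments(psi):
    """Return (dx, dp) for a wavefunction sampled on the grid."""
    prob_x = np.abs(psi)**2
    mean_x = np.sum(x * prob_x) * dx
    var_x = np.sum((x - mean_x)**2 * prob_x) * dx
    phi = np.fft.fft(psi)                          # momentum-space amplitude (unnormalized)
    prob_p = np.abs(phi)**2
    prob_p /= np.sum(prob_p)
    p = hbar * k
    mean_p = np.sum(p * prob_p)
    var_p = np.sum((p - mean_p)**2 * prob_p)
    return np.sqrt(var_x), np.sqrt(var_p)

for t in [0.0, 1.0, 2.0, 5.0]:
    # Free evolution is exact in momentum space: multiply by exp(-i p^2 t / (2 m hbar)).
    phi_t = np.fft.fft(psi0) * np.exp(-1j * (hbar * k)**2 * t / (2 * m * hbar))
    psi_t = np.fft.ifft(phi_t)
    dx_t, dp_t = moments(psi_t)
    print(f"t = {t:4.1f}   dx = {dx_t:.4f}   dp = {dp_t:.4f}   dx*dp = {dx_t*dp_t:.4f}")
```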
Take the 2-minute tour × We start from an abstract state vector $ \newcommand{\ket}[1]{|{#1}\rangle} \ket{\Psi}$ as a description of a state of a system and the Schrödinger equation in the following form $$ \DeclareMathOperator{\dif}{d \!} \newcommand{\ramuno}{\mathrm{i}} \newcommand{\exponent}{\mathrm{e}} \newcommand{\bra}[1]{\langle{#1}|} \newcommand{\braket}[2]{\langle{#1}|{#2}\rangle} \newcommand{\bracket}[3]{\langle{#1}|{#2}|{#3}\rangle} \newcommand{\linop}[1]{\hat{#1}} \newcommand{\dpd}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\dod}[2]{\frac{\dif{#1}}{\dif{#2}}} \ramuno \hbar \dod{}{t} \ket{\Psi(t)} = \hat{H} \ket{\Psi(t)} \, . \quad (1) $$ Now if we move to position representation of the state vector what will happen to the Schrödinger equation? In Mathematics for Quantum Mechanics: An Introductory Survey of Operators, Eigenvalues And Linear Vector Spaces by John David Jackson I found the following information (pp. 77-78). By taking an inner product of both sides of (1) with $\ket{x}$ and using the resolution of identity $\linop{I} = \int\nolimits_{-\infty}^{+\infty} \ket{x'} \bra{x'} \dif{x'}$ on the right side we obtain $$ \ramuno \hbar \dod{}{t} \braket{x}{\Psi(t)} = \int\limits_{-\infty}^{+\infty} \bracket{ x }{ \linop{H} }{ x' } \braket{ x' }{ \Psi(t) } \dif{x'} \, . $$ We then introduce a wave function $\Psi(x,t) \equiv \braket{x}{\Psi(t)}$ and if I understand correctly $\bracket{ x }{ \linop{H} }{ x' }$ is also replaced by a function $h(x, x')$ which will lead us to $$ \ramuno \hbar \dod{}{t} \Psi(x, t) = \int\limits_{-\infty}^{+\infty} h(x, x') \Psi(x', t) \dif{x'} \, . $$ Now we are one step away from the familiar Schrödinger equation in position representation: we need Hamiltonian operator in position representation $\linop{H}(x, \frac{\hbar}{\ramuno} \dod{}{x})$ to be given by $$ \linop{H}(x, \frac{\hbar}{\ramuno} \dod{}{x}) \Psi(x, t) = \int\limits_{-\infty}^{+\infty} h(x, x') \Psi(x', t) \dif{x'} \, . $$ Author claims (p. 44) that For our purposes the general linear operator $K$ can be written in the explicit form $$ g = K f \rightarrow g(x) = \int\limits_{a}^{b} k(x,x') f(x') \dif{x'} \quad(2) $$ The function $k(x,x')$ is called the kernel of the operator $K$. It is not that I do not trust the author, but since my knowledge of mathematics is not great and I've never seen something like (2) before, I'm confused with this "for our purposes". What does it actually mean? Is (2) true for any linear operator or for a particular kind of linear operators, say, for self-adjoint linear operators on Hilbert spaces? share|improve this question 2 Answers 2 up vote 3 down vote accepted Let me first say that I think Tobias Kienzler has done a great job of discussing the intuition behind your question in going from finite to infinite dimensions. I'll, instead, attempt to address the mathematical content of Jackson's statements. My basic claim will be that Whether you are working in finite or infinite dimension, writing the Schrodinger equation in a specific basis only involves making definitions. To see this clearly without having to worry about possible mathematical subtleties, let's first consider Finite dimension In this case, we can be certain that there exists an orthnormal basis $\{\ket{n}\}_{n=1, \dots N}$ for the Hilbert space $\mathcal H$. 
Now for any state $|\psi(t)\rangle$ we define the so-called matrix elements of the state and Hamiltonian as follows: \begin{align} \psi_n(t) = \langle n|\psi(t)\rangle, \qquad H_{mn} = \langle m|H|n\rangle \end{align} Now take the inner product of both sides of the Schrodinger equation with $\langle n|$, and use linearity of the inner product and derivative to write \begin{align} \langle n|\frac{d}{dt}|\psi(t)\rangle=\frac{d}{dt}\langle n|\psi(t)\rangle=\frac{d\psi_n}{dt}(t) \end{align} The fact that our basis is orthonormal tells us that we have the resolution of the identity \begin{align} I = \sum_{m=1}^N|m\rangle\langle m| \end{align} So after taking the inner product with $\langle n|$, the right-hand side of Schrodinger's equation can be written as follows: \begin{align} \langle n|H|\psi(t)\rangle = \sum_{m=1}^N\langle n|H|m\rangle\langle m|\psi(t)\rangle = \sum_{m=1}^N H_{nm}\psi_m(t) \end{align} Putting this all together (and restoring the overall factor of $i\hbar$ from the left-hand side of the Schrodinger equation) gives the Schrodinger equation in the $\{|n\rangle\}$ basis: \begin{align} i\hbar\frac{d\psi_n}{dt}(t) = \sum_{m=1}^N H_{nm}\psi_m(t) \end{align} Infinite dimension With an infinite number of dimensions, we can choose to write the Schrodinger equation either in a discrete (countable) basis for the Hilbert space $\mathcal H$, which always exists by the way since quantum mechanical Hilbert spaces all possess a countable, orthonormal basis, or we can choose a continuous "basis" like the position "basis" in which to write the equation. I put basis in quotes here because the position space wavefunctions are not actually elements of the Hilbert space since they are not square-integrable functions. In the case of a countable orthonormal basis, the computation performed above for writing the Schrodinger equation in a basis follows through in precisely the same way with the replacement of $N$ with $\infty$ everywhere. In the case of the "basis" $\{|x\rangle\}_{x\in\mathbb R}$, the computation above carries through almost in the exact same way (as your question essentially shows), except the definitions we made in the beginning change slightly. In particular, we define functions $\psi:\mathbb R^2\to\mathbb C$ and $h:\mathbb R^2\to\mathbb C$ by \begin{align} \psi(x,t) = \langle x|\psi(t)\rangle, \qquad h(x,x') = \langle x|H|x'\rangle \end{align} Then the position space representation of the Schrodinger equation follows by taking the inner product of both sides of the equation with $\langle x|$ and using the resolution of the identity \begin{align} I = \int_{-\infty}^\infty dx'\, |x'\rangle\langle x'| \end{align} The only real mathematical subtleties you have to worry about in this case are exactly what sorts of objects the symbols $|x\rangle$ represent (since they are not in the Hilbert space) and in what sense one can write a resolution of the identity for such objects. But once you have taken care of these issues, the conversion of the Schrodinger equation into its expression in a particular "representation" is just a matter of making the appropriate definitions. share|improve this answer Think of a linear operator as taking the limit of an infinitely large matrix with discrete indices to one with continuous "indices" called coordinates. $K$ would denote the matrix, while $k(x,x')$ is what one writes as $K_{x x'}$ for matrices. When you apply the linear operator to a function, it's like multiplying a matrix with a vector, only that instead of summation over the discrete second index you now integrate over the continuous second coordinate, i.e.
$\sum_{x'}K_{x x'}f_{x'} \to \int dx'\, k(x,x')f(x')$. There's a bit more to it, of course: going from $\{1,2,3,...,n\}$ via $\mathbb N$ to $\mathbb R$ as the "index" set involves some mathematical messiness, but in most cases it just works fine without any special consideration.
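A tiny numerical illustration of this analogy (a sketch in Python with numpy, with a Gaussian smoothing kernel chosen purely as an example): discretizing $k(x,x')$ on a grid turns the integral operator of equation (2) into an ordinary matrix, and the matrix-vector product, with the grid spacing playing the role of $\dif{x'}$, approximates the integral.

```python
# Discretize g(x) = \int k(x, x') f(x') dx' as a matrix-vector product.
import numpy as np

N = 400
xg = np.linspace(-5, 5, N)
dxp = xg[1] - xg[0]

def kernel(x, xp):
    return np.exp(-(x - xp)**2)                # a Gaussian smoothing kernel, as an example

f = np.sin(xg)                                 # a test function f(x')

# "Matrix" with continuous indices: K[i, j] ~ k(x_i, x'_j) * dx'
K = kernel(xg[:, None], xg[None, :]) * dxp
g_matrix = K @ f                               # analogue of g_i = sum_j K_ij f_j

# For this kernel the integral over the whole real line has a closed form:
# \int exp(-(x - x')^2) sin(x') dx' = sqrt(pi) exp(-1/4) sin(x)
g_exact = np.sqrt(np.pi) * np.exp(-0.25) * np.sin(xg)

central = np.abs(xg) < 2                       # stay away from the truncated grid edges
print(np.max(np.abs(g_matrix - g_exact)[central]))   # small; limited only by the finite grid
```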
Quantum mechanics 2007 Schools Wikipedia Selection. Related subjects: General Physics Quantum mechanics is a fundamental branch of theoretical physics that replaces classical mechanics and classical electromagnetism at the atomic and subatomic levels. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, particle physics, and nuclear physics. Along with general relativity, quantum mechanics is one of the pillars of modern physics. The term quantum (Latin, "how much") refers to discrete units that the theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1, at right). The discovery that waves could be measured in particle-like small packets of energy called quanta led to the branch of physics that deals with atomic and subatomic systems which we today call Quantum Mechanics. The foundations of quantum mechanics were established during the first half of the twentieth century by Werner Heisenberg, Max Planck, Louis de Broglie, Niels Bohr, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Albert Einstein, Wolfgang Pauli and others. Some fundamental aspects of the theory are still actively studied. Quantum mechanics is a more fundamental theory than Newtonian mechanics and classical electromagnetism, in the sense that it provides accurate and precise descriptions for many phenomena that these "classical" theories simply cannot explain on the atomic and subatomic level. It is necessary to use quantum mechanics to understand the behaviour of systems at atomic length scales and smaller. For example, if Newtonian mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus. However, in the natural world the electron normally remains in a stable orbit around a nucleus — seemingly defying classical electromagnetism. Since the early days of quantum theory, physicists have made many attempts to combine it with the other highly successful theory of the twentieth century, Albert Einstein's General Theory of Relativity. While quantum mechanics is entirely consistent with special relativity, serious problems emerge when one tries to join the quantum laws with general relativity, a more elaborate description of spacetime which incorporates gravitation. Resolving these inconsistencies has been a major goal of twentieth- and twenty-first-century physics. Despite the proposal of many novel ideas, the unification of quantum mechanics—which reigns in the domain of the very small—and general relativity—a superb description of the very large—remains a tantalizing future possibility. (See quantum gravity, string theory.) Because everything is composed of quantum-mechanical particles, the laws of classical physics must approximate the laws of quantum mechanics in the appropriate limit. This is often expressed by saying that in case of large quantum numbers quantum mechanics "reduces" to classical mechanics and classical electromagnetism. This requirement is called the correspondence, or classical limit. Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions about probability distributions; that is, the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the instant of the measurement. 
There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" meaning "own" in German). In the everyday world, it is natural and intuitive to think of everything being in an eigenstate of every observable. Everything appears to have a definite position, a definite momentum, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values for the position or momentum of a certain particle in a given space in a finite time, but, rather, it only provides a range of probabilities of where that particle might be. Therefore, it became necessary to use different words for a) the state of something having an uncertainty relation and b) a state that has a definite value. The latter is called the "eigenstate" of the property being measured. A concrete example will be useful here. Let us consider a free particle. In quantum mechanics, there is wave-particle duality so the properties of the particle can be described as a wave. Therefore, its quantum state can be represented as a wave, of arbitrary shape and extending over all of space, called a wavefunction. The position and momentum of the particle are observables. The Uncertainty Principle of quantum mechanics states that both the position and the momentum cannot simultaneously be known with infinite precision at the same time. However, we can measure just the position alone of a moving free particle creating an eigenstate of position with a wavefunction that is very large at a particular position x, and zero everywhere else. If we perform a position measurement on such a wavefunction, we will obtain the result x with 100% probability. In other words, we will know the position of the free particle. This is called an eigenstate of position. If the particle is in an eigenstate of position then its momentum is completely unknown. An eigenstate of momentum, on the other hand, has the form of a plane wave. It can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. If the particle is in an eigenstate of momentum then its position is completely blurred out. Usually, a system will not be in an eigenstate of whatever observable we are interested in. However, if we measure the observable, the wavefunction will instantaneously be an eigenstate of that observable. This process is known as wavefunction collapse. It involves expanding the system under study to include the measurement device, so that a detailed quantum calculation would no longer be feasible and a classical description must be used. If we know the wavefunction at the instant before the measurement, we will be able to compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in our previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When we measure the position of the particle, it is impossible for us to predict with certainty the result that we will obtain. It is probable, but not certain, that it will be near x0, where the amplitude of the wavefunction is large. After we perform the measurement, obtaining some result x, the wavefunction collapses into a position eigenstate centered at x. 
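As a concrete illustration of the probabilistic statement above (a minimal sketch in Python with numpy, not part of the article itself; the packet centre x0, its width and the sample size are arbitrary choices), one can draw simulated position measurements from |ψ(x)|² for a Gaussian wave packet and check that the outcomes cluster near x0 without being certain:

```python
# Simulated position measurements on a Gaussian wave packet centred at x0.
import numpy as np

rng = np.random.default_rng(1)
x0, width = 2.0, 0.5                           # centre and width of the packet (arbitrary)
x = np.linspace(-3, 7, 2000)

psi = np.exp(-(x - x0)**2 / (4 * width**2))    # a Gaussian wave packet (unnormalized)
prob = np.abs(psi)**2
prob /= prob.sum()                             # Born-rule probabilities on the grid

outcomes = rng.choice(x, size=10000, p=prob)   # simulated measurement results
print("mean outcome:", outcomes.mean())        # close to x0 = 2.0
print("fraction within one width of x0:",
      np.mean(np.abs(outcomes - x0) < width))  # roughly 0.68 for a Gaussian
```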
Philosophical consequences

Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement (hence his famous quote "God does not play dice with the universe"). He held that there should be a local hidden variable theory underlying quantum mechanics and that consequently the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the EPR paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local hidden variable theories. Experiments have been taken as confirming that quantum mechanics is correct and the real world cannot be described in terms of such hidden variables. "Potential loopholes" in the experiments, however, mean that the question is still not quite settled.

In 1900, the German physicist Max Planck introduced the idea that energy is quantized, in order to derive a formula for the observed frequency dependence of the energy emitted by a black body. In 1905, Einstein explained the photoelectric effect by postulating that light energy comes in quanta called photons. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement as it effectively removed the possibility of black body radiation attaining infinite energy if it were to be explained in terms of wave forms only. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules. In 1924, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. These theories, though successful, were strictly phenomenological: there was no rigorous justification for quantization (aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta). They are collectively known as the old quantum theory.

Modern quantum mechanics was born in 1925, when the German physicist Heisenberg developed matrix mechanics and the Austrian physicist Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation. Schrödinger subsequently showed that the two approaches were equivalent. Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation took shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. He also pioneered the use of operator theory, including the influential bra-ket notation, as described in his famous 1930 textbook. During the same period, the Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand and remain widely used.

The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech and John Slater, into various theories such as molecular orbital theory and valence bond theory.
Beginning in 1927, attempts were made to apply quantum mechanics to fields rather than single particles, resulting in what are known as quantum field theories. Early workers in this area included Dirac, Pauli, Weisskopf, and Jordan. This area of research culminated in the formulation of quantum electrodynamics by Feynman, Dyson, Schwinger, and Tomonaga during the 1940s. Quantum electrodynamics is a quantum theory of electrons, positrons, and the electromagnetic field, and served as a role model for subsequent quantum field theories. The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975. Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Founding experiments

• Thomas Young's double-slit experiment demonstrating the wave nature of light (c. 1805)
• Henri Becquerel discovers radioactivity (1896)
• Joseph John Thomson's cathode ray tube experiments (discovers the electron and its negative charge) (1897)
• The study of black body radiation between 1850 and 1900, which could not be explained without quantum concepts
• The photoelectric effect: Einstein explained this in 1905 (and later received a Nobel prize for it) using the concept of photons, particles of light with quantized energy
• Robert Millikan's oil-drop experiment, which showed that electric charge occurs as quanta (whole units) (1909)
• Ernest Rutherford's gold foil experiment disproved the plum pudding model of the atom, which suggested that the mass and positive charge of the atom are almost uniformly distributed (1911)
• Otto Stern and Walther Gerlach conduct the Stern-Gerlach experiment, which demonstrates the quantized nature of particle spin (1922)
• Clinton Davisson and Lester Germer demonstrate the wave nature of the electron in the electron diffraction experiment (1927)
• Clyde L. Cowan and Frederick Reines confirm the existence of the neutrino in the neutrino experiment (1955)
• Claus Jönsson's double-slit experiment with electrons (1961)
• The quantum Hall effect, discovered in 1980 by Klaus von Klitzing. The quantized version of the Hall effect has allowed for the definition of a new practical standard for electrical resistance and for an extremely precise independent determination of the fine structure constant.
Microscopic reversibility From Wikipedia, the free encyclopedia

The principle of microscopic reversibility in physics and chemistry is twofold:
• First, it states that the microscopic detailed dynamics of particles and fields is time-reversible because the microscopic equations of motion are symmetric with respect to inversion in time (T-symmetry);
• Second, it relates to the statistical description of the kinetics of macroscopic or mesoscopic systems as an ensemble of elementary processes: collisions, elementary transitions or reactions. For these processes, the consequence of the microscopic T-symmetry is: Corresponding to every individual process there is a reverse process, and in a state of equilibrium the average rate of every process is equal to the average rate of its reverse process.[1]

History of microscopic reversibility[edit]

The idea of microscopic reversibility was born together with physical kinetics. In 1872, Ludwig Boltzmann represented the kinetics of gases as a statistical ensemble of elementary collisions.[2] The equations of mechanics are reversible in time; hence the reverse collisions obey the same laws. This reversibility of collisions is the first example of microreversibility. According to Boltzmann, this microreversibility implies the principle of detailed balance for collisions: in the equilibrium ensemble each collision is balanced by its reverse collision.[2] These ideas of Boltzmann were analyzed in detail and generalized by Richard C. Tolman.[3]

In chemistry, J. H. van't Hoff (1884)[4] came up with the idea that equilibrium has a dynamical nature and is the result of a balance between the forward and backward reaction rates. He did not study reaction mechanisms with many elementary reactions and could not formulate the principle of detailed balance for complex reactions. In 1901, Rudolf Wegscheider introduced the principle of detailed balance for complex chemical reactions.[5] He found that for a complex reaction the principle of detailed balance implies important and non-trivial relations between the rate constants of the different reactions. In particular, he demonstrated that irreversible reaction cycles are impossible, and that for reversible cycles the product of the rate constants of the forward reactions (in the "clockwise" direction) is equal to the product of the rate constants of the reverse reactions (in the "anticlockwise" direction). Lars Onsager (1931) used these relations in his well known work,[6] without direct citation but with the following remark: "Here, however, the chemists are accustomed to impose a very interesting additional restriction, namely: when the equilibrium is reached each individual reaction must balance itself. They require that the transition A → B must take place just as frequently as the reverse transition B → A etc."

The quantum theory of emission and absorption developed by Albert Einstein (1916, 1917)[7] gives an example of the application of microreversibility and detailed balance to the development of a new branch of kinetic theory. Sometimes the principle of detailed balance is formulated in the narrow sense, for chemical reactions only,[8] but in the history of physics it has had broader use: it was invented for collisions, used for the emission and absorption of quanta, for transport processes[9] and for many other phenomena. In its modern form, the principle of microreversibility was published by Lewis (1925).[1] The classical textbooks[3][10] present the full theory and many examples of its applications.
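The Wegscheider cycle condition mentioned above can be checked numerically. The following is a minimal, hypothetical Python sketch (the three-species cycle, the equilibrium concentrations and all rate constants are invented for illustration): reverse constants are chosen so that every step satisfies detailed balance, and the product of forward constants around the cycle then automatically equals the product of reverse constants.

import numpy as np

# Hypothetical reversible cycle A <-> B <-> C <-> A.
c_eq = np.array([1.0, 2.0, 0.5])   # equilibrium concentrations of A, B, C (arbitrary units)
k_f = np.array([3.0, 1.5, 4.0])    # forward constants: A->B, B->C, C->A
# Detailed balance for each step: k_f[i] * c_eq[i] = k_r[i] * c_eq[(i+1) % 3]
k_r = k_f * c_eq / np.roll(c_eq, -1)

# Wegscheider identity for the cycle: the two products coincide.
print(np.prod(k_f), np.prod(k_r))

Violating this identity while keeping every step reversible would drive a steady circulating flux around the cycle even at "equilibrium", which is exactly what Wegscheider's conditions exclude.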
Time-reversibility of dynamics[edit] The Newton and the Schrödinger equations in the absence of the macroscopic magnetic fields and in the inertial frame of reference are T-invariant: if X(t) is a solution then X(-t) is also a solution (here X is the vector of all dynamic variables, including all the coordinates of particles for the Newton equations and the wave function in the configuration space for the Schrödinger equation). There are two sources of the violation of this rule: • First, if dynamics depend on a pseudovector like the magnetic field or the rotation angular speed in the rotating frame then the T-symmetry does not hold. • Second, in microphysics of weak interaction the T-symmetry may be violated and only the combined CPT symmetry holds. Macroscopic consequences of the time-reversibility of dynamics[edit] In physics and chemistry, there are two main macroscopic consequences of the time-reversibility of microscopic dynamics: the principle of detailed balance and the Onsager reciprocal relations. The statistical description of the macroscopic process as an ensemble of the elementary indivisible events (collisions) was invented by L. Boltzmann and formalised in the Boltzmann equation. He discovered that the time-reversibility of the Newtonian dynamics leads to the detailed balance for collision: in equilibrium collisions are equilibrated by their reverse collisions. This principle allowed Boltzmann to deduce simple and nice formula for entropy production and prove his famous H-theorem.[2] In this way, microscopic reversibility was used to prove macroscopic irreversibility and convergence of ensembles of molecules to their thermodynamic equilibria. Another macroscopic consequence of microscopic reversibility is the symmetry of kinetic coefficients, the so-called reciprocal relations. The reciprocal relations were discovered in the 19th century by Thomson and Helmholtz for some phenomena but the general theory was proposed by Lars Onsager in 1931.[6] He found also the connection between the reciprocal relations and detailed balance. For the equations of the law of mass action the reciprocal relations appear in the linear approximation near equilibrium as a consequence of the detailed balance conditions. According to the reciprocal relations, the damped oscillations in homogeneous closed systems near thermodynamic equilibria are impossible because the spectrum of symmetric operators is real. Therefore, the relaxation to equilibrium in such a system is monotone if it is sufficiently close to the equilibrium. 1. ^ a b Lewis, G.N. (1925) A new principle of equilibrium, PNAS March 1, 1925 vol. 11 no. 3 179-183. 2. ^ a b c Boltzmann, L. (1964), Lectures on gas theory, Berkeley, CA, USA: U. of California Press. 3. ^ a b Tolman, R. C. (1938). The Principles of Statistical Mechanics. Oxford University Press, London, UK. 4. ^ Van't Hoff, J.H. Etudes de dynamique chimique. Frederic Muller, Amsterdam, 1884. 5. ^ Wegscheider, R. (1901) Über simultane Gleichgewichte und die Beziehungen zwischen Thermodynamik und Reactionskinetik homogener Systeme, Monatshefte für Chemie / Chemical Monthly 32(8), 849--906. 6. ^ a b Onsager, L. (1931), Reciprocal relations in irreversible processes. I, Phys. Rev. 37, 405-426. 7. ^ Einstein, A. (1917). Zur Quantentheorie der Strahlung [=On the quantum theory of radiation], Physikalische Zeitschrift 18 (1917), 121-128. English translation: D. ter Haar (1967): The Old Quantum Theory. Pergamon Press, pp. 167-183. 8. ^ Principle of microscopic reversibility. 
Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2012. 9. ^ Gorban, A.N., Sargsyan, H.P., and Wahab, H.A. Quasichemical Models of Multicomponent Nonlinear Diffusion, Mathematical Modelling of Natural Phenomena, Volume 6 / Issue 05, (2011), 184−262. 10. ^ Lifshitz, E. M. and Pitaevskii, L. P. (1981). Physical Kinetics. London: Pergamon. ISBN 0-08-026480-8, ISBN 0-7506-2635-6. Vol. 10 of the Course of Theoretical Physics (3rd ed.).
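The closing claim of the section above, that detailed balance rules out damped oscillations near equilibrium because the relevant operator has a real spectrum, can be illustrated numerically. This is a hypothetical Python sketch (the number of states, the equilibrium distribution and the "conductances" are random toy values): it builds a reversible rate matrix and verifies that a similarity transform makes it symmetric, so all its eigenvalues are real.

import numpy as np

rng = np.random.default_rng(1)
n = 5
pi = rng.random(n); pi /= pi.sum()          # hypothetical equilibrium distribution

# Reversible rate matrix: K[i, j] (i != j) is the rate constant for j -> i,
# constructed so that detailed balance K[i, j] * pi[j] = K[j, i] * pi[i] holds.
S = rng.random((n, n)); S = (S + S.T) / 2   # symmetric "conductances"
K = S * pi[:, None]                          # K[i, j] = S[i, j] * pi[i]
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, -K.sum(axis=0))          # columns sum to zero (conservation)

# D^{-1/2} K D^{1/2} with D = diag(pi) is symmetric, so the spectrum of K is real:
# relaxation to equilibrium is monotone, with no damped oscillations.
D = np.diag(np.sqrt(pi))
K_sym = np.linalg.inv(D) @ K @ D
print(np.allclose(K_sym, K_sym.T), np.max(np.abs(np.linalg.eigvals(K).imag)))

The first printed value is True and the second is numerically zero, which is the content of the reciprocal-relations argument: near equilibrium, detailed balance forbids oscillatory relaxation in a homogeneous closed system.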
Tables for Volume B Reciprocal space Edited by U. Shmueli International Tables for Crystallography (2010). Vol. B, ch. 5.2, p. 651

Section 5.2.12. Multislice
A. F. Moodie (Department of Applied Physics, Royal Melbourne Institute of Technology, 124 La Trobe Street, Melbourne, Victoria 3000, Australia), J. M. Cowley (Arizona State University, Box 871504, Department of Physics and Astronomy, Tempe, AZ 85287–1504, USA) and P. Goodman (School of Physics, University of Melbourne, Parkville, Australia)

5.2.12. Multislice

Multislice derives from a formulation that generates a solution in the form of a Born series (Cowley & Moodie, 1962). The crystal is treated as a series of scattering planes on to which the potential from the slice between z and z + \Delta z is projected, separated by vacuum gaps \Delta z, not necessarily corresponding to any planes or spacings of the material structure. The phase change in the electron beam produced by passage through a slice is given by

q = \exp\left\{-i\sigma \int_{z_1}^{z_1+\Delta z} \varphi(x, y, z)\, \mathrm{d}z\right\},

and the phase distribution in the xy plane resulting from propagation between slices is given by

p = \exp\left\{\frac{ik(x^2 + y^2)}{2\Delta z}\right\},

where the wavefront has been approximated by a paraboloid. Thus, the wavefunction for the (n + 1)th slice is given by

\psi_{n+1} = \left[\psi_n * \exp\left\{\frac{ik(x^2 + y^2)}{2\Delta z}\right\}\right]\exp\{-i\sigma\varphi_{n+1}\} = [\psi_n * p]\, q,

where * is the convolution operator (Cowley, 1981). This equation can be regarded as the finite-difference form of the Schrödinger equation derived by Feynman's (1948) method. The calculation need be correct only to first order in \Delta z. Writing the convolution in the equation above explicitly, and expanding in a Taylor series, the integrals can be evaluated to recover the Schrödinger-type equation referred to above (Goodman & Moodie, 1974). If the equation for \psi_{n+1} is Fourier transformed with respect to x and y, the resulting recurrence relation is of the form

U_{n+1} = [U_n P] * Q_n,

where P and Q are obtained by Fourier transforming p and q above. This form is convenient for numerical work since, for a perfect crystal, it is: discrete, as distinct from the continuous real-space form of the equation [see IT C (2004)]; numerically stable at least up to 5000 beams; fast; and only requires a computer memory proportional to the number of beams (Goodman & Moodie, 1974).

International Tables for Crystallography (2004). Vol. C. Mathematical, Physical and Chemical Tables, edited by E. Prince. Dordrecht: Kluwer Academic Publishers.
Cowley, J. M. (1981). Diffraction Physics, pp. 26–30. Amsterdam: North-Holland.
Cowley, J. M. & Moodie, A. F. (1962). The scattering of electrons by thin crystals. J. Phys. Soc. Jpn, 17, Suppl. B11, 86–91.
Feynman, R. (1948). Space–time approach to non-relativistic quantum mechanics. Rev. Mod. Phys. 20, 367–387.
Goodman, P. & Moodie, A. F. (1974). Numerical evaluation of N-beam wave functions in electron scattering by the multislice method. Acta Cryst. A30, 280–290.
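The recurrence above maps directly onto a short numerical loop. The following is a minimal, hypothetical Python sketch, not a reference implementation of the method: the grid size, wavelength, slice thickness, interaction constant and projected potential are all invented toy values, and the propagation step is applied as a multiplication in reciprocal space, which is equivalent to the real-space convolution with p.

import numpy as np

# Toy multislice loop: psi_{n+1} = F^{-1}[ F(psi_n) * P ] * q  (propagate, then transmit).
n = 256
a = 20.0                                  # side of the computational cell (arbitrary units)
wavelength = 0.025                        # toy electron wavelength (same units)
dz = 2.0                                  # slice thickness
sigma = 0.9                               # toy interaction constant
n_slices = 10

x = np.linspace(0, a, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_proj = 5.0 * np.exp(-((X - a / 2) ** 2 + (Y - a / 2) ** 2) / 2.0)  # projected slice potential
q = np.exp(-1j * sigma * phi_proj)        # phase grating (transmission function)

k = np.fft.fftfreq(n, d=a / n)            # spatial frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
P = np.exp(-1j * np.pi * wavelength * dz * (KX ** 2 + KY ** 2))        # Fresnel propagator

psi = np.ones((n, n), dtype=complex)      # incident plane wave
for _ in range(n_slices):
    psi = q * np.fft.ifft2(np.fft.fft2(psi) * P)

intensity = np.abs(np.fft.fft2(psi)) ** 2  # diffraction pattern of the exit wave
print(intensity.max())

In a real calculation the slice potentials and propagator are derived from the actual specimen and beam parameters and convergence is checked against slice thickness and the number of beams; the toy loop only illustrates the [U_n P] * Q_n structure of the recurrence.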
Friday, June 08, 2018 Myths of Copenhagen Discussing the Copenhagen interpretation of quantum mechanics with Adam Becker and Jim Baggott makes me think it would be worthwhile setting down how I see it. I don’t claim that this is necessarily the “right” way to look at Copenhagen (there probably isn’t a right way), and I’m conscious that what Bohr wrote and said is often hard to fathom – not, I think, because his thinking was vague, but because he struggled to express it through the limited medium of language. Many people have pored over Bohr’s words more closely than I have, and they might find different interpretations. So if anyone takes issue with what I say here, please do tell me. Part of the problem too, as Adam said (and reiterates in his excellent new book What is Real?, is that there isn’t really a “Copenhagen interpretation”. I think James Cushing makes a good case that it was largely a retrospective invention of Heisenberg’s, quite possibly as an attempt to rehabilitate himself into the physics community after the war. As I say in Beyond Weird, my feeling is that when we talk about “Copenhagen”, we ought really to stick as close as we can to Bohr – not just for consistency but also because he was the most careful of the Copenhagenist thinkers. It’s perhaps for this reason too that I think there are misconceptions about the Copenhagen interpretation. The first is that it denies any reality beyond what we can measure: that it is anti-realist. I see no reason to think this. People might read that into Bohr’s famous words: “There is no quantum world. There is only an abstract quantum physical description.” But it seems to me that the meaning here is quite clear: quantum mechanics does not describe a physical reality. We cannot mine it to discover “bits of the world”, nor “histories of the world”. Quantum mechanics is the formal apparatus that allows us to make predictions about the world. There is nothing in that formulation, however, that denies the existence of some underlying stratum in which phenomena take place that produce the outcomes quantum mechanics enables us to predict. Indeed, what Bohr goes on to say makes this perfectly clear: “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” (Here you can see the influence of Kant on Bohr, who read him.) Here Bohr explicitly acknowledges the existence of “nature” – an underlying reality – but doesn’t think we can get at it, beyond what we can observe. This is what I like about Copenhagen. I don’t think that Bohr is necessarily right to abandon a quest to probe beneath the theory’s capacity to predict, but I think he is right to caution that nothing in quantum mechanics obviously permits us to make assumptions about that. Once we accept the Born rule, which makes the wavefunction a probability density distribution, we are forced to recognize that. Here’s the next fallacy about the Copenhagen interpretation: that it insists classical physics, such as governs measuring apparatus, works according to fundamentally different rules from quantum physics, and we just have to accept that sharp division. Again, I understand why it looks as though Bohr might be saying that. But what he’s really saying is that measurements exist only in the classical realm. Only there can we claim definitive knowledge of some quantum state of affairs – what the position of an electron “is”, say. This split, then, is epistemic: knowledge is classical (because we are). 
Bohr didn’t see any prospect of that ever being otherwise. What’s often forgotten is how absolute the distinction seemed in Bohr’s day between the atomic/microscopic and the macroscopic. Schrödinger, who was of course no Copenhagenist, made that clear in What Is Life?, which expresses not the slightest notion that we could ever see individual molecules and follow their behaviour. To him, as to Bohr, we must describe the microscopic world in necessarily statistical terms, and it would have seemed absurd to imagine we would ever point to this or that molecule. Bohr’s comments about the quantum/classical divide reflect this mindset. It’s a great shame he hasn’t been around to see it dissolve – to see us probe the mesoscale and even manipulate single atoms and photons. It would have been great to know what he would have made of it. But I don’t believe there is any reason to suppose that, as is sometimes said, he felt that quantum mechanics just had to “stop working” at some particular scale, and classical physics take over. And of course today we have absolutely no reason to suppose that happens. On the contrary, the theory of decoherence (pioneered by the late Dieter Zeh) can go an awfully long way to deconstructing and demystifying measurement. It’s enabled us to chip away at Bohr’s overly pessimistic epistemological quantum-classical divide, both theoretically and experimentally, and understand a great deal about how classical rules emerge from quantum. Some think it has in fact pretty much solved the “measurement problem”, but I think that’s too optimistic, for the reasons below. But I don’t see anything in those developments that conflicts with Copenhagen. After all, one of the pioneers of such developments, Anton Zeilinger, would describe himself (I’m reliably told) as basically a Copenhagenist. Some will object to this that Bohr was so vague that his ideas can be made to fit anything. But I believe that, in this much at least, apparent conflicts with work on decoherence come from not attending carefully enough to what Bohr said. (I think Henrik Zinkernagel’s discussions of “what Bohr said” are useful here and here.) I think that in fact these recent developments have helped to refine Bohr’s picture until we can see more clearly what it really boils down to. Bohr saw measurement as an irreversible process, in the sense that once you had classical knowledge about an outcome, that outcome could not be undone. From the perspective of decoherence, this is now viewed in terms that sound a little like the Second Law: measurement entails the entanglement of quantum object and environment, which, as it proceeds and spreads, becomes for all practical purposes irreversible because you can’t hope to untangle it again. (We know that in some special cases where you can keep track, recoherence is possible, much as it is possible in principle to “undo” the Second Law if you keep track of all the interactions and collisions.) This decoherence remains a “fully quantum” process, even while we can see how it gives rise to classical-like behaviour (via Zurek’s quantum Darwinism, for example). But what the theory can’t then do, as Roland Omnès has pointed out, is explain uniqueness of outcomes: why only one particular outcome is (classically) observed. In my view, that is the right way to put into more specific and updated language what Bohr was driving at with his insistence on the classicality of measurement. 
Omnès is content to posit uniqueness of outcomes as an axiom: he thinks we have a complete theory of measurement that amounts to “decoherence + uniqueness”. The Everett interpretation, of course, ditches uniqueness, on the grounds of “why add an extra, arbitrary axiom?” To my mind, and for the reasons explained in my book, I think this leads to a “cognitive instability”, to purloin Sean Carroll’s useful phrase, in our ability to explain the world. So the incoherence that Adam sees in Copenhagen, I see in the Everett view (albeit for different reasons). But this then is the value I see in Copenhagen: if we stick with it through the theory of decoherence, it takes us to the crux of the matter: the part it just can’t explain, which is uniqueness of outcomes. And by that I mean (irreversible) uniqueness of our knowledge – better known as facts. What the Copenhagenists called collapse or reduction of the wavefunction boils down to the emergence of facts about the world. And because I think they – at least, Bohr – always saw wavefunction collapse in epistemic terms, there is a consistency to this. So Copenhagen doesn’t solve the problem, but it leads us to the right question (indeed, the question that confronts the Everettian view too). One might say that the Bohmian interpretation solves that issue, because it is a realist model: the facts are there all along, albeit hidden from us. I can see the attraction of that. My problem with it is that the solution comes by fiat – one puts in the hidden facts from the outset, and then explains all the potential problems with that by fiat too: by devising a form of nonlocality that does everything you need it to, without any real physical basis, and insisting that this type of nonlocality just – well, just is. It is ingenious, and sometimes useful, but it doesn’t seem to me that you satisfactorily solve a problem by building the solution into the axioms. I don’t understand the Bohmian model well enough to know how it deals with issues of contextuality and the apparent “non-universality of facts” (as this paper by Caslav Brukner points out), but on the face of it those seem to pose problems for a realist viewpoint too. It seems to me that a currently very fruitful way to approach quantum mechanics is to think about the issue of why the answers the world gives us seem to depend on the questions we ask (à la John Wheeler’s “20 Questions” analogy). And I feel that Bohr helps point us in that direction, and without any need to suppose some mystical “effect of consciousness on physical reality”. He didn’t have all the answers – but we do him no favours by misrepresenting his questions. A tyrannical imposition of the Copenhagen position is bad for quantum mechanics, but Copenhagen itself is not the problem. Adam Becker said... Hi Philip, I like this, but I don't agree with everything you've said here. On the issue of contextuality, which you're right to emphasize, I think that Bohr himself gives one possible good answer: he talked about "the impossibility of any sharp distinction between the behaviour of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear." When dealing with very small objects, our large measurement devices are necessarily clumsy, by virtue of their largeness. So contextuality can be seen as a purely mechanical effect. 
This is, for example, how it works in the Bohmian interpretation: there, contextuality is guaranteed by the interaction between a measurement device and the thing it's measuring. But more generally, I am loath to ascribe positions to Bohr. He really was unclear. His students said he spoke of a complementarity between clarity and truth, and thus Bohr's seeming incomprehensibility was merely the result of his concern for the truth. I think that you're giving one possible reading of Bohr, but it's certainly not clear that this is the single best way to read Bohr. Another possible reading is that he really did see a divide between the world of the classical and the world of the quantum, and was simply unclear about where that divide might lie. And another possibility is that he changed his mind a lot, or was simply (and understandably) confused. As Jim said on Twitter, Mara Beller's Quantum Dialogue is particularly good on this subject. I also don't think it's right to say that knowledge is classical. (I see the connection to Kant, but I don't think that helps much.) It's simply not true that human experience of the everyday world is necessarily classical, any more than it's necessarily Aristotelian or necessarily astrological. Classical physics has plenty of profoundly counterintuitive consequences. Think of the first time you held a spinning bicycle wheel and tried to move its axis, the way it kicked back at you in an unexpected way. Or, even more fundamental, the idea that an object in motion tends to stay in motion -- certainly not an idea that lines up with everyday experience on Earth! If there's a way that all human minds universally organize perceptions (a thesis I'm somewhat skeptical of to begin with), it sure ain't classical. This is a great deal of what was at stake in the debates between Einstein and Bohr: Bohr (the conservative) insisted that classical concepts like energy and momentum were required for thinking about the outcomes of experiments, whereas Einstein (the radical, as always) insisted that we could develop new concepts that would give a greater understanding of what was actually happening in the quantum realm, just as spacetime replaced the concepts of individual space and time. (to be continued, I hit the character limit for comments...) Adam Becker said... (Continuing where I left off in my previous comment.) I'll end with a question: it sounds to me like what you're really defending here, aside from a particular reading of Bohr, is the idea that a psi-epistemic viewpoint (the wave function is knowledge about something, rather than a real thing in the world) is not incompatible with a broadly realist stance about the world, including the world of very small things. Is that your position? If so, I agree with you! But I am somewhat more sympathetic to psi-ontic views (the wave function is something real, be it physical or lawlike). This is, in part, because the PBR theorem is a problem for the most straightforward kinds of psi-epistemic positions. (Matt Leifer, a psi-epistemicist and realist, has a good post on this here.) Furthermore, being psi-epistemic doesn't automatically give you a way out of the kind of nonlocality that Bell's theorem demands, especially if you still want to be a realist of some stripe. So given the choice between "the wave function is my information about something, I don't know what that something is, but that something is nonlocal" and "the wave function is a thing, I know what it is, and it's nonlocal," I'll probably choose the latter. 
(That's not the choice, and there are both psi-epistemic and psi-ontic ways to avoid nonlocality. But those ways out have other unpleasant consequences of their own.) Philip Ball said... Thanks so much for these comments Adam. Clearly it's not possible to say for sure which of us is right about Bohr; mine is just the generous interpretation. I do agree that it would seem unwise to regard his writings as monolithic and consistent. I'm not sure what you mean by saying that human experience is not necessarily classical. By that I certainly didn't mean that human intuitions must fit with what classical physics tells us, because as you say there can be plenty that is counter-intuitive about classical physics too. I mean that all perception, and thus measurement in Bohr's sense, ultimately takes place at the classical, macroscopic limit, where decoherence has kicked in. This is, I think, what Bohr was saying, though now our understanding of decoherence allows us to express it in clearer terms. I'm surprised to find that there are notions that quantum contextuality can be explained as a purely mechanical effect. To me that smacks of Heisenberg's (misconceived) gamma-ray microscope. My understanding is that Kochen and Specker (and indeed Bell, though he published it later than them) established that contextuality is as fundamental as nonlocality: just as we can be confident that any "deeper" theory below QM will have to be nonlocal, it will have to be contextual. Your phrasing "the wave function is my information about something, I don't know what that something is, but that something is nonlocal" actually sums up very nicely the way I tend to lean, though I wouldn't be dogmatic about it. To my mind, that encapsulates what we can currently say with confidence about QM (and you can probably see why I think Copenhagen at least starts us, imperfectly, down that road). In contrast, to say "The wave function is a thing, I know what it is" strikes me as an article of faith right now - it may turn out to be true, but we can't be sure about that right now. Perhaps I'm just more conservative! I do like it that you make Einstein the radical! It's so tiresome how he is so often portrayed as the stick-in-the-mud about QM. C Adams said... Thanks for your interesting piece and opening a discussion. A couple of comments. “to “undo” the Second Law if you keep track of all the interactions and collisions” You do not undo the 2nd law. The second law (increase in entropy) only kicks in if you throw the information away, but if you can recover it, then you have not really thrown it away. “But what the theory can’t then do, as Roland Omnès has pointed out, is explain uniqueness of outcomes” One way to view this is that there was only ever going to be one outcome, it is just that we did not know which. Classical physics says we do not know if because we do not have sufficient information. Quantum mechanics says that we still do not know, even if we have all the information that it is possible to have. In summary, quantum mechanics is simply a formulation of what we can say about the World, it does make any claims on whether the World exists. Clara, once known as Nemo said... Philip, Adam, you speculate about a possible deeper theory for the wave function, one that is nonlocal and contextual. For a number of years, I have followed what Schiller proposes on this topic at . In his talk slides, he indeed presents a deeper theory, where psi is an average of a crossing density due to fluctuating strands. 
That model for psi is nonlocal and it is contextual, following Bohr, Copenhagen and decoherence rather closely. My own view is somewhat different - I do not think that psi is a "thing" - but the proposal keeps me wondering whether Bohr might have been right after all. Jim said... If it's not too late... I think it's important to be clear on what the PBR theorem is saying (and, indeed, Matt Leifer's blog post and subsequent paper on this are models of clarity). PBR's no-go theorem does not (it cannot) rule out 'pure' psi-epistemic interpretations of the wavefunction. It does rule out epistemic interpretations in which what we see is presumed to result from the statistical behaviour of underlying real (ontic) physical states. And, whilst a pure psi-epistemic interpretation is anti-realist, this shouldn't be taken to imply that advocates of such an interpretation are out-and-out empiricists or deny at least some aspects of scientific realism. I believe it is possible to hold to a position which accepts objective reality (the Moon is still there when nobody looks); entity realism ("if you can spray them then they are real"); and yet still question whether the QM *representation* and particularly the wavefunction corresponds (directly or statistically) with the real physical states of such entities. This, I believe, is essentially Carlo Rovelli's position. Then, as I said in a tweet, once you start doubting the QM representation, you can't help but wonder if we've been kidding ourselves all these years with classical mechanics... Chris Dewdney said... I don’t agree that the “solution comes by fiat”. In deBB theory, as Bohm presented it in 1952, one reformulates the quantum theory, in a form as close as possible to classical theory, by simply rewriting the complex Schrödinger equation as two, real equations. One of these equations is similar to a classical Hamilton-Jacobi equation (with an extra quantum potential) and the other to a continuity equation for the probability density. The pair of equations is just quantum theory rewritten in a pseudo-classical form, a form that allows one to maintain a very natural definition of particle trajectories. Every particle has a definite (but unknown and uncontrollable) trajectory and the trajectories accounts for the definite results of measurement. Regarding nonlocality, the essential point is that it concerns many particles (there is no non locality for a single particle) - for which the Schrödinger equation determines the evolution of a configuration space wave function. This is often overlooked when the emphasis of discussion is on single particle quantum mechanics (it might be said to be deceptively spacious). In deBB theory the velocity of the individual particles depends on the multi-particle configuration space wave function and hence on all of the particle coordinates at once. In the Hamilton-Jacobi form of this configuration space description one naturally finds nonlocal quantum forces - without adding them artificially. So, in deBB theory one does not proceed by “devising a form of nonlocality ….without any real physical basis…” instead, nonlocality arises naturally within the simple mathematical reformulation of the theory. 
I saw this amazingly clearly when, working in JP Vigier’s lab in the Institut Henri Poincaré in Paris and sitting at what had been de Broglie’s desk, I first calculated the trajectories of a pair of spin one-half particles, in a spin zero singlet state, undergoing spin measurements in Bohm’s version of the EPR experiment (see the 1986 paper - Spin and non-locality in quantum mechanics C Dewdney, PR Holland, A Kyprianidis, JP Vigier Nature 336 (6199), 536). It became very clear from these calculations that what happened to one of the particles depended not only on where the other particle was, but also on which measurements were carried out at the location of the distant particle. Contextuality is also a natural part of deBB theory. In deBB theory the value assigned to an individual physical observable depends not only on the set of hidden variables but also on the wave function. Consequently, the value revealed by a measurement depends on the hidden variables, the initial wave function and the measurement Hamiltonian (describing all of the measurements taking place). (see Constraints on quantum hidden-variables and the Bohm theory. C Dewdney 1992 J. Phys. A: Math. Gen. 25 3615). When I was a PhD student at Birkbeck in the late 70’s, I remember Bohm saying to me in discussion, that one could imagine the counterfactual historical scenario in which de Broglie’s theory had been accepted at the outset (there were no conclusive arguments against it). Nonlocally correlated particle trajectories would then have been recognised as a natural and irreducible aspect of quantum theory from the beginning and there would have been no measurement problem relying on observers for its “resolution”. Further imagine, he said, that after some 25 years it was suggested that one should remove “by fiat” the idea of particle trajectories from quantum theory, this would have looked very strange indeed, and would have been rejected by the physics community, as it would immediately have given rise to the host of interpretational difficulties with which we all too familiar today. From this point of view, one could argue that all of the interpretational difficulties within quantum theory arise as a result of the removal from physics of the idea of particle trajectories – “by fiat”. Jayarava Attwood said... Understanding quantum is a two-sided problem. Firstly there is the weirdness of quantum mechanics and secondly the weirdness of "understanding". Sometimes we focus on the quantum side of things without considering what knowledge even is. As you say Kant was entirely pessimistic about knowledge of reality. But Kant was writing before the development of modern science and I think we can safely say that he was not entirely right. We can infer a great deal about reality from comparing notes about how we experience it. On the scales of mass, length, and energy where classical physics is a good description, we understand reality quite well. It so happens that we are somewhere in the middle of the scales of mass, length, and energy spanning 60-100 orders of magnitude. We experience about as much of reality as we see of the EM spectrum. The quantum problem is that we cannot *experience* quantum phenomena. Thus knowledge about reality at the quantum level is always going to be abstract. The same is, less obviously, true on the largest scales. I may grasp that there is EM radiation I cannot see or feel, but do I really understand the *reality* of radio waves or X-rays? 
I can probably with some revision cope with Maxwell's equations. But so what? I still have no experience of radio waves because none of my senses can detect them. When they X-rayed my broken wrist last year, it gave me no sense impressions that I might develop into knowledge. Along with the breakdown of classical physics, there is a breakdown of the classical concept of knowledge at the scales where quantum descriptions rule. We talk about having images of atoms, for example, but there are many layers of technology between us and the object. If I see a static image of an atom, for example, I tacitly "translate" that into a classical object and believe I understand what I am seeing. But in many ways this picture is false. It tells me nothing about atoms generally if I measure the intensity of electric fields around an atom frozen close to absolute zero and plot them on a graph. The map is not the territory, let alone the pixelated image of the map. Philip Ball said... Thanks very much Chris for those very helpful comments. I don't by any means reject the deBB formalism, any more than I reject most of the other interpretations. Indeed, I can see that it has virtues. My impression is that many quantum physicists don't engage with it simply because it seems like a lot of effort for no real gain - they end up with exactly the same predictions as the standard quantum formalism (by design, of course!). I think that the two state vector formalism of Aharonov suffers from neglect for the same reasons, though I appreciate that both can, in certain circumstances, offer a useful way of looking at quantum problems that is not easily evident in other viewpoints. The question, of course, is whether one should deduce any actual ontology from these reformulations of quantum theory. I don't understand the formalism well enough to be sure, but my understanding is that to connect with the standard quantum formalism you do need at least one extra assumption in the deBB approach (aside from those hidden variables) - the quantum equilibrium hypothesis. And the fact that nonlocality comes out quite naturally doesn't in itself seem an obvious gain over standard QM. I shouldn't say that this nonlocality is "put in by hand", but rather, that it seems to me the deBB formalism just ushers the nonlocality of standard QM into a particular place that then allows one to create a realist, deterministic description of the rest. That's an interesting way to do things, but I'm not convinced it is obviously an advance. All the same, the counterfactual history you suggest is indeed an interesting one, and I fully buy James Cushing's argument that things could look very different now if the Copenhagen interpretation had not, for what ever reason, got in first. I suspect people would then just be railing against the "absurd Bohmian tyranny" and demanding that a Copenhagenist view be admitted to the textbooks too... Nicophil said... E.T. Jaynes wrote : "" Although Bohr's whole way of thinking was very different from Einstein's, it does not follow that either was wrong. Einstein's thinking is always on the ontological level traditional in physics; trying to describe the realities of Nature. Bohr's thinking is always on the epistemological level, describing not reality but only our information about reality. The peculiar flavor of his language arises from the absence of all words with any ontological import. 
Those who, like Einstein, tried to read ontological meaning into Bohr's statements, were quite unable to comprehend his message. This applies not only to his critics but equally to his disciples, who undoubtedly embarrassed Bohr considerably by offering such ontological explanations as [...] the remark of Pauli quoted above, which might be rendered loosely as "Not only are you and I ignorant of x and p ; Nature herself does not know what they are" [or "Eine prinzipielle Unbestimmtheit, nicht nur Unbekanntheit"]. We routinely commit the Mind Projection Fallacy: supposing that creations of our own imagination are real properties of Nature, or that our own ignorance signifies some indecision on the part of Nature. It is then impossible to agree on the proper place of information in physics. This muddying up of the distinction between reality and our knowledge of reality is carried to the point where we find some otherwise rational physicists, on the basis of the Bell inequality experiments, asserting the objective reality of probabilities, while denying the objective reality of atoms !"" Adam Becker said... Belatedly throwing in a few final comments: Philip, I believe you're correct about dBB needing a quantum equilibrium hypothesis regarding the initial conditions. But basically every cosmological theory requires an initial condition that's somehow special, so that's not a problem that's unique to dBB (though of course it doesn't mean it's not a problem at all). To clarify what I was saying about contextuality in dBB: in that theory, position is a privileged observable. Measurements of all other observables boil down to measurements of position in dBB, and the outcome of position measurements depend not only on the hidden variables, but on the wave function and the interaction Hamiltonian between the measurement apparatus and the thing being measured, just as Chris said. That's how contextuality works in dBB, or at least that's my understanding of it. So in dBB, contextuality really does come down to a mechanical disturbance of a particle's position by the measurement device — even when position isn't one of the observables being measured. Also, a historical note: Bell published his proof of contextuality before Kochen and Specker. His paper was written in 1964 and published in 1966 (the two year delay was due to an editorial snafu). Kochen and Specker's result was published in 1967. So it should really be called the Bell-Kochen-Specker theorem. Finally, regarding PBR: you can definitely hold the view that the wave function is our information about some underlying reality, you just need to give up on one of the assumptions of the PBR theorem to do that. Leifer, for example, lifts the ban on retrocausality, which seems like a reasonable move to me in light of the difficulties here. But as usual, just because I'm saying Leifer's view is reasonable doesn't mean I subscribe to it (or to dBB, or MWI). PS. Jim, I don't understand how Rovelli's interpretation is realist. But that's probably my fault for not reading enough of his work. Steve said... There is an interesting new take on this.
Maybe just maybe there really is a quantum/classical divide. I know this is a comment on an article from a while ago but I would love to hear some feedback on this. Best of all it would end this MWI nonsense for good as would GOC Dr. Ball?
Laws and Approximations as Deduction Steps

When a scientific conception is "put in order", it is obvious that in conceptual terms principles precede laws. Having analyzed the nature of principles, it is reasonable to turn to the status of laws. It is necessary to distinguish hypothetical (deductive) and inductive laws. Consideration of experimental facts allows us to identify laws in the process of induction. They are called experimental laws. But induction is not an experiment. Strictly speaking, the laws discovered in the process of induction should be called inductive rather than experimental laws.

Scientists often talk about universal laws. Universal physical and chemical laws are expressed by the so-called "universal conditional statement". Its simplest type is written as follows:

\forall x\, (P(x) \rightarrow Q(x)).

This expression is read as follows: for any x, if x has the attribute P, then it also has the attribute Q. The law expresses a relationship between the attributes of all the x_i, in connection with which symbolic variables are used; the index i runs through the integer values from 1 to n, where n is the total number of x. In formulations of laws, the concept of a class of elements is always used. Elements form a certain class if they have at least one common feature. This condition is met here. The law deals with classes of elements, and not with an arbitrary sample from their number. If it refused to consider classes of elements, science would acquire an exceptionally unusual form. The history of its development unambiguously testifies to the expediency of considering precisely classes of elements. But this circumstance makes the status of scientific laws far from obvious.

Above we considered the concept of a universal law. As it turns out, it is not entirely satisfactory. In science, when previously unknown laws are encountered, researchers are guided, strictly speaking, not by universal laws but by hypothetico-deductive laws. These are considered valid only for the phenomena under study. It is fully admitted that the results of cognition will force us to abandon hypothetical laws. Thus, they are subject to certain restrictions. For a long time scientists believed that hypothetical laws are verified (confirmed) in experiments. Much was clarified after the interventions of the critical rationalist K. Popper, who never tired of emphasizing that a hypothetical law is not verified but falsified. Popper's criticism was directed against the neopositivists, in particular R. Carnap. Under Popper's pressure they had to retreat. But, strangely enough, both sides made a certain mistake. The fact is that an inductive law is established by induction, whereas a hypothetico-deductive law is introduced through the operation of abduction. Neither Carnap nor Popper made a clear distinction between deductive and inductive laws. A hypothetical law is falsified by experiment; for an inductive law, experiment is its precondition.

Now let us consider the deductive transition from hypothetical laws to predictable facts. In this connection the operation of approximation, which will be discussed below, takes on special significance. It was noted above that the unfolding of a theory proceeds as transduction. As applied to quantum chemistry, this means that it is not enough to just write down the Schrödinger equation (the law); it also needs to be solved.
The history of the development of quantum chemistry shows that in this connection it is impossible to do without approximations (from the Latin approximare, to approach). By approximation in science one usually understands the expression of some quantities through others that are considered simpler. Suppose that we are considering N electrons. In the case under consideration, this means that we should simplify the N-electron wave function (N is the number of electrons) in such a way that it becomes computable. Usually this circumstance is interpreted as follows: for N > 2 the exact description of the electronic wave function is impossible, so there is nothing to do but resort to simplifications; it is impossible to preserve theoretical purity, at least at the current level of science. According to the author, this kind of argument is not deep. Indeed, if the supposedly flawlessly correct approach were known, then it would be possible to characterize precisely any departure from it. But since it is unknown, we should refrain from describing its opposite, the incorrect approach.

Theoretically meaningful approximations should be understood not as simplifications but as necessary stages of transduction. In this context the topic of simplifications is of secondary importance. In support of this conclusion, consider the following argument. Frequently used approximations are often in excellent agreement with the experimental data. In this case researchers have no need to insist on their incorrectness. However, this idyll is invariably broken, and then it is necessary to introduce more refined approximations. How is this to be understood? As a continuation of scientific transduction, which involves the growth of scientific knowledge. Thus, since transduction cannot be realized without approximations, they act as its legitimate features. The growth of scientific knowledge forces us to reconsider the relevance not of all approximations, but only of those whose adequacy has been disproved. The dynamics of scientific knowledge is often interpreted as a series of endless delusions. In fact, it is a string of achievements. The growth of scientific knowledge is ensured not by errors but by achievements. So, approximations should be interpreted only in the context of transduction. It is no accident that approximations, as a rule, are the result of exceptionally dedicated work by researchers.

The Hartree-Fock method takes center stage among all the approximations used in quantum chemistry, so it makes sense to address it first.

Historical excursion

The history of the development of the Hartree-Fock method is very indicative. E. Schrödinger wrote down his famous equation in 1926. The following year D. Hartree proposed a method for solving it. In this method, the wave function of a multielectron atom is represented as a product of the wave functions of individual electrons corresponding to their different quantum states in the atom. The motion of each electron is determined by the field created by all other particles, averaged in a certain way and given by some potentials. Hartree's intention was to solve the Schrödinger equation ab initio, that is, on the basis of fundamental quantum-mechanical principles. The significance of his theoretical innovations was realized far from immediately. This happened only after J.
Slater showed that the Hartree method gives theoretical form to a variational principle: the one-electron wave functions are chosen from the condition of minimum mean energy. In 1930, V. A. Fock refined the Hartree method, giving the wave function a symmetry (antisymmetrized) form that ensures fulfilment of the Pauli principle; that is, he took into account the presence of spin in electrons. As a result, Fock linked the method under consideration with the theory of groups. In 1935, Hartree was able to give his method a form suitable for practical calculation. But its effectiveness was revealed only in the early 1950s, after the advent of electronic computers. Thus, the effectiveness of the Hartree-Fock method became apparent only a quarter of a century after its initial development.

The electronic Schrödinger equation for molecular systems is often solved by the so-called valence bond method. In this case the wave function of the molecule is expressed in terms of the wave functions of its constituent atoms. To each valence bond there corresponds not a one-electron but a two-electron function, Ψ(1, 2) = X(1, 2) σ(1, 2), where X is the spatial and σ the spin wave function, and the numbers 1 and 2 refer to the two electrons. In the description of molecular systems, as a rule, linear combinations of the wave functions of several valence bonds are used. The coefficients in the linear combination are determined by the variational method from the condition of minimum energy. The Hartree-Fock method is often combined with perturbation theory, which uses a representation in terms of an unperturbed Hamiltonian and a perturbed Hamiltonian. The difference between them is treated as a perturbation, and of the corrections depending on this difference only those of the lowest orders are taken into account. This is sufficient to obtain results compatible with the experimental data.

In the theory of molecular systems containing many-electron atoms, the density functional method occupies a central place. The main goal of density functional theory is to replace the many-electron wave function by the electron density. This leads to an essential simplification of the problem, since the many-electron wave function depends on 3N variables (three spatial coordinates for each of the N electrons), while the density is a function of only three spatial coordinates. But this method is correct only in the case of a fairly uniform distribution of the electron density. Its undoubted merit lies in the possibility of calculating molecular systems consisting of hundreds and sometimes thousands of atoms. Of course, it does not dispense with the use of various approximations. Density functional theory has always been suspected of departing from the ideals of quantum chemistry. Thanks to the research of P. Hohenberg and W. Kohn, the groundlessness of these suspicions has largely been shown. Density functional theory goes back to the works of L. H. Thomas (1927) and E. Fermi (1928), who were able to calculate the energy of an atom from the electron density. It was believed that their method had been surpassed by the Hartree-Fock method. But the desire to cope with the calculation of many-electron systems forced chemists to return to the ideas of Thomas and Fermi.
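The scaling argument in the previous paragraph can be made tangible with a back-of-the-envelope computation. This is a hypothetical Python sketch (the grid resolution M and the electron counts are arbitrary illustration values): it compares the storage needed to tabulate a many-electron wave function on a grid with the storage needed for the electron density.

# Wave function on a grid of M points per axis needs M**(3N) values (3N coordinates),
# while the density always lives on a single 3D grid of M**3 values.
M = 50                                  # grid points per spatial axis (toy value)
for n_electrons in (1, 2, 5, 10):
    wf_points = M ** (3 * n_electrons)  # wave function: 3N coordinates
    rho_points = M ** 3                 # density: 3 coordinates, independent of N
    print(n_electrons, f"{wf_points:.2e} points for the wave function,", rho_points, "for the density")

Already for ten electrons the wave-function table would need on the order of 10**50 entries, while the density still fits on the same 125,000-point grid; this exponential wall is why factorized wave functions and, later, density functional theory were introduced.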
Their quantum nature is explained in many respects by the second Hohenberg-Kohn theorem (1964), according to which the energy of the electronic subsystem, written as a functional of the electron density, has a minimum equal to the ground-state energy; that is, it is a variational principle of quantum mechanics. As proved in the above theorem, the ground-state wave function Ψ0 is a functional of the ground-state electron density ρ0. Thus the concepts of wave function and electron density are closely related to each other. This is especially obvious for the ground state, but not only for it. Interestingly, the popularity of the density functional method has had two peaks, separated by some three decades (the 1960s and the 1990s). In both cases they were associated with the development of computer technology. Even this rather cursory review of chemical methods shows the nontrivial content of the different ways of carrying out transduction in quantum chemistry. N. F. Stepanov and Yu. V. Novakovskaya quite rightly point out the necessity of paying "proper attention to which methods and in what approximation can and should be used in solving a particular problem." The path from the fundamental laws, in particular the Schrödinger equation, to direct contact with the experimental data is both difficult and thorny. Conceptual surprises await the researcher at every step. But, and this is extremely important, all the steps of deduction are interconnected. Unfortunately, transduction at the stage of deduction is very often reduced to the use of approximate methods which allegedly do not correspond to the original strictness of the theory. This erroneous opinion is considered further on the example of certain interpretations of the problem of approximations in quantum chemistry. Scientists argue. In this regard, an extremely interesting article is presented by V. Ostrovsky, "Towards a philosophy of approximations in 'exact' theories." Correctly noting that the problem of approximations is not given due attention in the philosophical literature, he ends his article with the following four conclusions, which we give here in condensed form. 1. It is inadmissible to regard approximations as weaknesses of the exact sciences; they are everywhere in them. This conclusion is not refuted by the presence of unjustified approximations. 2. Scientifically justified approximations are not defects of theories but a reflection of the characteristics of their nature. The hierarchy of approximations creates a unique way of recreating scientific images of a qualitative nature. 3. Approximations are among the most significant results of scientific research and must be considered in the philosophy of science first of all. 4. The so-called quantitative methods and the qualitative images that we owe to approximations complement each other in the sense of Bohr's complementarity principle. According to the author, Ostrovsky's theory of approximations is worthy of a high evaluation. Of course, like any other scientific position, it deserves a critical examination. According to Ostrovsky's point of view, all the basic scientific concepts are approximations. In particular, the Schrödinger equation itself serves as an approximation, since it does not take into account relativistic effects. It is possible to take them into account, but then it will become clear that the size of the particles, etc., has not been taken into account. All principles are also approximations.
In the author's opinion, approximations occupy a definite place in transduction: their hour comes when the transition from principles and laws to predictable variables is made. It is extremely important to express the metamorphosis of deduction, its conceptual switching. The world of science is not reduced to mere approximations. Any theory is problematic, and therefore it deserves to be placed under the fire of scientific criticism. But there is no reason to identify the problematic character of a theory with the presence of steps of approximation in transduction. At this point it makes sense to emphasize the appropriateness of distinguishing between simplifications and approximations. They are usually identified, but in that case it is difficult to comprehend the conceptual content of transduction. Using simplifications, the researcher deliberately - for example, in pursuit of didactic goals - abandons the most developed theory, which nevertheless hovers before his eyes. Simplifications are, as a rule, a refusal to consider certain aspects of the reality being studied. The meaning of approximations, by contrast, lies not in simplification but in the continuation of the line of transduction initiated by the presentation of principles and laws. Approximations, properly understood, are not an encumbrance on the transduction line. This circumstance has come to be recognized only in recent years; a vivid example of such an understanding is the theory of V. Ostrovsky. Historically, it happened that approximations were not distinguished from simplifications, and their meaning was interpreted in literal correspondence with the etymology of the Latin word approximare, to approach. But in accordance with the scientific structure of the theory, an approximation appears not as an approach to the law (equation) but as a development of its potential. The growth of scientific knowledge leads to a reassessment of the approximations already undertaken in the process of transduction, but this circumstance should not be misleading. The meanings of simplifications and approximations are different. V. Ostrovsky very accurately characterizes the nature of approximations by examining the sense of the Born-Oppenheimer approximation, the existence of shapes of molecules, and the motion of electrons along orbits. His line of reasoning, which he calls realistic, consists in invariably closing his argument with a characterization of the actual state of affairs. This is the correct way of argumentation, for it is inadmissible to interrupt the transduction already on the approaches to understanding the experimental results. In this regard, Ostrovsky is critical of the concept of a theoretical (subjective, or ideal) artifact, understood as a mere aid in the activities of the researcher with no direct relationship to chemical reality. The Born-Oppenheimer approximation rests on the difference between the masses of nuclei and electrons (the nuclear masses being much greater) and between their velocities (the nuclei moving much more slowly than the electrons). If both conditions are met, then the nuclei are considered fixed, located at a certain distance from each other. But if a condition is not fulfilled, for example with respect to some excited states of molecules, then the mentioned distance ceases to be a feature of the atoms and molecules. Ostrovsky shows that the introduction of the idea of features of atoms and molecules is always connected with some approximations, but none of them is absolute in nature, because if they do not correspond to chemical reality they should be abandoned.
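In the notation standard in quantum chemistry textbooks (introduced here only for illustration, not drawn from Ostrovsky's paper), the content of the Born-Oppenheimer approximation can be written compactly. Because the nuclear masses M greatly exceed the electron mass m_e, the total wave function is factorized as

\Psi(\mathbf{r}, \mathbf{R}) \approx \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}), \qquad \frac{m_e}{M} \ll 1,

where the electronic function is computed for fixed nuclear coordinates R, and the nuclei then move on the potential energy surface generated by the electrons. When the factorization fails, as it does for some excited states, the notion of a fixed internuclear distance loses its meaning - which is exactly the situation described above.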
The concept of a quantum orbit has aroused intense interest among scientists. Some methodologists of chemical science began to assert that quantum orbits do not exist, that they are just mathematical constructs and therefore cannot be observable. In assessing the question of the reality of quantum orbits, Ostrovsky's position seems well balanced. He notes that within the framework of the Hartree-Fock approximation, according to which each individual electron moves under the influence of the average field formed by the nuclei and the other electrons, the concept of quantum orbits is not only appropriate but also inevitable. It has a physical meaning. As for observations of orbits, they are also possible, for example in the energy approximation. The features of chemical reality can be judged only on the basis of approximations. On the other hand, a scientifically justified approximation in one form or another is indicative of the features of reality itself. According to Ostrovsky, philosophical comprehension of the topic of approximations implies an appeal to N. Bohr's principle of complementarity: "'Exact' quantitative methods and intuition-inspired approximations form an additional pair in the universal sense of the complementary relationships that exist, according to Niels Bohr, in society and nature. In this dual relation, quantitative methods represent the more objective side of nature, while the qualitative images generated by approximations remain on the subjective side of the interpretation of nature by researchers. Very often we progress in science due to the development of approximation methods." Somewhat earlier, Ostrovsky explains the complementarity he introduces in the following way: the more "exact" the equations, the less their explanatory power; conversely, the higher the heuristic potential of approximations, the less "exact" they are. According to the author, Ostrovsky's appeal to Bohr's complementarity principle in his attempt to create a theory of approximations is a philosophical error. Quantitative and qualitative determinations do not stand in a complementary relationship, in Bohr's sense, to each other. This can be shown most simply by considering any chemical variable, for example the mass of an atom of a chemical element, m_i. In this case m is the quality, its particular value is the quantity, and m_i is a measure. There is no relation here of the kind presupposed by Bohr's principle, according to which when the one decreases the other, on the contrary, increases. The essence of the matter does not change with the transition to equations, since the very same variables appear in them. The more precise the solution, the more relevant the knowledge of chemical reality; in this case there is no reason to put the words "more precise" in quotation marks. Ostrovsky never forgets to put the word exact - "exact" science, an "exact" solution - in quotation marks. This shows his caution, for he understands perfectly well that without approximations it is impossible to achieve exact solutions that have chemical meaning. But, turning to Bohr's principle, Ostrovsky, forgetting the need for scientific vigilance, compares the exact and quantitative (in inverted commas) with the qualitative (without the quotes). Only on this basis does complementarity seem so attractive to him. Ostrovsky's attempt to assign quantity mainly to the objective side and quality to the subjective side is also unsuccessful.
This attempt is declarative, because the categories of the subjective and the objective are treated casually, without proper argumentation. The noted shortcomings of V. Ostrovsky's theory of approximations do not undo its undoubted merits. In his interpretation, approximations appear as far-from-ordinary concepts of scientific theory. This conclusion certainly deserves attention. But, on the author's argument, if we want to understand approximations in a systematic form, then they should be considered in the context of transduction. There remain, however, significant difficulties in understanding the internal mechanism of transduction, including in relation to approximations; in the author's opinion, it should be understood as a kind of probabilistic game strategy. Another interpretation of approximations deserves consideration, namely as a characteristic of the limited possibilities of cognition. According to the famous American physicist and cosmologist J. Hartle, our knowledge has limits of three kinds: a) the difference between the observed and the predicted (we can observe very complex phenomena but predict only relatively simple ones, because the laws are simple); b) the impossibility of providing the desired volume of calculations; c) the limited opportunities for knowledge of theories through induction and verification. Starting from Hartle's ideas, the Italian chemist A. Tontini aims to establish the limits of chemical cognition, paying special attention to the inability to synthesize a desired chemical substance. According to the author, both Hartle and Tontini fail to pay due attention to one extremely essential subtlety. The so-called restrictive theorems point not to the limits of our cognitive abilities but to the structure of the reality under study. The Heisenberg uncertainty relation characterizes the chemical world itself, and not our cognitive abilities. The progress of knowledge indicates its unlimited possibilities. Neither in physics nor in chemistry have phenomena been indicated whose knowledge is inaccessible to man. The dilemma "the world is complex - the laws are simple" is not a scientific but a speculative contrast. On the basis of scientific material it is only permissible to conclude that the complex world is known through scientific laws, and that knowledge itself is devoid of any boundaries. Cognition is unfinished, this is true, but it does not follow from this that it is powerless before anything. Approximations express the features of the phenomena studied, and not our powerlessness before their complexity. 1. Laws and approximations are stages of deduction in the composition of transduction. 2. The meaning of approximation is to ensure deduction.
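Since the Hartree-Fock method stands at the centre of the discussion above, a minimal sketch of a self-consistent-field (SCF) iteration may make the idea of an "averaged field" concrete. The sketch assumes an orthonormal two-orbital basis with invented integral values used purely for illustration; the variable names (h, eri, n_occ) are illustrative and are not taken from any particular quantum chemistry package.

import numpy as np

# Toy restricted Hartree-Fock SCF loop for a two-orbital model system.
# The "integrals" below are invented illustrative numbers, not real molecular data.
h = np.array([[-1.0, -0.2],                    # one-electron (core) Hamiltonian
              [-0.2, -0.5]])
eri = np.zeros((2, 2, 2, 2))                   # two-electron integrals (pq|rs), chemists' notation
eri[0, 0, 0, 0] = 0.8
eri[1, 1, 1, 1] = 0.7
eri[0, 0, 1, 1] = eri[1, 1, 0, 0] = 0.4
eri[0, 1, 0, 1] = eri[1, 0, 1, 0] = eri[0, 1, 1, 0] = eri[1, 0, 0, 1] = 0.1
n_occ = 1                                      # one doubly occupied orbital (two electrons)

d = np.zeros_like(h)                           # initial guess: empty density matrix
e_old = 0.0
for iteration in range(50):
    j = np.einsum('pqrs,rs->pq', eri, d)       # Coulomb part of the averaged field
    k = np.einsum('prqs,rs->pq', eri, d)       # exchange part (Fock's antisymmetry correction)
    f = h + 2.0 * j - k                        # closed-shell Fock matrix
    orbital_energies, c = np.linalg.eigh(f)    # diagonalize (orthonormal basis assumed)
    c_occ = c[:, :n_occ]
    d = c_occ @ c_occ.T                        # new density built from the occupied orbital(s)
    e = np.einsum('pq,pq->', d, h + f)         # closed-shell electronic energy
    if abs(e - e_old) < 1e-10:
        break
    e_old = e

print(f"converged after {iteration} iterations, electronic energy = {e:.6f}")

Each pass rebuilds the averaged Coulomb and exchange field from the previous density and rediagonalizes; this circular dependence is precisely what Hartree's original procedure resolves by iteration.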
Wheeler–DeWitt equation From Wikipedia, the free encyclopedia The Wheeler–DeWitt equation[1] is a field equation. It is part of a theory that attempts to combine mathematically the ideas of quantum mechanics and general relativity, a step towards a theory of quantum gravity. In this approach, time plays a role different from what it does in non-relativistic quantum mechanics, leading to the so-called 'problem of time'.[2] More specifically, the equation describes the quantum version of the Hamiltonian constraint using metric variables. Its commutation relations with the diffeomorphism constraints generate the Bergmann-Komar "group" (which is the diffeomorphism group on-shell). Quantum gravity[edit] All defined and understood descriptions of string/M-theory deal with fixed asymptotic conditions on the background spacetime. At infinity, the "right"[clarification needed] choice of the time coordinate "t" is determined (because the space-time is asymptotic to some fixed space-time) in every description, so there is a preferred definition of the Hamiltonian (with nonzero eigenvalues) to evolve states of the system forwards in time. This avoids all the need to dynamically generate a time dimension using the Wheeler-DeWitt equation. Thus, the equation has not played a role in string theory thus far. There could exist a Wheeler-DeWitt style manner to describe the bulk dynamics of a quantum theory of gravity. Some experts believe that this equation still holds the potential for understanding quantum gravity; however, decades after the equation was published, completely different approaches, such as string theory, have brought physicists results about quantum gravity that are at least as clear. Motivation and background[edit] In canonical gravity, spacetime is foliated into spacelike submanifolds. The three-metric (i.e., the metric on the hypersurface) is γ_ij, and the spacetime line element is given by

ds^2 = g_{\mu\nu}\,dx^{\mu}\,dx^{\nu} = (-N^2 + \beta_k \beta^k)\,dt^2 + 2\beta_k\,dx^k\,dt + \gamma_{ij}\,dx^i\,dx^j ,

where N is the lapse function and β_k the shift vector. In that equation the Roman indices run over the values 1, 2, 3 and the Greek indices run over the values 1, 2, 3, 4. The three-metric γ_ij is the field, and we denote its conjugate momenta as π^ij. The Hamiltonian is a constraint (characteristic of most relativistic systems)

\mathcal{H} = \frac{1}{2\sqrt{\gamma}}\, G_{ijkl}\, \pi^{ij} \pi^{kl} - \sqrt{\gamma}\, {}^{(3)}\!R = 0 ,

written here in units with 16πG = 1, where γ = det(γ_ij), {}^{(3)}R is the Ricci scalar of the three-metric, and

G_{ijkl} = \gamma_{ik}\gamma_{jl} + \gamma_{il}\gamma_{jk} - \gamma_{ij}\gamma_{kl}

is the Wheeler-DeWitt metric. Quantization "puts hats" on the momenta and field variables; that is, the functions of numbers in the classical case become operators that modify the state function in the quantum case. Thus we obtain the operator

\hat{\mathcal{H}} = \frac{1}{2\sqrt{\gamma}}\, G_{ijkl}\, \hat{\pi}^{ij} \hat{\pi}^{kl} - \sqrt{\gamma}\, {}^{(3)}\!\hat{R} .

Working in "position space", these operators are

\hat{\gamma}_{ij}(t, x^k) \to \gamma_{ij}(t, x^k), \qquad \hat{\pi}^{ij}(t, x^k) \to -i\hbar\, \frac{\delta}{\delta \gamma_{ij}(t, x^k)} .

One can apply the operator to a general wave functional of the metric, Ψ[γ_ij], which would give a set of constraints amongst the expansion coefficients of Ψ. This means that the amplitudes for N gravitons at certain positions are related to the amplitudes for a different number of gravitons at different positions. Or one could use the two-field formalism, treating the matter field φ as an independent field, so that the wave functional is Ψ[γ_ij, φ]. Derivation from path integral[edit] The Wheeler–DeWitt equation can be derived from a path integral using the gravitational action in the Euclidean quantum gravity paradigm:[3]

Z = \int_{C} \mathcal{D}g\; \mathcal{D}\phi\; \exp\!\left(-I[g_{\mu\nu}, \phi]\right),

where one integrates over a class C of Riemannian four-metrics and matter fields matching certain boundary conditions. Because the concept of a universal time coordinate seems unphysical, and at odds with the principles of general relativity, the action is evaluated around a 3-metric which we take as the boundary of the classes of four-metrics and on which a certain configuration of matter fields exists. This latter might for example be the current configuration of matter in our universe as we observe it today.
Evaluating the action so that it only depends on the 3-metric and the matter fields is sufficient to remove the need for a time coordinate, as it effectively fixes a point in the evolution of the universe. We obtain the Hamiltonian constraint from the variation

\frac{\delta I_{EH}}{\delta N} = 0 ,

where I_EH is the Einstein-Hilbert action and N is the lapse function, i.e. the Lagrange multiplier for the Hamiltonian constraint. The demand for this variation of our gravitational action to vanish corresponds, in fact, to the background independence in general relativity.[4] This is purely classical so far. We can recover the Wheeler–DeWitt equation from the corresponding variation of the path integral with respect to the lapse on the three-dimensional boundary Σ,

\frac{\delta Z}{\delta N}\bigg|_{\Sigma} = \int_{C} \mathcal{D}g\; \mathcal{D}\phi\; \left( - \frac{\delta I[g_{\mu\nu}, \phi]}{\delta N} \right) \exp\!\left(-I[g_{\mu\nu}, \phi]\right) = 0 .

Observe that this expression vanishes, implying that the functional derivative of the wave functional also vanishes, giving us the Wheeler–DeWitt equation. A similar statement may be made for the diffeomorphism constraint (take the functional derivative with respect to the shift functions instead). Mathematical formalism[edit] The Wheeler–DeWitt equation[1] is a functional differential equation. It is ill-defined in the general case, but very important in theoretical physics, especially in quantum gravity. It is a functional differential equation on the space of three-dimensional spatial metrics. The Wheeler–DeWitt equation has the form of an operator acting on a wave functional; in cosmology the functional reduces to an ordinary function. Contrary to the general case, the Wheeler–DeWitt equation is well defined in minisuperspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state. Bryce DeWitt first published this equation in 1967 under the name "Einstein–Schrödinger equation"; it was later renamed the "Wheeler–DeWitt equation".[5] Hamiltonian constraint[edit] Simply speaking, the Wheeler–DeWitt equation says

\hat{\mathcal{H}}(x)\, |\psi\rangle = 0 ,

where Ĥ(x) is the Hamiltonian constraint in quantized general relativity and |ψ⟩ stands for the wave function of the universe. Unlike ordinary quantum field theory or quantum mechanics, the Hamiltonian is a first-class constraint on physical states. We also have an independent constraint for each point in space. Although the symbols Ĥ and |ψ⟩ may appear familiar, their interpretation in the Wheeler–DeWitt equation is substantially different from non-relativistic quantum mechanics. |ψ⟩ is no longer a spatial wave function in the traditional sense of a complex-valued function that is defined on a 3-dimensional space-like surface and normalized to unity. Instead it is a functional of field configurations on all of spacetime. This wave function contains all of the information about the geometry and matter content of the universe. Ĥ is still an operator that acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case, and the Hamiltonian no longer determines evolution of the system, so the Schrödinger equation no longer applies. This property is known as timelessness. The reemergence of time requires the tools of decoherence and clock operators[citation needed] (or the use of a scalar field). Momentum constraint[edit] We also need to augment the Hamiltonian constraint with momentum constraints associated with spatial diffeomorphism invariance. In minisuperspace approximations, we only have one Hamiltonian constraint (instead of infinitely many of them). In fact, the principle of general covariance in general relativity implies that global evolution per se does not exist; time is just a label we assign to one of the coordinate axes.
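As a standard textbook illustration of the minisuperspace setting just mentioned (not specific to any particular formulation), consider a closed, homogeneous and isotropic universe described only by its scale factor a, with a cosmological constant Λ. Up to factor-ordering ambiguities and the choice of units, the infinitely many constraints then collapse to a single ordinary differential equation of the schematic form

\left[ \frac{d^2}{da^2} - U(a) \right] \psi(a) = 0 , \qquad U(a) \propto a^2 \left( 1 - \frac{\Lambda}{3} a^2 \right),

so that the "wave function of the universe" depends on one variable rather than on a functional argument; proposals such as the Hartle–Hawking no-boundary state then amount to particular boundary conditions imposed on ψ(a).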
Thus, what we think of as the time evolution of any physical system is just a gauge transformation, similar to the gauge transformation of QED induced by a local U(1) transformation ψ → e^{iθ(r, t)} ψ, where θ(r, t) plays the role of local time. The role of a Hamiltonian is simply to restrict the space of the "kinematic" states of the Universe to that of "physical" states - the ones that follow gauge orbits. For this reason we call it a "Hamiltonian constraint." Upon quantization, physical states become wave functions that lie in the kernel of the Hamiltonian operator. In general, the Hamiltonian[clarification needed] vanishes for a theory with general covariance or time-scaling invariance. References[edit] 1. ^ a b DeWitt, B. S. (1967). "Quantum Theory of Gravity. I. The Canonical Theory". Phys. Rev. 160 (5): 1113–1148. Bibcode:1967PhRv..160.1113D. doi:10.1103/PhysRev.160.1113. 2. ^ The Physics arXiv Blog (23 October 2013). "Quantum Experiment Shows How Time 'Emerges' from Entanglement". medium.com. 3. ^ J. B. Hartle and S. W. Hawking, "Wave function of the Universe". Phys. Rev. D 28 (1983) 2960–2975. 4. ^ https://javierrubioblog.files.wordpress.com/2016/09/notes_wheeler-dewitt_talk.pdf 5. ^ Carlo Rovelli, "Notes for a Brief History of Quantum Gravity", available on arXiv.org.
Koers (Online) vol. 80 no. 1, Pretoria, 2015. On-line ISSN 2304-8557; print ISSN 0023-270X
Between postmodernism, positivism and (new) atheism
Danie Strauss
School of Philosophy, North-West University, Potchefstroom Campus
The Renaissance introduced the autonomy of being human, which in turn resulted in promoting the position of human understanding as the formal law-giver of nature. Twentieth century philosophy of science acknowledged the necessity of a theoretical frame of reference (paradigm) as well as ultimate (more-than-rational) commitments. Historicism and the linguistic turn, however, relativized the objectivity and neutrality of scientific reason (with its universality) and co-influenced the rise of postmodernism. After discussing the distinction between linear and nonlinear thinking it is shown that Derrida does accept universality outside the human mind. The denial of ontic universality influenced the nominalistic orientation of modern biology, particularly since Darwin's Origin of Species, consistently denying the reality of type laws. Under the spell of Leibniz's slogan that nature does not make leaps, Darwin regarded natural selection as merely exemplifying the overriding law of continuity. Darwin was in two minds between his biological idea of nonprogression and his socio-cultural conservatism in which progress was dominant. More recently new atheism divinized natural laws and identified them with human reason, while Hawking even claims that the law of gravity would create the universe out of nothing. Finally physicalism is subjected to immanent criticism, the pretence that mathematics is exact is questioned and some recent problems facing neo-Darwinism are highlighted.
Keywords: autonomy, law-giver, historicism, paradigm, ultimate commitment, atheism, epigenetic information, out of nowhere origination
The Renaissance appreciation of human autonomy culminated in the elevation of the human understanding to the position of formal law-giver of nature. Nevertheless, twentieth-century philosophy of science acknowledged the necessity of a theoretical frame of reference (paradigm) and an ultimate (more-than-rational) commitment. Historicism and the linguistic turn relativized the idea of an objective and neutral reason and contributed to the rise of postmodernism. After a discussion of the distinction between linear and non-linear thinking it is shown that Derrida acknowledges universality outside the human mind. The denial of ontic universality has influenced the nominalistic orientation of modern biology, particularly since Darwin's Origin of Species, which consistently denied the reality of type laws. Captivated by Leibniz's slogan that nature does not make leaps, Darwin sees natural selection merely as an embodiment of the overarching law of continuity. Darwin was torn between his biologically non-progressionist approach and his socio-cultural conservatism in which the idea of progress was dominant.
More recently the new atheism has deified natural laws and identified them with human reason, with Hawking even claiming that the law of gravity will create the universe out of nothing. Finally, physicalism is subjected to immanent criticism, the claim that mathematics is exact is questioned, and brief attention is given to some recent problems with which neo-Darwinism struggles. Keywords: autonomy, law-giver, historicism, paradigm, ultimate commitment, atheism, epigenetic information, origination out of nothing. Since the Renaissance the deification of reason, already found in Greek culture, has experienced a new secularized revitalization. What it left behind is the Greek-Medieval realistic metaphysics which used the concept of being to generate a hierarchical view of reality. The human being is no longer understood as being part of an objective order of being. For Descartes even certainty about the existence of God is now obtained only on the basis of clear and distinct thinking. Von Weizsäcker points out that the world no longer guarantees my existence since the world now solely appears as the object of my self-assured thinking. In a subtle way this self-assured thinking is elevated to the rank of what is divine. Therefore it should not be surprising that the new motive of logical creation soon inspired Immanuel Kant to elevate human understanding to become the formal lawgiver of nature. More recently the desire to be liberated from "supra-natural" Gods led atheists (or rather: anti-theists) to the identification of God with the laws of nature, forgetting that Nietzsche already realized that laws are distinct from a Lawgiver. In support of the cause of atheism, Nietzsche therefore prefers not to speak of laws but rather of necessities (see Strauss, 2009:408). The fusion of human rationality and natural law culminates in Hawking's recent idea that the law of gravity on its own could well create the universe. Kant's view of understanding as formal law-giver of nature consolidated the preceding natural science ideal of modern humanism and provided the platform for the ideal of an objective and neutral science advanced by positivism - from Auguste Comte up to the Vienna Circle. However, as one of the key figures in the mid-twentieth century philosophy of science, Karl Popper claimed the fame of having "killed" positivism (see Popper, 1974:69). Kuhn challenged the positivist appeal to "facts" (identified with sense data), for it turned out that the interpretation of facts is embedded in theoretical frameworks (designated as paradigms), captured in the slogan that facts are "theory-laden." In addition, prominent figures within the domain of the philosophy of science of the twentieth century acknowledged that scholarly activities are embedded in intellectual communities and in the final analysis directed by more-than-theoretical (i.e. supra-theoretical) commitments, as emphasized by Popper and Stegmüller. Karl Popper stated that the faith in the rationality of reason is not itself rational - he speaks about "an irrational faith in reason" (Popper, 1966-II:231).
Stegmüller holds the view that there is not a single area in which self-assurance of human thinking is possible - one already has to believe in something in order to justify something else (Stegmüller, 1969:314). Yet, in spite of all these developments, most special scientists working within the natural sciences and the humanities are still victims of a kind of "naïve positivism", still adhering to the modernist idea of the objectivity and neutrality of science. The remarkable exception in this regard is the well-known neo-Darwinian biologist Stephen Gould (initial field: palaeontology) who updated himself with what happened in the philosophy of science of the previous century. He remarks: "Facts have no independent existence in science, or in any human endeavor; theories grant differing weights, values, and descriptions, even to the most empirical and undeniable of observations" (Gould 2002:762). If the deified human understanding assumed the role of judge, even regarding the existence of God, then the authority assigned to it not only gives it the power to decide what will count as divine, but also endows it with the power to deny any divinity whatsoever - the ultimate position of contemporary atheism. Many of these atheists justified their stance with reference to atrocities committed in the name of "religion" (such as 9/11). Already during the Enlightenment Kant advocated an elevated position for human reason: Our age is, in every sense of the word, the age of criticism and everything must submit to it. Religion, on the strength of its sanctity, and law on the strength of its majesty, try to withdraw themselves from it; but by doing so they arouse just suspicions, and cannot claim that sincere respect which reason pays to those only who have been able to stand its free and open examination (Kant, 1781:A-12 - translation F.M. Müller - see Müller, 1961:21). Of course closer scrutiny soon reveals that neither the (persistent) positivism nor the new atheism represents a sound position. In particular the pervasive influence of historicism during the nineteenth and early twentieth century relativized the certainties of modernity. In the "linguistic turn" historicism found a strong ally, for with language as horizon alternative interpretations surfaced prominently. As noted briefly above, these lines of thought served as points of departure for developments within the philosophy of science of the twentieth century. It appeared to be inevitable to use theoretical frameworks (paradigms) which themselves are in the grip of ultimate commitments. Interestingly these developments within twentieth century philosophy of science were anticipated by Dooyeweerd. It prompted Van Peursen to say that Dooyeweerd's philosophy is today more relevant than ever and he added the remark that many books written within the domain of philosophy of science should not have been written, had the authors first read what Dooyeweerd had written (see Van Peursen 1995). The combined effect of historical relativity and alternative interpretations in turn gave rise to postmodernism according to which every so-called meta-narrative is questioned, owing to the fact that every one of us only disposes over our own particular stories. The new kind of knowledge emerging within the postmodern mode of thought apparently challenged long-standing conceptions.
Amidst the introduction of themes and entities, such as fractals (somewhere in between one and two dimensions) and chaos theory, it is claimed that modernist thinking is linear and postmodern thinking is non-linear. Lyotard mentions "incommensurabilities" and the fact that "the continuous differentiable function is losing its preeminence as a paradigm of knowledge and prediction" and then continues: "Postmodern science - by concerning itself with such things as undecidables, the limits of precise control, conflicts characterized by incomplete information, 'fracta,' catastrophes, and pragmatic paradoxes - is theorizing its own evolution as discontinuous, catastrophic, nonrectifiable, and paradoxical" (Lyotard, 1987:60). Without properly specifying in which sense they speak of linear thinking, postmodern thinkers pursue the ideal of non-linear thinking. Mathematicians speak of linear equations when, for example, there are two variables that are related in a specific way. Co-ordinate geometry says that points whose co-ordinates satisfy an equation of the first degree, such as y = ax + b (with a and b as constants), lie on a straight line. An equation such as y = x² is therefore non-linear. Postmodern authors want to distance themselves from the rationalistic trait of "modern science" with its reductionism and faith in numbers. In opposition to this "out-dated" mode of thinking such postmodern thinkers advocate a non-linear mode of thinking, apparently built upon a methodology of intuition and of subjective observation, exceeding human rationality. Sokal and Bricmont mention the words of a postmodern thinker, Robert Markley, who claims that "quantum physics, the bootstrap theory, the theory of complex numbers, and chaos theory share the basic assumption that reality cannot be described in linear concepts, that non-linear - and non-solvable - equations provide the only possible means to describe a complex, chaotic and non-deterministic reality" (Sokal & Bricmont 1999:166, note 26). On the same page they highlight the fact that many postmodern authors interpret chaos theory as a revolution directed against Newton's mechanics, with quantum theory as an example of non-linear thinking. Unfortunately Newton's "linear thinking" contains equations which are fully non-linear. In reality many examples of chaos theory derive from Newton's mechanics, which means that chaos research is in fact nothing but a Renaissance of Newton's mechanics. Even more embarrassing is the fact that while quantum physics is currently represented as a prime example of "postmodern science," it is not realized that the basic equation of quantum physics, the well-known Schrödinger equation, is absolutely linear (Sokal & Bricmont 1999:166-167). Moreover, there are very difficult linear problems and quite simple non-linear problems. Contrary to a widespread misunderstanding a non-linear system is not necessarily chaotic. Postmodern thinkers tend to shy away from universality by emphasizing what is particular or singular. Caputo mentioned to Derrida that in connection with justice and care in Derrida's writings he discerns a resonation of the biblical concern for singularity. This is opposed to the "philosophical notion where justice is defined in terms of universality" (Derrida 1997:20). Remarkably Derrida's reaction was immediately to emphasize the unbreakable co-existence of universality and singularity: "I would not oppose, as you did, universality and singularity. I would try to keep the two together" (Derrida 1997:22).
According to Derrida faith is universal, it displays a universal structure and for this reason it should be distinguished from "religion." Actually, for him there is "no such thing as 'religion'." There are only singular religions, such as Judaism, Christianity, Islam and so on. This distinction between (universal) faith and (particular) religions runs parallel with his distinction between messianicity and messianism (Derrida 1997:21) and it explains his mode of speech where he declares: "So this faith is not religious, strictly speaking; at least it cannot be totally determined by a given religion. That is why this faith is absolutely universal. This attention to what is the singularity is not opposed to universality" (Derrida 1997:22). Derrida here undoubtedly explores the ontic universality of "faith," of "messianicity" and so on - which disqualifies him, strictly speaking, from being a postmodernist thinker, for postmodernism generally attempts to shy away from universality. Since the era of the Enlightenment the trust in universal (conceptual) knowledge guided the idea of rational progress. One way to define rationalism is actually to see it as a reification of conceptual knowledge. Likewise, irrationalism can then be defined as a deification of concept-transcending knowledge (idea-knowledge), focused on what is unique, individual or singular. The decisive role played by nominalism in modern philosophy since the Renaissance is seen in its denial of universality outside the human mind: universality is only and solely acknowledged within the human "mind". That we actually have to account for two kinds of universality is often concealed behind interchangeably employing expressions such as law, law for, order for, orderliness of, lawfulness of, law-conformity, regularities and so on. Whatever meets the order for its existence behaves in an orderly fashion, manifested in its own orderliness or law-conformity. An order for and the orderliness of something are equivalent to the conditions for the existence of that thing and the meeting of those conditions. In general there is a strict correlation between law and what is factually subjected to it. But when reality (the ontic) is stripped of its universality, then it is at once deprived of its order for side as well as the orderliness of reality conforming to this order. What is lost sight of is the fact that denying universality "outside the human mind" did not succeed in getting rid of universality because the feature of being individual universally holds for whatever is individual. In spite of his sharp critical analysis of the ideas of Hawking, John Lennox still does not properly distinguish between law and regularity: "Newton's laws describe the regularities, the pattern, to which motion in the universe conforms under certain initial conditions. It was God, however, and not Newton who created the universe with those regularities and patterns" (Lennox 2011a:35). Law-conformity is a feature of what is subjected to laws and the only way to understand physical laws is to study the regularities evinced in their behaviour. It would therefore be better to say that Newton's laws are human formulations of the God-given laws for nature, making possible all the regularities we can observe and describe. God did not create the regularities, for what has been created functions in an orderly way, providing scholars with those regularities pointing at the God-given creational laws.
This entails that we have to acknowledge the universality of different types of entities, because our experience is not populated by just one kind of entity, whatever it may be. No one would defend the view that everything is an x - where x could be filled in by: "a quark", "an atom", "a cell" or whatever. The diversity of entities within the horizon of human experience straightforwardly necessitates the acknowledgement of a multiplicity of types or kinds. The ontic reality is that the correlation between law and factuality cannot avoid the idea of type-laws. Yet since the dominant nominalistic assumption of modern philosophy denies universality outside the human mind, the entire system of biological classification is reduced to a functionalistic (physicalistic) perspective. Simpson categorically states that organisms are not types and do not have types (Simpson 1969:8-9). This view continues the conviction of Darwin that "no line of demarcation can be drawn between species" (Darwin 1859:443) which entails that according to Darwin "we shall have to treat species in the same manner as those naturalists treat genera, who admit that genera are merely artificial combinations made for convenience" (Darwin 1859:456). The discreteness (discontinuities) marking the currently existing diversity of plants and animals as well as the dominant theme of palaeontology (stasis/constancy: a type abruptly appears, remains constant over millions of years and then suddenly disappears) squarely contradicts Darwin's core scientific belief that there must have been an infinitesimal, incremental and continuous development stretched over millions of years. A contemporary neo-Darwinist, Jerry Coyne, openly struggles with the tension between discreteness and continuity. He advances the view that species are discrete clusters of living entities: "And at first sight, their existence looks like a problem for evolutionary theory. Evolution is, after all, a continuous process, so how can it produce groups of animals and plants that are discrete and discontinuous, separated from others by gaps in appearance and behavior?" (Coyne 2009:184). He also designates a species as "a discrete cluster of sexually reproducing organisms" and then on the same page he continues in a realistic fashion by maintaining that the discontinuities of nature are "not arbitrary, but an objective fact" (Coyne 2009:184). Whereas Darwin therefore advocated a nominalistic position regarding living entities, Coyne reverts to a realistic idea of living entities. Within modern philosophy the emphasis soon shifted to functional relations which, particularly in the thought of Leibniz, resulted in his famous lex continui (law of continuity) according to which nature does not make any leaps (natura non facit saltus). Dooyeweerd characterized this view as the continuity postulate of humanistic philosophy and Gould argues that this postulate assumed in Darwin's thought an even more central position than natural selection. He calls upon the physicist and historian of science, Silvan S. Schweber, when he claims: "In fact, I would advance the even stronger claim that the theory of natural selection is, in essence, Adam Smith's economics transferred to nature" (Gould 2002:122). And gradualism precedes in importance natural selection.
Gould relates Darwin's position here to a confusion of the different senses of gradualism, for example the validity of natural selection and the acceptance of slow and continuous flux: "This conflation came easily (and probably unconsciously) to Darwin, in large part because gradualism stood prior to natural selection in the core of his beliefs about the nature of things. Natural selection exemplified gradualism, not vice versa - and the various forms of gradualism converged to a single, coordinated view of life that extended its compass far beyond natural selection and even evolution itself" (Gould 2002:154-155). Yet in spite of his achievements as a radical intellectual, advocating a theory without any claims to progress, Gould notes that Darwin considered it as his greatest failure that he did not succeed in reconciling his intellectual rejection of progress with his acceptance of a cultural context in which progress was one of the characteristics of the Victorian culture to which he belonged (see Gould 2002:467). Darwin holds that his greatest improvement compared to other evolutionary theories is given in banishing inherent progress. Gould writes: "Moreover, Darwin regarded the banishment of inherent progress as perhaps his greatest conceptual advance over previous evolutionary theories." And to this he adds the words of Darwin, formulated in reaction to the progressionist palaeontologist Alpheus Hyatt (on December 4, 1872): "After long reflection I cannot avoid the conviction that no innate tendency to progressive development exists" (Gould 2002:468). Ironically, close to the end of The Origin of Species, we read: "And as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection" (Darwin 1859:459). Since Aristotle vitalistic theories in biology assumed that goal-directedness (finality/purpose) is inherent to living entities, something rejected by Darwin in the words just quoted. Theistic evolutionists of our day deem it possible to accept Darwin's views (on random variation and natural selection) and at the same time advance the (contradictory) view that God guided the process of evolution all the way. Sometimes emergent-evolutionism, which wants to have it both ways - continuity in descent and discontinuity in existence - also surfaces in the thought of theistic evolutionists. The theologian Wentzel Van Huyssteen on the one hand holds that our universe and "all it contains is in principle explicable by the natural sciences" (Van Huyssteen, 1998:75). But a bit further in this work he alleges the opposite when he warns that we should not overextend rationality "to explain everything in our world in the name of natural science" (Van Huyssteen 1998:115). Later on he believes that cultural evolution (including the evolution of ideas, scientific theories, and religious worldviews) cannot be reduced to biological evolution (Van Huyssteen 2006:86-87). On the basis of his emergent-evolutionistic view Klapwijk also attempts to combine neo-Darwinian chance with purpose (see Klapwijk 2008 and 2009). Gould explains that within the fossil record there is no clear signal of progress: I believe that the most knowledgeable students of life's history have always sensed the failure of the fossil record to supply the most desired ingredient of Western comfort: a clear signal of progress measured as some form of steadily increasing complexity for life as a whole through time. 
The basic evidence cannot support such a view, for simple forms still predominate in most environments, as they always have. Faced with this undeniable fact, supporters of progress (that is, nearly all of us throughout the history of evolutionary thought) have shifted criteria and ended up grasping at straws (Gould 1996:166-167). The idea of type-laws, briefly alluded to above, containing an acknowledgement of different types of living entities constituted by a limited number of them falling within each "type-category," is eliminated in the nominalistic classification of neo-Darwinism with its claim that "organisms" are not types and do not have types (Simpson). The popular contemporary reference to "bio-diversity" is actually stripped of meaningful content, because if the classification of living entities is merely the result of arbitrary and artificial thought constructions, lacking an ontic foundation (in the reality "out there"), then the intended diversity (reflecting typical differences determined by distinct type-laws) collapses into a structureless continuum. The speculative continuity postulate still rules the day! The denial of the specified universality entailed in type-laws finds its foundation in a more basic misunderstanding, which is given in denying the "ontic diversity" of functional (modal) aspects. It is the merit of reformational philosophy that it subjected the multiple functions or modal aspects of our experiential world to a transcendental-empirical analysis. The key idea is that the ontic universality of each one of these aspects, from the numerical up to the certitudinal aspect, codetermines whatever there is. Every concrete (natural and societal) entity functions within all these aspects which not only serve as modes of being and modes of experience but also as modes of explanation. When particular modes of explanation are over-emphasized at the cost of other modes of explanation - just recall the words of Van Huyssteen that our universe and all it contains "is in principle explicable by the natural sciences" - a reductionist approach surfaces, denying the ontic diversity of modal aspects. The physicalistic or materialistic orientation of neo-Darwinism and of the new atheists has currently succeeded in establishing a firm hold on scholarly journals and the public media. Their ultimate reductionist claim is that "everything is material". Such a materialistic view in the final analysis believes, as Roy Clouser phrases it, "that reality is ultimately physical, so that everything is either matter or dependent upon matter". Clouser also mentions Paul Ziff who once remarked that he is not certain why he is a materialist: "It's not because of the arguments. I guess I'd just have to say that reality looks irresistibly physical to me" (Clouser 2005:38). Apart from trying to give an answer to the difficult question: "What is matter?" the basic statement that everything is material is self-defeating. Merely contemplate the status of laws holding for material things. They are not themselves material, just as little as the conditions (laws) for being an atom is itself an atom. But if the conditions (laws) for being material are not themselves material, then the claim that everything is material does not hold, because the physical laws for matter are not material. In addition the statement that everything is material is presented as being true. But truth is a matter of epistemology and logic, not a physical one. 
Moreover, the statement is formulated in a sentence, showing that we have to distinguish between the logical-analytical aspect (the basic statement) and the lingual aspect of the utterance (the sentence formulated). That is to say, the basic conviction of physicalism (materialism) could be approximated from different modes of experience. However, as long as "laws of nature" are accepted, the atheist will constantly be haunted by the quest for the Creator of such laws, the search for the Law-Giver. Therefore the last step in the attempt to get rid of the Creator is, as Lennox phrases it, to confer "creatorial powers on something that is not in itself capable of doing any creating" (Lennox 2011:52). This something may be scientific theories or even the laws addressed in such theories. According to Lennox, for these scientists and philosophers "the term 'God' has become a synonym for the laws of nature" (Lennox 2011a:22). In order to get rid of God Stephen Hawking settled for the law of gravity as the substitute ultimate origin of the universe. In his book, The Grand Design (co-author is the physicist Leonard Mlodinow), we read: The law of gravity now replaces God - forgetting that it is merely a God-given creational law. Hawking also forgets that every physical law is always related to what is subjected to it and correlated with it. Lennox aptly remarks that laws create nothing in any world for they can only "act on something that is already there" (Lennox 2011:71). Ironically enough, no single physical law could be explained in a purely physical way because the physical aspect of reality does not exist in isolation from the other aspects of reality. Newton's formulation of the law of gravitation contains the term force (F), the gravitational constant (G), two mass-points (m1 and m2), and the distance between m1 and m2 (r). The gravitational force between m1 and m2 is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. But mass is a physical quantity (highlighting the fundamental connection between the physical aspect and the numerical aspect). Distance, in turn, pre-supposes the meaning of (physical) space, whereas the idea of a constant reveals the coherence between the meaning of the physical aspect and a uniform [constant] motion. From this it appears that the formulation of the law of gravitation is made possible in the first place by the coherence of the physical aspect with three foundational non-physical aspects (namely number, space, and movement). These non-physical aspects serve as the foundation for the meaning of the physical aspect. Formulated in terms of the theory of modal aspects, the law of energy-constancy, for example, analogically reflects the kinematic meaning of constancy on the law-side of the physical aspect. Given these conditions and interconnectedness one may well ask: how could these non-physical aspects (and, for that matter, the universe itself) then merely emerge from the physical aspect of creation or originate from a physical law? Hawking attempts to pull himself up with the bag in which he positioned himself - something clearly seen by Lennox. Of course the law of gravity is something, implying that if the universe is created by this law the starting-point is something (the law of gravity) and not "nothing." The statement "the universe can and will create itself from nothing" is self-contradictory: "If I say 'X creates Y', this presupposes the existence of X" (Lennox 2011a:32).
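Written out - the formula itself is standard and is added here only for reference - Newton's law of gravitation reads

F = G\,\frac{m_1 m_2}{r^2},

with the force F, the gravitational constant G, the two mass-points m1 and m2 and their distance r; as argued above, every term in it already presupposes numerical, spatial and kinematic meaning before anything specifically physical is asserted.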
Materialism simply entangles itself in unsolvable antinomies - the "reward" for not respecting the God-given creational laws in their uniqueness and unbreakable coherence, distinguishing between God and God's law. The only way in which we can approximate the laws for physical entities is through an investigation of their orderliness, law-conformity or regularities. The above-mentioned example used by Derrida concerning the universal structure of faith (messianicity) and particular ("singular") religions implicitly alludes to the universality of the certitudinal aspect of reality. Particularly in respect of the conviction (!) of the new atheists that they do not have faith at all, the modal universality of the faith aspect implies the opposite. But we have noticed that if one does not accept God as Creator, the only alternative is to find a substitute within creation - and in the case of contemporary atheism this substitute for God is most of the time found in matter. The ultimate commitment of the new Atheists is therefore justly characterized as materialistic or physicalistic - and it is inevitably caught up in the above-mentioned inconsistencies. Materialism over-emphasizes a single mode of explanation, namely the physical. However, such an orientation embodies a more-than-theoretical commitment - just recall the remark of Paul Ziff who said "that reality looks irresistibly physical to" him. No reason is given, just an underlying trust in (physical) reason! It represents therefore a particular faith in reason, namely the trust in the rational reliability of physical reasoning. The onto-diversity of modal aspects is challenged from the outset. From this state of affairs we can conclude that "faith" ("trust") inherently belongs to the practice of the natural sciences. What is more, "rationality" (or: "reason") is connected to faith in the sense of intellectual trust. Yet in the course of the historical development of Western philosophy "reason and faith" eventually appeared in opposition to each other, as if each on its own is an entity in its own right. Quite recently this is still done by Pope John Paul II in his Encyclical Letter Fides et Ratio (1998). In this letter he portrays both as entitylike, inter-dependent realities. He claims that faith does not fear reason but trusts it: "Faith therefore has no fear of reason, but seeks it out and has trust in it" (John Paul 1998). Of course thinking ("reason") and believing ("faith") are concrete acts of human beings which, like every concrete (natural and social) structure or event, in principle function within all the aspects of reality. The latter, namely the dimension of aspects, provides a universal modal order co-determining concrete events and processes. In an ontic sense they lie at the foundation of our experience of entities and their functions. Therefore the first level of investigating the interconnections between "faith" and "reason" should commence with an analysis of the meaning of the logical-analytical aspect and the meaning of the certitudinal aspect, abstracting for the moment from the fact that every concrete act of faith at once functions in the logical-analytical aspect and that every concrete thought-act also functions within the faith aspect. The terms trust or certainty may be used to capture the core meaning of the faith aspect.
The inter-modal coherence between the various ontic aspects entails that the terms trust and certainty will also appear within other aspects in an analogical way, normally captured in compound phrases such as legal trust, social trust, moral trust and economic trust (credit). Given the order relation between the logical and certitudinal aspects an expression such as intellectual trust highlights a forward-pointing connection between the logical and faith aspects, in technical philosophical parlance also designated as a certitudinal anticipatory analogy between these two aspects. Likewise configurations such as legal trust, social trust, moral trust and economic trust reveal anticipations from the legal, social, moral and economic aspects to the faith aspect. In the same way the faith aspect reveals its unique meaning only in coherence with all the other aspects of reality, including the logical-analytical aspect. The core meaning of the logical aspect is found in analysis (identification and distinguishing). When we therefore lack faith distinctions in our trusting and do not identify the core elements of our faith we will end up with a "blind faith". Therefore it should be acknowledged that there also exists an intrinsic connection between the faith aspect and the logical-analytical aspect, manifest in faith distinctions and identifying what is crucial to faith convictions. "Reason" and "faith" surely are not "strangers" because human acts qualified either by the logical aspect or the certitudinal aspect structurally display an internal coherence with the nonqualifying aspects of acts like these. In respect of the nature of intellectual trust this insight is acknowledged in his own way by the philosopher of science, Wolfgang Stegmüller, where he explains that one first has to believe in something in order to justify something else (Stegmüller 1969:314). Nonetheless an uncritical adherence to what we have earlier designated as a "naïve positivism" is still widespread. Special scientists and laymen think that the ultimate judge of truth is "science" - the assumed anonymous (rational) power supposedly capable of solving all our problems. The scope of "science" is restricted to mathematics, physics and (the physical or molecular foundations of) biology. This modernist over-estimation of "science" up to the present implicitly continues the modern natural science ideal of objectivity and neutrality. In the case of positivism the criterion of sense perception matches the (internally antinomic) reductionism found in materialism because it cannot account for the epistemic status of descriptive terms derived from what we have called the onto-diversity of modal aspects. Once something has been observed (sensed) it is in need of a scientific description and every description has to employ specific terms. However, the history of the concept of matter shows that alternative modes of explanation have been chosen. It commenced with the Pythagorean belief that everything essentially is number, then it continued with the switch within Greek mathematics to geometry (after the discovery of incommensurability - the fact that it is not possible to describe all spatial relationships merely in terms of fractions), then, after the Renaissance, the choice for (reversible) motion as basic denominator, finally reaching the current state of physics which had to acknowledge that (irreversible) energy-operation characterizes the uniqueness of this aspect.
Clearly, during the history of physics different modal points of entry were used in describing material entities, namely the numerical, the spatial, the kinematic and the physical. But since these functional modes of reality are not concrete entities or events themselves, they are not open to the senses as such. One cannot weigh, smell, hear, feel or see any one of these aspects, simply because they do not belong to the entitative dimensions of reality. The classical positivist neutrality postulate had to face other objections as well. Perhaps the most important of these objections is related to the history of every academic discipline, which relativizes any assumed "up-to-date" theoretical stance. Whatever is currently appreciated as the "generally accepted" standpoint within the discipline differs from what was the case fifty, a hundred or more years ago - apart from the fact that the majority is not a yardstick for truth (as correctly identified in textbooks on logic, where one of the informal fallacies is designated as the majority fallacy; see Bowell and Kemp, 2005:131 ff.). And within the forthcoming decades and millennia the emphasis may shift again and again. This explains why not even the "exact" discipline of mathematics succeeded in avoiding concurrent and successive alternative theoretical stances. The remarkable historical fact is that the three main sub-divisions of Kant's Critique of Pure Reason (1781) provided the starting-point for the three main schools of thought found in twentieth-century mathematics: intuitionistic mathematics explored the transcendental aesthetic (Brouwer & Weyl), logicism the transcendental analytic (Russell & Gödel), and axiomatic formalism the transcendental dialectic (Hilbert & his followers). Regarding the mathematical status of intuitionism Beth writes: "It is clear that intuitionistic mathematics is not merely that part of classical mathematics which would remain if one removed certain methods not acceptable to the intuitionists. On the contrary, intuitionistic mathematics replaces the methods by other ones that lead to results which find no counterpart in classical mathematics" (Beth 1965:89). But listen to what Brouwer himself has to say. He believes that "classical analysis ... has less mathematical truth than intuitionistic analysis" (Brouwer 1964:78) - to which he adds in respect of the differences between intuitionism and formalism: As a matter of course also the languages of the two mathematical schools diverge. And even in those mathematical theories which are covered by a neutral language, i.e. by a language understandable on both sides, either school operates with mathematical entities not recognized by the other one: there are intuitionist structures which cannot be fitted into any classical logical frame, and there are classical arguments not applying to any introspective image. Likewise, in the theories mentioned, mathematical entities recognized by both parties on each side are found satisfying theorems which for the other school are either false, or senseless, or even in a way contradictory.
In particular, theorems holding in intuitionism, but not in classical mathematics, often originate from the circumstance that for mathematical entities belonging to a certain species, the possession of a certain property imposes a special character on their way of development from the basic intuition, and that from this special character of their way of development from the basic intuition, properties ensue which for classical mathematics are false. A striking example is the intuitionist theorem that a full function of the unity continuum, i.e. a function assigning a real number to every non-negative real number not exceeding unity, is necessarily uniformly continuous (Brouwer 1964:79). Beth elaborates this divergence in a broader context by mentioning multiple other orientations informed by distinct philosophical positions, and he even questions the appreciation of axiomatic set theory as the ultimate foundation of mathematics (Beth 1965:161-203). Differences such as these prompted the mathematician Kline to come up with a rather negative assessment of the situation within mathematics: The developments in the foundations of mathematics since 1900 are bewildering, and the present state of mathematics is anomalous and deplorable. The light of truth no longer illuminates the road to follow. In place of the unique, universally admired and universally accepted body of mathematics whose proofs, though sometimes requiring emendation, were regarded as the acme of sound reasoning, we now have conflicting approaches to mathematics. Beyond the logicist, intuitionist, and formalist bases, the approach through set theory alone gives many options. Some divergent and even conflicting positions are possible even within the other schools. Thus the constructivist movement within the intuitionist philosophy has many splinter groups. Within formalism there are choices to be made about what principles of metamathematics may be employed. Non-standard analysis, though not a doctrine of any one school, permits an alternative approach to analysis which may also lead to conflicting views. At the very least what was considered to be illogical and to be banished is now accepted by some schools as logically sound (Kline 1980:275-276). The topicality of these diverging orientations is still reflected in the comprehensive Oxford Handbook of Philosophy of Mathematics and Logic, published by Oxford University Press in 2005 with Shapiro as editor (833 pages). This work inter alia contains contributions on empiricism and logical positivism (1), on logicism (3), on Wittgenstein (1), on formalism (1), on intuitionism (3), on naturalism (2), on nominalism (2) and on structuralism (2). An article on "non-denumerability" which appeared in the journal Koers shows that alternative philosophical assumptions regarding the nature of the infinite lead to mutually opposing interpretations (see Strauss 2011). Interestingly, the editor of an accredited journal refused to publish this article because one of the reviewers objected by stating that it might mislead the youth to think that mathematics is not "an exact science"! In addition to the extensive quote from Brouwer given above, we may challenge the idea of an exact science by briefly looking at the impasse of arithmeticism, such as the argumentation of Grünbaum published in 1952, aimed at explaining the continuous extension of a straight line as being constituted by non-extended elements.
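The measure-theoretic point at issue here can be stated compactly (a brief illustrative sketch in standard notation, not part of Grünbaum's own wording). If the unit interval could be composed of only denumerably many degenerate (single-point) intervals $\{x_n\}$, the countable additivity of the length measure $\mu$ would give

$$\mu([0,1]) \;=\; \sum_{n=1}^{\infty}\mu(\{x_n\}) \;=\; \sum_{n=1}^{\infty} 0 \;=\; 0,$$

which contradicts $\mu([0,1]) = 1$. Only for a non-denumerable aggregate of points is no such sum defined, and this is why the construction has to lean on Cantor's proof of the non-denumerability of the real numbers.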
The circularity in this argument becomes apparent only when it is realized that whereas our awareness of succession (and discreteness) originally belongs to the irreducible core meaning of number, the awareness of a totality (a whole with its parts) originally belongs to the core meaning of the spatial aspect. Once this is seen, it is clear that the idea of infinite totalities merely represents an anticipatory analogy pointing from the numerical aspect to the spatial aspect. However, the idea of an infinite totality presupposes the idea of the at once infinite (traditionally known as the actual infinite), which stands and falls with the deepening or disclosure of the meaning of number under the guidance of the meaning of space. For this reason the at once infinite in principle differs from the primitive meaning of infinity in the literal sense of one, another one, yet another one, and so on (traditionally known as the potential infinite but preferably designated as the successive infinite - endlessness). The decisive point in the argument pursued by Grünbaum is given in the employment of the at once infinite, which is needed in Cantor's proof of the non-denumerability of the real numbers. If the real numbers cannot be enumerated, they cannot be added - apparently providing an opening for degenerate intervals to constitute a measure larger than zero (practically boiling down to adding zeros in order to exceed zero, apparently justified by the fact that addition is not defined in the case of non-denumerable infinity). Grünbaum writes explicitly: "The consistency of the metrical analysis which I have given depends crucially on the non-denumerability of the infinite point-sets constituting the intervals on the line" (Grünbaum 1952:302). Therefore the entire arithmeticistic argument begs the question. The attempted arithmetization crucially depends upon the use of the idea of infinite totalities, which needs the at once infinite, and which finally presupposes the irreducible meaning of the spatial order of at once and the (correlated) spatial whole-parts relation. The perspective which we have advanced thus far challenges the idea of "an exact science". But since biology is oftentimes incorporated in the restricted notion of "science", we now briefly highlight some of the increasing problems facing neo-Darwinism with its law-like mechanism of random mutation and natural selection (of course, accepting the constancy of this mechanism contradicts the neo-Darwinian claim that "everything changes"). In the Prologue of his recent book, Darwin's Doubt (2013), Stephen Meyer states the following in connection with the assumed origination of the first living entities: "The type of information present in living cells - that is, 'specified' information in which the sequence of characters matters to the function of the sequence as a whole - has generated an acute mystery. No undirected physical or chemical process has demonstrated the capacity to produce specified information starting 'from purely physical or chemical' precursors. For this reason, chemical evolutionary theories have failed to solve the mystery of the origin of first life - a claim that few mainstream evolutionary theorists now dispute". [This book is dedicated to the mystery of the Cambrian explosion, initially estimated to have occurred within a time-span of 20 to 40 million years, but now reduced to 5-6 million years (Meyer 2013:72).]
Although neo-Darwinians therefore have to concede that the origination of the first living entity is a mystery, they still BELIEVE that it did happen "spontaneously", through purely material processes. However, apart from the extreme improbability of such a process, there are no clues as to how the information found in living entities came into being - the "hardware" (material) does not explain the "software" (such as ordered DNA sequences, epigenetic information or complex proteins). The equally mysterious appearance of new animal phyla during the Cambrian explosion is now attributed to information not stored in genes, namely epigenetic information. Add to this that similar information sequences do not confirm common ancestral genes. The reality that genes with information-rich sequences cannot be derived from common ancestral genes is underscored by recent "genomic studies which reveal that hundreds of thousands of genes in many diverse organisms exhibit no significant similarity in sequence to any other known gene" (Meyer 2013:215). In addition Meyer mentions that these ORFan genes (derived from "open reading frames of unknown origin") have "turned up in every major group of organisms, including plants and animals as well as both eukaryotic and prokaryotic one-celled living entities. In some organisms, as much as one-half of the entire genome comprises ORFan genes" (Meyer 2013:216). Having no homologs, ORFans cannot be related to a common ancestral gene, a "fact tacitly acknowledged by the increasing number of evolutionary biologists who attempt to 'explain' the origin of such genes through de novo ('out of nowhere') origination" (Meyer 2013:216). Clearly, questions concerning origins increasingly recede into the mystical realm of "coming from nowhere" (which is synonymous with: ultimately we do not know - and which comes close to conceding the possibility of creation)! Likewise, the Cambrian expert Douglas Erwin (trained at the University of California) and Eric Davidson "have now ruled out standard neo-Darwinian theory" because it "gives rise to lethal errors"; Erwin and Davidson add that no current theory of evolution explains the origin of the de novo body plans found in the Cambrian explosion (see Meyer 2013:356). On the same page Meyer mentions Erwin saying that establishing these novel body plans does not have "any parallel to currently observed biological processes", because he insists that the events of the past were fundamentally different. Meyer summarizes this succinctly: "the cause responsible for generating the new animal forms, whatever it was, must have been unlike any observed biological process operating in actual living populations today" (Meyer 2013:356). When the principle of uniformity is challenged, the door is opened for speculating about origination phenomena which are indeed unlike any biotical processes observed in currently living populations. How can anyone come to terms with the uncertainties and speculation increasingly surrounding (and even rejecting) the neo-Darwinian mechanism of random mutation and natural selection? Reverting to "out of nowhere" and a "fundamentally different past" underscores the mystery surrounding the unique origination of living entities, including the evidence of the Cambrian explosion which, according to Erwin and Davidson (2002), is not accounted for by any known (micro or macro) theory of evolution.
In conclusion, it should be mentioned that implicit in our entire preceding analysis of the shortcomings in and problems of postmodernism, positivism and atheism one can discern key elements of a non-reductionist ontology, motivated by the supra-theoretical ultimate commitment to accepting God as Creator of the universe in Whom all things hang together. The idea of type-laws (with their specified universality) and the idea of universal (unspecified) modal laws occupy a key position in such a non-reductionist ontology. We are indebted to the founders of this philosophical legacy, who developed their crucial insights during the first half of the previous century. Among them, Stoker also articulated his own assessment of what those who are involved in scholarship should acknowledge. He did this within the perspective of Christianizing all of life (Stoker 1967:65), which for him entailed the idea of God's law-order (Stoker 1967:52) on the basis of explicitly promoting the ideal of a non-reductionist ontology (Stoker 1967:61). It is a privilege to be able to make a humble contribution to the further development of this philosophical legacy at an institution where Professor Stoker spent his fruitful academic career.

References

BENACERRAF, P. & PUTNAM, H. (Eds.), 1964, Philosophy of Mathematics: Selected Readings, Basil Blackwell, Oxford.
BETH, E., 1965, Mathematical Thought, D. Reidel, Dordrecht.
BOWELL, T. & KEMP, G., 2005, Critical Thinking: A Concise Guide, Routledge, London.
BROUWER, L.E.J., 1964, 'Consciousness, Philosophy and Mathematics', in Benacerraf & Putnam 1964, pp. 78-84.
CLOUSER, R.A., 2005, The Myth of Religious Neutrality: An Essay on the Hidden Role of Religious Belief in Theories, new revised edition (first edition 1991), University of Notre Dame Press, Notre Dame.
COYNE, J.A., 2009, Why Evolution is True, Oxford University Press, Oxford.
DARWIN, C., 1859, On the Origin of Species by Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life, edited with an introduction by J.W. Burrow, Penguin Books, Harmondsworth, 1968. (The version available on the web, downloaded on October 29, 2005, differs slightly in some respects from the Penguin edition.)
DERRIDA, J., 1997, Deconstruction in a Nutshell: A Conversation with Jacques Derrida, edited with a commentary by John D. Caputo (The Villanova Roundtable), Fordham University Press, New York.
ERWIN, D.H. & DAVIDSON, E., 2002, 'The Last Common Bilaterian Ancestor', Development, 129:3021-3032.
FISHMAN, J., 2011, 'Part Ape, Part Human: A new ancestor emerges from the richest collection of fossil skeletons ever found', National Geographic, 220(2), August 2011:120-133.
GOULD, S.J., 1996, Life's Grandeur, Vintage (Random House), London.
GOULD, S.J., 2002, The Structure of Evolutionary Theory, The Belknap Press of Harvard University Press, Cambridge, MA.
GRÜNBAUM, A., 1952, 'A consistent conception of the extended linear continuum as an aggregate of unextended elements', Philosophy of Science, 19(2):288-306.
HAWKING, S.W. & MLODINOW, L., 2010, The Grand Design, Bantam Books, New York.
KANT, I., 1781, Kritik der reinen Vernunft, 1st edition, Felix Meiner edition (1956), Hamburg.
KLAPWIJK, J., 2008, Purpose in the Living World?, Cambridge University Press, Cambridge.
KLAPWIJK, J., 2009, Heeft de Evolutie een Doel?, Kok, Kampen.
KLINE, M., 1980, Mathematics: The Loss of Certainty, Oxford University Press, New York.
LENNOX, J., 2011, God and Stephen Hawking: Whose Design is it Anyway?, Lion Books, Oxford.
LENNOX, J., 2011a, Gunning for God: Why the New Atheists are Missing the Target, Lion Hudson, Oxford.
LYOTARD, J.-F., 1987, The Postmodern Condition: A Report on Knowledge, Manchester University Press, Manchester.
MEYER, S., 2013, Darwin's Doubt, HarperCollins, New York.
MÜLLER, F.M., 1961, translator of Kant's Critique of Pure Reason, 2nd edition, revised, Dolphin Books, Doubleday & Company, New York.
JOHN PAUL II, 1998, Encyclical Letter Fides et Ratio, to the Bishops of the Catholic Church on the relationship between Faith and Reason. Website (accessed 10/01/2009).
POPPER, K., 1966, The Open Society and its Enemies (Vols. I & II), Routledge & Kegan Paul, London.
POPPER, K.R., 1974, 'Intellectual Autobiography', in The Library of Living Philosophers, Vol. XIV, Book I, ed. P.A. Schilpp, Open Court Publishing Company, La Salle, Illinois.
SHAPIRO, S. (Ed.), 2005, The Oxford Handbook of Philosophy of Mathematics and Logic, Oxford University Press, Oxford.
SIMPSON, G.G., 1969, Biology and Man, Harcourt, New York.
SOKAL, A. & BRICMONT, J., 1999, Eleganter Unsinn: Wie die Denker der Postmoderne die Wissenschaften missbrauchen, C.H. Beck, München.
STEGMÜLLER, W., 1969, Metaphysik, Skepsis, Wissenschaft (first edition 1954), Springer, Berlin/New York.
STOKER, H.G., 1967, Oorsprong en Rigting, Nasionale Handelsdrukkery, Elsiesrivier.
STRAUSS, D.F.M., 2009, 'Calvyn in die Intellektuele Erfenis van die Weste', Tydskrif vir Geesteswetenskappe, 49(3):397-409.
STRAUSS, D.F.M., 2011, '(Oor)aftelbaarheid', Koers, 76(4):637-659.
VAN HUYSSTEEN, J.W.V., 1998, Duet or Duel? Theology and Science in a Postmodern World, Trinity Press International, Harrisburg.
VAN HUYSSTEEN, W.J.V., 2006, Alone in the World? Human Uniqueness in Science and Theology, William B. Eerdmans, Grand Rapids.
VAN PEURSEN, C.A., 1995, 'Dooyeweerd en de wetenschapsfilosofische discussie', in Dooyeweerd herdacht, edited by J. De Bruin, VU-Uitgeverij, Amsterdam.

Danie Strauss
Private Bag X6001
Potchefstroom Campus, North-West University
Potchefstroom, 2520
South Africa

Accepted: 01 Jun. 2015
Published: 01 Sep. 2015

1. An earlier version of this article was presented as a Stoker Lecture at North-West University, September 2013.
The Input File (CTRL)

This guide aims to detail the structure and use of the main input file (called the ctrl file) and related topics. As this guide explains, data is organized by categories, and the contents of each category are documented. A more careful description of the input file's syntax can be found in this manual.

Input can be supplied through a parallel input stream, namely the command-line switches. Switches are flagged by command-line arguments beginning with a dash (-). They serve many purposes: some switches apply to all executables, others are specific to one or a few of them. They can also modify the contents of the input file described here, as variables can be assigned from the command line before the input file is parsed. Switches are documented here for most executables; also any executable provides a brief summary of most switches it recognizes if you run it with --help, e.g. lmf --help.

Table of Contents

1. Input File Structure

Here is a sample input file for the compound Bi2Te3, written for the lmf code. Categories begin in the first column; the data within each category is identified by tokens.

VERS    LM:7 FP:7
HAM     AUTOBAS[PNU=1 LOC=1 LMTO=3 MTO=1 GW=0]
ITER    MIX=B2,b=.3 NIT=10 CONVC=1e-5
BZ      NKABC=3 METAL=5 N=2 W=.01
STRUC   NSPEC=2 NBAS=5 NL=4
        PLAT=  1    0           4.0154392
              -0.5  0.8660254   4.0154392
              -0.5 -0.8660254   4.0154392
SPEC    ATOM=Te Z= 52 R= 2.870279
        ATOM=Bi Z= 83 R= 2.856141
SITE    ATOM=Te POS=  0.0000000  0.0000000  0.0000000
        ATOM=Te POS= -0.5000000 -0.8660254  1.4616199
        ATOM=Te POS=  0.5000000  0.8660254 -1.4616199
        ATOM=Bi POS=  0.5000000  0.8660254  0.8030878
        ATOM=Bi POS= -0.5000000 -0.8660254 -0.8030878

Each element of data follows a token. The token tells the reader what the data signifies. Each token belongs to a category. VERS, ITER, BZ, STRUC, SPEC, SITE are categories that organize the input by topic. Any text that begins in the first column is a category. The full identifier (tag) consists of a sequence of branches, usually trunk and branch, e.g. BZ_METAL. The leading component (trunk) is the category; the last is the token, which points to actual data. Thus the category groups tags into themes, and the token identifies a particular type of data within the theme. Sometimes a tag has three branches, e.g. HAM_AUTOBAS_LOC.

Note: input files described here (ctrl.ext) can be automatically constructed from init files using the blm utility. init files and ctrl files are structured with categories and tokens in essentially the same way. For another description of categories and tokens, see the init file documentation.

Tags, Categories and Tokens

The input file offers a very flexible free format: tags identify data to be read by a program; for example, the tag BZ_W causes a number (.01) to be read from token W=. In this case W= belongs to the BZ category, so the full tag name is BZ_W. A category holds information for a family of data; for example BZ contains parameters associated with Brillouin zone integration. The entire input system has at present a grand total of 17 categories, though any one program uses only a subset of them.

Consider the Brillouin zone integration category. You plan to carry out the BZ integration using the Methfessel-Paxton sampling method. M-P integration has two parameters: polynomial order n and gaussian width w. Two tags are used to identify them: BZ_N and BZ_W; they are usually expressed in the input file as follows:

BZ N=2 W=.01

This format style is the most commonly used because it is clean and easy to read; but it conceals the tree structure a little. The same data can equally be written:

BZ[ N=2 W=.01]

Now the tree structure is apparent: the pair of brackets [..]
delimits the scope of tag BZ. Any tag that starts in the first column is a category, so any non-white character appearing in the first column automatically starts a new category, and also terminates any prior category. N= and W= mark tokens BZ_N and BZ_W.

Apart from the special use of the first column to identify categories, data is largely free-format, though there are a few mild exceptions. Thus

BZ N=2 W=.01
BZ W=.01 N=2
BZ[ W=.01 N=2]

all represent the same information.

Note: if the same category appears more than once in an input file, only the first occurrence is used; subsequent occurrences are ignored. Generally, only the first tag is used when more than one appears within a given scope.

Usually the tag tree has only two levels (category and token), but not always. For example, data associated with atomic sites must be supplied for each site. In this case the tree has three levels, e.g. SITE_ATOM_POS. Site data is typically represented in a format along the following lines:

SITE    ATOM=Ga POS= 0 0 0 RELAX=T
        ATOM=As POS= .25 .25 .25

The scope of SITE starts at "SITE" and terminates at the start of the next category (or at the end of the file). There will be multiple instances of the SITE_ATOM tag, one for each site. The scope of the first instance begins with the first occurrence of ATOM and terminates just before the second:

ATOM=Ga POS= 0 0 0 RELAX=T

And the scope of the second SITE_ATOM is

ATOM=As POS= .25 .25 .25

Note that ATOM simultaneously acts like a token pointing to data (e.g. Ga) and as a tag holding tokens within it, in this case SITE_ATOM_POS and (for the first site) SITE_ATOM_RELAX. Some tags are required; others are optional; still others (in fact most) may not be used at all by a particular program. If a code needs site data, SITE_ATOM_POS is required, but SITE_ATOM_RELAX is probably optional, or not read at all.

Note: this manual contains a more careful description of the input file's syntax.

Input lines are passed through a preprocessor, which provides a wide flexibility in how input files are structured. The preprocessor has many features in common with a programming language, including the ability to declare and assign variables and to evaluate algebraic expressions, and it has constructs for branching and looping, to make possible multiple or conditional reading of input lines. For example, suppose that through a prior preprocessor instruction you have declared a variable range and assigned it the value 3. A line in the input file containing an expression in curly brackets, say {range+1}, is turned into the same line with the evaluated result (here 4) in its place. The preprocessor treats text inside brackets {…} as an expression (usually an algebraic expression), which is evaluated and rendered back as an ASCII string. See this annotated lmf output for an example.

The preprocessor's programming language makes it possible for a single file to serve as input for many materials systems in the manner of a database, or as documentation. Also you can easily vary input conditions in a parametric fashion. Other files besides ctrl.ext are first parsed by the preprocessor; files for site positions and Euler angles for noncollinear magnetism, among others, are read through the preprocessor.

2. Help with finding tokens

Seeing the effect of the preprocessor

The preprocessor can act in nontrivial ways. To see the effect of the preprocessor, use the --showp command-line option. See this annotated output for an example.
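As a small illustration of the variable substitution and the --showp switch just described (a sketch only: the extension si, the variable nk and the numbers are arbitrary choices, not taken from any distributed example), suppose ctrl.si contains

% const nk=4
BZ  NKABC={nk} {nk} {2*nk}
    METAL=5

Running lmf --showp si prints the file as the parser will see it: the preprocessor directive is consumed and the curly-bracket expressions are evaluated, so the listing should look something like

BZ  NKABC=4 4 8
    METAL=5

which lets you confirm that the intended values reach the BZ category before any calculation is attempted.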
Finding what tags the parser seeks

It is often the case that you want to input some information but don't know the name of the tag you need. Try searching this page for a keyword. You can also list each tag a particular tool reads, together with a synopsis of its function, by adding --input to the command line, and search for keywords in that text. Take, for example:

lmchk --input

This switch tells the parser not to try to read anything, but to print out information about what it would try to read. Several useful bits of information are given, including a brief description of each tag in the following format. A snippet of the output is reproduced below:

Tag                Input   cast  (size,min)
IO_VERBOS          opt     i4v   5, 1    default = 35
  Verbosity stack for printout. May also be set from the command-line: --pr#1[,#2]
IO_IACTIV          opt     i4    1, 1    default = 0
  Turn on interactive mode. May also be controlled from the command-line: --iactiv or --iactiv=no
STRUC_FILE         opt     chr   1, 0    (Not used if data read from EXPRESS_file)
  Name of site file containing basis and lattice information.
  Read NBAS, PLAT, and optionally ALAT from site file, if specified.
  Otherwise, they are read from the ctrl file.
STRUC_PLAT         reqd    r8v   9, 9
  Primitive lattice vectors, in units of alat
SPEC_ATOM_LMX      opt     i4    1, 1    (default depends on prior input)
  l-cutoff for basis
SITE_ATOM_POS      reqd*   r8v   3, 1
  Atom coordinates, in units of alat
  * If preceding token is not parsed, attempt to read the following:
SITE_ATOM_XPOS     reqd    r8v   3, 1
  Atom coordinates, as (fractional) multiples of the lattice vectors

The table tells you IO_VERBOS and IO_IACTIV are optional tags; default values are 35 and 0, respectively. A single integer will be read from the latter tag, and between one and five integers will be read from IO_VERBOS. There is a brief synopsis explaining the function of each. For these particular cases, the output gives alternative means to perform equivalent functions through command-line switches.

STRUC_FILE=fname is optional. Here fname is a character string: it should be the site file name fname.ext from which lattice information is read. If you do use this tag, other tags in the STRUC category (NBAS, PLAT, ALAT) may be omitted. Otherwise, STRUC_PLAT is required input; the parser requires 9 numbers. The synopsis also tells you that you can specify the same information using EXPRESS_file=fname (see the EXPRESS category below).

SPEC_ATOM_LMX is optional input whose default value depends on other input (in this case, atomic number). SITE_ATOM_POS is required input in the sense that you must supply either it or SITE_ATOM_XPOS. The * in reqd* indicates that the information in SITE_ATOM_POS can be supplied by an alternate tag, SITE_ATOM_XPOS in this case. Note: if site data is given through a site file, all the other tags in the SITE category will be ignored.

The cast (real, integer, character) of each tag is indicated, and also how many numbers are to be read. Sometimes tags will look for more than one number, but allow you to supply fewer. For example, BZ_NKABC in the snippet below looks for three numbers to determine the k-mesh, which are the number of divisions along each of the reciprocal lattice vectors. If you supply only one number it is copied to elements 2 and 3.

BZ_NKABC           reqd    i4v   3, 1    (Not used if data read from EXPRESS_nkabc)
  No. qp along each of 3 lattice vectors. Supply one number for all vectors or a separate number for each vector.

Command-line options

--help performs a similar function for the command-line arguments: it prints out a brief summary of the arguments effective in the executable you are using. A more complete description of general-purpose command-line options can be found on this page.
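For instance (assuming a ctrl file ctrl.si is present and a Unix-like shell; the tag chosen here is just an example), the --input listing can be combined with grep to locate a particular tag and its synopsis, and --help gives the switch summary:

lmchk si --input | grep -A 2 SITE_ATOM_POS
lmf --help

The first command filters the full tag listing of lmchk for the SITE_ATOM_POS entry and the lines following it; the second prints the brief summary of command-line switches described above.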
See this annotated lmfa output for an example. Displaying tags read by the parser To see what is actually read by a particular tool, run your tool with --show=2 or --show. See the annotated lmf output for an example. These special modes are summarized here. 3. The EXPRESS category Section 3 provides some description of the input and purpose of tags in each category. There is one special category, EXPRESS, whose purpose is to simplify and streamline input files. Tags in EXPRESS are effectively aliases for tags in other categories, e.g. reading EXPRESS_gmax reads the same input as HAM_GMAX. If you put a tag into EXPRESS, it will be read there and any tag appearing in its usual location will be ignored. Thus including GMAX in HAM would have no effect if gmax is present in EXPRESS. EXPRESS collects the most commonly used tags in one place. There is usually a one-to-one correspondence between the tag in EXPRESS and its usual location. The sole exception to this is EXPRESS_file, which performs the same function as the pair of tags, STRUC_FILE and SITE_FILE. Thus in using EXPRESS_file all structural data is supplied through the site file. 4. Input File Categories This section details the various categories and tokens used in the input file. Note: The tables below list the input systems’ tokens and their function. Tables are organized by category. • The Arguments column refers to the cast belonging to the token (“l”, “i”, “r”, and “c” refer to logical, integer, floating-point and character data, respectively) • The Program column indicates which programs the token is specific to, if any • The Optional column indicates whether the token is optional (Y) or required (N) • The Default column indicates the default value, if any • The Explanation column describes the token’s function. See Table of Contents Category BZ holds information concerning the numerical integration of quantities such as energy bands over the Brillouin Zone (BZ). The LMTO programs permit both sampling and tetrahedron integration methods. Both are described in bzintegration, and the relative merits of the two different methods are discussed. As implemented both methods use a uniform, regularly spaced mesh of k-points, which divides the BZ into microcells as described here. Normally you specify this mesh by the number of divisions of each of the three primitive reciprocal lattice vectors (which are the inverse, transpose of the lattice vectors PLAT); NKABC below. These tokens are read by programs that make hamiltonians in periodic crystals (lmf,lm,lmgf,lmpg,tbe). Some tokens apply only to codes that make energy bands, (lmf,lm,tbe). GETQPl YFRead list of k-points from disk file qpts.ext. This is a special mode, and you normally would let the program choose its own mesh by specifying the number of divisions (see NKABC). If token is not parsed, the program will attempt to parse NKABC. NKABC1 to 3 i N The number of divisions in the three directions of the reciprocal lattice vectors. k-points are generated along a uniform mesh on each of these axes. (This is the optimal general purpose quadrature for periodic functions as it integrates the largest number of sine and cosine functions exactly for a specified number of points.) The parser will attempt to read three integers. If only one number is read, the missing second and third entries assume the value of the first. Information from NKABC, together with BZJOB below, contains specifications equivalent to the widely used “Monkhorst Pack” scheme. 
But it is more transparent and easier to understand. The number of k-points in the full BZ is the product of these numbers; the number of irreducible k-points may be reduced by symmetry operations. PUTQPl YFIf T, write out the list of irreducible k-points to file qpts, and the weights for tetrahedron integration if available. BZJOB1 to 3 i Y0Controls the centering of the k-points in the BZ: 0: the mesh is centered so that one point lies at the origin. 1: points symmetrically straddle the origin. Three numbers are supplied, corresponding to each of the three primitive reciprocal lattice vectors. As with NKABC if only one number is read the missing second and third entries assume the value of the first. METALilmf, lm, tbeY5Specifies how the weights are generated for Brillouin zone integration. For a detailed description, see this page. The METAL token accepts the following: 0. System assumed to be an insulator; weights determined a priori. 1. Eigenvectors are written to disk, in which case the integration for the charge density can be deferred until all the bands are obtained. 2. Integration weights are read from file wkp.ext, which will have been generated in a prior band pass. If wkp.ext is unavailable, the program will temporarily switch to METAL=3. 3. Two band passes are made; the first generates only eigenvalues to determine EF. It is slower than METAL=2, but it is more stable which can be important in difficult cases. 4. Three distinct Fermi levels are assumed and weights generated for each. After EF is determined, the actual weights are calculated by quadratic interpolation through the three points. 5. Like METAL=3 in which two passes are made but eigenvectors are kept in memory, so there is no additional overhead in making the first pass. This is the recommended mode for lmf unless you are working with a large system and need to conserve memory. The ASA implements METAL=0,1,2; the FP codes METAL=0,2,3,4,5. TETRAilmf,lm,tbeY1Selects BZ integration method. 0: Methfessel-Paxton sampling integration. Tokens NPTS, N, W, EF0, DELEF, DOS (see below) are relevant to this integration scheme. 1: tetrahedron integration, with Bloechl weights Nilmf,lm,tbeY0Polynomial order for M-P sampling integration. (Not used with tetrahedron integration or for insulators).  0: integration uses standard gaussian method. >0: integration uses generalized gaussian functions, i.e. polynomial of order N × gaussian to generate integration weights. −1: use the Fermi function rather than gaussians to broaden the δ-function. This generates the actual electron (fermi) distribution for a finite temperature. Add 100: by default, if a gap is found separating occupied and unoccupied states, the program will treat the system as an insulator, even when METAL>0. To suppress this, add 100 to N (use −101 for Fermi distribution). Wrlmf,lm,tbeY5e-3Case BZ_N≥0 :  broadening (Gaussian width) for Gaussian sampling integration (Ry). Case BZ_N<0 :  kBT (Ry) where kB is the Boltzmann constant and T the temperature. W is not used for insulators or when using tetrahedron integration. EF0rlmf,lm,tbeY0Initial guess at Fermi energy. Used when  TETRA=0, or when  BZ_METAL=4 (which does not use the tetrahedron method for the density). DELEFrlmf,lm,tbeY0.05Initial uncertainty in Fermi level for sampling integration. Used when  TETRA=0, or when  BZ_METAL=4 (which does not use the tetrahedron method for the density). As the system approaches self-consistency this window is reduced. 
ZBAKrlmf,lmY0Homogeneous background charge SAVDOSilmf,lm,tbeY00: does not save dos on disk. 1: writes the total density of states on NPTS energy mesh points to disk file dos.ext. 2: Write weights to disk for partial DOS (does not work for lmf; in the ASA this occurs automatically). 4: Same as (2), but write weights m-resolved (ASA). 1. SAVDOS>0 uses DOS and NPTS tags also. 2. You may also cause lm or lmf to generate m-resolved dos the from command-line (see --pdos). DOS2 r Y-1,0Energy window over which DOS accumulated (Ry). Needed either for sampling integration or if SAVDOS>0. NPTSi Y1001Number of points in the density-of-states energy mesh used in conjunction with sampling integration. Needed either for sampling integration or if SAVDOS>0. EFMAXrlmf,lm,tbeY2Only eigenvectors whose eigenvalues are less than EFMAX are computed; this improves execution efficiency. NEVMXilmf,lm,tbeY0>0 : Find at most NEVMX eigenvectors. =0 : program uses internal default. <0 : no eigenvectors are generated (and correspondingly, nothing associated with eigenvectors such as density). Caution: if you want to look at partial DOS well above the Fermi level (which usually comes out around 0), you must set EFMAX and NEVMX high enough to encompass the range of interest. ZVALr Yall LDANumber of electrons to accumulate in BZ integration. Normally zval is computed by the program. NOINVllmf,lm,tbeYFSuppress the automatic addition of the inversion to the list of point group operations. Usually the inversion symmetry can be included in the determination of the irreducible part of the BZ because of time reversal symmetry. There may be cases where this symmetry is broken: e.g. when spin-orbit coupling is included or when the (beyond LDA) self-energy breaks time-reversal symmetry. In most cases, the program will automatically disable this addition in cases that it knows the symmetry is broken. FSMOM2 rlmf,lmY0 0Set the global magnetic moment (collinear magnetic case). In the fixed-spin moment method, a spin-dependent potential shift Beff is added to constrain the total magnetic moment to value assigned by FSMOM. No constraint is imposed if this value is zero (the default). Optional second argument #2 supplies an initial Beff. It is applied whether or not the first argument #1 is 0. If #1 ≠ 0, Beff is made consistent with it. DMATKllmf,lmgfYFCalculate the density matrix. Implementation still not ready. INVITllmf,lmYTGenerate eigenvectors by inverse iteration (this is the default). It is more efficient than the QL method, but occasionally fails to find all the vectors. When this happens, the program stops with the message: DIAGNO: tinvit cannot find all evecs If you encounter this message set INVIT=F. EMESHrlmgf,lmpgY10,0,-1,…Parameters defining contour integration for Green’s function methods. See also the GF documentation. 1. number of energy points n. 2. contour type:  0: Uniform mesh of nz points: Real part of z between emin and emax  1: Same as 0, but reverse sign of Im z  10: elliptical contour  11: same as 10, but reverse sign of Im z  100s digit used for special modifications  Add 100 for nonequil part using Im(z)=delne  Add 200 for nonequil part using Im(z)=del00  Add 300 for mixed elliptical contour + real axis to find fermi level  Add 1000 to set nonequil part only. 3. lower bound for energy contour emin (on the real axis). 4. upper bound for energy contour emax, e.g. Fermi level (on the real axis). 5. (elliptical contour) eccentricity: ranges between 0 (circle) and 1 (line)  (uniform contour) Im z. 6. 
(elliptical contour) bunching parameter eps : ranges between 0  (distributed symmetrically) and 1 (bunched toward emax)  (uniform contour) not used. 7. (nonequilibrium GF, lmpg) nzne = number of points on nonequilibrium contour. 8. (nonequilibrium GF, lmpg) vne = difference in fermi energies of right and left leads. 9. (nonequilibrium GF, lmpg) delne = Im part of E for nonequilibrium contour. 10 (nonequilibrium GF, lmpg) substitutes for delne when making the surface self-energy. MULLitbeY0Mulliken population analysis. Mulliken population analysis is also implemented in lmf, but you specify the analysis with a command-line argument. See Table of Contents This category enables users to declare variables in algebraic expressions. The syntax is a string of declarations inside the category, e.g: CONST a=10.69 nspec=4+2 Variables declared this way are similar to, but distinct from variables declared for the preprocessor, such as % const nbas=5 In the latter case the preprocessor makes a pass, and may use expressions involving variables declared by e.g. “% const nbas=5” to alter the structure of the input file. Variables declared for use by the preprocessor lose their definition after the preprocessor completes. The following code segment illustrates both types: % const nbas=5 CONST a=10.69 nspec=4 STRUC ALAT=a NSPEC=nspec NBAS={nbas} After the preprocessor compiles, the input file appears as: CONST a=10.69 nspec=4 When the  CONST  category is read (it is read before other categories), variables  a  and  nspec  are defined and used in the  SPEC  category. See Table of Contents Contains parameters for molecular statics and dynamics. NITilmf, lmmc, tbeY maximum number of relaxation steps (molecular statics). SSTAT[…] lm, lmgfY (noncollinear magnetism) parameters specifying how spin statics (rotation of quantization axes to minimze energy) is carried out. SSTAT_MODEilm, lmgfN00: no spin statics or dynamics. -1: Landau-Gilbert spin dynamics. 1: spin statics: quantization axis determined by making output density matrix diagonal. 2: spin statics: size and direction of relaxation determined from spin torque. Add 10 to mix angles independently of P,Q (Euler angles are mixed with prior iterations to accelerate convergence). Add 1000 to mix Euler angles independently of P,Q. SSTAT_SCALEilm, lmgfN0(used with mode=2) scale factor amplifying magnetic forces. SSTAT_MAXTilm, lmgfN0maximum allowed change in angle. SSTAT_TAUilm, lmgfN0(used with mode=-1) time step. SSTAT_ETOLilm, lmgfN0(used with mode=-1) Set tau=0 this iter if etot-ehf>ETOL. MSTAT[…] lmf, lmmc, tbeY (molecular statics) parameters specifiying how site positions are relaxed given the internuclear forces. MSTAT_MODEilmf, lmmc, tbeN00: no relaxation. 4: relax with conjugate gradients algorithm (not generally recommended). 5: relax with Fletcher-Powell alogirithm. Find minimum along a line; a new line is chosen. The Hessian matrix is updated only at the start of a new line minimization. Fletcher-Powell is more stable but usually less efficient then Broyden. 6: relax with Broyden algorithm. This is essentially a Newton-Raphson algorithm, where Hessian matrix and direction of descent are updated each iteration. MSTAT_HESSllmf, lmmc, tbeNTT: Read hessian matrix from file, if it exists. F: assume initial hessian is the unit matrix. MSTAT_XTOLrlmf, lmmc, tbeY1e-3Convergence criterion for change in atomic displacements. >0: criterion satisfied when xtol > net shift (shifts summed over all sites). <0: criterion satisfied when xtol > max shift of any site. 
0: Do not use this criterion to check convergence. Note: When molecular statics are performed, either GTOL or XTOL must be specified. Both may be specified. MSTAT_GTOLrlmf,lmmc,tbeY0Convergence criterion for tolerance in forces. >0: criterion satisfied when gtol > “net” force (forces summed over all sites). <0: criterion satisfied when xtol > max absolute force at any site. 0: Do not use this criterion to check convergence. MSTAT_STEPrlmf, lmmc, tbeY0.015Initial (and maximum) step length. MSTAT_NKILLilmf, lmmc, tbeY0 0: Never remove Hessian. >0: Remove hessian after NKILL iterations. <0: Remove hessian after -NKILL iterations, and also remove all memory of the hessian in the relaxation algorithm. MSTAT_PDEF=rlmf, lmmc, tbeY0 0 0 …Lattice deformation modes (not documented). MD[…] lmmc, tbeY Parameters for molecular dynamics. MD_MODEilmmcN00: no MD 1: NVE 2: NVT 3: NPT MD_TSTEPrlmmcY20.671Time step (a.u.) NB: 1 fs = 20.67098 a.u. MD_TEMPrlmmcY0.00189999Temperature (a.u.) NB: 1 deg K = 6.3333e-6 a.u. MD_TAUPrlmmcY206.71Thermostat relaxation time (a.u.) MD_TIMErlmmcN20671000Total MD time (a.u.) MD_TAUBrlmmcY2067.1Barostat relaxation time (a.u.) See Table of Contents Category EWALD holds information controlling the Ewald sums for structure constants entering into, e.g. the Madelung summations and Bloch summed structure constants (lmf). Most programs use quantities in this category to carry out Ewald sums (exceptions are lmstr and the molecules code lmmc). ASr Y2Controls the relative number of lattice vectors in the real and reciprocal space. TOLr Y1e-8Tolerance in the Ewald sums. NKDMXi Y800The maximum number of real-space lattice vectors entering into the Ewald sum, used for memory allocation. Normally you should not need this token. Increase NKDMX if you encounter an error message like this one: xlgen: too many vectors, n=… RPADr Y0Scale rcutoff by RPAD when lattice vectors padded in oblong geometries. See Table of Contents This category contains parameters defining the one-particle hamiltonian. Portions of HAM are read by these codes: NSPINiALLY11 for non-spin-polarized calculations. 2 for spin-polarized calculations. NB: For the magnetic parameters below to be active, use NSPIN=2. RELiALLY10: for nonrelativistic Schrödinger equation. 1: for scalar relativistic approximation to the Dirac equation. 2: for Dirac equation (ASA only). 11: compute cores with the Dirac equation (lmfa only). SOiALLY00: no SO coupling. 1: Add L·S to hamiltonian. However, only the spin-diagonal part of the density is retained. 2: Add Lz·Sz only to the hamiltonian, so the spin channels remain distinct. 3: Like 2, but also L·S−LzSz is included perturbatively in the eigenvalues only and in a manner that preserves independence in the spin channels. This generates eigenvalues very close to LS for a given potential, but the eigenfunctions are generated from H+LzSz only. As a result the eigenfunctions (and then the density) remain spin-diagonal. There is some effect on the density, but the approximation seems to be rather good since the error on the eigenfunctions is of 2nd order in the perturbation. 11: Same as 1, but additionally decompose SO by site. See here for analysis and description of the different approximations. GW-based codes at present requires the spin channels to be kept separated and works, then, with SO=2,3 only. NONCOLlASAYFF: collinear magnetism. T: non-collinear magnetism. SS4 rASAY0Magnetic spin spiral, direction vector and angle. 
Example: nc/test/ 1 BFIELDilm, lmfY00: no external magnetic field applied. 1: add site-dependent constant external Zeeman field (requires NONCOL=T). Fields are read from file bfield.ext. 2: add Bz·Sz only to hamiltonian. fp/test/test.fp gdn nc/test/ 5 BXCSCALilm, lmgfY0This tag provides an alternative means to add an effective external magnetic field in the LDA. 0: no special scaling of the exchange-correlation field. 1: scale the magnetic part of the LDA XC field by a site-dependent factor 1 + sbxci as described below. 2: scale the magnetic part of the LDA XC field by a site-dependent factor as described below. This is a special mode used to impose constraining fields on rotations, used, e.g. by the CPA code. Site-dependent scalings sbxci are read from file bxc.ext. XCFUNiALLY2Specifies local part exchange-correlation functional. 0,#2,#3: Use libxc exchange functional #2 and correlation functional #3 1: Ceperly-Alder 2: Barth-Hedin (ASW fit) GGAiALLY0Specifies gradient additions to exchange-correlation functional (not used when XCFUN=0,#2,#3). 0. No GGA (LDA only) 1. Langreth-Mehl 2. PW91 3. PBE 4. PBE with Becke exchange This tutorial uses the PBE functional. To compare the internally coded PBE functional with libxc, try fp/test/test.fp te PWMODEilmf, lmfgwdY0Controls how APWs are added to the LMTO basis. 1s digit: 0. LMTO basis only 1. Mixed LMTO+PW 2. PW basis only Examples: fp/test/test.fp srtio3  and  fp/test/test.fp felz 4 10s digit: 0. PW basis fixed to (less accurate, but simpler) 1. PW basis symmetry-consistent, but basis depends on k. Example:  fp/test/test.fp te PWEMINrlmf, lmfgwdY0Include APWs with energy E > PWEMIN (Ry) PWEMAXrlmf, lmfgwdY Include APWs with energy E < PWEMAX (Ry) NPWPADilmf, lmfgwdY-1If >0, overrides default padding of variable basis dimension. Certain arrays have fixed dimension that must be at least as large as the rank of the hamiltonian. The APW basis is depends on k if PWMODE>10, so some padding must be added to this fixed dimesion to ensure that these arrays can accommodate any k. Normally the code will internally select a sensible default. In the event it is not large enough (the program will stop), you can enlarge the padding with this token. RDSIGilmf, lmfgwd, lm, lmgfY0Controls how the QSGW self-energy Σ0 substitutes for the LDA exchange correlation functional. Note: the GW codes store in file sigm.ext. 1s digit:  0 do not read Σ0  1 read file sigm.ext, if it exists, and add it to the LDA potential  2 same as 1 but symmetrize sigm after reading  Add 4 to retain only real part of real-space sigma 10s digit:  0 simple interpolation (not recommended).  1 approximate high energy parts of sigm by diagonal. Optionally add the following (the same functionality using --rsig on the command line): 10000 to indicate the sigma file was stored in the full BZ (no symmetry operations are assumed). 20000 to use the minimum neighbor table (only one translation vector at the surfaces or edges; cannot be used with symmetrization). 40000 to allow mismatch between expected k-points and file values. RSSTOLrALLY5e-6Max tolerance in Bloch sum error for real-space Σ0. Σ0 is read in k-space and is immediately converted to real space by inverse Bloch transform. The real space form is forward Bloch summed and checked against the original k-space Σ0. If the difference exceeds RSSTOL the program will abort. The conversion should be exact to machine precision unless the range of Σ0 is truncated. You can control the range of real-space Σ0 with RSRNGE below. 
RSRNGErALLY5Maximum range of connecting vectors for real-space Σ0 (units of ALAT). NMTOiASAY0Order of polynomial approximation for NMTO hamiltonian. KMTOrASAY Corresponding NMTO kinetic energies. Read NMTO values, or skip if NMTO=0. EWALDllmYFMake strux by Ewald summation (NMTO only). VMTZrASAY0Muffin-tin zero defining wave functions. QASAiASAY3A parameter specifying the definition of ASA moments Q0,Q1,Q2 0. band code accumulates Q1, Q2 from true energy moments of sphere charges (KKR style).  Sphere code generates density from Q0× + Q2×.  This (Methfessel convention) is approximate but decouples potential parameters from charges. 1. Sphere code generates density from Q0× + Q2×; thus Q0 is the sphere charge. 2. Q1,Q2 accumulated from and , rather than power moments (not applicable to lmgf, lmpg). 3. 1+2 (Standard conventions). Add 4 to cause the sphere integrator to construct and by outward radial integration only. PMINr,r,…ALLY0 0 0 …Global minimum in fractional part of the continuous principal quantum number . Enter values for l=0,..lmx. 0: no minimum constraint. # : with #<1, fractional part of . 1: use free-electron value as minimum. Note: lmf always uses a minimum constraint, the free-electron value (or slightly higher if AUTOBAS_GW is set). You can set the floor still higher with PMIN=#. PMAXr,r,…ALLY0 0 0 …Global maximum in fractional part of the continuous principal quantum number . Enter values for l=0,..lmx. 0 : no maximum constraint. #: with #<1, uppper bound of fractional P is #. OVEPSrALLY0The overlap is diagonalized and the hilbert space is contracted, discarding the part with eigenvalues of overlap < OVEPS. Especially useful with the PMT basis, where the combination of smooth Hankel functions and APWs has a tendency to make the basis overcomplete. OVNCUTiALLY0This tag has a similar objective to OVEPS. The overlap is diagonalized and the hilbert space is contracted, discarding the part belonging to lowest OVNCUT evals of overlap. Supersedes OVEPS, if present. GMAXrlmf, lmfgwdN G-vector cutoff used to create the mesh for the interstitial density (Ry1/2). A uniform mesh with spacing between points in the three directions as homogeneous as possible, with G vectors |G| < GMAX. This input is required; but you may omit it if you supply information with the FTMESH token. FTMESHi1 [i2 i3]FPN The number of divisions specifying the uniform mesh density along the three lattice vectors. The second and third arguments default to the value of the first one, if they are not specified. This input is used only if the parser failed to read the GMAX token. TOLrFPY1e-6Specifies the precision to which the generalized LMTO envelope functions are expanded in a Fourier expansion of G vectors. FRZWFlFPYFSet to T to freeze the shape of the augmented part of the wave functions. Normally their shape is updated as the potential changes, but with FRZWF=t the potential used to make augmentation wave functions is frozen at what is read from the restart file (or free-atom potential if starting from superposing free atoms). This is not normally necessary, and freezing wave functions makes the basis slightly less accurate. However, there are slight inconsistencies when these orbitals are allowed to change shape. Notably the calculated forces do not take this shape change into account, and they will be slightly inconsistent with the total energy. FORCESiFPY0Controls how forces are to be calculated, and how the second-order corrections are to be evaluated. 
Through the variational principle, the total energy is correct to second order in deviations from self-consistency, but forces are correct only to first order. To obtain forces to second order, it is necessary to know how the density would change with a (virtual) displacement of the core+nucleus, which requires a linear response treatment. lmf estimates this change using one of ansatz:1.  the free-atom density is subtracted from the total density for nuclei centered at the original position and added back again at the (virtually) displaced position. The core+nucleus is shifted and screened assuming a Lindhard dielectric response. You also must specify ELIND, below. ELINDrlmfY-1A parameter in the Lindhard response function, (the Fermi level for a free-electron gas relative to the bottom of the band). You can specify this energy directly, by using a positive number for the parameter. If you instead use a negative number, the program will choose a default value from the total number of valence electrons and assuming a free-electron gas, scale that default by the absolute value of the number you specify. If you have a simple sp bonded system, the default value is a good choice. If you have d or f electrons, it tends to overestimate the response. Use something smaller, e.g. ELIND=-0.7. ELIND is used in three contexts: (1) in the force correction term; see FORCES= above. (2) to estimate a self-consistent density from the input and output densities after a band pass. (3) to estimate a reasonable smooth density from a starting density after atoms are moved in a relaxation step. SIGP[…] lmf, lmfgwdY Parameters used to interpolate the self-energy . Used in conjunction with the GW package. See gw for description. Default: not used. SIGP_MODEilmf, lmfgwdY4Specifies the linear function used for matrix elements of at highly-lying energies. High-lying states should be far enough away from the Fermi level that their effect should be small, and the result should depend very little on the choice of the constraint. By approximating for these states, one ensures that the LDA and quasiparticle eigenvectors for those states are the same. 0. constrain to be greater than . 1. constrain to be equal to . 2. constrain to be defined in the interval . 3. constrain as in SIGP_MODE=1. The difference between modes 1 and 3 are merely informational. 4. constrain to be a constant. Its value is calculated by the GW package and read from sigm.ext. This mode requires no information from the user. It is the recommended mode, available in version 7.7 or later. SIGP_NMAXilmf, lmfgwdY0Integer specifying which of the highest self-energy matrix elements are to be approximated. States higher than SIGP_NMAX have the off-diagonal part of sigma stripped; unlike the low-lying states, the diagonal part of is constrained (see SIGP_MODE above). If SIGP_NMAX is lower or equal to 0, it is not used; see SIGP_EMAX below. SIGP_EMAXrlmf, lmfgwdY2.0Alternative way to specify approximation of high-lying elements of the self-energy matrix. It is only used if SIGP_NMAX is lower or equal to 0, which case SIGP_EMAX is an energy cutoff: states above SIGP_EMAX are approximated. SIGP_NMINilmf, lmfgwdY0Integer specifying how many of the lowest-lying states are approximated by discarding the off-diagonal parts in the basis of LDA functions. If SIGP_NMIN is zero, no low-lying states are approximated. SIGP_EMINrlmf, lmfgwdY0.0Alternative way to specify approximations of low-lying elements of the self-energy matrix. 
It is only used if SIGP_NMIN<0, which case SIGP_EMIN is an energy cutoff: states below SIGP_EMIN are approximated. SIGP_Arlmf, lmfgwdY0.02Coefficient in the linear fit (see SIGP_MODE=0,…,3). If SIGP_MODE=4, SIGP_A is not used. In the linear constraints (SIGP_MODE=0,1) it is the constant coefficient; for SIGP_MODE=2, it is the lower bound. Note that its default value is a good estimate for Si. SIGP_Brlmf, lmfgwdY0.06Coefficient in the linear fit (see SIGP_MODE=0,…,3). If SIGP_MODE=4, SIGP_B is not used. In the linear constraints (SIGP_MODE=0,1) it is the linear coefficient; for SIGP_MODE=2, it is the upper bound. Note that its default value is a good estimate for Si. SIGP_EFITrlmf, lmfgwdY0Lower bound for the least squares fit required for a reasonable evaluation of the above coefficients SIGP_A and SIGP_B when SIGP_MODE=0,…,3. For SIGP_MODE<3, lmf will make a least-squares fit to for states higher than SIGP_EFIT. For SIGP_MODE=3, lmf will make a least-squares fit for states between SIGP_EFIT and SIGP_EMAX, which must be used if one is going to evaluate for states above some SIGP_EMAX. For the case SIGP_MODE<3 one must invoke lmf one the mesh of k-points for which the self-energy is known (there appear to be fewer problems with interpolation on that mesh). lmf accumulates the minimum, maximum, and least-squares fit for the for all the states above the cutoff. Look in the output for a line beginning with “hambls:”. Also, setting the verbosity above 45, lmf will print out the calculated for each of these states, together with the constrained value. lmf will write to file sigii.ext the data used to make the fit, and summarize the fit and the end of the file. If SIGP_MODE=4, SIGP_EFIT is not needed. AUTOBAS[…] lmfa, lmf, lmfgwdY Parameters associated with the automatic determination of the basis set. These switches greatly simplify the creation of an input file for lmf. Note: Programs lmfa and lmf both use tokens in the AUTOBAS tag but they mean different things, as described below. This is because lmfa generates the parameters while lmf uses them. Default: not used. AUTOBAS_GWilmfaY0Set to 1 to tailor the autogenerated basis set file basp0.ext to a somewhat larger basis, better suited for GW. AUTOBAS_GWilmfY0Set to 1 to float log derivatives a bit more conservatively — better suited to GW calculations. AUTOBAS_LMTOilmfaY0lmfa autogenerates a trial basis set, saving the result into basp0.ext. LMTO is used in an algorithm to determine how large a basis it should construct: the number of orbitals increases as you increase LMTO. This algorithm also depends on which states in the free atom carry charge. Let lq be the highest l which carries charge in the free atom. There are the following choices for LMTO: 0. standard minimal basis; same as LMTO=3. 1. The hyperminimal basis, which consists of envelope functions corresponding those l which carry charge in the free atom, e.g. Ga sp and Mo sd (this basis is only sensible when used in conjunction with APWs). 2. All l up to lq+1 if lq<2; otherwise all l up to lq. 3. All l up to min(lq+1, 3). For elements lighter than Kr, restrict l≤2. For elements heavier than Kr, include l to 3. 4. (Standard basis) Same as LMTO=3, but restrict l≤2 for elements lighter than Ar. 5. (Large basis) All l up to max(lq+1,3) except for H, He, Li, B (use l=spd). Use the MTO token (see below) in combination with this one. MTO controls whether the LMTO basis is 1-κ or 2-κ, meaning whether 1 or 2 envelope functions are allowed per l channel. 
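As a rough illustration of how several of the HAM tokens above fit together in an input file, a category entry might look like the following (the values are placeholders rather than recommendations, and the bracketed sub-token form simply follows the SIGP[…] and AUTOBAS[…] notation used in this documentation):

HAM   GMAX=8.0  FORCES=1  ELIND=-0.7
      SIGP[MODE=4]
      AUTOBAS[LMTO=5 MTO=4 PNU=1 LOC=1]

The AUTOBAS switches MTO, PNU and LOC used here are described in the entries that follow.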
AUTOBAS_MTOilmfaY0Autogenerate parameters that control which LMTO basis functions are to be included, and their shape. Tokens RSMH,EH (and possibly RSMH2,EH2) determine the shape of the MTO basis. lmfa will determine a reasonable set of RSMH,EH automatically (and RSMH2,EH2 for a 2-κ basis), fitting to radial wave functions of the free atom. Note: lmfa can generate parameters and write them to file basp0.ext. lmf can read parameters from basp.ext. You must manually create basp.ext, e.g. by copying basp0.ext into basp.ext. You can tailor basp.ext with a text editor. Here are the following choices for MTO: 0: do not autogenerate basis parameters. 1 or 3 : 1-κ parameters with Z-dependent LMX. 2 or 4: 2-κ parameters with Z-dependent LMX. For lmfa 1 and 3 are equivalent, as are 2 and 4. AUTOBAS_MTOilmf, lmfgwdY0Read parameters RSMH,EH,RSMH2,EH2 that control which LMTO basis functions enter the basis. Once initial values have been generated you can tune these parameters automatically for the solid, using lmf with the –optbas switch; see here (or for a simple input file guide, here) and here. The –optbas step is not essential, especially for large basis sets, but it is a way to improve on the basis without increasing the size. Here are the following choices for MTO: 0 Parameters not read from basp.ext; they are specified in the input file ctrl.ext. 1 or 3: 1-κ parameters may be read from the basis file basp.ext, if they exist. 2 or 4: 2-κ parameters may be read from the basis file basp.ext, if they exist. 1 or 2: Parameters read from ctrl.ext take precedence over basp.ext. 3 or 4: Parameters read from basp.ext take precedence over those read from ctrl.ext. AUTOBAS_PNUilmfaY0Autoset boundary condition for augmentation part of basis, through specification of the continuous principal quantum number . 0 do not make P 1 Find P for l < SPEC_LMXA from free atom wave function; save in basp0.ext. AUTOBAS_PNUilmf, lmfgwdY0Autoset boundary condition for augmentation part of basis, through specification of the continuous principal quantum number . 0 do not attempt to read P from basp.ext. 1 Read P from basp.ext, for species which P is supplied. AUTOBAS_LOCilmfa, lmf, lmfgwdY0Autoset local orbital parameters PZ, which determine which deep or high-lying states are to be included as local orbitals. Used by lmfa to control whether parameters PZ are to be sought: 0: do not autogenerate PZ. 1 or 2: autogenerate PZ. Default: 0 Used by lmf and lmfgwd to control how PZ is read: 1 or 2: read parameters PZ. 1: Nonzero values from ctrl file take precedence over basis file input. Default: 1 AUTOBAS_RSMMXrlmfaY2/3sets an upper bound to LMTO smoothing radius RSMH, when autogenerating a basis set. Value is a multiple of the MT radius. AUTOBAS_EHMXrlmfaYsets an upper bound to LMTO smoothed Hankel energy EH, when autogenerating a basis set. Default depends on whether AUTOBAS_GW is set. AUTOBAS_ELOCrlmfaY-2 RyThe first of two criteria to decide which orbitals should be included in the valence as local orbitals. If the energy of the free atom wave function exceeds (is more shallow than) ELOC, the orbital is included as a local orbital. AUTOBAS_QLOCrlmfaY0.005The second of two criteria to decide which orbitals should be included in the valence as local orbitals. If the fraction of the free atom wave function’s charge outside the augmentation radius exceeds QLOC, the orbital is included as a local orbital. AUTOBAS_PFLOATi1 i2lmf, lmfgwdy1 1Governs how the Pnu are set and floated in the course of a self-consistency cycle. 
The 1st argument controls default starting values of P and lower bounds to P when it is floated. 0: Use pre-2002 (i.e. version 6) lower bound for P (lmf only). 1: Use defaults and float lower bound designed for LDA. 2: Use defaults and float lower bound designed for GW. The 2nd argument controls how the band center of gravity (CG) is determined — used when floating P. 0: band CG is found by a traditional method. 1: band CG is found from the true energy moment of the density. See Table of Contents Category GF is intended for parameters specific to the Green’s function code lmgf. and is read by that code. See, for example the Introductory Tutorial for lmgf. MODEiASAY00: do nothing. 1: self-consistent cycle. 10: Transverse magnetic exchange interactions J(q). 11: Read J(q) from disk and analyze results. 14: Longitudinal exchange interactions. 20: Transverse χ+− from ASA Green’s function. 21: Read χ from disk and analyze results. 20: Transverse χ++, χ−− from ASA Green’s function Caution: Modes 14 and higher have not been maintained. GFOPTScASAY ASCII string with switches governing execution of lmgf or lmpg. Use  ’;’  to separate the switches, e.g. GFOPTS=p3;padtol=1e-7 . Switches in GFOPTS are documented on the Green’s function web page. DLMiALLY0Disordered local moments for CPA. Governs self-consistency for both chemical CPA and magnetic CPA. 12 : normal CPA/DLM calculation: charge and coherent potential Ω both iterated to self-consistency. 32 : Ω alone is iterated to self-consistency. BXY1ALLYF(DLM) Setting this switch to T generates a site-dependent constraining field to properly align magnetic moments. In this context constraining field is applied by scaling the LDA exchange-correlation field. The scaling factor is [1+bxc(ib)^2]1/2. A table of bxc is kept for each site in the first column of file shfac.ext. TEMPrALLY0(DLM) spin temperature. See Table of Contents Category GW holds parameters specific to GW calculations, particularly for the GW driver lmfgwd. Most of these tokens supply values for tags in the GWinput template when lmfgwd generates it (--jobgw -1). CODEilmfgwdY2This token tells what GW code you are creating input files for. lmfgwd serves as a driver to several GW codes. 0. First GW version v033a5 (code still works but it is no longer maintained) . 2. Current version of GW codes . 1. Driver for the Julich spex code (not fully debugged or maintained). NKABC1 to 3 i Y Defines the k-mesh for GW. This token serves the same function for GW as BZ_NKABC does for the LDA codes, and the input format is the same. When generating a GWinput template, lmfgwd passes the contents of NKABC to the n1n2n3 tag. Note: Shell scripts lmgw and lmgwsc used for the GW codes may also use this token. When invoked with switches –getsigp or –getnk, they will modify the n1n2n3 in GWinput. The data they use is taken from GW_NKABC. MKSIGilmfgwdY3(self-consistent calculations only). Controls the form of (the QSGW approximation to the dynamical self-energy , where refers to a matrix element of Σ between eigenstates n and n′, at energy E relative to EF. When generating a GWinput template, lmfgwd passes MKSIG to the iSigMode tag. Values of this tag have the following meanings. 0. do not make Σ0 1. Σ0 = Σnn (EF) if nn’, and Σnn(En) if n=n’: mode B, Eq.(11) in Phys. Rev. B76, 165106 (2007) 3. Σ0 = 1/2[Σnn (En) + Σnn (En)]: mode A, Eq.(10) in Phys. Rev. B76, 165106 (2007) 5. 
“eigenvalue only” self-consistency Σ0 = δnnΣnn‘ (En) GCUTBrlmfgwdY2.7G-vector cutoff for basis envelope functions as used in the GW package (Ry1/2). When generating a GWinput template, lmfgwd passes GCUTB to the QpGcut_psi tag in GWinput.. GCUTXrlmfgwdY2.2G-vector cutoff for interstitial part of two-particle objects such as the screened coulomb interaction (Ry1/2). When generating a GWinput template, lmfgwd passes GCUTX to the QpGcut_cou tag. ECUTSrlmfgwdY2.5 Ry(for self-consistent calculations only). Maximum energy for which to calculate the described in MKSIG above. This energy should be larger than HAM_SIGP_EMAX which is used to interpolate . When generating a GWinput template, lmfgwd passes ECUTS+1/2 to the emax_sigm tag in the GWinput file. NIMEilmfgwdY6Number of frequencies on the imaginary integration axis when making the correlation part of Σ. When generating a GWinput template, lmfgwd passes NIME to the new tag. DELRErlmfgwdY0.01, 0.1Frequency mesh parameters DW and OMG defining the real axis mesh in the calculation of Im . The ith mesh point is given by: ωi=DW×(i−1) + [DW×(i−1)]2/OMG/2 Points are approximately uniformly spaced, separated by DW, up to frequency OMG, around which point the spacing begins to increase linearly with frequency. When generating a GWinput template, lmfgwd passes DELRE(1) to the dw tag and DELRE(2) to the omg_c tag. Note: the similarity to OPTICS_DW used by the optics part of lmf and lm. DELTArlmfgwdY-1e-4δ-function broadening for calculating χ0, in atomic units. Tetrahedron integration is used if DELTA<0. When generating a GWinput template, lmfgwd passes DELTA to the delta tag. GSMEARrlmfgwdY.003Broadening width for smearing pole in the Green’s function when calculating Σ. This parameter is sometimes important in metals, e.g. Fe. When generating a GWinput template, lmfgwd passes GSMEAR to the esmr tag. The tag is described in this manual PBTOLrlmfgwdY.001Overlap criterion for product basis functions inside augmentation spheres. The overlap matrix of the basis of product functions generated and diagonalized for each l. Functions with overlaps less than PBTOL are removed from the product basis. When generating a GWinput template, lmfgwd passes PBTOL to the second line after the start of the PRODUCT_BASIS section. See Table of Contents This category is optional, and merely prints to the standard output whatever text is in the category. For example: HEADER This line and the following one are printed to standard output whenever a program is run. HEADER [ In this form only two lines reside within the category delimiters,] and only two lines are printed. See Table of Contents This optional category controls what kind of information, and how much, is written to the standard output file. SHOW1allYFEcho lines as they are read from input file and parsed by the proprocessor. Command-line argument --show provides the same functionality. HELP1allYFShow what input would be sought, without attempting to read data. Command-line argument --input provides the same functionality. VERBOS1 to 3allY30Sets the verbosity. 20 is terse, 30 slightly terse, 40 slightly verbose, 50 verbose, and so on. If more than one number is given, later numbers control verbosity in subsections of the code, notably the parts dealing with augmentation spheres. IACTIV1allYFTurn on interactive mode. Programs will prompt you with queries, in various contexts. TIM1 or 2allY0, 0Prints out CPU usage of blocks of code in a tree format. First value sets tree depth. 
Second value, if present, prints timings on the fly. May also be controlled from the command-line: --time=#1[,#2] See Table of Contents The ITER category contains parameters that control the requirements to reach self-consistency. It applies to all programs that iterate to self-consistency: lm, lmf, lmmc, lmgf, lmpg, tbe, and lmfa. A detailed discussion can be found at the end of this document. NITiallY1Maximum number of iterations in the self-consistency cycle. MIXcallY A string of mixing rules for mixing input and output densities in the self-consistency cycle. The syntax is given below. See here for a detailed description of the mixing. CONVrallY1e-5Maximum energy change from the prior iteration for self-consistency to be reached. See annotated lmf output. CONVCrallY3e-5Maximum RMS difference in the density nout−nin. See below. UMIXrallY1Mixing parameter for the density matrix; used with LDA+U TOLUrallY0Tolerance for the density matrix; used with LDA+U NITUiallY0Maximum number of LDA+U iterations of the density matrix AMIXcASAY Mixing rules when extra degrees of freedom, e.g. Euler angles, are mixed independently. Uses the same syntax as MIX. NRMIXi1 i2ASA, lmfaY80, 2Used when self-consistency is needed inside an augmentation sphere. This occurs when the density is determined from the moments Q0,Q1,Q2 in the ASA; or in the free atom code, just Q0. i1: max number of iterations i2: number of prior iterations for Anderson mixing2 of the sphere density Note: You will probably never need to use this token. See Table of Contents Category OPTICS holds optics functions, available with the ASA extension packages. It is read by lm and lmf. MODEiOPTICSY00: make no optics calculations 1: generate linear ε 20: generate second harmonic ε  Example: optics/test/test.optics sic The following cases (MODE<0) generate joint or single density-of-states. Note: MODE<0 works only with LTET=3, described below. −1: generate joint density-of-states  (ASA) optics/test/test.optics --all 4  (FP) fp/test/test.fp zbgan −2: generate joint density-of-states, spin 2  Example: optics/test/test.optics fe 6 −3: generate up-down joint density-of-states −4: generate down-up joint density-of-states −5: generate spin-up single density-of-states  Example: optics/test/test.optics --all 7 −6: generate spin-down single density-of-states LTETiOPTICSY00: Integration by Methfessel-Paxton sampling 1: standard tetrahedron integration 3: enhanced tetrahedron integration Note: In the metallic case, states near the Fermi level must be treated with partial occupancy. LTET=3 is the only scheme that handles this properly. It was adapted from the GW package and has extensions, e.g. the ability to handle non-vertical transitions. WINDOWr1 r2OPTICSN0 1Energy (frequency) window over which to calculate Im[ε(ω)]. Im ε is calculated on a mesh of points ωi. The mesh spacing is specified by NPTS or DW, below. NPTSiOPTICSN501Number of mesh points in the energy (frequency) window. Together with WINDOW, NPTS specifies the frequency mesh as: ωi = WINDOW(1) + DW×(i−1), where DW = (WINDOW(2)−WINDOW(1))/(NPTS−1). Note: you may alternatively specify DW below. DWr1 [r2]OPTICSY Frequency mesh spacing DW[,OMG]. You can supply either one argument or two. If one argument (DW) is supplied, the mesh will consist of evenly spaced points separated by DW. If a second argument (OMG) is supplied, points are spaced quadratically as: ωi = WINDOW(1) + DW×(i−1) + [DW×(i−1)]2/OMG/2. Spacing is approximately uniform up to frequency OMG, beyond which it increases linearly. Note: The quadratic spacing can be used only with LTET=3.
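To make the mesh tokens above concrete, a minimal OPTICS category for a linear ε(ω) calculation might read as follows (illustrative values only, using the defaults for WINDOW and NPTS):

OPTICS  MODE=1  LTET=3  WINDOW=0 1  NPTS=501

With these settings Im ε is tabulated on 501 evenly spaced points spanning the window from 0 to 1; DW could be supplied in place of NPTS, as described above.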
FILBNDi1 [i2]OPTICSY0 no. electronsi1[,i2] occupied energy bands from which to calculate ε using first order perturbation theory, without local fields. i1 = lowest occupied band i2 = highest occupied band (defaults to no. electrons) EMPBNDi1 [i2]OPTICSY0 no. bandsi1[,i2] empty energy bands from which to calculate ε using first order perturbation theory, without local fields. i1 = lowest unoccupied band i2 = highest unoccupied band (defaults to no. bands) PARTiOPTICSY0Resolve ε or joint DOS into band-to-band contributions, or by k. Result is output into file popt.ext. 0. No decomposition 1. Resolve ε or DOS into individual (occ,unocc) contributions  Example: optics/test/test.optics ogan 5 2. Resolve ε or DOS by k  Example: optics/test/test.optics --all 6 3. Both 1 and 2 Add 10 to write popt as a binary file. CHI2[..] lmY Tag containing parameters for second harmonic generation. Not calculated unless tag is parsed.  Example: optics/test/test.optics sic CHI2_NCHI2ilmN0Number of direction vectors for which to calculate χ2, i.e. the nonlinear susceptibility tensor. CHI2_AXESi1, i2, i3lmN Direction vectors for each of the NCHI2 sets ESCISSrOPTICSY0Scissors operator (constant energy added to unoccupied levels, in Ry) ECUTrOPTICSY0.2Energy safety margin for determining (occ,unocc) window. lmf will attempt to reduce the number of (occ,unocc) pairs by restricting, for each k, transitions that contribute to the response, i.e. to those inside the optics WINDOW. The window is padded by ECUT to include states outside, but near the edge of the window. States outside window may nevertheless make contribution, e.g. because they can be part of a tetrahedron that does contribute. If you do not want lmf to restrict the range, use ECUT<0. NMPiOPTICSYBZ_NIf present, supersedes BZ_N for the optics energy integration WrOPTICSYBZ_WIf present, supersedes BZ_W for energy integration entering into the dielectric function MEFACiOPTICSY0Contribution from nonlocal self-energy to velocity operator. 1. include 2. approximate correction to using ratio of QP to LDA eigenvalues. (Approximation is exact if LDA and QP eigenvalues are the same). FFMTiOPTICSY0Governs formatting of optics file 0. fortran F format 1. fortran E format IQi1, i2, i3OPTICSY0q vector for JDOS(q), in multiples of qlat/BZ_NKABC ESMRrOPTICSY0.05Energy smearing width for determining (occ,unocc) window. States are excluded for which occ<EF-ESMR or unocc>EF+ESMR. ALLTRANSlOPTICSYFDo not limit allowed transitions to occ<EF-ESMR and unocc>EF+ESMR FERMIrOPTICSYNULLIf not NULL, supersede calculated Fermi level with given value when calculating dielectric function. IMREFr1 r2OPTICSYNULLIf not NULL, quasi-Fermi levels for occ and unocc states (nonequilibrium optics) KTrOPTICSY-Temperature for Fermi functions (Ry). Used when NMP<0. See Table of Contents Portions of OPTIONS are read by these codes: HF1lm, lmfYFIf T, use the Harris-Foulkes functional only; do not evaluate output density. SHARM1ASA, lmf, lmfgwdYFIf T, use true spherical harmonics, rather than real harmonics. FRZlallYF(ASA) If T, freezes core wave functions. (FP) If T, freezes the potential used to make augmented partial waves, so that the basis set does not change with potential. SAVVEC1lmYFSave eigenvectors on disk. (This may be enabled automatically in some circumstances) Q=strncallY  Q=SHOW,  Q=ATOM,  Q=HAM,  Q=POT,  Q=BAND,  Q=DOS,  Q=RHO  make the program stop at selected points without completing a full iteration. 
SCRiASAY0Is connected with the generation or use of the q->0 ASA dielectric response function. It is useful in cases when there is difficulty in making the density self-consistent. See here for documentation. 0. Do not screen qout−qin. 1. Make the ASA response function P0. 2. Use P0 to screen qout−qin and the change in ves. 3. 1+2 (lmgf only). 4. Screen qout−qin from a model P0. 5. Illegal input. 6. Use P0 to screen the change in ves only. P0 and U should be updated every iteration, but this is expensive and not worth the cost. However, you can: Add 10k to recompute intra-site contribution U every kth iteration, 0<k≤9. Add 100k to recompute P0 every kth iteration (lmgf only).  Examples: testing/test.scr and gf/test/ mnpt 6 ASA[…]rASAN Parameters associated with ASA-specific input. ASA_ADNF1ASAYFEnables automatic downfolding of orbitals. ASA_NSPH1ASAY0Set to 1 to generate l>0 contributions (from neighboring sites) to l=0 electrostatic potential ASA_TWOCiASAY0Set to 1 to use the two-center approximation ASA hamiltonian ASA_GAMMAiASAY0Set to 1 to rotate to the (orthogonal) gamma representation. This should have no effect on the eigenvalues for the usual three-center hamiltonian, but converts the two-center hamiltonian from first order to second order. Set to 2 to rotate to the spin-averaged gamma representation. The lm code does not allow downfolding with GAMMA≠0. ASA_CCOR1lmYTIf F, suppresses the combined correction. By default it is enabled. Note: NB: if any orbitals are downfolded, CCOR is automatically enabled. ASA_NEWREP1lmYFSet to 1 to rotate structure constants to a user-specified representation. It requires special compilation to be effective ASA_NOHYB1lmYFSet to 1 to turn off hybridization ASA_MTCOR1lmYFSet to T to turn on Ewald MT correction ASA_QMTrNCY0Override standard background charge for Ewald MT correction Input only meaningful if MTCOR=T RMINESrlmchkN1Minimum augmentation radius when finding new empty sites (--getwsr) RMAXESrlmchkN2Maximum augmentation radius when finding new empty sites (--getwsr) NESABCi,i,ilmchkN100Number of mesh divisions when searching for empty spheres (--getwsr) See Table of Contents Category PGF concerns calculations with the layer Green’s function program lmpg. It is read by lmpg and lmstr. MODEiASAY 0: do nothing. 1: diagonal layer GF.  Examples: pgf/test/test.pgf -all 5 and pgf/test/test.pgf -all 6 2: left- and right-bulk GF. 3: find k(E) for left bulk.  Example: pgf/test/test.pgf 2 4: find k(E) for right bulk. 5: Calculate ballistic current.  Example: pgf/test/test.pgf femgo SPARSEiASAY00: Calculate G layer by layer using Dyson’s equation  Example: pgf/test/test.pgf -all 5 1: Calculate G using LU decomposition  Example: pgf/test/test.pgf -all 6 PLATLrASAN The third lattice vector of left bulk region PLATRrASAN The third lattice vector of right bulk region GFOPTScASAY ASCII string with switches governing execution of lmgf or lmpg. Use  ‘;’ to separate the switches. 
Available switches: p1 First order of potential function p3 Third order of potential function pz Exact potential function (some problems; not recommended) Use only one of the above; if none are used, the code makes second order potential functions idos integrated DOS (by principal layer in the lmpg case) noidos suppress calculation of integrated DOS pdos accumulate partial DOS emom accumulate output moments; use noemom to suppress noemom suppresss accumulation of output moments sdmat make site density-matrix dmat make density-matrix frzvc do not update potential shift needed to obtain charge neutrality ‘padtol** Tolerance in Pade correction to charge. If tolerance exceeded, lmgf will repeat the band pass with an updated Fermi level omgtol (CPA) tolerance criterion for convergence in coherent potential omgmix (CPA) linear mixing parameter for iterating convergence in coherent potential nitmax (CPA) maximum number of iterations to iterate for coherent potential lotf (CPA) dz (CPA) See Table of Contents Category SITE holds site information. As in the SPEC category, tokens must read for each site entry; a similar restriction applies to the order of tokens. Token ATOM= must be the first token for each site, and all tokens defining parameters for that site must occur before a subsequent ATOM=. FILEcallY Provides a mechanism to read site data from a separate file. File subs/iosite.f documents the syntax of the site file structure. The reccommended (standard) format has the following syntax: The first line should contain a ‘%’ in the first column, and a `version’ token vn=#. Structural data (see category STRUC documentation) may also be included in this line. Each subsequent line supplies input for one site. In the simplest format, a line would have the following: spid x y z where spid is the species identifier (same information would otherwise be specified by token ATOM= below) and x y z are the site positions. Examples: fp/test/test.fp er and fp/test/test.fp tio2 Bug: when you read site data from an alternate file, the reader doesn’t compute the reference energy. Kotani format (documented here but no longer maintained). In this alternative format the first four lines always specify data read in the STRUC category; see FILE= in STRUC. Then follow lines, one line for each site ib iclass spid x y z The first number is merely a basis index and should increment 1,2,3,4,… in successive lines. The second class index is ignored by these programs. The remaining columns are the species identifier for the site positions. If SITE_FILE is missing, the following are read from the ctrl file: ATOMcallN Identifies the species (by label) to which this atom belongs. It is a fatal error for the species not to have been defined. ATOM_POSr1 r2 r3allN The basis vector (3 elements), in dimensionless Cartesian coordinates. As with the primitive lattice translation vectors, the true vectors (in atomic units) are scaled from these by ALAT in category STRUC. NB: XPOS and POS are alternative forms of input. One or the other is required. ATMOM_XPOSr1 r2 r3allN Atom coordinates, as (fractional) multiples of the lattice vectors. ATOM_DPOSr1 r2 r3allY0 0 0Shift in atom coordinates to POS ATOM_RELAXi1 i2 i3allY1 1 1Relax site positions (lattice dynamics or molecular statics) or Euler angles (spin dynamics, ASA). Three numbers correspond to , , Cartesian components. 0 constrains component not to move; 1 allows it to move. ATOM_RMAXSrFPY Site-dependent radial cutoff for structure constants, in a.u. 
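Returning to the FILE token near the top of this category, a minimal site file in the standard format sketched there might look as follows (the lattice and positions are illustrative, and the structural token names on the header line are assumed here to mirror those of the STRUC category):

% vn=3.0 alat=10.26 plat= 0 0.5 0.5  0.5 0 0.5  0.5 0.5 0 nbas=2
 Si 0 0 0
 Si 0.25 0.25 0.25

Each line after the header follows the  spid x y z  pattern described above.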
ATOM_ROTcASAY Rotation of spin quantization axis at this site ATOM_PLilmpgY0(lmpg) Assign principal layer number to this site See Table of Contents Category SPEC contains species-specific information. Because data must be read for each species, tokens are repeated (once for each species). For this reason, there is some restriction as to the order of tokens. Data for a specific species (Z=, R=, R/W=, LMX=, IDXDN= and the like described below) begins with a token ATOM=;  input of tokens specific to that species must precede the next occurence of ATOM=. The following tokens apply to the automatic sphere resizer: SCLWSRrALLY0SCLWSR>0 turns on the automatic sphere resizer. It defaults to 0, which turns off the resizer. The 10’s digit tells the resizer how to deal with resizing empty spheres; see this page. OMAX1r1 r2 r3ALLY0.16, 0.18, 0.2Constrains maximum allowed values of sphere overlaps. This overlap is defined as , where and ae the two sphere radii, and is the bond length. See this page. You may input up to three numbers, which correspond to atom-atom, atom-empty-sphere, and empty-sphere-empty-sphere overlaps respectively. OMAX2r1 r2 r3ALLY0.4, 0.45, 0.5Constrains maximum allowed values of sphere overlaps defined as ; see this page. Both constraints are applied. WSRMAXrALLY0Imposes an upper limit to any one sphere radius The following tokens are input for each species. Data sandwiched between successive occurences of ATOM apply to one species. ATOMcallN A character string (8 characters or fewer) that labels this species. This label is used, e.g. by the SITE category to associate a species with an atom at a given site. The species ID also names a disk file with information about that atom (potential parameters, moments, potential and some sundry other information). More precisely, species are split into classes, the program differentiates class names by appending integers to the species label. The first class associated with the species has the species label; subsequent ones have integers appended.  Example: testing/test.ovlp 3 ZrallN Nuclear charge. Normally an integer, but Z can be a fractional number. A fractional number implies a virtual crystal approximation to an alloy with some Z intermediate between the two integers sandwiching it. RrallN The augmentation sphere radius, in atomic units. This is a required input for most programs: choose one of R=, R/W= or R/A=. Read descriptions of the R/W AND R/A below for further remarks; also see this page for a more complete discussion on the choice of sphere radii. lmchk can find sphere radii automatically. Invoke lmchk with -\–getwsr. You can also rescale as-given radii to meet constraints with the SCLWSR token. R/WrallN R/W= ratio of the augmentation sphere radius to the average Wigner Seitz radius W. W is the radius of a sphere such that (4πW3<\sup>/3) = V/N, where V/N is the volume per atom. Thus if all radii are equal with R/W=1, the sum of sphere volumes would fill space, as is usual in the ASA. You must choose the radii so that the sum of sphere volumes (4π/3ΣiRi3) equals the unit cell volume V; otherwise results may become unreliable. The space-filling requirement means sphere may overlap quite a lot, particularly in open systems. If sphere overlaps get too large, (>20% or so) accuracy becomes an issue. In such a case you should add “empty spheres” to fill space. Use lmchk to print out sphere overlaps. lmchk also has an automatic empty spheres finder, which you invoke with the –findes switch; see here for a discussion. 
Example: testing/test.ovlp 3 FP results are much less sensitive to the choice of sphere radii. Strictly, the spheres should not overlap, but because of lmf’s unique augmentation scheme, overlaps of up to 10% cause negligibly small errors as a rule. (This does not apply to GW calculations!) Even so, it is not advisable to let the overlaps get too large. As a general rule the L-cutoff should increase as the sphere radius increases. Also it has been found in practice that self-consistency is harder to accomplish when spheres overlap significantly. R/ArallN R/A = ratio of the aumentation sphere radius to the lattice constant ArallY0.025Radial mesh point spacing parameter. All programs dealing with augmentation spheres represent the density on a shifted logarithmic radial mesh. The ith point on the mesh is . b is determined from the number of radial mesh points specified by NR. NRiallYDepends on other inputNumber of radial mesh points LMXiallYNL-1Basis l-cutoff inside the sphere. If not specified, it defaults to NL−1 RSMHr,r,…lmf, lmfgwdY0Smoothing radii defining basis (a.u.), one radius for each l. RSMH and EH together define the shape of basis function in lmf. To optimize, try running lmf with --optbas. EHr,r,…lmf, lmfgwdY Hankel energies for basis (Ry), one energy for each l. RSMH and EH together define the shape of basis function in lmf. RSMH2r,r,…lmf, lmfgwdY0Basis smoothing radii, second group EH2r,r,…lmf, lmfgwdY Basis Hankel function energies, second group LMXAiFPYNL - 1Angular momentum l-cutoff for projection of wave functions tails centered at other sites in this sphere. Must be at least the basis l-cutoff (specified by LMX=). IDXDNiASAY1A set of integers, one for each l-channel marking which orbitals should be downfolded. 0 use automatic downfolding in this channel. 1 leaves the orbitals in the basis. 2 folds down about the inverse potential function at 3 folds down about the screening constant alpha. In the FP case, 1 includes the orbital in the basis; >1 removes it KMXAilmf, lmfgwdY3Polynomial cutoff for projection of wave functions in sphere. Smoothed Hankels are expanded in polynomials around other sites instead of Bessel functions as in the case of normal Hankels. RSMArlmf, lmfgwdYR * 0.4Smoothing radius for projection of smoothed Hankel tails onto augmentation spheres. These functions are expanded in polynomials by integrating with Gaussians of radius RSMA at that site. RSMA very small reduces the polynomial expansion to a Taylor series expansion about the origin. For large KMXA the choice is irrelevant, but RSMA is best chosen that maximizes the convergence of smooth Hankel functions with KMXA. LMXLilmf, lmfgwdYNL - 1Angular momentum l-cutoff for explicit representation of local charge on a radial mesh. RSMGrlmf, lmfgwdYR/4Smoothing radius for Gaussians added to sphere densities to correct multipole moments needed for electrostatics. Value should be as large as possible but small enough that the Gaussian doesn’t spill out significantly beyond the Radius of the Muffin-Tin (RMT). LFOCAiFPY1Prescribes how the core density is treated. 0 confines core to within RMT. Usually the least accurate. 1 treats the core as frozen but lets it spill into the interstitial 2 same as 1, but interstitial contribution to vxc treated perturbatively. RFOCArFPYR × 0.4Smoothing radius fitting tails of core density. A large radius produces smoother interstitial charge, but less accurate fit. RSMFArFPYR/2Smoothing radius for tails of free-atom charge density. 
Irrelevant except first iteration only (non-self-consistent calculations using Harris functional). A large radius produces smoother interstitial charge, but somewhat less accurate fit. RS3rFPY1Minimum allowed smoothing radius for local orbital HCRrlmY Hard sphere radii for structure constants. If token is not parsed, attempt to read HCR/R below HCR/RrlmY0.7Hard sphere radii for structure constants, in units of R ALPHArASAY Screening parameters for structure constants DVrASAY0Artificial constant potential shift added to spheres belonging to this species MIX1ASAYFSet to suppress self-consistency of classes in this species IDMODiallY00 : floats Pl aka continuous principal quantum number to band center of gravity 1 : freezes 2 : freezes linearization energy . CSTRMX1allYFSet to T to exclude this species when automatically resizing sphere radii GRP2iASAY0Species with a common nonzero value of GRP2 are symmetrized, independent of symmetry operations. The sign of GRP2 is used as a switch, so species with negative GRP2 are symmetrized but with spins flipped (NSPIN=2) FRZWF1FPYFSet to T to freeze augmentation wave functions for this species IDUiallY0LDA+U mode: 0 No LDA+U 1 LDA+U with Around Mean Field limit double counting 2 LDA+U with Fully Localized Limit double counting 3 LDA+U with mixed double counting. IDU is a vector, with one number for each l. UHrallY0Hubbard U for LDA+U (Ry). UH is a vector, with one number for each l. JHrallY0Exchange parameter J for LDA+U (Ry). JH is a vector, with one number for each l. EREF=rallY0Reference energy subtracted from total energy AMASS=rFPY Nuclear mass in a.u. (for dynamics) C-HOLEclmf, lmY Channel for core hole. You can force partial core occupation. Syntax consists of two characters, the principal quantum number and the second one of ‘s’, ‘p’, ‘d’, ‘f’ for the l quantum number, e.g. ‘2s’ See Partially occupied core holes for description and examples. Default: nothing C-HQr[,r]allY-1 0First number specifies the number of electrons to remove from the l channel specified by C-HOLE=. Second (optional) number specifies the hole magnetic moment. See Partially occupied core holes for description and examples. Pr,r,…allY Starting values for Pl, aka “continuous principal quantum number”, one for each l=0..LMXA Default: taken from an internal table. PZr,r,…FPY0starting values for local orbital’s potential functions, one for each of l=0..LMX. Setting PZ=0 for any l means that no local orbital is specified for this l. Each integer part of PZ must be either one less than P (semicore state) or one greater (high-lying state). Qr,r,…allY Charges for each l-channel making up free-atom density Default: taken from an internal table. MMOMr,r,…allY0Magnetic moments for each l-channel making up free-atom density Relevant only for the spin-polarized case. See Table of Contents Category STR contains information connected with real-space structure constants, used by the ASA programs. It is read by lmstr, lmxbs, lmchk, and tbe. RMAXSrallY Radial cutoff for strux, in a.u. If token is not parsed, attempt to read RMAX, below RMAXrallY0The maximum sphere radius (in units of the average WSR) over which neighbors will be included in the generation of structure constants. This takes a default value and is not required input. It is an interesting exercise to see how much the structure constants and eigenvalues change when this radius is increased. 
NEIGHBiFPY30Minimum number of neighbors in cluster ENV_MODEiallY0Type of envelope functions: 0 2nd generation 1 SSSW (3rd generation) 3 SSSW and val-lap basis ENV_NELilm, lmstrY (NMTO only) Number of NMTO energies ENV_ELrlm, lmstrN0SSSW of NMTO energies, in a.u. DELRXrASAY3Range of screened function beyond last site in cluster TOLGrFPY1e-6Tolerance in l=0 gaussians, which determines their range RVL/RrallY0.7Radial cutoff for val-lap basis (this is experimental) VLFUNiallY0Functions for val-lap basis (this is experimental) 0 G0 + G1 1 G0 + Hsm 2 G0 + Hsm-dot MXNBRiASAY0Make lmstr allocate enough memory in dimensioning arrays for MXNBR neighbors in the neighbor table. This is rarely needed. SHOW1lmstrYFShow strux after generating them EQUIV1lmstrYFIf true, try to find equivalent neighbor tables, to reduce the computational effort in generating strux. Not generally recommended LMAXWilmstrY-1l-cutoff for (optional) Watson sphere, used to help localize strux DELRWrlmstrY0.1Range extending beyond cluster radius for Watson sphere IINV_NIT=ilmstrY0Number of iterations IINV_NCUTilmstrY0Number of sites for inner block IINV_TOLrlmstrY0Tolerance in errors *IINV parameters govern iterative solutions to screened strux See Table of Contents Category START is specific to the ASA. It controls whether the code starts with moments P,Q or potential parameters; also the moments P,Q may be input in this category. It is read by lm, lmgf, lmpg, and tbe. BEGMOMiASAY1When true, causes program lm to begin with moments from which potential parameters are generated. If false, the potential parameters are used and the program proceeds directly to the band calculation. FREE1ASAYFIs intended to facilitate a self-consistent free-atom calculation. When FREE is true, the program uses rmax=30 for the sphere radius rather than whatever rmax is passed to it; the boundary conditions at rmax are taken to be value=slope=0 (rmax=30 should be large enough that these boundary conditions are sufficiently close to that of a free atom.); subroutine atscpp does not calculate potential parameters or save anything to disk; and lm terminates after all the atoms have been calculated. CNTROL1ASAYFWhen CONTRL=T, the parser attempts to read the “continuously variable principal quantum numbers” P and moments Q0,Q1,Q2 for each l channel; see P,Q below. ATOMcASAY Class label. P,Q (and possibly other data) is given by class. Tokens following a class label and preceding the next class label belong to that class. ATOM_P= and ATOM_QcASAY Read “continuously variable principal quantum numbers” for this class (P=…), or energy moments Q0,Q1,Q2 (Q=…). P consists of one number per l channel, Q of three numbers (Q0,Q1,Q2) for each l. Note In spin polarized calculations, a second set of parameters must follow the first, and the moments should all be half of what they are in non-spin polarized calculations. In this sample input file for Si, P,Q is given as: ATOM=SI P=3.5 3.5 3.5 Q=1 0 0 2 0 0 0 0 0 ATOM=ES P=1.5 2.5 3.5 Q=.5 0 0 .5 0 0 0 0 0 One electron is put in the Si s orbital, 2 in the p and none in the d, while 0.5 electrons are put in the s and p channels for the empty sphere. All first and second moments are zero. This rough guess produces a correspondingly rough potential. You do not have to supply information here for every class; but for classes you do, you must supply all of (P,Q0,Q1,Q2). Data read in START supersedes whatever may have been read from disk. Remarks below provide further information about how P,Q is read and printed. 
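Following the halving rule just stated, a spin-polarized analogue of the Si entry above would repeat the P's and supply halved moments for each spin channel, e.g. (a sketch):

ATOM=SI P=3.5 3.5 3.5  3.5 3.5 3.5
        Q=.5 0 0  1 0 0  0 0 0
          .5 0 0  1 0 0  0 0 0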
RDVES1ASAYFRead Ves(RMT) from the START category along with P,Q ATOM_ENUrASAY Linearization energies Sample START category The following is taken for the distribution’s test of La2 Cu O4. ATOM=LA P= 6.3055046 6.3000000 5.2308707 Q= 0.4770507 0.0000000 0.0610692 0.9882047 -0.3905638 0.2327244 2.0252993 0.0000000 0.1272500 ATOM=CU P= 4.6331214 4.3438861 3.8947075 Q= 0.4910799 0.0000000 0.0974578 0.6087341 0.0000000 0.1140513 9.4164169 0.0000000 0.2018023 ATOM=OX P= 2.8833091 2.8438183 3.1896353 Q= 1.6741779 0.0000000 0.0653497 4.2304006 0.0000000 0.1036699 0.0404676 0.0000000 0.0023966 ATOM=OX2 P= 2.8840328 2.8447249 3.1806967 Q= 1.6660490 0.0000000 0.0257208 4.1318836 0.0000000 0.0365535 0.0083512 0.0000000 0.0003608 Notes on parsing P and Q In the ASA, knowledge of P and Q is sufficient to completely determine the ASA density. Several ways are available to read these important quantities. The parser returns (P,Q) as a set according the following priorities: • The (P,Q) set is read from the disk, if supplied, (along possibly with other quantities such as potential parameters El, C, Δ, γ.) One file is created for each class that contains this data and other class-specific information. Some or all of the data may be missing from the disk files. Alternatively, you may read these data from a restart file rsta.ext, which if it exists contains data for all classes in one file. The program will not read this data by default; use --rs=1 to have it read from the rsta file. To write class data to rsta, use --rs=,1 ( must be 0 or 1) • If START_CONTRL=T, (P,Q) (and possibly other quantities) are read from START for classes you supply (usually all classes). Data read from this category supersedes any that might have been read from disk. If class data read from either of these sources, the input system returns it. For classes where none is available the parser will pick a default: • If data from a different class but in the same species is available, use it. • Otherwise use some preset default values for (P,Q). After a calculation finishes you can run lmctl to read (P,Q) from disk and format it in a form ready to insert into the START category,e.g. ATOM=SI P= 3.8303101 3.7074067 3.2545634 Q= 1.1694276 0.0000000 0.0297168 1.8803181 0.0000000 0.0489234 0.1742629 0.0000000 0.0063520 ATOM=ES P= 1.4162942 2.2521617 3.1546386 Q= 0.2873686 0.0000000 0.0129888 0.3485430 0.0000000 0.0165416 0.1400664 0.0000000 0.0055459 Thus all the information needed to generate a self-consistent ASA density can be embedded in the ctrl file. Because the P’s float to the band center-of gravity (i.e. center of gravity of the occupied states for a particular site and l channel) the corresponding first moments Q1 vanish. P’s are floated by default since it minimizes the linearization error. Caution: Sometimes it is necessary to override this default: If the band CG (of the occupied states) is far removed from the natural CG of a particular channel, you must restrict how far P can be shifted to the band CG. In some cases, allowing P to float completely will result in “ghost bands”. The high-lying Ga 4d state is a classic example. To restrict P to a fixed value, see SPEC_ATOM_IDMOD. In such cases, you want to pick the fractional part of P to be small, but not so low as to cause problems (about 0.5 for s orbitals and 0.15 for d orbitals; see here). See Table of Contents By default structural information is read through the ctrl file. But some of the essential data can be read in multiple ways, in particular from site file. 
Questaal has utilities that will import this information from other formats such as cif files. FILEcallY Read structural data (ALAT, NBAS, PLAT) from an independent site file. The file structure is documented here; see also this tutorial. Note: EXPRESS_file performs the same function as STRUC_FILE, and supersedes STRUC_FILE if it is present. NBASiallN† Number of sites in the primitive unit cell. NSPECiallY Number of atom species ALATrallN† A scaling, in atomic units, of the primitive lattice and basis vectors DALATrallY0Is added to ALAT. It can be useful in contexts where certain quantities that depend on ALAT are to be kept fixed (e.g. SPEC_ATOM_R/A) while ALAT varies. PLATr,r,…allN† (dimensionless) primitive translation vectors SLATr,r,…lmscellN Superlattice vectors NLiallY3Sets a global default value for l-cutoffs lcut = NL−1. NL is used for both basis set and augmentation cutoffs. SHEARr,r,r,rallY Enables shearing of the lattice in a volume-conserving manner. If SHEAR=#1,#2,#3,#4,  #1,#2,#3 = direction vector;  #4 = distortion amplitude. Example: SHEAR=0,0,1,0.01 distorts a lattice of initially cubic symmetry to tetragonal symmetry, with 0.01 shear. ROTcallY Rotates the lattice and basis vectors, and the symmetry group operations, by a unitary matrix. Example: ROT=z:pi/4,y:pi/3,z:pi/2 generates a rotation matrix corresponding to the Euler angles α=π/4, β=π/3, γ=π/2. See this document for the general syntax. Lattice and basis vectors, and point group operations (SYMGRP), are all rotated. DEFGRDr,r,…allY A 3×3 matrix defining a general linear transformation of the lattice vectors. STRAINr,r,…allY A sequence of six numbers defining a general distortion of the lattice vectors ALPHArallN Amount of Voigt strain †Information may be obtained from a site file See Table of Contents Category SYMGRP provides symmetry information; it helps in two ways. First, it provides the information needed to find which sites are equivalent, which makes for simpler and more accurate band calculations. Secondly, it reduces the number of k-points needed in Brillouin zone integrations. Normally you don’t need SYMGRP; the program is capable of finding its own symmetry operations. However, there are cases where it is useful or even necessary to specify them manually, for example when including spin-orbit coupling or noncollinear magnetism, where the symmetry group is not determined by the atomic positions alone. In such cases you need to supply extra information. You can use SYMGRP to explicitly declare a set of generators from which the entire group can be created. For example, the three operations R4X, MX and R3D are sufficient to generate all 48 elements of cubic symmetry. Unless conditions are set for noncollinear magnetism and/or SO coupling, inversion is assumed by default as a consequence of time-reversal symmetry. A tag describing a generator for a point group operation has the form O(nx,ny,nz), where O is one of M, I, Rj or E, for mirror, inversion, j-fold rotation and identity operation, respectively. nx,ny,nz is a triplet of indices specifying the axis of rotation. You may use X, Y, Z or D as shorthand for (1,0,0), (0,1,0), (0,0,1), and (1,1,1) respectively. You may also enter products of rotations, such as I*R4X. For example, the line SYMGRP R4X MX R3D specifies three generators (4-fold rotation around x, mirror in x, 3-fold rotation around (1,1,1)); generating all possible combinations of these rotations results in the 48 symmetry operations of the cube.
To suppress all symmetry operations, declare only the identity operation E in the SYMGRP category. In the ASA, owing to the spherical approximation to the potential, only the point group is required for self-consistency. But in general you must specify the full space group. The translation part is appended to the rotation part in one of the following forms:  :(x1,x2,x3)  or alternatively  ::(p1,p2,p3)  with the double ‘::’. The first defines the translation in Cartesian coordinates in units of ALAT, the second in crystal coordinates. These two lines (taken from testing/ctrl.cr3si6) provide equivalent specifications:
SYMGRP r6z:(0,0,0.4778973) r2(1/2,sqrt(3)/2,0)
SYMGRP r6z::(0,0,1/3) r2(1/2,sqrt(3)/2,0)
Keywords in the SYMGRP category
SYMGRP accepts, in addition to symmetry operations, the following keywords:
• find tells the program to determine its own symmetry operations. Thus: SYMGRP find amounts to the same as not including a SYMGRP category in the input at all. You can also specify a mix of generators you supply, and tell the program to find any others that might exist. For example: SYMGRP r4x find specifies that the 4-fold rotation be included, and  find  tells the program to look for any additional symops that might exist.
• AFM: For certain antiferromagnets, translation operations exist provided the rotation/shift is accompanied by a spin flip. Say a translation of (-1/2,1/2,1/2)a restores the crystal structure, but all atoms after translation have opposite spin. Specify this symmetry with: SYMGRP ... AFM::-1/2,1/2,1/2 This operation is used only by lmf.
• SOC or SOC=2: Tells the symmetry group generator to exclude operations that do not preserve the z axis. This is used particularly for spin-orbit coupling, where the crystal symmetry is reduced (z is the quantization axis). SOC=2 is like SOC but allows operations that preserve z or flip z to −z. This works in some cases. Note: This keyword is only active when the two spin channels are linked, e.g. SO coupling or noncollinear magnetism.
• GRP2 turns on a switch that can force the density among inequivalent classes that share a common species to be averaged. In the ASA codes the density is spherical and the averaging is complete; in the FP case only the spherical part of the densities can be averaged. This sometimes helps stabilize difficult cases in the path to self-consistency. You specify which species are to be averaged with the SPEC_ATOM_GRP2 token. GRP2 averages the input density; GRP2=2 averages the output density; GRP2=3 averages both the input and the output density.
• RHOPOS turns on a switch that forces the density to be positive at all points. You can also accomplish this with the command-line switch --rhopos.
See Table of Contents This category is used for version control. As of version 7, the input file must contain the token LM:7 in this category for any program in the suite. It tells the input system that you have a v7-style input file. For a particular program you need an additional token to tell the parser that this file is set up for that program. Thus your VERS category should read:
VERS LM:7 ASA:7  for lm, lmgf or lmpg
VERS LM:7 FP:7   for lmf or lmfgwd
VERS LM:7 MOL:3  for molecule codes such as lmmc
VERS LM:7 TB:9   for the empirical tight-binding code tbe
and so on. Add version control tokens for whatever programs your input file supports.
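Putting the version tokens together with the structural categories described earlier, a skeletal v7 input file for lmf might begin along the following lines (a sketch only; the lattice constant, sphere radius and site positions are illustrative values for Si, not tuned settings):

VERS  LM:7 FP:7
STRUC ALAT=10.26 NBAS=2 NSPEC=1
      PLAT= 0 0.5 0.5  0.5 0 0.5  0.5 0.5 0
SPEC  ATOM=Si Z=14 R=2.22
SITE  ATOM=Si POS=0 0 0
      ATOM=Si POS=0.25 0.25 0.25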
See Table of Contents
Notes on gradient corrected functionals
The semilocal exchange-correlation potential is built from the exchange and correlation potentials and energy densities of the Homogeneous Electron Gas (HEG), which are determined by the options XCFUN=1 or XCFUN=2, multiplied by an exchange or correlation GGA enhancement factor, which introduces semilocal effects and can be chosen using the token GGA. The enhancement factor is a function of the reduced density gradient. Thus the final exchange-correlation potential/functional is determined using two different tokens, i.e. one for the local (HEG) part and the other one for the semilocal (GGA) correction. Note that hybrid and meta-GGA exchange-correlation functional schemes are not implemented in the Questaal package. As a consequence, only LDA and GGA libXC functionals can be used.
Interactive mode
For the most part Questaal codes are designed to run without receiving information through standard input. The various editors are an exception (though even editor instructions can be run in batch mode; see e.g. the dynamical self-energy editor tutorial). It is often convenient to have some interactive facility, e.g. to limit the number of iterations in a self-consistency cycle. The Questaal codes have an interactive mode, which you can turn on with token IO_IACTIV in the ctrl file, or on the command line with the  --iactive  switch. For example, if you run lm or lmf interactively you will be prompted with
QUERY: beta (def=0.3)?
and the program will wait for input. To see what your options are, enter  ? <RET>. You should see
(A)bort (I)active (V)erb (C)pu (T)iming (W)ork (S)et value
QUERY: beta (def=0.3)?
Here it is asking if you want to modify the existing value for the charge mixing parameter beta. Enter one of:
• a   Program aborts execution
• i   Toggles interactive mode
• v #  Sets verbosity to  #
• c   Prints out CPU usage so far
• t   Turns on timing printout
• w   Not used now
• s #  Sets parameter to  #
If you enter  s .5 <RET>  in this instance, the program will modify its value for beta to 0.5 and prompt you again. If you don’t want to make any (further) changes, just enter  <RET>. The most commonly changed parameter is the number of iterations, called  maxit. You can increase or decrease it; if you decrease maxit below the current iteration number, the program will stop.
Simulacrum of interactive mode
You can supply interactive-mode instructions that are normally read from standard input by entering them into a file iact.ext. The executable first looks for that file, reads its contents, and executes its instructions before prompting you. It will perform the instruction, e.g. set the value of a parameter, without turning on the true interactive mode. However, if you do turn it on, e.g. put  i  into iact.ext, the program will revert to true interactive mode and prompt you for instructions. There is an important difference between the normal and simulacrum operations when setting a parameter. In the latter case, you must tell the program which parameter. Do this by naming the parameter after the  s. Thus iact.ext would contain a line like
s maxit 3
Interactive mode with MPI
Normal interactive mode is not available when running with multiple processors. The simulacrum mode does work, but only for a subset of parameters. Most importantly,  maxit  is one parameter that is read; thus you can adjust the number of iterations a job will do after execution starts. slatsm/query.f contains the source code controlling this mode.
See Table of Contents ITER_MIX is a token in the ITER category that controls how Questaal codes iterate to self-consistency. Its contents are a string consisting of mixing options, described here. Questaal codes follow the usual procedure of mixing a linear combination of the input density nin and output density nout to make a trial guess n* for the self-consistent density (see for example Chapter 9 in Richard Martin’s book1). Questaal uses two independent techniques to accelerate convergence to the self-consistency condition nout = nin. First, the quantities are mixed making use of a model for the dielectric function. Second, multiple (nin,nout) pairs (taken from prior iterations) can be used to accelerate convergence. The contents of ITER_MIX control options for both kinds of approaches.
Charge mixing, general considerations
In a perfect mixing scheme, n* would be the self-consistent density. If the static dielectric response were known, n* could be estimated to linear order in nout−nin. It is not difficult to show that
n* = nin + ε−1 (nout − nin).     (1)
ε is a function of source and field point coordinates r and r′: ε = ε(r,r′), and in any case it is not given by the standard self-consistency procedure. The Thomas-Fermi approximation provides a reasonable, if rough, estimate for ε, which reads in reciprocal space
ε(q) = 1 + kTF2/q2.     (2)
Eq. (2) has one free parameter, the Thomas-Fermi wave number kTF. It can be estimated given the total number of valence electrons qval from the free-electron-gas formula
kF = (3π2qval/vol)1/3 = EF1/2.
If the density were expanded in plane waves n = ΣG CGnG, a simple mixing scheme would be to mix each CG separately according to Eq. (2). This is called the “Kerker mixing” algorithm. One can use the Lindhard function instead. The idea is similar, but the Lindhard function is exact for free electrons. In any case the Questaal codes do not have a plane wave representation, so they do something else. The ASA uses a simplified mixing scheme, since the logarithmic derivative parameters P and energy moments of the charge Q for each class are sufficient to completely specify the charge density. The density is not explicitly mixed. lmf, by contrast, uses a density consisting of three parts: a smooth density n0 carried on a uniform mesh, defined everywhere in space, and two local densities: the true density n1 and a one-center expansion n2 of the smooth density. The mixing algorithm must mix all of them, and it is somewhat involved. See fp/mixrho.f for details. The mixing process reduces to estimating a vector X* related to the density (e.g.  P,Q  in the ASA), where δX = Xout − Xin vanishes at Xin = X*. Mixing algorithms mix linear combinations of (Xin,Xout) pairs taken from the current iteration together with pairs from prior iterations. If there are no prior iterations, then
X* = Xin + beta × (Xout − Xin)     (3)
It is evident from Eq. (1) that beta should be connected with the dielectric function. However, beta is just a number. If beta=1, X* = Xout; if beta→0, X* scarcely changes from Xin. Thus in that case you move like an “amoeba” downhill towards the self-consistent solution. For small systems it is usually sufficient to take beta on the order of, but smaller than, unity. For large systems charge sloshing becomes a problem, so you have to do something different. This is because the potential change goes as δV ~ G−2×δn, so small G components of δn determine the rate of mixing. The simplest (but inefficient) choice is to make beta small.
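As a concrete numerical illustration of Eq. (3) (the numbers are invented purely for clarity): if some component of X has Xin = 1.00 and Xout = 1.10, then with beta = 0.3 the mixed value is
X* = 1.00 + 0.3 × (1.10 − 1.00) = 1.03,
i.e. only 30% of the predicted change is accepted in this pass.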
The beauty of Kerker mixing is that changes in the small G components of the density get damped out, while the short-ranged, large G components do not. An alternative is to use an estimate ε for the dielectric function. Construct δn = ε−1 (nout − nin) and build δX from δn. Then estimate

  X* = Xin + beta × δX     (4)

Now beta can be much larger again, of order unity. lmf uses a Lindhard function for the uniform mesh density (similar to Thomas-Fermi screening, only the Lindhard function is the actual dielectric function for the free electron gas) and attempts to compensate for the contribution from the local densities in an approximate way. The ASA codes (lm, lmgf, lmpg) offer two options:

1. A rough ε is obtained from eigenvalues of the Madelung matrix (OPTIONS_SCR=4).
2. The discretized polarization at q=0 is explicitly calculated (see OPTIONS_SCR).

There is some overhead associated with the second option, but it is not too large, and having it greatly facilitates convergence in large systems. This is particularly important in magnetic metals, where there are low-energy degrees of freedom associated with the magnetic parts that require large beta.

The ITER_MIX tag and how to use it

Mixing proceeds through (Xin,Xout) pairs taken from the current iteration together with pairs from prior iterations. As noted in the previous section, it is generally better to mix the screened δX built from ε−1(nout − nin) than the bare δX; but the mixing scheme works for either. You can choose between Broyden3 and Anderson2 methods. The string belonging to ITER_MIX should begin with one of An or Bn (see the syntax below), which tells the mixer which scheme to use. slatsm/amix.f describes the mathematics behind the Anderson scheme. n is the maximum number of prior iterations to include in the mix. As programs proceed to self-consistency, they dump prior iterations to disk, to read them the next time through. Data is written to, and read from, mixm.ext.

The Anderson scheme is particularly simple to monitor. How much of the δX from prior iterations is included in the final mixed vector is printed to stdout as parameter tj, e.g.

  tj: 0.47741             ← iteration 2
  tj:-0.39609 -0.44764    ← iteration 3
  tj:-0.05454  0.01980    ← iteration 4
  tj: 0.24975
  tj: 0.48650

In the second iteration, one prior iteration was mixed; in the third and fourth, two; and after that, only one. (When the normal matrix picks up a small eigenvalue, the Anderson mixing algorithm reduces the number of prior iterations.) Consider the case when a single prior iteration was mixed.

• If tj=0, the new X is entirely composed of the current iteration. This means self-consistency is proceeding in an optimal manner.
• If tj=1, the new X is composed 100% of the prior iteration. This means that the algorithm doesn't like how the mixing is proceeding, and is discarding the current iteration. If you see successive iterations where tj is close to (or worse, larger than) unity, you should change something, e.g. reduce beta.
• If tj<0, the algorithm thinks you can mix more of Xout and less of Xin. If you see successive iterations where tj is significantly negative (less than −1), increase beta.

In a simple metal, the Lindhard function describes the actual dielectric function pretty well, and tj should be small, as seen in this tutorial.

Broyden mixing3 uses a more sophisticated procedure, in which it tries to build up the Hessian matrix. It usually works better but has more pitfalls than Anderson. Broyden has an additional parameter, wc, that controls how much weight is given to prior iterations in the mix (see below).
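For orientation, the single-prior-iteration case described in the bullets above can be written down in a few lines. This is the textbook Anderson formula, not the actual slatsm/amix.f implementation (which handles several prior iterations and other details); it only shows where a coefficient with the meaning of tj comes from.

    import numpy as np

    def anderson1(Xin, Xout, Xin_prev, Xout_prev, beta=0.3):
        """Textbook Anderson mixing with one prior (Xin, Xout) pair.
        tj minimizes |(1-tj)*F + tj*F_prev|^2, where F = Xout - Xin is the
        residual; tj=0 keeps only the current pair, tj=1 keeps only the
        prior pair, and tj<0 over-weights the current residual."""
        F, F_prev = Xout - Xin, Xout_prev - Xin_prev
        dF = F - F_prev
        tj = float(F @ dF) / float(dF @ dF)
        Xstar = (1.0 - tj) * (Xin + beta * F) + tj * (Xin_prev + beta * F_prev)
        return Xstar, tj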
The general syntax for ITER_MIX is

  An[,b=beta][,b2=b2][,bv=betv][,n=nit][,w=w1,w2][,fn=name][,k=nkill][,elind=#][;...]

or the same form with Bn in place of An. The options are described below. They are parsed in routine subs/parmxp.f. Parameters (b, wc, etc.) may occur in any order:

• An or Bn:  the maximum number n of prior iterations to include in the mix (the mixing file may contain more than n prior iterations). n=0 implies linear mixing. Default: B2.
• b=beta:  the mixing parameter beta in Eq. (4) above. Default: 0.3.
• b2=b2:  not documented. The ASA code does not use this tag.
• n=nit:  the number of iterations to mix with this set of parameters before passing on to the next set. After the last set is exhausted, it starts over with the first set.
• fn=name:  mixing file name (mixm is the default). Must be eight characters or fewer.
• k=nkill:  kill the mixing file after nkill iterations. This is helpful when the mixing runs out of steam, or when the mixing parameters change. Default: 7.
• wc=wc:  (Broyden only) controls how much weight is given to prior iterations in estimating the Jacobian. wc=1 is fairly conservative. Choosing wc<0 assigns a floating value to the actual wc, proportional to wc/rms-error; this increases wc as the error becomes small. wc defaults to −1 if it is not specified. See Johnson's paper3 for the definition of wc.
• w=w1,w2:  (spin-polarized calculations only) The up- and down-spin channels are not mixed independently. Instead the sum (up+down) and difference (up−down) are mixed; the two combinations are weighted by w1 and w2 in the mixing, more heavily emphasizing whichever combination carries the larger weight. As special cases, w1=0 freezes the charge and mixes the magnetic moments only, while w2=0 freezes the moments and mixes the charge only.
• elind=elind:  the Fermi energy entering into the Lindhard dielectric function. elind<0: use the free-electron gas value, scaled by |elind|. The default value is −1.
• wa:  (ASA only) weight for extra quantities included with P,Q in the mixing procedure. For noncollinear magnetism, this includes the Euler angles.
• locm:  (FP only) not documented yet.
• r=expr:  continue this block of the mixing sequence until the rms error < expr.

Example:  MIX=A4,b=.2,k=4  uses the Anderson method2, killing the mixing file each fourth iteration. The mixing beta is 0.2.

You can string together several rules: one set of rules applies for a certain number of iterations, followed by another set. Rules are separated by a ";".

Example:  MIX=B10,n=8,w=2,1,fn=mxm,wc=11,k=4;A2,b=1  does 8 iterations of Broyden mixing, followed by Anderson mixing. The Broyden iterations weight (up+down) double that of (up−down) for the magnetic case, and iterations are saved in file mxm, which is deleted at the end of every fourth iteration. wc is 11; beta assumes the default value. The Anderson rule mixes two prior iterations with beta=1.

1 R. M. Martin, Electronic Structure, Cambridge University Press (2004).
2 D. G. Anderson, Iterative procedures for nonlinear integral equations, J. Assoc. Comput. Mach. 12, 547-560 (1965).
3 D. D. Johnson, Phys. Rev. B 38, 12807 (1988).
Applied Mathematics for Chemistry Majors

Rachel Neville1, Amber T. Krummel2, Nancy E. Levinger2, Patrick D. Shipman3
1 University of Arizona, Department of Mathematics, Tucson, Arizona, United States
2 Colorado State University, Department of Chemistry, Fort Collins, Colorado, United States
3 Colorado State University, Department of Mathematics, Fort Collins, Colorado, United States
11/15/17 to 11/23/17

The math that chemistry students need is significant. In physical chemistry, students need to be comfortable with ordinary and partial differential equations and linear operators. These topics are not traditionally taught in the calculus sequence that chemistry students are required to take at Colorado State University; thus mathematics can present a significant barrier to success in physical chemistry courses. Through the collaboration of the mathematics and chemistry departments, Colorado State University has developed and implemented a two-semester sequence of courses, Applied Mathematics for Chemists (MfC), aimed specifically at providing exposure to the math necessary for chemistry students to succeed in physical chemistry. The prerequisite for the sequence is a first semester of Calculus for Physical Scientists—that is, a working knowledge of derivatives, integrals and their relation through the Fundamental Theorem of Calculus. MfC begins with a look at the Fundamental Theorem of Calculus that emphasizes a scientific realization that it provides, namely an understanding of physical phenomena in terms of an initial condition and the rate of change. This introduces the first topic of MfC, namely first- and then second-order differential equations. Working with differential equations at the start of the course allows for questions from chemistry to motivate the mathematics throughout the sequence. Solving the differential equations naturally introduces students to another fundamental mathematical concept for physical chemistry, and another theme of the course, namely linear operators. The flow of the course allows topics traditional to the second and third semesters of calculus, such as Taylor series and complex numbers, to be motivated by solving chemical problems, and leads to some topics, such as Fourier series, which are not part of the standard calculus sequence. Feedback from students who have taken MfC and then physical chemistry has been positive.

The depth and breadth of mathematical skills that chemists need is significant. Like most American college and university chemistry curricula leading to the BA or BS degree, Colorado State University (CSU) has previously required students to complete three semesters of calculus. This more than fulfills the requirements for the ACS approved chemistry degree (ACS, 2015). However, these calculus courses omit mathematical topics such as differential equations and linear operators that are imperative for understanding physical chemistry. Similarly, traditional calculus courses, like those at CSU, cover content such as a broad range of integration techniques that are not of immediate use in physical chemistry. From the instructors' perspective, the chemistry major would ideally require students to take significantly more mathematics, including linear algebra and differential equations, prior to taking physical chemistry.
However, requiring these math courses would add credits to a chemistry major that already requires a lot of classes, making the curriculum less flexible and potentially decreasing the number of students majoring in chemistry. To provide chemistry students with an appropriate mathematical background, and to refresh topics that students may have forgotten since their last math course, some CSU chemistry instructors have offered a "just-in-time math review" as an addendum to the Physical Chemistry 1 course. Because it is optional, not all students enrolled in the math review, reducing its potential impact. To address this mismatch and provide a math curriculum more aligned with the needs of chemistry courses, we have developed a two-semester math sequence, Applied Math for Chemists I and II (MATH 271 and 272), at CSU.

Motivation and Background

Among students at CSU and elsewhere, physical chemistry has the reputation of being a very challenging course. Derrick and Derrick studied the success of students at Valdosta State University and suggest that the "formidable perception" of physical chemistry is due to the mathematical and conceptual difficulty rather than the chemistry itself (Derrick & Derrick, 2002). Early attempts to identify students who would struggle in a physical chemistry course resulted in a diagnostic quiz that tests students' background in mathematical concepts deemed necessary for physical chemistry (Porile, 1976). Prior success in math courses significantly impacts a student's success in physical chemistry. For instance, Hahn and Polik showed that student success in physical chemistry correlates significantly both with the amount of mathematics that a student has taken and with the grades earned in these mathematics courses (Hahn & Polik, 2004). Instructors at CSU have observed the same trend. In another study, surveying instructors of physical chemistry courses across several hundred universities, 61% of instructors indicated that students struggle because they lack the necessary mathematical background, and a third of instructors reported that students do not make connections between physical chemistry concepts and the mathematics on which those concepts are based (Fox & Roehring, 2015). This suggests that not only the mathematical concepts, but also their connections to chemistry, are important to student success. In fact, after lengthy conversations with colleagues, one professor concluded, "College students in the sciences often grasp the operations of mathematics but miss the connection between mathematical operations and the physical systems they describe." (DeSieno, 1975) Given these observations, it seems that we could provide a better math background to help our students succeed in physical chemistry.

In 2000, the Mathematical Association of America (MAA) organized a series of Curricular Foundations Workshops to seek input on the mathematics curriculum from chemists, biologists, physicists, and engineers whose students rely on a strong foundation in mathematics (Craig, 2001). Various working groups developed recommendations regarding the mathematical skills necessary for students in specific fields. A working group composed of chemistry and mathematics faculty from different institutions gave a thorough recommendation of the content and conceptual principles that students should be taught, and a recommendation for the division of responsibility (see the table in the appendix of (Craig, 2001)).
Several topics were given high priority for the mathematics competence of students in the chemical sciences, namely multivariate calculus, creating and interpreting graphs, spatial representations, and linear algebra. Nearly all relations that students will encounter in chemistry contexts are multivariate. Therefore, students should be comfortable with handling multivariate problems, thinking of variables as more than merely a spatial extent or time. Due to the large variations in the physical scale of problems, students should be able to decide whether solutions are reasonable using estimation techniques and order-of-magnitude calculations. There should also be an emphasis on visualizing structures in three dimensions.

The course sequence at Colorado State University was initiated by a request from faculty members in the Department of Chemistry who were seeking ways to improve student performance in the two-semester, upper-division undergraduate course in physical chemistry. These faculty members believed that deficiency in mathematical preparedness presented a significant barrier to student success, both in terms of the mathematical topics covered in the prerequisite courses (a standard three-semester calculus sequence covering topics through multivariate calculus and targeting students in the physical sciences and engineering) and in terms of student ability to apply the mathematical topics covered in those courses in their chemistry courses. Faculty members from the Departments of Chemistry and Mathematics collaborated to design the sequence of two 4-credit, semester-long courses, called Applied Mathematics for Chemists (MfC). The sequence was taught as an experimental course in the academic years 2014-2015 and 2015-2016 (with temporary course numbers, standard at CSU) and was accepted into the curriculum of the Mathematics Department and as a prerequisite for the physical chemistry sequence in 2016 (course numbers MATH 271 and MATH 272).

Course Content

MfC has a prerequisite of Calculus for Physical Scientists 1 (derivatives and integrals) and serves as the mathematics prerequisite for the physical chemistry course. While there is some necessary mathematical background required for other chemistry courses, physical chemistry has the highest mathematical demands. The goal of the MfC courses is to provide students with a working proficiency in the mathematics so that they can focus on learning and understanding the chemistry.

Two texts are used for MfC, namely Erich Steiner's The Chemistry Maths Book (Steiner, 2007) and Donald McQuarrie's Mathematics for Physical Chemistry (McQuarrie, 2008). Both books focus specifically on mathematical topics relevant to chemists. These texts take a practical, straightforward approach, with less emphasis on theory or proofs of theorems and more emphasis on developing a student's mathematical tools as applied to practical problems. The texts cover similar material, but the Steiner book is more complete mathematically, whereas the McQuarrie book has more detail on connections with physical chemistry. Students appreciated the full solutions freely available on the publisher's website for The Chemistry Maths Book, as it offered quick feedback and an opportunity for individual practice. Mathematics for Physical Chemistry is written by the same author as the text that is used in the physical chemistry course at CSU and expands on the math review sections that are included in the chemistry text (McQuarrie, 2008).
Clear recommendations for mathematics courses for chemistry majors were given in the MAA Curricular Foundations Workshops (Craig, 2001), specific to the chemistry context. The expectation is set that math courses should develop 14 conceptual principles, nearly all of which are addressed in MfC. The exceptions are an extensive discussion of numerical methods, representation of information as analog or digital, and statistics and curve fitting. Statistics and regression are covered in a statistics course that chemistry students are also required to take. Each principle is marked according to two categories: (1) it should be developed by mathematicians, and (2) teaching the mathematical concept in the specific context of chemistry is particularly effective. The material covered in this course is substantial, though necessary for the future success of chemistry students.

The course is topically divided into five parts. Parts 1 (differential equations, series, and complex variables) and 2 (linear algebra) are covered in the first semester. The second semester covers parts 3 (inner product spaces and Fourier series), 4 (multivariable calculus), and 5 (partial differential equations).

The highlight of the prerequisite course (one semester of The Calculus) is the Fundamental Theorem of Calculus (FTC), typically written as

  ∫ab f ′(s) ds = f(b) − f(a).

Students see two interpretations of this relation. With s equal to a spatial variable x, the FTC gives an area underneath the graph of f ′(x) in the domain a ≤ x ≤ b. With s equal to time t, the FTC gives the total change in f over the time interval a ≤ t ≤ b. But, honestly, why calculate the total change f(b) − f(a) by some complicated integral? MfC opens with a slight but tremendously revealing rewriting of the FTC,

  f(t) = f(0) + ∫0t f ′(s) ds:

any differentiable function f(t) can be written in terms of an initial condition f(0) and a rate of change f ′(t). This mathematical insight also opens up a whole new way of thinking scientifically and leads into the first part of MfC, namely ordinary differential equations.

We cover basic first- and second-order linear homogeneous and inhomogeneous differential equations and solution methods such as separation of variables, integrating factors, and the method of undetermined coefficients. Applications in chemical kinetics, the harmonic oscillator, and a first look at Schrödinger's equation for a particle in a box motivate each class of equations. Complex numbers and series are taught as necessary theory for working with more complex systems. The grand finale of the unit on ordinary differential equations is the method of using power series to solve differential equations. Chemistry students are typically not exposed to these mathematical topics because they comprise topics in an ordinary differential equations course, which is not required for chemistry majors.

Part 2 covers linear algebra. Students are introduced to vectors and are encouraged to think of vectors as coordinates in physical space as well as objects holding variables that are not necessarily distances. There is an emphasis on what insights determinants and eigenvalues give when modeling a physical system. Symmetries and group axioms are taught primarily through linear transformations, with some discussion of finding group representations. Compelling examples come from symmetries of planar molecules (Hückel molecular orbital method) and distributions of electrons in p-orbitals. Several students reported this application as being the most compelling example from the entire course.
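As an indication of the kind of eigenvalue problem involved (an illustrative textbook example, not taken from the MfC course materials), the Hückel treatment of the four π orbitals of butadiene reduces to diagonalizing a small tridiagonal matrix,

  \[ H=\begin{pmatrix}\alpha&\beta&0&0\\ \beta&\alpha&\beta&0\\ 0&\beta&\alpha&\beta\\ 0&0&\beta&\alpha\end{pmatrix}, \qquad E_j=\alpha+2\beta\cos\frac{j\pi}{5},\quad j=1,\dots,4, \]

giving orbital energies α ± 1.618β and α ± 0.618β. Diagonalizing matrices of this kind, and reading off what the eigenvalues and eigenvectors say about bonding, is precisely the linear-algebra skill that Part 2 aims to build.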
The second semester and Part 3 of MfC begin with the notion of a vector space and a basis. As inner product spaces are introduced, parallels are drawn between finite-dimensional vector spaces and infinite-dimensional inner product spaces. This gives students a concrete footing in a topic that they find very theoretical. Orthogonal polynomials (including special sets of polynomials) are introduced. Rather than emphasizing the (often fairly involved) derivation of these polynomials, students are challenged to understand them as a basis for modeling specific physical systems. This notion is initiated here and developed further in the end-of-the-year project. Finally, students learn Fourier series and work with Fourier transforms and their interpretation in a mini-Matlab project. This section is generally the most challenging for students.

Part 4 returns to material that is usually covered in a standard course on multivariate calculus (the third semester of a traditional calculus sequence). By this point in the course, students have become comfortable with working with expressions in multiple variables. Visualization in three dimensions is taught, as well as partial derivatives and multiple integrals. There is an emphasis on the physical interpretation of these quantities. However, the level of coverage is not as extensive as in a typical third-semester calculus course. For example, a topic from a typical course in multivariable calculus that is not covered in MfC is Stokes' Theorem.

The concluding part, the shorter Part 5, is a basic introduction to partial differential equations. Students are introduced to separation of variables, and the method is applied to solve the heat equation and the classical wave equation. Boundary conditions and initial conditions are discussed, again with an emphasis on modeling a physical system. This is a topic that students would not encounter until a course in partial differential equations after a course in ordinary differential equations, a course that very few chemistry students take. We considered taking more time in Part 4 and omitting Part 5, but an advantage of covering Part 5 is that many concepts from the course come together when solving partial differential equations. Indeed, this topic allows students to combine their knowledge of ordinary differential equation boundary value problems, partial derivatives, and Fourier series. Another advantage is that students are likely to see the wave equation near the start of a physical chemistry course, and we want them to feel mathematically prepared from the beginning of that course. Near the end of MfC, students are assigned a group project applying separation of variables. This project is discussed further in Section 4.

To allow this material to be covered in a year-long course, some sacrifices from the traditional sequence clearly need to be made. These include some integration techniques and theorems on the convergence of sequences and series, as well as Stokes' theorem. Although the topics covered in MfC range from differential equations to linear algebra to understanding multivariable relationships, the fact that they are tied together by a theme of linear operators helps to unite the course and allows for the reinforcement of previously learned topics throughout. The focus of this course is on developing students' mathematical dexterity and reasoning skills, with motivation coming from chemistry.
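To indicate the level at which Part 5 treats separation of variables (a generic illustration, not taken from the course materials): for the one-dimensional heat equation with fixed ends, the ansatz u(x,t) = X(x)T(t) splits the PDE into two ordinary differential equations of exactly the type studied in Part 1,

  \[ u_t = k\,u_{xx}, \qquad \frac{T'(t)}{k\,T(t)}=\frac{X''(x)}{X(x)}=-\lambda , \]

and the boundary conditions u(0,t) = u(L,t) = 0 select λn = (nπ/L)2, Xn(x) = sin(nπx/L), and Tn(t) = exp[−k(nπ/L)2 t]. The general solution is then a Fourier sine series whose coefficients are fixed by the initial condition, which is where the ordinary-differential-equation, Fourier-series, and boundary-value threads of the course meet.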
One challenge is that some of the most compelling examples require a good deal of chemistry to understand. For example, students were assigned a project on Nuclear Magnetic Resonance (NMR). This is a compelling application of Fourier transforms. However, the theory of molecular structure and NMR is taught in organic chemistry. The students who had taken organic chemistry (i.e. had seen NMR in a classroom setting) thought the application was neat, though oversimplified. The students who had not had an organic chemistry course could perform the transform but were at a loss when it came to connecting the output signal to the molecular structure, even with an (oversimplified) explanation in the project description.

At the end of the second semester, students were given a final group project. Students are guided through the analytic solution of the Schrödinger equation for the hydrogen atom. This project pulls together concepts from operators, correct handling of multiple variables, partial derivatives, techniques for solving differential equations and partial differential equations, visualizing in three dimensions, and the postulates of quantum mechanics, in an example that is very compelling for chemistry students.

Impact in Physical Chemistry Course

The difference between students who complete the calculus sequence and those who complete the MfC sequence to fulfill their mathematics requirements for chemistry is dramatic. Students' daily engagement in the Physical Chemistry course differs markedly between the two populations. For example, in Physical Chemistry 1, students who have taken the MfC course sequence have already been exposed to the concept of a differential equation, so they do not have to grasp what a differential equation is before striving to understand the interpretation of the solutions they generate for the Schrödinger equation. Instead, students who have taken the MfC sequence are confident in their practical knowledge of finding solutions to ordinary differential equations. Thus, they have the capacity, and are free, to begin thinking about the interpretation of solutions to the Schrödinger equation, rather than being stuck on the mathematical mechanics associated with solving differential equations. Likewise, students having completed MfC approach the Maxwell relations in thermodynamics without trepidation, having already manipulated partial differential equations. These are only two of many examples that speak to the divide that MfC bridges by producing a course that nests the mathematics required of a chemistry practitioner in chemical applications.

The feedback from students who have taken MfC and then physical chemistry has been positive. These students have encouraged their colleagues to take MfC rather than the traditional Calculus sequence, noting that students who have taken the traditional Calculus sequence struggle more in Physical Chemistry than students who have taken the MfC course sequence. Even students who struggled in MfC have remarked how familiar they found the math in physical chemistry, which improved their outlook on the traditionally dreaded physical chemistry course. Finally, the MfC sequence offered at CSU does not require additional credit hours of math for our chemistry majors. Instead, we have tailored the mathematics, and the application of the mathematics, to be aligned with the needs of a chemistry practitioner.
To accommodate transfer students and students changing majors, we still allow chemistry majors to take the traditional three semesters of Calculus for Physical Scientists, but strongly urge our majors to take MfC.

The authors would like to thank Francis Motta for his contribution to developing materials for this course.

ACS. (2015). Guidelines and Evaluation Procedures for Bachelor's Degree Programs. Washington, DC: American Chemical Society Committee on Professional Training.
Bressoud, D. (2002, Aug./Sept.). The Curriculum Foundations Workshop on Chemistry. FOCUS, 22(6). Washington, DC: Mathematical Association of America.
Course Catalog. (2015-2016). Colorado State University. Retrieved from
Craig, N. (2001). Chemistry Report: MAA-CUPM Curriculum Foundations Workshop in Biology and Chemistry. Journal of Chemical Education, 78, 582-586.
Derrick, M., & Derrick, F. (2002). Predictors of Success in Physical Chemistry. Journal of Chemical Education, 79(8), 1013-1016.
DeSieno, R. (1975). How Do You Know Where to Begin? Journal of Chemical Education, 52(12), 783.
Fox, L., & Roehring, G. (2015). Nationwide Survey of the Undergraduate Physical Chemistry Course. Journal of Chemical Education, 92, 1456-1465.
Hahn, K., & Polik, W. (2004). Factors Influencing Success in Physical Chemistry. Journal of Chemical Education, 81(4), 567-572.
McQuarrie, D. (2008). Mathematics for Physical Chemistry: Opening Doors. University Science Books.
N. Craig, D. B. (2000). CRAFTY Curriculum Foundations Project: Chemistry.
Porile, N. (1976). Diagnostic quiz to identify failing students in physical chemistry. Journal of Chemical Education, 53(2), 109.
Prussel, D. (2009). Enhancing Interdisciplinary, Mathematics, and Physical Science in an Undergraduate Life Science Program through Physical Chemistry. CBE-Life Sciences Education, 8(1), 15-28.
Steiner, E. (2007). The Chemistry Maths Book. Oxford UP.

Rich Messeder: I'd like to start the week off by saying how much I appreciate the time and effort that went into preparing these papers. These threads are a most valuable resource to me, and are made more useful by the comprehensive nature of the comments. I am primarily research- and engineering-oriented, but I value the intrinsic worth of each student. I hope that the threads are available after the conference, because I intend to mine them for ideas.

Rich -- My understanding is that if you go to the main ConfChem site, the "Useful Links" will be posted on the left as they are on this page, and the "temporal article list" will have all of the articles and threads. -- rick nelson

Hi All, We actually have three nonredundant backups of the ConfChem discussion archive. First, there are the actual papers, which, as Rick states, can be found through the temporal article list, but also through the sortable article list (just pick the ConfChem you want to see); note you can tag the papers if that helps your research (but not the comments, although those can be tagged with an annotation service). Second is the actual ConfChem List archive at UALR. Just choose the month, and there are several sort options (subject line, date...). Third, after the ConfChem is over the authors have the option to submit these to the Journal of Chemical Education as a series of bundled communications.
Attached to each communication as "Supporting Information" is the actual ConfChem paper with the discussions, and so if the CCCE website goes down, and the UALR list goes down, you still have the discussions archived in the Supporting Information of the JCE communications. I should add that we remove personal identifiers like the names and images of the people making comments in the JCE Supporting Information, but what they say is preserved.

Rich Messeder: Info captured for the future.

Dear CSU Team, If I understand correctly, what you have done at CSU is to give chemistry majors a choice of the traditional CSU sequence of 3 semesters of 4-credit Calculus for Physical Scientists, or the new sequence of one semester of Calculus for Physical Scientists followed by 2 semesters of Applied Math for Chemists (taught by the math department). My suspicion would be there were obstacles that needed to be overcome to achieve these offerings of "Calculus Customized for Chemistry." For instructors who would like to gain the same type of sequence on their campuses, would there be advice you might be able to give on what bottlenecks to anticipate and how they might address them? -- rick nelson

Yes, you understand correctly. The main obstacle may be that for a math department to make a sequence of courses that is ideal for every major would make for a lot of different course sequences, and it gets unwieldy! Plus, it takes special grad students like Rachel to teach the course--they need to be willing to learn some p-chem, perhaps even to learn some maths that they never learned (self-adjoint operators, for example), so this course can't be taught in the normal factory method of teaching calculus. So, chemistry needs to really have enough students to put into the course, and maths faculty need to realize that this is a fun course to teach. Maths faculty are used to the traditional sequence, and it can be hard to fathom some variation on it--differential equations are supposed to be a topic after Calc III, and we do them right at the beginning of the MfC sequence!

First of all, I congratulate Dr. Nelson and his co-organiser (whose name momentarily escapes me, apologies) for their commendable organisation and moderation of this conference on the internet. This occasion is the first in which I have participated in such a meeting of minds, and I am pleased to read the various points of view and the dedication to improve the teaching and learning of chemistry. The fact that students who are admitted to tertiary institutions are poorly prepared in mathematics is clearly not confined to the USA, but is likely worse there, consistent with the standing of the USA in comparison with other developed countries. One topic, to which I here respond, is the provision of courses by departments of mathematics for students of other departments. Some years ago, a responsible senior professor of mathematics informed me that the policy of his department was to respond to any such request from another department, but within standard courses of mathematics there was no attempt to include examples or problems in any applied area, because although that content might be of interest to a fraction of students in the common course it would be boring and a distraction for the other students. He emphasised that mathematicians were prepared by their academic experience to teach mathematics, not chemistry nor physics nor ....
One approach that is more common in European universities than in Canadian and USA institutions is to have a course such as 'mathematics for chemistry' taught by either an instructor of mathematics devoted to a particular department, such as was the practice in Danish Technical University in the past, or a chemistry instructor who is suitably prepared to undertake such tasks.  In University of York UK two distinguished instructors, both active in fields of quantum chemistry, taught such courses, and even published a textbook Mathematics for Chemistry, by G. Doggett and B. T. Sutcliffe.  On examining that book, I ventured to express the opinion that I considered it to be at a rather low level, but Dr. Doggett replied that it was actually deemed to be beyond many students in British universities; for that reason he was asked to write two other short books at an even lower level, "Maths for Chemists (Tutorial Chemistry Texts)" published by Royal Society of Chemistry.  The problem addressed in this conference has two parts:  the mathematical preparation of students entering general chemistry, most discussed, and the mathematical requirements for succeeding courses in chemistry.  The fundamental solution to the preceding case is to improve the teaching and learning of arithmetic and mathematics in school, according to all the aspects described here, including estimation; in lieu of that solution for the present students, remedial courses at the tertiary level must be arranged, whether organised by chemists or mathematicians.  For higher courses in chemistry, if what departments of mathematics offer is deemed unsatisfactory or insufficient, chemistry instructors can offer their own courses, based on such a textbook as I specified above, for example, or other comparable books.  My interactive electronic textbook Mathematics for Chemistry is an alternative approach in which the objective is to teach, with advanced mathematical software (Maple), the concepts and principles of all pertinent mathematical topics and aspects, from arithmetic to group theory and graph theory, and then to encourage the students to apply their knowledge of the use of that software to solve chemical problems.  I despair of that book being of significant utility if students lack the basic arithmetical and symbolic skills that participants in this conference have decried. When I was an undergraduate in a Canadian university, I was required to complete five year courses in mathematics (equivalent to ten semester courses) as a requisite of an honours degree in chemistry.  The average requirement of mathematics for chemistry in Canadian universities is now three semester courses, although the nominal mathematical level of content of chemistry courses has risen significantly in the interim.  I have noticed that standard textbooks of general physics, which might in many cases be a corequisite with general chemistry, have mathematical content at an increasingly high level.  For instance, Schroedinger's partial differential equation is introduced before its solutions and their properties.  How can students of chemistry cope with such content when we have read that even multiplication of small numbers challenges the capability of many such students? Much of the discussion within this conference addresses a complaint that students entering general chemistry are unprepared mathematically.  It is ironic that the instructors who complain so vociferously are themselves unprepared mathematically to teach even general chemistry.  
I refer to the content of all textbooks of general chemistry that I have seen that includes orbitals, electronic configurations of atoms and analogous material based ultimately on quantum mechanics, which is now recognised to be not a chemical theory, not even a physical theory, but a collection of mathematical methods, or algorithms, that one might apply to systems on an atomic scale -- which is far from the laboratory experience of general chemistry. The authors of such textbooks, and the instructors who duly prescribe and teach the textbook in the fashion of a parrot, so act not because they understand the underlying mathematics and its consequences but because they do not so understand. How many of you are aware that there is not just one set of orbitals for the hydrogen atom but four sets, not just one set of quantum numbers associated with those orbitals but one set of quantum numbers for each set of orbitals? [The descriptions of these orbitals are freely available from 1709.04759, 1709.04338, 1612.05098, 1603.00839] You are "the credulous masses [or their successive generations] -- that sad benighted chemistry professoriate -- dazzled with beguiling simplifications" by Pauling, "a master salesman and showman" [A. Valiunas, The man who thought of everything, The New Atlantis, No. 45, 60 - 98, 2015; J. S. Rigden, Review of Linus Pauling -- a man and his science, by A. Serafini, Physics Today, 43 (5), 81 - 82, 1990]. Some years ago, I wondered whether a really intelligent student who encountered this incomprehensible and indigestible rubbish about orbitals in general chemistry might decide that, because he could not understand what he was taught but would be forced merely to memorize the material quasi-religiously, the fault was his, so that he transferred to some other subject such as computer science that he could genuinely understand, leaving the mediocre students of general chemistry to progress onward in chemistry and eventually to become the next generation of professors to perpetuate the charade. How many of you who have taught, in any shape or form, orbitals, which are indisputably solutions to the Schroedinger equation for the hydrogen atom, have actually read Schroedinger's papers? They are available in authorised English translation; within them you can learn about a second set of orbitals and quantum numbers, beyond the first set for spherical polar coordinates with quantum numbers k,l,m. Pauling never admitted the existence of this second solution, which would accordingly undermine his proffered ideas. Your libraries might contain this book at QC172..... Next time you feel like complaining about the quality of mathematical preparation of students entering chemistry, please reflect that your students have absolutely the same right to complain about the quality of mathematical preparation and understanding of their instructors -- only the ignorance of the students precludes such complaint, just as the mathematical ignorance of instructors of chemistry -- "that sad benighted chemistry professoriate" -- perpetuates the current paradigm of teaching chemistry.

Rich Messeder: Hailing from a sub-culture in the US noted for its often too-frank speech, I appreciate John's frank discourse, however unsettling I found its direct criticisms. His comments directed toward "that sad benighted chemistry professoriate" apply equally to other fields, I opine. I see two root problems in academia (and perhaps beyond) that seemingly contradict one another.
The first is that there seems to be a great deal of insecurity among academics, in my experience, and this relates to an unwillingness to be different, lest one not be "accepted". The second also relates to insecurity: arrogance. I think that this varies by institution, but I have seen it in all academic quarters over the past 4 decades. It is a bad enough example among academic peers, but it is especially destructive when faculty "talk down to students". We need outspoken leaders, yes. I am the captain of my ship, and my students know that as well as did those serving under me years ago in the military. Students look for confidence and leadership, but shy from arrogance. "dazzled with beguiling simplifications" I have lately coined the term "simplication" to mean just this. Why? Because I have seen too many instances of complicated material "simplified" in discussion to the point of irrelevance. For example, when working on a major physics research project recently, we were tasked with implementing an algorithm that was kicked around for years in popular terms that over-simplified its real complexity and impact on research. It was not until several frustrating years of increasing pressure from approaching deadlines that the issue was put on the table for open discussion. I recall that at one meeting the PI in charge of the research was stunned that he did not truly understand the algorithm that he had been advocating all along. No one did, because they had all (PhDs) been kicking around a "simplicated" version of the algorithm. The beginning of the algorithm was replaced with an equivalent, much simpler, statistical model. There are places for both appropriate simplification ("as simple as possible, but no simpler"), and "the harrowing complexity of honest science."

I might be accused of being frank, as Frank is my middle name! Having passed some years as a visiting professor or equivalent in significant departments of mathematics and physics, I have some intimate knowledge of those two fields, accumulated long after my undergraduate degree, of which the programme was formally described as combined honours in physics and chemistry, with a healthy component of mathematics (equivalent to ten semester courses, as I mentioned) plus a course in mathematical physics as part of the physics sequence. On the basis of that direct experience, I find it implausible that those two subjects suffer from the same systemic rot to the same extent as chemistry arising from orbitals and related rubbish -- even though physicists might be prone to attribute experimental observations directly to mythical 'hybridisation', clearly an infection from chemistry. I have no doubt of the value of quantum-chemical calculations -- they were invaluable in our identification of two new boron hydrides, B2H4 and B3H3, for instance -- although for molecules containing elements other than boron perhaps 'molecular mechanics' would have produced similar results. One must, however, distinguish between orbitals, which pertain only to the hydrogen (or one-electron) atom, and members of a basis set that might be applied in quantum-chemical calculations. Even the latter are superfluous because density-functional theory without an orbital basis set is a practical alternative. What is an orbital? It is incontestably an algebraic function.
An ignorance by chemistry instructors of the mathematical basis of such concepts is just as reprehensible as students admitted to general chemistry being incapable of undertaking basic arithmetical and mathematical operations for the solution of chemical problems. The sources of the quotations cited in my preceding comment can provide an ample basis for the recognition of the deficiencies that must be rectified.

Yes, one of the challenges of teaching new things is eliminating students' conflicting misconceptions, some of which have been installed by well-meaning teachers seeking to help students feel that science "makes sense". On the subject of atomic and molecular orbitals, I think these are some (among many) pedagogically useful articles (and two of the authors are also named "Frank"!):
"4s is Always Above 3d! or, How to Tell the Orbitals from the Wavefunctions," Frank Pilar, J. Chem. Educ., vol. 55, no. 1, Jan. 1978, pp. 2-6.
"Tomographic Imaging of Molecular Orbitals," D. M. Villeneuve and coworkers, Nature, vol. 432, 16 Dec. 2004, pp. 867-871 (includes a tomographic reconstruction of the HOMO of N2). This image also appeared in C&E News, vol. 82, no. 51, 20 Dec. 2004, p. 10.
"The Covalent Bond Examined Using the Virial Theorem," Frank Rioux, Chem. Educator, vol. 8, 2003, pp. 10-12.

George Box pointed out many years ago that all models are wrong, some are useful. Anybody who talks to an organic chemist knows the truth of this. The standard sequence of teaching general chemistry proceeds through any number of simple models for chemical bonding and reaction, following the historical development of the science. The atomic/molecular orbital model neatly summarizes these and also explains much about their limits of applicability. Thus it is both useful and teachable at a beginner's level. Indeed, as far as structure and reactivity on the atomic level is concerned, there is very little mathematics in the first semester of GChem and a whole lot of visualization and memorization, for which atomic orbitals are sufficient. If the correct description of electron density is 95% pz and 5% whatever, for teaching a general chemistry student does this make a difference? I have found that the "atoms first" sequence has the advantage of making it easier to justify earlier models to the students, such as valence, oxidation number, octet rule, etc., each of which, as well as atomic & molecular orbitals, has something useful to say about bonding and reactions as well as about the "real" nature of molecules as shown in the image referenced by Doreen (thanks:) and others that are increasingly appearing showing bonds, defined as regions of high electron density between atoms, similar to what a good GChem student would write on an exam using orbitals. Orbitals are, of course, algebraic functions, but they are not unconstrained by physical limits and they do convey useful and generally correct information about the shapes and reactivity of atoms and molecules, which is what we are trying to teach.

The "atoms first" sequence has the natural advantage that the students in the laboratory for general chemistry work directly with single atoms and molecules, and the students can directly see and measure the orbitals -- is that not the case? -- so that there is a direct connection between the lecture material and the laboratory material, which is a primary pedagogical objective. Which orbitals do you use for your explanations?
There are four sets of orbitals for the hydrogen (or one-electron) atom, and each set has its individual shapes and set of quantum numbers -- but of course you understand that if you have been teaching about orbitals. Unfortunately, all those orbitals apply strictly to the hydrogen atom; the corresponding algebraic functions of the helium atom -- yes, they are known -- have quite distinct and complicated algebraic forms. You would not commit the logical fallacy of extrapolation from a point, from H to any other atom, would you, Dr. Halpern? The orbitals of H are presented both algebraically and pictorially in these four items freely available from 1709.04759, 1709.04338, 1612.05098, 1603.00899. Instructors who teach about orbitals might wish also to read "The nature of the chemical bond, 1990 -- there is no such thing as orbital", Journal of Chemical Education, 67, 280 - 289 (1990), republished, by request of the editors, with additional material in Conceptual Trends in Quantum Chemistry, Kluwer, 1994.

First, IMHO, Dr. Ogilvie starts from a bad place. It is not necessary that students use algebraic functions to describe hydrogen-like orbitals for hydrogen or other atoms. There are wonderful visualization tools that provide these images, including 3-D JMol versions, etc. For discussion of atomic and molecular orbitals at the GChem level these are all that is needed and used. Atomic and molecular structure in GChem are a visual, not a mathematical, exercise. A good site for these images is

Moving on, what kind of experiments could one do in a GChem lab starting with atoms first? Now obviously what follows can be refined and improved, but I believe it is a place to start. YMMV. My general approach would be to have the students make a measurement and then interpret the results using concepts taught in class. The availability of online apps and databases makes this much simpler than in the past. I have a few suggestions, and others will hopefully chime in (pun intended). Of course dry labs are also possible, but really not what Dr. Ogilvie or I would want, at least in part.

For example, a mass spectrometer with unit resolution measuring the isotopes in a simple sulfur compound like SO2 could demonstrate isotopes, and results and interpretation could be done using the NIST Webbook. A simple limiting reagent experiment could be used to explain the mole concept and its relationship to atomic number and atomic weight. The emphasis would be on the mole concept and not the stoichiometry. Both of these are taught at the beginning of an atoms-first course as well as in the historical sequence.

Moving on to atomic structure, students could measure the hydrogen Balmer band spectra and relate that to the Bohr formula. You could then measure the sodium atom spectrum and see how it does not exactly fit the Bohr formula, and use that to motivate the discussion of how the hydrogen orbitals are not quite the ones for sodium. One could compare the hydrogen orbitals to the hydrogen-like ones for more complex atoms. See for example  The web site I mentioned above also discusses this, as well as showing the more complex shapes of the hydrogen-like orbitals. There are worksheets at VIPEr that can be used in conjunction. The NIST atomic spectra database would help the students assign lines to transitions between hydrogen-like orbitals. Since these spectra are relatively sparse and in the visible region, small spectrometers such as those sold by Ocean Optics would be fine.
If you are interested in He I, then the NIST atomic spectra database, which generates Grotrian diagrams, would be key to assigning the orbitals involved in the one-electron transitions. Molecular bonding could be demonstrated by selecting a small molecule. The students would then describe bonding in the molecule using (the dreaded hybrid) orbitals. Then a simple ab initio program would be used to calculate electron density maps and IR spectra. The calculated spectra would be compared to the measured ones (either directly, or using the NIST Webbook or similar). The electron density maps would be compared to the initial prediction. The calculation would be all black box, Gaussian abuse as it were, but the students would start the calculations and measure the spectra, as well as starting with the prediction of the shape and orbitals involved. FWIW, folks might enjoy playing with this app on their phones and comparing the orbitals to Dr. Ogilvie's. And so, good night to all :)

Dr. Halpern has raised some stimulating points, on only a few of which I comment here. Despite the fact that the knowledge of orbitals existing in four distinct sets, each with its individual shapes and set of quantum numbers, has been available since at least 1976, I have no doubt that Dr. Halpern, like almost all other instructors of chemistry who teach orbitals, is blissfully ignorant of this fact. "Where ignorance is bliss, 'tis folly to be wise." [Thomas Gray, 1742] If Dr. Halpern were so aware, he might have difficulty justifying the selected particular set of orbitals, in spherical polar coordinates, for the purpose of his teaching, but blissful ignorance precludes such a tiresome chore. Both the formulae and the pictures (plots) of orbitals in all four sets are freely available at

Dr. Halpern suggests the use of a "simple ab initio program to calculate electron density maps and IR spectra". The problem is that a truly "ab initio program" will calculate no such results. If the atomic nuclei and electrons are treated in an equitable manner, there is NO molecular structure (apart from trivial cases such as diatomic molecules); cf. M. Cafiero, L. Adamowicz, Molecular structure in non-Born-Oppenheimer quantum mechanics, Chemical Physics Letters, 387 (1), 136 - 141, 2004. Dr. Halpern seems confused between truly "ab initio" quantum-chemical calculations and semi-empirical calculations, in which a structure is included in the input with some chosen 'canned' basis set. I applaud Dr. Halpern for having arranged the use of a mass spectrometer, even with resolution of only unit mass (dalton), for the direct use -- "hands on" -- of each of his students in his laboratory for general chemistry. Perhaps other expensive instruments might preclude the necessity for students to learn to titrate a base with an acid, but in any case beginning with "atoms first" might require several years of courses to reach the level of practical chemistry in other than the most superficial manner. When the senior author, P. Corkum, of this paper by Itatani, Villeneuve and others presented a lecture on this topic, I challenged him to define an orbital, but he demurred. The authors of that paper understood what they claimed to measure neither during the experiments nor afterward. Anybody who takes seriously the claim of these authors to have recorded an image of a molecular orbital (not molecular orbitals, plural) has only the most superficial understanding of the experiment and its interpretation, which is replete with errors.
cf. Foundations of Chemistry, 13, 87 - 91 (2011); DOI 10.1007/s10698-011-9113-1

Cary Kilner: A (hopefully) balanced response from the co-moderator (I respectfully forgive the slight): John's vociferous post (and not meant as a derogatory description) provokes the following response. He makes many good points in his diatribe, but we need to get back to the masses we are trying to educate. Of course, we seek the excellence he wishes for the upper-level courses for majors. But the fact remains that we must focus on our service-course clientele, who will be our future health-care professionals and who are the most challenged students in the physical sciences and the least prepared mathematically. While he decries instructor misunderstandings in teaching MO theory, I decry instructors who take mathematical competence for granted and still ply the sink-or-swim perspective. If students' primary and secondary education is not sufficient for their study of chemistry, we simply must take up the baton ourselves—hence this ConfChem. His point regarding mathematics teachers teaching "formal math" and neglecting applications seems to be a fault of mathematics instructors more interested in their own egos than in any collective effort to prepare students for a meaningful career that requires the use of mathematics in any capacity. After all, mathematics instruction represents even more of a service course than our own gen-chem, serving a greater number of students and more diverse majors. We might address this issue through pleas to the NCTM, who seem to be at the forefront of mathematics education. And I agree with his argument, which I shift slightly here, that the student might rightly complain of the chemistry instructor being unable to understand and address the neophyte students' troubles with formal mathematics and with its translation into chem-math -- which, of course, is why we presented this ConfChem on mathematics in the teaching of chemistry—for the edification of interested and concerned chemistry instructors.

I really don't care if these students know an orbital from an orbit. I want them to understand how to make a serial dilution, how to calculate the volume of gas at a given temperature and pressure from a given reaction, how to determine if a given reactant is limiting or in excess, how to perform a successful titration, how to use Beer's law and do UV-vis spectroscopy, how to conduct a meaningful calorimetry experiment. Some instructors might feel that these calculations are too abstract for the life-science majors. But I believe that you simply cannot teach chemistry meaningfully without showing how the science developed from an engineering perspective, i.e. in the service of solving practical problems. I want them to know that a chloride salt is NOT a "pale green gas," and that carbon has several allotropic forms, and the difference in behavior between concentrated sulphuric acid and concentrated nitric acid. In other words, I want them to know some descriptive chemistry with its associated chem-math measurements and calculations. An understanding of a need for chemical calculations ("chem-math") has to arise from a need to understand interesting chemical and physical phenomena, either presented via provocative demonstrations or carefully developed wet-chemistry activities or formal experiments. For instance, in seventh grade I was reading about the shock sensitivity of potassium chlorate.
Of course my local mentor and pharmacist sold me some of this salt, since back then pharmacists WERE, in fact, “chemists.” I also obtained chromates and dichromates and potassium permanganate and iodine crystals. My father allowed me to take sodium hydroxide pellets from the 55-gallon barrels in his shop, where I avoided inhaling the aggravating dust and observed the pellets immediately take on water from the humid air. My grandfather helped me obtain the concentrated acids I needed for my home basement laboratory. Back to my KClO3 story: since I understood stoichiometry as a way to DO chemistry, I was able to balance the equation for its reaction with table sugar, which I knew to be a disaccharide, and to calculate how much sugar to mix with my one gram of KClO3. Unfortunately this was too large a mixture. As I ground it on the cement floor of the basement using a lead plate I had melted down, it detonated with a huge BANG, instantly filling the basement with a fog. My parents cheerfully called down the back stairs, “Everything O-K down there?” To which I responded in a cold sweat, “Yeah, it’s all good.”

The main point in my paper for this ConfChem is that to address difficulties in mathematics, you must first have the student PRESENT – not on his/her smart-phone, not downloading a power-point, not sitting in the back scribbling inchoate notes, not practicing Educational Darwinism and merely passing with a D- to get the credit, but actually engaged with the material. Otherwise, how else can they learn? And why else are they there? The long-lost lecture-demonstration pedagogy, with a formally-hired and designated demonstrator/demonstration coordinator, was the way in the past that we were able to engage a large lecture hall of students—not as entertainment but to show how concepts are related to phenomena, with concomitant measurements. This speaks to the value of the flipped classroom and of POGIL as a way to engage students. Nevertheless, however interested and energetic the instructors who have tried to implement these initiatives in 100+ student classrooms, it is really the small classroom that enables these practices to work well, where the instructor can get in the face of EACH student to ensure he or she is engaged, and to ferret out issues preventing engagement. I’m speaking, of course, from 23 years of high-school teaching with the luxury of 15-25 student classes. And small liberal-arts colleges have this luxury as well. It’s up to chemistry educators to continue to research ways to effectively engage students so they are actively THERE in the large lecture halls of our large public universities.

Finally getting back to mathematics, in my doctoral research I examined thirty-five pamphlets, booklets, paperbacks, small books and textbook chapters and appendices, to see how chem-math was being addressed by other concerned instructors. Of all these I found “Maths for Chemistry: A chemist’s toolkit of calculations,” by Paul Monk and Lindsey Munro (Oxford U. Press, 2nd edition) to be outstanding; written very clearly and the best of the bunch. John and some other participants in this conference have cited various British publications, so I wonder if he is familiar with this fine book. It may not have quite the depth he requests, but it is very thorough.
Besides the dimensional-analysis, algebra (and graphing) review typical of most chem-math primers, it provides three chapters on powers and logs, two on statistics, one on trig, six on differentiation, four on integration, and one each on matrices (including group theory), vectors, and complex numbers. So it seems to me this chem-math text would serve most physical-chemistry instructors well.

(my apology again to Dr. Kilner) The objective of my attention to mathematics for chemistry has been mathematics for chemists, i.e. students proceeding to an academic degree with chemistry as major subject.  I had been unaware of, and am somewhat astonished at, the severity of the mathematical incapability of students of general chemistry, for most of whom the ultimate interests lie elsewhere than in chemistry.  The latter problem evidently requires concerted attention, such as remedial courses for present students and reform of school curricula for future students, and this conference has been addressed mainly to this concern.

"His point regarding mathematics teachers teaching “formal math” and neglecting applications seems to be a fault of mathematics instructors more interested in their own egos than in any collective effort to prepare students for ..."  How other than "interest in their own egos" can one explain the propensity of instructors of general chemistry, following the authors of their selected textbooks, to teach orbitals and electron configurations to students of biology, nursing ... within common courses of general chemistry?  I persist in maintaining that, if those instructors, and the authors, understood the mathematics, they would not teach that material because it is nonsense and irrelevant for chemistry.  Furthermore, is the fact that chemistry teachers teach "formal chem", such as electron configurations, and neglect applications in nursing or biology not the same fault of which mathematics teachers are accused?  Both mathematics and chemistry are academic disciplines in their own rights, and chemistry is a science with an associated chemical industry.

Since I discovered in 1971 the existence of practical 'computer algebra' (IBM Formac), I have devoted efforts first to do my own extensive mathematical calculations for chemical or physical applications with software, and then, as that software developed into its present advanced form, to teach mathematics with that software (progressively Mumath, Reduce, Derive, Maple ... through more than three decades).  For me, mathematics consists not merely of reading a book and scribbling separate calculations with pen on nearby paper but of reading a large computer screen that describes, with sufficient profundity, the concepts, principles and practice, and then that reader implements the appropriate operations with the same software on the same screen.  That scheme underlies my interactive electronic textbook Mathematics for Chemistry, now in its fifth edition, and I respectfully suggest that an analogous design of teaching arithmetic to algebra, with interactive testing built into the content of the lessons, would be an effective pedagogical approach, provided that the students were sufficiently prepared to cope with that software.  Is 'computer-aided instruction' really so novel in the year 2017?  The ratio of students to instructor becomes then not 15 or 25 to 1 but 1 to 1.  This approach would seem to be applicable for remedial purposes of the students of general chemistry.
For further chemistry my electronic textbook might be brought to bear. Dr. Kilner mentioned a book by Monk and Munro that includes various topics; my own electronic textbook includes all those topics and more, with rotatable plots in three dimensions and other pedagogical devices beyond the printed page. I have made no effort to become acquainted with various printed textbooks of mathematics for chemistry; I mentioned that by Sutcliffe and Doggett merely in relation to the discussion of the varied level of mathematics.  I consider the entire concept of the traditional printed static textbook to be obsolescent, although when I read for pleasure I greatly prefer a book in my hands to staring at a computer screen, especially a small one.

Cary raises key points.  The most important thing we could do is to agree on the way forward.  Allow some suggestions, starting with the easiest one. First, the issue with chemistry majors might be best met by a Mathematics for Chemistry course as the terminal math course for chemistry majors, with dropping of Diff Eq or maybe even Calc III.  This is the path that physics and engineering have taken.  It, IMHO, should be some combination of differential equations, linear algebra, statistical analysis and computation.  It could be team taught with maybe the analytical chemists taking the lead on statistics.  I would strongly recommend that it be centered around a symbolic computation system such as Mathematica, Maple, or shudder, MathCAD, the latter because of its ubiquity in engineering, the other two depending on the local license situation.  If we can reach some agreement on this it is something to be brought to the Committee on Professional Training.

Second, the more difficult question is the mathematical preparation for GChem.  Much of the discussion has been about identifying those students who need help.  We might start with a list of tools that have been suggested and perhaps then survey ourselves about which the majority feel are the most useful.  An open discussion, as we have seen, can be scattered. If we can come to agreement on what students need to know and generally how to identify the students who need help, that alone will be useful in discussing remediation with our colleagues (maybe not so much), our chairs, deans and so forth, because it is no longer simply a personal or local opinion but something broader.  Perhaps then the moderators could draft a short paper for J Chem Ed. How to remediate is a much more difficult problem. As Cary says, you go to class with the students you have, not the students you want to have.  A point that recently came up on Twitter is that the first thing a new Assistant Professor needs to know is that they were not the typical student in their GChem class.  As we have heard there is no magic bullet, although, again, I agree with Cary that small classes or recitation sections are key. In closing, thanks to all for their constructive work.

Excellent points were raised during these discussions.   I agree on "As Cary says you go to class with the students you have, not the students you want to have". If I may add, based on my experiences, most of the students are aware of their limitations and are willing to do the extra work to get on the same page with math. A bit of guidance on math is much appreciated by them (especially with commuter/returning students). Perhaps a free online "math for general chemistry" course with short videos on math related to general chemistry topics (maybe on ACS or elsewhere).
Students from diverse math backgrounds can watch these short videos and bring their math up to speed for Gen Chem classes. Every instructor can direct students to the same place. Thank you all for sharing your great pedagogies/ideas.

scerri's picture
Orbitals may not have 'real physical significance' and may indeed be unobservable. Yet they are very useful in rationalizing many aspects of spectroscopy and in chemical education.   Eric Scerri, UCLA Department of Chemistry

Both atomic and molecular orbitals have been observed.  See the recent work of Wilson Ho at UC Irvine.  An earlier perspective is found in Dinse and Pratt, "Orbital Rotation", JACS 104, 2036 (1982).

Rich Messeder's picture
Many ideas in this conference address how to move forward, and I think that they should all be given serious consideration in the immediate future. My perspective is one of finding a core approach that is useful for STEM students, and have that core modified for specific fields (chem, physics, etc.). I reiterate the recommendation for using computers to relieve faculty of the press of supporting many students in low-level reviews. For example, ALL entering STEM students could be enrolled in a computer class that "teaches" those concepts that we want memorized. Students would be //required// to meet minimum performance criteria; for example, be able to enter the answers to multiplication tables through 12s, randomly presented, in some reasonable time (constrained, calculators not permitted), with, say, 95% accuracy. Students who meet the requirement on the first pass have effectively completed the course. To ensure that "the cramming effect" is not active, students who did not pass everything on the first pass would be required to log in and retake the drill|exam periodically (weekly? monthly?) to provide the repetition that we have discussed here. These remarks are just a starting place. I first saw computer-aided teaching at the UI/Urbana campus in the 1970s. Students were very excited to get time on the systems, which were advanced for the day, but primitive by today's standards. Nonetheless, I have been surprised by how little computers have been used for teaching over the decades. Amateur radio operators (I'm one) have used computers for decades to help them commit to memory material necessary to pass the different FCC license level exams. There are many resources available, and perhaps one of the tasks of academia is to sift through those resources and find the best of them to recommend to students. This would be an ongoing process, and it would be nice to have a sort of "clearinghouse" for all of it. This suggestion has the risk that the task will become overwhelming. Faculty from different institutions must be ready to "tolerate" recommendations from various sources. At any rate, for the fundamental material of which we would like to see our students demonstrate mastery, this approach may be useful. The benefit of this approach is that it is easily absorbed by secondary schools, relieving them of the same time-burden, and could shorten the time that it takes to raise the math capabilities of our students. This approach also lends itself to "teaching" basic math tools, such as spreadsheets and basic MATLAB, Maple, etc., programming.

Cary Kilner's picture
Kudos to you and your team for showing how the problem of understanding mathematics in upper-level coursework can be addressed! My original degree was chemical engineering, so I did study much of this mathematics myself (not that I remember it).
Here are some questions for you. I have seen little discussion of this issue in teaching P-Chem, although the problem must exist for many programs and instructors. Why do you suppose it has received so little attention, despite the importance for the conceptual understanding of this material for chem-majors in this very important course? (I know the chem-majors are certainly a very small subset of the gen-chem students we have been discussing in this ConfChem.) Do you think it likely that we have seen a decline in mathematics facility and understanding with even the stronger mathematics students found in the majors track in the past few decades? Is this the result of changes in the way upper-level mathematics is being taught? And do you feel this could also be a reflection of recent changes in earlier mathematics education? Thank you!

An obstacle to giving attention to maths for p-chem may be simply that the usual sequence works OK for other STEM disciplines, but a slightly different set of material (more understanding of linear operators, self-adjoint operators, orthogonal functions, for example) is needed for p-chem, and there are usually not enough chemistry majors for math departments to worry about them. Regarding your question on a decline in understanding of math majors, I can only give my impressions.  One hears growing frustrations with a decline in students' ability in proof, particularly what we call "analysis"--proving the theorems of calculus rigorously.  However, I don't think that that is true for the stronger mathematics students--they seem as strong as ever to me.

Rich Messeder's picture
I looked online at the course texts you referenced, examining the table of contents, and browsing pages where it was permitted. It seemed to me that the scope is such that it would cover 90%+ of applied math for several STEM majors. In a chem-specific context, examples use the vocabulary and semantics of chemistry. But it seems to me that most often college-level intro math courses spend a great deal of time on theory, sacrificing application, so that students walk away from the class able to write proofs, but less able to actually use the math. Much of the math course development that I see represented in these papers seems to be oriented toward practical application, which seems appropriate to me (even for theoretical physics). Did you consult with other STEM departments at your institution? Do you think that your course can be used by other departments with little modification? Physicists and engineers, for example, might want more numerical methods. I noted especially the comment about 3D visualization, which I think is an important element often overlooked at the undergrad level. What aspects of 3D visualization are covered? And +1 for Dr Nelson's question: I am surprised at the fast track to course acceptance. To what do you attribute this success?

It is fairly well understood that prior knowledge is the most predictive variable for success in any course.  I would love to see you develop your anecdotal observations into a research study and attempt to discover where students' level of proficiency is: are they proficient in algebra, pre-calc, calc?  We might all be surprised to see where the problems stem from, and it might not be limited to just mathematics skills.   Discovering levels of proficiency of entering students would be a good starting point.
One of my other questions is how do you get students to participate in the drill, practice, rest, revisit idea without using marks as a reward?  I also believe that getting students into the habit of doing practice before entering college or university will help them be successful beyond secondary or high school.

Rich Messeder's picture
No. And, you might have guessed, I won't be surprised to see research support this answer. I have tried to refrain from writing pages of replies, in order that I not seem too pushy. My experiences in the private sector, especially as an engineering supervisor, and my further experiences teaching at HS and university, suggest strongly to me that the same principles and goals that are appropriate for success in the private sector apply to academia. I have taken management courses over the years that address the psychology of managers:employees::faculty:students, and paid close attention over the years to what works and what does not, to what adds or detracts from me as a leader and as a teacher, in an effort to continually improve myself.

Regarding the suggestion that I might turn my experiences into a research project: There are reasons why that is not likely to occur, though a related project could apply. Why? Time is of the essence. We are wasting our students' time, and doing both them and their employers an injustice, as well as hugely impacting scientific and engineering progress. It is time for well-thought-out action. Some of the research here that I find so useful and relevant is a decade or more old. I admire those institutions that have stepped up to the plate and changed for the better; the majority, it seems to me, have not, and it shows in the quality of students entering STEM courses in university.

For example: In class, I regularly emphasize collaboration on studying and problem-solving, followed by individual writing...and relate specific factual anecdotes from my experiences that reflect dramatic improvements in performance. I roll these out occasionally during the course...Why occasionally? Because it seems to me that students are not up to speed on either of these points (collab & literacy), as evidenced in part by comments from the private sector regarding grads entering the workforce, and, just as we have been saying about repetition in math, repetition in all things results in internalization. I grade my STEM students on literacy, and tell them so the first day of class.

An example of comments from the private sector: I have sat in on many industry "panels" at the undergraduate level. These panels are ostensibly to share industry perspectives, but I often think of them as recruiting opportunities (which is OK, too). Almost invariably, at the end of a panel discussion, some student will ask what the panelists opine that students might take from their undergrad experiences other than strictly academic work (grades, papers, etc.), and almost invariably the panelists reply with "collaboration skills and literacy".

Part of my research: I have personal knowledge of a series of events (circa mid-1980s) at a huge private facility where the consequences of certain kinds of equipment failures posed significant risks to inhabitants of local communities. One day, the computers that monitored all that equipment made a mistake, and declared that something was wrong (everything BUT the computers was working just fine, it later proved). Nonetheless, this spurious fault condition triggered an attempt to activate several safety systems.
One very important system did not activate. Post-game analysis showed that it had a design flaw, and that design flaw had been identified by an engineer earlier. It seemed that the engineer was reviewing systems (for reasons I never knew), and decided that his calculations indicated a design flaw. He wrote it up and passed it to his immediate lead engineer. Well, that certainly should have gotten some attention, eh? But the document was so poorly worded that the lead engineer didn't get the point, and then, not realizing what the problem was, didn't follow up with the author. (Two significant problems: literacy and leadership.) When all this surfaced, the VP of engineering of this very large engineering staff had all the engineers take a literacy exam. All those who failed were tasked with taking remedial classes of the VP's choosing --- on their own time --- with the understanding that those who failed the first exam would be tested ~6 months later. Those who failed a 2nd time would be fired. Scientific research? Not exactly. Message received? Absolutely. Yet, when I mentioned this to a university physics faculty member, he said that literacy was not his concern...that's what the English department is for. But, I opine, standards /there/ are as poor as they are in mathematics, and anyway writing technical papers is very different from writing a paper criticising a novel. (Prior to the event mentioned above, I had already informed my engineers that literacy would be part of their annual review.)

I opine that most US students entering college or university do not meet the reading, writing, and math skills of their forerunners of a few decades ago. They struggle to read challenging material, they struggle to write with any degree of literacy appropriate to their level of education, and they struggle to manage conceptual material in STEM classes because of the issues addressed in this conference. Side note: Sorry if I have already mentioned this: Compare the user manuals for the HP-11C and 15C (on the web) with that for the TI-89. This comment is NOT about the devices themselves, but about user manual content then and now. Sorry, I don't have a reference for older TI calculators. I find the research here, and referenced here, quite valuable, because it gives me something substantial to add to my anecdotes as I continue to work toward improving the quality of US academic life.

A common criticism of courses of mathematics is that the theory is emphasized at the expense of the practice.  The same criticism might be made of general chemistry, in that orbitals and other baggage eventually traceable to fraudulent quantum-mechanical bases are emphasized at the expense of the real basis of chemistry as a practical science; in the latter case, the instructors of general chemistry, merely teaching ill-chosen textbooks, teach that material not because they understand it but because they fail to understand it.  I find it nearly impossible to believe that instructors of mathematics at any level in general exhibit the analogous ignorance, or that the textbooks of mathematics contain a similar proportion of rubbish, because mathematics is much more readily intrinsically testable.  The problem of poor mathematical preparation for general chemistry seems to be ultimately attributable to the failure in learning arithmetic -- multiplication tables et cetera, long before algebra and geometry, let alone calculus, are confronted.
As for strategies applied, within the environment of college or university, to entering students in order to redress accumulated arithmetical or mathematical deficiencies, I admire the recognition of the need, and the practice of remedying those deficiencies, as discussed in this conference. At the level above general chemistry, and under the assumption that the deficiencies noted above have been resolved for the students who advance therefrom, one can then apply various courses of mathematics within chemistry departments, to avoid the excessive emphasis on 'theory' -- theorems, corollaries, lemmas -- in courses taught by mathematicians who have little interest in, or knowledge of, chemical or other applications.  The use of advanced mathematical software in the latter circumstances can be greatly beneficial, but is of no use if the students lack the fundamental skills of arithmetic.

I do agree that this course (with some modification) could be opened up to other departments as an applied math sequence. In fact, CSU does have a Calculus for Biological Scientists sequence as well, which uses in part the text by Erich Steiner. However, that course sequence is two semesters in total, the second of which is not a required course for the major. My biggest concern might be for students wishing to go further in math. Since topics come from a variety of traditional math courses, it is a little bit unclear what other math classes they might take if they chose to continue with math. For example, students have some differential equations, but not a whole course's worth.  Numerical methods, for example, may be more important to other fields, but adding this in would cut other topics in an already full course.  In my opinion, part of the joy of teaching this course was having the chance to engage with students mathematically on topics that they already cared about/felt were valuable to think about. Because all of my students were chemists, I think that keeping chemical examples central helped with student buy-in. Instead of a set of rules to be memorized, math was shown to be useful for thinking about physical systems that interested them.  I know not every school is large enough to be able to support these types of "flavored" math classes. I wonder if some of that feeling of relevance would have disappeared if the applications were varied.  To answer the question on 3D visualization, we spent time learning how to sketch surfaces in 3D, set up and evaluate volume integrals, and did some work with plotting in MATLAB.

I guess it was kind of fast.  We ran the course two years as an experimental course, and in the second year the process was put through to make it a regular course.  I think that we can officially run a course three years as an experimental course.  It was important that the Maths department chair and undergrad director were supportive and that faculty from chemistry were involved with the course design and wanted the course to continue running.  Positive feedback from students was important too. Since we introduced the course, faculty from physics, computer science, and chemical engineering have all expressed interest in switching to this sequence.  The possible obstacle to having physics and computer science join is that we want to keep the focus on p-chem applications.  Also, physics and chemical engineering majors will need to take a differential equations course as well, and that makes for a lot of overlap with the first course in the Maths for Chemists sequence.
I have suggested for chemical engineering that they do the first semester of Maths for Chemists, and then switch back to the normal Calc. III course. I think that even for maths majors a sequence of i) Calculus I (up to the fundamental theorem of calculus), ii) a differential equations-based course, much like the first semester of Maths for Chemists, iii) Calculus III would be a good sequence.  The Calc II coordinator here is interested in taking some of the ideas from the MfC sequence and giving more of a focus on differential equations in Calc. II.

Your paper states one of your goals as: “students should be able to decide if solutions are reasonable with estimation techniques and order of magnitude calculations.” That topic was addressed previously in this conference (in Paper #1) for a course in physical chemistry. How is this done in your program? Are some numeric calculations on graded assignments expected to be done without a calculator? -- rick nelson

I appreciate the careful thought and methodology of teaching estimation in the first paper. In practice, in our program, estimation was a recurring theme throughout the semester, not a topic taught on its own. It was often the response to "Does this answer make sense?" or "Is this what we expected?". I would run through estimations on the board or verbally after computations (often with a calculator) were completed. The accuracy of estimation would depend on the problem; sometimes it would be pretty rough--even just an order of magnitude argument. Hopefully, these checks became part of the routine of problem-solving.  I did not require students to do computations without a calculator. I can see how this forces students to sharpen arithmetic skills.  However, I hope that using estimation to check answers was communicating that even when using a calculator you need to do a "gut check" at the end.

I was the managing author and 2nd listed author of the book The Unified Learning Model (Shell et al.) referred to elsewhere in this conference. Willingham's book (Why Don't Students Like School?) is an excellent, readable summary of where educational psychologists are today in terms of their views on learning. IF that book has a shortcoming, it is that reading it will suggest to you that students actually DO like school. Of course, that would not be the title of a best seller. Two of my colleagues have joined me in writing a newer book rooted in information theory. This book is available at: The new book is edited periodically. A new round of edits will be posted before the end of the year. The edits are maintained such that readers of earlier editions are directed to the new changes.  The book has four sections. First is the general theory. Next are the applications. Third are elements of the basic underpinning science (such as EEG or studies of snails). Last are the enumerated edits. The book was first posted in August 2015. The advantages of a Web-based book include the opportunity to edit based on new information and the ability to link directly to multimedia. For example, check out: Interesting--concentrating on other things makes the details hard to see!

Dr. Brooks – It is rare to find a single source on the “science of learning for educators” that is both comprehensive and up-to-date. Your “Minds, Models, and Mentors,” at the link you provided, I think is Number One.
For instructors in “science-major chemistry” with its focus on well-structured problem solving, my personal sequence of “recommended reading,” from short and simple to more comprehensive, would include:
1. Four pages on fundamentals of how the brain solves problems in the section “The Human Brain – Learning 101,” on pages 8-11 at
2. Eight pages on problems in the physical sciences and math on pages 4-2 to 4-10 of The Report of the Task Group on Learning Processes in the Final Report of the National Mathematics Advisory Panel (NMAP) at  The section on “automaticity” is especially important in helping students solve scientific calculations.
3. The book Make It Stick by Brown, Roediger, and McDaniel (2014) describing specific study strategies such as retrieval and interleaved practice, summary sheets, and elaboration.
4. Your “Minds, Models, and Mentors” as a comprehensive summary of the brain’s structure and its impact on learning. -- rick nelson

Cary Kilner's picture
Thank you for your contribution to the ConfChem. You may recall that we met at Princeton in 1984 at the Woodrow Wilson Dreyfus Master Teachers Institute, where our charge was periodicity and descriptive chemistry. That one month was an outstanding experience for me; it kick-started my career as a chemistry teacher (I had been a professional musician after college), and reinvigorated my love of DOING chemistry, and not just talking about it. It encouraged me to continue to develop demonstrations, which I eventually used to help teach chem-math to apprehensive students. As I recall you were editor of an ACS publication, and came for a week to participate as a leader. I don’t believe it was Chem-Matters, but maybe so. Please refresh my memory. I ordered a class set and used it for 20 years. My students looked forward to it coming every few months and loved reading it. I had them read back issues as well as the four that came each year. Were you interested in cognitive science at that time? I will certainly check out your references.
Probabilistic integrals: mathematical aspects
From Scholarpedia
Sergio Albeverio et al. (2017), Scholarpedia, 12(5):10429. doi:10.4249/scholarpedia.10429 revision #182435
Curator: Zdzislaw Brzezniak

Integrals over spaces of paths or more generally of fields have been introduced as heuristic tools in several areas of physics and mathematics. Mathematically, they should be understood as extensions of finite dimensional integrals suitable to cover the applications the heuristic path integrals were originally thought for. Other names in use are: functional integrals, infinite dimensional integrals, field integrals. Feynman path integrals (or functionals) and Wiener path integrals (or integrals with respect to Wiener-type measures) are special cases. In probability the concept of flat integral (or integrals with respect to a flat measure) also occurs. A particular realization of Gaussian path integrals is given by "white noise functionals". In the present article, the mathematical theory of path integrals of probabilistic type, such as Wiener path integrals, will be presented, while the theory and the applications of path integrals of Feynman's type are presented in another article of Scholarpedia: Path integral: mathematical aspects. In fact, a general setting is presented, having in mind mainly stochastic processes taking values in finite dimensional spaces and their applications. The case of the infinite dimensional processes related to random fields and stochastic partial differential equations will be discussed under another heading.

Stochastic processes and probability measures on spaces of paths

We can look at the theory of path integrals, i.e. of integrals on function spaces, of probabilistic type from two different, but equivalent, points of view. On the one hand, for the readers who are more familiar with infinite dimensional analysis, one should introduce a function space $\Gamma$, endowed with a $\sigma$-algebra ${\cal A} (\Gamma)$ and, given a $\sigma$-additive positive measure $\mu$ with $\mu (\Gamma)=1$, define the integral of a bounded measurable function $f:\Gamma\to{\mathbb C}$ as the (absolutely convergent) Lebesgue integral $\int_\Gamma f(\gamma) d\mu(\gamma)$. On the other hand, for the readers who are more familiar with a probabilistic language, one could introduce a stochastic process $X=(X_t)_{t\in I}$, i.e., a family of random variables (i.e. measurable maps) $X_t:\Omega\to {\mathbb R}$ indexed by the elements $t$ of an interval $I\subset {\mathbb R}$, defined on a common probability space $(\Omega, {\cal A},{\mathbb P})$, where $\Omega$ is a nonempty set, ${\cal A}$ is a $\sigma$-algebra of subsets of $\Omega$ and ${\mathbb P}$ is a probability measure on ${\cal A}$. If, more generally, the random variables $X_t$ have range in a measurable space $(E, {\cal A}')$ instead of ${\mathbb R}$, then the stochastic process is said to have state space $E$. As examples we can take $E={\mathbb R}^d$, or $E$ a finite dimensional manifold, or $E$ an infinite dimensional space with an appropriately chosen $\sigma$-field ${\cal A}'$ (usually the Borel $\sigma$-field). As anticipated, the two approaches are deeply related.
Given a stochastic process $\{X_t\}_{t\in I}$ on $(\Omega, {\cal A},P)$ with (a "suitably regular") state space $E$, it is possible to construct a probability measure $\mu$ on the set $\Gamma=E^I$ of all functions $\gamma:I\to E$, called the "sample paths" of the process, equipped with the $\sigma$-algebra ${\cal A}(\Gamma)$ generated by the sets of the form \begin{equation}\tag{1}B(t_1,...,t_n;B_1,...,B_n):=\{\gamma \in \Gamma : \gamma (t_1)\in B_1, ...,\gamma (t_n)\in B_n\}, \quad \hbox{for some }\, t_1, t_2,...,t_n\in I, \,B_1,B_2,...,B_n\in {\cal A}'. \end{equation} The measure $\mu$ is defined on the sets of the form (1) as $$\mu (B(t_1,...,t_n;B_1,...,B_n)):=P(\{\omega\in \Omega : X_{t_{1}}(\omega )\in B_1,...,X_{t_{n}}(\omega )\in B_n\}).$$ Conversely, any probability measure $\mu $ on $(\Gamma\equiv E^I, {\cal A}(\Gamma))$ gives rise to a stochastic process $X_t$, $t\in I$, on the probability space $(\Omega, {\cal A}, P)=(\Gamma, {\cal A}(\Gamma), \mu)$, where $\gamma\in \Gamma$, $ t \in I$ and $X_t(\gamma):=\gamma (t)$. A natural generalization of the concept of stochastic process is the random field, i.e., a family of random variables $\{X_y\}_{y\in Y}$ indexed by the elements of a set $Y$, which replaces the time interval $I\subset{\mathbb R}$.

The historical development of probabilistic path integrals

The advent, at the beginning of the last century, of Borel and Lebesgue's development of measure and integration theory for functions on finite dimensional spaces opened the way, on the one hand, to abstract integration theory and, on the other hand, to measure and integration theory on infinite dimensional spaces, which just started to be systematically studied at that time (the impulses coming here from other topics, like the calculus of variations). N. Wiener in the early 1920s made the crucial discovery of a natural measure on the space of continuous paths, connecting it with the description of physical Brownian motion studied (via heuristic limits from symmetric random walks) by Einstein and Smoluchowski about 15 years before. Precursors of such limits of random walks are to be found both in the statistical astronomical work by Thiele (in the 1870's) and in Bachelier's studies in finance (1900). Kolmogorov (1933), in the development of the axiomatic foundations of probability theory and the theory of stochastic processes, proved his fundamental theorem on the construction of measures on infinite dimensional spaces. This construction is in terms of limits of projective systems of probabilities on product spaces. This, complemented by a support theorem of Kolmogorov and Chentsov, yields the Wiener measure as a particular case, as well as the case of product measures on general spaces. Kolmogorov's theorem has also become the basis for vast extensions; let us mention in particular the one provided by the Minlos theorem, which covers the cases of distributional spaces (which are natural in many applications, e.g., in physics and engineering). L. Gross' theory of abstract Wiener spaces provides another important extension, to Hilbert and Banach spaces, close to the spirit of Wiener's space construction and thus also suitable for the development of infinite dimensional stochastic analysis. Among the most studied classes of measures on infinite dimensional spaces, let us mention Gaussian measures, connected via the Wiener-Itô-Segal isomorphism with Fock spaces (of great relevance in quantum field theory).
Also of importance are other measures associated with stochastic processes and corresponding to projective systems constructed from Markov kernels (see the section below and the ones on diffusion processes and Lévy processes). These were developed starting from around 1940 (in the work by Bernstein, Kakutani, Itô, Skorohod, Doob).

Measures on infinite dimensional spaces

From a mathematical point of view, the implementation of an integration theory of Lebesgue type on a space of paths, i.e., on an infinite dimensional space, in terms of a \(\sigma\)-additive measure is a nontrivial task. Contrary to the case of a finite-dimensional Euclidean space, it is impossible to construct a nontrivial Lebesgue-type measure on an infinite dimensional Hilbert space ${\mathcal H}$, i.e., a regular Borel measure which is invariant under rotations or translations. Indeed, the assumption of the existence of a \(\sigma\)-additive measure \(\mu\) with Euclidean invariance properties, which assigns a positive finite measure to all bounded open sets, leads to a contradiction. In fact, by taking an orthonormal system \(\{e_i\}_{i\in\N}\) and by considering the open balls $B_i=\{ x\in {\mathcal H}, \Vert x-e_i\Vert<1/2\}$, one has that they are pairwise disjoint and their union is contained, e.g., in the open ball $B(0,2)=\{ x\in {\mathcal H}, \Vert x\Vert<2\}.$ By the Euclidean invariance of the Lebesgue-type measure \(\mu\) one can deduce that \(\mu (B_i)=a\), \(0<a<+\infty\), for all \(i\in{\mathbb N}\). By the \(\sigma\)-additivity one has \[ \mu(B(0,2))\geq\mu(\cup_iB_i)=\sum _i\mu (B_i)=+\infty, \] but, on the other hand, \(\mu(B(0,2))\) should be finite as \(B(0,2)\) is bounded. It is interesting to point out that this argument also forbids the existence of a standard Gaussian measure on any infinite-dimensional Hilbert space, because of its rotational invariance. The study of this problem led to the development of the theory of abstract Wiener spaces (see Gross (1967), Kuo (1975)).

As a matter of fact the argument above can be generalized to the case where the Hilbert space $\mathcal H$ is replaced by an infinite-dimensional topological vector space $X$. Let us denote by $X^*$ the algebraic dual of $X$ and by $R$ a linear subspace of $X^*$. Let ${\cal B}_R$ be the smallest $\sigma$-algebra on $X$ in which any $\xi\in R$ is measurable. The pair $(X,{\cal B}_R)$ turns out to be a measurable additive group, in the sense that the map $T:X\times X\to X$ defined as $T(x,y):=x-y$ is measurable from $(X\times X, {\cal B}_R\times {\cal B}_R) $ to $(X,{\cal B}_R)$. By applying the theory of invariant (Haar) measures on topological groups (see Yamasaki (1985)), we obtain that there cannot exist a nontrivial translation-invariant measure on $(X,{\cal B}_R)$.

The first nontrivial example of a Gaussian probability measure on the Banach space $C[0,t]$ of continuous functions on the interval [0,t] ("paths") was provided by Norbert Wiener in the years 1920-1923. Wiener constructed a stochastic process $X\equiv (X _t)_{t\geq 0}$, which is nowadays called the "Wiener process" or "mathematical Brownian motion", as it provides a mathematical model for the physical Brownian motion studied by A. Einstein and M. Smoluchowski between 1905 and 1909. This also gives a rigorous realization of the process introduced by L. Bachelier (1900) to describe price evolution in finance.
Kolmogorov theorem

The main tool for the construction of probability measures on infinite dimensional spaces starting from their finite dimensional approximations is the celebrated Kolmogorov existence theorem. It was originally stated and proved by Kolmogorov in the case of measures on the space $\Omega={\mathbb R}^{I}$ of real valued functions (called paths) defined on an interval $I$ of the real line and later generalized by S. Bochner to projective limit spaces. In the following we shall present a version which is sufficiently general for our purposes. Let $(E, {\cal A}')$ be a measurable space. We shall assume $E$ to be a Polish space, i.e., a topological space with a countable basis and topology derived from a complete metric, while $ {\cal A}'$ will be the Borel $\sigma$-algebra ${\cal B}(E)$ on $E$. In the most common applications, $E$ will be ${\mathbb R}^d$ or a $d$-dimensional Riemannian manifold $M$, or a separable real Hilbert space. Let ${\cal F } (I)$ be the set of all finite subsets of the interval $I\subset {\mathbb R}$, endowed with the partial order relation $\leq$ defined by $J\leq K $ if $J\subseteq K$. The set ${\cal F } (I)$ has the structure of a directed set, in the sense that for any $J,K\in{\cal F } (I)$ there is an $H\in {\cal F } (I)$ such that $J\leq H$ and $K\leq H$. Given a $J\in {\cal F } (I)$, with $J=\{t_1,t_2,...,t_n\}$, $0\leq t_1<t_2<...<t_n$, $t_i\in I$, $i=1,..., n$, let us consider the set $E^J$ of all maps from $J$ to $E$. An element of $E^J$ is an $n$-tuple $(\gamma(t_1),\gamma(t_2),...,\gamma(t_n) )\in E^n$, $n$ being the cardinality of $J$ (clearly, $E^J$ is naturally isomorphic to $E^n$). Let us consider on $E^J$ the product topology and the Borel $\sigma$-algebra ${\cal B}(E^J)$. Furthermore, let us consider the projection $\Pi_J:E^I\to E^J$ which assigns to each path $\gamma\in E^I$ its restriction to the set $J$, namely, if $J=\{t_1,t_2,...,t_n\}\subset I$, then $$\Pi_J:E^I\ni \gamma\mapsto (\gamma(t_1),\gamma(t_2),...,\gamma(t_n) )\in E^{J}.$$ Let us focus on the cylinder sets, i.e., the subsets of $\Omega=E^I$ of the form $\Pi_J^{-1}(B_J)$ for some $J\in {\cal F } (I)$ and some Borel set $B_J\in {\cal B}(E^J)$. Let ${\cal C}$ denote the set of all cylinder sets, and let ${\cal A}$ be the $\sigma$-algebra generated by ${\cal C}$. Given a measure $\mu$ on $(\Omega, {\cal A})$, for any $J\in {\cal F } (I)$ one defines a measure $\mu_J$ on $(E^J, {\cal B}({E}^J))$ as $\mu_J:=\Pi_J(\mu)$, i.e., $$\mu_J(B_J):=\mu(\Pi_J^{-1}(B_J)), \qquad B_J\in {\cal B}({E}^J).$$ Given two elements $J,K\in {\cal F} (I)$, with $J\leq K$, let $\Pi_J^K:E^K\to E^J$ be the projection map, which is continuous and hence Borel measurable. By construction, the measures $\mu_J$ on $(E^J, {\cal B}(E^J))$ and $\mu_K$ on $(E^K, {\cal B}(E^K))$ are related by the equation $\mu_J=\Pi_J^K(\mu_K)$, that means \begin{equation}\tag{2}\mu_J(B_J):=\mu_K((\Pi_J^K)^{-1}(B_J)), \qquad B_J\in {\cal B}(E^J),\end{equation} as one can verify by means of the equation $\Pi_J=\Pi^K_J\circ\Pi_K$. A family of measures $\{\mu_J\}_{J\in {\cal F}(I)}$ satisfying the compatibility condition (2) is called a projective family of measures. In the case of probability measures, the converse is also true. Indeed, according to the Kolmogorov existence theorem, given a family of probability measures $\{\mu_J\}_{J\in {\cal F}(I)}$ satisfying the compatibility condition (2), there exists a unique probability measure $\mu$ on $({E}^{I},{\cal A})$ such that for any $J\in {\cal F}(I) $ $\mu_J=\Pi_J(\mu)$.
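As a concrete illustration of the compatibility condition (2), the following short numerical sketch (Python with NumPy assumed; the function name is illustrative, not part of the theorem) builds the finite dimensional Gaussian distributions of a Brownian motion started at $0$ on a time set $K=\{t_1,t_2,t_3\}$ and checks that projecting out the middle time reproduces the distribution on $J=\{t_1,t_3\}$; for centered Gaussian families this amounts to extracting the corresponding sub-block of the covariance matrix.

```python
import numpy as np

# Finite dimensional distributions of Brownian motion started at 0:
# mu_J is the centered Gaussian on R^|J| with covariance C[i, j] = min(t_i, t_j).
def bm_covariance(times):
    t = np.asarray(times, dtype=float)
    return np.minimum.outer(t, t)

K = [0.5, 1.0, 2.0]          # K = {t1, t2, t3}
J = [0.5, 2.0]               # J = {t1, t3}, a subset of K

C_K = bm_covariance(K)
C_J = bm_covariance(J)

# Marginalizing a centered Gaussian over the middle coordinate keeps the
# sub-covariance of the remaining coordinates, i.e. Pi_J^K(mu_K) has
# covariance given by C_K restricted to the indices of J.
idx = [K.index(s) for s in J]
C_K_restricted = C_K[np.ix_(idx, idx)]

print(np.allclose(C_J, C_K_restricted))   # True: the family {mu_J} is projective
```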
In other words, it is possible to construct a measure on the (infinite dimensional) space of paths $\Omega ={E}^{I}$ by means of its finite dimensional approximations. The measure $\mu$ described by the Kolmogorov theorem is called the projective limit of the projective family $\{\mu_J\}$. A cylinder function is a function $f:{E}^I\to {\mathbb C}$ of the form $f(\gamma)=F (\gamma(t_1),\gamma(t_2),...,\gamma(t_n) )$, for some $t_1,...,t_n\in I$ and a Borel function $F:E^n\to {\mathbb C}$. Clearly, a cylinder function is measurable with respect to the $\sigma$-algebra ${\cal A}$ generated by the cylinder sets. Given a measure $\mu$ on $E^I$ constructed as the projective limit of a projective family $\{\mu_J\}$, the (path) integral of a cylinder function $f$ with respect to the measure $\mu$ can be computed as $$\int_{{E}^I}f(\gamma)d\mu (\gamma)=\int_{{E}^n}F(x_1,...,x_n)d\mu_J (x_1,...,x_n), \qquad J=\{t_1,...,t_n\}.$$ By adopting a probabilistic language and introducing the stochastic process $X=(X_t)_{t\in I}$, defined by $X_t\equiv \gamma(t)$, on $\Omega\equiv E^I$, the Kolmogorov theorem states that the process is completely determined by its finite dimensional distributions, namely by the quantities $$P(\{X_{t_1}\in B_1, ...,X_{t_n}\in B_n\})=\mu_J(B_1\times ...\times B_n), \qquad J=\{t_1,...,t_n\}, B_i\in {\cal B}(E), i=1,...,n.$$ The expected value of a measurable function $f:\Omega \to {\mathbb C}$ is defined to be the integral ${\mathbb E}[f(X)]=\int_{{E}^I}f(\gamma)d\mu (\gamma)$. A different type of extension theorem, which does not need any topological assumption on the state space $E$, has been proved by Ionescu-Tulcea (1949).

Construction of Markov stochastic processes via Markov kernels

The Kolmogorov theorem provides a tool for the construction of probability measures on the space of paths $E^I$, once a projective family of measures $\{\mu_J\}_{J\in {\cal F}(I)}$ is given. We are in a position to show an important method for the construction of a projective family $\{\mu_J\}_{J\in {\cal F}(I)}$, which, incidentally, anticipates the connection between probability theory (in particular Markov processes) and parabolic equations associated to second order elliptic operators (namely, the theory of Markov semigroups). Thinking of a probability measure on $E^I$ as a description of the possible trajectories of a point particle moving in the topological space $E$, an important object is the transition probability $P_{t_{j}, t_{j+1}}(x,B)$ that the particle, starting at time $t_j$ at the point $x\in E$, reaches at time $t_{j+1} $ the set $B\in {\cal B}(E)$. If this probability depends only on $t_{j}, t_{j+1},x$ and $B$, but not on the trajectory up to time $t_j$, namely if the particle has no "memory", the Markov property is satisfied. Assuming that, given an intermediate time instant $t_k$ with $t_{j}<t_k< t_{j+1}$, the random motion during the interval $[t_j,t_k]$ is independent of the motion during $[t_k,t_{j+1}]$, the following relation, called the Chapman-Kolmogorov equation, holds \[ \tag{3} P_{t_j, t_{j+1}}(x,B)=\int_E P_{t_j, t_{k}}(x,dx')P_{t_{k}, t_{j+1}}(x', B) , \qquad \forall x\in E, B\in {\cal B}(E), t_{j}<t_k< t_{j+1}.\] A Markov kernel is a map $P:E\times {\cal B}(E)\to [0,1]$ satisfying the following conditions:
1. the map $x \in E\mapsto P(x,B)$ is measurable for each $B\in {\cal B}(E)$;
2. the map $B\in {\cal B}(E) \mapsto P(x,B)$ is a probability measure on $ {\cal B}(E)$ for each $x\in E$.

A family $\{P_{t,s}\}$ of Markov kernels indexed by $(s,t)\in I^2$ such that $s\leq t$, satisfying the condition (3), is called a Markov transition function. Given a Markov transition function $\{P_{t,s}\}$, $s\leq t$, and a probability measure $\nu_0$ on $(E,{\cal B}(E))$ (which plays the role of an initial distribution), it is possible to build up a projective family of probability measures $\{\mu_J\}_{J\in {\cal F}(I)}$ as \begin{equation}\tag{4} \mu_J(B_J)=\int_{E\times B_J}P_{t_n,t_{n-1}}(x_{n-1},dx_n)\dots P_{t_2,t_1}(x_1,dx_2) P_{t_1,t_0}(x_0,dx_1)d\nu_0(x_0) \end{equation} where $J=\{t_1,t_2,...,t_n\}$, with $t_0<t_1<t_2<...<t_n<+\infty$, and $B_J\in {\cal B}(E^n)$. The compatibility condition (2) is a direct consequence of Eq. (3). Due to the Kolmogorov existence theorem, there exists a unique probability measure $\mu$ on the space of paths $E^I $ such that $$\mu(\{\gamma \in E^I : (\gamma(t_1),\gamma(t_2),...,\gamma(t_n))\in B_J\})=\mu_J(B_J).$$ The path integral of a cylinder function $f:E^I\to {\mathbb C}$ of the form $f(\gamma)=F(\gamma(t_1),\gamma(t_2),...,\gamma(t_n))$, for some Borel measurable $F:E^n\to {\mathbb C}$, is defined to be $$\int _{E^I}f(\gamma)d\mu (\gamma) =\int _{E^{n+1}}F(x_1, ..., x_n)P_{t_n,t_{n-1}}(x_{n-1},dx_n)\dots P_{t_2,t_1}(x_1,dx_2) P_{t_1,t_0}(x_0,dx_1)d\nu_0(x_0).$$ The associated stochastic process $X=(X_t)_{t\in I}$ on $(E^I, {\cal A},\mu)$ is called a Markov process. If the initial distribution $\nu_0$ is the Dirac point measure at $x\in E$, then the process is said to start at $x$. The expectation with respect to the associated probability measure is denoted by ${\mathbb E}^x$. The kernels $\{P_{t,s}\}$, $s\leq t$, are the transition probabilities of $X_t$, i.e., $$P_{t,s}(x,B)=P(X_t\in B \,|\, X_s=x)={\mathbb E}^x[{\bf 1}_{X_t^{-1}(B)}].$$

Stochastic processes with stationary and independent increments: Lévy processes

If $I=[0,+\infty)$ and for any $t,s\in I$, $s\leq t$, and $x\in E$ the transition probabilities $ P_{t,s}(x,\, \cdot \,)$ depend only on the difference $t-s$, then the stationarity property is satisfied and $P_{t, s}(x,\, \cdot \,)\equiv P_{t-s}(x,\, \cdot \,)$, where $P_t(x,\, \cdot \,):=P_{t,0}(x, \, \cdot \,)$ by definition. From a rigorous mathematical point of view, the set of transition probabilities is represented by a semigroup of Markov kernels, i.e., a family of Markov kernels $\{P_t\}_{t\geq 0}$ satisfying the semigroup property $P_{t+s}(x,B)=\int_E P_t(x,dx')P_s(x', B)$ for all $t,s\in {\mathbb R}_+$, $x\in E$, $B\in {\cal B}(E)$.
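For the Gaussian (heat) kernel on ${\mathbb R}$ the semigroup property can be checked numerically. The sketch below (Python with NumPy/SciPy assumed; purely illustrative, with a simple trapezoidal quadrature standing in for the exact integral over the intermediate point) compares $P_{t+s}(x,B)$ with $\int_{\mathbb R} P_t(x,dx')P_s(x',B)$ for an interval $B=[a,b]$.

```python
import numpy as np
from scipy.stats import norm

# Heat (Gaussian) transition density p_t(x, y) = (2*pi*t)^(-1/2) exp(-|x-y|^2 / (2t)).
def p(t, x, y):
    return norm.pdf(y, loc=x, scale=np.sqrt(t))

t, s, x = 0.7, 0.3, 0.2
a, b = -1.0, 1.0                      # the set B = [a, b]

# Left-hand side: P_{t+s}(x, B).
lhs = norm.cdf(b, loc=x, scale=np.sqrt(t + s)) - norm.cdf(a, loc=x, scale=np.sqrt(t + s))

# Right-hand side: integral over the intermediate point x' of p_t(x, x') * P_s(x', B),
# approximated on a wide grid by the trapezoidal rule.
xp = np.linspace(-10.0, 10.0, 4001)
P_s_B = norm.cdf(b, loc=xp, scale=np.sqrt(s)) - norm.cdf(a, loc=xp, scale=np.sqrt(s))
rhs = np.trapz(p(t, x, xp) * P_s_B, xp)

print(lhs, rhs)    # the two numbers agree up to quadrature error
```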
Given a semigroup of Markov kernels, the Kolmogorov theorem provides a unique probability measure $\mu $ on the space of paths $E^I $ such that for all $J=\{t_1,t_2,...,t_n\}$, with $0<t_1<t_2<...<t_n<+\infty$, and $B_J\in {\cal B}(E^n)$, \begin{equation}\tag{5}\mu(\{\gamma \in E^I : (\gamma(t_1),\gamma(t_2),...,\gamma(t_n))\in B_J\})=\int_{ B_J}P_{t_n-t_{n-1}}(x_{n-1},dx_n)\dots P_{t_2-t_1}(x_1,dx_2) P_{t_1}(0,dx_1).\end{equation} The path integral of a cylinder function $f:E^I\to {\mathbb C}$ of the form $f(\gamma)=F(\gamma(t_1),\gamma(t_2),...,\gamma(t_n))$, for some Borel measurable $F:E^n\to {\mathbb C}$ and $\{t_1,t_2,...,t_n\}\subset I$ with $0<t_1<t_2<...<t_n$, is defined to be \begin{equation}\tag{6}\int _{E^I}f(\gamma)d\mu (\gamma) =\int _{E^{n}}F(x_1, ..., x_n)P_{t_n-t_{n-1}}(x_{n-1},dx_n)\dots P_{t_2-t_1}(x_1,dx_2) P_{t_1}(0,dx_1).\end{equation} The stochastic process $X=\{X_t\}_{t\in I}$ on $\Omega =E^I$ naturally associated to the measure $\mu$ by setting $X_t(\gamma):=\gamma (t)$, $\gamma \in \Omega$, $t\in I$, has, by construction, independent and stationary increments on the probability space $(\Omega, {\cal A},\mu)$. Indeed, for any $s,t\in I$, the random variables $X_t-X_s$ and $X_{t-s}$ have the same distribution. Moreover, by the explicit form of the finite dimensional distributions of the process (see Eq (5)), for any $t_1,...,t_n\in I$ with $t_1<t_2<...<t_n$, the $n-1$ random variables $X_{t_2}-X_{t_1}, X_{t_3}-X_{t_2}, ...,X_{t_n}-X_{t_{n-1}}$ are independent. If, furthermore, the measures $P_t$ converge weakly as $t\downarrow 0$ to the Dirac $\delta $ measure, i.e., for all continuous bounded functions $f:E\to {\mathbb R}$ one has $\lim_{t\to 0} \int _E f(y)P_t(x,dy)=f(x)$, then it is possible to prove (see e.g. Applebaum (2009)) that the process $X_t$ has the following continuity property: \begin{equation}\tag{7}\lim_{t\to s}P(|X_t-X_s|>a)=0, \qquad \forall a>0, t,s \in I. \end{equation} Such a stochastic process is called a Lévy process. An elegant characterization of Lévy processes on ${\mathbb R}^d$ is provided by the Lévy-Khinchine formula, which describes the general form of the characteristic function of the process, namely of the map $x\mapsto {\mathbb E}[e^{ix \cdot X_t}]$, $x\in {\mathbb R}^d$. Indeed, there exists a vector $b\in {\mathbb R}^d$, a positive semi-definite symmetric $d\times d$ matrix $A$ and a Borel measure $\nu_L$ on ${\mathbb R}^d\setminus \{0\}$ satisfying $\int_{{\mathbb R}^d\setminus \{0\}}\min(1, |y|^2)\nu_L(dy)<\infty$, such that for any $t\geq 0$ and $x\in {\mathbb R}^d$, $${\mathbb E}[e^{ix\cdot X_t}]=e^{t\eta (x)},$$ where $$\eta(x)=i b\cdot x -\frac{1}{2}x\cdot Ax+\int_{ {\mathbb R}^d\setminus \{0\} }[e^{ix\cdot y }-1-ix\cdot y\,{\bf 1}_{B(0,1)}(y)]\nu_L(dy).$$ Here $B(0,1)=\{y\in {\mathbb R}^d \colon |y|<1\}$.

The Wiener measure and the Wiener process

Let us focus on the semigroup of Markov kernels on ${\mathbb R}^d$ defined by $P_t(x,B)=\int_B(2\pi t)^{-d/2}e^{-\frac{|x-y|^2}{2t}}dy$, $t>0$, for $ B\in {\cal B}({\mathbb R}^d)$, and $P_0(x,dy)=\delta_x(dy)$. The resulting probability measure $\mu $ on the space of paths ${\bf q} :[0,T]\to {\mathbb R}^d$ is called the Wiener measure and the associated process is denoted by $W_t$ and called the Wiener process (or mathematical Brownian motion), as it provides a mathematical model for the physical Brownian motion studied by A. Einstein and M. Smoluchowski (see, e.g. Albeverio (1997)). The density $p_t$ of $P_t$, $t>0$, with respect to Lebesgue measure, i.e.
$p_t(x,y)=(2\pi t)^{-d/2}e^{-\frac{|x-y|^2}{2t}}$, $x,y\in {\mathbb R}^d$, is called the heat kernel (on ${\mathbb R}^d$). The measure of a cylinder set of paths ${\bf q}:[0,T]\to {\mathbb R}^d$ (or equivalently the probability $P(W _{t_1}\in B_1,\dots ,W _{t_n}\in B_n)$ that the process is at time $t_i$ in the Borel subset $B _i\in {\cal B}({\mathbb R}^d)$), $i=1, ...,n$, $0<t_1<...<t_n$, namely Eq. (5), becomes \begin{equation}\tag{8} \mu (\{ {\bf q}(t_1)\in B_1,..., {\bf q}(t_n)\in B_n\})= \int_{B_n}\cdots\int_{B_1} \left((2\pi)^n (t_{n}-t_{n-1})\ldots (t_{1}-t_0) \right)^{-d/2}e^{-\frac{1}{2}\sum_{j=0}^{n-1}\frac{\vert x_{j+1}-x_j\vert^2}{t_{j+1}-t_j}} dx_1\ldots dx_n, \end{equation} where $t_0\equiv 0$, $x_0\equiv x$. By introducing (using the notation of Path integral) "piecewise linear paths" ${\bf q}_c :[0,T]\to {\mathbb R}^d$, such that ${\bf q}_c (t_j)=x_j$ and ${\bf q}_c (\tau)$ for $\tau \in [t_j,t_{j+1}]$ coincides with the constant velocity path connecting $x_j$ with $x _{j+1}$: $${\bf q}_c (\tau)=\sum_{j=0}^{n-1}{\bf 1}_{[t_j, t_{j+1}]}(\tau)\Big( x_j+\frac{x_{j+1}-x_j}{t_{j+1}-t_j}(\tau-t_j)\Big),\qquad \tau\in [0,t_n],$$ the exponent on the right hand side of (8) is equal to the classical action functional $S_0({\bf q})\equiv\frac{1}{2}\int _0^T \vert\dot {\bf q} (\tau)\vert ^2d\tau $ evaluated along the path ${\bf q}_c$, i.e., $S_0({\bf q}_c)\equiv\frac{1}{2}\sum_{j=0}^{n-1} \Big| \frac{x_{j+1}- x _j}{t_{j+1}- t _j}\Big|^2 (t_{j+1}- t _j)$. By setting $Z\equiv ((2\pi)^n (t_n-t_{n-1})\ldots (t_1-t_0))^{\frac{d}{2}}$, and $d {\bf q}=dx_1...dx_n$, Eq. (8) can be heuristically written as $Z^{-1}\int e^{-S_0({\bf q})}d {\bf q}$, obtaining the intuitive formula (7) in Path integral. It has the interpretation of a "path integral" on the space of polygonal paths with respect to the measure $Z^{-1}e^{-S_0 ({\bf q})}d {\bf q}$. The symbol $d {\bf q}$ can be regarded as a "flat measure" on ${\mathbb R}^{nd}$, $Z$ a normalization constant and the term $e^{-S_0 ({\bf q})}$ as a Gibbs factor (in the sense of statistical mechanics, as $S_0$ is an action functional). In fact, the Wiener measure defined originally on the whole path space $({\mathbb R}^d)^{[0,+\infty)}$ is supported on the space $C([0,+\infty),{\mathbb R}^d)$ of continuous paths. Moreover, the Wiener measure is supported, for each $\theta <1/2$, on the space of $\theta$-Hölder continuous paths. This is because for any $p\geq 2$ there exists a $c_p>0$ such that $${\mathbb E}\left[|W_t-W_s|^p\right]\leq c_p|t-s|^{p/2}, \quad t,s\geq 0.$$ Within an abstract formulation, the Kolmogorov-Chentsov theorem establishes that if a process $X=(X_t)_{t\in [0, +\infty)}$ defined on a probability space $(\Omega ,{\cal F},{\mathbb P})$ satisfies \begin{equation}\tag{9}{\mathbb E}[|X_t-X_s|^\alpha]\leq c|t-s|^{1+\beta}\end{equation} for some $\alpha>0$, $\beta>0$ and $c>0$, then there exists another process $\tilde X=(\tilde X_t)_{t\in [0, +\infty)}$ such that $X_t=\tilde X_t$ ${\mathbb P}$-almost surely for all $t\in [0,+\infty)$, and every trajectory of the process $\tilde X$ is $\theta$-Hölder continuous for every $\theta \in (0, \beta/ \alpha)$. Furthermore, in the case of the Wiener process $W_t$, it is possible to prove that the sample paths are almost surely nowhere differentiable. For a detailed description of the Wiener process see, e.g., Revuz and Yor (1999).

Applications of the Wiener measure

In the following we present some suggestive probabilistic representations of solutions of partial differential equations.
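The finite dimensional densities (8) factorize into independent Gaussian increments, which gives an elementary way to sample the Wiener process at a finite set of times. The sketch below (Python/NumPy assumed, illustrative only) draws paths on a grid of times in $d=1$ and checks the second moment ${\mathbb E}[|W_t-W_s|^2]=|t-s|$, the case $p=2$ of the bound above (with equality in one dimension).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_wiener_paths(times, n_paths, x0=0.0):
    """Sample W at the given ordered times using independent Gaussian increments,
    i.e. directly from the finite dimensional distributions (8)."""
    times = np.asarray(times, dtype=float)
    dt = np.diff(times, prepend=0.0)               # t_1 - t_0, t_2 - t_1, ...
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(times)))
    return x0 + np.cumsum(increments, axis=1)      # W(t_1), ..., W(t_n) for each path

times = np.linspace(0.01, 1.0, 100)
W = sample_wiener_paths(times, n_paths=20000)

# Empirical check of E|W_t - W_s|^2 = |t - s| (dimension d = 1):
i, j = 30, 80
emp = np.mean((W[:, j] - W[:, i])**2)
print(emp, times[j] - times[i])    # the two values should be close
```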
For an overview of the scope of functional integration techniques, in particular when applied to quantum or statistical physics, see, e.g. Simon (2005), Kac (1980) and Path integral. The Feynman-Kac formula The density of the Markov kernel $p_t(x,y)=(2\pi t)^{-d/2}e^{-\frac{|x-y|^2}{2t}}$, $t>0$, which leads to the construction of the Wiener measure is, in fact, also the fundamental solution of the heat equation, in the sense that the associated initial value problem \begin{align} \frac{\partial}{\partial t}u(t,x)&=\frac{1}{2}\Delta_x u(t,x), \tag{10}\\ u(0,x)&=u_0(x), \quad x\in {\mathbb R}^d, t\in [0,+\infty), \end{align} has, for $u_0:{\mathbb R}^d\to {\mathbb R}$ a Borel bounded function, a classical solution of the form $$u(t,x)=\int_{{\mathbb R}^d}u_0(y)p_t(x,y)dy=\int_{{\mathbb R}^d}u_0(y)\frac{e^{-\frac{|x-y|^2}{2t}}}{(2\pi t)^{d/2}}dy ={\mathbb E}^x[u_0(W_t)],$$ where ${\mathbb E}^x$ denotes the expectation with respect to the Wiener process starting at $x$. As pointed out in 1949 by M. Kac (who was actually inspired by a lecture by R. Feynman at Cornell University), a probabilistic representation can also be proved for the solution of the perturbed initial value problem \begin{align} \frac{\partial}{\partial t}u(t,x)&=\frac{1}{2}\Delta_x u(t,x)-V(x)u(t,x), \tag{11}\\ u(0,x)&=u_0(x), \quad x\in {\mathbb R}^d, t\in [0,+\infty), \end{align} where $V:{\mathbb R}^d\to {\mathbb R}$ is continuous and bounded from below (these conditions can be relaxed, see, e.g., Simon (2005) for further details). The following Wiener integral representation for the solution of the heat equation with potential (11) $$u(t,x)={\mathbb E}^x[u_0(W_t)e^{-\int_0^t V(W_\tau)d\tau}]$$ is known as the Feynman-Kac formula. The solution of the Dirichlet problem for the Laplace equation Let $D\subset {\mathbb R}^d$ be an open connected (nonempty) set, $x\in D$, and let $\tau_D:\Omega \to {\mathbb R}^+$ be the first hitting time of the complement $D^c$ of $D$ by the Wiener process starting at $x$, that is $$\tau_D=\inf \{t>0 \,|\,W(t)\in D^c\},$$ where $\inf \emptyset =+\infty$. The random variable $\tau _D$ is also called the first exit time from $\bar D$. We shall assume that any point $x$ belonging to the boundary $\partial D$ of $D$ is regular, in the sense that $P^x(\tau_D=0)=1$ for $x\in \partial D$, where $P^x$ denotes the distribution of the Wiener process $W$ starting at $x$. It turns out (see, e.g., Dynkin and Yushkevich (1969)) that in the case $d=2$ a sufficient condition for the regularity of a point $x\in \partial D$ is to be the vertex of a triangle $T\subset D^c$ (in particular, if $D$ is delimited by smooth curves, every point of $\partial D$ is regular), while in the case $d=3$ a point $x\in \partial D$ which is the vertex of some tetrahedron $T\subset D^c$ is regular. Let us consider the boundary value problem associated to the Laplace equation \begin{align} \Delta u(x)=0 , \qquad & x\in D,\tag{12}\\ u(x)=f(x), \qquad & x\in \partial D, \end{align} where $f:\partial D\to {\mathbb R}$ is a bounded continuous function. Under the stated assumptions (see Chung (1982), Dynkin and Yushkevich (1969)) the function $u:\bar D\to \R$ defined by $$u(x):={\mathbb E}^x[f(W(\tau_D))]$$ is a classical solution of (12), i.e., it is of $C^2$-class in $D$, continuous on $\bar D$, satisfying $\Delta u=0$ on $D$ and coinciding with $f$ on $\partial D$. 
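The Feynman-Kac formula lends itself directly to Monte Carlo evaluation. The sketch below (ours, purely illustrative; the choices $V(x)=x^2/2$ and $u_0(x)=e^{-x^2}$ are arbitrary examples, not taken from the article) estimates $u(t,x)={\mathbb E}^x[u_0(W_t)e^{-\int_0^t V(W_\tau)d\tau}]$ by sampling Brownian paths on a grid and approximating the time integral of the potential by a Riemann sum.

```python
import numpy as np

rng = np.random.default_rng(1)

def feynman_kac(x0, t, u0, V, n_paths=50000, n_steps=200):
    """Monte Carlo estimate of u(t, x0) = E^x0[ u0(W_t) exp(-int_0^t V(W_s) ds) ].
    The potential integral is approximated by a left-endpoint Riemann sum."""
    dt = t / n_steps
    W = np.full(n_paths, float(x0))
    action = np.zeros(n_paths)
    for _ in range(n_steps):
        action += V(W) * dt
        W += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return np.mean(u0(W) * np.exp(-action))

u0 = lambda x: np.exp(-x**2)      # bounded Borel initial datum (our choice)
V = lambda x: 0.5 * x**2          # continuous potential, bounded from below (our choice)
print(feynman_kac(x0=0.0, t=1.0, u0=u0, V=V))
```

The same sampling strategy, stopped at the first exit from a domain, underlies the probabilistic representation of the Dirichlet problem above and the mean exit time formula discussed next.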
Moreover, if $D\subset {\mathbb R}^d$ is an open connected bounded set and every point on $\partial D$ is regular, then the function $m_D:\bar D\to \R$ defined by the mean exit time $m_D(x):={\mathbb E}^x[\tau_D]$, is the unique classical solution of the Poisson equation $\Delta m_D=-2$ which is continuous on $\bar D$ and such that $m_D=0$ on $\partial D$. Other stochastic processes Brownian motion on manifolds Let $(M,g)$ be a connected closed Riemannian manifold and let $\Delta _{LB}$ be the Laplace-Beltrami operator on $M$. Let us consider the heat equation on $M$, namely \begin{align} \frac{\partial}{\partial t}u(t,x)&=\frac{1}{2}\Delta_{LB}u(t,x),\tag{13}\\ u(0,x)&=u_0(x),\quad x\in M, \, t\in [0,+\infty) \end{align} and let $p_t(x,y)$ be its fundamental solution, also called heat kernel, such that the solution of the initial value problem (13) is given by $u(t,x)=\int _M p_t(x,y)u_0(y)dy$, where $dy$ is the Riemannian volume measure on $M$. Let us focus on the measure $\mu $ on $M^{[0,+\infty)}$ associated via Eq. (5) to the semigroup of Markov kernels defined by $P_t(x,B):=\int_B p_t(x,y)dy$, $P_0=\delta_x$, $x\in M$. The measure $\mu$ is called the Wiener measure on $M$ and the associated stochastic process $W=(W_t)$ is the Brownian motion on $M$ (starting at $x\in M$). In fact, as in the case where $M={\mathbb R}^d$, the application of the Kolmogorov-Chentsov theorem allows one to restrict $\mu$ to the space $C([0,+\infty),M)$ of continuous paths on $M$. More precisely, for any $\theta \in (0,1/2)$ there exists a modification $\tilde W=(\tilde W_t)_{t\geq 0}$ of $W=(W_t)_{t\geq 0}$ with $\theta$-Hölder continuous paths. By the very construction of $\mu$, the Feynman-Kac formula follows directly, namely for $V:M\to {\mathbb R}$, continuous and bounded, the solution of the heat equation on $(M,g)$ with potential $V$ \begin{align} \frac{\partial}{\partial t}u(t,x)&=\frac{1}{2}\Delta_{LB}u(t,x)-V(x)u(t,x),\tag{14}\\ u(0,x)&=u_0(x), \end{align} is given by $$u(t,x)={\mathbb E}^x\left[u_0(W(t))e^{-\int_0^tV(W_s)ds}\right].$$ The Ornstein-Uhlenbeck process For a parameter $\alpha >0$ and a point $x\in {\mathbb R}$, let us consider the semigroup of Markov kernels on $({\mathbb R}, {\cal B}({\mathbb R}) )$ defined by:
• $P_0(x, \mathrm{d}y) = \delta_x(\mathrm{d}y)$ for $t=0$ and $y\in {\mathbb R}$,
• for $t>0$, the kernel $P_t(y,B)$ is defined by $P_t(y,B)=\int_B\left(2\pi(1-e^{-2\alpha t})\right)^{-1/2}\exp\left(-\frac{(z-e^{-\alpha t}y)^2}{2(1-e^{-2\alpha t})}\right)dz$, where $B\in {\cal B}({\mathbb R})$ is a Borel set in $\mathbb R$.
The process $X^x=(X_t^x)_{t\geq 0}$ on $E=\mathbb{R}$ obtained from the Kolmogorov construction with $\nu=\delta_x$ is called the Ornstein-Uhlenbeck process. It is Gaussian with mean $e^{-\alpha t}x$ and covariance $c_{\alpha}(s,t):=E(X_t^x X_s^x) = \frac{\exp(-\alpha \left| s-t \right|)}{2\alpha}$, $s,t \geq 0$. Note that $c_{\alpha}(t,s)=\tilde c_\alpha (|t-s|)$, where $\tilde c_\alpha (t)=\frac{\exp(-\alpha |t|)}{2\alpha}$ and the map $\tilde c_\alpha :{\mathbb R}\to {\mathbb R}$ is the fundamental solution (in the distribution sense) of $\left(-\frac{\mathrm{d}^2}{\mathrm{d}\tau^2} + \alpha\right)\tilde c_\alpha =\delta_0 $. The standard Gaussian measure $N(0,1)$ is an invariant probability measure for the process $X=(X_t)_{t\geq 0}$. In particular, if the process is started from a $\nu_0$-distributed initial position instead of $x$, with $\nu_0= N(0,1)$, then the process is stationary for all times $t \in \mathbb{R}^+$. 
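A short numerical sketch (ours, for illustration only) of the Ornstein-Uhlenbeck transition kernel just defined: paths are advanced with the exact Gaussian transition (mean $e^{-\alpha\,\Delta t}y$, variance $1-e^{-2\alpha\,\Delta t}$), and the empirical law at large times is compared with the invariant standard Gaussian $N(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(2)

def ou_paths(x0, alpha, dt, n_steps, n_paths):
    """Ornstein-Uhlenbeck paths sampled with the exact transition kernel:
    X_{t+dt} | X_t = y  ~  N(e^{-alpha*dt} * y, 1 - e^{-2*alpha*dt})."""
    X = np.full(n_paths, float(x0))
    mean_factor = np.exp(-alpha * dt)
    std = np.sqrt(1.0 - np.exp(-2.0 * alpha * dt))
    for _ in range(n_steps):
        X = mean_factor * X + std * rng.normal(size=n_paths)
    return X

X_T = ou_paths(x0=3.0, alpha=1.0, dt=0.01, n_steps=1000, n_paths=100000)
# After a time much longer than 1/alpha the law should be close to N(0, 1).
print("mean     :", X_T.mean())
print("variance :", X_T.var())
```

Because the exact transition kernel is used, there is no time-discretization error, unlike an Euler scheme for the Langevin equation quoted below; started from the invariant measure instead of a point, the simulated process would be stationary, in agreement with the properties recalled next.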
Furthermore it is Gaussian, with mean zero and covariance kernel $c_{\alpha}(s,t)$, $s,t\in\R$. $X_t^x$ satisfies the stochastic differential equation (Langevin equation for the Ornstein-Uhlenbeck velocity process) $\mathrm{d}X_t^x = -\alpha X_t^x\,\mathrm{d}t + \sqrt{2\alpha}\,\mathrm{d}B_t$, with $X_0^x = x$ and $B_t$ a standard Brownian motion on $\mathbb{R}$. $\alpha$-stable processes Let $\alpha \in (0,2]$. It can be proved that for any $c>0$, $t\geq 0$ a unique probability measure $\nu_t$ exists on $({\mathbb R},{\cal B}({\mathbb R}))$ whose characteristic function $\hat \nu _t:{\mathbb R}\to{\mathbb C}$, defined by $\hat \nu _t(x)=\int e^{ixy} \nu _t(dy)$, satisfies $$\hat \nu _t(x)=e^{-ct|x|^\alpha}, \qquad x\in \R.$$ Moreover, for any $t,s>0$, the composition property $\nu_{t+s}=\nu_t *\nu_s$ holds (where $*$ denotes the convolution of measures). This implies that the family of kernels $\{P_t\}_{t\geq 0}$ defined by $P_t(x,B):=\nu_t(B-x)$, $ x \in {\mathbb R}$, $B\in {\cal B}({\mathbb R})$, is a Markov semigroup of kernels. The associated process $X=(X_t)_{t\geq 0}$ is a Lévy process and it is said to be $\alpha$-stable. If $\alpha =2$ then $(X_t)_{t\geq 0}$ is (a rescaling of) the Wiener process. If $\alpha =c=1$ then $X_t$ is called the Cauchy process; in this case $\nu_t$ has density $\frac{1}{\pi }\frac{t}{t^2+x^2}$, $t>0$, $x\in {\mathbb R}$. Fourier transform of measures. Bochner-Minlos theorem An important tool for the construction of probability measures on vector spaces or, more generally, on locally compact abelian groups, is harmonic analysis. Let $X$ be a real vector space and $X^*$ its (algebraic) dual. Let ${\cal B} (X^*)$ be the smallest $\sigma$-algebra on $X^*$ in which the map $\xi\mapsto \xi (x)=\langle \xi , x \rangle$ is measurable for any $x\in X$. Given a positive measure $\mu$ on $(X^*, {\cal B} (X^*))$, let us define its characteristic function as the map $\hat \mu:X\to {\mathbb C}$ $$\hat \mu(x)=\int_{X^*}e^{i\xi (x)}d\mu (\xi), \qquad x\in X.$$ It is rather simple to prove that it is a positive definite function, namely it satisfies for any $n\in {\mathbb N}$, $x_1,...,x_n\in X$ and $\alpha_1,...,\alpha_n\in {\mathbb C}$: $$\sum_{j,k=1}^n\alpha _j\bar \alpha_k \hat \mu (x_j-x_k)\geq 0.$$ If $X={\mathbb R}^d$ (which can be identified with its dual), it is simple to verify that $\hat \mu $ is continuous with respect to the Euclidean topology. The classical Bochner theorem states that these two properties characterize the characteristic functions: indeed, any continuous and positive definite function on ${\mathbb R}^d$ is the characteristic function of a measure $\mu $ on ${\mathbb R}^d$. In the case of an infinite dimensional vector space $X$, an application of the Kolmogorov theorem allows us to prove that any positive-definite map $f:X\to {\mathbb C}$ which is continuous on any finite dimensional subspace $Y\subset X$ (with respect to the Euclidean topology on $Y$) is the characteristic function of a measure $\mu $ on $(X^*, {\cal B} (X^*))$. If $X$ is endowed with a topology $\tau$ making it a topological vector space, in general it is not true that a positive-definite and continuous map $f:X\to {\mathbb C}$ is the characteristic function of a measure on the topological dual $X'$ of $X$. In fact, the validity of this result depends on particular conditions either on the topology $\tau$ or on the function $f$. 
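A small sketch (ours, illustrative only) of the $\alpha$-stable construction above for the Cauchy case $\alpha=c=1$: an increment of the process over a time interval of length $t$ is a Cauchy random variable with scale $t$, and its empirical characteristic function can be compared with $e^{-t|x|}$.

```python
import numpy as np

rng = np.random.default_rng(3)

def cauchy_process_increment(t, size):
    """Increment of the Cauchy process over an interval of length t:
    a Cauchy random variable with scale t (density t / (pi (t^2 + x^2)))."""
    return t * rng.standard_cauchy(size=size)

t = 0.7
samples = cauchy_process_increment(t, size=200000)

# Compare the empirical characteristic function with exp(-t |x|).
for x in (0.5, 1.0, 2.0):
    emp = np.mean(np.exp(1j * x * samples)).real
    print(f"x = {x}: empirical {emp:.4f}  vs  exp(-t|x|) = {np.exp(-t*abs(x)):.4f}")
```

We now return to the question of characteristic functions on infinite dimensional spaces.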
When $X$ is a real separable Hilbert space $(H, \langle \, ,\, \rangle)$ (which can be identified via Riesz theorem with its topological dual), according to a result of Minlos and Sazonov, a positive-definite function $f:H\to {\mathbb C}$ is the Fourier transform of a Borel measure on $H$ iff there exists a Hilbert-Schmidt operator $T:H\to H$ on $H$ such that $f$ is continuous with respect to the norm $\| \, \cdot \,\|$ defined by $\|x\|^2:=\langle Tx,Tx \rangle$. Let us consider now the case of a nuclear space $X$, i.e. a locally convex topological vector space whose topology is defined by a family $\{ \| \, \|_\alpha\}$ of Hilbert seminorms, i.e., induced by some inner product $\langle \, , \, \rangle _\alpha$ such that $ \| x \|_\alpha^2 =\langle x,x \rangle _\alpha$, and such that for any $\alpha$ there exists an $\alpha'$ with $\| \, \|_\alpha$ Hilbert-Schmidt with respect to $\| \, \|_{\alpha'}$. The last condition means that there exists an $M_{\alpha,\alpha'}>0$ such that $\sum _n\|x_n\|_\alpha ^2\leq M^2_{\alpha,\alpha'} $ for any orthonormal basis $\{x_n\}$ with respect to $\langle \,\cdot\, , \,\cdot \, \rangle _{\alpha'}$. The generalization of Bochner theorem to this case asserts that any continuous positive definite function on a nuclear space $X$ is the Fourier transform of a measure on $X'$. An important example of a nuclear space is the Schwartz space $S({\mathbb R}^d)$ of smooth functions on $\mathbb{R}^d$ whose derivatives of all orders are rapidly decreasing; its dual is the space $S'({\mathbb R}^d)$ of tempered distributions. Gaussian measures and Gaussian integrals Bochner theorem allows the characterization of measures $\mu$ on ${\mathbb R}^d$ or, more generally, on topological vector spaces $X$ via their Fourier transform $\hat \mu$. From this point of view, a Borel measure $\mu_G$ on ${\mathbb R}^d$ is Gaussian if its characteristic function is of the form $\hat \mu_G(x)=e^{i\langle x,a\rangle}e^{-\frac{1}{2}\langle x,Qx\rangle}$, $x\in {\mathbb R}^d$, for some vector $a\in {\mathbb R}^d$ and some nonnegative symmetric $d\times d$ matrix $Q$. The vector $a$ is called the mean while the matrix $Q$ is called the covariance operator of $\mu_G$. They are related to the first two moments of the measure in the following way: $$a=\int xd\mu_G(x), \qquad \langle x, Q y\rangle =\int \langle x, z-a\rangle \langle y, z-a\rangle d\mu_G(z), \quad x,y \in {\mathbb R}^d.$$ In the case where $Q=0$, $\mu_G$ is the $\delta$ point measure at $a\in {\mathbb R}^d$. If ${\mathbb R}^d$ is replaced by a real infinite dimensional Hilbert space $H$, we say that a Borel measure on $H$ is Gaussian iff for any $x\in H$ the law of the random variable $\langle x, \, \cdot \,\rangle$ is Gaussian. By applying the Bochner-Minlos-Sazonov theorem (see above), it turns out that a measure $\mu_G$ on $H$ is Gaussian iff its Fourier transform is of the form $$\hat \mu_G(x)=e^{i\langle x,a\rangle}e^{-\frac{1}{2}\langle x,Qx\rangle}$$ for some vector $a\in H$ and some nonnegative symmetric trace-class operator $Q:H\to H$. From this result, one can infer that on an infinite dimensional Hilbert space there cannot exist a Gaussian measure having the identity as covariance operator, as the latter is not trace class. 
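The role of the trace-class condition can be seen in finite-dimensional truncations. In the sketch below (ours, purely illustrative), centered Gaussian vectors are sampled in $\mathbb{R}^n$ with diagonal covariance $q_k=k^{-2}$ (summable eigenvalues, i.e. trace class in the limit) and with $q_k=1$ (the identity): the mean squared norm stays bounded in the first case and grows linearly with $n$ in the second, a finite-dimensional shadow of the statement just made.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_sq_norm(q_diag, n_samples=2000):
    """Estimate E||x||^2 for a centered Gaussian with diagonal covariance q_diag
    (the exact value is sum(q_diag))."""
    x = rng.normal(size=(n_samples, len(q_diag))) * np.sqrt(q_diag)
    return np.mean(np.sum(x**2, axis=1))

for n in (10, 100, 1000):
    trace_class = 1.0 / np.arange(1, n + 1) ** 2   # summable eigenvalues
    identity = np.ones(n)                          # not summable as n grows
    print(n, mean_sq_norm(trace_class), mean_sq_norm(identity))
```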
In the case we consider the dual $X'$ of a nuclear space $X$, endowed with the $\sigma $-algebra generated by the cylinder sets of the form $\{\xi \in X' : (\xi (x_1),..., \xi (x_n))\in B\}$ for some $x_1, ...,x_n\in X$ and some Borel set $B\subset {\mathbb R}^n$, a measure $\mu_G$ is said to be Gaussian if for each finite dimensional subspace $Y\subset X$ the restriction of $\mu_G$ to the $Y$-cylinder sets is Gaussian. In this case, it turns out that for any $\xi \in X'$ and any continuous nonnegative linear map $Q:X\to X'$ there exists a unique Gaussian measure $\mu_G $ on $X'$ such that $\hat \mu_G(x) =e^{i\langle \xi , x\rangle }e^{-\frac{1}{2}\langle Q x , x\rangle }$, $\langle \, , \,\rangle $ denoting the dual pairing between $X'$ and $X$. Examples of Gaussian processes Let us consider the Hilbert space $H=L^2([0,T])$, $T>0$, and the integral operator $Q:H\to H$ with kernel $q(s,t)$. In all the examples below it is possible to prove that $Q$ is bounded, symmetric, positive and trace-class and the function $x\mapsto e^{-\frac{1}{2}\langle x,Q x\rangle}$ is the Fourier transform of a unique Gaussian measure on $L^2([0,T])$ with mean $a=0$ and covariance operator $Q$.
• In the case $q(s,t)=s\land t$ the measure $\mu_G$ is a realization of the Wiener measure.
• If, more generally, one considers the kernel $q(s,t):=\frac{1}{2}\left(|t|^{2h}+|s|^{2h}-|t-s|^{2h}\right)$, with $h\in (0,1)$, again the operator $Q$ is positive and trace class and the associated Gaussian measure is the distribution of the fractional Brownian motion. In particular if $h=1/2$, we get again the Brownian motion. If $h\neq 1/2$, then the increments of the process are not independent. In particular, if $h>1/2$ they are positively correlated, while if $h<1/2$ they are negatively correlated.
• If instead of the choice $q(s,t) = s \wedge t$ for the Wiener measure (corresponding to the standard Brownian motion process $B_t$) we make the choice $$q(s,t) = \begin{cases} s(1-t), & 0 \le s \le t \\ t(1-s), & t \le s \le 1, \end{cases}$$ then we still have a positive trace-class operator $Q$ in $L^2[0,1]$. The inverse of $Q$ in $L^2[0,1]$ is the operator $(Q^{-1}f)(t) = -f''(t)$ for all $f\in D(Q^{-1}) = H^{2,2}(0,1) \cap H^{1,2}_0(0,1)$, see e.g. Da Prato (2006). The corresponding Gaussian measure is the distribution of the Brownian bridge. A process $\beta=(\beta(t))_{t\in [0,1]}$ defined by $\beta(t) = B(t) - tB(1)$, $t \in [0,1]$, has the probability distribution of the Brownian bridge process.
• If we make the choice $$q(t,s)=\frac{1}{2k^2(1-\exp(-a\delta))}\left(\exp(-\delta|t-s|)+ \exp(-\delta(a-|s-t|))\right)$$ where $k>0$ and $\delta>0$ are parameters and $a>0$ is the period, the resulting process is the so-called Hoegh-Krohn process, also known as the periodic Ornstein-Uhlenbeck process (see Albeverio et al. (2009) or Brzezniak and van Neerven (2001)).
Examples of Gaussian integrals The computation of concrete Gaussian integrals makes it possible to study the interplay with certain problems in differential and integral equations. A classical computation relates the expectation of a particular functional of the 1-dimensional Wiener process $W_t$ (i.e. a particular integral with respect to Wiener measure) with the eigenvalues of the Sturm-Liouville problem in $L^2([0,1])$: $$ f''(t)+\lambda p(t) f(t) =0, \qquad t\in [0,1],$$ with the boundary values $f(0)=f'(1)=0$, where $p$ is a non-negative continuous function on $[0,1]$. 
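Each of the covariance kernels listed above can be explored numerically by evaluating $q(s,t)$ on a time grid and factoring the resulting covariance matrix. The sketch below (ours, illustrative only) draws sample paths of Brownian motion, fractional Brownian motion and the Brownian bridge in this way; a tiny multiple of the identity is added before the Cholesky factorization purely for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_gaussian_process(q, grid, n_paths=5):
    """Sample a centered Gaussian process with covariance kernel q on the grid,
    via a Cholesky factorization of the covariance matrix."""
    S, T = np.meshgrid(grid, grid, indexing="ij")
    C = q(S, T) + 1e-12 * np.eye(len(grid))     # jitter for numerical stability
    L = np.linalg.cholesky(C)
    return L @ rng.normal(size=(len(grid), n_paths))

grid = np.linspace(0.01, 1.0, 200)
brownian = sample_gaussian_process(lambda s, t: np.minimum(s, t), grid)
h = 0.75
fbm = sample_gaussian_process(
    lambda s, t: 0.5 * (np.abs(s)**(2*h) + np.abs(t)**(2*h) - np.abs(s - t)**(2*h)), grid)
bridge = sample_gaussian_process(lambda s, t: np.minimum(s, t) * (1.0 - np.maximum(s, t)), grid)
print(brownian.shape, fbm.shape, bridge.shape)   # (200, 5) each
```

We now return to the Sturm-Liouville example introduced above.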
The relation is $${\mathbb E}[e^{i\alpha \int_0^1p(t)W^2_tdt}]=\prod _{j=1}^\infty (1-2\alpha \lambda_j)^{-1/2},\qquad \alpha \in {\mathbb R}.$$ For other examples of explicit computations with respect to Gaussian measures see, e.g., Simon (2005), Albeverio and Mazzucchi (2015), Albeverio, Kondratiev, Kozitsky, Röckner (2009). A collection of concrete computations of integrals involving Brownian motion and related processes appears in Borodin and Salminen (2002). For computations involving other types of infinite dimensional measures (associated, e.g., to processes with jumps) see, e.g., Barndorff-Nielsen, Mikosh, Resnick (2001), Duquesne, Barndorff-Nielsen, Bertoin (2010), Cont and Tankov (2004) and Mandrekar and Rüdiger (2015). Equivalence and orthogonality of Gaussian measures Important work for applications involves controlling certain transformations on the infinite-dimensional space where the measures have their support. For example, given two Gaussian measures $\mu,\nu$ on an Hilbert space $H$ with mean $a\in H$ resp. $0\in H$ and the same covariance operator $Q$, they are absolutely continuous with respect to each other if and only if the vector $a$ is in the range of $\sqrt Q$ in $H$. In this case, the corresponding Radon-Nikodym derivative (the density of $\nu$ with respect to $\mu$) reads $$\frac{d\nu}{d\mu}(x)=e^{-\frac{1}{2}\|(\sqrt Q)^{-1}(a)\|^2}e^{\langle (\sqrt Q)^{-1}(a),(\sqrt Q)^{-1}(x)\rangle},$$ for all $x\in H$ (with $\| \; \|$, resp. $\langle \; , \; \rangle $, being the norm, resp. the scalar product, in $H$), see, e.g., Kuo (1975), Th. 3.1. On the other hand, if $a=0$ but $\mu$ and $\nu$ have different covariance operators $Q_\mu$ resp. $Q_\nu$, then if $\mu$ is absolutely continuous with respect to $\nu$, then the operators $Q_\mu$ and $Q_\nu$ have to be related by $$Q_\nu=\sqrt Q_\mu T \sqrt Q_\mu,$$ with $T$ a positive bounded invertible linear operator such that $I-T$ is an Hilbert-Schmidt operator on $H$. In fact, this is a special case of a theorem by Feldman and Hajek (see, e.g., Kuo (1975)). Asymptotics of path integrals The classical Laplace method concerns the asymptotic expansion in (fractional) powers of a small parameter $\epsilon>0$ for integrals of the form $I(\epsilon):=\int_{\R^n}e^{-\frac{1}{\epsilon}S(x)}g(x)dx$, with $g,S:\R^n\to \R$ smooth and such that the integral exists, in Lebesgue sense. In the case the phase function $S$ has a single absolute minimum say at $x=x_c\in\R^n$ which is non-degenerate (so that the determinant $D(x)$ of the Hessian matrix $\left( \frac{\partial ^2 S}{\partial x_i \partial x_j}\right)_{i,j=1....n}$ is strictly positive) the expansion in ascending powers of $\epsilon$ takes the form $$I(\epsilon)=\sum_{j=0}^Na_j(x_c)\epsilon^j+R_N(\epsilon, x_c),$$ where the coefficients $a_j(x_c)$ can be computed from $S,g$ and their derivatives. The leading term given by $$a_0(x_c)=(2\pi\epsilon)^{n/2}D(x_c)^{-1/2}e^{-\frac{1}{\epsilon}S(x_c)}g(x_c)$$ and the remainder $R_N(\epsilon, x_c)$ satisfies the estimate $|R_N(\epsilon,x_c)|\leq C_N(x_c)\epsilon^{N+1}$, with $C_N(x_c)$ independent of $\epsilon$. The case of a finite number of local non-degenerate minima can be reduced (by a smooth partition of unity) to a sum of terms of this form, corresponding to expansions around each minimum. In the case of infinitely many local non degenerate minima the same holds, provided the sum over the contributions of the single minima converges. 
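The Laplace expansion just described is easy to test in one dimension. The sketch below (ours; the phase $S(x)=\tfrac12 x^2+x^4$ and the choice $g\equiv 1$ are arbitrary examples) compares a numerical quadrature of $I(\epsilon)$ with the leading term $(2\pi\epsilon)^{1/2}\,S''(x_c)^{-1/2}e^{-S(x_c)/\epsilon}$ at the single non-degenerate minimum $x_c=0$.

```python
import numpy as np

S = lambda x: 0.5 * x**2 + x**4      # single non-degenerate minimum at x_c = 0
x_c, S_cc = 0.0, 1.0                 # S(x_c) = 0, S''(x_c) = 1

x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
for eps in (0.5, 0.1, 0.02):
    I = np.sum(np.exp(-S(x) / eps)) * dx                       # quadrature value of I(eps)
    leading = np.sqrt(2.0 * np.pi * eps / S_cc) * np.exp(-S(x_c) / eps)
    print(f"eps = {eps}: quadrature {I:.6f}, leading Laplace term {leading:.6f}")
```

As $\epsilon\to 0$ the ratio between the two values approaches 1, the higher-order coefficients $a_j(x_c)$ accounting for the remaining discrepancy.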
In the case of degeneracy of a minimum, the form of the expansion around that minimum depends essentially on the form of degeneracy; in particular the leading term, instead of a power $\epsilon ^{n/2}$, has a power depending on the degeneracy. As a matter of fact the expansion generally involves powers and terms which are logarithmic in $\epsilon$, see, e.g. Combet (1982). The extension of these methods to the infinite dimensional case, where $\R^n$ is replaced by an infinite dimensional Hilbert space, has also been worked out. In this case a (smooth, not necessarily Gaussian) reference measure has been used, see Albeverio and Steblovskaya (1999). Other types of dependence on the parameters have been studied by other methods, in relation to the heat equation with potentials or stochastic differential equations. In this connection, expectations of the form $$\int_H \psi (\sqrt \epsilon Y) e^{-\frac{1}{\epsilon}F(\sqrt \epsilon Y)}P(dY)$$ over a separable Hilbert space $H$ arise, where $P$ is a Gaussian probability measure. Work in the case where $P$ is the Wiener measure goes back to Donsker's school, with contributions from Schilder, Pincus, Varadhan, see, e.g., Ellis and Rosen (1982), Simon (2005). Applications of probabilistic path integrals are numerous and spread out over many different areas, from mathematics and natural sciences to engineering and technical sciences, as well as to socio-economical sciences. First of all, let us mention some applications within mathematics (to areas other than stochastics), starting from analysis on Euclidean spaces or manifolds. We already saw that solutions of some parabolic PDEs, like the heat equation, allow for representations in terms of Wiener integrals. Extensions of such representations either to other linear systems of PDEs or to nonlinear PDEs exist, including, e.g., representations of hydrodynamical equations by infinite dimensional integrals, see, e.g., Burdzy (2014) and Stroock (2012). Such representations have also been applied to many problems of differential geometry and related areas, where heat kernels (of semigroups associated to second order elliptic or hypoelliptic operators of geometric relevance) play an important role. In particular, it is relevant to mention formulas connecting the trace of the heat semigroup on a Riemannian manifold $M$ (i.e. a sum involving eigenvalues of the Laplace-Beltrami operator on $M$) with a sum of lengths of periodic geodesics (counting multiplicity). Since in turn the heat semigroup can be expressed by a Wiener integral on the manifold $M$, one gets an interpretation of such formulas in terms of infinite dimensional probabilistic integrals. These extend to the much more general context of certain second order differential operators on Riemannian manifolds, where the closed geodesics are replaced by periodic orbits of an underlying classical Hamiltonian system. In this case the formulas are to be understood as asymptotic expansions with respect to a suitable parameter, the expansion being obtained by a Laplace method applied to the infinite dimensional Wiener type integral yielding the probabilistic representation for the semigroup. Such generalized trace formulas can be looked upon as rigorous mathematical implementations of ideas of semiclassical quantization (Gutzwiller’s trace formulas), which have found important applications in the study of the relations between chaotic classical and chaotic quantum systems, see, e.g. Schuss (2010) and Ikeda and Matsumoto (1999). 
For related applications to homology theory see, e.g., Sunada (1992). Probabilistic infinite dimensional integrals are also a powerful tool for providing upper and lower bounds for heat kernels, in the case of manifolds with singularities resp. degeneracies (see, e.g., Ledoux (2013) and Burdzy (2014)). Related small time, resp. large time, expansions have been obtained by representations in terms of such integrals, see, e.g., Uemura (1987) and Albeverio and Arede (1985). Particularly striking are those connected to index theory and topological invariants, see Bismut (1984) and Albeverio, Hahn and Sengupta (2004). Even though path integration techniques play a central role in present-day physics (see, e.g., the Scholarpedia article Path integral by Zinn-Justin), many of the path integrals involved have not yet been brought into a mathematically rigorous form. Nevertheless, the applications of those which are under mathematical control are already numerous and spread over many areas of physics. Various early applications are mentioned in the collections of essays by Wax (1954) (including areas like astrophysics and signal analysis). Kac (1980) discusses applications both in classical (potential theory) and quantum (spectral) problems. Simon (2005) presents important results reached in the study of bound state problems (number and location of eigenvalues of Schrödinger operators). Other applications concern the stability of matter, lower estimates for multi particle Hamiltonians (see Lieb and Seiringer (2010)). Recently Bose-Einstein condensation phenomena have been discussed in terms of probabilistic integrals in De Vecchi and Ugolini (2014). Other areas of physics where there are good applications of rigorous probabilistic integrals are equilibrium and non-equilibrium statistical mechanics and kinetic theory, both classical (see, e.g., Kac (1980)) and quantum, see, e.g., Jona-Lasinio (1976), Presutti (2009) and Albeverio et al (2009). For applications in solid state physics see, e.g. the discussion of the polaron model in Schulman (1981). For work in hydrodynamics see, e.g., Albeverio, Flandoli and Sinai (2002). In quantum field theory, probabilistic integrals have played a decisive role in constructing non trivial low dimensional models, see, e.g., Glimm and Jaffe (1987) and Simon (1974). In the modeling of long polymer chains in chemistry and physics, methods of infinite dimensional integration have found important applications. An interesting example is Edwards’ model describing polymers in ${\mathbb R}^d$ by means of a Gibbs-type measure with respect to a Wiener measure. The Gibbs factor makes self-intersections of the Brownian path modeling the long molecular chains constituting the polymer unlikely. Rigorous results exist for $d=1,2,3$, see, e.g., the references in Streit et al (2015). Let us also mention that polymer related models and corresponding probabilistic methods have found applications to DNA modeling, see, e.g. Cotta-Ramusino and Maddocks (2010) and Bellomo and Pulvirenti (2000). Probabilistic path integrals also have applications in various areas of biology (e.g. epidemiology, immunology, genetics, population dynamics), see, e.g., Ricciardi (1977) and, for some newer developments, Bovier et al (2015). Neurobiology has also been a favorite area of application of such integrals, from simple axon models to neuron networks, see, e.g., Tuckwell (1989). 
In economics, the use of infinite dimensional integrals is implicit in pioneering work by Bachelier (see, e.g., the article by Schachermayer in Albeverio, Schachermayer and Talagrand (2003)), which has led to an intense activity in probabilistic methods in mathematical finance producing, e.g., the famous Black-Scholes formula for the price of a European call option (expressed by a Wiener integral involving a functional of a Brownian motion process). This is just a prototype of more general models and computations, see, e.g., Øksendal (2003). Also in macroeconomic modeling some use of probabilistic integrals is present, e.g. in Malliaris and Brock (1982). For some use of modeling by probabilistic integrals in the sciences of society see, e.g. Weidlich (2000). Probabilistic integrals are applied also in ecology and climate research, following the pioneering work by Hasselmann, see, e.g., Imkeller (2002). Engineering applications include stochastic control theory and robotics, signal transmission, filtering and civil engineering, see, e.g., Blankenship et al (2000) and Kree and Soize (1986). Let us stress that what we have mentioned is not in any way meant to be exhaustive, either in terms of topics or of references, but should rather be considered as a first orientation in a fascinating, rapidly expanding area of research.
• Albeverio, S. (1997). Wiener and Feynman path integrals and their applications. Proc. Symp. Appl. Math 52: 163-194.
• Albeverio, S. and Mazzucchi, S. (2015). An introduction to Infinite-dimensional Oscillatory and Probabilistic Integrals. In "Stochastic Analysis: A Series of Lectures", R. Dalang, M. Dozzi, F. Flandoli, F. Russo (editors), Birkhäuser, Basel, pages 1-54.
• Albeverio, S. and Mazzucchi, S. (2016). A unified approach to infinite-dimensional integration. Rev. Math. Phys. 28: 1650005.
• Applebaum, David (2009). Lévy processes and stochastic calculus. Cambridge University Press, Cambridge.
• Barndorff-Nielsen, O; Mikosh, Th and Resnick, S I (eds.) (2001). Lévy processes: Theory and Applications. Birkhäuser, Boston, MA.
• Bauer, Heinz (2001). Measure and integration theory. Walter de Gruyter & Co., Berlin.
• Berezin, F.A. and Shubin, M.A. (1993). The Schrödinger equation. Kluwer, Dordrecht.
• Bochner, Salomon (1955). Harmonic analysis and the theory of probability. University of California Press, Berkeley and Los Angeles.
• Bochner, Salomon (1959). Lectures on Fourier integrals. Princeton University Press, Princeton, N.J.
• Borodin, A N and Salminen, P (2002). Handbook of Brownian motion—facts and formulae. Birkhäuser Verlag, Basel.
• Bogachev, V (1998). Gaussian measures. American Mathematical Society, Providence, RI.
• Cameron, R H and Martin, W T (1945). Evaluations of various Wiener integrals by use of certain Sturm-Liouville differential equations. Bull. Amer. Math. Soc. 51: 73-90.
• Chung, Kai Lai (1982). Lectures from Markov processes to Brownian motion. Springer, New York.
• Combet, Edmond (1982). Intégrales exponentielles. Développements asymptotiques, propriétés lagrangiennes. Springer-Verlag, Berlin-New York.
• Da Prato, Giuseppe (2006). An Introduction to Infinite Dimensional Analysis. Springer, Berlin.
• Duquesne, T; Barndorff-Nielsen, O E and Bertoin, J (2010). Lévy matters I: Recent Progress in Theory and Applications. Springer-Verlag, Berlin.
• Dynkin, Evgenii B. and Yushkevich, Aleksandr A. (1969). Markov processes: theorems and problems. Plenum Press, New York.
• Egorov, A. D.; Sobolevsky, P. I. and Yanovich, L. A. (1993). 
Functional integrals: approximate evaluation and applications. Kluwer Academic Publishers Group, Dordrecht. • Elworthy, D. and Ikeda, N. (1993). Asymptotic problems in probability theory: Wiener functionals and asymptotics. Pitman Research Notes, New York. • Gīhman, Ĭ. Ī. and Skorohod, A. V. (1980). The theory of stochastic processes. I-II Springer-Verlag, Berlin-New York. • Gross, L. (1967). Abstract Wiener spaces. Proc. Fifth Berkeley Sympos. Math. Statist. and Probability (Berkeley, Calif., 1965/66), Vol. II. Univ. California Press , Berkeley. • Hida , T. (1980). Brownian motion. Springer, Berlin. • Kac, Mark (1980). Integration in function spaces and some of its applications. Lezioni Fermiane. [Fermi Lectures] Accademia Nazionale dei Lincei, Pisa. • Kuo , H.H. (1975). Gaussian measures in Banach spaces. Springer-Verlag , Berlin-New York . • Parthasarathy, K. R. (1980). Probability measures on metric spaces. Springer, Berlin. • Revuz, Daniel and Yor, Marc (1999). Continuous martingales and Brownian motion. Springer-Verlag, Berlin. • Sato, Ken-iti (2013). Lévy processes and infinitely divisible distributions. Cambridge University Press, Cambridge. • Schilling, R:L. and Partzsch, L. (2014). Brownian motion. An introduction to stochastic processes. With a chapter on simulation by Björn Böttcher. Second edition. De Gruyter, Berlin. • Schwartz, L. (1973). Radon measures on arbitrary topological spaces and cylindrical measures. Tata Institute of Fundamental Research Studies in Mathematics. Oxford University Press, London. • Simon, Barry (2005). Functional integration and quantum physics. AMS Chelsea Publishing, Providence, RI. • Skorohod , A. V. (1974). Integration in Hilbert space. Springer , Berlin . • Wiener, N. ; Siegel, A. ; Rankin, B. and Martin, W. (1966). Differential space, quantum systems, and prediction. The M.I.T. Press, Cambridge, Mass.-London. • Yamasaki, Y. (1985). Measures on infinite-dimensional spaces. World Scientific Publishing Co., Singapore. Further reading • Albeverio, S. and Arede , T ( 1985 ). The relation between quantum mechanics and classical mechanics: a survey of some mathematical aspects. in G. Casati (ed), Chaotic behavior of quantum systems. Theory and applications. Plenum Press, New York. • Albeverio, S. ; Flandoli , F. and Sinai, Y. G. ( 2002 ). SPDE in hydrodynamic: recent progress and prospects. Lectures given at the C.I.M.E. Summer School held in Cetraro. Springer-Verlag, Berlin. • Albeverio, S; Hahn, A and Sengupta, A (2003). Chern-Simons theory, Hida distributions, and state models. Infin. Dimens. Anal. Quantum Probab. Relat. Top. (6) : 65–81. • Albeverio, Sergio; Kondratiev, Yuri; Kozitsky, Yuri and Röckner, Michael (2009). The statistical mechanics of quantum lattice systems. A path integral approach. European Mathematical Society , Zürich. • Albeverio, S; Schachermayer, W and Talagrand, M ( 2003). Lectures on probability theory and statistics. Lectures from the 30th Summer School on Probability Theory held in Saint-Flour, August 17–September 3, 2000. Springer-Verlag, Berlin. • Abeverio, S. and Steblovskaya, V. (1999). Asymptotics of infinite-dimensional integrals with respect to smooth measures. I. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2 (4): 529–556. • Andersson, Lars and Driver, Bruce K. (1999). Finite-dimensional approximations to Wiener measure and path integral formulas on manifolds. J. Funct. Anal. 165 (2) : 430–498. • Bär, Christian and Pfäffle, Frank (2011). Wiener measures on Riemannian manifolds and the Feynman-Kac formula. Mat. 
Contemp. 40  : 37–90. • Bellomo, N and Pulvirenti, M ( 2000). Generalized kinetic models in applied sciences. Birkhäuser Boston, Boston, MA. • Bismut, Jean-Michel ( 1984). The Atiyah-Singer theorems: a probabilistic approach. I. The index theorem. J. Funct. Anal. (57) 1 : 56–99. • Bock, W; Oliveira, M; da Silva, J and Streit, L (2015). Polymer measure: Varadhan's renormalization revisited. Rev. Math. Phys. (27) 3 : 1550009, 5 pp.. • Bakry , D ; Gentil , I and Ledoux, M (2014). Analysis and geometry of Markov diffusion operators. Springer , Cham. • Bovier , A and den Hollander, F ( 2015). Metastability. A potential-theoretic approach. Springer, Cham. • Brzezniak, Z and van Neerven, J (2001). Banach space valued Ornstein-Uhlenbeck processes indexed by the circle. Proceedings of the Conference of Evolution Equations and their Applications, Bad Herrenalb 1998, Lecture Notes in Pure and Appl. Math., 215, Dekker, New York. 435-452 • Burdzy, K ( 2014). Brownian motion and its applications to mathematical analysis. Springer, Heidelberg New York Dordrecht London . • Cont, R and Tankov, P (2004). Financial Modelling with Jump Processes. Chapman and Hall/CRC, Boca Raton, FL. • Cotta-Ramusino, L and , Maddocks (2010). Looping probabilities of elastic chains: A path integral approach. Phys. Rev. E (82): 051924 . • Daletskii, Yu and Fomin, S (1991). Measures and differential equations in infinite-dimensional space. Kluver, Dordrecht. • De Vecchi, F and Ugolini, S (2014). An entropy approach to Bose-Einstein condensation. Commun. Stoch. Anal. (8) 4 : 517–529 . • Ellis, Richard S. and Rosen, Jay S. (1982). Laplace's method for Gaussian integrals with an application to statistical mechanics. Ann. Probab. 10 (1): 47–66. • Elworthy, D. (1982). Stochastic differential equations on manifolds. Cambridge University Press, Cambridge-New York. • Freidlin, M. (1985). Functional integration and partial differential equations. Princeton University Press, Princeton, NJ. • Gallavotti , G. (2002). Foundations of fluid dynamics. Springer-Verlag, Berlin. • Gardiner, C. (2009). Stochastic methods. A handbook for the natural and social sciences. Fourth edition. Springer-Verlag, Berlin. • Gel'fand, I.M. and Vilenkin, N. Ya. (2016). Generalized functions. AMS Chelsea Publishing , Providence, RI. • Glimm , J and Jaffe , A (1987). Quantum physics. A functional integral point of view. Springer-Verlag, New York. • Hida, T. (1970). Stationary stochastic processes. Princeton University Press, Princeton. • Hida, T.; Kuo, H.H; Potthoff, J. and Streit, L. (1993). White noise. An infinite-dimensional calculus. Kluwer Academic Publishers Group , Dordrecht. • Huang , Zhi-yuan and Yan, Jia-an (2000). Introduction to infinite dimensional stochastic analysis. Kluwer Academic Publishers, Dordrecht. • Ionescu Tulcea, C T (1949). Mesures dan les espaces produits. Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. (8) 7: 208–211. • Ikeda, N and Matsumoto, H (1999). Brownian motion on the hyperbolic plane and Selberg trace formula. J. Funct. Anal. (163) 1: 63–110. • Imkeller, P and Monahan, A (eds.) (2002). Special issue on stochastic climate models. Stoch. Dyn. (2) 3 : . • Jona-Lasinio, G ( 1976). Phase transitions and critical phenomena. Academic Press , London-New York. • Krée, P and Soize, C ( 1986). Mathematics of random phenomena. Random vibrations of mechanical structures. D. Reidel Publishing Co., Dordrecht. • Kolokoltsov, V. N. (2000). Semiclassical analysis for diffusions and stochastic processes. Springer-Verlag, Berlin. 
• Kwatny, H and Blankenship, G ( 2000.). Nonlinear control and analytical mechanics. A computational approach. Birkhäuser Boston, Inc., Boston, MA. • Lieb, E and Seiringer, R (2010). The stability of matter in quantum mechanics. Cambridge University Press, Cambridge. • Malliaris , A and Brock, W ( 1982). Stochastic methods in economics and finance. North-Holland Publishing Co., Amsterdam-New York. • Malliavin , P (1997). Stochastic analysis. Springer-Verlag, Berlin. • Mandrekar, Vidyadhar and Rüdiger, Barbara ( 2015). Stochastic integration in Banach spaces. Theory and applications. Springer, Cham. • Martin , W T and Segal , I ( 1963). Analysis in function space. The M.I.T. Press, Cambridge, Mass.. • Nualart , D. (2006). The Malliavin calculus and related topics. Springer-Verlag, Berlin. • Øksendal , B. (2003). Stochastic differential equations. An introduction with applications. Springer-Verlag, Berlin. • Presutti , E ( 2009). Scaling limits in statistical mechanics and microstructures in continuum mechanics. Springer, Berlin. • Ricciardi, L ( 1977). Diffusion processes and related topics in biology. Lecture Notes in Biomathematics, Vol. 14. Springer-Verlag, Berlin-New York. • Roepstorff , G. (1994). Path integral approach to quantum physics. An introduction. Springer-Verlag, Berlin. • Schulman, L ( 1981). Techniques and applications of path integration. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York. • Schuss , Z ( 2010). Theory and applications of stochastic processes. Springer, New York. • Simon , B ( 1974). The $P(\phi)_2$ Euclidean (quantum) field theory. Princeton University Press, Princeton, N.J.. • Stroock , D ( 2012). Partial differential equations for probabilists. Cambridge University Press, Cambridge. • Sunada, T (1992). Homology, Wiener integrals, and integrated densities of states. J. Funct. Anal. (106) 1: 50–58. . • Tuckwell, H (1989). Stochastic processes in the neurosciences. CBMS-NSF Regional Conference Series in Applied Mathematics, 56. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. • Uemura, H (1987). On a short time expansions of the fundamental solution of heat equations by the method of Wiener functional. J. Math. Kyoto Univ. (27) 3 : 417-431 . • Van Casteren, J.A.. and Demuth, M. (2000). Stochastic spectral theory for selfadjoint Feller operators. A functional integration approach. Birkhäuser Verlag, Basel. • Wax, N. (ed.) (1954). Selected papers on noise and stochastic processes. Dover, New York. • Weidlich, W ( 2000). Sociodynamics. A systematic approach to mathematical modelling in the social sciences. Harwood Academic Publishers, Amsterdam. Personal tools Focal areas
Lyapunov exponent Antonio Politi (2013), Scholarpedia, 8(3):2722. doi:10.4249/scholarpedia.2722 revision #137286 Curator: Antonio Politi Historical remarks As soon as scientists realized that the evolution of physical systems can be described in terms of mathematical equations, the stability of the various dynamical regimes was recognized as a matter of primary importance. The interest for this question was not only motivated by general curiosity, but also by the need to know, in the XIX century, to what extent the behavior of suitable mechanical devices remains unchanged, once their configuration has been perturbed. As a result, illustrious scientists such as Lagrange, Poisson, Maxwell and others deeply thought about ways of quantifying the stability both in general and specific contexts. The first exact definition of stability was given by the Russian mathematician Aleksandr Lyapunov who addressed the problem in his PhD Thesis in 1892, where he introduced two methods, the first of which is based on the linearization of the equations of motion and has originated what has later been termed Lyapunov exponents (LE). (Lyapunov 1992) LEs measure the growth rates of generic perturbations, in a regime where their evolution is ruled by linear equations, \( \tag{1} \dot {\bf u} = {\bf J}(t) {\bf u} \) where \(\bf u\) is an \(N\) dimensional vector and \({\bf J}\) is a (time-dependent) \( N\times N \) matrix. In some contexts, such as that of linear stochastic differential equations, \({\bf J}\) fluctuates because of the presence of disorder or multiplicative noise (Arnold, 1986). More commonly, in the context of deterministic dynamical systems, \({\bf J}\) is the Jacobian of a suitable velocity field \(\bf F\), computed along a trajectory \({\bf x}(t)\) that satisfies the ordinary differential equation, \( \tag{2} \dot {\bf x} = {\bf F} ({\bf x}) \quad . \) If \({\bf x}(t)={\bf x}_0\) is a solution (i.e. if \({\bf F}({\bf x}_0)=0\)), then, the stability of this fixed point is quantified by the eigenvalues of the (constant) operator \({\bf J}\). In this simple case, the LEs \(\lambda_i\) are the real parts of the eigenvalues. They measure the exponential contraction/expansion rate of infinitesimal perturbations. A slightly more complicated example is that of a periodic orbit \({\bf x}(t+T) = {\bf x}(t)\). In this case, it is necessary to integrate Eq. (1) over a time \(T\), to obtain the discrete time evolution operator \(\bf M\), \( {\bf u}(t+T)= {\bf M} {\bf u}(t) \quad . \) From the eigenvalues \(m_i\) of \(\bf M\), one can thereby determine the Floquet exponents \(\mu_i=(\ln m_i)/T\); the LE \(\lambda_i\) are their real parts. Since trajectories are not, in general, periodic, a different approach is required. The most general definition involves the computation of the eigenvalues \(\alpha_i\) of yet another matrix, namely \({\bf M}(t){\bf M}^T(t)\). A typical instance of the behavior of \(\alpha_i\) is illustrated in the upper part of Figure 1. From the knowledge of \(\alpha_i\), one naturally introduces the finite-time LE as \( \tag{3} \lambda_i(t) = \frac{\ln \alpha_i(t)}{2t} . 
\) Since \(\lambda_i(t)\) is, in general, a fluctuating quantity (see the lower part of Figure 1), it is necessary to consider the infinite time limit, to determine the asymptotic (in time) behaviour. This leads to the following definition of LE, \( \tag{4} \lambda_i = \limsup_{t \to\infty} \lambda_i(t) \) where the \(\limsup\) is considered to account for the worst possible fluctuations: this is important whenever the stability of a given regime must be assessed. The Oseledets multiplicative ergodic theorem guarantees that LEs are independent of the initial condition (Oseledets 1968). Figure 1: Time dependence of a generic perturbation amplitude It is interesting to notice that while it makes sense to determine the imaginary part of the Lyapunov exponents for fixed points and periodic orbits, this question cannot, in general, be addressed for an aperiodic motion. In fact the \(\alpha_i\)'s are, by definition, real quantities and there is no way to extend the definition to include rotations. One can at most introduce a rotation number, to characterize the rotation of a generic perturbation around the reference trajectory (Ruelle 1985). In practice, Lyapunov exponents can be computed by exploiting the natural tendency of an \(n\)-dimensional volume to align along the subspace spanned by the \(n\) most expanding directions. From the expansion rate of an \(n\)-dimensional volume, one obtains the sum of the \(n\) largest Lyapunov exponents. Altogether, the procedure requires evolving \(n\) linearly independent perturbations and one is faced with the problem that all vectors tend to align along the same direction. However, as shown in the late '70s, this numerical instability can be counterbalanced by orthonormalizing the vectors with the help of the Gram-Schmidt procedure (Benettin et al. 1980, Shimada and Nagashima 1979) (or, equivalently, with a QR decomposition). As a result, the LEs \(\lambda_i\), naturally ordered from the largest to the most negative one, can be computed: they are altogether referred to as the Lyapunov spectrum (a numerical sketch is given below).
• The LEs are independent of both the metric used to determine the distance between perturbations and the choice of variables. This property implies they are dynamical invariants and thereby provide an objective characterization of the corresponding dynamics.
• A strictly positive maximal Lyapunov exponent is synonymous with exponential instability, but one should be warned that in some special cases, this may not be true (see, e.g., the so-called Perron effect) (Leonov and Kuznetsov 2007).
• A strictly positive maximal Lyapunov exponent is often considered as a definition of deterministic chaos. This makes sense only when the corresponding unstable manifold folds back remaining confined within a bounded domain (an unstable fixed point is NOT chaotic).
• Typical trajectories are characterized by the same LEs, but there exists a zero-measure subset with different stability properties. The infinitely many periodic orbits embedded in a chaotic attractor are one such example.
• One-dimensional maps \(x_{n+1}= G(x_n)\) are characterized by just one LE, which is equal to the average value of \(\ln |dG/dx|\). In other words, the LE can be determined as an ensemble average, rather than a time average. In principle, it is possible to extend the idea to higher dimensions, but it would result in a rather impractical method, because of the difficulty of reconstructing the invariant measure together with the need to identify the local directions of the various vectors. 
• The sum \(\Sigma\) of all LEs measures the contraction rate of volumes in the whole phase space. In the so-called dissipative systems, \(\Sigma<0\), meaning that volumes visited by generic trajectories shrink exponentially to zero. In Hamiltonian systems, \(\Sigma=0\), i.e. volumes are preserved (see Liouville theorem). • In symplectic systems, LEs come in pairs (\(\lambda_i,\lambda_{2N-i+1}\)) such that their sum is equal to zero. This means that the Lyapunov spectrum is symmetric. It is a way of emphasizing the invariance of Hamiltonian dynamics under change of the time arrow. • Any (bounded) infinite trajectory that does not converge towards a fixed point is characterized by at least one zero LE: it corresponds to a perturbation of the phase point along its own trajectory. Other vanishing exponents may signal the existence of constants of motion. Zero exponents may also (non generically) occur in correspondence of bifurcation points, where some direction is marginally stable. In such cases, it is necessary to go beyond the linear approach to determine the stability. Characterization of deterministic chaos The knowledge of the LEs allows determining additional invariants such as the fractal dimension of the underlying attractor and its dynamical entropy. The Kaplan-Yorke formula provides an upper bound for the information dimension of the attractor, (Kaplan and Yorke 1979) \( \tag{5} D_{KY}= J + \frac{\Lambda_J}{|\lambda_{J+1}|} \) where \(\Lambda_j\equiv \sum_{i=1}^j\lambda_i\) and \(J\) is the largest \(j\)-value such that \(\Lambda_j>0\). This equation can be understood in the following way. A strictly positive \(\Lambda_j\) implies that the hyper-volume of a generic \(j\)-dimensional box diverges while spreading over the attractor. This implies that the dimension is larger than \(j\), since it is like asking to measure the "length" of a square: the length of a line covering the square is obviously infinite! For the same reason, \(\Lambda_j<0\) signals that the dimension is smaller than \(j\). Altogether, one can view the Kaplan-Yorke formula as a linear interpolation between the largest \(j\) such that \(\Lambda_j>0\) and the smallest such that the opposite is true (the procedure is schematically reproduced in Figure 2). In general, \(D_{KY}\) provides an upper bound to the information dimension, but in three dimensional flows (two-dimensional maps) and in random dynamical systems it has been proved to coincide with it. The Kaplan-Yorke formula provides also approximate information on the number of the active degrees of freedom. In fact, in typical dissipative models, the phase-space dimension is infinite, but the number of independent variables that are necessary to uniquely identify the different points of the attractors is finite and sometimes even small. Another dynamical invariant that is connected with the LE is the Kolmogorov-Sinai entropy \(H_{KS}\) which measures the growth rate of the entropy due to the exponential instability of the chaotic motion. In this case, the relationship is expressed by the Pesin formula (Pesin 1977) \( \tag{6} H_P \equiv \Lambda_j > H_{KS} \) where the sum in \(\Lambda_j\) is restricted to the strictly expanding directions (see Figure 2 for a schematic representation). In order to take into account the possible fractal structure along the unstable directions (this happens in the case of repellors, i.e. 
transient chaos) this formula must be extended to, \( \tag{7} H_P = \sum_{\lambda_i>0} d_i \lambda_i \) where \(d_i\) represents the fractal dimension along the \(i\)th direction (in standard chaotic attractors \(d_i=1\)). As schematically illustrated in Figure 1, the finite-time LE fluctuates. The central limit theorem guarantees that such fluctuations vanish when time goes to infinity. However, the so-called generalized LE (Fujisaka 1983, Benzi et al. 1985) \(\mathcal{L}(q)\) \( \tag{8} \mathcal{L}(q) = \limsup_{t\to\infty} \frac{1}{qt} \ln \left\langle {\rm e}^{q\lambda(t)t} \right\rangle \) (in this section, for simplicity, we drop the dependence on the index \(i\)) is sensitive to such fluctuations. It is easy to see that in the limit \(q\to 0\), the usual LE definition is recovered. The same problem can be approached in a more transparent way, by expressing the probability \(P(\lambda,t)\) that a trajectory of length \(t\) is characterized by an exponent \(\lambda\) (in the limit of finite but large enough \(t\)) in terms of the large-deviation function \(g(\lambda)\), \( \tag{9} P(\lambda,t) \simeq {\rm e}^{-g(\lambda)t} . \) Figure 2: Lyapunov spectrum and its integrated version \(g(\lambda)\) is a nonnegative function with a typically quadratic minimum at the usual LE \(\overline \lambda\), where \(g(\overline \lambda) = 0\). This condition implies that the probability of observing \(\lambda=\overline \lambda\) does not vanish (exponentially) for increasing time. \(g(\lambda)\) and \(\mathcal{L}(q)\) are related to one another by a Legendre transform. The large-deviation function \(g\) is a powerful tool to detect deviations from a perfectly hyperbolic behaviour (for instance, discovering that the domain of definition for a positive exponent extends to negative values as well, as a result of homoclinic tangencies). Generalized LEs are important to establish the connection with different definitions of fractal dimensions: for instance, the correlation dimension, which is measured by implementing the Grassberger-Procaccia algorithm (Grassberger and Procaccia 1983), is connected with \(\mathcal{L}(1)\). Spatially extended systems For simplicity we refer to one-dimensional lattices of length \(N\) and assume that a single variable \(x_i\) is defined on each lattice site. As a result, the phase-space dimension is \(N\). There are two natural limits that one wishes to consider: the thermodynamic and the continuum limit. In the former case, we let \(N\) go to infinity, by increasing the number of sites and leaving their mutual distance constant. In the latter case, \(N\) is increased by reducing the spatial separation. In the thermodynamic limit, it has been observed and proven that the LEs come closer to each other in such a way that it makes sense to speak of a Lyapunov spectrum (Ruelle 2004, Grassberger 1989) \( \tag{10} \lambda(\rho=i/N) = \lambda_i \) The existence of a Lyapunov spectrum can be interpreted as the evidence of the extensive character of space-time chaos. In fact, this means that the entropy \(H_P\) and the fractal dimension \(D_{KY}\) are proportional to the system size. In other words, the dynamics in sufficiently separated regions (of the physical space) are independent of one another. In the continuum limit, additional (negative) exponents appear, which characterize the fast relaxation phenomena occurring on short spatial scales. 
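The QR/Gram-Schmidt procedure and the Kaplan-Yorke formula of Eq. (5) can be illustrated with a short sketch (ours, not part of the article), using the Hénon map as a test case; the two exponents obtained this way sum to \(\ln|b|\), and the partial sums then give the dimension estimate and the bound \(H_P\).

```python
import numpy as np

def henon_step(x, y, a=1.4, b=0.3):
    return 1.0 + y - a * x * x, b * x

def henon_jacobian(x, a=1.4, b=0.3):
    """Jacobian of the Henon map (x, y) -> (1 + y - a x^2, b x)."""
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

def lyapunov_spectrum(n_iter=100000, n_transient=1000):
    """Lyapunov spectrum of the Henon map via QR re-orthonormalization."""
    x, y = 0.1, 0.1
    for _ in range(n_transient):
        x, y = henon_step(x, y)
    Q = np.eye(2)
    log_sums = np.zeros(2)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(henon_jacobian(x) @ Q)
        log_sums += np.log(np.abs(np.diag(R)))
        x, y = henon_step(x, y)
    return log_sums / n_iter

lam = np.sort(lyapunov_spectrum())[::-1]
print("Lyapunov exponents:", lam)          # roughly (0.42, -1.62); their sum is ln 0.3

# Kaplan-Yorke dimension: J is the largest index with positive partial sum.
cum = np.cumsum(lam)
J = int(np.sum(cum > 0))
D_KY = J + cum[J - 1] / abs(lam[J]) if J < len(lam) else float(len(lam))
print("Kaplan-Yorke dimension:", D_KY)     # about 1.26 for the Henon attractor
print("H_P (sum of positive exponents):", np.sum(lam[lam > 0]))
```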
Chronotopic approach Lyapunov exponents have been introduced with the goal of characterizing the time evolution of perturbations of lumped dynamical systems. However, in spatially extended systems, it is important to describe the spatial evolution as well. A first generalization of the LE is obtained by introducing the convective exponent, to describe the growth of an initially localized perturbation (Deissler and Kaneko 1987) \( \tag{11} u(x,t) = {\rm e}^{ L(v)t} u(x,0) \) where \(v=x/t\) identifies the world line along which the evolution is measured and \(u(x,0)\) is restricted to some finite interval around \(x=0\). Figure 3: Two different instances of the convective Lyapunov spectrum Figure 4: Geometric construction to determine the convective exponent In chaotic systems with left-right symmetry, \(L(v)\) is symmetric too and attains its maximum value for zero velocity; \(L(0)\) coincides with the standard maximum LE (see Figure 3, left panel). As the velocity increases (in absolute value), \(L(v)\) decreases to eventually become negative, beyond some critical value \(v_0\) which can be interpreted as the maximal propagation velocity of (infinitesimal) perturbations. Whenever there is no left-right symmetry, it may happen that only perturbations propagating with some finite velocity do expand. In such cases, one speaks of convective instabilities (see the right panel in Figure 3). If the system is open, it locally relaxes back to the previous equilibrium state, once the perturbation has travelled away. Convective exponents are an example of the additional information that can be extracted by implementing the so-called chronotopic approach (Lepri et al. 1996), which is based on the definition of the growth rate of exponentially distributed perturbations \(u(x) = {\rm e}^{\mu x}u_\mu(x)\) (standard LEs are obtained by assuming \(\mu =0\)). By assuming a generic \(\mu\)-value in the original evolution equations in tangent space, one can determine the generalized temporal Lyapunov spectrum \(\lambda(\rho,\mu)\). The convective exponents can be obtained by Legendre transforming \(\lambda(0,\mu)\), i.e. \( L(v) = \lambda(0,\mu) - \mu v , \qquad v = \frac{d\lambda}{d\mu} \) The corresponding geometrical construction is presented in Figure 4. Notice that one can equivalently proceed from \(L(v)\) to \(\lambda(0,\mu)\), in which case \(\mu\) is determined from \(dL/dv = -\mu\). By exchanging the role of space and time variables, one can define the complementary spatial Lyapunov exponents \(\mu(\lambda,\rho)\). In one dimensional systems, it has been conjectured that the two kinds of spectra are related to one another and follow from the existence of a superinvariant (as it is independent of the space-time parametrization) entropy potential (Lepri et al. 1997). Lyapunov vectors While the LEs correspond to the limit eigenvalues of a suitable product of matrices, there is no corresponding unique set of eigenvectors, as they depend on the current position of the phase point. In fact this dependence reflects the typically nonlinear shape of both stable and unstable manifolds. However, one cannot directly invoke the vectors \({\bf V}_i\) arising from the Gram-Schmidt orthogonalization procedure, as they are not covariant, i.e. the vector \({\bf V}_i({\bf x})\) defined in \(\bf x\) is not transformed into \({\bf V}_i({\bf y})\) when \(\bf x\) is mapped onto \(\bf y\). A proper definition requires generalizing the concept of eigenvectors of linear operators (Eckmann and Ruelle 1985). 
Lyapunov vectors

While the LEs correspond to the limit eigenvalues of a suitable product of matrices, there is no corresponding unique set of eigenvectors, as they depend on the current position of the phase point. In fact, this dependence reflects the typically nonlinear shape of both the stable and unstable manifolds. However, one cannot directly invoke the vectors \({\bf V}_i\) arising from the Gram-Schmidt orthogonalization procedure, as they are not covariant, i.e. the vector \({\bf V}_i({\bf x})\) defined at \(\bf x\) is not transformed into \({\bf V}_i({\bf y})\) when \(\bf x\) is mapped onto \(\bf y\). A proper definition requires generalizing the concept of eigenvectors of linear operators (Eckmann and Ruelle 1985). Roughly speaking, the covariant vectors can be obtained by iterating forward and backward along the same trajectory to identify the \(i\)th vector \({\bf W}_i\) as the (backward) most expanding direction within the (forward) most expanding subspace of dimension \(i\). Effective algorithms for the determination of the covariant vectors have been proposed only recently (Wolfe and Samelson 2007, Ginelli et al. 2007).

Finite amplitude Lyapunov exponents

Figure 5: Growth of a generic finite-amplitude perturbation

In some cases it is useful, if not necessary, to consider finite-amplitude perturbations. Apart from experimental time series, where, in the absence of a model, one is forced to consider finite distances, it is useful to extend the concept of Lyapunov exponents to regimes where nonlinearities may be relevant. Finite-amplitude exponents may be defined in the following way. Given any two nearby trajectories, let \(\Delta(t)\) denote their mutual distance and measure the times \(t_n\) when \(|\Delta(t_n)|\) crosses (for the first time) a series of exponentially spaced thresholds \(\theta_n\) (\(\theta_n = r \theta_{n-1}\); see Figure 5). By averaging the time separation between consecutive crossings over different pairs of trajectories, one obtains the finite-amplitude Lyapunov exponent (Aurell et al. 1996)
\( \ell = \frac{\ln r}{\langle t_n-t_{n-1}\rangle} \)
For small enough thresholds one recovers the usual (maximum) Lyapunov exponent, while for large amplitudes \(\ell\) saturates to zero, since a perturbation cannot be larger than the size of the accessible phase space. In the intermediate range, \(\ell\) tells us how the growth of a perturbation is affected by nonlinearities. As the definition of the finite-amplitude LE involves neither an infinite-time limit nor that of infinitesimal perturbations, it is not mathematically well posed, and the result depends on the choice of variables. Nevertheless, it may profitably be used to extract useful information on the presence of collective dynamics, where one would like to distinguish between the stability of microscopic and macroscopic perturbations, or in the presence of different time scales, when some directions saturate very rapidly.
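The threshold-crossing construction just described can be coded directly. The sketch below (not from the original article) estimates the finite-amplitude exponent for the Lorenz system with a hand-rolled RK4 integrator; the parameter values, thresholds and sample sizes are illustrative assumptions, and only small-to-moderate amplitudes are probed.

import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def fsle(n_pairs=30, r=2.0, theta0=1e-6, n_thresholds=23, dt=0.005, seed=0):
    # average, over pairs of trajectories, the time needed for their distance to grow
    # from theta_{n-1} to theta_n = r * theta_{n-1}; then ell(theta_n) = ln(r) / <t_n - t_{n-1}>
    rng = np.random.default_rng(seed)
    thresholds = theta0 * r ** np.arange(1, n_thresholds + 1)
    times = np.zeros(n_thresholds)
    for _ in range(n_pairs):
        s = np.array([1.0, 1.0, 20.0]) + rng.normal(0.0, 5.0, 3)
        for _ in range(2000):                  # relax onto the attractor
            s = rk4_step(s, dt)
        d = rng.normal(size=3)
        s2 = s + theta0 * d / np.linalg.norm(d)
        t = t_prev = 0.0
        for k, th in enumerate(thresholds):
            while np.linalg.norm(s2 - s) < th:
                s, s2 = rk4_step(s, dt), rk4_step(s2, dt)
                t += dt
            times[k] += t - t_prev
            t_prev = t
    return thresholds, np.log(r) / (times / n_pairs)

for th, ell in zip(*fsle()):
    print(f"threshold {th:9.2e}   ell ~ {ell:6.3f}")

For the smallest thresholds the estimate should plateau near the maximum LE of the Lorenz system (about 0.9), while at amplitudes of the order of the attractor size it bends downwards, as described above.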
LEs prove useful in various contexts. Within dynamical systems, LEs, besides providing a detailed characterization of chaotic dynamics, can help to assess various forms of synchronization (Pikovsky 2007). Another context where LEs help to clarify the underlying dynamics is chaotic advection, i.e. the evolution of particles transported by a (possibly time-dependent) velocity field,
\( \dot {\bf x} = {\bf F}({\bf x},t) \)
where \({\bf x}(t) \) denotes the Lagrangian trajectory of a generic particle in the physical space. In this case, the existence of a positive Lyapunov exponent is synonymous with chaotic mixing (Ottino 1989).

Another prominent example is Anderson localization of the eigenfunctions \( \psi(x)\) of the Schrödinger equation in the presence of disorder. In this case, the object of study is the spatial dependence of \( \psi(x)\) (see also the section on the chronotopic approach). In one-dimensional systems, in the tight-binding approximation, \( x \) is an integer variable and the spatial evolution corresponds to multiplying by a \(2\times2 \) random matrix. This is the so-called transfer-matrix approach: the invariance under spatial reversal implies that the two (spatial) LEs are opposite to each other. The most important result is that the positive LE coincides with the inverse of the localization length \( \ell_c \) (Borland 1963, Furstenberg 1963). The transfer-matrix approach can also be applied in higher-dimensional spaces, in which case the inverse localization length coincides with the minimal positive LE.

References

• L. Arnold, Lyapunov Exponents, Lecture Notes in Mathematics 1186 (Springer, 1986).
• E. Aurell, G. Boffetta, A. Crisanti, G. Paladin, and A. Vulpiani, Growth of noninfinitesimal perturbations in turbulence, Phys. Rev. Lett. 77:1262 (1996).
• G. Benettin, L. Galgani, A. Giorgilli, and J.-M. Strelcyn, Lyapunov characteristic exponents for smooth dynamical systems: a method for computing all of them, Meccanica 15:9 and 15:21 (1980).
• R. Benzi, G. Paladin, G. Parisi, and A. Vulpiani, Characterisation of intermittency in chaotic systems, J. Phys. A 18:2157 (1985).
• R.E. Borland, The nature of the electronic states in disordered one-dimensional systems, Proc. R. Soc. London A 274:529 (1963).
• R.J. Deissler and K. Kaneko, Velocity-dependent Liapunov exponents as a measure of chaos for open flow systems, Phys. Lett. A 119:397 (1987).
• J.-P. Eckmann and D. Ruelle, Ergodic theory of chaos and strange attractors, Rev. Mod. Phys. 57:617 (1985).
• H. Fujisaka, Statistical dynamics generated by fluctuations of local Lyapunov exponents, Prog. Theor. Phys. 70:1264 (1983).
• H. Furstenberg, Noncommuting random products, Trans. Amer. Math. Soc. 108:377 (1963).
• F. Ginelli, P. Poggi, A. Turchi, H. Chaté, R. Livi, and A. Politi, Characterizing dynamics with covariant Lyapunov vectors, Phys. Rev. Lett. 99:130601 (2007).
• P. Grassberger and I. Procaccia, Characterization of strange attractors, Phys. Rev. Lett. 50:346 (1983).
• P. Grassberger, Information content and predictability of lumped and distributed dynamical systems, Physica Scripta 40:346 (1989).
• J.L. Kaplan and J.A. Yorke, in Functional Differential Equations and Approximations of Fixed Points, ed. H.-O. Peitgen and H.-O. Walther, p. 204 (Springer-Verlag, Berlin, 1979).
• G.A. Leonov and N.V. Kuznetsov, Time-varying linearization and the Perron effects, International Journal of Bifurcation and Chaos 17:1079 (2007).
• S. Lepri, A. Politi, and A. Torcini, Chronotopic Lyapunov analysis: (I) a comprehensive characterization of 1D systems, J. Stat. Phys. 82:1429 (1996).
• S. Lepri, A. Politi, and A. Torcini, Entropy potential and Lyapunov exponents, Chaos 7:701 (1997).
• A.M. Lyapunov, The General Problem of the Stability of Motion (Taylor & Francis, London, 1992).
• V.I. Oseledets, A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc. 19:197 (1968).
• J.M. Ottino, The Kinematics of Mixing: Stretching, Chaos and Transport (Cambridge University Press, 1989).
• Y. Pesin, Characteristic Lyapunov exponents and smooth ergodic theory, Russian Math. Surveys 32:55 (1977).
• D. Ruelle, Rotation numbers for diffeomorphisms and flows, Annales de l'IHP sec. 4, 42:109 (1985).
• D. Ruelle, Thermodynamic Formalism (Cambridge University Press, 2004).
• I. Shimada and T. Nagashima, A numerical approach to ergodic problem of dissipative dynamical systems, Prog. Theor. Phys. 61:1605 (1979).
• C.L. Wolfe and R.M. Samelson, An efficient method for recovering Lyapunov vectors from singular vectors, Tellus 59A:355 (2007).

Internal references
• Edward Ott, Scholarpedia, 3(3):2110 (2008).
• Arkady Pikovsky and Misha Rosenblum, Scholarpedia, 2(12):1459 (2007).
• Yakov Sinai, Scholarpedia, 4(3):2034 (2009).
Lightning is the electric breakdown of air by strong electric fields, producing a plasma, which causes an energy transfer from the electric field to heat, mechanical energy (the random motion of air molecules caused by the heat), and light.

In physics and other sciences, energy (from the Greek ενεργός, energos, "active, working") is a scalar physical quantity that is a property of objects and systems which is conserved by nature. Energy is often defined as the ability to do work. Several different forms of energy, including kinetic, potential, thermal, gravitational, elastic, electromagnetic, chemical, nuclear, and mass, have been defined to explain all known natural phenomena. Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a seated passenger in a moving airplane has zero kinetic energy relative to the airplane, but nonzero kinetic energy relative to the earth.

Thomas Young, the first to use the term "energy" in the modern sense.

The concept of energy emerged out of the idea of vis viva, which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. In 1807, Thomas Young was the first to use the term "energy", instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum. These ideas were eventually amalgamated into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius, and to the introduction of laws of radiant energy by Jožef Stefan.

Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem).

Energy in various contexts since the beginning of the universe

The concept of energy is used often in all fields of science. In chemistry, the energy differences between substances determine whether, and to what extent, they can be converted into other substances or react with other substances. In biology, chemical bonds are broken and made during metabolic processes, and the associated changes in available energy are studied in the subfield of bioenergetics. Energy is often stored by cells in the form of substances such as carbohydrate molecules (including sugars) and lipids, which release energy when reacted with oxygen.
In geology and meteorology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the planet Earth.

Energy transformations in the universe over time are characterized by various kinds of potential energy, available since the Big Bang, which is later "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. A familiar example of such a process is nuclear decay, which releases energy that was originally "stored" in heavy isotopes (such as uranium and thorium) by nucleosynthesis, a process which ultimately uses the gravitational potential energy released in the gravitational collapse of supernovae to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs. In a slower process, nuclear decay of these atoms in the core of the Earth releases heat, which in turn may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the heat energy, which may be released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store which has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy which has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks; but prior to this, the energy was stored in heavy atoms ever since the collapse of the long-destroyed stars that created these atoms.

In another similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy which can be released by fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Such sunlight from our Sun may again be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity). Sunlight also drives all weather phenomena, including such events as those triggered in a hurricane, when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen.
Release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism, when these molecules are ingested and catabolism is triggered by enzyme action. Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy. In all these events, one kind of energy is converted to other types of energy, including heat.

Regarding applications of the concept of energy:
• The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only) from kinetic energy (which is a function of coordinate time derivatives only). It may also be convenient to distinguish gravitational energy, electric energy, thermal energy, and other forms. These classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy.
• The transfer of energy can take various forms; familiar examples include work, heat flow, and advection, as discussed below.
• The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, the important public-service announcement, "Please conserve energy" uses vernacular notions of "conservation" and "energy" which make sense in their own context but are utterly incompatible with the technical notions of "conservation" and "energy" (such as are used in the law of conservation of energy).

In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy-momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Energy transfer

Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that by the definition of energy the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as
\Delta{}E = W \qquad (1)
if there are no other energy-transfer processes involved. Here \Delta{}E is the amount of energy transferred, and W represents the work done on the system. More generally, the energy transfer can be split into two categories:
\Delta{}E = W + Q \qquad (2)
where Q represents the heat flow into the system. There are other ways in which an open system can gain or lose energy. If mass is counted as energy (as in many relativistic problems) then E must contain a term for mass lost or gained. In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which are then extracted (both of these processes are illustrated by fueling an auto, a system which gains energy thereby without the addition of either work or heat). Winding a clock would be adding energy to a mechanical system.
These terms may be added to the above equation, or they can generally be subsumed into a quantity called "energy addition term E" which refers to any type of energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam adds to system energy, without being either work done or heat added, in the classic senses).
\Delta{}E = W + Q + E \qquad (3)
Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed, so the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
E_{pi} + E_{ki} = E_{pF} + E_{kF}
The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = \frac{1}{2} mv^2 (half times mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_{total}. (A small numerical check of this bookkeeping is sketched at the end of this subsection.)

Energy and the laws of motion

The Hamiltonian

The Lagrangian

Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. In non-relativistic physics, the Lagrangian is the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (like systems with friction).

Energy and thermodynamics

Internal energy

Composition of internal energy (U):
• Chemical energy: the internal energy associated with the different kinds of aggregation of atoms in matter.

The laws of thermodynamics

According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved, and that heat is included as a form of energy transfer. A commonly used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g. a cylinder full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given by
\mathrm{d}E = T\mathrm{d}S - P\mathrm{d}V
where the first term on the right is the heat transfer into the system, defined in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated); and the last term on the right hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign results since compression of the system is needed to do work on it, so that the volume change dV is negative when work is done on the system). Although this equation is the standard text-book example of energy conservation in classical thermodynamics, it is highly specific: it ignores all chemical, electric, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat, and it contains a term that depends on temperature. The most general statement of the first law — i.e. conservation of energy — is valid even in situations in which temperature is undefinable. Energy is sometimes expressed as
\mathrm{d}E=\delta Q+\delta W\,,
which is unsatisfactory because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.
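As the numerical check promised above, referring back to the mechanical example (E_pi + E_ki = E_pF + E_kF with E_p = mgh and E_k = \frac{1}{2} mv^2), here is a minimal Python snippet; the mass and drop height are arbitrary illustrative values, not taken from the text.

m, g, h = 2.0, 9.81, 10.0            # mass (kg), gravitational acceleration (m/s^2), drop height (m)
E_p_top = m * g * h                  # at the top: all energy is potential, kinetic energy is zero
v_bottom = (2.0 * g * h) ** 0.5      # speed after falling a height h, from v^2 = 2 g h
E_k_bottom = 0.5 * m * v_bottom**2   # at the bottom: all energy is kinetic, potential energy is zero
print(E_p_top, E_k_bottom)           # both print 196.2 J, so E_pi + E_ki = E_pF + E_kF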
Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and alternatively at two other points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all these degrees of freedom.

This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of a distribution of energy between parts of a system. This concept is also related to the second law of thermodynamics, which basically states that when an isolated system is given more degrees of freedom (i.e. new available energy states which are the same as existing states), then energy spreads over all available degrees equally, without distinction between "new" and "old" degrees.

Oscillators, phonons, and photons

In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types. In a solid, thermal energy (often referred to loosely as heat content) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy is equally kinetic and potential. In an ideal gas, the interaction potential between particles is essentially the delta function, which stores no energy: thus, all of the thermal energy is kinetic.

Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and the electric energy considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.

1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that radiation energy can be considered equally potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.

2. On the other hand, in the key equation m^2 c^4 = E^2 - p^2 c^2, the contribution mc^2 is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is 0.5 p^2/m at speeds much smaller than c, as can be proved by writing E = mc^2 \sqrt{1 + p^2 m^{-2}c^{-2}} and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.

The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion. For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.
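The low-speed limit mentioned in item 2 is easy to verify numerically. A minimal Python sketch (the electron mass and the sample speeds are illustrative choices, not taken from the text):

import math

c = 299_792_458.0                      # speed of light, m/s
m = 9.109e-31                          # electron mass, kg (illustrative particle)

def total_energy(p):
    # E = m c^2 sqrt(1 + p^2 m^-2 c^-2), the relation quoted above
    return m * c**2 * math.sqrt(1.0 + (p / (m * c))**2)

for v in (1.0e6, 3.0e7, 1.0e8):                  # speeds well below c, up to about c/3
    p = m * v / math.sqrt(1.0 - (v / c)**2)      # relativistic momentum
    kinetic_exact = total_energy(p) - m * c**2   # total energy minus rest energy
    kinetic_lowest_order = 0.5 * p**2 / m        # the lowest-order term 0.5 p^2/m
    print(f"v = {v:.1e} m/s   exact KE = {kinetic_exact:.4e} J   0.5 p^2/m = {kinetic_lowest_order:.4e} J")

At the smallest speed the two numbers agree to many digits, while at roughly c/3 the lowest-order formula already overestimates the exact kinetic energy by a few per cent.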
Work and virtual work

Work is roughly force times distance. But more precisely, it is
W = \int \mathbf{F}\cdot{\rm d}\mathbf{s}
This says that the work (W) is equal to the integral (along a certain path) of the force; for details see the mechanical work article. Work, and thus energy, is frame dependent. For example, consider a ball being hit by a bat. In the centre-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.

Quantum mechanics

In quantum mechanics energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. It can thus be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for an electromagnetic wave in vacuum, the resulting energy states are related to the frequency by the Planck equation E = h\nu (where h is the Planck constant and \nu the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (the work to accelerate a mass from zero speed to some finite speed) relativistically, using Lorentz transformations instead of Newtonian mechanics, Einstein discovered an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest mass energy, energy which every mass must possess even when at rest. The amount of energy is directly proportional to the mass of the body:
E = m c^2,
where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy.

In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, mass too has inertia and gravity associated with it.

There is no absolute measure of energy, because energy is defined as the work that one system does (or can do) on another. Thus, only the energy of a transition of a system from one state into another can be defined and thus measured. The methods for the measurement of energy often deploy methods for the measurement of still more fundamental concepts of science, namely mass, distance, radiation, temperature, time, electric charge and electric current.

A calorimeter: an instrument used by physicists to measure energy.

Conventionally the technique most often employed is calorimetry, a thermodynamic technique that relies on the measurement of temperature using a thermometer or of intensity of radiation using a bolometer. Throughout the history of science, energy has been expressed in several different units such as ergs and calories. At present, the accepted unit of measurement for energy is the SI unit of energy, the joule.
Forms of energy

Heat, a form of energy, is partly potential energy and partly kinetic energy.

Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement. Both position and movement are relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference. Some introductory authors attempt to separate all forms of energy into either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, as Feynman points out.

Examples of the interconversion of energy: mechanical energy is converted into
• Mechanical energy (lever)
• Thermal energy (brakes)
• Electric energy (dynamo)
• Electromagnetic radiation (synchrotron)
• Chemical energy (matches)
• Nuclear energy (particle accelerator)

Potential energy

Potential energy, symbols E_p, V or Φ, is defined as the work done against a given force (the work of the given force with a minus sign) in changing the position of an object with respect to a reference position (often taken to be infinite separation). If F is the force and s is the displacement,
E_{\rm p} = -\int \mathbf{F}\cdot{\rm d}\mathbf{s}
with the dot representing the scalar product of the two vectors.

Gravitational potential energy

The gravitational force near the Earth's surface varies very little with the height, h, and is equal to the mass, m, multiplied by the gravitational acceleration, g = 9.81 m/s². In these cases, the gravitational potential energy is given by
E_{\rm p,g} = mgh
A more general expression for the potential energy due to Newtonian gravitation between two bodies of masses m1 and m2, useful in astronomy, is
E_{\rm p,g} = -G{{m_1m_2}\over{r}},
where r is the separation between the two bodies and G is the gravitational constant, 6.6742(10)×10⁻¹¹ m³ kg⁻¹ s⁻². In this case, the reference point is the infinite separation of the two bodies. (A numerical comparison of the two expressions is sketched at the end of this section.)

Elastic potential energy

Elastic potential energy is defined as the work needed to compress (or expand) a spring. The force, F, in a spring or any other system which obeys Hooke's law is proportional to the extension or compression, x,
F = -kx
where k is the force constant of the particular spring (or system). In this case, the calculated work becomes
E_{\rm p,e} = {1\over 2}kx^2.
Hooke's law is a good approximation for the behaviour of chemical bonds under normal conditions, i.e. when they are not being broken or formed.

Kinetic energy

Kinetic energy, symbols E_k, T or K, is the work required to accelerate an object to a given speed. Indeed, calculating this work one easily obtains the following:
E_{\rm k} = \int \mathbf{F} \cdot d \mathbf{x} = \int \mathbf{v} \cdot d \mathbf{p}= {1\over 2}mv^2
At speeds approaching the speed of light, c, this work must be calculated using Lorentz transformations, which results in the following:
E_{\rm k} = m c^2\left(\frac{1}{\sqrt{1 - (v/c)^2}} - 1\right)
This equation reduces to the one above it at speeds small compared to c. A mathematical by-product of this work (which is immediately seen in the last equation) is that even at rest a mass has the amount of energy equal to
E_{\rm rest} = mc^2
This energy is thus called rest mass energy.
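As the comparison promised above, referring back to the two gravitational expressions (mgh near the surface, and -G m1 m2 / r in general), a minimal Python check; the Earth mass and radius are standard illustrative values, not taken from the text.

G = 6.6742e-11       # gravitational constant, m^3 kg^-1 s^-2 (value quoted above)
M = 5.972e24         # mass of the Earth, kg (illustrative standard value)
R = 6.371e6          # mean radius of the Earth, m (illustrative standard value)
m = 1.0              # test mass, kg
g = G * M / R**2     # local gravitational acceleration, about 9.82 m/s^2

for h in (1.0, 1.0e3, 1.0e5):
    exact = -G * M * m / (R + h) - (-G * M * m / R)   # change in E_p,g = -G m1 m2 / r when lifting by h
    near_surface = m * g * h                          # near-surface formula E_p,g = m g h
    print(f"h = {h:8.0f} m   exact = {exact:.5e} J   mgh = {near_surface:.5e} J")

For heights small compared with the Earth's radius the two results agree; at h = 100 km the near-surface formula is already off by roughly 1.5 per cent.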
Thermal energy

Examples of the interconversion of energy: thermal energy is converted into
• Mechanical energy (steam turbine)
• Thermal energy (heat exchanger)
• Electric energy (thermocouple)
• Electromagnetic radiation (hot objects)
• Chemical energy (blast furnace)
• Nuclear energy (supernova)

The general definition of thermal energy, symbols q or Q, is also problematic. A practical definition for small transfers of heat is
\Delta q = \int C_{\rm v}{\rm d}T
where C_v is the heat capacity at constant volume.

Electric energy

Examples of the interconversion of energy: electric energy is converted into
• Mechanical energy (electric motor)
• Thermal energy (resistor)
• Electric energy (transformer)
• Electromagnetic radiation (light-emitting diode)
• Chemical energy (electrolysis)
• Nuclear energy (synchrotron)

The electric potential energy of two point charges Q1 and Q2 separated by a distance r is
E_{\rm p,e} = {1\over {4\pi\epsilon_0}}{{Q_1Q_2}\over{r}}
where ε0 is the electric constant of a vacuum, 10⁷/4πc₀² or 8.854188…×10⁻¹² F/m. If the charge is accumulated in a capacitor (of capacitance C), the reference configuration is usually selected not to be infinite separation of charges, but vice versa: charges at an extremely close proximity to each other (so that there is zero net charge on each plate of the capacitor). In this case the work, and thus the electric potential energy, becomes
E_{\rm p,e} = {{Q^2}\over{2C}}
The electric energy delivered by a voltage U driving a charge Q (equivalently a current I for a time t, a power P for a time t, or dissipation in a resistance R) is
E = UQ = UIt = Pt = U^2t/R

Magnetic energy

The potential energy of a magnetic moment m in a magnetic field B is
E_{\rm p,m} = -m\cdot B
while the energy stored in an inductor (of inductance L) when a current I is passing through it is
E_{\rm p,m} = {1\over 2}LI^2.
This second expression forms the basis for superconducting magnetic energy storage.

Electromagnetic fields

Examples of the interconversion of energy: electromagnetic radiation is converted into
• Mechanical energy (solar sail)
• Thermal energy (solar collector)
• Electric energy (solar cell)
• Electromagnetic radiation (non-linear optics)
• Chemical energy (photosynthesis)
• Nuclear energy (Mössbauer spectroscopy)

The energy densities of the electric and magnetic fields are, in SI units,
u_e=\frac{\epsilon_0}{2} E^2
u_m=\frac{1}{2\mu_0} B^2
Electromagnetic radiation, such as microwaves, visible light or gamma rays, represents a flow of electromagnetic energy. Applying the above expressions to the magnetic and electric components of the electromagnetic field, both the volumetric density and the flow of energy in the field can be calculated. The resulting Poynting vector, expressed as
\mathbf{S} = \frac{1}{\mu} \mathbf{E} \times \mathbf{B},
gives the density and the direction of the energy flow. The energy of electromagnetic radiation is quantized (has discrete energy levels). The spacing between these levels is equal to
E = h\nu
where h is the Planck constant, 6.6260693(11)×10⁻³⁴ J s, and ν is the frequency of the radiation. This quantity of electromagnetic energy is usually called a photon. The photons which make up visible light have energies of 270–520 zJ, equivalent to 160–310 kJ/mol, the strength of weaker chemical bonds.
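The visible-photon figures quoted above are easy to reproduce from E = hν = hc/λ. A minimal Python sketch (the 400–700 nm band and the constants used are standard illustrative values):

h = 6.6260693e-34     # Planck constant, J s (value quoted above)
c = 2.99792458e8      # speed of light, m/s
NA = 6.02214e23       # Avogadro constant, 1/mol

for lam_nm in (700.0, 400.0):                 # red and violet ends of the visible band
    E = h * c / (lam_nm * 1e-9)               # photon energy E = h nu = h c / lambda
    print(f"lambda = {lam_nm:5.0f} nm   E = {E / 1e-21:5.1f} zJ   = {E * NA / 1e3:5.1f} kJ/mol")

This gives roughly 280–500 zJ and 170–300 kJ/mol, consistent with the ranges quoted above.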
Chemical energy

Examples of the interconversion of energy: chemical energy is converted into
• Mechanical energy (muscle)
• Thermal energy (fire)
• Electric energy (fuel cell)
• Electromagnetic radiation (glowworms)
• Chemical energy (chemical reaction)

Chemical energy is the energy due to associations of atoms in molecules and various other kinds of aggregates of matter. It may be defined as the work done by electric forces during the re-arrangement of electric charges, electrons and protons, in the process of aggregation. If the chemical energy of a system decreases during a chemical reaction, it is transferred to the surroundings in some form of energy (often heat); on the other hand, if the chemical energy of a system increases as a result of a chemical reaction, it is by converting another form of energy from the surroundings.

The chemical energy as defined above is also referred to by chemists as the internal energy, U: technically, this is measured by keeping the volume of the system constant. However, most practical chemistry is performed at constant pressure and, if the volume changes during the reaction (e.g. a gas is given off), a correction must be applied to take account of the work done by or on the atmosphere to obtain the enthalpy, H:
ΔH = ΔU + pΔV
Two commonly used units of chemical energy content are
• tonne of coal equivalent (TCE) = 29 GJ
• tonne of oil equivalent (TOE) = 41.87 GJ
Simple examples of chemical energy are batteries and food. When you eat, the food is digested and turned into chemical energy, which can be transformed to kinetic energy.

Nuclear energy

Examples of the interconversion of energy: nuclear binding energy is converted into
• Mechanical energy (alpha radiation)
• Thermal energy (Sun)
• Electric energy (beta radiation)
• Electromagnetic radiation (gamma radiation)
• Chemical energy (radioactive decay)
• Nuclear energy (nuclear isomerism)

Nuclear potential energy, along with electric potential energy, provides the energy released from nuclear fission and nuclear fusion processes. The result of both these processes is nuclei in which strong nuclear forces bind nuclear particles more strongly and closely. Weak nuclear forces (different from strong forces) provide the potential energy for certain kinds of radioactive decay, such as beta decay. The energy released in nuclear processes is so large that the relativistic change in mass (after the energy has been removed) can be as much as several parts per thousand.

Nuclear particles (nucleons) like protons and neutrons are not destroyed (law of conservation of baryon number) in fission and fusion processes. A few lighter particles may be created or destroyed (for example, in beta minus and beta plus decay, or electron capture decay), but these minor processes are not important to the immediate energy release in fission and fusion. Rather, fission and fusion release energy when collections of baryons become more tightly bound, and it is the energy associated with a fraction of the mass of the nucleons (but not the whole particles) which appears as the heat and electromagnetic radiation generated by nuclear reactions. This heat and radiation retains the "missing" mass, but the mass is missing only because it escapes in the form of heat and light, which retain the mass and conduct it out of the system where it is not measured.

The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space, but during this process the number of total protons and neutrons in the Sun does not change. In this system, the light itself retains the inertial equivalent of this mass, and indeed the mass itself (as a system), which represents 4 million tons per second of electromagnetic radiation moving into space. Each of the helium nuclei formed in the process is less massive than the four protons from which it was formed, but (to a good approximation) no particles or atoms are destroyed in the process of turning the Sun's nuclear potential energy into light.
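The figure of about 4 million tonnes per second quoted above can be cross-checked against the Sun's luminosity via E = mc². A minimal Python sketch (the solar luminosity is a standard value assumed here, not taken from the text):

c = 2.99792458e8            # speed of light, m/s
L_sun = 3.846e26            # solar luminosity, W (standard value, assumed)
mass_rate = L_sun / c**2    # mass carried away by radiation each second, from E = m c^2
print(f"{mass_rate:.2e} kg/s  ~  {mass_rate / 1e3 / 1e6:.1f} million tonnes per second")

This gives roughly 4.3 million tonnes per second, in line with the statement above.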
Surface energy

If there is any kind of tension in a surface, such as a stretched sheet of rubber or material interfaces, it is possible to define surface energy. In particular, any meeting of dissimilar materials that do not mix will result in some kind of surface tension. If there is freedom for the surfaces to move then, as seen for example in capillary surfaces, the minimum-energy configuration will, as usual, be sought.

Transformations of energy

One form of energy can often be readily transformed into another with the help of a device: for instance, a battery, from chemical energy to electric energy; a dam, from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at a maximum. At its lowest point the kinetic energy is at a maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these processes is perfect, and the pendulum will continue swinging forever.

Energy can be converted into matter and vice versa. The mass-energy equivalence formula E = mc², derived independently by Albert Einstein and Henri Poincaré, quantifies the relationship between mass and rest energy. Since c² is extremely large relative to ordinary human scales, the conversion of mass to other forms of energy can liberate tremendous amounts of energy, as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (particles) are found in high-energy nuclear physics.

In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. An irreversible process in thermodynamics is one in which energy is dissipated into empty quantum states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, however, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work, or to be transformed to other usable forms of energy, grows less and less.

Law of conservation of energy

Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed. It can only be transformed. Most kinds of energy (with gravitational energy being a notable exception) are also subject to strict local conservation laws. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. Conservation of energy is the mathematical consequence of the translational symmetry of time (that is, the indistinguishability of time intervals taken at different times); see Noether's theorem.

Because energy is the quantity which is canonically conjugate to time, it is impossible to define the exact amount of energy during any definite short time interval, making it impossible to apply the law of conservation of energy over such intervals; this is expressed by the uncertainty relation
\Delta E \Delta t \ge \frac {h} {2 \pi}
This must not be considered a "violation" of the law. We know the law still holds, because a succession of short time periods does not accumulate any violation of conservation of energy.

Energy and life

Typical reactions by which organisms release chemical energy include the oxidation of a sugar (glucose) and of a fat:
C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O
and some of the energy is used to convert ADP into ATP:
ADP + HPO42− → ATP + H2O
Daily food intake of a normal adult: 6–8 MJ
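As a small unit-conversion check of the quoted daily intake (a minimal sketch; 1 dietary Calorie = 1 kcal = 4184 J, a standard conversion not taken from the text):

kcal = 4184.0                         # joules per kilocalorie (dietary Calorie)
for E_MJ in (6.0, 8.0):
    print(f"{E_MJ:.0f} MJ = {E_MJ * 1e6 / kcal:.0f} kcal")
# 6-8 MJ per day corresponds to roughly 1400-1900 dietary Calories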
What is Real? There’s a new popular book out this week about the interpretation of quantum mechanics, Adam Becker’s What is Real?: The Unfinished Quest for the Meaning of Quantum Physics. Ever since my high school days, the topic of quantum mechanics and what it really means has been a source of deep fascination to me, and usually I’m a sucker for any book such as this one. It’s well-written and contains some stories I had never encountered before in the wealth of other things I’ve read over the years. Unfortunately though, the author has decided to take a point of view on this topic that I think is quite problematic. To get an idea of the problem, here’s some of the promotional text for the book (yes, I know that this kind of text sometimes is exaggerated for effect): A mishmash of solipsism and poor reasoning, [the] Copenhagen [interpretation] claims that questions about the fundamental nature of reality are meaningless. Albert Einstein and others were skeptical of Copenhagen when it was first developed. But buoyed by political expediency, personal attacks, and the research priorities of the military industrial complex, the Copenhagen interpretation has enjoyed undue acceptance for nearly a century. The text then goes to describe Bohm, Everett and Bell as the “quantum rebels” trying to fight the good cause against Copenhagen. Part of the problem with this good vs. evil story is that, as the book itself explains, it’s not at all clear what the “Copenhagen interpretation” actually is, other than a generic name for the point of view the generation of theorists such as Bohr, Heisenberg, Pauli, Wigner and von Neumann developed as they struggled to reconcile quantum and classical mechanics. They weren’t solipsists with poor reasoning skills, but trying to come to terms with the extremely non-trivial and difficult problem of how the classical physics formalism we use to describe observations emerges out of the more fundamental quantum mechanical formalism. They found a workable set of rules to describe what the theory implied for results of measurements (collapse of the state vector with probabilities given by the Born rule), and these rules are in every textbook. That there is a “measurement problem” is something that most everyone was aware of, with Schrodinger’s cat example making it clear. Typically, for the good reason that it’s complicated and they have other topics they need to cover, textbooks don’t go into this in any depth (other than often telling about the cat). As usual these days, the alternative to Copenhagen being proposed is a simplistic version of Everett’s “Many Worlds”: the answer to the measurement problem is that the multiverse did it. The idea that one would also like the measurement apparatus to be described by quantum mechanics is taken to be a radical and daring insight. The Copenhagen papering over of the measurement problem by “collapse occurs, but we don’t know how” is replaced by “the wavefunction of the universe splits, but we don’t know how”. Becker pretty much ignores the problems with this “explanation”, other than mentioning that one needs to explain the resulting probability measure. String theory, inflation and the cosmological multiverse are then brought in as supporting Many Worlds (e.g. that probability measure problem is just like the measure problem of multiverse cosmology). 
There’s the usual straw man argument that those unhappy with the multiverse explanation are just ignorant Popperazzi, unaware of the subtleties of the falsifiability criterion: Ultimately, arguments against a multiverse purportedly based on falsifiability are really arguments based on ignorance and taste: some physicists are unaware of the history and philosophy of their own field and find multiverse theories unpalatable. But that does not mean that multiverse theories are unscientific. For a much better version of the same story and much more serious popular treatment of the measurement problem, I recommend a relatively short book that is now over 20 years old, David Lindley’s Where does the Weirdness Go?. Lindley’s explanation of Copenhagen vs. Many Worlds is short and to the point: The problem with Copenhagen is that it leaves measurement unexplained; how does a measurement select one outcome from many? Everett’s proposal keeps all outcomes alive, but this simply substitutes one problem for another: how does a measurement split apart parallel outcomes that were previously in intimate contact? In neither case is the physical mechanism of measurement accounted for; both employ sleight of hand at the crucial moment. Lindley ends with a discussion of the importance of the notion of decoherence (pioneered by Dieter Zeh) for understanding how classical behavior emerges from quantum mechanics. For a more recent serious take on the issues involved, I’d recommend reading something by Wojciech Zurek, for instance this article, a version of which was published in Physics Today. Trying to figure out what “interpretation” Zurek subscribes to, I notice that he refers to an “existential interpretation” in some of his papers. I don’t really know what that means. Unlike most discussions of “interpretations”, Zurek seems to be getting at the real physical issues involved, so I think I’ll adopt his (whatever it means) as my chosen “interpretation”. Update: For another take on much the same subject, out in the UK now is Philip Ball’s Beyond Weird. The US version will be out in the fall, and I think I’ll wait until then to take a look. In the meantime, Natalie Wolchover has a review at Nature. Update: There’s a new review of What is Real? at Nature. Update: Jim Holt points out that David Albert has a review of the Becker book in the latest New York Review of Books. I just read a print copy last night, presumably it should appear online soon here [Review now available here]. Update: Some comments from Adam Becker, the author of the book. I won’t try to rebut everything Peter has said about my book—there are some things we simply disagree about—but I would like to clear up two statements he makes about the book that are possibly misleading: Peter says that I claim the answer to the measurement problem is that “the multiverse did it.” But I don’t advocate for the many-worlds interpretation in my book. I merely lay it out as one of the reasonable available options for interpreting quantum mechanics (and I discuss some of its flaws as well). I do spend a fair bit of time talking about it, but that’s largely because my book takes a historical approach to the subject, and many-worlds has played a particularly important role in the history of quantum foundations. But there are other interpretations that have played similarly important roles, such as pilot-wave theory, and I spend a lot of time talking about those interpretations too. I am not a partisan of any particular interpretation of quantum mechanics. 
Second, I don’t think it’s quite fair to say that I paint Bohr as a villain. I mention several times in my book that Bohr was rather unclear in his writing, and that sussing out his true views is dicey. But what matters more that Bohr’s actual views is what later generations of physicists generally took his views to be, and the way Bohr’s work was uncritically invoked as a response to reasonable questions about the foundations of quantum mechanics. It’s true that this subtlety is lost in the jacket flap copy, but that’s publishing for you. Also, for what it’s worth, I do like talking about reality as it relates to quantum mechanics. But I suppose that’s hardly surprising, given that I just wrote a book on quantum foundations titled “What Is Real?”. I’d be happy to discuss all of this further over email if anyone is interested (though I’m pretty busy at the moment and it might take me some time to respond). This entry was posted in Book Reviews, Quantum Mechanics. Bookmark the permalink. 74 Responses to What is Real? 1. Carl Zetie says: My 13 year old son has been reading some of the introductory texts on QM and his reaction on discovering Everett was to ask me, “So Everett’s Many Worlds has the same problem as the Copenhagen interpretation, plus an enormous number of additional universes that are not detectable. How is that better?”. Good kid, but I’m going to need to find him a math tutor soon because he is overtaking my undergrad education. 2. Jim Baggott says: Give me a little more time. I hope to be able to set the record straight in a book to be published probably in 2020. You can read the preamble here: http://www.jimbaggott.com/articles/a-game-of-theories/ 3. Dear Prof. Woit, I don’t understand what you mean by “the wavefunction of the universe splits, but we don’t know how”. We know exactly how; this is the whole point of Many-Worlds: it is just normal Hamiltonian evolution. What do you think is lacking? 4. Ben Jones says: It’s fine to say that ‘in either interpretation, we don’t know what happens at the moment of measurement’. But I disagree that this leaves MW and Copenhagen on an equal footing. Doesn’t Copenhagen posit an additional _mechanism_, that of ‘collapse’, whereas Everett’s view wouldn’t? And are there not additional problems with Copenhagen, such as the ‘selection’ of a single branch is the only non-deterministic element of the whole system? @Carl the non-detectability of the ‘other universes’ follows naturally from how we understand QM to work, it doesn’t count as a piece of evidence against it if we wouldn’t _expect_ to be able to detect them. An unreasonable swish of Occam’s Razor. 5. David Brown says: “Popperazzi” (play on Italian “paparazzi”) is the preferred spelling. https://www.philosophersmag.com/index.php/footnotes-to-plato/77-string-theory-vs-the-popperazzi “String Theory vs the Popperazzi” by Massimo Pigliucci, 2015 6. Peter Woit says: David Brown, Thanks. Fixed. Mateus Araujo, Just saying “the Schrodinger equation does it” doesn’t solve the measurement problem (for instance, the preferred basis problem). All you’re doing is saying that, you don’t know how, but the Schrodinger equation is going to somehow give precisely the same implications for physics as the collapse postulate. Ben Jones, Copenhagen says collapse happens in a measurement, it’s silent on what the theory of collapse is (adding specific new physics to explain collapse is not Copenhagen but something else). 7. 
hdz says: Peter, I do not intend to enter the details of your discussion, since I have got tired of it. (If somebody should be interested, he/she may look at my website http://www.zeh-hd.de – especially the first two papers under "Quantum Theory".) However, as a historical remark let me point out that in my understanding, von Neumann and Wigner were never part of the Copenhagen interpretation – rather they objected to it more or less openly (starting in Como). When Wigner used the term "orthodox interpretation", he exclusively meant von Neumann's book (including the collapse as a physical process – not just a "normal" increase of information), and I am told that he complained that, as a consequence, he was never invited to Copenhagen. Bohr always disagreed with any attempt to analyze the measurement problem in physical terms. Essentially, I agree with Mateus Araújo and Ben Jones. Regards, Dieter

8. Jim Baggott says: This kind of discussion can get hopelessly confused very quickly. To my knowledge, the 'collapse of the wavefunction' was never part of the Copenhagen interpretation, which is based on some kind of unexplained 'separation' between the quantum and classical realms, or what John Bell would refer to as the 'shifty split'. I believe the notion of a 'collapse' was introduced as a 'projection postulate' by von Neumann in his book Mathematical Foundations of Quantum Mechanics, first published in German in 1932 (in my English translation the projection postulate – a statistical, discontinuous process in contrast to the continuous, unitary evolution of the wavefunction – appears on p. 357). Influenced, I believe, by Leo Szilard, von Neumann speculates that the collapse might be triggered by the intervention of a (human) consciousness – what he refers to on p. 421 as the observer's 'abstract ego'. All this really shouldn't detract from the main point. The formalism is the formalism and we know it works (and we know furthermore that it doesn't accommodate local or crypto non-local hidden variables). The formalism is, for now, empirically unassailable. All *interpretations* of the formalism are then exercises in metaphysics, based on different preconceptions of how we think reality could or should be, such as deterministic ('God does not play dice'). Of course, the aim of such speculations is to open up the possibility that we might learn something new, and I believe extensions which seek to make the 'collapse' physical, through spacetime curvature and/or decoherence, are well motivated. But until such time as one interpretation or extension can be demonstrated to be better than the other through empirical evidence, the debate (in my opinion) is a philosophical one. I'm just disappointed (and rather frustrated) by the apparent rise of a new breed of Many Worlds Taliban who claim – quite without any scientific justification – that the MWI is the only way and the one true faith.

9. Mateus Araújo says: Peter Woit, There are dozens of papers explaining how you can model the measurement process in unitary quantum mechanics via entanglement and decoherence; Zeh's and Żurek's, among them. It's not as if anybody is saying "the Schrödinger equation did it" without explaining how. And the measurement problem is usually defined as the incompatibility between collapse and unitary evolution. So even just saying "the Schrödinger equation did it" does solve the problem, on this level.
Of course, one must also explain emergence of classicality, permanence of records, probability, etc., but this is not the measurement problem. 10. Peter Woit says: Thanks Dieter. I should point out that Dieter Zeh is one of the major subjects of the book. He’s perhaps the best example of the story Becker wants to fit everything into: a “quantum rebel” who made significant advances in our understanding of foundations and the measurement problem, advances which were not initially recognized due to an entrenched “Copenhagen” ideology that denigrated any such work. Partly due to his work, I think for many decades now mindless Copenhagen ideology has not been such a problem (but we’re well on our way to mindless Many Worlds ideology being a major problem). As he notes and the book describes, there’s controversy over who was on the Copenhagen bus, partly because it was never clear exactly what the Copenhagen Interpretation was, and partly because many people took somewhat different points of view at different times. 11. Peter Woit says: Mateus Araújo, To me the (non-trivial) measurement problem is exactly the “emergence of classicality, permanence of records, probability, etc.,” problems you list, with entanglement and decoherence some of the insights needed to solve them. If you define the measurement problem as you do, then the solution you give by having quantum mechanics apply to the macroscopic world is a trivial one which most everyone expected anyway. 12. Peter Woit, I’m claiming that this is not “my” definition of the measurement problem, but rather *the* standard definition. See for example Maudlin’s “Three measurement problems”, a canonical reference on the subject. Of course, the solution *is* trivial; you merely need to give up collapse, or introduce hidden variables, or introduce physical collapse. The problem is that most people don’t want to do any of these, hence they are stuck with the measurement problem. Look, I agree wholeheartedly with you that the interesting problems are those I listed; I’m just insisting on narrow definitions and standard terminology, otherwise it becomes impossible to talk to each other. 13. Peter Woit says: Mateus Araújo, This whole subject is full of endless and heated arguments over problems that are meaningless and/or lacking in substance. Your definition of “the measurement problem” seems to me to insist on sticking to the substance-free aspect of a substantive problem, thus encouraging substance-free-discussion. What’s the correct term to refer to the substantive problem? By the way, on the meaningless side, Becker’s book has a lot about “What is Real?” and claims that the problem with Copenhagen is that it denies the reality of the microscopic world, a discussion I thought it best to just ignore. 14. Peter Woit, Indeed it is! And I think a great many heated and meaningless arguments happen because the parties are talking about different things =) But I don’t know which problem you refer to as “the substantive problem”. Do you have in mind a collective name for the three I listed? I don’t think there exists a standard name for this set, except maybe as “problems of the Many-Worlds interpretation”. They are different problems, that are usually studied separately, and referred to by their individual names. 15. Peter Woit says: Mateus Araújo, I guess I always thought “what happens to the cat?” was the substantive measurement problem, so I’m looking for a name for that. 16. 
Blake Stacey says: Asher Peres once wrote that there are at least as many Copenhagen Interpretations as people who use the term, probably more. Among other problems, saying “the Copenhagen Interpretation” glosses over substantial differences between Heisenberg and Bohr. Despite this issue being discussed by Pauli, von Neumann, …. 17. Peter Woit, You want unitarity to hold, so that atoms are described by quantum mechanics, and you want collapse to happen, so that the cat is definitely dead, but you can’t have both: the measurement problem! And the solution is again trivial, threefold. Give up on collapse: there are worlds with live cats and worlds with dead cats. Introduce hidden variables: the cat was always dead, you just didn’t know it. Introduce physical collapse: the superposition collapsed on the level of the Geiger counter, so that the cat was always dead. Maybe the measurement problem is like the problem of having your cake and eating it too. You know it’s impossible, but you really really want to! 18. Jim Baggott says: I think we can agree that this is all a lot of fun. I’ve been studying these problems for more than 25 years, and I can discern relatively little progress in this time. But I’ve come to believe that it is helpful to distinguish between ‘reality’ (however you want to think about it) and the reality or otherwise of the *representation* we use to describe it. We can likely all agree that the moon is there when nobody looks, and even that invisible entities like electrons really do exist independently of observation (‘if you can spray them then they are real’, as philosopher Ian Hacking once declared). But this doesn’t necessarily mean that the concepts we use in our representation of this reality should be taken literally as real, physical things. If we choose to agree that the wavefunction isn’t real (as Carlo Rovelli argues in his relational interpretation), then the QM formalism is simply an algorithm for coding what we know about a physical system so that we can make successful predictions. All the mystery then goes away – there is no problem of non-locality, no collapse of the wavefunction, and no ‘spooky’ action at a distance. I don’t particularly like this interpretation, as it is obviously instrumentalist and can’t answer our burning questions about how nature actually does that. But it does help to convince me that this endless debate over interpretation is really a philosophical debate, driven by everybody’s very different views on what ‘reality’ ought to be like. And, as such, we’re unlikely to see a resolution anytime soon… 19. Peter Woit says: I’d rather do almost anything with my time than try and moderate a discussion of what is “real” and what isn’t. Any further discussion of ontology will be ruthlessly suppressed. 20. Per Östborn says: Trying to stick to physical issues, I have always wanted to see proponents of the many worlds interpretation derive the spectrum of the hydrogen atom from some clearly defined postulates (or solve some other basic physical problem). Until then it is not clear to me what the theory actually is saying. If anyone can refer to such a calculation I would be glad. Clearly it is not enough to postulate the unitary evolution of the Schrödinger equation, but I have never been able to pinpoint the other postulates of the many worlds interpretation. 21. George Herold says: Thanks for the nice post and discussion. I’m an experimentalist who is more comfortable with opamps than operators. 
I have perhaps a naive question: do any of these books discuss the de Broglie-Bohm (pilot wave) theory? Since all these interpretations are just a matter of taste, I find this hidden variable type theory easiest to swallow. In a double slit, the particle ‘knows’ about both slits, but still goes through (only) one of them. Is there any reason not to be happy with this picture? 22. Peter Woit says: Per Östborn, The calculation in Many Worlds is exactly the same textbook calculation as in Copenhagen. It’s the same Schrodinger equation and you solve for its energy eigenvalues the same way. That is the problem: there’s no difference from the standard QM textbook. The Many Worlds people might claim as an advantage over Copenhagen that you can imagine doing much harder calculations about how a hydrogen atom interacts with its environment during a measurement process, giving insight into the question of why we see energy eigenstates and the Born rule. My point of view would be that Copenhagen never said you shouldn’t do exactly those calculations if you wanted to better understand what was happening during a measurement, it was just a rule for when you couldn’t do such calculations. 23. Peter Woit says: George Herold, Becker’s book has a long and detailed section about the story of Bohm and Bohmian mechanics which you might find of interest. Personally I don’t find Bohmian mechanics compelling, both for reasons Becker mentions and for others having to do with the much greater mathematical simplicity of the conventional formalism when applied to our best fundamental theories. Sorry, but I really don’t want to carry on more discussion here of Bohmian mechanics. I realize a lot of people are interested in it, but I’m not at all interested, so such a discussion should be conducted elsewhere. 24. Lee Smolin says: Dear Peter and all, Can I suggest a basic distinction between different approaches to quantum foundations? The first class hypothesizes that the standard quantum mechanics is incomplete as a physical theory, because it fails to give a complete description of individual phenomena. If so, the theory requires a completion, that is a theory which incorporates additional degrees of freedom, and/or additional dynamics. Pilot wave theory and physical collapse models are examples of these. The second class of approaches takes it as given that the theory is complete, so the foundational puzzles are to be addressed by modifying how we interpret the equations of the theory. People engaged in the first class of approaches are trying to solve a very different kind of research problem than those in the second class. I would submit that both are worth pursuing, but that progress in physics will eventually depend on the success of the first kind of approach. 25. Peter Woit says: Thanks Lee, That’s a very useful distinction. The Becker book contains a lot of material about such attempts to complete quantum mechanics with new physics that would entail a different resolution of the measurement problem. I didn’t discuss these here, largely because I’m much less optimistic than you that the search for this kind of completion will be fruitful. 26. Per Östborn says: Dear Peter, It is not clear to me, but my impression is that MWI proponents claim that some of the standard postulates of QM can be removed (or replaced by other ones). Then their calculations from first principles must be different from the textbook ones, since they are forbidden to use the removed postulates, of course. 
As you write, they seem to want to gain insight into why we see energy eigenvalues and why we should use the Born rule. This suggests that they argue that these features of QM can be derived from a smaller or different set of postulates than the standard one. 27. Peter Woit says: Per Östborn, I’m no expert and don’t want to get into the details of exactly what “Many Worlds” is, I think there are many versions. One aspect of it though is yes, the idea that you can derive the relation to classical observables and the Born rule, not postulate them. There’s active research and debate on the extent to which you really can do this. I just want to point out that you naturally ask exactly the same questions in Copenhagen, whenever you decide you want to analyze in more detail what’s happening during a measurement. It’s exactly the same equation and physics. 28. Edward says: I think the most interesting Copenhagen/anti-Copenhagen split was between Heisenberg, Bohr, and others on the one hand and Einstein, Schrodinger, and to some extent de Broglie on the other. There was a generational split and Heisenberg’s views may have prevailed because the older physicists died off. 29. Narad says: Any further discussion of ontology will be ruthlessly suppressed. I think I’m going to have to have that printed on a T-shirt. 30. Chris W. says: I think it would be fair to say that the acceptance of the Copenhagen interpretation, or what various people took it to be, was substantially a function of the high regard in which Bohr and Heisenberg were held as pioneers in the development of quantum mechanics, combined with a strong desire to just “get on with it”—find a way to state problems and do calculations that most people felt they understood well enough to do research and make sensible progress. Later, when a great deal of research had been done and generally accepted progress had been made, it’s understandable that the “yes, but…” questions about quantum theory would be resurgent, especially with the conundrums of quantum gravity looming in the background, along with the growing exploration of quantum phenomena at mesoscopic scales. To my mind the current situation is reminiscent in some ways of Mach’s objections to the conventional understanding of Newtonian dynamics at a time (the mid- to late 19th century) when such concerns had little apparent significance for most working scientists. All this appeared in a different light with the advent of special and general relativity. 31. Peter Woit, I think this is pretty much correct; one can also simply postulate the Born rule in Many-Worlds, as done in single-world theories, but it feels a bit wrong, as one should also explain what probabilities are. Hence the interest in deriving the Born rule in Many-Worlds, as this should shed some light on the issue. One can, though, adapt these derivations also to single-world quantum mechanics, as done by Saunders here. I don’t think, however, that the derivations we have are completely satisfactory, and I’m personally trying to improve on them. 32. Peter Shor says: David Mermin, who didn’t particularly like the Copenhagen interpretation, found himself driven to use it when he was teaching quantum mechanics to computer scientists who didn’t know much about physics. (See his article Copenhagen Computation.) Bohr, at Copenhagen, similarly was teaching quantum mechanics to physicists who didn’t know much about quantum mechanics. So maybe this explained why he gravitated towards the Copenhagen interpretation. 
And then maybe Bohr came up with all this weird extraneous philosophy to try to convince himself that what he was teaching them actually made some sense. 33. Paddy says: Jim Baggott (or anyone else for that matter): You attribute “collapse” or reduction of the wavepacket to von Neumann (1932). A clear statement (albeit much shorter) is in Dirac’s Principles, Sect 10. Unfortunately our University Library foolishly left the first edition (1930) on the open shelves, and it has long since vanished, so I’ve only traced this statement back to the 2nd ed… My question to anyone who can lay hands on a copy is, is this bit in the original edition, or did Dirac add it after reading von Neumann? As I probably learned from one of Jim’s books, the “reduction” of the wavepacket is Heisenberg’s terminology, from the uncertainty principle paper, so von Neumann or Dirac were in any case just sharpening up Heisenberg’s insight. 34. Tim Maudlin says: It is a bit hard to know how to comment on a discussion of a book called “What is Real?” when it has been asserted that “Any further discussion of ontology will be ruthlessly suppressed.” The question “What is real?” just is the question “What exists?” which is in turn just the question “What is the true physical ontology?” which is identical to the question “Which physical theory is true?”. Peter Woit begins by writing “Ever since my high school days, the topic of quantum mechanics and what it really means has been a source of deep fascination to me…”. But that just is the question: What might the empirical success of the quantum formalism imply about what is real? or What exists? or What is the ontology of the world? To say you are interested in understanding the implications of quantum mechanics for physical reality but then ruthlessly suppress discussions of ontology is either to be flatly self-contradictory or to misunderstand the meaning of “ontology” or of “real”. That is also reflected in the quite explicit rejection of any discussion of two of the three possible solutions to the Measurement Problem: pilot wave theories and objective collapse theories. Has Foundations made real progress since Copenhagen? Absolutely! We have two key theorems: Bell’s theorem and the PBR theorem. The first tells us that non-locality is here to stay, so Einstein’s main complaint about the “standard” account—namely its spooky action-at-a-distance—cannot be avoided. Anyone who thinks that they have a way around it is mistaken. That lesson has not yet been learned, even though Bell’s result is half a century old. PBR proves that the wavefunction assigned to a system reflects some real physical aspect of the individual system: two systems assigned different wavefunctions are physically different. So if Carlo Rovelli or the QBists or Rob Spekkens thinks that the wavefunction isn’t real, in the sense that it does not reflect some real physical feature of an individual system, then PBR has proven them wrong. Bell and PBR lay waste to many approaches to understanding quantum theory, including Copenhagen and QBism. Jim Baggott suggests that we can “choose to agree” that the wavefunction isn’t real and somehow thereby eliminate non-locality and spooky action at a distance. No you can’t, for reasons given by both Bell and PBR. A theorem is a theorem, and we are not free to ignore it. The choices are: additional (non-hidden!) variables (e.g. Bohm); objective collapses (e.g. GRW); Many Worlds (e.g. Everett). That’s it. 
Anything else has been ruled out by the empirical success of the quantum predictions. And this is no more a “philosophical” debate than any other dispute in physics is. It is a debate about what the correct physical understanding of the world is. Only once these real advances in our understanding have been generally acknowledged will we be in a position to make further progress. 35. Peter Woit says: Tim Maudlin, Your comment has some similarities to Lee Smolin’s, wanting to focus on the “real” question of whether QM is all there is (his “second class”, your “Everett” choice) or new physics is needed (his “first class”, your Bohm or GRW). While there’s a lot in Becker’s book about the history of “new physics” proposals, and you and Smolin are quite right that this is a true fundamental question, more significant than empty discussions of “interpretations” corresponding to identical physics, the problem here is that I’m just not interested. As with any proposals for “new physics”, people make their own evaluations based on different experiences, and it’s a good thing that others who see things differently think more about such proposals. This is a case where no such new physics proposals come with any experimental evidence in their favor, so personal criteria of what is worth spending time on are all one has. My own criteria for paying attention to speculative proposals weight heavily the mathematical structures involved. The proposals I’ve seen for supplementing QM invoke mathematical structures that to me seem quite a bit more complicated and ugly than the ones of QM, thus my lack of interest. Maybe someday someone will come up with a different proposal that’s more appealing. Til then, I’ll keep deciding not to spend more time thinking about such things. 36. hdz says: I do not always agree with Tim Maudlin, but in this case I do almost completely. Clearly, the philosophical debate about reality is worthless for physicists, but physicists usually (and tacitly) understand it in the sense of a conceptually consistent description of “Nature” (of what we observe with our senses). So it is certainly not compatible with complementarity. Of course, everybody is free to propose new concepts and theories, but I have decided to wait until such an author presents some empirical success (what else can I do?). Mathematical structures have to be formally consistent, Peter, but that is no sufficient argument for their application to the empirical world. (Just consider Tegmark’s unreasonable Level IV of multiverses.) This is not only an argument against String Theory! (I have written a comment against Strings (or M-theory) in German long before “Not even wrong” appeared; it can be found on my website under “Deutsche Texte”.) So my conclusion from Tim’s three choices is that the first two are well possible (though not more), but we have to wait for empirical support. Then only Everett remains (until being falsified) as yet as a global unitary theory. Let me add that the concept of decoherence, which has clearly been experimentally confirmed, was derived from an extension of unitarity to the environment, while Everett is its further extension to the observer. So why is it “unmotivated” or incomplete? In my opinion there are no arguments but only emotions against Everett! So please take some time to study my website! 37. a1 says: Thanks Peter for pointing out that the MWI is basically an inversion of the standard one, a point rarely mentioned. So the ‘real’ problem remains untouched: what are probabilities? 
Actually, surprisingly few physicists seem to be aware of the existence of Bertrand’s paradox, something which has disturbing implications. The stance ‘it doesn’t matter what probabilities are as long as we know how to calculate them’ or saying ‘there is an axiomatic framework for them’ are just ways to avoid the question. Does something physical collapse or is it our ignorance that ends in a measurement? It is rather obvious that a meaningful distinction between a description and its referent (to avoid saying ‘reality’) can become badly twisted when it is not clear what is not known. 38. Dave Miller says: Peter Shor, You wrote: Does anyone ever teach Intro QM without implicitly using the Copenhagen interpretation? Or, to put it the other way around, does anyone teach Intro QM using MWI or the Bohm model? To connect with your own work, it does seem to me that MWI is the “natural” way to think about quantum computing. Not that I think MWI is true: I think it has two fatal flaws — the “preferred basis” problem that Peter Woit has mentioned and the probability-measure problem. Do you yourself see any affinity between MWI and quantum computing? Dave Miller 39. @Paddy Here is the first edition of Dirac for you to check: https://archive.org/details/in.ernet.dli.2015.177580 40. Peter Woit says: Commenter atreat wants to argue with Tim Maudlin, but also points to a 319-comment thread at Scott Aaronson’s blog which includes (besides a lot of other interesting comments) extensive discussion with Maudlin about these topics. I encourage those interested in this argument to visit and not to try and restart it here. 41. GoletaBeach says: To me, the more interesting question is not about whether theory is complete, but… what experiment will displace or extend QFT as the underpinning of all microscopic theory? David Mermin sometimes pointed out that the fact we still use QFT with no new constants of nature was a failing of experimental high energy physics. I tend to agree with him, but still… increasing energy is still an enormously attractive route to finding real chinks in QFT. Also, delayed choice types of measurements are attractive. Unattractive to me… characterizing experiment as a servant of whatever trend is popular in the theory community… strings, multiverses, etc. 42. Jim Baggott says: It’s certainly true that the idea that a quantum system ‘jumps’ into some eigenstate as the result of an observation or measurement is implicit in the early history of quantum mechanics, arguably as far back as Bohr’s 1913 atomic theory. Despite statements by Heisenberg and Dirac implying such ‘jumps’ as part of the measurement process, the reason we know this as ‘von Neumann’s theory of measurement’ is because he was the first to put it forward as an axiom of the theory, the collapse or projection being a statistical ‘process of the first kind’ vs. the unitary evolution of the wavefunction as a causal ‘process of the second kind’. I tend to trust Max Jammer on this – see his ‘Philosophy of Quantum Mechanics’, p. 474. Incidentally, von Neumann treated the apparatus as a quantum system, so departed from the Copenhagen interpretation’s ‘shifty split’ between quantum/classical domains. I sense the discussion of your review of Adam Becker’s book is now pretty well exhausted, but if you will allow I’d like to make a couple of final observations. I fully understand why you like to keep the discussion focused on what you consider to be meaningful questions – as posed here by Lee Smolin and to an extent by Tim Maudlin. 
I don’t want to make this seem too black-and-white but, generally, if you don’t think the wavefunction is ‘real’ then you’re likely to be satisfied that quantum mechanics is complete (Copenhagen, Rovelli, consistent histories, QBism). But if you prefer to think that the wavefunction is physically real (Einstein, Schrodinger, Bell, Leggett) then obviously something is missing – quantum mechanics is incomplete and the task is to find ways to reinterpret or extend the theory to explain away the mystery, and hopefully deepen our understanding of nature at the same time. Although the MWI in principle adds nothing to the QM formalism, I’d argue that it ‘completes’ QM by adding an infinite or near-infinite number of parallel universes. But all this comes back to a judgement about the ontological status of the wavefunction and hence a choice of philosophical position. I don’t think any kind of ‘new mathematics’ will help to resolve these questions. In fact, my studies of the historical development of QM suggest instead a triumph of physical intuition over mathematical rigour and consistency, which is why von Neumann felt the need to step in and fix things in 1932. My money is therefore on new physics, as and when this might become available, perhaps as part of the search for evidence to support a quantum theory of gravity. Can I leave you with a last thought? In the famous experiments of Alain Aspect and his colleagues in the early 80s, they prepared an entangled two-photon state using cascade emission from excited Ca atoms. We write this as a linear superposition of left and right circularly polarised photons, based on our experience with this kind of system. The apparatus then measures correlations between horizontally and vertically polarised photon pairs, detected within a short time window. So we *re-write the wavefunction in terms of the measurement eigenstates*, and we use the modulus-squares of the projection amplitudes to tell us what to expect. Now I can’t look at this procedure without getting a very bad feeling that all we’re really doing here is mathematically *coding* our knowledge in a way that gives us correct predictions (which we then confirm through experience). I really don’t like it, but I confess I’m very worried. 43. Peter Woit says: My semi-joking threat to suppress ontology is due to the fact that I seriously don’t know what you or others mean when they say something is/is not “real”, or “physically real”. I have the same problem with Lee Smolin when he writes about time being “real/not real”. Hanging one’s treatment of a complex issue on this highly ambiguous four-letter word seems to me to just obscure issues (I started to write “real issues”…). About MWI, I’m on board with the “everything evolves according to Schrodinger’s eq.” part, not so much with the “add an infinite number of parallel universes” part. While I think new deep mathematical ideas may inspire progress on unification, in the case of the measurement problem, the argument from mathematical depth and beauty goes the other way. The basic QM formalism already is based on very deep and powerful mathematics. Here my problem is with things like Bohmian mechanics and dynamical collapse models which deface this beauty by adding ugly complexity not forced by experiment. The measurement problem to me is essentially the problem of understanding how the effective classical theory emerges from the fundamental quantum formalism. Mathematics may or may not be helpful here, I don’t know. 
The confusions I see in most discussions of QM interpretations aren’t ones that mathematics will resolve, they need to be resolved by careful examination of the physics one is discussing. Some people seem to expect that the new physics needed to get quantum gravity will resolve the measurement problem, but I don’t see it. The measurement problem is there when you do a very low energy experiment, whereas the quantum gravity problem is about not understanding Planck-scale physics. I just don’t see how the quantization of gravitational degrees of freedom has anything at all to do with the measurement problem. About the photon experiments. I don’t think the mysterious thing is the “re-write the wavefunction in terms of the measurement eigenstates” part, that is very simple and completely understood. The mystery is the “The apparatus then measures correlations between horizontally and vertically polarised photon pairs” part, where a macroscopic apparatus described in classical terms is coming into play. 44. Lee Smolin says: Dear Peter, I agree the statement “Time is real” is misleading and I regret using it. The more precise claim is that “time is fundamental” in the sense that there is no deeper formulation of the laws of physics that does not involve the evolution of the present state or configuration, by the continual creation of novel events out of present events. ie at no level is time emergent from a timeless formulation of fundamental physics. Were the Wheeler-deWitt equation correct this would not be the case. I view my papers on the real ensemble formulation and relational hidden variables to be modest attempts to develop and explore new hypotheses to complete quantum mechanics. To answer Peter, these are inspired by developments in quantum gravity, particularly the hypotheses that if time is not emergent, space may be emergent from a network of dynamically evolving relationships, as it is in several approaches to quantum gravity such as spin foam models, causal dynamical triangulations and causal set models. But if space is emergent, so is locality and hence so must be non-locality. Hence we may aim to show the origin of Bell-non-locality from disorderings of locality which follows from the emergent nature of locality. This may be a way to realize the idea, which is currently popular, but was first stated by Penrose in his papers on spin networks, that the geometry of space and quantum entanglement may have a common origin. 45. Jim Baggott says: I honestly think you’ve answered your own question. You say “I don’t think the mysterious thing is the re-write the wavefunction in terms of the measurement eigenstates part – that is very simple and completely understood”. So what is it that exists? A quantum state made of circularly polarised states? A quantum state made of linearly polarised states? Or both, but involving a process that gets from one to the other that doesn’t appear anywhere in the formalism except as a postulate, based on a mechanism we can’t fathom? Or are we just using these ‘states’ as a convenient way of connecting one physical situation with another? 46. Blake Stacey says: I noticed an item in the latest quant-ph update that may be of interest to the folks participating here: a critical evaluation of Wallace’s attempt to get meaningful probabilities out of (neo-)Everettian QM. 47. Peter Woit says: The question “what exists?” is just as ill-defined as “what is real?”. 
The fundamental issue here is that I think I understand completely what a “quantum state” is (any one of the mathematically isomorphic descriptions), but I don’t understand completely what a “physical situation” is (it may involve some system under study, some macroscopic apparatus, my consciousness, and how they are interacting). Everett tells me that a “physical situation” is also just a quantum state, and I’m willing to believe him since I don’t have evidence against this, but that doesn’t change my lack of full understanding of the “physical situation” I’m somehow part of. 48. Doug McDonld says: I fully agree with you in this. The problem is explaining “how” this happens. I don’t mean in a philosophical sense, I mean describing it using, somehow, just plain quantum mechanics of increasingly complicated sets of interactions that end up in a macroscopic apparatus that everybody agrees is classical. This is doable. It’s where the payoff is eventually going to come. The hard part is explaining it. By increasingly complicated, I mean in terms of say measuring the energy of a high energy photon slamming into an LHC calorimeter in terms of a cascade of events. Each event is small in quantum energy terms; collectively they add up to the answer, including both quantum uncertainty and statistical uncertainty. It’s not called a calorimeter for nothing … that says it’s classical. Normally on blogs like this one my viewpoint usually gets moderated out. 49. Laurence Lurio says: When you are stuck in a philosophical morass, the solution, if one exists, is likely to come from experiment. There is a growing experimental community working on building quantum computers who are worrying about wave function collapse as a very practical matter. It will be interesting to see which interpretation that community gravitates to. Comments are closed.
Paper Review: The measurement theory of Everett and de Broglie’s pilot wave I’ve written before on this blog about quantum mechanics. We’ve looked at a questionable interpretation of a recent experiment, the way in which quantum mechanics is radically nonlocal, and certain theoretical constructs needed for the thought experiments used in physics. One of the main reasons why quantum physics fascinates people is because the phenomena themselves violate our intuitions. Indeed, when first confronted with the empirical results, it is challenging to imagine a theory that can handle them. Of course, such theories have been developed. However, we have yet to converge on a single theory as the correct theory of quantum mechanics. Instead, we find ourselves faced with a constellation of competing theories. The field of quantum mechanics is not a monolith. One of the people who thought deeply about these issues was John S. Bell, a physicist. Actually one of the three paper reviews I linked to above was about one of his papers, and another about one of the fundamental results he proved in quantum physics. In the paper I’ll review in this post, Bell is interested in the connection between two prominent and competing theories of quantum mechanics: Everett’s and de Broglie’s. The central claim of Bell’s paper is that “the elimination of arbitrary and inessential elements from Everett’s theory leads back to, and throws new light on, the concepts of de Broglie.” (p. 93). This, then, is what Bell thinks the relationship between these two theories is. If we take away some bits of Everett (some of which we will see are actually absences of structure as opposed to structure), we recover de Broglie. It is also clear where Bell’s sympathies lie through his choice of words. Furthermore, anyone familiar with Bell’s work will know he is a big fan of the pilot wave theory. Bell starts off by giving a brief summary of the history and reasoning that led to Everett’s theory. In the standard theory of quantum mechanics there is a strong separation between an observer and the rest of the world. The dynamical laws of quantum mechanics are framed in this way. Bell writes, “this usual interpretation refers only to the statistics of measurement results for an observer intervening from outside the quantum system” (p. 93). Everett, however, was concerned with a quantum theory of cosmology. This applies pressure to the standard theory for, when we are doing cosmology, we are trying to describe the whole universe. But if “that system is the whole world, there is nothing outside” (p. 93). Thus there is no observer outside the universe, and it is unclear how the standard theory can deal with this. This led Everett to develop a theory that treated the world and the observer in exactly the same way. Thus, it could describe the universe as a whole, because the same dynamical laws were applied to any and all parts of the universe. There was no need for an observer/external-world duality. I’m trying to stay fairly high level here, but I do want to give a quick sketch of the theory so that we can better understand Bell’s argument for how it connects to the pilot wave theory. The sketch will be sketchy, and all too brief. In the standard theory physical systems can be prepared in certain states called superpositions. For example, a particle could be in a superposition of being both inside and outside a certain box. We encounter nothing like this in our ordinary experience of the world. 
To us, it seems that properties like position are always well defined for an object. However, in the standard theory this is not true. A particle can fail to be in the box and fail to be outside of the box, instead being in a kind of combination of both of these states. However, whenever we go to look for the particle, we either find it inside the box or outside. But we just said it was neither. So what happened here? In the standard theory, whenever we measure some property — like whether or not the particle is in the box or outside of it — we always end up with a determinate result, even if the particle was in a superposition, because our measurement itself causes a collapse of the superposition to one of the possible measurement outcomes. I will call this the collapse dynamics. Furthermore, if a measurement does not occur, then the system evolves in a different, collapse-free way. I will call this the linear dynamics. Everett, in contrast, throws out the collapse dynamics — he sticks with just the linear dynamics. This removes the strong metaphysical separation between observer and external world. All situations can be treated by the same law. This leads to another puzzle though. If the particle is in a superposition, what happens when we look in the box? Bell writes: Everett disposes of this vaguely defined suspension of the [linear dynamics] with the following bold proposal: it is just an illusion that the physical world makes a particular choice among the many macroscopic possibilities contained in the expansion; they are all realized, and no reduction of the wave function occurs. He seems to envisage the world as a multiplicity of ‘branch’ worlds… p. 95 This “reduction” Bell writes about is the collapse dynamics. So in throwing out the collapse dynamics, instead of finding either the particle in the box or out, I find it in both places. Or, rather, there is one branch world in which I find it in the box, and one in which I find it out of the box. Of course I would not notice this. For when I carry out the measurement, my experience in a sense splits into the two different experiences. One Daniel finds it in the box, one out. Up until then it was the same stream of experience; after the branch it is two, stemming from a common source. Of course, we can describe this talk of “experience” physically in terms of memory of a physical system (stored on a magnetic tape, for example), and this is precisely what Everett does. This, then, is a very quick and dirty introduction to Everett’s theory, sometimes called “many worlds”. Bell is unsatisfied with this account for two reasons. The first is a problem with what he calls “expansions”. This is a tricky one to explain without resorting to the mathematics of the theory, but the following is an attempt. I’ll rely on the explanation I gave here — scroll down to the pictures of the clocks for the relevant section. The issue is that when we are decomposing the vector describing the quantum state — that is, writing it out as a sum of some other vectors, each of which corresponds to a certain observable feature — there are an infinite number of ways to do this. This is abstract, so here is what I hope is a helpful analogy to get the point. We have some physical system, like a metal rod. It might be a certain length, say 1 meter. My choice of using meters doesn’t fix the length. I could describe it in feet, or furlongs, or football fields. The method of description doesn’t affect the actual physics. 
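To make the point concrete, here is a small worked example of the same ambiguity; this is my own illustration, not something taken from Bell's paper. The "particle in a box" state from above can be expanded in (at least) two different bases:

\[
|\psi\rangle \;=\; \alpha\,|\mathrm{in}\rangle + \beta\,|\mathrm{out}\rangle
\;=\; \frac{\alpha+\beta}{\sqrt{2}}\,|+\rangle + \frac{\alpha-\beta}{\sqrt{2}}\,|-\rangle,
\qquad
|\pm\rangle \;\equiv\; \frac{|\mathrm{in}\rangle \pm |\mathrm{out}\rangle}{\sqrt{2}}.
\]

Both expansions describe exactly the same state vector, just as the same rod can be described in meters or in feet; nothing in the mathematics by itself singles out one set of summands as "the" branch worlds.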
The physics should likewise not depend on our choice of how to decompose the quantum state into a sum of other states. In Everett’s theory, it is the summands of this sum that are the equally real worlds. However, on one reading of Everett’s theory, this implies that the physical interpretation does depend on the way in which the state is decomposed, or expanded. Since there are an infinite number of ways, Bell claims that Everett is implicitly smuggling a preferred basis — a preferred set of summands — into his theory. This is the first point at which Bell draws a connection between Everett’s many worlds theory of quantum mechanics and de Broglie’s pilot wave theory of quantum mechanics. In the pilot wave theory a particular observable — position — is made determinate. That is, contra my example earlier about the particle not being in the box or out of the box (as in standard quantum mechanics), in de Broglie the particle actually does have a determinate position at every point in time. We can see this connection clearly when Bell writes: This preference for a particular set of operators is not dictated by the mathematical structure of the wave function ψ. It is just added (only tacitly by Everett, and only if I have not misunderstood) to make the model reflect human experience. The existence of such a preferred set of variables is one of the elements in the close correspondence between Everett’s theory and de Broglie’s — where the positions of particles have a particular… p. 96 Bell’s second concern is that if instrument readings are to be given such a fundamental role should we not be told more exactly what an instrument reading is, or indeed, an instrument, or a storage unit in a memory, or whatever? In dividing the world into pieces A and B Everett is indeed following an old convention of abstract quantum measurement theory, that the world does fall neatly into such pieces — instruments and systems. p. 96 Bell thinks that Everett’s theory relies again on a kind of dichotomy between the measurement system and the environment. Furthermore, he thinks that de Broglie has given a formulation that does not suffer from the duality. I agree with Bell on the second point, but I don’t agree on the first. Of course I might be missing something here that Bell has seen, but my best understanding of Everett’s theory is that the choice to treat one system as measurement device and the other as the thing being measured is going to depend on what specific thing we are trying to calculate, and is not a fundamental part of his theory. Indeed, representing the quantum states of the two systems as one state in the joint Hilbert space allows one to apply the linear dynamics to the joint system, thus removing this duality. However, given how sophisticated a reasoner Bell is, I definitely want to think about this more to see if there is anything I am missing. Now Bell turns his attention toward de Broglie’s theory. This theory is definitely challenging to explain concisely, but here is a shot at giving an intuition. Before with Everett we had that the state of the universe at any point in time was described by a wave function. All elements of this function (speaking loosely) were equally real. If there was a superposition of the particle being in the box and out of the box, both of these were equally real. In de Broglie’s theory, on the other hand, each particle has a specific location at each time. 
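The next sentences describe informally how that position changes with time; for reference, the standard modern way of writing de Broglie's guidance law for a single spinless particle of mass m with wave function ψ(x, t) is (my notation, not a quotation from Bell's paper):

\[
\frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{x=Q(t)},
\]

so the actual position Q(t) is simply carried along by the wave, while ψ itself always evolves by the ordinary Schrödinger equation.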
The evolution over time of the particle’s position is governed by the wave function, in a way very similar to the linear dynamics; a helpful analogy is that it moves like a massless particle being pushed around by an infinitely compressible fluid. In this picture there are also no collapses, and certainly no deep division between observer and observed. We can also think of this theory (a little informally) as picking out a particular term of the wave function to be the “real” one (when it comes to position), in contrast to Everett’s assertion that all are equally real. Bell writes: This model is like Everett’s in employing a world wave function and an exact Schrödinger equation, and in superposing on this wave function an additional structure involving a preferred set of variables. p. 97 So we see with Bell that there is a deep similarity here, with the addition of extra structure in the de Broglie pilot wave theory that picks out a unique position to be the actual one. Bell then notes three main differences between Everett’s theory and de Broglie’s: 1. Everett’s theory is “vaguely anthropocentric” (p. 97), whereas de Broglie’s is not, since we can average over the positions of all particles composing a composite object to get a coarse-grained description of the macroscopic world from the microscopic. The comment about Everett being anthropocentric or dualist in some way was the one with which I disagreed earlier. 2. Everett’s theory holds that all elements of the superposition are realized “each in the appropriate branch universe” (p. 97), whereas de Broglie’s theory picks out a unique state to be the real one by specifying an actual position for each particle. This is what I mentioned just before this list. 3. Everett doesn’t really have a solid account of the continuous trajectories of objects, whereas, since in de Broglie’s theory each particle has a determinate position, de Broglie can provide an account with continuous deterministic trajectories. These, then, are the main relationships between the two theories. Bell ends the paper by remarking that one could do something like de Broglie’s theory without trajectories if one wanted, but writes: But I do not like it. Emotionally, I would like to take more seriously the past of the world (and of myself) than this theory would permit. More professionally, I am uneasy about the possibility of incorporating relativity in a profound way. p. 98 We see here then a crucial lesson for science and the philosophy thereof: whenever evaluating a physical theory, we have to ask ourselves “what do we want from it?” Bell is laying his cards on the table, stating what he finds attractive and unattractive about the different possibilities. This is an essential step when formulating theories about the world.
The Elephants in the Room – What every physicist should know about string theory The string wars seem to still be going on, with the latest salvos coming from Ashtekar and Witten. In a very interesting recent interview, at the end Ashtekar has some comments about string theory and how it is being pursued. About claims that string theory is the only possible way to get quantum gravity he says: About AdS/CFT and the current state of its relation to quantum gravity: A good example of the problems Ashtekar is concerned about is provided by an article in the latest Physics Today by Witten with the title What Every Physicist Should Know about String theory. It’s devoted to a simple argument that string theory doesn’t have the UV problems of quantum field theory, one that I’ve seen made by Witten and others in talks and expository articles many times over the last 30 years. This latest version takes ignoring the elephants in the room to an extreme, saying absolutely nothing about the problems with the idea of getting physics this way, even going so far as to not mention the first and most obvious problem, that of the necessity of ten dimensions. The title of the article is the most disturbing thing about it. Why should every physicist know a heuristic argument for a very speculative idea about unification and quantum gravity, without at the same time knowing what the problems with it are and why it hasn’t worked out? This seems to me to carry the “strong smell of theology” that Ashtekar notices in the way the subject is being pursued. Witten is a great physicist and a very lucid expositor, and the technical story he explains in the article is a very interesting one, with the idea that most physicists might want to hear about it a reasonable one. But the problems with the story also need to be acknowledged and explained, otherwise the whole thing is highly misleading. Besides the obvious problems of the ten dimensions, supersymmetry, compactifications, the string landscape, etc. that afflict attempts to connect this story to actual physics, there are a couple basic problems with the story itself. The first is that what Witten is explaining as a problematic framework to be generalized by string theory is not quantum field theory, but a first-quantized particle theory, with interactions put in by hand. This can be used to produce the perturbation series of a scalar field theory, but this is something very different than the SM quantum field theory, which has as fundamental objects fields, not particles, with interactions largely fixed by gauge symmetry, not put in by hand. For such QFTs, there is no necessary problem in the UV: QCD provides an example of such a theory with no ultraviolet problem at all, due to its property of asymptotic freedom. Another huge elephant in the room ignored by Witten’s story motivating string theory as a natural two-dimensional generalization of one-dimensional theories is that the one-dimensional theories he discusses are known to be a bad starting point, for reasons that go far beyond UV problems. A much better starting point is provided by quantized gauge fields and spinor fields coupled to them, which have a very different fundamental structure than that of the terms of a perturbation series of a scalar field theory. A virtue of Witten’s story is that it makes very clear (while not mentioning it) what the problem is with this motivation for string theory. All one gets out of it is an analog of something that is the wrong thing in the simpler one-dimensional case. 
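One note on the asymptotic freedom claim above, since it carries a lot of the weight of the argument: this is a standard textbook result, not something taken from Witten's article. The one-loop running of the QCD coupling is

\[
\mu\,\frac{dg}{d\mu} \;=\; -\,\frac{g^{3}}{16\pi^{2}}\left(11-\frac{2}{3}\,n_f\right)+\cdots,
\]

which is negative for n_f ≤ 16 quark flavors, so the coupling becomes weaker at shorter distances and, as stated above, there is no ultraviolet problem of the kind the string theory argument is meant to cure.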
The fundamental issue since the earliest days of string theory has always been “what is non-perturbative string theory?”, meaning “what is the theory that has the same relation to strings that QFT has to Witten’s one-dimensional story?” After 30 years of intense effort, there is still no known answer to this question. Given the thirty years of heavily oversold publicity for string theory, it is this and the other elephants in the room that every physicist should know about. Update: For another take on string theory that I meant to point out, there’s an article quoting Michael Turner: Turner described string theory as an “empty vessel,” and added: “the great thing about an empty vessel is that we can put our hopes and dreams in it.” The problem is that the empty vessel is of a rather specific shape, so only certain people’s hopes and dreams will fit… Update: Many commenters have written in to point out this article, but I don’t think it has anything at all to do with the topic of this posting. There are lots of highly speculative ideas about quantum gravity out there, most of which I don’t have the time or interest to learn more about and discuss sensibly here. Update: It is interesting to contrast the current Witten Physics Today article with a very similar one by him that appeared in the same publication nearly 20 years ago, entitled Reflections on the Fate of Spacetime. This makes almost the same argument as the new one, but does also explain one of the elephants in the room (lack of a non-perturbative string theory). It also includes an explanation of the T-duality idea that there is a “minimal length” in string theory, an explanation I was referring to in the comment section when describing what I don’t understand about his current argument. This entry was posted in Uncategorized. 41 Responses to The Elephants in the Room – What every physicist should know about string theory 1. manfred Requardt says: What Abhay Ashtekar is saying about String Theory and AdS/CFT is exactly to the point. I think it is a fair description of its scientific status. 2. Warren says: You are being too positive. The real problem is that all approaches to high-energy theoretical physics are seriously flawed. The Standard Model is not asymptotically free. Even QCD is not saved by asymptotic freedom, which has the problem of renormalons. Experience with rigorous approaches, such as constructive quantum field theory or lattice gauge theory, has shown us that nonperturbative approaches do not solve perturbative problems. Loop quantum gravity has problems even with the continuum limit, and doesn’t solve renormalizability of gravity, only regularizes it. But string field theory does exist, & people have addressed nonperturbative problems with it. So rather than denouncing or over-promoting the various alternatives, a more useful attitude would be to just admit that there is no well-defined approach to this problem yet, and recognize the practicality of complementary formulations that are all incomplete. 3. martibal says: Is there a free link to Witten’s paper? 4. Bori says: Is this just an American phenomenon because of the culture of self-promotion and exaggeration resulting in a natural counter-reaction? Speaking to physicists in Europe, they seem to be more pragmatic about string theory – a certain proportion of positions goes to string theorists and they do their job, but no hype and no bad feelings. 5. Tammie Lee Haynes says: Dear Dr Woit String Theory has “a smell of theology”? 
As a Christian, I am astonished that you would make such a statement. Christianity’s claims to being true are based on the accounts of witnesses to certain events such as the miracles of Jesus. The account of what a witness saw is the very definition of empirical evidence. (Of course, from time to time all of us find witness accounts to be non-credible, which is why you are not a Christian.) But the point is this. Before String Theory can claim to have “a smell of theology”, it will need to have support from empirical evidence. Very truly yours, Tammie Lee Haynes 6. Charles Day says: Martibal, click again on the link to the article. You’ll find that it’s now free. 7. anonymous says: Loop quantum gravity itself has a roughly comparable foundational issue: we are assuming that the 3-metric tensor of gravity – or canonically transformed analogues – can be subject to a “quantization” procedure analogous to those developed for fields on Minkowski space-time treated as assemblies of harmonic oscillators: themselves quantized by techniques developed for non-relativistic finite degrees of freedom. 8. Peter Woit says: I think there’s a difference between elephants in the room (we don’t know how to connect string theory to known 4d physics, with or without going to a string field theory) and something much smaller (mice? cockroaches?), such as the renormalon problems or the Landau pole at exponentially large energies. Thanks for making sure that Witten’s article is available. Tammie Lee Haynes, Perhaps theology is the wrong word. Maybe a better one for some string theory promotional activities would be “evangelical”, dedicated to spreading the good word. But, all, please resist the temptation to discuss religion here. I don’t think there’s that much difference between string theorists in the US and Europe. There is a bigger market here for a small number that want to evangelize. However, one thing to say about both Ashtekar and Witten is that they are the sorts who would usually much prefer to stick to the technical details of the science, and not engage in public battles. I can see why Ashtekar might have had enough of the hype over AdS/CFT, I’m not so sure why Witten feels it necessary to make this well-worn claim at this point. Sure, a basic problem of quantum gravity is that we don’t know what fundamental variables to work with and/or what the correct quantization procedure is. Ashtekar is responsible (“Ashtekar variables”) for what seems to me the most intriguing such choice of variables. 9. Tom says: There’s a new book coming out in December, published by CRC Press, called “Why String Theory” by a young theoretical physicist named Joseph Conlon (found on Amazon). Though the description seems to indicate it is pro-string theory, does anyone have advance knowledge of its main premise? Just another semi-religious screed, or does it turn on any critical spotlights? (Admittedly, I wasn’t interested enough to do a thorough web search.) 10. garcol euphrates says: A link from the interview in the Wire references one of Abhay Ashtekar’s formative books on cosmology, Gamow’s 1 2 3 … Infinity. The link referenced contains a reading list of science books (some of my favorites) compiled by a fellow named Robert Anton Wilson, a writer of some stature, of whom it is written: “Wilson also criticized scientific types with overly rigid belief systems, equating them with religious fundamentalists in their fanaticism.” Full circle! 11. 
Peter Woit says: That book does look like mainly an advertising effort, an expansion of the web-site of the same name: I notice there’s a chapter on “Direct Experimental Evidence for String Theory”. No page numbers, but I’m betting that one’s rather short… 12. Dave Miller in Sacramento says: You wrote: This is something that has bothered me for decades, but that I rarely see discussed. Isn’t it the case that string theory does not even rise to the level of a non-relativistic first-quantized theory? It seems to me that it is the 2-D equivalent of Klein-Gordon, which, of course, does not give an acceptable version of probability (until you second-quantize it). Specifically, does free string theory , even in principle, actually give a probability for a string to have a probability (probability density function, of course) to actually be in a specific configuration in spacetime? If so, how you do numerically calculate this? Everyone: I am honestly not trying to score rhetorical points here against string theory. These are sincere questions, and if I am ignorant of work that answers these questions, please point to that work. 13. Dave Miller in Sacramento says: You wrote: >But string field theory does exist, & people have addressed nonperturbative problems with it. I have followed enough of your work to know that you have seriously worked on the subject: I truly appreciate that someone seems to take these issues seriously. Could you point us to the best source with the most concrete discussion of the existence of string field theory, if possible a source that addresses basic issues such as the state space, actual calculation of probabilities for at least the free string case, etc.? 14. marten says: Does an empty vessel stop making most noise once filled only with hopes and dreams? 15. Warren says: Renormalons are a low energy problem in QCD (strong coupling), not large, because of asymptotic freedom. They make QCD nonperturbatively nonrenormalizable. A related problem is nobody knows how to calculate parton distribution functions. In particular, nobody can do higher-twist corrections, because they require experimental input of more PDF’s, ad infinitum — more nonrenormalizability (in the sense of lack of predictability). Of course, you could calculate PDF’s if you could calculate confinement, but you can’t. Free string field theory is the same as string quantum mechanics, by the correspondence principle. Work has been done on nonperturbative vacua (due to the tachyon) for the bosonic string: The basic result is that the bosonic string is crap. So you need supersymmetry, but little has been done in SFT there yet. A similar situation exists in QFT in D=4, where you need supersymmetry to eliminate renormalons, by making the theory finite. 16. Peter Orland says: QCD is far from a finished problem, but claiming renormalons ruin its properties is not an accurate statement. It is not known what the implications of renormalons are. They are special terms in perturbation theory, which (like instantons) ruin Borel resummability (all of which I’m sure you know). But there is no evidence that they render theories nonrenormalizable. There are models with perturbative renormalons which suffer from no such problem (I have worked on one of these). You may be familiar with the idea of resurgence, which does seem to deal well with Borel singularities in quantum mechanics (though not yet relativistic QFT). In any case, the lattice gives pretty good evidence that QCD is fine at large distance scales. 
There is even work under way to find pdf’s. The real problem is getting to extremely weak coupling (and string-inspired models do even worse). I am not disagreeing that there is a lot of work to be done in QFT (confinement, mass gaps, and a lot more), but it is not a failure. 17. Warren says: Peter Orland: The only models I know of that solve the renormalon problem are quantum mechanical, not QFT. “Nonrenormalizability” comes from arbitrary coefficients of each ambiguity in the inverse Borel transform, corresponding to VeV’s of an ∞ # of composite color-singlet operators. This is quite analogous to higher-derivative terms in nonrenormalizable field theory (the higher-twist problem even more so). This problem is seen also in constructive quantum field theory, as well as an analysis by ‘t Hooft in the complex coupling constant plane. In lattice QCD, this shows up as nonuniversality @ the origin in coupling constant space. I did not mean to imply QCD was a failure. Only that it has problems, just as string theory does. But Peter Woit was implying that QFT was OK, in contrast to string theory being a complete failure. I would say string theory is the only theory that shows Regge behavior & spectrum of the type corresponding to confinement. Lattice QCD, on the other hand, has managed to calculate only ground state properties, & can’t deal with even rotational invariance, so can’t calculate cross sections. 18. Bernhard says: I’m pretty sure I read this somewhere before… 19. Peter Orland says: Whatever problems of consistency QFT has, there is no comparison to those of string theory. I’m not claiming this makes it better than strings, just that it is a much better-developed field. As to your first paragraph, one can do better than quantum mechanics. Asymptotically-free models in 1+1 dimensions have renormalon singularities in perturbation theory, yet there is good analytic evidence of sensible non-perturbative solutions. The exact S matrices are known, and there is even information about correlation functions. Concerning your second paragraph; there is no serious controversy whether lattice QCD (or any lattice QFT) will have rotational invariance, in the continuum limit, assuming that limit exists. Approximate rotation invariance is seen on the lattice, although a weaker coupling would be better. There is also some numerical evidence of Regge behavior (although this may not have been studied much in more than three dimensions). A lot more is known than ground state properties. I do not mean there are theorems or analytic solutions (yet), but the computer is a good guide to what the theory predicts. You are right to say that QFT has problems (which is why people work on it), but string theory is not as far along. 20. ronab says: What are PDF’s? Googling the term just gets swamped by the other sort of PDF’s. Thanks. 21. Peter Woit says: My comments were about UV problems, which is what Witten was writing about, and claiming that string theory solves, unlike QFT. I’m still quite surprised that Witten thought such a claim was what “every physicist should know about string theory”. I’d have expected that title to be used these days to make an argument for string theory as a way to handle strong-coupling problems, via its relation to AdS/CFT. From the talks of his I’ve seen, Witten likes to claim that in string perturbation theory the only problems are infrared problems, not UV problems. That’s never seemed completely convincing, since conformal invariance can swap UV and IR. 
My attempts to understand exactly what the situation is by asking experts have just left me thinking, “it’s complicated”. The problems with QCD are IR problems, and I’m certainly willing to believe they can best be addressed by finding some form of string theory dual to QCD. As far as I know, there’s no well-defined candidate for such a theory (i.e. that matches QCD in the UV, behaves like we think QCD should in the IR). QCD has the virtue that it works beautifully in the UV, and has a conjectural definition in the IR (via the lattice), even if it is hard to calculate things. I’m not claiming this is satisfactory, just that it’s completely inconsistent with the story Witten is trying to tell about string theory being necessary to solve QFT UV problems. 22. Peter Woit says: PDFs = Parton distribution functions 23. Warren says: I wasn’t disagreeing, only that string theory is by far better than alternatives to its problems. If you want to compare apples to oranges, pQCD is by far worse than QED. S-matrices in 2D massless theories isn’t much better, since only backward & forward scattering, so just numbers, not functions (& usually reduces to just 2->2). Proof of rotational invariance & being able to calculate anything with angles are 2 quite different things. Similar remarks apply to Regge behavior & getting an actual Regge trajectory. And properties vs. numbers, which you really get from lattice QCD basically for ground states. I hope you’re not confusing conformal invariance on the worldsheet with that in spacetime. As far as solving UV problems, I’m pretty sure Witten means in quantum gravity. String theory seems to solve that. No alternative does (although some people have a conjecture for 4D N=8 supergravity). 24. andy norris says: “empty vessels make the loudest sound” 25. Peter Woit says: The problem with Witten’s article is that his story about what goes wrong in QFT that doesn’t in string theory is about the small proper-time behavior of loops in perturbation theory, applying equally well to QCD and GR. Given the fact that QCD has no UV problem, it seems to me that the story he’s telling is likely irrelevant to the GR divergences problem. My earlier comment was about worldsheet conformal invariance, specifically the action of the modular group. Witten’s argument for no UV problem, for the torus case, is that the analog of proper-time in loop integrals takes values in the fundamental domain in the upper half plane, and this is bounded away from zero, so no small-time problem. There are lots of potential technical problems to worry about (you need this argument to work for arbitrary genus, in super-moduli space, etc), but what I was wondering about was the following: the modular group acts on this domain, taking it in particular to another domain that isn’t bounded away from zero. From another related angle, discussions of T-duality in string theory often claim that the right picture is that of some “minimum length”, below which you should go to a T-dual picture (acting by the modular group). But, if there are potential IR problems, how do I know that those IR problems in the T-dual picture don’t appear now as I go to the UV? To be clear, I believe there is likely some answer to this, cleanly separating out what’s UV and what’s IR, but it’s not there in the Physics Today piece as far as I can tell. 26. Peter Orland says: “I wasn’t disagreeing, only that string theory is by far better than alternatives to its problems. ” I am not sure what you mean by this. 
It seems you are saying to say that string theory is on better theoretical footing than QFT. Such a position is simply not tenable. Calculations with QED and QCD aren’t perfect, but there are many reasons to trust them, not least of which is comparison with experiments. We agree that there are problems in calculating with QFT, but these do not invalidate its successes. And lattice measurements tell you more than vacuum properties (as I said earlier). 27. Peter Orland says: I meant “trying to say”, not “saying to say.” 28. Dave Miller in Sacramento says: You said: Doesn’t string quantum mechanics suffer from the same sort of problem as “first-quantized” Klein-Gordon, i.e., no sensible meaning for probability? Can you suggest any place you can point me to where I can see how this works in detail? A possibly related problem that bothers me is how you actually get a concrete number for the “probability” (really the probability density) in first-quantized string theory given the use of the Gupta-Bleuler constraints. In QED, you can always drop Gupta-Bleuler and just go to the Coulomb gauge: You then are stuck with an apparently instantaneous Coulomb interaction, but at least probabilities more or less make sense (positive-definite Hilbert space inner product, etc.). Can you point me to any detailed discussion of how the analog of all this works in string theory? E.g., I am tempted to think there must be some analog of the instantaneous Coulomb interaction but have never seen this mentioned. 29. Warren says: There are technical difficulties with higher-loop amplitudes in string theory, still being investigated. (A paper out today seems to claim to solve them.) But there are good arguments for finiteness that hold up to as many loops as have been evaluated. (Maybe they are proofs; I’m not enough of an expert to tell.) In any case, the arguments seem better then those for the next best alternative, 4D N=8 supergravity. (& for loop quantum gravity arguments indicate the opposite.) But you’re not going to get all the details into a Physics Today article. I meant to say that string theory is on more solid ground than any other quantum gravity theory, just as pQCD is for strongly interacting particles @ large transverse momenta. String theory clearly is not of as good standing as QCD, nor QCD as QED (“apples vs. oranges”). I am not aware of any calculations in lattice QCD that give the masses of radially excited hadrons as falling on linearly rising Regge trajectories, nor any that give low-energy scattering amplitudes of the sort described by nonlinear sigma models. @ the free level, strings are just a reducible, unitary representation of the Poincaré group. You can write the single-string Hilbert space as a direct sum of those for the “usual” particles. So if you understand free particles, you understand free strings. The free part of the string field theory action can be decomposed into the sum of field theory actions for particles of different spins. Nowadays quantization (for particles or strings) is better treated by BRST, which deals with unitarity in any gauge, particularly in the Feynman gauge, which is more practical than Coulomb gauge. Similar remarks apply to first-quantization: Stückelberg & Feynman solved the problem of the Klein paradox in that context by identifying “negative energy” particles as antiparticles. 
This interpretation can be applied either in classical mechanics, or in quantum mechanics in terms of the first-quantized path integral giving the Stückelberg-Feynman propagator. 30. Warren says: P.S. By “ground states” I meant in the quantum mechanical sense (like the hydrogen atom), i.e., masses (& some couplings) of the π,ρ,N,Δ,… 31. Peter Orland says: I think you are overly pessimistic. It’s true that most of the work on Regge trajectories in the lattice literature is only on glueballs, but there is no fundamental obstacle to asking the same question for mesons and baryons (it is more difficult, of course). There are attempts to work out scattering amplitudes. For example, the QCD corrections to light-by-light scattering are being studied on the lattice. I don’t work along these lines myself, but I don’t see them as impossible lines of inquiry. 32. chris says: here is a recent review of excited lattice spectroscopy: And here is just one recent example of lattice calculations of scattering amplitudes: It is rather strange that you have singled out these two topics as unsolved by lattice, since there are major efforts going on in the US, Europe and Japan to address exactly these questions. There are of course things to be cleaned up (especially mapping out resonances in a finite box is a tricky thing) but there is no doubt whatsoever that the lattice will solve this in the coming years given enough computer power and algorithmic improvements. By the way, for us lattice people it is funny to see claims that “QCD is nonperturbatively nonrenormalizable” and such. You know, because of asymptotic freedom the continuum limit of the lattice regularization scheme is well defined and that is QCD. In N-flavor QCD you define the physical point by N+1 experimental measurments, you go there and out pop all other observables. We don’t care about perturbative approximations or models – if there are problems in them, they are not problems of QCD. QCD is perfectly well defined and consistent and we know how to do the path integral. Oh, and one more thing. Your claim is a rather gross misstatement. There is plenty of evidence that gravity is asymptotically safe and no UV problem exists to begin with. For a recent review, see e.g. 33. TS says: Can someone explain to me what renormalization in a non-perturbative context means? Obviously, the meaning must be different than what we mean when we apply renormalization in perturbative calculations, namely the task of relating measured quantities to the quantities that appear in the calculation. Since the measured quantities involve all orders of perturbation theory, it is not trivial to see how the measured quantities can be related to the ones used in the calculation. In a non-perturbative setting OTOH I would expect that the objects entering the calculation are the ones we observe in the lab without any further requirements on the theory: say, two pions in -> machinery -> two pions out, where all pions have a mass of 140 MeV, spin 0 and charge = ±1 (and, except for the choice of units, these would come out of a sufficiently well-developed theory). A web search leaves me a bit puzzled: “non-perturbative renormalization” in this context appears to be not more than the setting of a numerical value for a mass (or several), which is more or less in line with what I would expect one needs for a non-perturbative computation, but why would one call this “renormalization” instead of “choice of units”? 34. 
Peter Woit says: This is starting to get a bit far afield from the topic of the posting. But I think the simple answer to your question is that whatever your definition of qft is, it appears to need a cutoff, and the cutoff qft will be characterized by certain parameters (“bare parameters”). You hope to compute observable numbers in terms of such bare parameters (in perturbation theory, by Feynman diagrams, non-perturbatively by some other method such as a lattice Monte-Carlo). The hope is to remove the cutoff, varying bare parameters with the cutoff in such a way as to get a well-defined limit for physical observables. This is what I’d call “renormalization” in general, perturbative or non-perturbative. A problem with Witten’s argument is that it’s purely about problems with renormalization in perturbation theory. If the short distance behavior of the theory is not governed by perturbation theory, such arguments will be irrelevant. 35. Warren says: Thanks for the references; I’ll look them up. In fact, there is no proof that the continuum limit of QCD is universal, for the reasons I gave, & it’s exactly because of asymptotic freedom. By “nonrenormalizable”, I mean an infinite number of parameters must be introduced, because there are an infinite of renormalons, corresponding to an infinite number of color-singlet operators whose VEV’s must be determined. “The lattice will solve this in the coming years given enough computer power and algorithmic improvements” is something I’ve been hearing for 40 years; given Moore’s law you’d think it would’ve been done by now. So, yes, I have plenty of doubt. The only work I’ve seen on asymptotic safety in gravity is based on doing an ε epsilon from D=2, & I’m not satisfied by arguments that treat 2 as ≪ 1. In fact, any renormalization group arguments for finite changes in the coupling constant are dubious due to the arbitrariness of the β function past 2 loops. The only calculations I’ve seen on Regge trajectories in lattice gauge theories gives the result only near argument 0: the intercept & the slope @ 0. As to not seeing impossible lines of inquiry, I would say the same for string theory. Advances have been made in both in the last 40 years, both are hard problems, & both leave much to be desired. 36. TS says: Thanks, Peter! The terminology makes a lot more sense to me now. 37. Warren says: Looked @ 1 of your papers: Uses HAL QCD approach, which calculates a 2-body potential from the lattice, then plugs that into a (continuum) Schrödinger equation to find the results. Not sure how much advantage that has to the old quark model stuff that began with linear (for confinement) plus Coulomb terms. But I’ll keep reading… But the asymptotic safety reference is the same old thing. Hard to calculate in an ∞ dimensional coupling space. 38. Attila says: While it is true that universality of the continuum limit in QCD has not been proven, if the continuum limit was not universal, I would not expect the continuum extrapolations of different observables, taken with different discretizations, to agree so well. Even if the low lying hadron spectrum and the equation of state at mu_B=0 was the only thing the lattice could calculate, which I think is not true, that would already be quite impressive, since no other method can do even that. I have to add that I am a lattice practicioner, so if I did not believe in lattice results in general, that would be somewhat strange. I think that is optimistic. 
As far as I know, one does have dual theories to gauge theories closer to QCD than N=4 SYM, the problem is that the dual theory gets crazier as you get closer to the real thing, and the whole approach loses the computational advantage it had. An example of a computation in a theory closer to the real thing (N=2* SYM) is: hep-th/1108.2053 39. Warren says: It’s all about optimism, isn’t it? People are optimistic about their own areas of research, & impressed with the results so far obtained, and dubious about claims in other areas. So lattice QCD can make good predictions for low energy (where renormalon contributions can be neglected, & plugging potentials into 1st-quantized calculations can be pretty good), with nothing but hopes for high energy, while the converse is true for string theory. Similar remarks apply to pQCD, which is good for large transverse momenta, where log corrections can be calculated, but power corrections (especially ones from higher twist) must be ignored. So we have a bunch of competing approaches staking out different “countries” in momentum space, each saying their country is better, & threatening to invade each other’s territory in the future. But history has taught us that these “threats” tend to be hollow, & people tend to ignore the diminishing returns from their own approaches. & each ignores the fact their own predictability problems prevent them from extending into the other territories, whether due to renormalons, twist, or compactification. 40. Attila says: While I fundamentally agree, it seems I am more optimistic in general, and not only in the case of my own area. 🙂 For example, in my own field (finite temperature field theory) there are actual examples where the applicability region of resummed perturbation theory and the lattice simulations overlap. In QCD (and even more in pure gauge theories) there are several observables, where high temperature lattice calculations can be compared to perturbative calculations, with good agreement. This includes the equation of state and fluctuations and correlations of conserved charges, for example. I do not think one approach needs to “rule them all”, it is not a realistic hope. A more realistic one would be to make contact, and find an overlap in the applicability of approaches. I know almost nothing about quantum gravity, but as an outsider, it does not seem completely hopeless to me that even if the UV completion is some kind of string theory, other approaches such as the asymptotic safety program with the functional renormalization group or CDT can be used as a low energy approximation of that. I could easilty be completely wrong on this one, but I don’t think I am wrong on QCD. Sorry for the off topic posts, I will stop now. Comments are closed.
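For readers wanting the cutoff-and-bare-parameter picture from comment 34 in its most stripped-down form, here is a toy numerical illustration (an invented one-parameter model with a logarithmic "loop" correction; a cartoon of how bare couplings are tuned as the cutoff is removed, not QCD or gravity):

import numpy as np

# Toy model: a physical observable receives a log-divergent correction,
#     O(g0, Lam) = g0 + c*log(Lam/mu).
# "Renormalization": let the bare coupling g0 depend on the cutoff Lam so
# that O stays fixed at the renormalized value g_R as the cutoff is removed.
c, mu, g_R = 0.5, 1.0, 0.3

def observable(g0, Lam):
    return g0 + c*np.log(Lam/mu)

def g0_bare(Lam):
    return g_R - c*np.log(Lam/mu)        # bare coupling tuned against the cutoff

for Lam in [1e1, 1e3, 1e6, 1e12]:
    g0 = g0_bare(Lam)
    print(f"Lam = {Lam:8.0e}   g0 = {g0:10.3f}   O = {observable(g0, Lam):.3f}")
# The observable stays at 0.300 for every cutoff while g0 runs off to -infinity.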
How can I use Noether's Theorem to show that the probability density $\rho (x)=|\psi(x)|^2$ for a wave function $\psi(x)$ satisfies the continuity equation $\frac{\partial \rho}{\partial t}+\nabla \cdot\vec{j}=0$, where $\vec{j}$ is the probability current defined in quantum mechanics? I have solved this problem before by other means, but I don't think I understand Noether's Theorem well enough to apply it in this case. Any help would be greatly appreciated.

First note that Schrödinger's equation can be understood to come from an action. The Lagrangian is $$L = \int \mathrm d^3x \,\left[\,i\psi^†(x)\frac{\partial \psi(x)}{\partial t} - \frac{1}{2m}\nabla\psi^†(x)\cdot\nabla\psi(x) - \psi^†(x)\psi(x)V(x)\right]$$ The Euler-Lagrange equation for $\psi^†(x)$ is exactly the Schrödinger equation. Since the dynamics of $\psi(x)$ are determined by Lagrangian mechanics in this way, Noether's theorem applies without any caveats.^^ In particular, this Schrödinger Lagrangian has a $U(1)$ symmetry corresponding to $\psi(x) \mapsto e^{i\alpha}\psi(x)$. The corresponding conserved charge and current densities are (up to an overall normalization) $$\rho = j^0 = \frac{\partial L}{\partial \dot{\psi}}\delta \psi = \psi^†\psi(x)$$ $$\vec{j}^i = \frac{\partial L}{\partial(\partial_i\psi)}\delta \psi+\frac{\partial L}{\partial(\partial_i\psi^†)}\delta \psi^† = \frac{i}{2m}\left((\partial^i\psi^†)\psi-\psi^†\partial^i\psi\right),$$ which is the well-known probability current density.

^^ In non-relativistic quantum mechanics the wavefunction $\psi(x)$ is a "classical" variable in that it is simply a function from space and time to $\mathbb{C}$. Noether's theorem works exactly the same for it as in classical mechanics. In quantum field theory the relevant objects $\psi(x)$ become quantum operators and the usual arguments have to be modified somewhat.

• +1. You might mean "with caveats" in your post. No big deal on this question, just a yes/no/don't know reply is fine, but with fields as operators, would the Ward identities make any kind of sense in my comment above, if you know QFT? – user108787 Sep 12 '16 at 1:31
• One minor nitpick: Noether's theorem cannot be used "just" like in classical mechanics, since you can formulate quantum mechanics in the Hamiltonian as well as the Lagrangian formalism (see e.g. Marsden and Ratiu's book on Classical Mechanics); it simply is Noether's theorem. – Max Lein Sep 12 '16 at 2:04
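As a quick numerical cross-check of the current derived above, one can evolve a free Gaussian wave packet on a grid and verify that $\partial\rho/\partial t + \partial j/\partial x$ vanishes up to discretization error. A minimal sketch in Python, assuming $\hbar = m = 1$, one spatial dimension, and a simple split-step Fourier propagator:

import numpy as np

# Free 1D Schrodinger evolution of a Gaussian packet (hbar = m = 1),
# followed by a finite-difference check of d(rho)/dt + d(j)/dx ~ 0.
N, L, dt = 1024, 40.0, 1e-3
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi = np.exp(-(x + 5)**2 + 2j*x)                 # packet moving to the right
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)        # normalize

def step(psi):
    # one free-particle step: multiply by exp(-i k^2 dt / 2) in k-space
    return np.fft.ifft(np.exp(-1j*k**2*dt/2)*np.fft.fft(psi))

def rho_and_j(psi):
    rho = np.abs(psi)**2
    dpsi = np.gradient(psi, dx)
    j = (psi.conj()*dpsi - psi*dpsi.conj()).imag/2   # j = Im(psi* dpsi/dx) for m = 1
    return rho, j

rho0, j0 = rho_and_j(psi)
psi1 = step(psi)
rho1, j1 = rho_and_j(psi1)

residual = (rho1 - rho0)/dt + np.gradient((j0 + j1)/2, dx)
print("max |d(rho)/dt + d(j)/dx| =", np.max(np.abs(residual)))   # small, O(dt, dx^2)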
CATBox: An Interactive Course in Combinatorial Optimization by Winfried Hochstättler PDF

By Winfried Hochstättler

ISBN-10: 3540148876

ISBN-13: 9783540148876

Graph algorithms are easy to visualize, and indeed there already exist quite a few applications and programs to animate the dynamics when solving problems from graph theory. Nonetheless, and somewhat surprisingly, it can be difficult to understand the ideas behind the algorithm from the dynamic display alone.

CATBox consists of a software system for animating graph algorithms and a course book which we developed at the same time. The software system presents both the algorithm and the graph and puts the user always in charge of the actual code that is executed. She or he can set breakpoints, proceed in single steps and trace into subroutines. The graph, and additional auxiliary graphs like residual networks, are displayed and provide visual feedback. The course book, intended for readers at advanced undergraduate or graduate level, introduces the ideas and discusses the mathematical background necessary for understanding and verifying the correctness of the algorithms and their complexity. Computer exercises and examples replace the usual static pictures of algorithm dynamics.

For this volume we have chosen solely algorithms for classical problems from combinatorial optimization, such as minimum spanning trees, shortest paths, maximum flows, minimum cost flows as well as weighted and unweighted matchings both for bipartite and non-bipartite graphs. We consider non-bipartite weighted matching, especially in the geometrical case, a highlight of combinatorial optimization. In order to allow the reader to fully enjoy the beauty of the primal-dual solution algorithm for weighted matching, we present all mathematical material not only from the point of view of graph theory, but also with an emphasis on linear programming and its duality. This yields insightful and aesthetically pleasing pictures for matchings, but also for minimum spanning trees.

You can find additional information at

Show description

Read or Download CATBox: An Interactive Course in Combinatorial Optimization PDF

Best linear programming books

Mathematical modelling of industrial processes: lectures - download pdf or read online

The 1990 CIME course on Mathematical Modelling of Industrial Processes set out to illustrate some advances in questions of industrial mathematics, i.e. of the applications of mathematics (with all its "academic" rigour) to real-life problems. The papers describe the genesis of the models and illustrate their relevant mathematical features.

Stephen J. Wright's Primal-Dual Interior-Point Methods PDF

There are primarily two well-developed practical methods that dominate the solution methods known for solving linear programming (linear optimization) problems on the computer. The first one is the "Simplex method", which was first developed in the 1940s but has since evolved into an efficient method through the use of many algorithmic and memory storage tricks.
Read e-book online Controllability of partial differential equations governed PDF

The objective of this monograph is to address the problem of the global controllability of partial differential equations in the context of multiplicative (or bilinear) controls, which enter the model equations as coefficients. The mathematical models we study include the linear and nonlinear parabolic and hyperbolic PDEs, the Schrödinger equation, and coupled hybrid nonlinear distributed parameter systems modeling the swimming phenomenon.

Download PDF by N. Sundararajan, P. Saratchandran, Yan Li: Fully Tuned Radial Basis Function Neural Networks for Flight

Fully Tuned Radial Basis Function Neural Networks for Flight Control presents the use of Radial Basis Function (RBF) neural networks for adaptive control of nonlinear systems with emphasis on flight control applications. A Lyapunov synthesis technique is used to derive the tuning rules for the RBF controller parameters in order to guarantee the stability of the closed loop system.

Additional info for CATBox: An Interactive Course in Combinatorial Optimization

Example text

Otherwise G is disconnected and its components form a non-trivial partition P0 of the vertex set, such that (u^+ − u^-) + Σ_{P : e ∈ ∂P} u_P < w_e for all e ∈ ∂P0. Thus, if we update u_{P0} from zero to u_{P0} := min_{e ∈ ∂P0} ( w_e − (u^+ − u^-) − Σ_{P : e ∈ ∂P} u_P ) > 0, then u remains feasible. This yields a new dually feasible solution and we may proceed. In each iteration the number of components of the graph of tight edges decreases, and thus the algorithm terminates in at most |V| − 2 iterations. The main steps of the proposed algorithm thus are

ALGORITHM PrimalDualKruskal
    while nrOfComponents() > 1:
        UpdateDuals()
        ComputePartition()

Software Exercise 28 There is a nice geometric interpretation which we learned from M.

Problem 5 Let D = (V, A) be a directed graph, w : A → Z+ a non-negative weight function on the arcs and s ∈ V. Find a shortest path from s to v for all v ∈ V.

To get an idea of Dijkstra's method imagine the following mind experiment. We all know that light travels on shortest paths. How does light behave if it cannot follow the bee-line, say when a signal is broadcast in a net of glass fibers? In the beginning there is a distinguished node s that activates the experiment by sending the signal to all its neighbors.

…where V = Q1 ∪̇ … ∪̇ Ql is an arbitrary partition, we finally derive Σ_{i=1}^{k} Σ_{e ⊆ Vi} x_e = |V| − |P| = Σ_{i=1}^{k} (|Vi| − 1). If there exists an index j ∈ {1, …, k} such that Σ_{e ⊆ Vj} x_e > |Vj| − 1, then we consider another partition Pj, which consists of the non-trivial class Vj and a trivial class for each of the remaining vertices. The partition Pj satisfies Σ_{e ∈ ∂Pj} x_e = |V| − 1 − Σ_{e ⊆ Vj} x_e < |V| − 1 − (|Vj| − 1) = |Pj| − 1, a contradiction. Hence Σ_{e ⊆ Vj} x_e = |Vj| − 1 in each class.

Download PDF sample

CATBox: An Interactive Course in Combinatorial Optimization by Winfried Hochstättler

Rated 4.79 of 5 – based on 7 votes
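The wave-front picture behind Dijkstra's method in the excerpt above (Problem 5) translates almost directly into code. The following is a minimal stand-alone Python sketch, not the CATBox implementation (whose routines such as nrOfComponents(), UpdateDuals() and ComputePartition() belong to the book's own software system):

import heapq

def dijkstra(adj, s):
    # Shortest-path distances from s in a digraph with non-negative arc weights.
    # adj maps a vertex to a list of (neighbour, weight) pairs.
    dist = {s: 0}
    heap = [(0, s)]                      # the expanding "signal front"
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                     # stale entry: v was already settled
        for w, weight in adj.get(v, []):
            nd = d + weight
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

# Example: the signal reaches C via A (total length 3), not along the direct arc (5).
graph = {"S": [("A", 1), ("C", 5)], "A": [("B", 2), ("C", 2)], "B": [("C", 1)]}
print(dijkstra(graph, "S"))              # distances: S=0, A=1, B=3, C=3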
Inverse Problems & Imaging
February 2019, Volume 13, Issue 1

Hyperpriors for Matérn fields with applications in Bayesian inversion
Lassi Roininen, Mark Girolami, Sari Lasanen and Markku Markkanen
2019, 13(1): 1-29 doi: 10.3934/ipi.2019001
We introduce non-stationary Matérn field priors with stochastic partial differential equations, and construct correlation length-scaling with hyperpriors. We model both the hyperprior and the Matérn prior as continuous-parameter random fields. As hypermodels, we use Cauchy and Gaussian random fields, which we map suitably to a desired correlation length-scaling range. For computations, we discretise the models with finite difference methods. We consider the convergence of the discretised prior and posterior to the discretisation limit. We apply the developed methodology to certain interpolation, numerical differentiation and deconvolution problems, and show numerically that we can make Bayesian inversion which promotes competing constraints of smoothness and edge-preservation. For computing the conditional mean estimator of the posterior distribution, we use a combination of Gibbs and Metropolis-within-Gibbs sampling algorithms.

Inverse problems for the heat equation with memory
Sergei A. Avdonin, Sergei A. Ivanov and Jun-Min Wang
2019, 13(1): 31-38 doi: 10.3934/ipi.2019002
We study inverse boundary problems for one dimensional linear integro-differential equation of the Gurtin-Pipkin type with the Dirichlet-to-Neumann map as the inverse data. Under natural conditions on the kernel of the integral operator, we give the explicit formula for the solution of the problem with the observation on the semiaxis t>0. For the observation on finite time interval, we prove the uniqueness result, which is similar to the local Borg-Marchenko theorem for the Schrödinger equation.

Magnetic moment estimation and bounded extremal problems
Laurent Baratchart, Sylvain Chevillard, Douglas Hardin, Juliette Leblond, Eduardo Andrade Lima and Jean-Paul Marmorat
2019, 13(1): 39-67 doi: 10.3934/ipi.2019003
We consider the inverse problem in magnetostatics for recovering the moment of a planar magnetization from measurements of the normal component of the magnetic field at a distance from the support. Such issues arise in studies of magnetic material in general and in paleomagnetism in particular. Assuming the magnetization is a measure with L2-density, we construct linear forms to be applied on the data in order to estimate the moment. These forms are obtained as solutions to certain extremal problems in Sobolev classes of functions, and their computation reduces to solving an elliptic differential-integral equation, for which synthetic numerical experiments are presented.

A partial inverse problem for the Sturm-Liouville operator on the lasso-graph
Chuan-Fu Yang and Natalia Pavlovna Bondarenko
2019, 13(1): 69-79 doi: 10.3934/ipi.2019004
The Sturm-Liouville operator with singular potentials on the lasso graph is considered.
We suppose that the potential is known a priori on the boundary edge, and recover the potential on the loop from a part of the spectrum and some additional data. We prove the uniqueness theorem and provide a constructive algorithm for the solution of this partial inverse problem.

Recovering two coefficients in an elliptic equation via phaseless information
Vladimir G. Romanov and Masahiro Yamamoto
2019, 13(1): 81-91 doi: 10.3934/ipi.2019005
For fixed $y \in \mathbb{R}^3$, we consider the equation $L u+k^2u = - δ(x-y), \>x \in \mathbb{R}^3$, where $L=\text{div}(n(x)^{-2}\nabla)+q(x)$, $k >0$ is a frequency, $n(x)$ is a refraction index and $q(x)$ is a potential. Assuming that the refraction index $n(x)$ is different from $1$ only inside a bounded compact domain $Ω$ with a smooth boundary $S$ and the potential $q(x)$ vanishes outside of the same domain, we study an inverse problem of finding both coefficients inside $Ω$ from some given information on solutions of the elliptic equation. Namely, it is supposed that the point source located at point $y \in S$ is a variable parameter of the problem. Then for the solution $u(x,y,k)$ of the above equation satisfying the radiation condition, we assume to be given the following phaseless information $f(x,y,k)=|u(x,y,k)|^2$ for all $x,y \in S$ and for all $k≥ k_0>0$, where $k_0$ is some constant. We prove that this phaseless information uniquely determines both coefficients $n(x)$ and $q(x)$ inside $Ω$.

The regularized monotonicity method: Detecting irregular indefinite inclusions
Henrik Garde and Stratos Staboulis
2019, 13(1): 93-116 doi: 10.3934/ipi.2019006
In inclusion detection in electrical impedance tomography, the support of perturbations (inclusion) from a known background conductivity is typically reconstructed from idealized continuum data modelled by a Neumann-to-Dirichlet map. Only few reconstruction methods apply when detecting indefinite inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich [17,15]. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions on the conductivity perturbations nor on the inclusion boundaries. We show, provided that the perturbations are bounded away from zero, that the outer support of the positive and negative parts of the inclusions can be reconstructed independently. Moreover, we formulate a regularization scheme that applies to a class of approximative measurement models, including the Complete Electrode Model, hence making the method robust against modelling error and noise.
In particular, we demonstrate that for a convergent family of approximative models there exists a sequence of regularization parameters such that the outer shape of the inclusions is asymptotically exactly characterized. Finally, a peeling-type reconstruction algorithm is presented and, for the first time in literature, numerical examples of monotonicity reconstructions for indefinite inclusions are presented.

Nonconvex TGV regularization model for multiplicative noise removal with spatially varying parameters
Hanwool Na, Myeongmin Kang, Miyoun Jung and Myungjoo Kang
2019, 13(1): 117-147 doi: 10.3934/ipi.2019007
In this article, we introduce a novel variational model for the restoration of images corrupted by multiplicative Gamma noise. The model incorporates a convex data-fidelity term with a nonconvex version of the total generalized variation (TGV). In addition, we adopt a spatially adaptive regularization parameter (SARP) approach. The nonconvex TGV regularization enables the efficient denoising of smooth regions, without staircasing artifacts that appear on total variation regularization-based models, and edges and details to be conserved. Moreover, the SARP approach further helps preserve fine structures and textures. To deal with the nonconvex regularization, we utilize an iteratively reweighted $\ell_1$ algorithm, and the alternating direction method of multipliers is employed to solve a convex subproblem. This leads to a fast and efficient iterative algorithm for solving the proposed model. Numerical experiments show that the proposed model produces better denoising results than the state-of-the-art models.

Note on Calderón's inverse problem for measurable conductivities
Matteo Santacesaria
2019, 13(1): 149-157 doi: 10.3934/ipi.2019008
The unique determination of a measurable conductivity from the Dirichlet-to-Neumann map of the equation ${\rm{div}} (σ \nabla u) = 0$ is the subject of this note. A new strategy, based on Clifford algebras and a higher dimensional analogue of the Beltrami equation, is here proposed. This represents a possible first step for a proof of uniqueness for the Calderón problem in three and higher dimensions in the $L^\infty$ case.

Teemu Tyni and Valery Serov
2019, 13(1): 159-175 doi: 10.3934/ipi.2019009
We consider an inverse scattering problem of recovering the unknown coefficients of quasi-linearly perturbed biharmonic operator on the line. These unknown complex-valued coefficients are assumed to satisfy some regularity conditions on their nonlinearity, but they can be discontinuous or singular in their space variable. We prove that the inverse Born approximation can be used to recover some essential information about the unknown coefficients from the knowledge of the reflection coefficient. This information is the jump discontinuities and the local singularities of the coefficients.

A reference ball based iterative algorithm for imaging acoustic obstacle from phaseless far-field data
Heping Dong, Deyue Zhang and Yukun Guo
2019, 13(1): 177-195 doi: 10.3934/ipi.2019010
In this paper, we consider the inverse problem of determining the location and the shape of a sound-soft obstacle from the modulus of the far-field data for a single incident plane wave.
By adding a reference ball artificially to the inverse scattering system, we propose a system of nonlinear integral equations based iterative scheme to reconstruct both the location and the shape of the obstacle. The reference ball technique causes few extra computational costs, but breaks the translation invariance and brings information about the location of the obstacle. Several validating numerical examples are provided to illustrate the effectiveness and robustness of the proposed inversion algorithm.

Simultaneously recovering potentials and embedded obstacles for anisotropic fractional Schrödinger operators
Xinlin Cao, Yi-Hsuan Lin and Hongyu Liu
2019, 13(1): 197-210 doi: 10.3934/ipi.2019011
Let $A∈{\rm{Sym}}(n× n)$ be an elliptic 2-tensor. Consider the anisotropic fractional Schrödinger operator $\mathscr{L}_A^s+q$, where $\mathscr{L}_A^s: = (-\nabla·(A(x)\nabla))^s$, $s∈ (0, 1)$ and $q∈ L^∞$. We are concerned with the simultaneous recovery of $q$ and possibly embedded soft or hard obstacles inside $q$ by the exterior Dirichlet-to-Neumann (DtN) map outside a bounded domain $Ω$ associated with $\mathscr{L}_A^s+q$. It is shown that a single measurement can uniquely determine the embedded obstacle, independent of the surrounding potential $q$. If multiple measurements are allowed, then the surrounding potential $q$ can also be uniquely recovered. These are surprising findings since in the local case, namely $s = 1$, both the obstacle recovery by a single measurement and the simultaneous recovery of the surrounding potential by multiple measurements are long-standing problems and still remain open in the literature. Our argument for the nonlocal inverse problem is mainly based on the strong uniqueness property and Runge approximation property for anisotropic fractional Schrödinger operators.

A connection between uniqueness of minimizers in Tikhonov-type regularization and Morozov-like discrepancy principles
Vinicius Albani and Adriano De Cezaro
2019, 13(1): 211-229 doi: 10.3934/ipi.2019012
We state sufficient conditions for the uniqueness of minimizers of Tikhonov-type functionals. We further explore a connection between such results and the well-posedness of Morozov-like discrepancy principle. Moreover, we find appropriate conditions to apply such results to the local volatility surface calibration problem.
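The connection between Tikhonov-type regularization and a Morozov-like discrepancy principle discussed in the last abstract can be illustrated on a generic discrete ill-posed problem. A schematic Python sketch of this standard recipe, using a toy 1D deconvolution and an assumed known noise level (none of the specifics below are taken from the paper):

import numpy as np

# Toy ill-posed problem: Gaussian blur plus noise, recovered by Tikhonov
# regularization with the parameter chosen by Morozov's discrepancy
# principle  ||A x_alpha - y|| ~ tau * delta.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
x_true = np.exp(-60*(t - 0.35)**2) + 0.8*np.exp(-90*(t - 0.7)**2)

A = np.exp(-((t[:, None] - t[None, :])**2)/(2*0.02**2))
A /= A.sum(axis=1, keepdims=True)            # row-normalized blurring operator

noise = 1e-2*rng.standard_normal(n)
y = A @ x_true + noise
delta = np.linalg.norm(noise)                # noise level, assumed known
tau = 1.1

def x_alpha(alpha):
    # minimizer of ||A x - y||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha*np.eye(n), A.T @ y)

def discrepancy(alpha):
    return np.linalg.norm(A @ x_alpha(alpha) - y)

# The discrepancy grows with alpha, so a bisection on log10(alpha)
# finds the parameter with ||A x_alpha - y|| ~ tau * delta.
lo, hi = -12.0, 2.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if discrepancy(10**mid) > tau*delta:
        hi = mid
    else:
        lo = mid

alpha = 10**hi
err = np.linalg.norm(x_alpha(alpha) - x_true)/np.linalg.norm(x_true)
print(f"alpha = {alpha:.3e}, relative reconstruction error = {err:.3f}")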
Matthew Lickiss looks back at how our drawings of chemical structures have changed over time © iStock The conventions used to represent chemical structures are so deeply ingrained in the understanding and teaching of chemistry that they form a language every chemist learns to communicate in. However, representation of the structures of chemical compounds is also at the heart of many misconceptions that arise in chemistry teaching and is one reason that it is perceived as a ‘difficult’ subject. Ever since chemists began to realise that atoms in molecules could be arranged in multiple ways to make different chemical compounds, they have been developing ways to represent these arrangements. Initially, two huge hurdles to describing these structures had to be overcome. The first was that molecules could not be seen. Any representation of their structure was an attempt to model reality in abstract terms. The other, even bigger, challenge was to understand what that reality was – how chemical elements bond together and the possible arrangements they can form. The representation of chemical structures, like any language, was built up from a gradual understanding of the world it describes, with new conventions being added and inaccurate or unwieldy representations falling out of use. Getting to grips with the bond By the mid-19th century the theory of valency was gaining traction – the idea that elements would form a certain number of bonds to other elements, known at the time as ‘combining power’. Chemists began to consider how the ratios of elements in compounds were arranged at the molecular level. Couper's proposed structure of ethanol In 1858, Scottish chemist Archibald Scott Couper published the first structural formula representation of a molecule in his paper On a new chemical theory.1 He used elemental symbols to represent atoms with lines placed between them showing bonds. Couper’s structures were very condensed. Multiple atoms were shown with a single symbol accompanied by a superscript numeral – a convention that’s still in use today in a modified form as shorthand for many functional groups. Scottish chemist Alexander Crum Brown adapted this form of representation in 1861.2 He kept the element symbols, now in circles, and the bonds between them, opting for a much more schematic representation than Couper’s structures. While these first representations appear crude, they show many of the key features used in skeletal, or line-bond, formulas that form the basis of the representations we use today. Descriptions of molecular structures became a lot less familiar for a few years, before German chemist August Kekulé returned to line-bond conventions in 1865. Sausages and circles While Couper was advancing Brown’s style of representation, Kekulé was developing his own method of notation, colloquially referred to as ‘Kekulé sausages’. He first described his ideas for molecule structures in 1858 using a combination of ovals and circles to represent atoms, differentiating between them by varying the length of the ovals, or by shading the circles.3 The most striking difference between Kekulé sausages and line-bond notation is that there is no specific bond sign, the atoms are simply placed together. Because of this, the system lacked the ability to show the spatial relationship of the atoms in the structure. © Interfoto / Sammlung Rauch / Mary Evans Left: Brown's proposed structure of oxalic acid. Top: Isopropyl alcohol represented with Kekulé sausages. 
Bottom: Kekulé sausage structure of benzene Also at this time Josef Loschmidt presented an aesthetically attractive alternative to the line-bond method using interlocking circles.4 Each atom type in this system is shown using either circles of differing size (carbon is large, hydrogen is small) or with concentric circles (oxygen is shown with two concentric circles, nitrogen with three). As with Kekulé’s sausages, Loschmidt’s diagrams had no dedicated single bond sign component. However, these diagrams did include, for the first time, representations of double and triple bonds, shown as two or three lines across interlocking circles. While Loschmidt’s system was never used among the chemistry community, it is in many ways visually similar to 3D space filling models in use today. Sticking with it The use of Couper and Brown’s line-bond notation as the most common convention in chemical structure representation was confirmed in 1865 when Kekulé published the first representative structure of benzene. Realising that alternating double bonds in the carbon ring could account for the symmetrical properties of benzene if the structure was free to ‘resonate’ between two configurations, Kekulé proposed the structure of benzene using line-bond structures that are easily recognisable within modern conventions. This discovery was such a leap in understanding that still today line-bond structures are commonly referred to as Kekulé structures. Even though depiction of chemical structure had become standardised, the nature of a chemical bond was not well understood until American chemist Gilbert Lewis proposed that covalent bonds were formed by the interaction to two shared electrons. Along with this concept, which underpins valence bond theory, Lewis introduced a representation of molecules that dispensed with the depiction of bonds by lines.5 In Lewis dot structures, atoms are shown surrounded by dots that indicate their valence electrons. Bonds between atoms are represented by the pairs of electrons that comprise them (four and six electrons in the case of double and triple bonds). While the depiction of bonds in this way did not replace the use of lines, Lewis’ dots are still used in modern representations to indicate lone pairs of electrons or free radicals. Molecules in 3D Line-bond skeletal formulas clearly showed the bonds between atoms, but gave little indication of the 3D structure of the molecule. As chemists wrestled with valency and the nature of chemical bonds, representation of molecules remained firmly in 2D. But this was soon to change. Accounting for the observation of optical activity in many organic molecules, Jacobus Henricus van’t Hoff proposed in 1874 that saturated carbon atoms adopted a tetrahedral geometry. At the time, the four bonds of carbon were assumed to lie in a plane. With all its implications for stereochemistry, Van’t Hoff’s theory was revolutionary. Suddenly, chemists found that their 2D representations were no longer sufficient to describe molecules in 3D. As a result, several new representations built upon the line-bond convention so this stereochemical information could be included (see 3D vision). Seeing is believing The insights of Kekulé and Van’t Hoff into the nature of bonding and structure were conceptual leaps, but confirmation of their theories did not come for some time. With the advent of x-ray diffraction techniques in 1914 chemists, for the first time, could see how the atoms in molecules were arranged in space. 
William Henry Bragg was the first to study the crystals of organic molecules. In 1921 he published a study on the structure of naphthalene and its derivatives.6 Finally, with the development of valence bond theory in 1929 and the solution of the Schrödinger equation for atoms, at last chemists’ insights into the structure of molecules had a firm basis in quantum mechanics. Following on from this, chemists were able to develop models of molecules that more accurately depicted their structure. Space-filling models were developed where atoms are represented by spheres, the radius of which is proportional to the atomic radius. Bonding in these models is represented by overlapping of the spheres. Again, the distance between the centre of the spheres is proportional to the bond length between the atoms. Today, we display and draw chemical structures using various different conventions. Although all of them are based on the principles of the line-bond structure, their diversity allows chemists to describe and highlight different aspects of a molecule’s character. Indeed, many complex structures are commonly displayed using multiple conventions for differing parts of the molecule (see quinine below). With widely used conventions that haven’t changed significantly for many years, it’s easy to think the way we currently represent the structure of molecules is here to stay. But our conventions still have limitations in their descriptions. For example, there are many so-called non-Kekulé molecules whose structure cannot be easily described with line-bond conventions, or indeed molecules with axial chirality for which no satisfactory convention has yet been established. So, as stable as conventions have been, our chemical language might yet undergo some changes. Matthew Lickiss is a PhD student at the department of typography & graphic communication, University of Reading, UK
I am very disappointed by my experiment on this site. Four days ago, I posted this question: Numerical simulation of the double-slit experiment including watching the electrons . After less that 24 hours, it was put [on hold] by 5 persons as being unclear. So I edited my question a first time in the hope the [on hold] would be removed so that people can answer it, unfortunately it remained in the [on hold] state. I read from the rules that "Questions that are not reopened within five days will change from [on hold] to [closed]", so I went deeper in reading the rules from page https://physics.stackexchange.com/help/reopen-questions where it is suggested to re-edit the question and "Flag the question for moderator attention. Again, explain why it should be reopened. There is more than one moderator, and moderators do reconsider their decisions". So I followed these steps, re-edited the question a second time and flagged the question, explaining the moderator why in my opinion the question should be reopened. I was then surprised to get the following answer from a moderator: "declined - flags should only be used to make moderators aware of content that requires their intervention". Hence either he or me does not comply to the stackexchange policy... being novice here I guess that it is my fault, but could someone explain me what I did wrong? All in all, I am very disappointed: I have now spent many hours having to justify myself for asking a question which, unless I am completely stupid, has clearly some interest, and lost hope of getting an answer. I write it here again for information: The double-slit "thought experiment" described by Feynman in Lectures on Physics Volume 3 section I-6 Watching the electrons consists in firing electrons through a double-slit to observe the interference of electron waves, and watching them after passing the slits with a light source placed behind the double-slit, at equal distance of each slit. As electric charges scatter light, one can "detect" which slit the electron went through if the photon wavelength is small enough. Question: has this "thought experiment" been simulated by solving numerically the underlying Schrödinger equation? I am aware about numerical experiments of the double-slit, but did not find any including the interaction between the electrons and the photons just after the double-slit. The numerical simulation can address other types of particles, the crucial point being the simulation of the observation (here the photons being scattered by the electrons) and its effect on the wavefunction. Its interest could be in particular to better understand in which precise way the observation progressively becomes inoperative when the photon wavelength increases. • 8 $\begingroup$ Your disappointment is perfectly understable, I have the same feelings. Don't worry, you are right. If you would remain here, later, after collecting some reputation you could help our close/delete voters into a more clear and fair direction, with your votes. Until that, you can explain to them, that maybe a little bit more effort to answer the question combined with a more predictable and lenient review mechanism would be highly useful for everybody. $\endgroup$ – peterh Sep 22 '16 at 13:29 • 3 $\begingroup$ @peterh: thank you for the kind words, very much appreciated. $\endgroup$ – user130529 Sep 22 '16 at 14:04 • 1 $\begingroup$ Your question is now (as of this writing) reopened. 
$\endgroup$ – Rococo Sep 23 '16 at 6:01 • 1 $\begingroup$ @Rococo Thank you, you made my day. $\endgroup$ – user130529 Sep 23 '16 at 6:47 Questions which are edited are put into a queue to be reviewed for reopening. This is the primary mechanism by which questions get reopened after being put on hold. Flagging for moderator attention is a backup option, to be used when the review process isn't working - either because the question is too old to be reviewed, or, if you really believe that the outcome of the review process was wrong and have a good argument for it, you can make that case in a flag. However, in your case, the review hasn't been completed. It's not time to invoke the backup option yet. So your main mistake was flagging immediately after you edited. That being said, a declined flag isn't a big deal. When a flag is declined, it means the mods are just advising you to avoid flagging things like that in the future. In this case specifically, avoid flagging questions for reopening without waiting for the review. (Though to be fair, this review has been sitting around incomplete for quite a while.) There are a couple other things which I would advise you to do differently. First of all, you didn't actually explain a reason for reopening the question in your flag reason. If you want a moderator to reopen a question, your best bet is to briefly mention in the flag reason itself why the question should be reopened. This can be very simple; for example, "I've edited the question to clarify the issues brought up in the comments." Secondly, the way you worded your flag message is rather rude. Flag messages are considered private, so our normal "be nice" policy doesn't apply to them as strictly as it would apply to public posts, but bear in mind that when you cast a flag to ask someone for help, they will be more willing to help if you do phrase the message politely, and they will be less willing to help if you are impolite. Here is one example of how you could write a perfectly reasonable flag message for when you want a question reopened: Edited to address the issues brought up in comments; the question should be clear now, but I have been waiting a long time since the edits. Could you review this for reopening? • $\begingroup$ Thank you David Z for your answer. I hadn't realize being rude, sorry for that (I wanted to check again my flag but it now disappeared). I will put a new flag following your suggestion. $\endgroup$ – user130529 Sep 22 '16 at 12:35 • $\begingroup$ Well, it seems I can't flag the question again, it seems I better give up. $\endgroup$ – user130529 Sep 22 '16 at 12:45 • 1 $\begingroup$ The review is still ongoing, and this meta question is going to bring a good bit of attention to your post, so I'd say just leave it for another day or so. $\endgroup$ – David Z Sep 22 '16 at 12:58 • 2 $\begingroup$ @claudechuber Or you can keep waiting for the review queue to finish with it. Although I will say, I don't know that the question is a good fit still. I mean, it can be answered with a "Yes, it has been simulated, here's the reference" or "Nope, sorry, it hasn't been simulated" and that's not really the kinds of questions that are received well here. But, that's just my opinion. Others may disagree. I did not cast a vote in the queue one way or the other. $\endgroup$ – tpg2114 Sep 22 '16 at 12:59 • 2 $\begingroup$ @tpg2114 I dunno, I think those sorts of questions are fine, in general, as long as they don't come across as lazy. 
But let's not get into a discussion about the merits of that general type of question here; it'd be a matter for a separate meta post, if anyone cares to bring it up. $\endgroup$ – David Z Sep 22 '16 at 13:05 • $\begingroup$ @David Z: thank you for the recommendation. $\endgroup$ – user130529 Sep 22 '16 at 14:24 • $\begingroup$ @tpg2114: I understand your point; it does look like a reference request, and people may think "let him do the job like all of us and search in journals". However, I am not asking this for personal academic purposes but out of pure curiosity: this double-slit experiment is such a key experiment in quantum mechanics, and such a challenge to common intuition, that I think the existence of a numerical simulation of the observation could interest many people and shed light on what's happening once the wavefunction has passed the two slits and has been "observed". $\endgroup$ – user130529 Sep 22 '16 at 14:27 • 1 $\begingroup$ @claudechuber - if you think it's interesting for the site, nothing would prevent you from answering your own question... and I see it's now not only re-opened, but has attracted a significant bounty. $\endgroup$ – Floris Sep 26 '16 at 18:04 • $\begingroup$ @Floris Thank you for the suggestion, but unfortunately I don't know the answer (I wish I did). Nevertheless, following this question, I had a very interesting chat with Norbert Schuch. Yes, I saw the bounty, nice! $\endgroup$ – user130529 Sep 26 '16 at 20:15
8 July 2016 • Tunnel Visions • Statistical Methods for Data Analysis in Particle Physics • Path Integrals for Pedestrians • Books received

Tunnel Visions
By M Riordan, L Hoddeson and A W Kolb
University of Chicago Press
Also available at the CERN bookshop

The Superconducting Super Collider (SSC), a huge accelerator to be built in Texas in the US, was expected by the physicists who supported it to be the place where the Higgs boson would be discovered. Instead, the remnants of the SSC facilities at Waxahachie are now the property of the chemical company Magnablend, Inc. What happened in between? What went wrong? What are the lessons to be learnt? Tunnel Visions answers these historical questions in a precise and exhaustive way. Contrary to my expectations, it is not a doom-and-gloom narrative but a down-to-earth story of the national pride, good physics and bad economics of one of the biggest collider projects in history.

The book depicts the political panorama during the roughly ten-year life (1983–1993) of the SSC project. It started in the Reaganomics era, hand in hand with the International Space Station (ISS), and concluded during the first Clinton presidency, after the 1990s recession and the end of the Cold War. The ISS survived, possibly because political justifications for space adventure are easier to find, but most probably because from the beginning it was an international project. The book explains the management intricacies of such a large project, and the partisan support and disregard, up to the final SSC demise in the US Congress. For the particle-physics community this is a well-known tale, but the historical details are welcome.

However, the book is more than that, because it also sheds light on the lessons learnt. The final woes of the SSC marked the definitive opening of the US particle-physics community to full international collaboration. For 50 years, without doubt, the US had been the place to go for any particle physicist. Fermilab, SLAC and Brookhaven were, and still are, great stars in the physics firmament. Even if the SSC project had not been cut, those three had to keep working in order to maintain progress in the field. But that was too much for what was essentially a zero-sum budget game. The show had to go on, so Fermilab got the Main Injector, SLAC the BaBar B-factory, and Brookhaven the RHIC collider. Thanks to these upgrades, the three laboratories made important progress in particle physics: the top-quark discovery; W and Z boson precision measurements; the narrowing of the Higgs boson mass hunt to between 113 and 170 GeV; the detection of possible discrepancies with the Standard Model in b-meson decays; and the discovery of the liquid-like quark–gluon plasma.

Why did the SSC project collapse? The authors explain the real reasons, which were related not to technical problems but to poor management in the first years and the clash of cultures between the US particle-physics community and the US military-industrial system. But there were also reasons of timing: the SSC was several steps ahead of its time. To put it into context: during the years of the SSC project, at CERN the conversion of the SPS into a collider took place, along with the whole LEP programme and the beginning of the LHC project. That effort prevented any possible European contribution to the SSC. The last-ditch attempt to internationalize the SSC into a trans-Pacific partnership with Japan was also unsuccessful.
The lessons from history, the authors conclude, are that at the beginning of the 1990s the costs of frontier experimental particle physics had grown too much, even for a country like the US. Multilateral international collaboration was the only way out, as the ISS showed. The Higgs boson discovery was possible at CERN. The book avoids any “hare and tortoise” comparison here, however, since in the dawning of the new century, the US became a CERN observer state with a very important in-kind contribution. In my opinion, this is where the book grows in interest because it explains how the US particle-physics community took part in the LHC programme, becoming decisive. In particular, the US technological effort in developing superconducting magnets was not wasted. The book also talks about the suspense around the Higgs search when the Tevatron was the only one still in the game during the LHC shutdown after the infamous incident in September 2008. Useful appendices providing notes, a bibliography and even a short explanation of the Standard Model complete the text. • Rogelio Palomo, University of Sevilla, Spain. Statistical Methods for Data Analysis in Particle Physics By Luca Lista Also available at the CERN bookshop Particle-physics experiments are very expensive, not only in terms of the cost of building accelerators and detectors, but also due to the time spent by physicists and engineers in designing, building and running them. With the statistical analysis of the resulting data being relatively inexpensive, it is worth trying to use it optimally to extract the maximum information about the topic of interest, whilst avoiding claiming more than is justified. Thus, lectures on statistics have become regular in graduate courses, and workshops have been devoted to statistical issues in high-energy physics analysis. This also explains the number of books written by particle physicists on the practical applications of statistics to their field. This latest book by Lista is based on the lectures that he has given at his home university in Naples, and elsewhere. As part of the Springer series of “Lecture Notes in Particle Physics”, it has the attractive feature of being short – a mere 172 pages. The disadvantage of this is that some of the explanations of statistical concepts would have benefited from a somewhat fuller treatment. The range of topics covered is remarkably wide. The book starts with definitions of probability, while the final chapter is about discovery criteria and upper limits in searches for new phenomena, and benefits from Lista’s direct involvement in one of the large experiments at CERN’s LHC. It mentions such topics as the Feldman–Cousins method for confidence intervals, the CLs approach for upper limits, and the “look elsewhere effect”, which is relevant for discovery claims. However, there seems to be no mention of the fact that a motivation for the Feldman–Cousins method was to avoid empty intervals; the CLs method was introduced to protect against the possibility of excluding the signal plus background hypothesis when the analysis had little or no sensitivity to the presence or absence of the signal. The book has no index, nor problems for readers to solve. The latter is unfortunate. In common with learning to swim, play the violin and many other activities, it is virtually impossible to become proficient at statistics by merely reading about it: some practical exercise is also required. However, many worked examples are included. 
There are several minor typos that the editorial system failed to notice; in addition, figure 2.17, in which the uncertainty region for a pair of parameters is compared to the uncertainties in each of them separately, is confusing. There are places where I disagree with Lista's emphasis (although statistics is a subject that often does produce interesting discussions). For example, Lista claims it is counter-intuitive that, for a given observed number of events, an experiment that has a larger than expected number of background events (b) provides a tighter upper limit than one with a smaller background (i.e. a better experiment). However, if there are 10 observed events, it is reasonable that the upper limit on any possible signal is better if b = 10 than if b = 0. What is true is that the expected limit is better for the experiment with the smaller background. Finally, the last three chapters could be useful to graduate students and postdocs entering the exciting field of searching for signs of new physics in high-energy or non-accelerator experiments, provided that they have other resources to expand on some of Lista's shorter explanations.
• Louis Lyons, University of Oxford, UK.

Path Integrals for Pedestrians
By E Gozzi, E Cattaruzza and C Pagani
World Scientific

The path integral formulation of quantum mechanics is one of the basic tools used to construct quantum field theories, especially gauge-invariant theories. It is the bread and butter of modern field theory. Feynman's original formulation developed and extended some of the work of Dirac in the early 1930s, and provided an elegant and insightful solution to a generic Schrödinger equation. This short book provides a clear, pedagogical and insightful presentation of the subject. The derivations of the basic results are crystal clear, and the applications worked out are rather original. It includes a nice presentation of the WKB approximation within this context, including the Van Vleck and functional determinants, the connection formulae and the semiclassical propagator. An interesting innovation in this book is that the authors provide a clear presentation of the path integral formulation of the Wigner functions, which are fundamental in the study of quantum statistical mechanics, and, for the first time in an elementary book, of the work of Koopman and von Neumann on classical and statistical mechanics. The book closes with a well-selected set of appendices, where some further technical details and clarifications are presented. Some of the more mathematical details in the basic derivations can be found there, as well as aspects of operator ordering as seen from the path integral formulation, the formulation in momentum space, the use of Grassmann variables, etc. It will be difficult to find a better and more compact introduction to this fundamental subject.
• Luis Álvarez-Gaumé, CERN.

Books received

Bananaworld: Quantum Mechanics for Primates
By Jeffrey Bub
Oxford University Press

This is not another "quantum mechanics for dummies" book, as the author himself states. Nevertheless, it is a text that talks about quantum mechanics but is not meant for experts in the field. It explains complex concepts of theoretical physics almost without bringing up formulas, and assumes no specialist background. The book focuses on an intriguing issue of present-day physics: nonlocality and the associated phenomenon of entanglement.
Thinking in macroscopic terms, we know that what happens here affects only the surrounding environment. But going down to the microscopic level where quantum mechanics applies, we see that things work in a different way. Scientists discovered that in this case, besides the local effects, there are less evident effects that reveal themselves in strange correlations that occur instantaneously between remote locations. Even stronger nonlocal correlations, still consistent with relativity, have been theoretically proposed, but have not been observed up to now. This complex subject is treated by the author using a particular metaphor, which is actually more than just that: he constructs a metaphorical world made of magic bananas and simple actions that can be performed on them. Thanks to this, he is able to explain nonlocality and other difficult physics concepts in a relatively easy and comprehensible way. Even if it requires some general knowledge of mathematics and familiarity with science, this book will be accessible and interesting to a wide range of readers, as well as being an entertaining read.

Particles and the Universe: From the Ionian School to the Higgs Boson and Beyond
By Stephan Narison
World Scientific

This book aims to present the history of particle physics, from the introduction of the concept of particles by Greek philosophers, to the discovery of the last missing piece of the Standard Model, the Higgs boson, which took place at CERN in 2012. Chronologically following the development of this field of science, the author gives an overview of the most important notions and theories of particle physics. The text is divided into seven sections. The first part provides the basic concepts and a summary of the history of physics, arriving at the modern theory of forces, which are the subject of the second part. It carries on with the Higgs boson discovery and a description of some of the experimental apparatus used to study particles (from the LHC at CERN to cosmic-ray and neutrino experiments). The author also provides a brief treatment of general relativity, the Big Bang model and the evolution of the universe, and discusses future developments of particle physics. In the main body of the book, the topics are presented in a non-technical fashion, in order to be accessible to non-experts. Nevertheless, a rich appendix provides derivations and further details for advanced readers. The text is accompanied by plenty of images, including paintings and photographs of many of the protagonists of particle physics.

Beyond the Galaxy: How Humanity Looked Beyond our Milky Way and Discovered the Entire Universe
By Ethan Siegel
World Scientific

This book provides an introduction to astrophysics and cosmology for absolute beginners, as well as for any reader looking for a general overview of the subject and an account of its latest developments. Besides presenting what we know about the history of the universe and the marvellous objects that populate it, the author is interested in explaining how we came to such knowledge. He traces a trajectory through the various theories and discoveries that defined what we know about our universe, as well as the boundary of what is still to be understood. The first six chapters deal with the state of the art of our knowledge about the structure of the universe, its origin and evolution, general relativity and the life of stars.
The following five address the most important open problems, such as why there is more matter than antimatter, what dark matter and dark energy are, what came before the Big Bang, and what the fate of the universe is. Written in plain English, without formulas and equations, and characterized by clear and fluid prose, this book is suitable for a wide range of readers.

Modern Physics Letters A: Special Issue on Hadrontherapy
By Saverio Braccini (ed.)
World Scientific

The applications of nuclear and particle physics to medicine have seen extraordinary development since the discovery of X-rays by Röntgen at the end of the 19th century. Medical imaging and oncologic therapy with photons and charged particles (specifically hadrons) are currently hot research topics. This special issue of Modern Physics Letters is dedicated to hadron therapy, which is the frontier of cancer radiation therapy, and aims at filling a gap in the current literature on medical physics. Through 10 invited review papers, the volume presents the basics of hadron therapy, along with the most recent scientific and technological developments in the field. The first part covers topics such as the history of hadron therapy, radiation biophysics, particle accelerators, dose-delivery systems and treatment planning. In the second part, more specific topics are treated, including dose and beam monitoring, proton computed tomography, ionoacoustics and microdosimetry. This volume will be very useful to students, researchers approaching medical physics, and scientists interested in this interdisciplinary and fast-moving field.

The Penultimate Curiosity
By R Wagner and A Briggs
Oxford University Press

This book uses an original perspective to trace the history of the human quest for making sense of the world we live in. Written in collaboration by a painter specialising in religious subjects and a physical scientist who is a professor in the UK and also the director of a centre for research in quantum information processing, it starts from the assumption that both religion and science are manifestations of human curiosity. Science and its methods, based on reproducible experiments and evidence-based conclusions, are able to find answers to the "how" questions, to explain how nature works. This is what the authors call the "penultimate curiosity". But the "ultimate curiosity" is "why" the world is like it is. Science doesn't necessarily have the answer to such a question. Religions were born to try and give an answer to this. In the book, science and religion are not placed in opposition to one another. On the contrary, it is shown how they can live in a mutually enriching relationship. The authors sweep across human history from caveman times to the present day, explaining the nature and evolution of the entanglement between the two. The text is also accompanied by many beautiful illustrations that are an integral part of the argument.

Entropy Demystified: The Second Law Reduced to Plain Common Sense (2nd edition)
By Arieh Ben-Naim
World Scientific

In this book, the author explains entropy and the second law of thermodynamics in a clear and easy way, and with the help of many examples. He intends, in particular, to show that these physical laws are not intrinsically incomprehensible, as they may appear at first.
The fact that entropy, which is defined in terms of heat and temperature, can also be expressed in terms of order and disorder, which are intangible concepts, together with the evidence that entropy (or, in other words, disorder) increases perpetually, can puzzle students. Some mystery seems to be inevitably associated with these concepts. The author asserts that, looking at the second law from the molecular point of view, everything clears up. What a student needs to know is the atomistic formulation of entropy, which comes from statistical mechanics. The aim of the book is to clarify these concepts for readers who haven't studied statistical mechanics. Many dice games and examples from everyday life are used to make readers familiar with the subject. They are guided along a path that allows them to discover by themselves what entropy is, how it changes, and why it always changes in one direction in a spontaneous process. In this second edition, seven simulated games are also included, so that the reader can experiment with and appreciate the joy of understanding the second law of thermodynamics.
The molecular orbitals of $\ce{O2}$ are typically shown as follows, with every orbital filled by a spin alpha (or up) and a spin beta (or down) electron (except the HOMO levels, which are singly occupied).

O2 molecular orbital diagram

Calculations using, for instance, Hartree-Fock or density functional theory, however, produce alpha and beta orbitals with different energies - no degeneracy of alpha and beta orbitals. For instance, the B3LYP/6-311G** HOMO/LUMO levels (taken from the computational chemistry database) are as follows: alpha HOMO (-0.31186), alpha LUMO (0.19755), beta HOMO (-0.46536), beta LUMO (-0.11702). Clearly the HOMO levels are not degenerate, at odds with what we expect from the above molecular orbital diagram. A similar mismatch between the alpha and beta orbitals exists for orbitals below the HOMO, as well as with other levels of theory.

Is computational chemistry getting the orbitals of $\ce{O2}$ wrong? Am I missing something? Conceptually I can't grasp how alpha and beta orbitals can have different energies, since, at least for filled orbitals where the electrons pair up, I expect them to have the same energy.

The short answer is: yes, computational chemistry gets them wrong, because we don't have a way to calculate orbital energies exactly. In the end, the root cause is the general problem with quantum many-fermion systems - we can't solve the Schrödinger equation exactly. Let's have a look at different computational chemistry methods to understand why that's the case. In general, more sophisticated methods will reduce the energy gap between alpha and beta electrons, but as we will see, none will guarantee that it doesn't exist.

Hartree-Fock methods

Hartree-Fock methods are inherently approximate: when we solve a Hartree-Fock calculation, we are solving not a single system of N interacting electrons, but N coupled one-electron problems in which the electrons interact only indirectly. Of the energy terms in the Hamiltonian, HF accounts for the kinetic energy of each electron, its interaction with all the nuclei, the coulombic repulsion between electrons, and the exchange energy with other electrons of the same spin (because they are fermions), but we ignore electronic correlation. In HF, electrons only interact with each other indirectly - by contributing to a background external potential in the solution of each electron's own Hamiltonian. This makes it necessary to solve the equations iteratively using a mean-field approach - what we usually call "self-consistent field" or SCF. Correlation requires explicitly considering more than one electron at the same time, so it's not compatible with the core HF model.

But, even if it's approximate, shouldn't we still get the same (if not exact) values for alpha and beta electrons? Sure we do, in systems where the error introduced by the HF assumptions is the same for alpha and beta electrons. For instance, if you run an unrestricted HF calculation on singlet $\ce{O_2}$, where you have the same number of alpha and beta electrons, you'll get exactly the same values for the alpha and beta orbitals (and these will be exactly the same as you'd get for a restricted HF calculation of the same system). However, for triplet $\ce{O_2}$, alpha and beta electrons are exposed to slightly different interactions - since correlation depends on the spin of the interacting electrons, ignoring it affects alpha and beta electrons differently.
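To make that last point concrete, here is a minimal sketch (my own illustration, not part of the original posts) using the open-source PySCF package; the bond length is an assumed round value and the basis set matches the one quoted in the question. For triplet O2 the alpha and beta HOMO energies come out different, while for the naive closed-shell singlet they coincide.

# Sketch only: unrestricted Hartree-Fock on O2, assuming PySCF is installed.
from pyscf import gto, scf

def homo_energies(mf, mol):
    # mo_energy holds separate alpha and beta arrays; the HOMO index is n_occ - 1.
    n_alpha, n_beta = mol.nelec
    return mf.mo_energy[0][n_alpha - 1], mf.mo_energy[1][n_beta - 1]

# Triplet (ground-state) O2: two unpaired alpha electrons, spin = 2S = 2.
mol_triplet = gto.M(atom="O 0 0 0; O 0 0 1.21", basis="6-311g**", spin=2)
uhf_triplet = scf.UHF(mol_triplet).run()
print("triplet alpha/beta HOMO:", homo_energies(uhf_triplet, mol_triplet))  # the two differ

# Closed-shell singlet O2 for comparison: equal numbers of alpha and beta electrons.
mol_singlet = gto.M(atom="O 0 0 0; O 0 0 1.21", basis="6-311g**", spin=0)
uhf_singlet = scf.UHF(mol_singlet).run()
print("singlet alpha/beta HOMO:", homo_energies(uhf_singlet, mol_singlet))  # the two coincide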
Post-Hartree-Fock methods Post-Hartree-Fock methods attempt to correct the HF limitations by first resolving a Hartree-Fock system and then adding a correction for electronic correlation. The problem, however is that the corrections aren't exact. Møller-Plesset perturbation methods take the HF wavefunction and then add a numerical correction to the energy in a perturbative fashion - you apply the HF method to double-excited (MP2), triple-excited (MP3), and so on versions of the same ground state HF wavefunction and then introduce a correction to the energy of your orbitals (not to the wavefunctions themselves) based on that. It has several problems - you are only accounting for dynamic correlation (how electrons influence the movement of other electrons within your HF wavefunction), and, because excited estates are not variationally limited, there's no guarantee that you will not overestimate your correction and will end up with an energy below the exact energy. But even if those problems could be completely corrected, since part of the correlation is not accounted for, it will still give slightly different energies for alpha and beta orbitals in a system with $\mathrm{S > 0}$. Configuration interaction (CI) methods apply the HF method not only to the ground state but also to a number of excited states, and then find the correct solution to the system variationally, as a linear combination of those configurations. This method is in principle exact - and it's variational, so we are guaranteed not to underestimate the energy - the problem is that it is a series expansion, in which we add a term for each possible configuration state function (CSF). Still, even a full CI taking into account all the possible CSFs won't account for all of the correlation - CI is only exact in the limit of a complete basis set. In practice, expanding the series is computationally very expensive, and it is truncated relatively early - full CI is only practical for the simplest of systems. When it's applicable, however, difference in energy between alpha and beta electrons should be negligible. Coupled cluster Coupled cluster (CC) has a theoretical foundation that is distinct from HF and post-HF methods; essentially, it takes a wavefunction (usually a Slater determinant) and performs a number of excitations on it, producing a coupled cluster description of the system that, like full CI, is in principle exact. However, like CI, CC is a series expansion - you have terms for single, double, triple, etc. excitations, and we would require all possible excitations to be taken into account to fully consider electron correlation. As with CI, we usually truncate the series quite early - the usual "gold standard" of computational chemistry, against which all other methods have usually been benchmarked in the last couple of decades, is CCSD(T) - meaning that we consider the first and second term, and the third term only partially. This is usually considered to be as exact as it gets in computational chemistry, but again, it is prohibitive for all but the simplest of systems. Like with full CI, CC differences between alpha and beta electrons will be negligible. Density functional theory I left DFT for the end because it's an entirely different beast, theoretically speaking, but in the end the problem comes down to the same issue: you don't account for correlation correctly, and that flaw impacts alpha and beta electrons differently for $\mathrm{S > 0}$. 
Density functional theory, in principle, shouldn't have the same problems Hartree-Fock-related methods have, since we work only with a single, global, three-dimensional density function $\mathrm{\rho(r)}$; and, as per the second Hohenberg-Kohn theorem, the ground state electron density is unique and variationally limited, so in principle we could keep applying the functional to different three-dimensional functions until we find the absolute minimum - and that's our exact ground state density. There are, however, two big problems.

First, in practical terms, we need a way to translate the electron density to wavefunctions so we can understand what's happening at the electron level. For instance, the very values for orbital energies that you mention for a DFT calculation are not inherently in the density function that DFT deals with; there's no way to distinguish the first from the second occupied orbital in the density function. What we do is introduce a formalism to translate the density function into orbital terms - that's the Kohn-Sham formalism. It boils down to treating the global particle density function, which distributes all N particles in space, as the sum of N single-particle functions, each corresponding to one electron, which interact with each other only indirectly through a background external potential that accounts for the Coulomb, exchange and correlation interactions; we then update densities and potentials iteratively in another mean-field, self-consistent approach. You'll notice that this is pretty much the same thing we did with the wavefunction in Hartree-Fock.

Two considerations, however. First, the Kohn-Sham formalism allows us to map the total density to orbital occupations and from there calculate a number of properties that we couldn't derive from the total density function, but it guarantees only that the sum of the single-electron densities is equal to the total density function for the ground state - not that the depiction of a particular orbital is accurate. So, values for specific molecular orbitals (such as orbital energies) should be taken with a pinch of salt. In fact, in practice, orbital properties from DFT are most accurate around the HOMO-LUMO gap, while values for core orbitals or virtual orbitals can diverge quite wildly from reference values. This is so because the orbitals are constructed in a way that best fits the density, and not one that best fits the wavefunction. You can find a lot of discussions about whether Kohn-Sham orbitals are physically meaningful and how they relate to the actual orbitals of the system. As you can imagine, if you have important spin effects in your system, spin deviations in these properties are almost guaranteed.

Second, there's also an advantage over Hartree-Fock: in HF, we had to drop correlation because we couldn't incorporate it - it required treating more than one electron wavefunction explicitly. But, in DFT, wavefunctions are an afterthought - what we are minimising is the total density function, so there's nothing that prevents us from introducing correlation. We could even ditch the Kohn-Sham formalism and calculate correlation directly in terms of the total density function. So, aside from the problem of breaking down the total density function into a number of one-electron densities, we could in principle at least get an exact total alpha electron density and a total beta electron density, couldn't we? Unfortunately, no, because we don't know what the exact functional looks like.
And, if we move from wavefunctions to density, we also lose the ability to calculate exchange interactions exactly. The only system for which we can calculate these exactly is the free electron gas; so all the functionals we regularly use are proposals for the exchange or correlation terms, or both, that approximate the exchange-correlation interactions but that don't represent them exactly. And as with all the other cases, if we are not capturing all the exchange and correlation effects, that will impact alpha and beta electrons differently in a system with $\mathrm{S > 0}$. • 4 $\begingroup$ This could be due to my misunderstanding, but how do we know the true picture has degenerate alpha and beta orbitals? Can we really say anything about orbitals of real compounds, since orbitals are just a construct we use to make forming an multielectron wavefunction easier? $\endgroup$ – Tyberius Mar 8 '18 at 1:53 • $\begingroup$ Yes, you are right - however, the decomposition of the total electronic wavefunction into an alpha wavefunction and a beta wavefunction, which would lead to different orbitals for alpha and beta electrons, is no less of a construct - it's a different decomposition of the same 3N-dimensional wavefunction into N single-electron wavefunctions. $\endgroup$ – user41033 Mar 10 '18 at 12:47 • 4 $\begingroup$ then I guess my question is how we know that the correct limit of using alpha and beta orbitals is that the pairs will be degenerate? If that's the case, why not just use ROHF which ensures that from the beginning? I guess I don't see why we should trust the simple MO theory picture over the computational picture. $\endgroup$ – Tyberius Mar 10 '18 at 15:57 • $\begingroup$ @Tyberius Even if we can't solve the schrödinger equation analytically, we know that energetically degenerate eigenstates can exist, which can be mathematically deduced by arguments of symmetry. Then, if our MO picture is symmetrical, does it not follow that their eigenvalues in a hypothetically analytical Hamiltonian would be degenerate? $\endgroup$ – Blaise Oct 12 '18 at 13:38 • $\begingroup$ For example: a doubly degenerate $E_{1u}=\Pi _u$ irrep from the $D_{\infty h}$ point group should be doubly degenerate in the Hamiltonian, right? $\endgroup$ – Blaise Oct 12 '18 at 14:00 A triplet species, as your MO scheme shows, has two unpaired alpha spin electrons, which is why you don't have to look at the beta orbitals at all but you have to look, if the $\alpha$-HOMO and $\alpha$-HOMO-1 are equivalent or not. After optimizing the geometry of triplet oxygen using the method you states, B3LYP 6-311G**, we end up with a bond length of $\pu{1.2058 Å}$ and are given the following energies: Population analysis using the SCF density. Orbital symmetries: Alpha Orbitals: Occupied (SGG) (SGU) (SGG) (SGU) (PIU) (PIU) (SGG) (PIG) Virtual (SGU) (PIU) (PIU) (SGU) (SGG) (SGG) (PIG) (PIG) (SGU) (PIU) (PIU) (DLTG) (DLTG) (DLTU) (DLTU) (SGG) (PIG) (PIG) (SGU) (PIU) (PIU) (SGG) (PIG) (PIG) (SGU) (SGG) (SGU) Beta Orbitals: Occupied (SGG) (SGU) (SGG) (SGU) (SGG) (PIU) (PIU) Virtual (PIG) (PIG) (SGU) (PIU) (PIU) (SGU) (SGG) (SGG) (PIG) (PIG) (SGU) (PIU) (PIU) (DLTG) (DLTG) (SGG) (DLTU) (DLTU) (PIG) (PIG) (SGU) (PIU) (PIU) (SGG) (PIG) (PIG) (SGU) (SGG) (SGU) The electronic state is 3-SGG. Alpha occ. eigenvalues -- -19.28009 -19.27992 -1.31462 -0.83929 -0.56862 Alpha occ. eigenvalues -- -0.56862 -0.55311 -0.31192 -0.31192 <------ Alpha virt. eigenvalues -- 0.19733 0.66242 0.66242 0.69937 0.71138 Alpha virt. 
eigenvalues -- 0.77370 0.78062 0.78062 1.23343 2.23120 Alpha virt. eigenvalues -- 2.23120 2.53472 2.53472 2.80188 2.80188 Alpha virt. eigenvalues -- 2.83167 3.40715 3.40715 3.75295 4.50288 Alpha virt. eigenvalues -- 4.50288 4.64528 4.80485 4.80485 5.97229 Alpha virt. eigenvalues -- 49.48209 49.67839 Beta occ. eigenvalues -- -19.24954 -19.24923 -1.25730 -0.75086 -0.51327 Beta occ. eigenvalues -- -0.46530 -0.46530 Beta virt. eigenvalues -- -0.11708 -0.11708 0.24099 0.70896 0.70896 Beta virt. eigenvalues -- 0.71935 0.72389 0.79954 0.83108 0.83108 Beta virt. eigenvalues -- 1.27043 2.27711 2.27711 2.62147 2.62147 Beta virt. eigenvalues -- 2.87800 2.89802 2.89802 3.46348 3.46348 Beta virt. eigenvalues -- 3.78751 4.58054 4.58054 4.66543 4.88322 Beta virt. eigenvalues -- 4.88322 6.00448 49.51237 49.70844

You can clearly see that the $\alpha$-HOMO and $\alpha$-HOMO-1 are degenerate, both with an energy of $\pu{-0.31192 Eh}$. If you look at both MOs you can easily see why: they are qualitatively the same combination of 2p AOs, just with different orientations.

Both singly occupied alpha orbitals of triplet oxygen

Since symmetry was turned on, I guess this needs to be the case, but the result stays the same if it is turned off. So, does computational chemistry get the molecular orbitals of dioxygen wrong? No, the MOs are right. But you actually have a different question: why are the energies of alpha and beta MOs in unrestricted open-shell calculations not identical?

There are two methods to treat open-shell systems. One is restricted open-shell (RO*) and the other is unrestricted (U*). Without expanding on it too much, the main difference is that, like restricted closed-shell methods, the restricted open-shell methods force the alpha and beta electrons to share the same spatial MOs. Then, of course, they also have the same energies. But the energies that you show are from an unrestricted open-shell calculation. If you use this method, alpha and beta electrons are not forced into the same spatial MOs, and this freedom leads to different energies for the alpha and beta MOs.
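As a rough illustration of that difference (my own sketch using the open-source PySCF package, not part of the original answer; the optimised bond length quoted above is assumed), an unrestricted calculation returns two separate lists of orbital energies, while the restricted open-shell variant returns a single shared list by construction:

# Sketch only: UKS vs ROKS B3LYP on triplet O2, assuming PySCF is installed.
from pyscf import gto, dft

mol = gto.M(atom="O 0 0 0; O 0 0 1.2058", basis="6-311g**", spin=2)

# Unrestricted Kohn-Sham: alpha and beta electrons get their own spatial orbitals,
# so the two sets of orbital energies are free to differ.
uks = dft.UKS(mol)
uks.xc = "b3lyp"
uks.run()
print("UKS alpha orbital energies:", uks.mo_energy[0][:9])
print("UKS beta  orbital energies:", uks.mo_energy[1][:9])

# Restricted open-shell Kohn-Sham: both spins share one set of spatial orbitals,
# so there is only a single list of orbital energies.
roks = dft.ROKS(mol)
roks.xc = "b3lyp"
roks.run()
print("ROKS orbital energies:", roks.mo_energy[:9])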
Uncertainty principle

In quantum mechanics, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables, such as position x and momentum p, can be known simultaneously. For instance, in 1927, Werner Heisenberg stated that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[1] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[2] later that year and by Hermann Weyl[3] in 1928:

σx σp ≥ ħ/2

(ħ is the reduced Planck constant).

Historically, the uncertainty principle has been confused[4][5] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[6] It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[7] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology.[8] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[9]

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number-phase uncertainty relations in superconducting[10] or quantum optics[11] systems. Applications dependent on the uncertainty principle for their operation include extremely low noise technology such as that required in gravitational-wave interferometers.[12]

Animation: the evolution of an initially very localized Gaussian wave function of a free particle in two-dimensional space, with colour and intensity indicating phase and amplitude. The spreading of the wave function in all directions shows that the initial momentum has a spread of values, unmodified in time, while the spread in position increases in time; as a result, the uncertainty Δx Δp increases in time.

Figure: the superposition of several plane waves to form a wave packet. This wave packet becomes increasingly localized with the addition of many waves. The Fourier transform is a mathematical operation that separates a wave packet into its individual plane waves. Note that the waves shown here are real for illustrative purposes only, whereas in quantum mechanics the wave function is generally complex.

As a principle, Heisenberg's uncertainty relationship must be something that is in accord with all experience. However, humans do not form an intuitive understanding of this indeterminacy in everyday life, so it may be helpful to demonstrate how it is integral to more easily understood physical situations.
Two alternative conceptualizations of quantum physics can be examined with the goal of demonstrating the key role the uncertainty principle plays. A wave mechanics picture of the uncertainty principle provides for a more visually intuitive demonstration, and the somewhat more abstract matrix mechanics picture provides for a demonstration of the uncertainty principle that is more easily generalized to cover a multitude of physical contexts. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where Template:Mvar is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable Template:Mvar is performed, then the system is in a particular eigenstate Template:Mvar of that observable. However, the particular eigenstate of the observable Template:Mvar need not be an eigenstate of another observable Template:Mvar: If so, then it does not have a unique associated measurement for it, as the system is not in an eigenstate of that observable.[13] Wave mechanics interpretation {{#invoke:Multiple image|render}} {{#invoke:main|main}} {{#invoke:main|main}} According to the de Broglie hypothesis, every object in the universe is a wave, a situation which gives rise to this phenomenon. The position of the particle is described by a wave function . The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is The Born rule states that this should be interpreted as a probability density function in the sense that the probability of finding the particle between a and b is In the case of the single-moded plane wave, is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. Consider a wave function that is a sum of many waves, however, we may write this as where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes with representing the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that is the Fourier transform of and that x and p are conjugate variables. 
Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ(x)|² is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.

Matrix mechanics interpretation

In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as

[Â, B̂] = ÂB̂ − B̂Â.

In the case of position and momentum, the commutator is the canonical commutation relation

[x̂, p̂] = iħ.

The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that x̂|ψ⟩ = x0|ψ⟩. Applying the commutator to |ψ⟩ yields

[x̂, p̂]|ψ⟩ = (x̂p̂ − p̂x̂)|ψ⟩ = (x̂ − x0 Î)p̂|ψ⟩ = iħ|ψ⟩,

where Î is the identity operator. Suppose, for the sake of proof by contradiction, that |ψ⟩ is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write

(x̂ − x0 Î)p̂|ψ⟩ = (x̂ − x0 Î)p0|ψ⟩ = (x0 Î − x0 Î)p0|ψ⟩ = 0.

On the other hand, the above canonical commutation relation requires that

[x̂, p̂]|ψ⟩ = iħ|ψ⟩ ≠ 0.

This implies that no quantum state can simultaneously be both a position and a momentum eigenstate. When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations,

σx = √(⟨x̂²⟩ − ⟨x̂⟩²),  σp = √(⟨p̂²⟩ − ⟨p̂⟩²).

As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.
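The position–momentum trade-off described in the wave mechanics interpretation can also be checked numerically. The following short sketch (an added illustration, not part of the original article; it uses units with ħ = 1 and an arbitrarily chosen packet width) builds a Gaussian wave packet on a grid, obtains its momentum-space amplitude with a fast Fourier transform, and compares the product σx σp with the Kennard bound ħ/2.

# Numerical check of the Kennard bound for a Gaussian wave packet (units with hbar = 1).
import numpy as np

hbar = 1.0
sigma = 0.5                                      # assumed width of the position-space Gaussian
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4.0 * sigma**2))           # Gaussian wave packet with sigma_x = sigma
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalise the wavefunction

prob_x = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)    # <x> = 0 by symmetry of the packet

phi = np.fft.fftshift(np.fft.fft(psi))           # momentum-space amplitude (only its magnitude is used)
p = 2.0 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp                    # normalise as a probability density in p
sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)    # <p> = 0 for this real packet

print(sigma_x * sigma_p, ">=", hbar / 2.0)       # about 0.5: a Gaussian saturates the Kennard bound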
This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for , a choice , in angular momentum multiplets, ψ = |j, m 〉, bounds the Casimir invariant (angular momentum squared, ) from below and thus yields useful constraints such as j (j + 1) ≥ m (m + 1), and hence jm, among others. where σE is the standard deviation of the energy operator (Hamiltonian) in the state Template:Mvar, σB stands for the standard deviation of B. Although the second factor in the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state Template:Mvar with respect to the observable B: In other words, this is the time interval (Δt) after which the expectation value changes appreciably. An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time-energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow decaying states have a narrow linewidth.[24] The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). Quantum harmonic oscillator stationary states {{#invoke:main|main}} Consider a one-dimensional quantum harmonic oscillator (QHO). It is possible to express the position and momentum operators in terms of the creation and annihilation operators: Using the standard rules for creation and annihilation operators on the eigenstates of the QHO, the variances may be computed directly, The product of these standard deviations is then In particular, the above Kennard bound[2] is saturated for the ground state n=0, for which the probability density is just the normal distribution. Quantum harmonic oscillator with Gaussian initial condition {{#invoke:Multiple image|render}} In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the Template:Not a typo-dependent solution. After many cancelations, the probability densities reduce to where we have used the notation to denote a normal distribution of mean μ and variance σ2. 
Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as From the relations we can conclude Coherent states {{#invoke:main|main}} A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as In the picture where the coherent state is a massive particle in a QHO, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, Therefore every coherent state saturates the Kennard bound with position and momentum each contributing an amount in a "balanced" way. Moreover every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. Particle in a box {{#invoke:main|main}} Consider a particle in a one-dimensional box of length . The eigenfunctions in position and momentum space are where and we have used the de Broglie relation . The variances of and can be calculated explicitly: The product of the standard deviations is therefore For all , the quantity is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when , in which case Constant momentum File:Wave function of a Gaussian state moving at constant momentum.gif Position space probability density of an initially Gaussian state moving at minimally uncertain, constant momentum in free space. Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to where we have introduced a reference scale , with describing the width of the distribution−−cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are Since and this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is such that the uncertainty product can only increase with time as Additional uncertainty relations Mixed states The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.[27] Phase space In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true:[28] Choosing , we arrive at Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are positive. The positive eigenvalues then imply a corresponding positivity condition on the determinant: or, explicitly, after algebraic manipulation, Systematic error The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation. Heisenberg's original version, however, was interested in systematic error, incurred by a disturbance of a quantum system by the measuring apparatus, i.e., an observer effect. If we let represent the error (i.e., accuracy) of a measurement of an observable and represent its disturbance by the measurement process, then the following inequality holds:[5] In fact, Heisenberg's uncertainty principle as originally described in the 1927 formulation mentions only the first term. 
Applying the notation above to Heisenberg's position-momentum relation, Heisenberg's argument could be rewritten as Such a formulation is both mathematically incorrect and experimentally refuted.[29] It is also possible to derive a similar uncertainty relation combining both the statistical and systematic error components.[30] Entropic uncertainty principle {{#invoke:main|main}} For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle has little physical meaning for fluctuations larger than one period.[21][31][32][33] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty.[34] This conjecture, also studied by Hirschman[35] and proven in 1975 by Beckner[36] and by Iwo Bialynicki-Birula and Jerzy Mycielski[37] is where we have used the Shannon entropy (not the quantum von Neumann entropy) for some arbitrary fixed length scale . From the inverse logarithmic Sobolev inequalites[38] (equivalently, from the fact that normal distributions maximize the entropy of all such with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, the numerical value on the right hand side assumes the unitary convention of the Fourier transform, used throughout physics and elsewhere in this article. Third, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). Harmonic analysis {{#invoke:main|main}} In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds, Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function Template:Mvar and its Fourier transform ƒ̂.[39][40][41] Signal processing {{safesubst:#invoke:anchor|main}} In the context of signal processing, and in particular time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. Stated alternatively, "One cannot simultaneously sharply localize a signal (function Template:Mvar ) in both the time domain and frequency domain ( ƒ̂, its Fourier transform)". 
When applied to filters, the result implies that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. Benedicks's theorem Amrein-Berthier[42] and Benedicks's theorem[43] intuitively says that the set of points where Template:Mvar is non-zero and the set of points where ƒ̂ is nonzero cannot both be small. Specifically, it is impossible for a function Template:Mvar in L2(R) and its Fourier transform ƒ̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version is[44][45] One expects that the factor may be replaced by , which is only known if either Template:Mvar or Template:Mvar is convex. Hardy's uncertainty principle The mathematician G. H. Hardy formulated the following uncertainty principle:[46] it is not possible for Template:Mvar and ƒ̂ to both be "very rapidly decreasing." Specifically, if Template:Mvar in L2(R) is such that ( an integer), then, if ab > 1, f = 0, while if ab=1, then there is a polynomial Template:Mvar of degree N such that This was later improved as follows: if fL2(Rd) is such that where Template:Mvar is a polynomial of degree (N−d)/2 and Template:Mvar is a real d×d positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander[47] (the case ) and Bonami, Demange, and Jaming[48] for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in Hedenmalm.[49] A full description of the case ab<1 as well as the following extension to Schwarz class distributions appears in Demange:[50] Theorem. If a tempered distribution is such that for some convenient polynomial Template:Mvar and real positive definite matrix Template:Mvar of type d × d. Werner Heisenberg formulated the Uncertainty Principle at Niels Bohr's institute in Copenhagen, while working on the mathematical foundations of quantum mechanics.[51] Werner Heisenberg and Niels Bohr In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad-hoc old quantum theory with modern quantum mechanics. The central premise was that the classical concept of motion does not fit at the quantum level, as electrons in an atom do not travel on sharply defined orbits. Rather, their motion is smeared out in a strange way: the Fourier transform of its time dependence only involves those frequencies that could be observed in the quantum jumps of their radiation. 
Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going. In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. This implication provided a clear physical interpretation for the non-commutativity, and it laid the foundation for what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg showed that the commutation relation implies an uncertainty, or in Bohr's language a complementarity.[52] Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote: It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.[53] In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established an expression of the form Δx Δp ≳ h as the minimum amount of unavoidable momentum disturbance caused by any position measurement,[1] but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture[54] he refined his principle. Kennard[2] in 1927 first proved the modern inequality σ_x σ_p ≥ ħ/2, where ħ = h/2π, and σ_x, σ_p are the standard deviations of position and momentum. Heisenberg only proved this relation for the special case of Gaussian states.[54] Terminology and translation Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word, "Ungenauigkeit" ("indeterminacy"),[1] to describe the basic theoretical principle. Only in the endnote did he switch to the word, "Unsicherheit" ("uncertainty"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, the translation "uncertainty" was used, and it became the more commonly used term in the English language thereafter.[55] Heisenberg's microscope Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma-ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light. The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable.
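Kennard's inequality can also be checked directly in a truncated harmonic-oscillator (Fock) basis. The sketch below is our own, with ħ = m = ω = 1 and an arbitrary truncation: it builds x̂ and p̂ from ladder operators, confirms the commutator [x̂, p̂] = iħ on the low-lying states, and evaluates σ_x σ_p for the Gaussian ground state and two non-Gaussian states.

```python
import numpy as np

hbar = 1.0
N = 60                                          # basis truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator in the Fock basis
x = np.sqrt(hbar / 2.0) * (a + a.conj().T)      # with m = omega = 1
p = 1j * np.sqrt(hbar / 2.0) * (a.conj().T - a)

comm = x @ p - p @ x
print("[x,p]/(i hbar), low-lying block:\n", np.round((comm / (1j * hbar))[:3, :3].real, 6))

def uncertainty_product(state):
    state = state / np.linalg.norm(state)
    ex, ex2 = state.conj() @ x @ state, state.conj() @ x @ x @ state
    ep, ep2 = state.conj() @ p @ state, state.conj() @ p @ p @ state
    return np.sqrt((ex2 - ex ** 2).real * (ep2 - ep ** 2).real)

ground = np.zeros(N); ground[0] = 1.0           # Gaussian ground state
n2 = np.zeros(N); n2[2] = 1.0                   # second excited state
cat = np.zeros(N); cat[0] = cat[3] = 1.0        # a non-Gaussian superposition

for name, s in [("ground (Gaussian)", ground), ("n = 2", n2), ("(|0>+|3>)/sqrt2", cat)]:
    print(f"{name:18s} sigma_x*sigma_p = {uncertainty_product(s):.4f}  (bound {hbar/2})")
```

Only the Gaussian ground state saturates the bound, in line with the remark that Heisenberg's own proof covered just the Gaussian case; the microscope argument below recovers the same limit heuristically, up to a numerical factor.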
One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using an imaginary microscope as a measuring device.[54] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely. Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. The combination of these trade-offs imply that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to Planck's constant.[56] Heisenberg did not care to formulate the uncertainty principle as an exact limit (which is elaborated below), and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Critical reactions The Copenhagen interpretation of quantum mechanics and Heisenberg's Uncertainty Principle were, in fact, seen as twin targets by detractors who believed in an underlying determinism and realism. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be. Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. Some experiments within the first decade of the twenty-first century have cast doubt on how extensively the uncertainty principle applies.[57] Einstein's slit The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Consider a particle passing through a slit of width Template:Mvar. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum. Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Δp, the momentum of the wall must be known to this accuracy before the particle passes through. 
This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h/Δp, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.[58] In another thought experiment Lawrence Marq Goldberg theorized that one could, for example, determine the position of a particle and then travel back in time to a point before the first reading to measure the velocity, then time travel back to a point before the second (earlier) reading was taken to deliver the resulting measurements before the particle was disturbed so that the measurements did not need to be taken. This, of course, would result in a temporal paradox. But it does support his contention that "the problems inherent to the uncertainty principle lay in the measuring, not in the 'uncertainty' of physics."[citation needed] Einstein's box Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to Planck's constant."[59] Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box."[60] "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."[59] Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the earth's surface will result in an uncertainty in the rate of the clock,"[61] because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."[62] EPR paradox for entangled particles Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen (see EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.[63] But Einstein came to much more far-reaching conclusions from the same thought experiment.
He believed the "natural basic assumption" that a complete description of reality, would have to predict the results of experiments from "locally changing deterministic quantities", and therefore, would have to include more information than the maximum possible allowed by the uncertainty principle. In 1964, John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. Ironically this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidation of a theory by falsification-experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper against the Heisenberg inequality itself, see below. While it is possible to assume that quantum mechanical predictions are due to nonlocal, hidden variables, and in fact David Bohm invented such a formulation, this resolution is not satisfactory to the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and it can be potentially intractable. If the hidden variables are not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption—that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer would encounter fundamental obstacles when attempting to factor numbers of approximately 10,000 digits or more; a potentially achievable task in quantum mechanics.[64] Popper's criticism {{#invoke:main|main}} Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.[65] He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".[65][66] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables. In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften,[67] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing: [Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. 
But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements.[original emphasis][68] Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Weizsäcker, Heisenberg, and Einstein; this experiment may have influenced the formulation of the EPR experiment.[65][69] Many-worlds uncertainty The many-worlds interpretation originally outlined by Hugh Everett III in 1957 is partly meant to reconcile the differences between the Einstein and Bohr's views by replacing Bohr's wave function collapse with an ensemble of deterministic and independent universes whose distribution is governed by wave functions and the Schrödinger equation. Thus, uncertainty in the many-worlds interpretation follows from each observer within any universe having no knowledge of what goes on in the other universes. Free will Some scientists including Arthur Compton[70] and Martin Heisenberg[71] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. The standard view, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.[72] See also 1. 1.0 1.1 1.2 {{#invoke:citation/CS1|citation |CitationClass=citation }}. Annotated pre-publication proof sheet of Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, March 23, 1927. 2. 2.0 2.1 2.2 {{#invoke:citation/CS1|citation |CitationClass=citation }} 5. 5.0 5.1 {{#invoke:citation/CS1|citation |CitationClass=citation }} 6. Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20 7. Template:Cite doi 8. Template:YouTube 9. Quantum Mechanics Non-Relativistic Theory, Third Edition: Volume 3. Landau, Lifshitz 10. {{#invoke:citation/CS1|citation |CitationClass=citation }} 14. 14.0 14.1 {{#invoke:citation/CS1|citation |CitationClass=citation }} 15. 15.0 15.1 {{#invoke:citation/CS1|citation |CitationClass=citation }} 17. {{#invoke:citation/CS1|citation |CitationClass=citation }} 19. 19.0 19.1 {{#invoke:citation/CS1|citation |CitationClass=citation }} 21. 21.0 21.1 {{#invoke:citation/CS1|citation |CitationClass=citation }} 23. L. I. Mandelshtam, I. E. Tamm, The uncertainty relation between energy and time in nonrelativistic quantum mechanics, 1945 24. The broad linewidth of fast decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used detuned microwave cavities to slow down the decay-rate, to get sharper peaks. {{#invoke:citation/CS1|citation |CitationClass=citation }} 25. {{#invoke:citation/CS1|citation |CitationClass=citation }} 26. {{#invoke:citation/CS1|citation |CitationClass=citation }} 27. Template:Cite web 28. Template:Cite doi 29. {{#invoke:citation/CS1|citation |CitationClass=citation }} 30. {{#invoke:citation/CS1|citation |CitationClass=citation }} 31. {{#invoke:citation/CS1|citation |CitationClass=citation }} 32. {{#invoke:citation/CS1|citation |CitationClass=citation }} 33. {{#invoke:citation/CS1|citation |CitationClass=citation }} 38. {{#invoke:citation/CS1|citation |CitationClass=citation }} 39. {{#invoke:citation/CS1|citation |CitationClass=citation }} 45. 
{{#invoke:citation/CS1|citation |CitationClass=citation }} 46. {{#invoke:citation/CS1|citation |CitationClass=citation }} 47. {{#invoke:citation/CS1|citation |CitationClass=citation }} 48. {{#invoke:citation/CS1|citation |CitationClass=citation }} 49. {{#invoke:citation/CS1|citation |CitationClass=citation }} 50. {{#invoke:citation/CS1|citation |CitationClass=citation }} 51. American Physical Society online exhibit on the Uncertainty Principle 52. {{#invoke:citation/CS1|citation |CitationClass=citation }} 53. Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30. 54. 54.0 54.1 54.2 {{#invoke:citation/CS1|citation |CitationClass=citation }} English translation The Physical Principles of Quantum Theory. Chicago: University of Chicago Press, 1930. 55. {{#invoke:citation/CS1|citation |CitationClass=citation }} 56. {{#invoke:citation/CS1|citation |CitationClass=citation }} 57. R&D Magazine & University of Toronto, September 10, 2012 Scientists cast doubt on the uncertainty principle retrieved Sept 10, 2012 58. Feynman lectures on Physics, vol 3, 2–2 59. 59.0 59.1 Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, p.260. 60. Kumar, M., Quantum: Einstein, Bohr and the Great Debate About the Nature of Reality, Icon, 2009, p. 282. 61. Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, p. 260–261. 63. {{#invoke:citation/CS1|citation |CitationClass=citation }} 64. Gerardus 't Hooft has at times advocated this point of view. 65. 65.0 65.1 65.2 {{#invoke:citation/CS1|citation |CitationClass=citation }} 66. {{#invoke:citation/CS1|citation |CitationClass=citation }} 67. {{#invoke:citation/CS1|citation |CitationClass=citation }} 68. Popper, K. Quantum theory and the schism in Physics, Unwin Hyman Ltd, 1982, pp. 53–54. 69. {{#invoke:citation/CS1|citation |CitationClass=citation }} 70. Template:Cite doi 71. Template:Cite doi 72. Template:Cite doi External links • {{#invoke:citation/CS1|citation |CitationClass=citation }}
Open Access What is a quantum simulator? EPJ Quantum Technology20141:10 Received: 12 May 2014 Accepted: 1 July 2014 Published: 23 July 2014 Quantum simulators are devices that actively use quantum effects to answer questions about model systems and, through them, real systems. In this review we expand on this definition by answering several fundamental questions about the nature and use of quantum simulators. Our answers address two important areas. First, the difference between an operation termed simulation and another termed computation. This distinction is related to the purpose of an operation, as well as our confidence in and expectation of its accuracy. Second, the threshold between quantum and classical simulations. Throughout, we provide a perspective on the achievements and directions of the field of quantum simulation. PACS Codes: 03.65.-w, 03.67.Ac, 03.67.Lx. quantum simulation computation definition perspective review 1 Introduction Simulating models of the physical world is instrumental in advancing scientific knowledge and developing technologies. Accordingly, the task has long been at the heart of science. For example, orreries have been used for millennia to simulate models of the motions of celestial objects [1]. More recently, differential analysers or mechanical integrators were developed to solve differential equations modelling e.g. heat flow and transmission lines [2, 3]. Unfortunately, simulation is not always easy. There are numerous important questions to which simulations would provide answers but which remain beyond current technological capabilities. These span a multitude of scientific research areas, from high-energy [4, 5], nuclear, atomic [6] and condensed matter physics [7, 8] to thermal rate constants [9] and molecular energies [10, 11] in chemistry [12, 13]. An exciting possibility is that the first simulation devices capable of answering some of these questions may be quantum, not classical, with this distinction to be clarified below. The types of quantum hardware proposed to perform such simulations are as hugely varying as the problems they aim to solve: trapped ions [1419], cold atoms in optical lattices [2022], liquid and solid-state NMR [2327], photons [2833], quantum dots [3436], superconducting circuits [6, 3742], and NV centres [43, 44]. At the time of writing, astonishing levels of control in proof-of-principle experiments (cf. the above references and citations within) suggest that quantum simulation is transitioning from a theoretical dream into a credible possibility. Here we complement recent reviews of quantum simulation [4550] by providing our answers to several fundamental but non-trivial and often contentious questions about quantum simulators, highlighting whenever there is a difference of opinion within the community. In particular, we discuss how quantum simulations are defined, the role they play in science, and the importance that should be given to verifying their accuracy. 2 What are simulators? Both simulators and computers are physical devices that reveal information about a mathematical function. Whether we call a device a simulator or a computer depends not only on the device, but also on what is supposed about the mathematical function and the intended use of the information obtained. If the function is interpreted as part of a physical model then we are likely to call the device a simulator. However, this brief definition neglects the typical purpose and context of a simulation (see Figure 1). 
As will become clear below, a simulation is usually the first step in a two-step process, with the second being the comparison of the physical model with a real physical system (see Section 4 ‘How are simulators used?’). This then makes simulation part of the usual scientific method. This context is why some loosely state that simulation is the use of one physical device to tell us about another real physical system [51]. It also affects the level of trust that can be reasonably demanded of the simulation (see Section 7 ‘When are quantum simulators trustworthy?’). Figure 1 The role of a quantum simulator. A quantum simulator reveals information about an abstract mathematical function relating to a physical model. However, it is important to consider the typical purpose and context of such a simulation. By comparing its results to a real system of interest, a simulation is used to decide whether or not the model accurately represents that system. If the representation is thought to be accurate, the quantum simulator can then loosely be considered as a simulator for the system of interest. We represent this in the figure by a feedback loop from the quantum device back to the system of interest. If the accuracy with which a device simulates a model can be arbitrarily controlled and guaranteed then it is often elevated to the status of a computer, a name that reflects our trust in the device. A consequence of this guaranteed accuracy is that it allows assured interpretation of the results of the operation, the information obtained about a mathematical function, without reference to some real system. Thus, as well as to imply accuracy, the term computer is also more often used to describe calculations that relate to more abstract mathematical functions, unconnected to a physical system, and are used outside of the scientific method. It is interesting to apply our definition of a simulator to well-known situations in which the term is used. The majority of experimental devices advertised as quantum simulations are so-called analogue simulators [4550]. They are devices whose Hamiltonians can be engineered to approximate those of a subset of models put forward to describe a real system. This closely fits our definition of simulators as well as their usual purpose and context outlined above. Another different type of device is Lloyd’s digital quantum simulator [52]. This replicates universal unitary evolution by mapping it, via Trotter decompositions, to a circuit, which can then be made arbitrarily accurate by the use of error correction. Whilst going by the name simulator, it is effectively a universal quantum computer. From our arguments above, we would also describe this as a computer: error correction ensures the result applying to the modelled system can be interpreted without comparison to a real physical system, thus playing the role of a computation. Finally, the company D-wave has developed a device to find the ground state of the classical Ising model [53]. While this is a device that returns a property of a physical model, it is advertised as a computer. We would agree, since its primary use seems to be in solving optimisation problems embedded in the Ising ground state, rather than comparing this to a real physical system. 3 What are quantum simulators? To complete the definition of a quantum simulator we need to define what is meant by a quantum device. This problem is also faced by quantum biology [5456] and other quantum technologies. 
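Before turning to that question, a brief aside on the digital route mentioned above. Lloyd's simulator rests on Trotter decompositions, and the idea is visible in a toy example. The sketch below is our own construction (a two-qubit Hamiltonian split into non-commuting Ising and transverse-field pieces, with arbitrary parameters); it shows the decomposition error shrinking as the number of Trotter steps grows, which is what makes the accuracy controllable.

```python
import numpy as np

def evolve(H, t):
    """Exact unitary exp(-i H t) for a Hermitian matrix H (hbar = 1)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                       # Ising interaction piece
B = np.kron(X, I2) + np.kron(I2, X)     # transverse-field piece; [A, B] != 0
t = 1.0
exact = evolve(A + B, t)

for n in (1, 2, 4, 8, 16, 32):
    step = evolve(A, t / n) @ evolve(B, t / n)
    error = np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)
    print(f"{n:3d} Trotter steps: operator-norm error = {error:.2e}")
```

With that digital example set aside, we return to what it means for the simulating device itself to be quantum.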
It is complicated by the fact that, at some level, quantum mechanics describes the structure and dynamics of all physical objects. Quantumness may be structural and inert, e.g. merely responsible for the available single-particle modes. Or quantumness may be active, e.g. exploiting entanglement between modes, potentially achieving functionality more efficiently than a classical device (see Section 6 ‘Why do we need quantum simulators?’). To this end, we must distinguish between devices for which, during the operation of the simulator, the particular degrees of freedom doing the simulating do or do not behave classically. We choose here to define classical as when there is some single-particle basis in which the density operator describing the relevant degrees of freedom is, for the purposes of the simulation, diagonal at all times t. This is written

$$\hat{\rho}(t) = \sum_{\{N_{s,i}\}} p(\{N_{s,i}\}, t)\, \big|\{N_{s,i}\}, t\big\rangle \big\langle \{N_{s,i}\}, t\big| .$$

Here $|\{N_{s,i}\}, t\rangle$ is a Fock state in which $N_{s,i}$ particles of species s occupy mode i. The mode annihilation operator is

$$\hat{a}_{s,i}(t) = \int \mathrm{d}\mathbf{r}\, \hat{\Psi}_s(\mathbf{r})\, \chi^{*}_{s,i}(\mathbf{r}, t),$$

with $\chi_{s,i}(\mathbf{r}, t)$ the corresponding single-particle modefunction and $\hat{\Psi}_s(\mathbf{r})$ the field operator for species s. The diagonal elements $p(\{N_{s,i}\}, t)$ are the probabilities of the different occupations. This condition ensures there is always a single-particle basis in which dephasing would have no effect. This invariance under dephasing is a common way to define classicality [57]. The condition also disallows entanglement between different single-particle modes, as would be expected for a condition of classicality. It does allow the natural entanglement between identical particles in the same mode due to symmetrisation. Such entanglement can be mapped to entanglement between modes by operations that themselves do not contribute entanglement [58]. However, if such operations are never applied, it is reasonable to consider the device to be classical. In other words, we are less concerned with the potential of entanglement as a resource than how this resource is manifested during the operation of the device. Let us build confidence in our definition by using it to classify well-known devices as classical or quantum. Reassuringly, the operation of the room-temperature semi-conductor devices used to perform every-day computing is classical according to the definition. The relevant properties of inhomogeneous semi-conductors are captured by a model in which the degrees of freedom are valence (quasi) electrons that incoherently occupy single-particle states $\chi_i(\mathbf{r})$ of the Bloch type [59]. Next, consider two devices for preparing the ground state of a classical Ising model, classical annealing [60] and quantum annealing [61–65]. Classical annealing by coupling the Ising spins to a cooling environment is not quantum since at all times the thermal density matrix of the system is diagonal in the computational basis, a single-particle basis. However, preparing that same state by quantum annealing, adiabatically quenching a transverse field, is expected to be quantum. This is due to the fact that in the middle of the quench, which forms the main part of the simulation, the Ising spins will usually become entangled. Since these are particles in distinguishable modes, the device cannot behave classically at all times. Finally, consider a Bose-Einstein condensate [66, 67], that is many bosons in the same single-particle mode $\chi_0(t)$.
Alternatively, consider a Poissonian mixture of different occupation numbers or equivalently a coherent number superposition of unknown phase, both of which are well approximated by N 0 bosons occupying χ 0 ( t ) , for large mean occupation N 0 . In these cases, the single occupied modefunction evolves according to the Gross-Pitaevskii equation and we would label the system as classical. When classifying the use of condensed Bose gases as simulators of gravitational models [6872] the classical or quantum assignment depends on whether, for the purposes of the simulation, the system is possibly described by a single condensate modefunction without fluctuations above the condensate. An example that falls clearly onto the quantum side is provided by a simulator of the Gibbons-Hawking effect [73, 74], which is fundamentally reliant on quantum vacuum fluctuations. Our chosen boundary between quantum and classical is one of many possibilities. Indeed, defining the quantumness of the simulation entirely in terms of the device is not common. Many others [49, 50] take the quantum in quantum simulator to relate to the model being simulated as well as to the simulating device. In common with definitions of quantum computation, our assignment of the quantum in quantum simulator based only on the device avoids the assumption that only simulating quantum models is hard enough to potentially benefit from a quantum device. This is not so: finding the ground state of even a classical Ising model is NP-hard and thus thought to be inefficient on both a classical and quantum device [75, 76]. 4 How are simulators used? A common perception (that goes right back to the language used at the conception of quantum simulation [51]) is that the purpose of a simulator is purely to reveal information about another real system. We pick an idealised model describing a system of interest, and then simulate that model, taking the output to describe not only the model but the system of interest. As long as the idealised model is a ‘good’ description of the system of interest then it is inferred that the simulator is a ‘good’ simulator of the system. While this inference is correct, it misses an important purpose of a simulator. This other crucial purpose of a simulator is to reveal information about a model and compare this to the behaviour of the real system of interest. This then allows us to infer whether or not the model provides a ‘good’ description of the system in the first place and whether or not the results bear any relevance to the real world. For example, simulating the Fermi-Hubbard model would be hugely important if it turned out that this model captures the behaviour of some high- T c superconductors (as suggested by some [7780]), but it may be that the main conclusion of simulations will be to rule this out (as expected by others [8183]). Only when we have developed confidence in a model accurately representing a system can we use the simulator of the model to inform us about the system. 5 Why do we need simulators? Above we have stated that simulators are used to find properties of a model, assess whether the model is relevant to and accurately describes the real system of interest, and, if so, learn about that system. Are there other ways to learn about a system without simulation? Do we need simulators? There are, of course, many examples of scientists making progress without simulation. 
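The annealing distinction drawn above can be illustrated with a toy calculation of our own (two spins, an arbitrary bias field added so that the final Ising ground state is unique): following the instantaneous ground state of a transverse-field sweep, the spins are unentangled at the start and end of the quench but entangled in between, so a device tracking these states cannot be diagonal in any single-particle basis throughout.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of one spin of a two-spin pure state."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)          # trace out the second spin
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

H_driver = -(np.kron(X, I2) + np.kron(I2, X))                        # transverse field
H_ising = -np.kron(Z, Z) - 0.2 * (np.kron(Z, I2) + np.kron(I2, Z))   # biased Ising target

for s in np.linspace(0.0, 1.0, 6):
    H = (1.0 - s) * H_driver + s * H_ising
    w, V = np.linalg.eigh(H)
    ground = V[:, 0]
    print(f"s = {s:.1f}: instantaneous ground-state entanglement = "
          f"{entanglement_entropy(ground):.3f} bits")
```

Larger instances behave in the same way, which is why the quantum-annealing route is expected to be quantum in the sense defined above.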
Over a century ago, the phenomenon of superconductivity was discovered and later its properties analysed by experimental investigation largely unguided by analytical or numerical simulation [84]. Today, in cases where detailed simulation is not possible, we successfully design drugs largely by trial and error on a mass scale [85]. These two examples, however, also show why simulation is crucial. Computer-aided drug design [86, 87] exploits the simulation of molecular systems to drastically speed up and thus lower the cost of the design process. Similarly, if we wish to manufacture materials with enhanced superconducting properties, e.g. increase the critical temperature T c , then we might benefit from some understanding directing that manufacture, as would be provided by a model and a means of simulating it [88, 89]. Simulation can also be a convenience: in 2014 the USA bobsleigh team won Olympic bronze with a machine designed almost entirely virtually [90]. Simulation was used to optimise the aerodynamic performance without the need for a wind tunnel. 6 Why do we need quantum simulators? While the idea of simulations is centuries old [1, 2], the suggestion that a quantum device would make for a better mimic of some models than a classical device is commonly attributed to Feynman in 1982 [51]. He noted that calculating properties of an arbitrary quantum model on a classical device is a seemingly very inefficient thing to do (taking a time that scales exponentially with the number of particles in the model being simulated), but a quantum device might be able to do this efficiently (taking a time that scales at most polynomially with particle number [52]). This does not of course prohibit the simulation of many quantum models from being easy using classical devices and thus not in need of a quantum simulator. The classical numerical tools usually employed include exact calculations, mean-field [91] and dynamical mean-field theory [9294], tensor network theory [95102], density functional theory (DFT) [103107] or quantum Monte Carlo algorithms [108111], which all have their limitations. Exact calculations are only possible for small Hilbert spaces. Mean-field-based methods are only applicable when the correlations between the constituent parts of the system being modelled are weak. Tensor network methods are only applicable if there is a network structure to the Hilbert space and often fail in the presence of strong entanglement between contiguous bipartite subspaces [112], with this sensitivity to entanglement being much greater with two- or higher-dimensional models. For DFT, the functionals describing strong correlations are, in general, not believed to be efficient to find [113]. Quantum Monte Carlo struggles, for example, with Fermionic statistics or frustrated models, due to the sign problem [114, 115]. For the above reasons, quantum devices are expected to be crucial for large network (e.g. lattice) models, featuring Fermions or frustration and strong entanglement, or non-network based many-body models featuring states with strong correlations that are difficult to describe with DFT. Strong entanglement can arise, for example, near a phase transition, or after a non-equilibrium evolution [116]. It must be stated, however, that there is no guarantee that a classical device or algorithm will not sometime in the future be devised to efficiently study some subset of the above quantum models. 
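To make the limitation of exact classical calculations concrete, the following sketch (our own illustration, with arbitrary couplings) builds the dense Hamiltonian of a small transverse-field Ising chain and reports its size; the 2^L growth of the Hilbert space is what forces a move to the approximate methods listed above, or to quantum hardware, for larger problems.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, site, L):
    """Embed a single-spin operator at position `site` in an L-spin chain."""
    out = np.array([[1.0]])
    for j in range(L):
        out = np.kron(out, op if j == site else np.eye(2))
    return out

def tfim(L, J=1.0, g=1.0):
    """Dense open-boundary transverse-field Ising Hamiltonian H = -J sum ZZ - g sum X."""
    H = np.zeros((2 ** L, 2 ** L))
    for j in range(L - 1):
        H -= J * site_op(Z, j, L) @ site_op(Z, j + 1, L)
    for j in range(L):
        H -= g * site_op(X, j, L)
    return H

for L in (4, 6, 8, 10):
    H = tfim(L)
    e0 = np.linalg.eigvalsh(H)[0]
    print(f"L = {L:2d}: dimension {2**L:5d}, dense H occupies {H.nbytes/1e6:7.2f} MB, "
          f"E0/L = {e0/L:.4f}")
# Already at L = 20 the dense matrix would need close to 9 TB of memory,
# and every additional spin doubles the dimension again.
```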
In addition to the widely-accepted need for quantum devices for the quantum models discussed above, there are calls and proposals for quantum devices to simulate classical models [117, 118], for example, molecular dynamics [119] and lattice gas models [120, 121]. This also applies to any simulation that reduces to solving an eigenvalue equation [122] or a set of linear equations [123]. As with quantum models, many of these simulations, for example solving a set of linear equations, can be solved without much trouble on a classical device for small to medium simulations. The benefit of a quantum device is that the size of problems that can be tackled in a reasonable time grows significantly more quickly with the size of the simulating device than it does for a classical device, thus it is envisaged that quantum devices will one day be able to solve larger problems than their classical counterparts. It is clear from this last point that the scaling of classical and quantum simulators must be treated carefully, taking into account the sizes of problems that can be tackled by current or future devices. It is possible that the experimental difficulty of scaling up quantum simulation hardware might cause an overhead such that a quantum device does not surpass the accuracy obtained by a classical algorithm that in principle does not scale as well but runs on ever-improving hardware obeying Moore’s law. 7 When are quantum simulators trustworthy? So far we are yet to address perhaps the most difficult and important aspect of simulation, upon which its success rests. How can we asses whether the quantum simulator represents the model? How rigorous an assessment is needed? For this discussion we focus on analogue quantum simulators, because they are the most easily scaled quantum simulators and so are likely to be used in the near future to simulate large systems. They also most closely follow our definition of a simulator, as opposed to a computer (see Section 2 ‘What are simulators?’). The topic of falsifying bad quantum simulators has received some attention. In certain parameter regimes there may be efficiently calculable exact analytical results or it might be possible to perform a trusted classical simulation, against which the quantum simulator results may be compared [116]. Often there are bounds that some measurable quantities are known to obey, and this too can be tested [47]. Alternatively, it might be possible to check known relationships between two different simulations. For example, in an Ising model, flipping the direction of the magnetic field is equivalent to flipping the sign of the component of the spins along that field, thus giving two simulations whose results are expected to have a clear relationship. A natural extension of this strategy is to compare many quantum simulations realised by different devices, perhaps each with a slightly different source of error, trusting only the aspects of the results shared by all devices [124]. If any of the above tests fail beyond an acceptable accuracy, then we do not trust the simulation results. If a simulator passes all tests, then we may take this as support for the accuracy of that simulator. It would be incorrect, however, to say that such tests verify the accuracy of a simulator. A simulator could have significant errors yet pass these tests. 
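The field-flip check described above is easy to phrase concretely. The sketch below is our own (it uses exact thermal averages for a small classical Ising chain rather than any particular experimental platform): the magnetization computed with field +h must be exactly minus the magnetization computed with field −h, so two runs of a simulator related in this way can be played against each other.

```python
import numpy as np
from itertools import product

def magnetization(N, J, h, beta):
    """Exact thermal magnetization of an open classical Ising chain by enumeration."""
    weights, mags = [], []
    for config in product([-1, 1], repeat=N):
        s = np.array(config)
        energy = -J * np.sum(s[:-1] * s[1:]) - h * np.sum(s)
        weights.append(np.exp(-beta * energy))
        mags.append(np.mean(s))
    weights = np.array(weights)
    return float(np.sum(weights * np.array(mags)) / np.sum(weights))

N, J, beta, h = 8, 1.0, 1.0, 0.3          # arbitrary small instance
m_plus = magnetization(N, J, +h, beta)
m_minus = magnetization(N, J, -h, beta)
print(f"m(+h) = {m_plus:+.6f}, m(-h) = {m_minus:+.6f}, sum = {m_plus + m_minus:+.2e}")
# A trustworthy pair of simulations must give m(+h) + m(-h) = 0 up to statistical error;
# a systematic bias, e.g. a miscalibrated field offset, shows up as a nonzero sum.
```

Passing such a consistency check is necessary but, as the discussion goes on to stress, never sufficient.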
It might be that the simulator is accurate in the regimes in which we have accurate analytical or classical numerical results, but is more sensitive to errors in regimes that are difficult to treat with other methods, e.g. near phase transitions, perhaps for the same reason. In fact, Hauke et al. gave an example of exactly this phenomenon in the transverse Ising model [47]. The danger with comparing simulations, even realised by different devices [124], is that there may be similar sources of error, or errors in the two simulations may manifest in the results in the same way. Although this makes simulation difficult to assess, it does not invalidate it; it would be unreasonably harsh to demand verification of all simulators. The reason for this is that, as illustrated in Figure 1, simulators are usually the first step in a two-step process: first a device is devised to simulate a model, and second the model is employed to study a real system (see Section 4 ‘How are they used?’). It might be unreasonable to demand a more rigorous testing of the first part of this process than the second. In the second part, when we devise a model to reproduce the behaviour of a physical system, we only demand that the model be falsifiable [125]. We seek as many fail-able tests as possible of the model, and to the extent that it passes these tests, we retain the model. It is difficult for experiments to verify a particular use of the model, rather successful experiments merely declare the model ‘not yet false’. This is the scientific method. We should not, therefore, demand anything more or less when going in the other direction, devising a physical device to reproduce the behaviour of a model. All we can do is test our simulators as much as possible, and slowly build confidence in accordance with the passing of these tests. If the capability of performing such tests lags behind the development of the simulator, then so naturally must our confidence. It becomes clear that the purpose of the device is crucial to how it is assessed, explaining our highlighting the purpose of a simulator alongside its definition. If we were using a device to provide information about a model without any additional motivation, as with a computer, then it would be reasonable to search for a means of verification and guarantees of accuracy, as with a computer. Eventually, quantum technologies might develop to a stage where large simulations of this type are feasible, e.g. via Lloyd’s digital simulator [52], but it is likely to be in the more distant future. It must be noted, however, that many of the devices we use regularly for computation are unverifiable in the strictest sense. Not every transistor in the classical computers we use (for instance to simulate quantum systems) can be verified to be functioning as desired [126]. We instead develop an understanding of the sources of error, perform some tests to check for obvious errors, and use the devices with caution. The words ‘trust’ and ‘confidence’ in the preceding paragraphs are chosen deliberately. They indicate that, since for simulation we do not always have verifiability, we are not discussing objective properties of devices, but our understanding of them. This will change in time (see an example of this in Figure 2). Further, confidence depends on the eventual goal of our use of the simulator. 
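The regime-dependence of errors can be seen in a toy calculation of our own (an eight-spin transverse-field Ising chain with an arbitrary small error δ added to the transverse field; this is only loosely inspired by the transverse-Ising example attributed to Hauke et al. above, not a reproduction of it). The same δ barely moves a long-range correlation deep inside either phase, but shifts it markedly near the critical region.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, j, L):
    mats = [op if k == j else np.eye(2) for k in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def end_to_end_correlation(L, g):
    """Ground-state <Z_1 Z_L> of an open transverse-field Ising chain (J = 1)."""
    H = sum(-site_op(Z, j, L) @ site_op(Z, j + 1, L) for j in range(L - 1))
    H = H + sum(-g * site_op(X, j, L) for j in range(L))
    w, V = np.linalg.eigh(H)
    psi = V[:, 0]
    corr = site_op(Z, 0, L) @ site_op(Z, L - 1, L)
    return float(psi @ corr @ psi)

L, delta = 8, 0.02                        # chain length and a small, fixed Hamiltonian error
for g in (0.2, 1.0, 2.0):                 # ordered phase, near-critical, disordered phase
    c0 = end_to_end_correlation(L, g)
    c1 = end_to_end_correlation(L, g + delta)
    print(f"g = {g:.1f}: <Z_1 Z_L> = {c0:+.4f}, shift for g -> g + {delta} is {abs(c1 - c0):.5f}")
```

A small calibration error in the device therefore matters most exactly where classical cross-checks are hardest, which is the point of the next remark.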
Some properties of a system may be too sensitive to Hamiltonian parameters to be realistically captured by a simulator, while other properties may be statistically robust against parameter variations [127]. In this sense trustworthiness is not a clear-cut topic that is established upon the initial development of a simulator. Instead, it is the result of a complex, time-consuming process in the period that follows. It is the responsibility of critics not to be overly harsh and unfairly demanding of new simulators to provide immediate proof of their trustworthiness, but it is also the responsibility of proponents not to declare trustworthiness before their simulator has earned it. Figure 2 Establishing trust in a simulator. Consider the displacement of a spring due to the pressure of a gas (far left), or the time taken for a dropped ball to fall (middle left). Simple models can be proposed to describe either system. The former might be modelled as an ideal gas trapped in a box by a frictionless piston held in place by a perfect spring. The latter as a frictionless body moving with uniform acceleration. Calculating the quantity of interest within either system, displacement or time, respectively, reduces within the model to calculating a square root. We thus consider four methods to perform this simulation. Building an approximation to either model system; analogue simulation. Alternatively, using an abacus (middle right) or a calculator (far right); digital simulation. With today’s knowledge, in the parlance used in this article, we would elevate the status of the latter two simulations to computations, because of the guaranteed accuracy with which each calculation reproduces the model. Meanwhile, the former two simulatiors are not so easily verified. Importantly, they are falsifiable, e.g. by comparing one to the other. This is similar to the state of analogue quantum simulators currently used to perform large-scale quantum simulations. However, the confidence in each simulator is a matter of perspective. It is not objective. Many centuries ago, we would only have trusted the abacus to perform such a calculation, since its principles were well understood and square-root algorithms with assured convergence were known even to the Babylonians. Once Gallileo began the development of mechanics, we might have considered the method of dropping a ball. Confidence in the simulation could have been established by testing the analogue simulator against the abacus. Nearly two centuries ago, when we first began to understand equilibrium thermodynamics, we might have preferred the gas-piston-spring method. Nowadays, we would all choose the calculator or a solid-state equivalent. This confidence is partly a result of testing the calculator against some known results, but also largely because, after the development of quantum mechanics, we feel we understand the components of solid-state systems to such a high level that we are willing to extrapolate this confidence to unknown territory. In a century, our confidence could well be placed most strongly in another system. 8 Where next for quantum simulation? The majority of the current effort on quantum simulation is, firstly, in matching models of interest to a suitable quantum device with which to perform a simulation [49, 50]. Secondly, experimentalists demonstrate a high level of control and flexibility with a simulator, performing some of the simple fail-able tests mentioned above [18, 22, 33]. 
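Figure 2's square-root example is worth making concrete. The Babylonian (Heron) iteration alluded to there fits in a few lines (this is our own illustration), and its guaranteed convergence is exactly why the abacus or calculator route is classed as computation rather than simulation.

```python
def babylonian_sqrt(a, tolerance=1e-12):
    """Heron's method: repeatedly average a guess x with a/x.

    For any positive a and positive starting guess the iteration converges
    to sqrt(a), which is why the 'digital' routes in Figure 2 come with a
    guarantee that the analogue routes lack.
    """
    x = a if a > 1 else 1.0
    while abs(x * x - a) > tolerance * a:
        x = 0.5 * (x + a / x)
    return x

for a in (2.0, 10.0, 12345.678):
    print(f"sqrt({a}) ~= {babylonian_sqrt(a):.12f}")
```

We now return to the state of the experimental programme described above.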
This is very much along the lines of the five goals set out by Cirac and Zoller in 2012 [46], and great successes have led to claims that we are now able to perform simulations on a quantum device that we are unable to do on a classical device. In the future, the main direction of inquiry will continue to be along these lines. However, it is the very fact that the simulation capabilities of quantum devices are beginning to surpass those of classical devices that should prompt a more forceful investigation into the best approach to establishing confidence in quantum simulators. Hauke et al. proposed a set of requirements for a quantum simulator, an alternative to Cirac and Zoller’s, that focuses on establishing the reliability and efficiency of a simulator, and the connection between these two properties [47]. As we move to classically unsimulable system sizes and regimes where there is no clear expected behaviour, trustworthiness and falsifiability should no longer be an afterthought. In fact, they should be primary objectives of experimental and theoretical work, since quantum simulators cannot truly be useful until some level of trust is established. Can we predict in advance where the results of quantum simulators are more sensitive to errors? How does this overlap with the regimes of classical simulability? Are there even some results that will be exponentially sensitive to the Hamiltonian parameters and not expected to ever be simulable in a strict sense? These are difficult but important questions to answer, and the path towards answering them will be exciting and thought provoking. The authors thank the National Research Foundation and the Ministry of Education of Singapore for support. Authors’ Affiliations Centre for Quantum Technologies, National University of Singapore Clarendon Laboratory, University of Oxford Keble College, University of Oxford Institute for Scientific Interchange 1. Brewster D: Planetary machines. In The Edinburgh Encyclopedia Vol 16. William Blackwood and Proprietors, Edinburgh; 1830:624.Google Scholar 2. Thomson J: An integrating machine having a new kinematic principle. Proc. R. Soc. Lond. 1876, 24: 164–170.Google Scholar 3. Cairns WJ, Crank J, Lloyd EC: Some improvements in the construction of a small scale differential analyser and a review of recent applications. Technical report 27/44, Armament Research Department Theoretical Research; 1944. UK National Archives reference DEFE 15/751 C20779.Google Scholar 4. Barceló C, Liberati S, Visser M: Analogue gravity. Living Rev. Relativ. 2005., 8: Article ID 214 Article ID 214Google Scholar 5. Jordan SP, Lee KSM, Preskill J: Quantum algorithms for quantum field theories. Science 2012, 336: 1130–1133. 10.1126/science.1217069ADSView ArticleGoogle Scholar 6. You JQ, Nori F: Atomic physics and quantum optics using superconducting circuits. Nature 2011, 474: 589–597. 10.1038/nature10122ADSView ArticleGoogle Scholar 7. Lewenstein M, Sanpera A, Ahufinger V, Damski B, Sen A, Sen U: Ultracold atomic gases in optical lattices: mimicking condensed matter physics and beyond. Adv. Phys. 2007, 56: 243–379. 10.1080/00018730701223200ADSView ArticleGoogle Scholar 8. Lewenstein M, Sanpera A, Ahufinger V: Ultracold Atoms in Optical Lattices: Simulating Quantum Many-Body Systems. Oxford University Press, Oxford; 2012.View ArticleGoogle Scholar 9. Lidar DA, Wang H: Calculating the thermal rate constant with exponential speedup on a quantum computer. Phys. Rev. E 1999, 59: 2429–2438. 
Definition of Orbital Approximation

The orbital approximation is a method of visualizing electron orbitals for chemical species that have two or more electrons. It does this by modeling a multi-electron atom as a single-electron atom.

Multi-Electron Atom: the electron of interest feels the individual electric fields of the other electrons and the nucleus.

Modeled Single-Electron Atom: the electron of interest feels the electric field of the effective nuclear charge, i.e. the true nuclear charge minus the shielding effect of the other electrons.

Hydrogen and hydrogenic ions can have their exact wave functions Ψ found using the Schrödinger equation. From Ψ and Ψ² we obtain the information we need to fully characterize and visualize electron orbitals in their hydrogenic forms: s, p, d, f, etc. An exact solution to the Schrödinger equation for situations involving two or more electrons is not achievable: all of the electrons are governed by a single multi-electron wave function that cannot be written as an analytic mathematical function; the mathematics involved is simply too hard. Computers can provide a numerical solution for a multi-electron wave function using principles such as the variation theorem. Although such solutions are valuable in terms of characterizing electron behavior, they do not provide information suitable for visualizing electron orbitals.

We therefore write the true multi-electron wave function Ψ as the product of the single-electron wave functions for all the electrons in the system:

Ψ = ψ1(r1) ψ2(r2) ψ3(r3) ... ψn(rn)

For example, the wave function of a lithium atom (three electrons) would be written:

Ψ = ψ1(r1) ψ2(r2) ψ3(r3)

We obtain each of the single-electron wave functions by considering each electron in turn and basing our calculations on how that electron interacts with the effective nuclear charge: the nucleus's electric field is lowered at the electron of interest by the shielding effect of electrons closer to the nucleus.

Effective Nuclear Charge

The electrons are regarded as having a spherical distribution around the central nucleus. The math is simplified by assuming that each electron feels the attraction of a positive point charge (the nucleus) and repulsion from the average charge distribution of all the other electrons, i.e. individual electron–electron interactions need not be calculated. The average repulsive electric field from all the other electrons on the electron of interest is modeled as a point negative charge at the nucleus. This reduces the attraction the electron of interest feels towards the positive nuclear charge to a lower value; this lower value is described as the effective nuclear charge or shielded nuclear charge:

Zeff = Z - S

where Zeff is the effective number of protons, Z is the number of protons, and S is the number of electrons between the electron of interest and the nucleus. Using this approach the atomic orbital problem is simplified sufficiently to allow an exact wave function to be determined and orbitals visualized.

Effective Nuclear Charges for Selected Atoms

Atom   Orbitals         Z    Zeff
H      1s               1    1
He     1s               2    1.69
Li     1s, 2s           3    2.69, 1.28
Be     1s, 2s           4    3.68, 1.91
B      1s, 2s, 2p       5    4.68, 2.58, 2.42
F      1s, 2s, 2p       9    8.65, 5.13, 5.10
Na     1s, 2s, 2p, 3s   11   10.63, 6.57, 6.80, 2.51

Table data from E. Clementi and D. L. Raimondi, The Journal of Chemical Physics 38, 2686 (1963).
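As a rough illustration of the Zeff = Z - S bookkeeping above, the snippet below (an illustrative sketch added here, not part of the original entry) compares the crude estimate, in which S is simply taken as the number of inner-shell electrons, with the Clementi-Raimondi values quoted in the table:

```python
# Crude screening estimate Zeff = Z - S, with S taken as the number of electrons
# in shells below the electron of interest (an assumption made for illustration only).
def crude_zeff(Z, inner_electrons):
    return Z - inner_electrons

# Clementi-Raimondi values quoted in the table above, for comparison.
clementi = {"Li 2s": 1.28, "Na 3s": 2.51}

print(crude_zeff(3, 2), "vs", clementi["Li 2s"])    # 1 vs 1.28 for the lithium 2s electron
print(crude_zeff(11, 10), "vs", clementi["Na 3s"])  # 1 vs 2.51 for the sodium 3s electron
```

The mismatch reflects the fact that inner electrons do not screen the nucleus perfectly, which is why fitted values such as those in the table are used in practice.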
In response to: “Super-Saturated Chemistry” (Vol. 2, No. 4).

To the editors:

As a chemist, and somebody who has tried to promote the philosophy of chemistry, I was delighted to see Marc Henry’s article.1 As numerous authors have emphasized, chemistry, the central science, seems to have been largely neglected by philosophers of science until about the mid 1990s.2 But even in the twenty or so subsequent years the field remains rather small and little appreciated. Physics is rightly regarded as the most fundamental of the natural sciences, and so receives the most attention from philosophers. About fifty years ago a philosophy of biology began to take shape and it has now become a well-established sub-discipline. This feat was not so difficult to achieve since life itself is clearly not reducible to physics. On the other hand, the question of the extent to which chemistry is reducible to physics is more ambiguous. At first glance, it would seem that chemistry does reduce to physics. One only needs to think of the periodic table, that paradigm of chemistry, and the idea that its very existence can be explained by appealing to electron shells and orbitals in the atoms of the various elements which it houses. The fact that elements within the same vertical column such as lithium, sodium, and potassium, for example, behave in a similar fashion is explained by recognizing that the atoms of these elements all have one outer-shell electron. In fact, many classic texts in the philosophy of science refer to this example as the archetypal case of a successful reduction of one science (chemistry) to another (physics).3

The Periodic Table and Electronic Configurations

Henry begins his article with a detailed analysis of the reduction claim, and this is the part of his article that I will concentrate upon because of my own interests and because of my own limitations when it comes to the more exotic physics and mathematics that he later turns to. As I think Henry correctly points out, chemistry remains a singular science, which has not been reduced to physics when matters are examined more closely. As he writes, the possible period lengths in the periodic table, namely 2, 8, 18, or 32, can be predicted in a completely rigorous manner by solving the time-independent Schrödinger equation for the hydrogen atom and augmenting the three quantum numbers that are thereby obtained with yet a fourth quantum number.4 This much is already a highly impressive feat on the part of theoretical physics and it is surely what fuels the claims for a complete reduction that one generally encounters. However, a deeper problem, also mentioned by Henry, is that the closing of periods, as opposed to the closing of sequential shells, occurs after the following sequences: 2, 8, 8, 18, 18, 32, and 32. This repetition of each period length, with the exception of the very first period, is usually summarized by appealing to Madelung’s rule of n + l, whereby electron occupation proceeds according to increasing numerical values of this quantity, representing the sum of the first and second quantum numbers for any given atomic orbital.5

But I wish to take issue with Henry’s next claim, although it is a commonly held view and one that I have also regrettably reinforced in previous articles, several of which the author is generous enough to cite.6 In quoting from Löwdin, Henry says that the Madelung rule cannot be derived from quantum mechanics. For many years, I think it is fair to say, I was one of the main torchbearers for Löwdin’s statement.
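Since the n + l prescription above is easy to state but often mis-remembered, here is a minimal sketch of it in Python (an illustration added here, not part of Scerri's letter): orbitals are ordered by increasing n + l, with ties broken in favor of smaller n.

```python
# Madelung (n + l) ordering: sort orbitals by n + l, breaking ties with smaller n first.
def madelung_order(max_n=7):
    labels = "spdfghik"                      # l = 0, 1, 2, 3, ...
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{labels[l]}" for n, l in orbitals]

print(madelung_order())
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', '6s', '4f', '5d', ...]
```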
More recently I have somewhat changed my mind, as I will now try to explain. Briefly put, whether or not the Madelung rule can be derived from quantum mechanics may be irrelevant to the fundamental question of whether the periodic table, and consequently an important part of chemistry, can be reduced to physics. When one examines the electronic configurations of atoms more closely, it emerges that the build-up of electronic orbitals does not always follow the Madelung rule. Contrary to 99% of textbook accounts, the configuration of the atom of element 21, or scandium, is [Ar]3d¹4s² and not [Ar]4s²3d¹ as commonly stated.7 This innocent-looking inversion in the final two orbitals harbors a great deal of information, which I need to explain briefly.

The reason why the erroneous version continues to be propagated by textbook authors and chemistry instructors is that it seems to make a good deal of common sense. In a recent article, addressed mainly to chemical educators, I have called this state of affairs the “sloppy aufbau,” to distinguish it from the more careful aufbau principle.8 The sloppy version assumes that in moving from the atom of element number 20, or calcium, to element 21, or scandium, the twenty-first electron is added to the configuration that already exists in calcium. Since calcium possesses a configuration of [Ar]4s² it is very reasonable to suppose that the next element should show a configuration of [Ar]4s²3d¹. However, experimental evidence clearly points against such a simplistic view. First, spectroscopic evidence indicates the preferential occupation of a 3d orbital rather than 4s in scandium. The reason is that even though 4s possesses a lower energy than 3d for the atoms of potassium (19) and calcium (20), the opposite is the case for scandium and for all nine subsequent transition elements. Moreover, when a single electron is removed from a scandium atom, it is a 4s electron that is preferentially ionized and not one from a 3d orbital.9 This fact cannot be rationalized according to the popular notion that the configuration of scandium should be [Ar]4s²3d¹.10

But let me return to the question of the Madelung rule. According to what I have been saying, the building-up of atoms with electrons does not in fact follow this time-honored rule, and so whether or not the rule can be derived from quantum mechanics is neither here nor there when it comes to the question of reduction of the periodic table.11 Nevertheless, there is no need to jettison the Madelung rule altogether, since it does provide the correct overall configuration in the majority of atoms. What I have been focusing on is that it does not provide the order of occupation of atomic orbitals within the independent electron approximation.

Henry piles on the anti-reductionism further when he continues that some elements belonging to the same group, such as nickel, palladium and platinum, show different outer-shell configurations. Now admittedly this fact points against the simple high-school account that holds that elements in the same group share the same outer-shell configuration. However, it fails to address the point at issue, namely the question of whether the periodic table can be reduced to quantum mechanics.
If quantum mechanics were capable of predicting the experimentally observed configurations for all three of these atoms, then reduction would have been achieved for all intents and purposes.12

Anomalous Configurations

This talk of the configurations of nickel, palladium and platinum raises another interesting and related question. There are about twenty elements in the periodic table whose atoms do not have the electronic configuration that one would expect. These elements are anomalous in the sense of having one electron instead of two in their outermost shells.13 That is to say, the outermost shell has configuration s¹ rather than the more usual s². This issue is well known in chemical education, and again there exist many ad hoc attempts to try to explain the facts, the most common of which is an appeal to the presence of a half-filled subshell in the case of atoms such as chromium and molybdenum. The only problem with this ploy is that the possession of a half-filled subshell is neither necessary nor sufficient for an atom to display an anomalous configuration of the type that I am discussing here.14

The only rigorous explanation that I am aware of that does not pull the proverbial wool over the reader’s eyes lies in an article that was published by the theoretical chemist Eugen Schwarz and his co-authors.15 What Schwarz has suggested is that one should not simply consider the lowest energy state of any particular atom but that one should take an average of the various spectroscopic contributions arising from any particular configuration. The traditional approach to determining the configuration of an atom from observed data consists of examining the spectra of gas phase atoms of any particular element and then simply looking for the spectroscopic term of lowest energy. One then tries to identify the electronic configuration which gives rise to this spectroscopic term and one takes this configuration to represent the ground state of the atom in question. However, in more accurate work one seeks the average for each configuration, taken over the energies of all the spectroscopic terms arising from that particular configuration, with the aim of identifying the configuration with the lowest average energy. This is then regarded as the ground state configuration of the atom in question. Moreover, the manner in which this averaging is carried out requires making use of the J, or overall quantum number, that is the result of coupling the total orbital angular momentum of the atom L with its total spin angular momentum, or S.

In more physical terms, this approach also represents a move away from considering gas phase atoms to considering those in condensed phases and atoms that are in chemical combination. Broadly speaking, these features mean that one is dealing with more physically and chemically relevant species than isolated gas phase atoms, thus providing a further motivation for taking these alternative configurations more seriously. What emerges from this approach is that, as the atomic number increases, the energy of the s² configuration shows a steady increase relative to that of the s¹ configuration. Whereas the s² configuration is considerably more stable for elements such as scandium, the energies of these configurations cross over each other once the atom of iron has been reached. In the case of the nickel atom, the s¹ configuration is found to be approximately 1 eV lower than s².
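To make the averaging procedure just described concrete, here is a minimal sketch (my own illustration with made-up numbers, not data from Schwarz's paper): the energy assigned to a configuration is the mean of its term energies weighted by their (2J + 1) degeneracies, and the configuration with the lowest average is taken as the ground state.

```python
# Configuration-average energy: weight each spectroscopic term arising from a
# configuration by its (2J + 1) degeneracy, then take the mean.
def configuration_average(terms):
    """terms: list of (J, energy) pairs for the terms of one configuration."""
    weights = [2 * J + 1 for J, _ in terms]
    return sum(w * E for w, (_, E) in zip(weights, terms)) / sum(weights)

# Hypothetical (J, energy in eV) pairs for two competing configurations of one atom.
s1_terms = [(9/2, 0.00), (7/2, 0.15)]
s2_terms = [(4, 0.03), (3, 0.20), (2, 0.35)]

print(configuration_average(s1_terms), configuration_average(s2_terms))
# Whichever average is lower is taken as the ground-state configuration.
```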
Results of this kind imply that the ground state configurations for several atoms are different from what is generally stated in the traditional textbook approach. The alternative method for calculating electronic configurations of atoms provides a perfectly natural explanation for the so-called anomalous configurations. It could be argued that there are in fact no anomalies, since one is merely observing the result of the variation of two energies, those of the s¹ and s² configurations, which happen to cross at a certain point along each transition series. Moreover, and perhaps more pertinent to the question of reduction to quantum mechanics, the energies of these configurations can be computed from first principles via the Hartree-Fock method, and they too show very similar trends, including the crossing of energies at more or less the same point along the transition series.

Since condensed phases and bonded atoms are overwhelmingly more relevant to most of chemistry and physics, this represents a triumph rather than a failure of reduction. Experts in atomic physics and related areas in chemistry and physics are well aware of the limitations of focusing on the configurations of gas phase atoms. The electronic configurations that are generally believed to be anomalous because they feature an incomplete outer s-orbital are seen in a completely new light when one turns to considering the average configuration of atoms taken over all spectroscopic terms that emerge from any particular configuration. The reduction of the periodic table to quantum mechanics is far more successful than some chemists and contemporary philosophers of chemistry have been willing to admit. It is regrettable that a chemist like Henry should have been misled by such philosophical accounts. Could it be that the philosophy of chemistry is having more of an impact than I implied at the outset? If so, we certainly do not need this kind of negative impact.

The Born-Oppenheimer Approximation

Another topic that has exercised a minority of chemists as well as the philosophy of chemistry community has been the question of the Born-Oppenheimer approximation. Because of the difficulties of solving the many-particle Schrödinger equation, it helps to make whatever approximations one can safely get away with. It so happens that in molecules the relative motion of the nuclei is far, far smaller than that of the electrons. Consequently, it is customary to apply the clamped-nuclei approximation and to omit any terms having to do with the motion of nuclei from the equations of motion. A few chemists that Henry cites have made careers of examining the consequences of making such an assumption. They have argued that molecular structure is therefore something of an epiphenomenon that is incompatible with a full quantum mechanical treatment. The structure of the molecule, largely defined by the relative positions of the nuclei, appears to be imposed by a mathematical approximation, they argue, and does not sit well with quantum mechanics, the putative underlying theory for all of chemistry. Some declare that this signals another failure of reductionism, since molecular structure is so obviously real and undeniable, especially to the chemist. Henry buys into all of this and is once again ready to declare the failure of attempts to reduce chemistry. But there is a possible escape, which actually involves a more recent development in quantum mechanics, namely the subject of quantum decoherence.
Once upon a time it was thought that the collapse of the wavefunction occurred instantaneously. Today we understand that this is not the case. Similarly, it was once thought that the collapse of the wavefunction only occurred as a result of an observation, be it human or through mechanical devices. Today it is realized that interaction with the environment of a system can equally bring about the collapse of the wavefunction. Typical decoherence times for molecules like ethanol and dimethyl ether are on the order of picoseconds, or less. Consequently, even if we grant that the pure quantum mechanical formalism may be devoid of structural information in the absence of Born-Oppenheimer, a newly formed molecule will only lack structure, if we can put it in such terms, for something like a picosecond. This suggests that, for all intents and purposes, molecular structure, so meaningful to chemists, is indeed a real phenomenon, which does not conflict with quantum mechanics.16

As in the case of the periodic table and electronic configurations, I do not believe that appealing to the Born-Oppenheimer approximation and the putative mismatch between quantum mechanics and the existence of molecular structure gives any solace to the anti-reductionist. If chemistry is not dissolved by physics, there are surely better ways of upholding this claim than by focusing on what lurks behind the Born-Oppenheimer style clamping of nuclei.

Having said all this, I hasten to mention that I am not turning my back on my own anti-reductionist position concerning the status of chemistry. It is just that I am increasingly appreciative of what physics has achieved. I may have been a little too hasty in the way that I attacked the reduction claims at earlier stages of my career. These days, if I am asked point-blank if chemistry has been reduced to physics, my response tends to be either “It depends what you mean by reduction,” or, on other occasions, “Yes, almost but not quite.”17

One of the failings of philosophy of science, which unfortunately does a great disservice to the discipline, is that practitioners seldom have technical skill in the science that they claim to be philosophizing about. In this respect I am rather lucky to be a philosopher of chemistry who is hiding out in a chemistry department at a first-rate research university, where my colleagues do their best to keep me on the straight and narrow.

It would have been interesting to see some kind of a conclusion to Henry’s article and a resumption of the question of reduction, rather than a relentless pedagogical romp through a dazzling array of advanced topics in physics and mathematics followed by a rather abrupt ending. I also worry a little that Henry’s focus on these more mathematical topics seems to suggest that he has not entirely given up on the idea of reducing chemistry to physics. This is a little puzzling given the earlier parts of his article, in which he confidently proclaims the impossibility of any such reduction.

I must also mention the distinction between reduction in practice and reduction in principle. All that we can really say is whether reduction has been achieved in practice. In principle we have to admit that anything is possible. It would therefore be claiming far too much to say that chemistry will never be reduced to physics. Nevertheless, this is a small criticism that does not detract too much from the immense value of Henry’s stimulating and super-saturated article.
I have learned a great many new facts from his interdisciplinary tour de force and will enjoy pursuing many of the references that he provides to the emerging areas that we humble chemists are seldom exposed to. I would have preferred it if the article had been saturated with more inherently chemical arguments aimed at warding off the reductive claims. Instead Henry has super-saturated his article with all manner of mathematics and physics, thus inadvertently seeming to arm the reductionists further in their quest to dissolve chemistry. But in saying this I am once again betraying my own philosophical prejudice, and, I suspect, that of many chemists, in support of the non-reducibility of our field.

Perhaps the best way to think of reduction might be as a direction rather than as a goal. Attempts to explain chemical and physical phenomena generally result in deeper understanding. The goal of complete reduction may never be reached, but scientific knowledge continues to advance precisely because we adopt the general direction and program of reductionism.

Eric Scerri

Marc Henry replies:

First, I would like to thank Eric Scerri for his interesting comments that give me the opportunity to dot the i’s and cross the t’s on some points that need further comments and/or development. I would simply add that, nowadays, students do not refer to textbooks, but rather learn science from the internet. As can be checked, the electronic configuration of scandium is correctly stated by the Royal Society of Chemistry website on the periodic table.18

Scerri remarks that “Henry fails to mention that there have been various claims to the effect that the Madelung rule has been derived from first principles,” and then gives in endnote eleven a reference to a paper published in 2001 by Valentin Ostrovsky. In fact, this paper is cited in my essay in endnote thirteen. So, the problem has been addressed, but not in great detail, in order to keep the essay at a reasonable length. I would add that, in France, the problem of Madelung’s rule is further complicated by the fact that this rule is not attributed to Madelung himself, but to a Russian chemist named Klechkowski. So, from a historical viewpoint, one may wonder what the relative contributions of Madelung and Klechkowski were to this problem. In fact, as correctly pointed out by Scerri, it should be obvious that Madelung’s rule or Klechkowski’s rules are empirical findings, and it would be very surprising to find a quantum-mechanical derivation from first principles of such a rule. My main message to the wide audience of this review was that deriving the correct electronic configuration from first principles for all the elements of the periodic table was, beyond any doubt, a mathematical experiment rather than a mathematical deduction. I would like to stress that Scerri was a true pioneer in delivering such a message for chemists, even if he seems now to regret having written so much on this crucial subject. It was brilliant work and will continue to stimulate open-minded chemists in the recognition of the fact that chemistry is definitively not soluble in physics.

Concerning the problem of anomalous configurations, I thank Scerri for having stressed the importance of the paper published in 2009 by Wang and Schwarz. As I am a chemist synthesizing molecules made of bonded atoms, I am not very interested in atomic electronic configurations in the gaseous phase, which are of little help in explaining observed chemical facts.
Bonded atoms do not display the same properties as gaseous ones; bonded molecules in a crystal do not display the same shapes as gaseous molecules. The key point is that theoretical computations are always done in the gaseous phase, whereas most experiments in chemistry deal with liquids, solutions or solids. It is thus quite amazing that Hartree-Fock or density functional calculations in the gas phase are perfectly able to explain experimental facts observed in condensed states. For me, this does not represent a triumph at all, but merely points to the fact that some kind of experimental data not included in first principles is implicitly used. Accordingly, the choice of a good basis set that must be used to get reliable results is not driven by theory alone, but rather selected by its ability to reproduce experimental results in very simple systems. The only way to have computations truly disconnected from experience would be to rely on an infinite basis set, and this is just not possible. The essay was written in this spirit of not confusing mathematical deductions from first principles with mathematical experiments involving very clever algorithms of computation.

The idea that the reduction of the periodic table to quantum mechanics is becoming more and more successful as computational power increases cannot be a demonstration that chemistry is a sub-branch of physics. In fact, this is just the opposite. The more sophisticated the computations become, the better the agreement with experience, but also the greater the gap between theoreticians and synthetic chemists. It follows that I am not really misled by philosophical accounts, as suggested by Scerri, but rather quite pleased to find that some philosophers were smart enough to predict the widening gap that we observe nowadays in chemistry between those who are mastering computers and algorithms and those who are mastering beakers and real chemical substances. The essay was written neither for physicists nor for philosophers, but rather for synthetic chemists, in order to convince them that their way of thinking and practicing chemistry cannot be captured by a single mathematical equation, as suggested by Paul Dirac.

Another important point addressed by Scerri concerns the case of quantum decoherence. In fact, the problem of coherence in condensed matter was discussed in another essay published in this review.19 One of the big problems with coherence is that the conceptual frame of thinking is not quantum mechanics, but rather quantum field theory. With this enlarged viewpoint, it becomes completely useless to solve Schrödinger’s or Dirac’s equations, because the coupling between atoms and the physical vacuum is ignored in both cases. I thus completely agree with Scerri that focusing on what lurks behind the Born-Oppenheimer approximation is not the best way of tackling the problem of what a molecular structure is.

In his conclusion, Scerri remarks on the apparent lack of conclusion of my essay and suggests that my focus on advanced topics in physics and mathematics is puzzling given my message that chemistry is not reducible to physics. First, the principal aim of the essay was to provide readers with important facts that are usually not discussed, or even ignored, by mainstream science. The inescapable conclusion then is that scientific research tries to find answers to six basic questions:

1. Q: What is a universe? A: refer to general relativity (GR).
2. Q: What is a vacuum? A: refer to quantum mechanics (QM).
3. Q: What is light? A: refer to electromagnetism (EM).
4. Q: What is matter? A: refer to chemistry (CH).
5. Q: What is information? A: refer to thermodynamics (TH).
6. Q: What is life? A: refer to biology (BL).

Consequently, neither physics (PH) nor mathematics (MT) has to respond to any particular question. Concerning physics, the main reason is the faith in its ability to give answers to any kind of question concerning Nature, a hegemonic position called reductionism. Concerning mathematics, the reason seems to be that it appears as a purely immaterial science, a purely speculative activity of the human brain. In fact, there is a strong tendency in the minds of physicists towards a pyramidal viewpoint, such as this one:

[pyramidal diagram not reproduced]

In this diagram, GR has been set apart owing to the considerable difficulties in merging QM with GR within a single theoretical frame. If one sticks to such a pyramidal structure, making extensive use of mathematics may appear a reductionist attitude. I want to stress here that this is definitively not the viewpoint which has been defended in this essay. The proposed structure is, rather, based on a circle, with mathematics at the center and appearing as the natural link among the six basic sciences:

[circular diagram: GR (G), QM (h), EM (c), CH (NA), TH (kB) and BL (e) arranged around mathematics at the center]

In short, there is no place for physics. To add support to this viewpoint, I have also associated with each basic science a natural constant encapsulating the quintessence of each domain: Avogadro’s constant NA for chemistry, Planck’s constant h for quantum mechanics, Einstein’s constant c for electromagnetism, Boltzmann’s constant kB for thermodynamics, Coulomb’s constant e for biology and Newton’s constant G for astronomy. Of course, one may add other constants such as Hubble’s constant (GR) or Sommerfeld’s constant α (QM), for instance. The circular placement of the double arrows stresses the fact that each pole is interdependent with the other poles. It follows that the claim that one science is reducible to another is not only short-sighted, but also useless. It also follows that the privileged position of mathematics is not due to its immaterial character, but rather to its ability to facilitate exchanges of information among the six poles, which may occur not only at the periphery (double arrows), but also by passing through the center, the only point from which one can go directly from any pole to any other pole.

With such a circle in hand, do we really need to add a seventh pole named physics? My answer in the essay is simply, “No.” To justify this answer, I was obliged to go right to the center, mathematics in its most powerful expression, group theory, in order to retrieve this beautiful circular structure. Knowing that the symmetry of the vacuum was described by the SO(4,2) symmetry group having six independent parameters (1 for time, 3 for space and 2 for dilations in space or in time), the idea that there could exist such a thing as a GUT is doomed to fail. Accordingly, the mere fact that mass scales as L⁻¹ in QM and as L in GR precludes any kind of marriage between both approaches. I would thus say to Scerri that immersion in mathematics does not necessarily mean that we are further arming reductionists in their quest to dissolve chemistry. Instead we must learn to use the same arms in order to convince them of the vanity of this kind of quest. It should also be realized that what applies to chemistry applies as well to thermodynamics and biology.
Each pivotal science has its own way of thinking that is definitively not reducible to another way of thinking. Both philosophers and scientists should acknowledge this fundamental irreducibility. Using the universal group theoretical language, my conclusion could thus have been: GR, QM, EM, TH, CH and BL are irreducible representations of Nature’s SO(4,2) symmetry group. Everything is said concisely, leaving no room for any kind of reductionist assault.

Eric Scerri is a historian and philosopher of chemistry in the Department of Chemistry & Biochemistry at UCLA.

Marc Henry is a Professor of Chemistry, Materials Science and Quantum Physics at the University of Strasbourg.

1. I am the founder and editor of the Springer journal Foundations of Chemistry, now in its nineteenth year of publication.
2. J. Van Brakel, “On the Neglect of the Philosophy of Chemistry,” Foundations of Chemistry 1 (1999): 111–74.
3. E. Nagel, The Structure of Science: Problems in the Logic of Explanation (New York: Harcourt, Brace and World, 1961); H. Reichenbach, Selected Writings, 1909–1953, M. Reichenbach, R. S. Cohen, eds. (Dordrecht: Reidel, 1978).
4. The latter feat was first achieved by Wolfgang Pauli, even before the identification of his fourth quantum number with a classically non-describable property which became somewhat misleadingly termed “spin.”
5. Henry’s article contains a persistent typo in that L is used several times instead of l. The usual meaning of L is the vector sum of the individual quantum numbers for the electrons in an atom, which is an altogether different property.
6. Another article in which I examined this issue was E. R. Scerri, “The Changing Views of a Philosopher of Chemistry on the Question of Reduction,” in E. R. Scerri, G. Fisher, eds., Essays in the Philosophy of Chemistry (New York: Oxford University Press, 2016).
7. I know of literally only two textbooks that give a correct account. They are S. Glasstone, Textbook of Physical Chemistry (1946) and D. W. Oxtoby, H. P. Gillis, and A. Campion, Principles of Modern Chemistry (2007).
8. E. R. Scerri, “The Trouble with the Aufbau Principle,” Education in Chemistry 50 (2013).
9. Strictly speaking, electrons are completely indistinguishable. When labeling electrons as belonging to particular orbitals I am operating within the independent electron approximation.
10. Textbooks, at least those that even recognize the problem, tend to appeal to all manner of ad hoc maneuvers in order to try to “have their cake and eat it.” What I mean is that they attempt to claim that 4s is preferentially occupied and yet also preferentially ionized in transition metal atoms. This is quite simply illogical, and yet it is precisely what is maintained by the vast majority of chemistry, and physics, textbooks. Henry fails to mention that there have been various claims to the effect that the Madelung rule has been derived from first principles. L. C. Allen and E. T. Knight, International Journal of Quantum Chemistry 90 (2000); H. A. Bent and F. Weinhold, Journal of Chemical Education 85 (2007); V. N. Ostrovsky, Foundations of Chemistry 3 (2001).
11. The Madelung rule not only fails to describe the true situation in scandium, but also in the following nine transition elements, not to mention the remaining 30 transition elements that occur in the second, third, and fourth transition series. One might say that these elements follow an n rule rather than an n + l rule.
Even more shockingly to the traditionalists, the Madelung rule is only really valid for the elements in the s-block of the periodic table, which accounts for only about 10% of all elements. In the case of the f-block elements, the situation is more complicated, and here we encounter some genuine cases in which these inner transition elements do indeed follow the Madelung rule.
12. I do not have the space here to discuss this question further. The reader is referred to E. R. Scerri, “What is an Element? What is the Periodic Table? And What Does Quantum Mechanics Contribute to the Question,” Foundations of Chemistry 14 (2012): 69–81.
13. In just one particular case, that of the palladium atom, the configuration may be said to be doubly anomalous in that there are no electrons whatsoever in the outermost shell in which they are generally found in other transition elements in the same period.
14. E. R. Scerri, “The Changing Views of a Philosopher of Chemistry on the Question of Reduction,” in E. R. Scerri, G. Fisher, eds., Essays in the Philosophy of Chemistry (New York: Oxford University Press, 2016).
15. S.-G. Wang and W. H. E. Schwarz, Angewandte Chemie International Edition 48 (2009): 3404.
16. Opposing views on this question are presented in R. F. Hendry, “Ontological Reduction and Molecular Structure,” Studies in History and Philosophy of Modern Physics 41 (2010): 183–91, and E. R. Scerri, “Top Down Causation Regarding the Chemistry – Physics Interface – A Skeptical View,” Interface Focus: Royal Society Publications 2 (2012): 20–25.
17. E. R. Scerri, “It All Depends What You Mean By Reduction,” in From Simplicity to Complexity, Information, Interaction, Emergence, Proceedings of the 1994 ZiF Meeting in Bielefeld, 77–93, K. Mainzer, A. Müller, and W. Saltzer, eds. (Wiesbaden: Vieweg-Verlag, 1994).
18. Royal Society of Chemistry, “Periodic Table: Scandium.”
19. Marc Henry, “The Hydrogen Bond,” Inference: International Review of Science 1, no. 2 (2015).
De Broglie Waves and Complex Numbers

Jan 5, 2008 #1
We used complex variables to describe the wave function. People do that in acoustics and optics too, strictly for convenience, because the real and imaginary parts are redundant. The wave function of quantum mechanics is "necessarily" complex; it's not just for convenience that we use complex numbers in quantum theory. Is there any physical reason for the wave function to be complex?

Jan 5, 2008 #2
I think there is more than one way to answer this question. One way to see that complex numbers are built into the theory rather than simply for convenience, like your earlier examples, is that Schrodinger's equation is a heat equation with an imaginary dispersion coefficient. Thus the solutions are intrinsically complex. The real and imaginary parts of these solutions would also obey the Schrodinger equation since the equation is linear, but they do not generally also match the boundary conditions, and are therefore not generally solutions to your physical systems (UNLIKE your counterexamples of acoustics and optics). For a concrete example: a "left-moving particle" has a wavefunction e^{ikx}, NOT cos(kx) or sin(kx). That might not be what you call a "physical reason", but it is a mathematical one.

Jan 6, 2008 #3
I'll attempt an explanation without using equations. The wave function represents the state of the system. Because its time and space derivatives must be proportional to the wave function itself, it takes an exponential form. If the argument in the exponent is real, the wave function will either grow without limit or decay away. If we want the system to persist, we need to make the wave function periodic, and that requires an i in the argument. By the way, de Broglie waves don't have to be complex, but Schroedinger waves do. The Schroedinger equation contains an explicit i.

Jan 6, 2008 #4
Could you expand on the 'must be' please? Why must they be?

Jan 6, 2008 #5
For example, if the energy is constant, the time derivative of the wave function is proportional to the energy multiplied by the wave function (the eigenvalue equation). If we are going to use wave functions to describe motion, then they must be complex.

Jan 6, 2008 #6
They don't have to be. An obvious example is the particle in a box, the solutions of which are not proportional to their odd spatial derivatives. Another example is the ground state (or any state) of a simple harmonic oscillator. Another example is *any* function other than an exponential. So, I think that he must just be saying that the Schrodinger equation is linear. But this does not mean that the solutions are always proportional to their space and time derivatives.

Jan 6, 2008 #7
Xeinstein - there is no physical reason why complex numbers should be used in QM. If it were so, it would be equivalent to saying it only worked in German, or in base 8 arithmetic. There are perfectly good QM's that do not use complex numbers. It is merely a (great) convenience.

Jan 6, 2008 #8
You are correct, of course, wave functions do not need to be complex. I was addressing the question about why they are (when they are). But even the solutions to Schroedinger's time-independent equation that you give as examples are associated with a time-dependent factor that is complex.
Jan 7, 2008 #9
Regarding the Schrödinger equation from a mathematical point of view: it has just one derivative in time, whereas the Maxwell equations have a second derivative in time. That's the big difference, and it requires the solution of Schrödinger's equation to be complex. Physically there is another reason: energy conservation demands that the solution be invariant under a time transformation. If the solution were real and you replaced t by -t, there would be no invariance, but replacing it by -it at the same time accomplishes the requirement. Another reason is the spin description. The whole theory is only possible in the complex sphere (for one spin, the Bloch sphere, minus the part for the unity matrix). Spin eigenstates have to be orthogonal, and for one spin the vector has to be 2-dimensional (because there are two possibilities: spin up and spin down). Further, there have to be 3 eigenstates, and that is possible only if two are in real space and one is imaginary. So complex space enables more orthogonal eigenstates. And then, and I guess what's most important: the only way to write a vector product as an integral is in complex space, namely, in quantum physics, the Hilbert space. I can't remember exactly, but it has something to do with the bilinear form ... There would certainly be more reasons ... but that's what's crossing my mind right now. (No guarantee that it is correct!)
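As a quick check of the point made in post #2 above, here is a small sketch (illustrative code added here, not part of the thread) using sympy to verify that the complex plane wave solves the free-particle Schrodinger equation while its real part alone does not:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, m, hbar = sp.symbols('k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)  # free-particle dispersion relation

def schrodinger_residual(psi):
    # residual of i*hbar*d_t(psi) + (hbar^2 / 2m)*d_x^2(psi); zero means psi is a solution
    return sp.simplify(sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2))

plane_wave = sp.exp(sp.I * (k * x - omega * t))
real_part = sp.cos(k * x - omega * t)

print(schrodinger_residual(plane_wave))  # 0: the complex exponential is a solution
print(schrodinger_residual(real_part))   # nonzero: the real part by itself is not
```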
Physicist, Startup Founder, Blogger, Dad

Tuesday, July 31, 2007

Algorithm wars

Some tantalizing Renaissance tidbits in this lawsuit against two former employees, both physics PhDs from MIT. Very interesting -- I think the subtleties of market making deserve further scrutiny :-) At least they're not among the huge number of funds rumored at the moment to be melting down from leveraged credit strategies. Previous coverage of Renaissance here.

Ex-Simons Employees Say Firm Pursued Illegal Trades
2007-07-30 11:19 (New York)
By Katherine Burton and Richard Teitelbaum

July 30 (Bloomberg) -- Two former employees of Renaissance Technologies Corp., sued by the East Setauket, New York-based firm for theft of trade secrets, said the company violated securities laws and "encouraged" them to help.

Renaissance, the largest hedge-fund manager, sought to block Alexander Belopolsky and Pavel Volfbeyn from using the allegations as a defense in the civil trade-secrets case. The request was denied in a July 19 order by New York State judge Ira Gammerman, who wrote that the firm provided no evidence to dispute the claims. The company denied the former employees' claims.

"The decision on this procedural motion makes no determination that there is any factual substance to the allegations," Renaissance said in a statement to Bloomberg News. "These baseless charges are merely a smokescreen to distract from the case we are pursuing."

Renaissance, run by billionaire investor James Simons, sued Belopolsky and Volfbeyn in December 2003, accusing them of misappropriating Renaissance's trade secrets by taking them to another firm, New York-based Millennium Partners LP. Renaissance settled its claims against Millennium in June. The men, who both hold Ph.D.'s in physics from the Massachusetts Institute of Technology, worked for the company from 2001 to mid-2003, according to the court document.

"We think the allegations are very serious and will have a significant impact on the outcome of the litigation," said Jonathan Willens, an attorney representing Volfbeyn and Belopolsky. "We continue to think the allegations by Renaissance concerning the misappropriation of trade secrets is frivolous."

'Quant' Fund

Renaissance, founded by Simons in 1988, is a quantitative manager that uses mathematical and statistical models to buy and sell securities, options, futures, currencies and commodities. It oversees $36.8 billion for clients, most in the 2-year-old Renaissance Institutional Equities Fund.

According to Gammerman's heavily redacted order, Volfbeyn said that he was instructed by his superiors to devise a way to "defraud investors trading through the Portfolio System for Institutional Trading, or POSIT," an electronic order-matching system operated by Investment Technology Group Inc. Volfbeyn said that he was asked to create an algorithm, or set of computer instructions, to "reveal information that POSIT intended to keep confidential."

Refused to Build

Volfbeyn told superiors at Renaissance that he believed the POSIT strategy violated securities laws and refused to build the algorithm, according to the court document. The project was reassigned to another employee and eventually Renaissance implemented the POSIT strategy, according to the document. New York-based Investment Technology Group took unspecified measures, according to the order, and Renaissance was forced to abandon the strategy, Volfbeyn said. Investment Technology Group spokeswoman Alicia Curran declined to comment.
According to the order, Volfbeyn said that he also was asked to develop an algorithm for a second strategy involving limit orders, which are instructions to buy or sell a security at the best price available, up to a maximum or minimum set by the trader. Standing limit orders are compiled in files called limit order books on the New York Stock Exchange and Nasdaq and can be viewed by anyone. The redacted order doesn't provide details of the strategy. Volfbeyn refused to participate in the strategy because he believed it would violate securities laws. The limit-order strategy wasn't implemented before Volfbeyn left Renaissance, the two men said, according to the order. Swap `Scam' Claimed Volfbeyn and Belopolsky said that Renaissance was involved in a third strategy, involving swap transactions, which they describe as ``a massive scam'' in the court document. While they didn't disclose what type of swaps were involved, they said that Renaissance violated U.S. Securities and Exchange Commission and National Association of Securities Dealers rules governing short sales. Volfbeyn and Belopolsky said they were expected to help find ways to maximize the profits of the strategy, and Volfbeyn was directed to modify and improve computer code in connection with the strategy, according to the order. In a swap transaction, two counterparties exchange one stream of cash flows for another. Swaps are often used to hedge certain risks, such as a change in interest rates, or as a means of speculation. In a short sale, an investor borrows shares and then sells them in the hopes they can be bought back in the future at a cheaper price. Besides the $29 billion institutional equity fund, Renaissance manages Medallion, which is open only to Simons and his employees. Simons, 69, earned an estimated $1.7 billion last year, the most in the industry, according to Institutional Investor's Alpha magazine. I, Robot Infoworld article: ... RGguard accesses data gathered by a sophisticated automated testbed that has examined virtually every executable on the Internet. This testbed couples traditional anti-virus scanning techniques with two-pronged heuristic analysis. The proprietary Spyberus technology establishes causality between source, executable and malware, and user interface automation allows the computers to test programs just as a user would - but without any human intervention. Monday, July 30, 2007 Tyler Cowen and rationality I recently came across the paper How economists think about rationality by Tyler Cowen. Highly recommended -- a clear and honest overview. Although you might think the strong version of EMH is only important to traders and finance specialists, it is also very much related to the idea that markets are good optimizers of resource allocation for society. Do markets accurately reflect the "fundamental value of corporations"? See related discussion here. "Behavioral finance" is currently a fad in financial theory, and in the eyes of many it may become the new mainstream. Behavioral finance typically weakens rationality assumptions, usually with a view towards explaining "market anomalies." Almost always these models assume imperfect capital markets, to prevent a small number of rational investors from dwarfing the influence of behavioral factors. Robert J. Shiller claims that investors overreact to very small pieces of information, causing virtually irrelevant news to have a large impact on market prices. 
Other economists argue that some fund managers "churn" their portfolios, and trade for no good reason, simply to give their employers the impression that they are working hard. It appears that during the Internet stock boom, simply having the suffix "dot com" in the firm's name added value on share markets, and that after the bust it subtracted value. Behavioral models use looser notions of rationality than does EMH. Rarely do behavioral models postulate outright irrationality; rather, the term "quasi-rationality" is popular in the literature. Most frequently, a behavioral model introduces only a single deviation from classical rationality postulates. The assumption of imperfect capital markets then creates the possibility that this quasi-rationality will have a real impact on market phenomena. The debates between the behavioral theories and EMH now form the central dispute in modern financial theory. In essence, one vision of rationality (the rational overwhelm the influence of the irrational through perfect capital markets) is pitted against another (imperfect capital markets give real influence to quasi-rationality). These differing approaches to rationality, combined with assumptions about capital markets, are considered to be eminently testable. Saturday, July 28, 2007 From physics to finance One small comment: Bandyopadhyay says below that banks hire the very best PhDs from theoretical physics. I think he meant to say that, generally, they hire the very best among those who don't find jobs in physics. Unfortunately, few are able to find permanent positions in the field. Mike K. -- if you're reading this, why didn't you reply to the guy's email? :-) CB: So, we presume you went to NYC? CB: Did you get an offer from Banc of America? AB: Not at the first attempt. After the interview, I was quite positive that I would get an offer. However, as soon as I returned home, I received an email from Dr. Carr saying, "You are extremely smart, but the bank is composed of deal makers, traders, marketers, and investment bankers. We are looking for someone with business skills. You will not fit well here." He suggested that we both write a paper on my derivation of the Black-Scholes/Merton partial differential equation, or even possibly a book. He also suggested I read thoroughly (and work out all the problems of) the book "Dynamic Asset Pricing Theory" by Darrell Duffie. In fact, Duffie's book was my starting point in learning financial economics. I assume your readers have never heard of this book. It is a notoriously difficult book on continuous time finance and it is intended for very advanced Ph.D. students in financial economics. But, it was the right book for me - I read it without any difficulty in the math part and it provided me with a solid foundation in financial economics. Anyway, I think I am going off on a tangent from your question. CB: What did you do during the internship? CB: Are there any 'special' moments on Wall Street that you would like to talk about? AB: Sure, there are many. But one that stands out is the day I started my internship at Banc of America. As is the norm in grad school or academia, I felt that I had to introduce myself to my colleagues. So, on my very first day of internship, I took the elevator to the floor where the top bosses of the bank had offices. I completely ignored the secretary at the front desk, knocked on the CEO and CFO's door, walked in, and briefly introduced myself. Little did I know that this was not the norm in the business world!!!
Shortly thereafter, Dr. Carr called me and advised that I stick to my cube instead of 'just wandering around'! In retrospect, that was quite an experience! AB: You mean to say that professors here don't get paid top dollar? (laughs) I generally try to keep this blog free of kid pictures, but I found these old ones recently and couldn't resist! Thursday, July 26, 2007 Humans eke out poker victory But only due to a bad decision by the human designers of the robot team! :-) Earlier post here. Wednesday, July 25, 2007 Man vs machine: live poker! The history of AI tells us that capabilities initially regarded as sure signs of intelligence ("machines will never play chess like a human!") are discounted soon after machines master them. Personally I favor a strong version of the Turing test: interaction which takes place over a sufficiently long time that the tester can introduce new ideas and watch to see if learning occurs. Can you teach the machine quantum mechanics? At the end will it be able to solve some novel problems? Many humans would fail this Turing test :-) Earlier post on bots invading online poker. World-Class Poker Professionals Phil Laak and Ali Eslami Computer Poker Champion Polaris (University of Alberta) Can a computer program bluff? Yes -- probably better than any human. Bluff, trap, check-raise bluff, big lay-down -- name your poison. The patience of a monk or the fierce aggression of a tiger, changing gears in a single heartbeat. Polaris can make a pro's head spin. Psychology? That's just a human weakness. Odds and calculation? Computers can do a bit of that. Intimidation factor and mental toughness? Who would you choose? Many of the top pros, like Chris "Jesus" Ferguson, Paul Phillips, Andy Bloch and others, already understand what the future holds. Now the rest of the poker world will find out. Tuesday, July 24, 2007 What is a quant? The following log entry, which displays the origin of and referring search engine query for a pageload request to this blog, does not inspire confidence. Is the SEC full of too many JD's and not enough people who understand monte carlo simulation and stochastic processes? secfwopc.sec.gov (U.S. Securities & Exchange Commission) District Of Columbia, Washington, United States, 0 returning visits Date Time WebPage 24th July 2007 10:04:52 referer: www.google.com/search?hl=en&q=what is a quants&btnG=Search 24th July 2007 12:11:59 referer: www.google.com/search?hl=en&q=Charles Munger and the pricing of derivatives&btnG=Google Search Sunday, July 22, 2007 Income inequality and Marginal Revolution Tyler Cowen at Marginal Revolution discusses a recent demographic study of who, exactly, the top US wage earners are. We've discussed the problem of growing US income inequality here before. To make the top 1 percent in AGI (IRS: Adjusted Gross Income), you needed to earn $309,160. To make it to the top 0.1 percent, you needed $1.4 million (2004 figures). Here's a nice factoid: Somewhat misleading, as this includes returns on the hedgies' own capital invested as part of their funds. But, still, you get the picture of our gilded age :-) One of the interesting conclusions from the study is that executives of non-financial public companies are a numerically rather small component of top earners, comprising no more than 6.5%. Financiers comprise a similar, but perhaps larger, subset. Who are the remaining top earners? The study can't tell! (They don't know.) 
Obvious candidates are doctors in certain lucrative specialties, sports and entertainment stars and owners of private businesses. The category which I think is quite significant, but largely ignored, is founders and employees of startups that have successful exits. Below is the comment I added to Tyler's blog: The fact that C-level execs are not the numerically dominant subgroup is pretty obvious. The whole link between exec compensation and inequality is a red herring (except in that it symbolizes our acceptance of winner take all economics). I suspect that founders and early employees of successful private companies (startups) that have a liquidity event (i.e., an IPO or acquisition) are a large subset of the top AGI group. Note, though, that this population does not make it into the top tier (i.e., top 1 or .1%) with regularity, but rather only in a very successful year (the one in which they get their "exit"). Any decent tech IPO launches hundreds of employees into the top 1 or even .1%. It is very important to know what fraction of the top group are there each year (doctors, lawyers, financiers) versus those for whom it is a one-time event (sold the business they carefully built over many years). If it is predominantly the latter it's hard to attribute an increase in top percentile earnings to unhealthy inequality. To be more quantitative: suppose there are 1M employees at private companies (not just in technology, but in other industries as well) who each have a 10% chance per year of participating in a liquidity event that raises their AGI to the top 1% threshold. That would add 100k additional top earners each year, and thereby raise the average income of that group. If there are 150M workers in the US then there are 1.5M in the top 1%, so this subset of "rare exit" or employee stock option beneficiaries would make up about 7% of the total each year (similar to the corporate exec number). But these people are clearly not part of the oligarchy, and if the increase in income inequality is due to their shareholder participation, why is that a bad thing? We reported earlier on the geographic distribution of income gains to the top 1 percent: they are concentrated in tech hotbeds like silicon valley, which seems to support our thesis that the payouts are not going to the same people every year. Many Worlds: A brief guide for the perplexed I added this to the earlier post 50 years of Many Worlds and thought I would make it into a stand alone post as well. Many Worlds: A brief guide for the perplexed In quantum mechanics, states can exist in superpositions, such as (for an electron spin) (state)   =   (up)   +   (down) When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes: (up)     or     (down), with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even if we have specified the state above as precisely as is allowed by nature, we are still left with only a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction. There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction "collapse" proceeds. 
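In more compact Dirac notation, the same example reads as follows (a sketch; equal amplitudes are assumed, matching the 1/2 and 1/2 case above):

|\psi\rangle = \tfrac{1}{\sqrt{2}}\,|{\uparrow}\rangle + \tfrac{1}{\sqrt{2}}\,|{\downarrow}\rangle, \qquad P(\uparrow) = |\langle {\uparrow}|\psi\rangle|^{2} = \tfrac{1}{2}, \qquad P(\downarrow) = \tfrac{1}{2}

The Born rule supplies the probabilities, but nothing in the unitary evolution itself says when the replacement of |\psi\rangle by |{\uparrow}\rangle or |{\downarrow}\rangle is supposed to happen.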
Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an "observer", capable of causing the collapse? Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. In fact, the whole universe can be described by a "universal wave function" which evolves according to the Schrodinger equation and never undergoes Copenhagen collapse. Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down. The whole process is described by the Schrodinger equation, with the final state being (state)   =   (up) (device recorded up)   +   (down) (device recorded down) Here "device" could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large (e.g., Avogadro's number) number of degrees of freedom. In that case, as noted by Everett, the two sub-states of the world (or device) after the measurement are effectively orthogonal (have zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state. If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. That is, any macroscopic information processing device ends up in one of the possible macroscopic states (red light vs green light flash). The amplitude for those macroscopically different states to interfere is exponentially small, hence they can be treated thereafter as completely independent "branches" of the wavefunction. Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation. Personally, I prefer to call it No Collapse instead of Many Worlds -- why not emphasize the advantageous rather than the confusing part of the interpretation? Some eminent physicists who (as far as I can tell) believe(d) in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, Sidney Coleman ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett! Saturday, July 21, 2007 Man vs machine: poker It looks like we will soon add poker to the list of games (chess, checkers, backgammon) at which machines have surpassed humans. Note we're talking about heads up play here. I imagine machines are not as good at playing tournaments -- i.e., picking out and exploiting weak players at the table. How long until computers can play a decent game of Go? Associated Press: ...Computers have gotten a lot better at poker in recent years; they're good enough now to challenge top professionals like Laak, who won the World Poker Tour invitational in 2004. But it's only a matter of time before the machines take a commanding lead in the war for poker supremacy. 
Just as they already have in backgammon, checkers and chess, computers are expected to surpass even the best human poker players within a decade. They can already beat virtually any amateur player. "This match is extremely important, because it's the first time there's going to be a man-machine event where there's going to be a scientific component," said University of Alberta computing science professor Jonathan Schaeffer. The Canadian university's games research group is considered the best of its kind in the world. After defeating an Alberta-designed program several years ago, Laak was so impressed that he estimated his edge at a mere 5 percent. He figures he would have lost if the researchers hadn't let him examine the programming code and practice against the machine ahead of time. "This robot is going to do just fine," Laak predicted. The Alberta researchers have endowed the $50,000 contest with an ingenious design, making this the first man-machine contest to eliminate the luck of the draw as much as possible. Laak will play with a partner, fellow pro Ali Eslami. The two will be in separate rooms, and their games will be mirror images of one another, with Eslami getting the cards that the computer received in its hands against Laak, and vice versa. That way, a lousy hand for one human player will result in a correspondingly strong hand for his partner in the other room. At the end of the tournament the chips of both humans will be added together and compared to the computer's. The two-day contest, beginning Monday, takes place not at a casino, but at the annual conference of the Association for the Advancement of Artificial Intelligence in Vancouver, British Columbia. Researchers in the field have taken an increasing interest in poker over the past few years because one of the biggest problems they face is how to deal with uncertainty and incomplete information. "You don't have perfect information about what state the game is in, and particularly what cards your opponent has in his hand," said Dana S. Nau, a professor of computer science at the University of Maryland in College Park. "That means when an opponent does something, you can't be sure why." As a result, it is much harder for computer programmers to teach computers to play poker than other games. In chess, checkers and backgammon, every contest starts the same way, then evolves through an enormous, but finite, number of possible states according to a consistent set of rules. With enough computing power, a computer could simply build a tree with a branch representing every possible future move in the game, then choose the one that leads most directly to victory. ...The game-tree approach doesn't work in poker because in many situations there is no one best move. There isn't even a best strategy. A top-notch player adapts his play over time, exploiting his opponent's behavior. He bluffs against the timid and proceeds cautiously when players who only raise on the strongest hands are betting the limit. He learns how to vary his own strategy so others can't take advantage of him. That kind of insight is very hard to program into a computer. You can't just give the machine some rules to follow, because any reasonably competent human player will quickly intuit what the computer is going to do in various situations. "What makes poker interesting is that there is not a magic recipe," Schaeffer said. 
In fact, the simplest poker-playing programs fail because they are just a recipe, a set of rules telling the computer what to do based on the strength of its hand. A savvy opponent can soon gauge what cards the computer is holding based on how aggressively it is betting. That's how Laak was able to defeat a program called Poker Probot in a contest two years ago in Las Vegas. As the match progressed Laak correctly intuited that the computer was playing a consistently aggressive game, and capitalized on that observation by adapting his own play. Programmers can eliminate some of that weakness with game theory, a branch of mathematics pioneered by John von Neumann, who also helped develop the hydrogen bomb. In 1950 mathematician John Nash, whose life inspired the movie "A Beautiful Mind," showed that in certain games there is a set of strategies such that every player's return is maximized and no player would benefit from switching to a different strategy. In the simple game "Rock, Paper, Scissors," for example, the best strategy is to randomly select each of the options an equal proportion of the time. If any player diverted from that strategy by following a pattern or favoring one option over the others, opponents would soon notice and adapt their own play to take advantage of it. Texas Hold 'em is a little more complicated than "Rock, Paper, Scissors," but Nash's math still applies. With game theory, computers know to vary their play so an opponent has a hard time figuring out whether they are bluffing or employing some other strategy. But game theory has inherent limits. In Nash equilibrium terms, success doesn't mean winning — it means not losing. "You basically compute a formula that can at least break even in the long run, no matter what your opponent does," Billings said. That's about where the best poker programs are today. Though the best game theory-based programs can usually hold their own against world-class human poker players, they aren't good enough to win big consistently. Squeezing that extra bit of performance out of a computer requires combining the sheer mathematical power of game theory with the ability to observe an opponent's play and adapt to it. Many legendary poker players do that by being experts of human nature. They quickly learn the tics, gestures and other "tells" that reveal exactly what another player is up to. A computer can't detect those, but it can keep track of how an opponent plays the game. It can observe how often an opponent tries to bluff with a weak hand, and how often she folds. Then the computer can take that information and incorporate it into the calculations that guide its own game. "The notion of forming some sort of model of what another player is like ... is a really important problem," Nau said. Computer scientists are only just beginning to incorporate that ability into their programs; days before their contest with Laak and Eslami, the University of Alberta researchers are still trying to tweak their program's adaptive elements. Billings will say only this about what the humans have in store: "They will be guaranteed to be seeing a lot of different styles." Friday, July 20, 2007 Visit to Redmond No startup odyssey is complete without a trip to Microsoft! I'm told there are 35k employees on their sprawling campus. Average age a bit higher than at Google, atmosphere a bit more serious and corporate, but still signs of geekery and techno wizardry.
Fortunately for me, no one complained when I used a Mac Powerbook for my presentation :-) Monday, July 16, 2007 50 years of Many Worlds Max Tegmark has a nice essay in Nature on the Many Worlds (MW) interpretation of quantum mechanics. Previous discussion of Hugh Everett III and MW on this blog. Personally, I find MW more appealing than the conventional Copenhagen interpretation, which is certainly incomplete. This point of view is increasingly common among those who have to think about the QM of isolated, closed systems: quantum cosmologists, quantum information theorists, etc. Tegmark correctly points out in the essay below that progress in our understanding of decoherence in no way takes the place of MW in clarifying the problems with measurement and wavefunction collapse, although this is a common misconception. However, I believe there is a fundamental problem with deriving Born's rule for probability of outcomes in the MW context. See research paper here and talk given at Caltech IQI here. A brief guide for the perplexed: In quantum mechanics, states can exist in superpositions, such as (for an electron spin) (state) = (up) + (down) When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes: (up) or (down), with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even though we have specified the state above as precisely as is allowed by nature, we are still left with a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction. Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down. The whole process is described by the Schrodinger equation, with the final state being (state) = (up) (device recorded up) + (down) (device recorded down) Do the other worlds exist? Can we interact with them? These are the tricky questions remaining... Some eminent physicists who (as far as I can tell) believe in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett! Many lives in many worlds Max Tegmark, Nature Almost all of my colleagues have an opinion about it, but almost none of them have read it. The first draft of Hugh Everett's PhD thesis, the shortened official version of which celebrates its 50th birthday this year, is buried in the out-of-print book The Many-Worlds Interpretation of Quantum Mechanics. I remember my excitement on finding it in a small Berkeley book store back in grad school, and still view it as one of the most brilliant texts I've ever read. By the time Everett started his graduate work with John Archibald Wheeler at Princeton University in New Jersey, quantum mechanics had chalked up stunning successes in explaining the atomic realm, yet debate raged on as to what its mathematical formalism really meant. I was fortunate to get to discuss quantum mechanics with Wheeler during my postdoctoral years in Princeton, but never had the chance to meet Everett.
Quantum mechanics specifies the state of the Universe not in classical terms, such as the positions and velocities of all particles, but in terms of a mathematical object called a wavefunction. According to the Schrödinger equation, this wavefunction evolves over time in a deterministic fashion that mathematicians term 'unitary'. Although quantum mechanics is often described as inherently random and uncertain, there is nothing random or uncertain about the way the wavefunction evolves. The sticky part is how to connect this wavefunction with what we observe. Many legitimate wavefunctions correspond to counterintuitive situations, such as Schrödinger's cat being dead and alive at the same time in a 'superposition' of states. In the 1920s, physicists explained away this weirdness by postulating that the wavefunction 'collapsed' into some random but definite classical outcome whenever someone made an observation. This add-on had the virtue of explaining observations, but rendered the theory incomplete, because there was no mathematics specifying what constituted an observation — that is, when the wavefunction was supposed to collapse. Everett's theory is simple to state but has complex consequences, including parallel universes. The theory can be summed up by saying that the Schrödinger equation applies at all times; in other words, that the wavefunction of the Universe never collapses. That's it — no mention of parallel universes or splitting worlds, which are implications of the theory rather than postulates. His brilliant insight was that this collapse-free quantum theory is, in fact, consistent with observation. Although it predicts that a wavefunction describing one classical reality gradually evolves into a wavefunction describing a superposition of many such realities — the many worlds — observers subjectively experience this splitting merely as a slight randomness (see 'Not so random'), with probabilities consistent with those calculated using the wavefunction-collapse recipe. Gaining acceptance It is often said that important scientific discoveries go through three phases: first they are completely ignored, then they are violently attacked, and finally they are brushed aside as well known. Everett's discovery was no exception: it took more than a decade before it started getting noticed. But it was too late for Everett, who left academia disillusioned. Everett's no-collapse idea is not yet at stage three, but after being widely dismissed as too crazy during the 1970s and 1980s, it has gradually gained more acceptance. In an informal poll taken at a conference on the foundations of quantum theory in 1999, physicists rated the idea more highly than the alternatives, although many more physicists were still 'undecided'. I believe the upward trend is clear. Why the change? I think there are several reasons. Predictions of other types of parallel universes from cosmological inflation and string theory have increased tolerance for weird-sounding ideas. New experiments have demonstrated quantum weirdness in ever larger systems. Finally, the discovery of a process known as decoherence has answered crucial questions that Everett's work had left dangling. For example, if these parallel universes exist, why don't we perceive them? Quantum superpositions cannot be confined — as most quantum experiments are — to the microworld. Because you are made of atoms, if atoms can be in two places at once in superposition, so can you. The breakthrough came in 1970 with a seminal paper by H.
Dieter Zeh, who showed that the Schrödinger equation itself gives rise to a type of censorship. This effect became known as 'decoherence', and was worked out in great detail by Wojciech Zurek, Zeh and others over the following decades. Quantum superpositions were found to remain observable only as long as they were kept secret from the rest of the world. The quantum card in our example (see 'Not so random') is constantly bumping into air molecules, photons and so on, which thereby find out whether it has fallen to the left or to the right, destroying the coherence of the superposition and making it unobservable. Decoherence also explains why states resembling classical physics have special status: they are the most robust to decoherence. Science or philosophy? The main motivation for introducing the notion of random wavefunction collapse into quantum physics had been to explain why we perceive probabilities and not strange macroscopic superpositions. After Everett had shown that things would appear random anyway (see 'Not so random') and decoherence had been found to explain why we never perceive anything strange, much of this motivation was gone. Even though the wavefunction technically never collapses in the Everett view, it is generally agreed that decoherence produces an effect that looks like a collapse and smells like a collapse. In my opinion, it is time to update the many quantum textbooks that introduce wavefunction collapse as a fundamental postulate of quantum mechanics. The idea of collapse still has utility as a calculational recipe, but students should be told that it is probably not a fundamental process violating the Schrödinger equation so as to avoid any subsequent confusion. If you are considering a quantum textbook that does not mention Everett and decoherence in the index, I recommend buying a more modern one. After 50 years we can celebrate the fact that Everett's interpretation is still consistent with quantum observations, but we face another pressing question: is it science or mere philosophy? The key point is that parallel universes are not a theory in themselves, but a prediction of certain theories. For a theory to be falsifiable, we need not observe and test all its predictions — one will do. Because Einstein's general theory of relativity has successfully predicted many things we can observe, we also take seriously its predictions for things we cannot, such as the internal structure of black holes. Analogously, successful predictions by unitary quantum mechanics have made scientists take more seriously its other predictions, including parallel universes. Moreover, Everett's theory is falsifiable by future lab experiments: no matter how large a system they probe, it says, they will not observe the wavefunction collapsing. Indeed, collapse-free superpositions have been demonstrated in systems with many atoms, such as carbon-60 molecules. Several groups are now attempting to create quantum superpositions of objects involving 10^17 atoms or more, tantalizingly close to our human macroscopic scale. There is also a global effort to build quantum computers which, if successful, will be able to factor numbers exponentially faster than classical computers, effectively performing parallel computations in Everett's parallel worlds. The bird perspective So Everett's theory is testable and so far agrees with observation. But should you really believe it?
When thinking about the ultimate nature of reality, I find it useful to distinguish between two ways of viewing a physical theory: the outside view of a physicist studying its mathematical equations, like a bird surveying a landscape from high above, and the inside view of an observer living in the world described by the equations, like a frog being watched by the bird. From the bird perspective, Everett's multiverse is simple. There is only one wavefunction, and it evolves smoothly and deterministically over time without any kind of splitting or parallelism. The abstract quantum world described by this evolving wavefunction contains within it a vast number of classical parallel storylines (worlds), continuously splitting and merging, as well as a number of quantum phenomena that lack a classical description. From their frog perspective, observers perceive only a tiny fraction of this full reality, and they perceive the splitting of classical storylines as quantum randomness. What is more fundamental — the frog perspective or the bird perspective? In other words, what is more basic to you: human language or mathematical language? If you opt for the former, you would probably prefer a 'many words' interpretation of quantum mechanics, where mathematical simplicity is sacrificed to collapse the wavefunction and eliminate parallel universes. But if you prefer a simple and purely mathematical theory, then you — like me — are stuck with the many-worlds interpretation. If you struggle with this you are in good company: in general, it has proved extremely difficult to formulate a mathematical theory that predicts everything we can observe and nothing else — and not just for quantum physics. Moreover, we should expect quantum mechanics to feel counterintuitive, because evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the trajectories of flying rocks. The choice is yours. But I worry that if we dismiss theories such as Everett's because we can't observe everything or because they seem weird, we risk missing true breakthroughs, perpetuating our instinctive reluctance to expand our horizons. To modern ears the Shapley–Curtis debate of 1920 about whether there was really a multitude of galaxies (parallel universes by the standards of the time) sounds positively quaint. If we dismiss theories because they seem weird, we risk missing true breakthroughs. Everett asked us to acknowledge that our physical world is grander than we had imagined, a humble suggestion that is probably easier to accept after the recent breakthroughs in cosmology than it was 50 years ago. I think Everett's only mistake was to be born ahead of his time. In another 50 years, I believe we will be more used to the weird ways of our cosmos, and even find its strangeness to be part of its charm. Saturday, July 14, 2007 Behavioral economics I found this overview and intellectual history of behavioral economics via a link from Economist's View. By now I think anyone who has looked at the data knows that the agents -- i.e., humans -- participating in markets are limited in many ways. (Only a mathematics-fetishizing autistic, completely disconnected from empiricism, could have thought otherwise.) If the agents aren't reliable or even particularly good processors of information, how does the system find its neoclassical equilibrium? (Can one even define the equilibrium if there are not individual and aggregate utility functions?) 
The next stage of the argument is whether the market magically aggregates the decisions of the individual agents in such a way that their errors cancel. In some simple cases (see Wisdom of Crowds for examples) this may be the case, but in more complicated markets I suspect (and the data apparently show; see below) that cancellation does not occur and outcomes are suboptimal. Where does this leave neoclassical economics? You be the judge! Related posts here (Mirowski) and here (irrational voters and rational agents?). The paper (PDF) is here. Some excerpts below. Opening quote from Samuelson and Conclusions: I wonder how much economic theory would be changed if [..] found to be empirically untrue. I suspect, very little. --Paul Samuelson Samuelson's claim at the beginning of this paper that a falsification would have little effect on his economics remains largely an open question. On the basis of the overview provided in this paper, however, two developments can be observed. With respect to the first branch of behavioral economics, Samuelson is probably right. Although the first branch proposes some radical changes to traditional economics, it protects Samuelson's economics by labeling it a normative theory. Kahneman, Tversky, and Thaler propose a research agenda that sets economics off in a different direction, but at the same time saves traditional economics as the objective anchor by which to stay on course. The second branch in behavioral economics is potentially much more destructive. It rejects Samuelson's economics both as a positive and as a normative theory. By doubting the validity of the exogeneity of preference assumption, introducing the social environment as an explanatory factor, and promoting neuroscience as a basis for economics, it offers a range of alternatives for traditional economics. With game theory it furthermore possesses a powerful tool that is increasingly used in a number of other related sciences. ... Kahneman and Tversky: Over the past ten years Kahneman has gone one step beyond showing how traditional economics descriptively fails. Especially prominent, both in the number of publications Kahneman devotes to it and in the attention it receives, is his reinterpretation of the notion of utility. For Kahneman, the main reason that people do not make their decisions in accordance with the normative theory is that their valuation and perception of the factors of these choices systematically differ from the objective valuation of these factors. This is what, among many other articles, Kahneman and Tversky (1979) shows. People's subjective perception of probabilities and their subjective valuation of utility differ from their objective values. A theory that attempts to describe people's decision behavior in the real world should thus start by measuring these subjective values of utility and probability. ... Thaler distinguishes his work, and behavioral economics generally, from the experimental economics of, for instance, Vernon Smith and Charles Plott. Although Thaler's remarks in this respect are scattered and mostly made in passing, two recurring arguments can be observed. Firstly, Thaler rejects experimental economics' suggestion that the market (institutions) will correct the quasi-rational behavior of the individual. Simply put, if one extends the coffee-mug experiment described above with an (experimental) market in which subjects can trade their mugs, the endowment effect doesn't change one single bit.
Furthermore, there is no way in which a rational individual could use the market system to exploit quasi-rational individuals in the case of this endowment effect. The implication is that quasi-rational behavior can survive. As rational agents cannot exploit quasi-rational behavior, and as there seems in most cases to be no 'survival penalty' on quasi-rational behavior, the evolutionary argument doesn't work either. Secondly, experimental economics' market experiments are not convincing according to Thaler. This approach makes two wrong assumptions. First of all, it assumes that individuals will quickly learn from their mistakes and discover the right solution. Thaler recounts how this has been falsified in numerous experiments. On the contrary, it is often the case that even when the correct solution has been repeatedly explained to them, individuals still persist in making the wrong decision. A second false assumption of experimental economics is to suppose that in the real world there exists ample opportunity to learn. This is labeled the Groundhog Day argument, in reference to a well-known movie starring Bill Murray. ... Subjects in (market) experiments who have to play the exact same game for tens or hundreds of rounds may perhaps be observed to (slowly) adjust to the rational solution. But real life is more like a constant sequence of the first few rounds of an experiment. The learning assumption of experimental economics is thus not valid. But perhaps even more destructive for economics is the fact that individuals' intertemporal choices can be shown to be fundamentally inconsistent. People who prefer A now over B now also prefer A in one month over B in two months. However, at the same time they also prefer B in one month and A in two months over A in one month and B in two months. The ultimatum game (player one proposes a division of a fixed sum of money, player two either accepts (the money is divided according to the proposed division) or rejects (both players get nothing)) has been played all over the world and always leads to the result that individuals do not play the 'optimum' (player one proposes the smallest amount possible to player two and player two accepts), but typically divide the money about half and half. The phenomenon is remarkably stable around the globe. However, the experiments have only been done with university students in advanced capitalist economies. The question is thus whether the results hold when tested in other environments. The surprising result is not so much that the average proposed and accepted divisions in the small-scale societies differ from those of university students, but how they differ. Roughly, the average proposed and accepted divisions go from [80%,20%] to [40%,60%]. The members of the different societies thus show a remarkable difference in the division they propose and accept. ..."preferences over economic choices are not exogenous as the canonical model would have it, but rather are shaped by the economic and social interactions of everyday life. ..." Camerer's critique is similar to Loewenstein's and can perhaps best be summed up with the conclusion that for Camerer there is no invisible hand. That is, for Camerer nothing mysterious happens between the behavior of the individual and the behavior of the market. If you know the behavior of the individuals, you can add up these behaviors to obtain the behavior of the market.
In Anderson and Camerer (2000), for instance, it is shown that even when one allows learning to take place, a key issue for experimental economics, the game does not necessarily go to the global optimum, but as a result of path-dependency may easily get stuck in a sub-optimum. Camerer (1987) shows that, contrary to the common belief in experimental economics, decision biases persist in markets. In a laboratory experiment Camerer finds that a market institution does not reduce biases but may even increase them. ... The second branch of behavioral economics is organized around Camerer, Loewenstein, and Laibson. It considers the uncertainty of the decision behavior to be of an endogenous or strategic nature. That is, the uncertainty depends upon the fact that, like the individual, also the rest of the world tries to make the best decision. The most important theory to investigate individual decision behavior under endogenous uncertainty is game theory. The second branch of behavioral economics draws less on Kahneman and Tversky. What it takes from them is the idea that traditional Samuelson economics is plainly false. It argues, however, that traditional economics is both positively/descriptively and normatively wrong. Except for a few special cases, it neither tells how the individuals behave, nor how they should behave. The main project of the second branch is hence to build new positive theories of rational individual economic behavior under endogenous uncertainty. And here the race is basically still open. Made in China In an earlier post I linked to Bunnie Huang's blog, which describes (among other things) the manufacturing of his startup's hi-tech Chumby gadget in Shenzhen. At Foo Camp he and I ran a panel on the Future of China. In the audience, among others, were Jimmy Wales, the founder of Wikipedia, and Guido van Rossum, the creator of Python. Jimmy was typing on his laptop the whole time, but Guido asked a bunch of questions and recommended a book to me. Bunnie has some more posts up (including video) giving his impressions of manufacturing in China. Highly recommended! Made in China: Scale, Skill, Dedication, Feeding the factory. Below: Bunnie on the line, debugging what turns out to be a firmware problem with the Chumby. Look at those MIT wire boys go! :-) Wednesday, July 11, 2007 Hedge funds or market makers? To what extent are Citadel, DE Shaw and Renaissance really just big market makers? The essay excerpted below is by Harry Kat, a finance professor and former trader who was profiled in the New Yorker recently. First, from the New Yorker piece: It is notoriously difficult to distinguish between genuine investment skill and random variation. But firms like Renaissance Technologies, Citadel Investment Group, and D. E. Shaw appear to generate consistently high returns and low volatility. Shaw’s main equity fund has posted average annual returns, after fees, of twenty-one per cent since 1989; Renaissance has reportedly produced even higher returns. (Most of the top-performing hedge funds are closed to new investors.) Kat questioned whether such firms, which trade in huge volumes on a daily basis, ought to be categorized as hedge funds at all. “Basically, they are the largest market-making firms in the world, but they call themselves hedge funds because it sells better,” Kat said. “The average horizon on a trade for these guys is something like five seconds. They earn the spread. It’s very smart, but their skill is in technology. 
It’s in sucking up tick-by-tick data, processing all those data, and converting them into second-by-second positions in thousands of spreads worldwide. It’s just algorithmic market-making.” Next, the essay from Kat's academic web site. I suspect Kat exaggerates, but he does make an interesting point. Could a market maker really deliver such huge alpha? Only if it knows exactly where and when to take a position! Of Market Makers and Hedge Funds David and Ken both work for a large market making firm and both have the same dream: to start their own company. One day, David decides to quit his job and start a traditional market-making company. He puts in $10m of his own money and finds 9 others that are willing to do the same. The result: a company with $100m in equity, divided equally over 10 shareholders, meaning that each shareholder will share equally in the companyís operating costs and P&L. David will manage the company and will receive an annual salary of $1m for doing so. Ken decides to quit as well. He is going to do things differently though. Instead of packaging his market-making activities in the traditional corporate form, he is going to start a hedge fund. Like David, he also puts in $10m of his own money. Like David, he also finds 9 others willing to do the same. They are not called shareholders, however. They are investors in a hedge fund with a net asset value of $100m. Just like David, Ken has a double function. Apart from being one of the 10 investors in the fund, he will also be the fundís manager. As manager, he is entitled to 20% of the profit (over a 5% hurdle rate); the average incentive fee in the hedge fund industry. At first sight, it looks like David and Ken have accomplished the same thing. Both have a market-making operation with $100m in capital and 9 others to share the benefits with. There is, however, one big difference. Suppose David and Ken both made a net $100m. In Davidís company this would be shared equally between the shareholders, meaning that, including his salary, David received $11m. In Ken's hedge fund things are different, however. As the manager of the fund, he takes 20% of the profit, which, taking into account the $5m hurdle, would leave $81m to be divided among the 10 investors. Since he is also one of those 10 investors, however, this means that Ken would pocket a whopping $27.1m in total. Now suppose that both David and Ken lost $100m. In that case David would lose $9m, but Ken would still only lose $10m since as the fundís manager Ken gets 20% of the profit, but he does not participate in any losses. So if you wanted to be a market maker, how would you set yourself up? Of course, we are not the first to think of this. Some of the largest market maker firms in the world disguise themselves as hedge funds these days. Their activities are typically classified under fancy hedge fund names such as ëstatistical arbitrageí or ëmanaged futuresí, but basically these funds are market makers. This includes some of the most admired names in the hedge fund business such as D.E. Shaw, Renaissance, Citadel, and AHL, all of which are, not surprisingly, notorious for the sheer size of their daily trading volumes and their fairly consistent alpha. The above observation leads to a number of fascinating questions. The most interesting of these is of course how much of the profits of these market-making hedge funds stems from old-fashioned market making and how much is due to truly special insights and skill? 
Is the bulk of what these funds do very similar to what traditional market-making firms do, or are they responsible for major innovations and/or have they embedded major empirical discoveries in their market making? They tend to employ lots of PhDs and make a lot of fuss about only hiring the best, etc. However, how much of that is window-dressing and how much is really adding value? Another question is whether market-making hedge funds get treated differently than traditional market makers when they go out to borrow money or securities. Given prime brokers' eagerness to service hedge funds these days, one might argue that in this respect market-making hedge funds are again better off than traditional market makers. So what is the conclusion? First of all, given the returns posted by the funds mentioned, it appears that high volume multi-market market making is a very good business to be in. Second, it looks like there could be a trade-off going on. Market-making hedge funds take a bigger slice of the pie, but the pie might be significantly bigger as well. Obviously, all of this could do with quite a bit more research. See if I can put a PhD on it. Monday, July 09, 2007 Theorists in diaspora Passing the time, two former theoretical physicists analyze a research article which only just appeared on the web. Between them, they manage over a billion dollars in hedge fund assets. While their computers process data in the background, vacuuming up nickels from the trading ether, the two discuss color magnetic flux, quark gluon plasma and acausal correlations. For fun, one of the two emails the paper to a former colleague, a humble professor still struggling with esoteric research... Quark-gluon plasma paradox D. Miskowiec Gesellschaft fur Schwerionenforschung mbH, Planckstr. 1, 64291 Darmstadt Based on simple physics arguments it is shown that the concept of quark-gluon plasma, a state of matter consisting of uncorrelated quarks, antiquarks, and gluons, has a fundamental problem. The result? The following email message. Dear Dr. Miskowiec, I read your interesting preprint on a possible QGP paradox. My comments are below. Best regards, Stephen Hsu In the paper it seems you are discussing a caricature of QGP, indeed a straw man. I don't know whether belief in this straw man is widespread among nuclear theorists; perhaps it is. But QGP is, after all, merely the high temperature phase of QCD. There *are* correlations (dynamics) that lead to preferential clustering of quarks into color neutral objects. These effects are absent at length scales much smaller than a fermi, due to asymptotic freedom. It is only on these short length scales that one can treat QCD as a (nearly) free gas of quarks and gluons. On sufficiently long length scales (i.e., much larger than a fermi) the system would still prefer to be color neutral. While it is true that at high temperatures the *linear* (confining) potential between color charges is no longer present, there is still an energetic cost for unscreened charge. It's a standard result in finite temperature QCD that, even at high temperatures, there are still infrared (long distance) nonperturbative effects. These are associated with a scale related to the magnetic screening length of gluons. The resulting dynamics are never fully perturbative, although thermodynamic quantities such as entropy density, pressure, etc. are close to those of a free gas of quarks and gluons.
The limit to our ability to compute these thermodynamic quantities beyond a certain level in perturbation theory arises from the nonperturbative effects I mention. Consider the torus of QGP you discuss in your paper. Suppose I make a single "cut" in the torus, possibly separating quarks from each other in a way that leaves some uncancelled color charge. Once I pull the two faces apart by more than some distance (probably a few fermis), effects such as preferential hadronization into color neutral, integer baryon number, objects come into play. The energy required to make the cut and pull the faces apart is more than enough to create q-qbar pairs from the vacuum that can color neutralize each face. Note this is a *local* phenomenon taking place on fermi lengthscales. I believe the solution to your paradox is the third possibility you list. See below, taken from the paper, bottom of column 1 p.3. I only disagree with the last sentence: high temperature QCD is *not* best described as a gas of hadrons, but *does* prefer color neutrality. No rigorous calculation ever claimed a lack of correlations except at very short distances (due to asymptotic freedom). ...The third possibility is that local correlations between quarks make some cutting surfaces more probable than the others when it comes to cutting the ring and starting the hadronization. Obviously, in absence of such correlations the QGP ring basically looks like in Fig. 3 and no preferred breaking points can be recognized. If, however, some kind of interactions lead to clustering of quarks and gluons into (white) objects of integer baryon numbers like in Fig. 4 then starting hadronization from several points of the ring at the same time will not lead to any problem. However, this kind of matter would be hadron resonance matter rather than the QGP. Cooking the books: US News college rankings I found this amusing article from Slate. It turns out the dirty scoundrels at US News need a "logarithmic adjustor" (fudge factor) to keep Caltech from coming out ahead of HYP (Harvard-Yale-Princeton). Note the article is from back in 2000. The earlier Gottlieb article mentioned below discussing the 1999 rankings (where Caltech came out number 1) is here. For revealed preferences rankings of universities (i.e., where do students really choose to go when they are admitted to more than one school), see here. Cooking the School Books (Yet Again) The U.S. News college rankings get phonier and phonier. By Nicholas Thompson Posted Friday, Sept. 15, 2000, at 3:00 AM ET This year, according to U.S. News & World Report, Princeton is the best university in the country and Caltech is No. 4. This represents a pretty big switcheroo—last year, Caltech was the best and Princeton the fourth. Of course, it's not as though Caltech degenerated or Princeton improved over the past 12 months. As Bruce Gottlieb explained last year in Slate, changes like this come about mainly because U.S. News fiddles with the rules. Caltech catapulted up in 1999 because U.S. News changed the way it compares per-student spending; Caltech dropped back this year because the magazine decided to pretty much undo what it did last year. But I think Gottlieb wasn't quite right when he said that U.S. News makes changes in its formula just so that colleges will bounce around and give the annual rankings some phony drama. The magazine's motives are more devious than that. U.S. 
News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top. U.S. News changed the rankings back because those editors and statisticians are now gone and the magazine wanted HYP back on top. Just before the latest scores came out, I wrote an article in the Washington Monthly suggesting that this might happen. Even so, the fancy footwork was a little shocking. The story of how the rankings were cooked goes back to 1987, when the magazine's first attempt at a formula put a school in first that longtime editor Mel Elfin says he can't even remember, except that it wasn't HYP. So Elfin threw away that formula and brought in a statistician named Robert Morse who produced a new one. This one puts HYP on top, and Elfin frankly defends his use of this result to vindicate the process. He told me, "When you're picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it's not right." For the next decade, Elfin and Morse essentially ran the rankings as their own fiefdom, and no one else at the magazine really knew how the numbers worked. But during a series of recent leadership changes, Morse and Elfin moved out of their leadership roles and a new team came in. What they found, they say, was a bizarre statistical measure that discounted major differences in spending, for what seemed to be the sole purpose of keeping HYP at the top. So, last year, as U.S. News itself wrote, the magazine "brought [its] methodology into line with standard statistical procedure." With these new rankings, Caltech shot up and HYP was displaced for the first time ever. But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific. And second, the results must match, more or less, people's nonscientific prejudices. Last year's rankings failed the second test. There aren't many Techie graduates in the top ranks of U.S. News, and I'd be surprised if The New Yorker has published a story written by a Caltech grad, or even by someone married to one, in the last five years. Go out on the streets of Georgetown by the U.S. News offices and ask someone about the best college in the country. She probably won't start to talk about those hallowed labs in Pasadena. So, Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top. According to the magazine: "[W]e adjusted each school's research spending according to the ratio of its undergraduates to graduate students ... [and] we applied a logarithmic adjuster to all spending values." If you're not up on your logarithms, here's a translation: If a school spends tons and tons of money building machines for its students, they only get a little bit of credit. They got lots last year—but that was a mistake. Amazingly, the only categories where U.S. News applies this logarithmic adjuster are also the only categories where Caltech has a huge lead over HYP. The fact that the formulas had to be rearranged to get HYP back on top doesn't mean that those three aren't the best schools in the country, whatever that means. After all, who knows whether last year's methodology was better than this year's? Is a school's quality more accurately measured by multiplying its spending per student by 0.15 or by taking a logarithmic adjuster to that value? A case could also be made for taking the square root. 
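For a sense of what a logarithmic adjuster actually does to a spending gap, here is a toy calculation (the figures and weights are invented for illustration and are not U.S. News' actual inputs):

import math

# Hypothetical per-student spending (made-up figures, not U.S. News data)
spending = {"Tech school": 100_000, "Ivy school": 40_000}

linear = {s: 0.15 * v for s, v in spending.items()}            # weight raw spending
logged = {s: 0.15 * math.log(v) for s, v in spending.items()}  # "logarithmic adjuster"

print(linear["Tech school"] / linear["Ivy school"])   # 2.5   -> the big lead survives
print(logged["Tech school"] / logged["Ivy school"])   # ~1.09 -> the lead nearly vanishes

A large outlier in per-student spending, of the sort the article attributes to Caltech, barely registers once the value is passed through a log.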
But the logical flaw in U.S. News' methodology should be obvious—at least to any Caltech graduate. If the test of a mathematical formula's validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It's just for show. And if you fiddle constantly with the formula to produce the result you want, it's not even good for that. U.S. News really only has one justification for its rankings: They must be right because the schools we know are the best come out on top. Last year, that logic fell apart. This year, the magazine has straightened it all out and HYP's back in charge—with the help of a logarithmic adjuster. Nicholas Thompson is a senior editor at Legal Affairs. Sunday, July 08, 2007 Myth of the Rational Voter The New Yorker has an excellent discussion by Louis Menand of Bryan Caplan's recent book The Myth of the Rational Voter. Best sentence in the article (I suppose this applies to physicists as well): Caplan is the sort of economist (are there other sorts? there must be) who engages with the views of non-economists in the way a bulldozer would engage with a picket fence if a bulldozer could express glee. Short summary (obvious to anyone who has thought about democracy): voters are clueless, and resulting policies and outcomes are suboptimal, but allowing everyone to have their say lends stability and legitimacy to the system. Democracy is a tradeoff, of course! While a wise and effective dictator (e.g., Lee Kwan Yew of Singapore, or, in Caplan's mind, a board of economic "experts") might outperform the electorate over a short period of time, the more common kind of dictator (stupid, egomaniacal) is capable of much, much worse. Without democracy, what keeps a corrupt and stupid dictator from succeeding the efficient and benevolent one? The analogous point for markets is that, for a short time (classic example: during a war), good central planning might be more effective for certain goals than market mechanisms. But over the long haul distributing the decisions over many participants will give a better outcome, both because of the complexity of economic decision making (e.g., how many bagels does NYC need each day? can a committee figure this out?) and because of the eventuality of bad central planning. When discussing free markets, people on the left always assume the alternative is good central planning, while those on the right always assume the opposite. Returning to Caplan, his view isn't just that voters are uninformed or stupid. He attacks an apparently widely believed feel-good story that says although most voters are clueless their mistakes are random and magically cancel out when aggregated, leaving the outcome in the hands of the wise fraction of the electorate. What a wonderfully fine-tuned dynamical system! (That is how markets are supposed to work, except when they don't, and instead horribly misprice things.) Caplan points out several common irrationalities of voters that do not cancel out, but rather tend to bias government in particular directions. Any data or argument supporting the irrationality of voters and suboptimality of democratic outcomes can be applied just as well to agents in markets. (What Menand calls "shortcuts" below others call heuristics or bounded cognition.) The claim that people make better decisions in market situations (e.g., buying a house or a choosing a career) because they are directly affected by the outcome is only marginally convincing to me. 
Evaluating the optimality of many economic decisions is about as hard as figuring out whether a particular vote or policy decision was optimal. Did your vote for Nader lead to G.W. Bush and the Iraq disaster? Did your votes for Reagan help end the cold war safely and in our favor? Would you have a higher net worth if you had bought a smaller house and invested the rest of your down payment in equities? Would the extra money in the bank compensate you for the reduced living space? Do typical people sit down and figure these things out? Do they come to correct conclusions, or just fool themselves? I doubt most people could even agree as to Reagan's effect on the cold war, over 20 years ago! I don't want to sound too negative. Let me clarify, before one of those little bulldozers engages with me :-) I regard markets as I regard democracy: flawed and suboptimal, but the best practical mechanisms we have for economic distribution and governance, respectively. My main dispute is with academics who really believe that woefully limited agents are capable of finding global optima. The average voter is not held in much esteem by economists and political scientists, and Caplan rehearses some of the reasons for this. The argument of his book, though, is that economists and political scientists have misunderstood the problem. They think that most voters are ignorant about political issues; Caplan thinks that most voters are wrong about the issues, which is a different matter, and that their wrong ideas lead to policies that make society as a whole worse off. We tend to assume that if the government enacts bad policies, it’s because the system isn’t working properly—and it isn’t working properly because voters are poorly informed, or they’re subject to demagoguery, or special interests thwart the public’s interest. Caplan thinks that these conditions are endemic to democracy. They are not distortions of the process; they are what you would expect to find in a system designed to serve the wishes of the people. “Democracy fails,” he says, “because it does what voters want.” It is sometimes said that the best cure for the ills of democracy is more democracy. Caplan thinks that the best cure is less democracy. He doesn’t quite say that the world ought to be run by economists, but he comes pretty close. The political knowledge of the average voter has been tested repeatedly, and the scores are impressively low. In polls taken since 1945, a majority of Americans have been unable to name a single branch of government, define the terms “liberal” and “conservative,” and explain what the Bill of Rights is. More than two-thirds have reported that they do not know the substance of Roe v. Wade and what the Food and Drug Administration does. Nearly half do not know that states have two senators and three-quarters do not know the length of a Senate term. More than fifty per cent of Americans cannot name their congressman; forty per cent cannot name either of their senators. Voters’ notions of government spending are wildly distorted: the public believes that foreign aid consumes twenty-four per cent of the federal budget, for example, though it actually consumes about one per cent. Even apart from ignorance of the basic facts, most people simply do not think politically. They cannot see, for example, that the opinion that taxes should be lower is incompatible with the opinion that there should be more government programs. 
Their grasp of terms such as “affirmative action” and “welfare” is perilously uncertain: if you ask people whether they favor spending more on welfare, most say no; if you ask whether they favor spending more on assistance to the poor, most say yes. And, over time, individuals give different answers to the same questions about their political opinions. People simply do not spend much time learning about political issues or thinking through their own positions. They may have opinions—if asked whether they are in favor of capital punishment or free-trade agreements, most people will give an answer—but the opinions are not based on information or derived from a coherent political philosophy. They are largely attitudinal and ad hoc. For fifty years, it has been standard to explain voter ignorance in economic terms. Caplan cites Anthony Downs’s “An Economic Theory of Democracy” (1957): “It is irrational to be politically well-informed because the low returns from data simply do not justify their cost in time and other resources.” In other words, it isn’t worth my while to spend time and energy acquiring information about candidates and issues, because my vote can’t change the outcome. I would not buy a car or a house without doing due diligence, because I pay a price if I make the wrong choice. But if I had voted for the candidate I did not prefer in every Presidential election since I began voting, it would have made no difference to me (or to anyone else). It would have made no difference if I had not voted at all. This doesn’t mean that I won’t vote, or that, when I do vote, I won’t care about the outcome. It only means that I have no incentive to learn more about the candidates or the issues, because the price of my ignorance is essentially zero. According to this economic model, people aren’t ignorant about politics because they’re stupid; they’re ignorant because they’re rational. If everyone doesn’t vote, then the system doesn’t work. But if I don’t vote, the system works just fine. So I find more productive ways to spend my time. Political scientists have proposed various theories aimed at salvaging some dignity for the democratic process. One is that elections are decided by the ten per cent or so of the electorate who are informed and have coherent political views. In this theory, the votes of the uninformed cancel each other out, since their choices are effectively random: they are flipping a coin. So candidates pitch their appeals to the informed voters, who decide on the merits, and this makes the outcome of an election politically meaningful. Another argument is that the average voter uses “shortcuts” to reach a decision about which candidate to vote for. The political party is an obvious shortcut: if you have decided that you prefer Democrats, you don’t really need more information to cast your ballot. Shortcuts can take other forms as well: the comments of a co-worker or a relative with a reputation for political wisdom, or a news item or photograph (John Kerry windsurfing) that can be used to make a quick-and-dirty calculation about whether the candidate is someone you should support. (People argue about how valid these shortcuts are as substitutes for fuller information, of course.) There is also the theory of what Caplan calls the Miracle of Aggregation. 
As James Surowiecki illustrates in “The Wisdom of Crowds” (2004), a large number of people with partial information and varying degrees of intelligence and expertise will collectively reach better or more accurate results than will a small number of like-minded, highly intelligent experts. Stock prices work this way, but so can many other things, such as determining the odds in sports gambling, guessing the number of jelly beans in a jar, and analyzing intelligence. An individual voter has limited amounts of information and political sense, but a hundred million voters, each with a different amount of information and political sense, will produce the “right” result. Then, there is the theory that people vote the same way that they act in the marketplace: they pursue their self-interest. In the market, selfish behavior conduces to the general good, and the same should be true for elections. Caplan thinks that democracy as it is now practiced cannot be salvaged, and his position is based on a simple observation: “Democracy is a commons, not a market.” A commons is an unregulated public resource—in the classic example, in Garrett Hardin’s essay “The Tragedy of the Commons” (1968), it is literally a commons, a public pasture on which anyone may graze his cattle. It is in the interest of each herdsman to graze as many of his own cattle as he can, since the resource is free, but too many cattle will result in overgrazing and the destruction of the pasture. So the pursuit of individual self-interest leads to a loss for everyone. (The subject Hardin was addressing was population growth: someone may be concerned about overpopulation but still decide to have another child, since the cost to the individual of adding one more person to the planet is much less than the benefit of having the child.) ...But, as Caplan certainly knows, though he does not give sufficient weight to it, the problem, if it is a problem, is more deeply rooted. It’s not a matter of information, or the lack of it; it’s a matter of psychology. Most people do not think politically, and they do not think like economists, either. People exaggerate the risk of loss; they like the status quo and tend to regard it as a norm; they overreact to sensational but unrepresentative information (the shark-attack phenomenon); they will pay extravagantly to punish cheaters, even when there is no benefit to themselves; and they often rank fairness and reciprocity ahead of self-interest. Most people, even if you explained to them what the economically rational choice was, would be reluctant to make it, because they value other things—in particular, they want to protect themselves from the downside of change. They would rather feel good about themselves than maximize (even legitimately) their profit, and they would rather not have more of something than run the risk, even if the risk is small by actuarial standards, of having significantly less. People are less modern than the times in which they live, in other words, and the failure to comprehend this is what can make economists seem like happy bulldozers. ...
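The aggregation-versus-bias point is easy to see in a toy simulation (illustrative only; the distributions and numbers are arbitrary):

import random

random.seed(0)
TRUE_VALUE = 10.0    # the "right" answer (jelly beans in the jar, true cost of a policy)
N_VOTERS = 100_000

def estimate(bias):
    # one voter's guess: truth + shared systematic bias + idiosyncratic noise
    return TRUE_VALUE + bias + random.gauss(0, 5)

unbiased_crowd = sum(estimate(0.0) for _ in range(N_VOTERS)) / N_VOTERS
biased_crowd = sum(estimate(3.0) for _ in range(N_VOTERS)) / N_VOTERS

print(round(unbiased_crowd, 2))  # ~10.0: zero-mean errors wash out (Miracle of Aggregation)
print(round(biased_crowd, 2))    # ~13.0: a shared bias survives, no matter how many voters

This is exactly the distinction Caplan leans on: idiosyncratic error cancels in a large electorate, while systematic error does not.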
number theory and quantum mechanics By far, the most active area of research linking QM and number theory is the work concerning the 'spectral interpretation' of the Riemann zeta zeros, suggesting a possible approach to the Riemann hypothesis involving quantum chaos. The remainder of this page concerns more general connections between QM and number theory - both the use of number theoretical structures in the modelling of QM phenomena, and the application of QM-related techniques to number theoretical problems: T. Aoki, S. Kanemitsu, M. Nakahara, and Y. Ohno (Eds.), Zeta Functions, Topology and Quantum Physics, Developments in Mathematics 14 (Springer, 2005) [publisher's description:] "This volume focuses on various aspects of zeta functions: multiple zeta values, Ohno's relations, the Riemann hypothesis, L-functions, polylogarithms, and their interplay with other disciplines. Eleven articles on recent advances are written by outstanding experts in the above-mentioned fields. Each article starts with an introductory survey leading to the exciting new research developments accomplished by the contributors." D. Pozdnyakov, "Physical interpretation of the Riemann hypothesis" (preprint 03/2012) [abstract:] "An equivalent formulation of the Riemann hypothesis is given. The formulation is generalized. The physical interpretation of the Riemann hypothesis generalized formulation is given in the framework of quantum theory terminology. An axiom is laid down on the ground of the interpretation taking into account the observed properties of the surrounding reality. The Riemann hypothesis is true according to the axiom. It is shown that it is unprovable." Y. Pan, "How to measure the canonical commutation relation $[\hat{x},\hat{p}] = i\hbar$? in quantum mechanics with weak measurement?" (preprint 02/2017) [abstract:] "The quantum weak value draws many attentions recently from theoretical curiosity to experimental applications. Now we design an unusual weak measuring procedure as the pre-selection, mid-selection and post-selection to study the correlation function of two weak values, which we called the weak correlation function. In this paper, we proposed an weak measurement experiment to measure the canonical commutator $[\hat{x},\hat{p}] = i\hbar$? in quantum mechanics. Furthurmore, we found the intriguing equivalence between the canonical commutation relation and Riemann hypothesis, and then obtained the weak value of nontrivial Riemann zeros. Finally, as an nontrivial example of weak correlations, we also passed successfully a testing on the (anti-)commutators of Pauli operators, which followed the experimental setup of the landmark paper of Aharonov, et al. in 1988. Our proposed experiments could hopefully test the fundamental canonical relationship in quantum worlds and trigger more testing experiments on weak correlations." G. Cotti, "Coalescence phenomenon of quantum cohomology of Grassmannians and the distribution of prime numbers" (preprint 08/2016) [abstract:] "The occurrence and frequency of a phenomenon of resonance (namely the coalescence of some Dubrovin canonical coordinates) in the locus of Small Quantum Cohomology of complex Grassmannians is studied. It is shown that surprisingly this frequency is strictly subordinate and highly influenced by the distribution of prime numbers. 
Two equivalent formulations of the Riemann Hypothesis are given in terms of numbers of complex Grassmannians without coalescence: the former as a constraint on the disposition of singularities of the analytic continuation of the Dirichlet series associated to the sequence counting non-coalescing Grassmannians, the latter as asymptotic estimate (whose error term cannot be improved) for their distribution function." R. Ramanathan, M. Túlio Quintino, A. Belén Sainz, G. Murta, R. Augusiak, "On the tightness of correlation inequalities with no quantum violation" (preprint 07/2016) [abstract:] "We study the faces of the set of quantum correlations, i.e., the Bell and noncontextuality inequalities without any quantum violation. First, we investigate the question whether every proper (tight) Bell inequality for two parties, other than the trivial ones from positivity, normalization and no-signaling can be violated by quantum correlations, i.e., whether the classical Bell polytope or the smaller correlation polytope share any facets with their respective quantum sets. To do this, we develop a recently derived bound on the quantum value of linear games based on the norms of game matrices to give a simple sufficient condition to identify linear games with no quantum advantage. Additionally we show how this bound can be extended to the general class of unique games, illustrating it for the case of three outcomes. We then show as a main result that the paradigmatic examples of correlation Bell inequalities with no quantum violation, namely the non-local computation games do not constitute tight Bell inequalities, not even for the correlation polytope. We also extend this to an arbitrary prime number of outcomes for a specific class of these games. We then study the faces in the simplest CHSH Bell scenario of binary dichotomic measurements, and identify edges in the set of quantum correlations in this scenario. Finally, we relate the non-contextual polytope of single-party correlation inequalities with the cut polytope $CUT(\nabla G)$, where $G$ denotes the compatibility graph of observables in the contextuality scenario and $\nabla G$ denotes the suspension graph of $G$. We observe that there exist tight non-contextuality inequalities with no quantum violation, and furthermore that this set of inequalities is beyond those implied by the Consistent Exclusivity principle." C.R. de Oliveira and G.Q. Pellegrino, "(De)Localization in the prime Schrödinger operator", J. Phys. A 34(16), L239-L243 (2001) [abstract:] "It is reported a combined numerical approach to study the localization properties of the one-dimensional tight-binding model with potential modulated along the prime numbers. A localization-delocalization transition was found as function of the potential intensity; it is also argued that there are delocalized states for any value of the potential intensity." M. Krishna, "xi-zeta relation", Proceedings of the Indian Academy of Sciences 109 (4) (1999) 379-383 [abstract:] "In this note we prove a relation between the Riemann zeta function and the xi function (Krein spectral shift) associated with the Harmonic Oscillator in one dimension. This gives a new integral representation of the zeta function and also a reformulation of the Riemann hypothesis as a question in L1(R)." M.W. Coffey, "Theta and Riemann xi function representations from harmonic oscillator eigensolutions", Phys. Lett. 
A 362 (2007) 352-356 [abstract:] "From eigensolutions of the harmonic oscillator or Kepler-Coulomb Hamiltonian we extend the functional equation for the Riemann zeta function and develop integral representations for the Riemann xi function that is the completed classical zeta function. A key result provides a basis for generalizing the important Riemann-Siegel integral formula." A. Córdoba, C.L. Fefferman and L.A. Seco, "A Trigonometric Sum Relevant to the Non-relativistic Theory of Atoms" [abstract:] "We extend Van der Corput's method for exponential sums to study an oscillatory term appearing in the quantum theory of large atoms. We obtain an interpretation in terms of classical dynamics and we produce sharp asymptotic upper and lower bounds for the oscillations." C.L. Fefferman and L.A. Seco, "Arithmetic aspects of atomic structures" C.L. Fefferman and L.A. Seco, "A number-theoretic estimate for the Thomas-Fermi density" [abstract:] "In this paper we obtain an estimate for the Thomas-Fermi density which plays a role in the analysis of the atomic energy asymptotics. Such estimate has obvious number-theoretic features related to the radial symmetry of a certain Schrödinger operator, and we use number-theoretic methods in our proof. From the technical viewpoint, we also simplify and improve some of the original estimates in the proof of the Dirac-Schwinger correction to the atomic energy asymptotics." C.L. Fefferman and L.A. Seco, "Interval arithmetic in quantum mechanics" L.A. Seco, "Number Theory, Classical Mechanics and the Theory of Large Atoms" C.L. Fefferman and L.A. Seco, "Number Theory and Atomic Densities" B. Eckhardt, "Eigenvalue statistics in quantum ideal gases" "The eigenvalue statistics of quantum ideal gases with single particle energies $e_n=n^\alpha$ are studied. A recursion relation for the partition function allows to calculate the mean density of states from the asymptotic expansion for the single particle density. For integer $\alpha>1$ one expects and finds number theoretic degeneracies and deviations from the Poissonian spacing distribution. By semiclassical arguments, the length spectrum of the classical system is shown to be related to sums of integers to the power $\alpha/(\alpha-1)$. In particular, for $\alpha=3/2$, the periodic orbits are related to sums of cubes, for which one again expects number theoretic degeneracies, with consequences for the two point correlation function." O. Lablée, "Quantum revivals in two degrees of freedom integrable systems : the torus case" (preprint 09/2010) [abstract:] "The paper deals with the semi-classical behaviour of quantum dynamics for a semi-classical completely integrable system with two degrees of freedom near Liouville regular torus. The phenomomenon of wave packet revivals is demonstrated in this article. The framework of this paper is semi-classical analysis (limit). For the proofs we use standard tools of real analysis, Fourier analysis and basic analytic number theory." M.V. Suslov, G.B. Lesovik, G. Blatter, "Quantum abacus for counting and factorizing numbers" (preprint 11/2010) [abstract:] "We generalize the binary quantum counting algorithm of Lesovik, Suslov, and Blatter [Phys. Rev. A 82, 012316 (2010)] to higher counting bases. The algorithm makes use of qubits, qutrits, and qudits to count numbers in a base 2, base 3, or base d representation. 
In operating the algorithm, the number n < N = d^K is read into a K-qudit register through its interaction with a stream of n particles passing in a nearby wire; this step corresponds to a quantum Fourier transformation from the Hilbert space of particles to the Hilbert space of qudit states. An inverse quantum Fourier transformation provides the number n in the base d representation; the inverse transformation is fully quantum at the level of individual qudits, while a simpler semi-classical version can be used on the level of qudit registers. Combining registers of qubits, qutrits, and qudits, where d is a prime number, with a simpler single-shot measurement allows to find the powers of 2, 3, and other primes d in the number n. We show, that the counting task naturally leads to the shift operation and an algorithm based on the quantum Fourier transformation. We discuss possible implementations of the algorithm using quantum spin-d systems, d-well systems, and their emulation with spin-1/2 or double-well systems. We establish the analogy between our counting algorithm and the phase estimation algorithm and make use of the latter's performance analysis in stabilizing our scheme. Applications embrace a quantum metrological scheme to measure a voltage (analog to digital converter) and a simple procedure to entangle multi-particle states." F. Grosshans, T. Lawson, F. Morain and B. Smith, "Factoring safe semiprimes with a single quantum query" (preprint 11/2015) [abstract:] "Shor's factoring algorithm (SFA), by its ability to efficiently factor large numbers, has the potential to undermine contemporary encryption. At its heart is a process called order finding, which quantum mechanics lets us perform efficiently. SFA thus consists of a quantum order finding algorithm (QOFA), bookended by classical routines which, given the order, return the factors. But, with probability up to 1/2, these classical routines fail, and QOFA must be rerun. We modify these routines using elementary results in number theory, improving the likelihood that they return the factors. We present a new quantum factoring algorithm based on QOFA which is better than SFA at factoring safe semiprimes, an important class of numbers used in RSA encryption (and reputed to be the hardest to factor). With just one call to QOFA, our algorithm almost always factors safe semiprimes. As well as a speed-up, improving efficiency gives our algorithm other, practical advantages: unlike SFA, it does not need a randomly picked input, making it simpler to construct in the lab; and in the (unlikely) case of failure, the same circuit can be rerun, without modification. We consider generalising this result to other cases, although we do not find a simple extension, and conclude that SFA is still the best algorithm." J.L. Rosales, "Simulating factorization with a quantum computer" (preprint 05/2015) [abstract:] "Modern cryptography is largely based on complexity assumptions, for example, the ubiquitous RSA is based on the supposed complexity of the prime factorization problem. Thus, it is of fundamental importance to understand how a quantum computer would eventually weaken these algorithms. In this paper, one follows Feynman's prescription for a computer to simulate the physics corresponding to the algorithm of factoring a large number N into primes. Using Dirac–Jordan transformation theory one translates factorization into the language of quantum hermitical operators, acting on the vectors of the Hilbert space. 
This leads to obtaining the ensemble of factorization of N in terms of the Euler function f(N), that is quantized. On the other hand, considering N as a parameter of the computer, a Quantum Mechanical Prime Counting Function pQM(x), where x factorizes N, is derived. This function converges to p(x) when N >> x. It has no counterpart in analytic number theory and its derivation relies on semiclassical quantization alone." A. Sugamoto, "Factorization of number into prime numbers viewed as decay of particle into elementary particles conserving energy" (preprint 10/2009) [abstract:] "Number theory is considered, by proposing quantum mechanical models and string-like models at zero and finite temperatures, where the factorization of number into prime numbers is viewed as the decay of particle into elementary particles conserving energy. In these models, energy of a particle labeled by an integer $n$ is assumed or derived to being proportional to $\ln n$. The one-loop vacuum amplitudes, the free energies and the partition functions at finite temperature of the string-like models are estimated and compared with the zeta functions. The $SL(2, {\bf Z})$ modular symmetry, being manifest in the free energies is broken down to the additive symmetry of integers, ${\bf Z}_{+}$, after interactions are turned on. In the dynamical model existing behind the zeta function, prepared are the fields labeled by prime numbers. On the other hand the fields in our models are labeled, not by prime numbers but by integers. Nevertheless, we can understand whether a number is prime or not prime by the decay rate, namely by the corresponding particle can decay or can not decay through interactions conserving energy. Among the models proposed, the supersymmetric string-like model has the merit of that the zero point energies are cancelled and the energy levels may be stable against radiative corrections." J.L. Rosales and V. Martin, "On the quantum simulation of the factorization problem" (preprint 01/2016) [abstract:] "Feynman's prescription for a quantum computer was to find a Hamitonian for a system that could serve as a computer. Here we concentrate in a system to solve the problem of decomposing a large number $N$ into its prime factors. The spectrum of this computer is exactly calculated obtaining the factors of $N$ from the arithmetic function that represents the energy of the computer. As a corollary, in the semi-classical large $N$ limit, we compute a new prime counting asymptote $\pi(x|N)$, where $x$ is a candidate to factorize $N$, that has no counterpart in analytic number theory. This rises the conjecture that the quantum solution of factoring obtains prime numbers, thus reaching consistency with Euclid's unique factorization theorem: primes should be quantum numbers of a Feynman's factoring simulator." H. Mack, M. Bienert, F. Haug, M. Freyberger and W.P. Schleich, "Wave packets can factorize numbers", Phys. Stat. Sol. (B) 233, No. 3 (2002) 408–415. "We draw attention to various aspects of number theory emerging in the time evolution of elementary quantum systems with quadratic phases. Such model systems can be realized in actual experiments. Our analysis paves the way to a new, promising and effective method to factorize numbers." A. Donis-Vela and J.C. Garcia-Escartin, "A quantum primality test with order finding" (preprint 11/2017) [abstract:] "Determining whether a given integer is prime or composite is a basic task in number theory. 
We present a primality test based on quantum order finding and the converse of Fermat's theorem. For an integer $N$, the test tries to find an element of the multiplicative group of integers modulo $N$ with order $N-1$. If one is found, the number is known to be prime. During the test, we can also show most of the times $N$ is composite with certainty (and a witness) or, after $\log\log N$ unsuccessful attempts to find an element of order $N-1$, declare it composite with high probability. The algorithm requires $O((\log n)^2n^3)$ operations for a number $N$ with $n$ bits, which can be reduced to $O(\log\log n(\log n)^3n^2)$ operations in the asymptotic limit if we use fast multiplication." J.I. Latorre and G. Sierra, "Quantum computation of prime number functions" (preprint 02/2013) [abstract:] "We propose a quantum circuit that creates a pure state corresponding to the quantum superposition of all prime numbers less than $2^n$, where $n$ is the number of qubits of the register. This prime state can be built using Grover's algorithm, whose oracle is a quantum implementation of the classical Miller Rabin primality test. The prime state is highly entangled, and its entanglement measures encode number theoretical functions such as the distribution of twin primes or the Chebyshev bias. This algorithm can be further combined with the quantum Fourier transform to yield an estimate of the prime counting function, more efficiently than any classical algorithm and with an error below the bound that allows for the verification of the Riemann hypothesis. Arithmetic properties of prime numbers are then, in principle, amenable to experimental verifications on quantum systems." J.I. Latorre and G. Sierra, "There is entanglement in the primes" (preprint 03/2014) [abstract:] "Large series of prime numbers can be superposed on a single quantum register and then analyzed in full parallelism. The construction of this prime state is efficient, as it hinges on the use of a quantum version of any efficient primality test. We show that the prime state turns out to be very entangled as shown by the scaling properties of purity, Renyi entropy and von Neumann entropy. An analytical approximation to these measures of entanglement can be obtained from the detailed analysis of the entanglement spectrum of the prime state, which in turn produces new insights in the Hardy–Littlewood conjecture for the pairwise distribution of primes. The extension of these ideas to a twin prime state shows that this new state is even more entangled than the prime state, obeying majorization relations. We further discuss the construction of quantum states that encompass relevant series of numbers and opens the possibility of applying quantum computation to arithmetics in novel ways." J. Ryu, M. Marciniak, M. Wiesniak and M. Zukowski, "Entanglement conditions for integrated-optics multi-port quantum interferometry experiments" (preprint 01/2016) [abstract:] "Integrated optics allows one to perform interferometric experiments based upon multi-port beam-splitter. To observe entanglement effects one can use multi-mode parametric down-conversion emissions. When the structure of the Hamiltonian governing the emissions has (infinitely) many equivalent Schmidt decompositions into modes (beams), one can have perfect EPR-like correlations of numbers of photons emitted into "conjugate modes" which can be monitored at spatially separated detection stations. 
We provide series of entanglement conditions for all prime numbers of modes, and show their violations by bright multi-mode squeezed vacuum states. One family of such conditions is given in terms of the usual intensity-related variables. Moreover, we show that an alternative series of conditions expressed in terms averages of observed rates, which is a generalization of the ones given in arXiv:1508.02368, is a much better entanglement indicator. Thus the rates seem to emerge as a powerful concept in quantum optics. Generalizations of the approach are expected." T. Olupitan, C. Lei and A. Vourdas, "An analytic function approach to weak mutually unbiased bases" (preprint 07/2016) "Quantum systems with variables in $\mathbb{Z}(d)$ are considered, and three different structures are studied. The first is weak mutually unbiased bases, for which the absolute value of the overlap of any two vectors in two different bases is $1/\sqrt{k}$ (where $k\vert d$) or $0$. The second is maximal lines through the origin in the $\mathbb{Z}(d)\times \mathbb{Z}(d)$ phase space. The third is an analytic representation in the complex plane based on Theta functions, and their zeros. It is shown that there is a correspondence (triality) that links strongly these three apparently different structures. For simplicity, the case where $d = p_1\times p_2$, where $p_1$, $p_2$ are odd prime numbers different from each other, is considered." M. Asoudeh and V. Karimipour, "Quantum secret sharing and random hopping: Using single states instead of entanglement" (preprint 06/2015) [abstract:] "Quantum protocols for secret sharing usually rely on multi-party entanglement which with present technology is very difficult to achieve. Recently it has been shown that sequential manipulation and communication of a single $d$-level state can do the same task of secret sharing between $N$ parties, hence alleviating the need for entanglement. However the suggested protocol which is based on using mutually unbiased bases, works only when $d$ is a prime number. We propose a new sequential protocol which is valid for any $d$." R.V. Ramos, "Quantum physics, algorithmic information theory and the Riemanns hypothesis" (preprint 12/2017) [abstract:] "In the present work the Riemann hypothesis (RH) is discussed from four different perspectives. In the first case, coherent states and the Stengers approximation to Riemann-zeta function are used to show that RH avoids an indeterminacy of the type 0/0 in the inner product of two coherent states. In the second case, the Hilbert-Pólya conjecture with a quantum circuit is considered. In the third case, randomness, entanglement and the Moebius function are used to discuss the RH. At last, in the fourth case, the RH is discussed by inverting the first derivative of the Chebyshev function. The results obtained reinforce the belief that the RH is true." F.V. Mendes and R.V. Ramos, "Quantum sequence states" (preprint 08/2014) [abstract:] "In a recent paper it has been shown how to create a quantum state related to the prime number sequence using Grover's algorithm. Moreover, its multiqubit entanglement was analyzed. In the present work, we compare the multiqubit entanglement of several quantum sequence states as well we study the feasibility of producing such states using Grover's algorithm." R.V. Ramos and F.V. Mendes, "Riemannian quantum circuit" (preprint 05/2013) [abstract:] "Number theory is an abstract mathematical field that has found a fertile environment for development in theoretical physics. 
In particular, several physical systems were related to the zeros of the Riemann-zeta function. In this work we present the theory of a quantum circuit related to a finite number of zeros of the Riemann-zeta function. The existence of such circuit will permit in the future the solution of some number theory problems through the realization of quantum algorithms based on those zeros. " J.A. Smolin and G. Smith and A. Vargo, "Pretending to factor large numbers on a quantum computer" (preprint 01/2013) [abstract:] "Shor's algorithm for factoring in polynomial time on a quantum computer gives an enormous advantage over all known classical factoring algorithm. We demonstrate how to factor products of large prime numbers using a compiled version of Shor's quantum factoring algorithm. Our technique can factor all products of $p,q$ such that $p,q$ are unequal primes greater than two, runs in constant time, and requires only two coherent qubits. This illustrates that the correct measure of difficulty when implementing Shor's algorithm is not the size of number factored, but the length of the period found." J.S. Kim, E. Bae and S. Lee, "Quantum computational algorithm for hidden symmetry subgroup problems on semi-direct product of cyclic groups" (preprint 07/2013) [abstract:] "We characterize the algebraic structure of semi-direct product of cyclic groups, $\Z_{N}\rtimes\Z_{p}$, where $p$ is an odd prime number which does not divide $q-1$ for any prime factor $q$ of $N$, and provide a polynomial-time quantum computational algorithm solving hidden symmetry subgroup problem of the groups." M. Marvian and V. Karimipour, "Secure quantum carriers for distributing classical secrets and quantum states for a general threshold scheme" (preprint 07/2010) [abstract:] "We provide a secure quantum carrier for distributing a secret (classical symbol encoded into a state or a quantum state) among $n$ parties according to a $(k,n)$ threshold scheme, where $2k-1$ is a prime number. The quantum carrier \cite{bk} is an entangled state which is shared between all the participants, and is not measured at any stage. Quantum states are uploaded to the carrier and downloaded from it by the receivers. The quantum carrier is secure against eavesdropping by local Hadamard actions of the participants which leave it invariant. Contrary to measurement-based secret sharing schemes, our protocol can be used for sharing predetermined strings of symbols and quantum states." H. Bombin and M.A. Martin-Delgado, "Entanglement distillation protocols and number theory" (preprint 03/05) [abstract:] "We show that the analysis of entanglement distillation protocols for qudits of arbitrary dimension $D$ benefits from applying basic concepts from number theory, since the set $\zdn$ associated to Bell diagonal states is a module rather than a vector space. We find that a partition of $\zdn$ into divisor classes characterizes the invariant properties of mixed Bell diagonal states under local permutations. We construct a very general class of recursion protocols by means of unitary operations implementing these local permutations. We study these distillation protocols depending on whether we use twirling operations in the intermediate steps or not, and we study them both analytically and numerically with Monte Carlo methods. In the absence of twirling operations, we construct extensions of the quantum privacy algorithms valid for secure communications with qudits of any dimension $D$. 
When $D$ is a prime number, we show that distillation protocols are optimal both qualitatively and quantitatively." Y. Li and M. Ying, "Debugging quantum processes using monitoring measurements" (preprint 03/2014) [abstract:] "Since observation on a quantum system may cause the system state collapse, it is usually hard to find a way to monitor a quantum process, which is a quantum system that continuously evolves. We propose a protocol that can debug a quantum process by monitoring, but not disturb the evolution of the system. This protocol consists of an error detector and a debugging strategy. The detector is a projection operator that is orthogonal to the anticipated system state at a sequence of time points, and the strategy is used to specify these time points. As an example, we show how to debug the computational process of quantum search using this protocol. By applying the Skolem–Mahler–Lech theorem in algebraic number theory, we find an algorithm to construct all of the debugging protocols for quantum processes of time independent Hamiltonians." A. Klappenecker, M. Roetteler, I. Shparlinski and A. Winterhof, "On approximately symmetric informationally complete positive operator-valued measures and related systems of quantum states" (preprint 03/05) [abstract:] "We address the problem of constructing positive operator-valued measures (POVMs) in finite dimension n consisting of n2 operators of rank one which have an inner product close to uniform. This is motivated by the related question of constructing symmetric informationally complete POVMs (SIC-POVMs) for which the inner products are perfectly uniform. However, SIC-POVMs are notoriously hard to construct and despite some success of constructing them numerically, there is no analytic construction known. We present two constructions of approximate versions of SIC-POVMs, where a small deviation from uniformity of the inner products is allowed. The first construction is based on selecting vectors from a maximal collection of mutually unbiased bases and works whenever the dimension of the system is a prime power. The second construction is based on perturbing the matrix elements of a subset of mutually unbiased bases. Moreover, we construct vector systems in $\C^n$ which are almost orthogonal and which might turn out to be useful for quantum computation. Our constructions are based on results of analytic number theory." A.M. Childs, D. Jao, V. Soukharev, "Constructing elliptic curve isogenies in quantum subexponential time" (preprint 12/2010) [abstract:] "Given two elliptic curves over a finite field having the same cardinality and endomorphism ring, it is known that the curves admit an isogeny between them, but finding such an isogeny is believed to be computationally difficult. The fastest known classical algorithm takes exponential time, and prior to our work no faster quantum algorithm was known. Recently, public-key cryptosystems based on the presumed hardness of this problem have been proposed as candidates for post-quantum cryptography. In this paper, we give a subexponential-time quantum algorithm for constructing isogenies, assuming the Generalized Riemann Hypothesis (but with no other assumptions). This result suggests that isogeny-based cryptosystems may be uncompetitive with more mainstream quantum-resistant cryptosystems such as lattice-based cryptosystems. 
As part of our algorithm, we also obtain a second result of independent interest: we provide a new subexponential-time classical algorithm for evaluating a horizontal isogeny given its kernel ideal, assuming (only) GRH, eliminating the heuristic assumptions required by prior algorithms." Y. Chen, A. Prakash and T.-C. Wei, "Universal quantum computing using $(\mathbb{Z}_d)^3$ symmetry-protected topologically ordered states" (preprint 11/2017) [abstract:] "Measurement-based quantum computation describes a scheme where entanglement of resource states is utilized to simulate arbitrary quantum gates via local measurements. Recent works suggest that symmetry-protected topologically non-trivial, short-ranged entangled states are promising candidates for such a resource. Miller and Miyake [NPJ Quantum Information 2, 16036 (2016)] recently constructed a particular $\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2$ symmetry-protected topological state on the Union-Jack lattice and established its quantum computational universality. However, they suggested that the same construction on the triangular lattice might not lead to a universal resource. Instead of qubits, we generalize the construction to qudits and show that the resulting $(d-1)$ qudit nontrivial $\mathbb{Z}_d\times \mathbb{Z}_d\times \mathbb{Z}_d$ symmetry-protected topological states are universal on the triangular lattice, for $d$ being a prime number greater than $2$. The same construction also holds for other $3$-colorable lattices, including the Union-Jack lattice." C. Archer, "There is no generalization of known formulas for mutually unbiased bases" (preprint 12/03) [abstract:] "In a quantum system having a finite number $N$ of orthogonal states, two orthonormal bases $\{a_i\}$ and $\{b_j\}$ are called mutually unbiased if all inner products $\langle a_i|b_j\rangle$ have the same modulus $N^{-1/2}$. This concept appears in several quantum information problems. The number of pairwise mutually unbiased bases is at most $N+1$ and various constructions of $N+1$ such bases have been found when $N$ is a power of a prime number. We study families of formulas that generalize these constructions to arbitrary dimensions using finite rings. We then prove that there exists a set of $N+1$ mutually unbiased bases described by such formulas, if and only if $N$ is a power of a prime number." A. Fernández-Pérez, A.B. Klimov and C. Saavedra, "Quantum process reconstruction based on mutually unbiased basis" (preprint 04/2011) [abstract:] "We study a quantum process reconstruction based on the use of mutually unbiased projectors (MUB-projectors) as input states for a $D$-dimensional quantum system, with $D$ being a power of a prime number. This approach connects the results of quantum-state tomography using mutually unbiased bases (MUB) with the coefficients of a quantum process, expanded in terms of MUB-projectors. We also study the performance of the reconstruction scheme against random errors when measuring probabilities at the MUB-projectors." X. F. Liu and C. P. Sun, "On the relative quantum entanglement with respect to tensor product structure" (preprint, 10/04) [abstract:] "Mathematical foundation of the novel concept of quantum tensor product by Zanardi et al. is rigorously established. The concept of relative quantum entanglement is naturally introduced and its meaning is made clear both mathematically and physically.
For a finite or an infinite dimensional vector space $W$ the so called tensor product partition (TPP) is introduced on $End(W)$, the set of endmorphisms of $W$, and a natural correspondence is constructed between the set of TPP's of $End(W)$ and the set of tensor product structures (TPS's) of $W$. As a byproduct, it is shown that an arbitrarily given wave function belonging to an n-dimensional Hilbert space, n being not a prime number, can be interpreted as a separable state with respect to some man-made TPS, and thus a quantum entangled state of a many-body system with respect to the "God-given" TPS can be regarded as a quantum state without entanglement in some sense. The concept of standard set of observables is also introduced to probe the underlying structure of the object TPP and to establish its connection with practical physical measurement." M. Revzen and F.C. Khanna, "von Neumann lattices in finite dimensions Hilbert spaces" (preprint 05/2008) [abstract:] "The prime number decomposition of a finite dimensional Hilbert space reflects itself in the representations that the space accommodates. The representations appear in conjugate pairs for factorization to two relative prime factors which can be viewed as two distinct degrees freedom. These, Schwinger's quantum degrees of freedom, are uniquely related to a von Neumann lattices in the phase space that characterizes the Hilbert space and specifies the simultaneous definitions of both (modular) positions and (modular) momenta. The area in phase space for each quantum state in each of these quantum degrees of freedom, is shown to be exactly $h$, Planck's constant." I. Bengtsson, "How much complementarity?" (preprint 02/2012) [abstract:] "Bohr placed complementary bases at the mathematical centre point of his view of quantum mechanics. On the technical side then my question translates into that of classifying complex Hadamard matrices. Recent work (with Barros e Sa) shows that the answer depends heavily on the prime number decomposition of the Hilbert space. By implication so does the geometry of quantum state space." P. Amore, "A method for classical and quantum mechanics" (preprint 11/04) [abstract:] "In many physical problems it is not possible to find an exact solution. However, when some parameter in the problem is small, one can obtain an approximate solution by expanding in this parameter. This is the basis of perturbative methods, which have been applied and developed practically in all areas of Physics. Unfortunately many interesting problems in Physics are of non-perturbative nature and it is not possible to gain insight on these problems only on the basis of perturbation theory: as a matter of fact it often happens that the perturbative series are not even convergent. In this paper we will describe a method which allows to obtain arbitrarily precise analytical approximations for the period of a classical oscillator. The same method is then also applied to obtain an analytical approximation to the spectrum of a quantum anharmonic potential by using it with the WKB method. In all these cases we observe exponential rates of convergence to the exact solutions. An application of the method to obtain a fastly convergent series for the Riemann zeta function is also discussed." G. Gutin, N.S. Jones, A. Rafiey, S. Severini and A. 
Yeo, "Mediated digraphs and quantum nonlocality" (preprint 11/04) [abstract:] "A digraph D=(V,A) is mediated if, for each pair x,y of distinct vertices of D, either xy belongs to A or yx belongs to A or there is a vertex z such that both xz,yz belong to A. For a digraph D, DELTA(D) is the maximum in-degree of a vertex in D. The "nth mediation number" mu(n) is the minimum of DELTA(D) over all mediated digraphs on n vertices. Mediated digraphs and mu(n) are of interest in the study of quantum nonlocality. We obtain a lower bound f(n) for mu(n) and determine infinite sequences of values of n for which mu(n)=f(n) and mu(n)>f(n), respectively. We derive upper bounds for mu(n) and prove that mu(n)=f(n)(1+o(1)). We conjecture that there is a constant c such that mu(n)<f(n)+c. Methods and results of graph theory, design theory and number theory are used." S. Egger né Endres and F. Steiner, "An exact trace formula and zeta functions for an infinite quantum graph with a non-standard Weyl asymptotics" (preprint 04/2011) [abstract:] "We study a quantum Hamiltonian that is given by the (negative) Laplacian and an infinite chain of $\delta$-like potentials with strength $\kappa>0$ on the half line $\rz_{\geq0}$ and which is equivalent to a one-parameter family of Laplacians on an infinite metric graph. This graph consists of an infinite chain of edges with the metric structure defined by assigning an interval $I_n=[0,l_n]$, $n\in\nz$, to each edge with length $l_n=\frac{\pi}{n}$. We show that the one-parameter family of quantum graphs possesses a purely discrete and strictly positive spectrum for each $\kappa>0$ and prove that the Dirichlet Laplacian is the limit of the one-parameter family in the strong resolvent sense. The spectrum of the resulting Dirichlet quantum graph is also purely discrete. The eigenvalues are given by $\lambda_n=n^2$, $n\in\nz$, with multiplicities $d(n)$, where $d(n)$ denotes the divisor function. We thus can relate the spectral problem of this infinite quantum graph to Dirichlet's famous divisor problem and infer the non-standard Weyl asymptotics $\mathcal{N}(\lambda)=\frac{\sqrt{\lambda}}{2}\ln\lambda +\Or(\sqrt{\lambda})$ for the eigenvalue counting function. Based on an exact trace formula, the Vorono\"i summation formula, we derive explicit formulae for the trace of the wave group, the heat kernel, the resolvent and for various spectral zeta functions. These results enable us to establish a well-defined (renormalized) secular equation and a Selberg-like zeta function defined in terms of the classical periodic orbits of the graph, for which we derive an exact functional equation and prove that the analogue of the Riemann hypothesis is true." A. Granville and K. Soundararajan, "An uncertainty principle for arithmetic sequences", Annals of Mathematics, 165 (2007) 593–635 [abstract:] "Analytic number theorists usually seek to show that sequences which appear naturally in arithmetic are ''well-distributed'' in some appropriate sense. In various discrepancy problems, combinatorics researchers have analyzed limitations to equidistribution, as have Fourier analysts when working with the ''uncertainty principle''. In this article we find that these ideas have a natural setting in the analysis of distributions of sequences in analytic number theory, formulating a general principle, and giving several examples." There is also another way that a "multiplicative version" of the uncertainty principle is connected with prime numbers, as observed by P. Pollack here. R.V. 
Ramos, "Riemann Hypothesis as an uncertainty relation" (preprint 04/2013) [abstract:] "Physics is a fertile environment for trying to solve some number theory problems. In particular, several tentative of linking the zeros of the Riemann-zeta function with physical phenomena were reported. In this work, the Riemann operator is introduced and used to transform the Riemann's hypothesis in a Heisenberg-type uncertainty relation, offering a new way for studying the zeros of Riemann's function." W.G. Ritter, "On the number of representations providing noiseless subsystems" (accepted for publication in Physical Review A) [abstract:] "This paper studies the combinatoric structure of the set of all representations, up to equivalence, of a finite-dimensional semisimple Lie algebra. This has intrinsic interest as a previously unsolved problem in representation theory, and also has applications to the understanding of quantum decoherence. We prove that for Hilbert spaces of sufficiently high dimension, decoherence-free subspaces exist for almost all representations of the error algebra. For decoherence-free subsystems, we plot the function fd(n) which is the fraction of all d-dimensional quantum systems which preserve n bits of information through DF subsystems, and note that this function fits an inverse beta distribution. The mathematical tools which arise include techniques from classical number theory." P. Benioff, "Space of quantum theory representations of natural numbers, integers, and rational numbers" (preprint 04/2007) [abstract:] "This paper extends earlier work on quantum theory representations of natural numbers N, integers I, and rational numbers Ra to describe a space of these representations and transformations on the space. The space is parameterized by 4-tuple points in a parameter set. Each point, (k,m,h,g), labels a specific representation of X = N, I, Ra as a Fock space F^{X}_{k,m,h} of states of finite length strings of qukits q and a string state basis B^{X}_{k,m,h,g}. The pair (m,h) locates the q string in a square integer lattice I \times I, k is the q base, and the function g fixes the gauge or basis states for each q. Maps on the parameter set induce transformations on on the representation space. There are two shifts, a base change operator W_{k',k}, and a basis or gauge transformation function U_{k}. The invariance of the axioms and theorems for N, I, and Ra under any transformation is discussed along with the dependence of the properties of W_{k',k} on the prime factors of k' and k. This suggests that one consider prime number q's, q_{2}, q_{3}, q_{5}, etc. as elementary and the base k q's as composites of the prime number q's." M. Planat, F. Anselmi and P. Solé, "Pauli graphs, Riemann hypothesis, Goldbach pairs" (preprint 03/2011) [abstract:] "Let consider the Pauli group $\mathcal{P}_q=$ with unitary quantum generators $X$ (shift) and $Z$ (clock) acting on the vectors of the $q$-dimensional Hilbert space via $X|s> =|s+1>$ and $Z|s> =\omega^s |s>$, with $\omega=\exp(2i\pi/q)$. It has been found that the number of maximal mutually commuting sets within $\mathcal{P}_q$ is controlled by the Dedekind psi function $\psi(q)=q \prod_{p|q}(1+\frac{1}{p})$ (with $p$ a prime) \cite{Planat2011} and that there exists a specific inequality $\frac{\psi (q)}{q}>e^{\gamma}\log \log q$, involving the Euler constant $\gamma \sim 0.577$, that is only satisfied at specific low dimensions $q \in \mathcal {A}=\{2,3,4,5,6,8,10,12,18,30\}$. 
The set $\mathcal{A}$ is closely related to the set $\mathcal{A} \cup \{1,24\}$ of integers that are totally Goldbach, i.e. that consist of all primes $p<$ [...] $>2$) is equivalent to Riemann hypothesis. Introducing the Hardy-Littlewood function $R(q)=2 C_2 \prod_{p|n}\frac{p-1}{p-2}$ (with $C_2 \sim 0.660$ the twin prime constant), that is used for estimating the number $g(q) \sim R(q) \frac{q}{\ln^2 q}$ of Goldbach pairs, one shows that the new inequality $\frac{R(N_r)}{\log \log N_r} \gtrapprox e^{\gamma}$ is also equivalent to Riemann hypothesis. In this paper, these number theoretical properties are discussed in the context of the qudit commutation structure." [A short numerical check of the $\psi(q)/q$ inequality quoted above is sketched at the end of this listing.] A.O. Pittenger and M.H. Rubin, "Wigner functions and separability for finite systems" (preprint 01/05) [abstract:] "A discussion of discrete Wigner functions in phase space related to mutually unbiased bases is presented. This approach requires mathematical assumptions which limit it to systems with density matrices defined on complex Hilbert spaces of dimension $p^n$ where $p$ is a prime number. With this limitation it is possible to define a phase space and Wigner functions in close analogy to the continuous case. That is, we use a phase space that is a direct sum of n two-dimensional vector spaces each containing $p^2$ points. This is in contrast to the more usual choice of a two-dimensional phase space containing $p^{2n}$ points. A useful aspect of this approach is that we can relate complete separability of density matrices and their Wigner functions in a natural way. We discuss this in detail for bipartite systems and present the generalization to arbitrary numbers of subsystems when p is odd. Special attention is required for two qubits (p = 2) and our technique fails to establish the separability property for more than two qubits." R.W. Johnson, "Quantum mechanics associated with a finite group", submitted to Intern. J. Theor. Phys. [abstract:] "I describe, in the simplified context of finite groups and their representations, a mathematical model for a physical system that contains both its quantum and classical aspects. The physically observable system is associated with the space containing elements f x f for f an element in the regular representation of a given finite group G. The Hermitian portion of f x f is the Wigner distribution of f whose convolution with a test function leads to a mathematical description of the quantum measurement process. Starting with the Jacobi group that is formed from the semidirect product of the Heisenberg group with its automorphism group SL(2,F_N) for N an odd prime number I show that the classical phase space is the first order term in a series of subspaces of the Hermitian portion of f x f that are stable under SL(2,F_N). I define a derivative that is analogous to a pseudodifferential operator to enable a treatment that parallels the continuum case. I give a new derivation of the Schrödinger-Weil representation of the Jacobi group." M. Marcolli and A. Connes, "Q-lattices: quantum statistical mechanics and Galois theory", Journal of Geometry and Physics 56 no. 1 (2006) 2–23 G. Cornelissen and M. Marcolli, "Quantum Statistical Mechanics, $L$-series and Anabelian Geometry" (preprint 09/2010) [abstract:] "It is known that two number fields with the same Dedekind zeta function are not necessarily isomorphic.
The zeta function of a number field can be interpreted as the partition function of an associated quantum statistical mechanical system, which is a C*-algebra with a one parameter group of automorphisms, built from Artin reciprocity. In the first part of this paper, we prove that isomorphism of number fields is the same as isomorphism of these associated systems. Considering the systems as noncommutative analogues of topological spaces, this result can be seen as another version of Grothendieck's "anabelian" program, much like the Neukirch--Uchida theorem characterizes isomorphism of number fields by topological isomorphism of their associated absolute Galois groups. In the second part of the paper, we use these systems to prove the following. If there is an isomorphism of character groups (viz., Pontrjagin duals) of the abelianized Galois groups of the two number fields that induces an equality of all corresponding $L$-series (not just the zeta function), then the number fields are isomorphic. This is also equivalent to the purely algebraic statement that there exists a topological group isomorphism as above and a norm-preserving group isomorphism between the ideals of the fields that is compatible with the Artin maps via the other map." G. Cornelissen, "Number theory and physics, an eternal rusty braid", Eindhoven Mathematics Colloquium, 9th November 2011 [abstract:] "I will describe joint work with Matilde Marcolli in which we apply ideas from quantum statistical mechanics and dynamical systems to solve the number theoretical analogue of the problem how to hear the shape of a drum". G. Mussardo, "The quantum mechanical potential for the prime numbers", preprint ISAS/EP/97/153; see also R. Matthews, New Scientist, January 10th, 1998, p.18. "A simple criterion is derived in order that a number sequence S_n is a permitted spectrum of a quantised system. The sequence of prime numbers fulfils the criterion. The existence of such a potential implies that primality testing can in principle be resolved by the sole use of physical laws". P.W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer", SIAM J. Computing 26 (1997) 1484-1509. P.W. Shor, "Quantum computing", Documenta Mathematica Extra Volume ICM I (1998) 467-486 "...A quantum computer is a hypothetical machine based on quantum mechanics. We explain quantum computing, and give an algorithm for prime factorization on a quantum computer that runs asymptotically much faster than the best known algorithm on a digital computer... ...In 1994, I showed that a quantum computer could factor large numbers in time polynomial in the length of the numbers, a nearly exponential speed-up over classical algorithms...the connection of quantum mechanics with number theory was itself surprising..." using quantum computation to factorise integers [the classical order-finding reduction behind Shor's algorithm is sketched at the end of this listing] D.N. Goncalves and R. Portugal, "Solution to the Hidden Subgroup Problem for a Class of Noncommutative Groups" (preprint 04/2011) [abstract:] "The hidden subgroup problem (HSP) plays an important role in quantum computation, because many quantum algorithms that are exponentially faster than classical algorithms can be cast in the HSP structure. In this paper, we present a new polynomial-time quantum algorithm that solves the HSP over the group $\Z_{p^r} \rtimes \Z_{q^s}$, when $p^r/q= \mathrm{poly}(\log p^r)$, where $p$, $q$ are any odd prime numbers and $r, s$ are any positive integers.
To find the hidden subgroup, our algorithm uses the abelian quantum Fourier transform and a reduction procedure that simplifies the problem to finding cyclic subgroups." Hong Wang, Zhi Ma, "Quantum algorithms for unit group and principal ideal problem" (preprint 04/2010) [abstract:] "Computing the unit group and solving the principal ideal problem for a number field are two of the main tasks in computational algebraic number theory. This paper proposes efficient quantum algorithms for these two problems when the number field has constant degree. We improve the algorithms proposed by Hallgren by using a period function which is not one-to-one on its fundamental period. Furthermore, given access to a function which encodes the lattice, a new method to compute the basis of an unknown real-valued lattice is presented." B. Grohmann, "On the existence of certain quantum algorithms" (preprint 04/2009) [abstract:] "We investigate the question if quantum algorithms exist that compute the maximum of a set of conjugated elements of a given number field in quantum polynomial time. We will relate the existence of these algorithms for a certain family of number fields to an open conjecture from elementary number theory." M. Revzen, F.C. Khanna, A. Mann, J. Zak, "Factorizations and physical representations" (preprint 08/05) [abstract:] "Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the prime numbers decomposition of M. Representations that exhibit the factorization of M into two relatively prime numbers: the kq representation (J. Zak, Phys. Today, 23 (2), 51 (1970)), and related representations termed q_1q_2 representations (together with their conjugates) are analysed, as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M." M. Revzen, A. Mann and J. Zak, "Physics of factorization" (preprint 03/05) [abstract:] "The N distinct prime numbers that make up a composite number M allow $2^{N-1}$ bipartitionings into two relatively prime factors. Each such pair defines a pair of conjugate representations. These pairs of conjugate representations, each of which spans the M-dimensional space, are the familiar complete sets of Zak transforms (J. Zak, Phys. Rev. Lett. 19, 1385 (1967)) which are the most natural representations for periodic systems. Here we show their relevance to factorizations. An example is provided for the manifestation of the factorization." [A tiny enumeration of these coprime bipartitions is sketched at the end of this listing.] J. Maurice Rojas, "A number theoretic interpolation between quantum and classical complexity classes" (preprint 04/2006) [abstract:] "We reveal a natural algebraic problem whose complexity appears to interpolate between the well-known complexity classes BQP and NP: (*) Decide whether a univariate polynomial with exactly $m$ monomial terms has a $p$-adic rational root. In particular, we show that while (*) is doable in quantum randomized polynomial time when $m=2$ (and no classical randomized polynomial time algorithm is known), (*) is nearly NP-hard for general m: Under a plausible hypothesis involving primes in arithmetic progression (implied by the Generalized Riemann Hypothesis for certain cyclotomic fields), a randomized polynomial time algorithm for (*) would imply the widely disbelieved inclusion $NP \subseteq BPP$. This type of quantum/classical interpolation phenomenon appears to be new." C.M.M. Cosme and R.
Portugal, "Quantum algorithm for the hidden subgroup problem on a class of semidirect product groups" (preprint 03/2007) [abstract:] "We present an efficient quantum algorithm for the hidden subgroup problem (HSP) on the semidirect product of cyclic groups Z_p^r and Z_p^2, where p is any odd prime number and $r$ is any integer such that r>4. This quantum algorithm is exponentially faster than any classical algorithm for the same purpose." I. Cherednik, "On q-analogues of Riemann's zeta" [abstract:] "In the paper, we introduce q-deformations of the Riemann zeta function, extend them to the whole complex plane, and establish certain estimates of the number of roots. The construction is based on the recent difference generalization of the Harish-Chandra theory of zonal spherical functions. We also discuss numerical results, which indicate that the location of the zeros of the q-zeta functions is far from random." M.N. Tran, M.V.N. Murthy, R.K. Bhaduri, "On the quantum density of states and partitioning an integer" [abstract:] "This paper exploits the connection between the quantum many-particle density of states and the partitioning of an integer in number theory. For N bosons in a one dimensional harmonic oscillator potential, it is well known that the asymptotic (N -> infinity) density of states is identical to the Hardy-Ramanujan formula for the partitions p(n), of a number n into a sum of integers. We show that the same statistical mechanics technique for the density of states of bosons in a power-law spectrum yields the partitioning formula for ps(n), the latter being the number of partitions of n into a sum of s-th powers of a set of integers. By making an appropriate modification of the statistical technique, we are also able to obtain ds(n) for distinct partitions. We find that the distinct square partitions d2(n) show pronounced oscillations as a function of n about the smooth curve derived by us. The origin of these oscillations from the quantum point of view is discussed. After deriving the Erdös-Lehner formula for restricted partitions for the s = 1 case by our method, we generalize it to obtain a new formula for distinct restricted partitions." H.C. Rosu, J.M. Moran-Mirabal, M. Planat, "Milne phase for the Coulomb quantum problem related to Riemann's hypothesis" (Group 24: Physical and Mathematical Aspects of Symmetries, Eds. J.-P. Gazeau et. al., IOP Conf. Series No. 173 (2003) 695-697) [abstract:] "We use the Milne phase function in the continuum part of the spectrum of the particular Coulomb problem that has been employed by Bhaduri, Khare, and Law as an equivalent physical way for calculating the density of zeros of the Riemann's function on the critical line. The Milne function seems to be a promising approximate method to calculate the density of prime numbers." M. Planat, H.C. Rosu, "Cyclotomy and Ramanujan sums in quantum phase locking", Phys. Lett. A 315 (2003) 1-5 [abstract:] "Phase locking governs the phase noise in classical clocks through effects described in precise mathematical terms. We seek here a quantum counterpart of these effects by working in a finite Hilbert space. We use a coprimality condition to define phase-locked quantum states and the corresponding Pegg-Barnett type phase operator. Cyclotomic symmetries in matrix elements are revealed and related to Ramanujan sums in the theory of prime numbers. 
The phase-number commutator vanishes as in the classical case, but a new type of quantum phase noise emerges in expectation values of phase and phase variance. The employed mathematical procedures also emphasize the isomorphism between algebraic number theory and the theory of quantum entanglement." M. Planat, "Huyghens, Bohr, Riemann and Galois: Phase-Locking" (written in relation to the ICSSUR '05 conference held in Besancon, France - to be published at a special issue of IJMPB) [abstract:] "Several mathematical views of phase-locking are developed. The classical Huyghens approach is generalized to include all harmonic and subharmonic resonances and is found to be connected to 1/f noise and prime number theory. Two types of quantum phase-locking operators are defined, one acting on the rational numbers, the other on the elements of a Galois field. In both cases we analyse in detail the phase properties and find them related respectively to the Riemann zeta function and to incomplete Gauss sums." D. Ellinas and E.G. Floratos, "Prime decomposition and correlation measure of finite quantum systems" E.G. Floratas, et. al. - work on "finite quantum mechanics" (explanation and bibliography) Sze Kui Ng, "A computation of the mass spectrum of mesons and baryons" [abstract:] "In this paper we give a computation of the mass spectrum of mesons and baryons. By this computation we show that there is a consecutive numbering of the mass spectrum of mesons and baryons. We show that in this numbering many stable mesons and baryons are assigned with a prime number." Sze Kui Ng, "On a classification of mesons" [abstract:]" We give a mass formula for computing the mass spectrum of mesons. By this formula we show that there are many mesons with their masses corresponding to a prime number. In particular we show that all strange mesons are with their masses corresponding to a prime number. With these prime numbers indexing the mesons we give a classification of mesons. We set up a knot model of mesons to derive this mass formula. In this knot model mesons and their anti-particles are modeled by knots and their mirror images respectively. Then the amphichiral knots which are equivalent to their mirror images are used to model mesons which are identical with their anti-particles. With this knot model we show that there is a periodic phenomenon in the classification of mesons such that the starting nonet and the ending nonet are nonets of pseudoscalar mesons with the pi meson modelled by an amphichiral knot. From this periodic phenomenon we give a theoretical argument for the existence of charm-anticharm mesons." Sze Kui Ng, "Knot model of pseudoscalar and vector mesons" (preprint 03/2007) [author's description:] "In this paper I give a quantum knot model of mesons where prime knots are assigned with prime numbers. These prime numbers are from the masses of the mesons. From this quantum knot model we then have a closed relation between prime numbers, prime knots and mesons. For example the pi meson is modeled by the prime knot 4_1 which is assigned with the prime number 3." S. Matsutani, Y. Ônishi, "Wave-particle complementarity and reciprocity of Gauss sums on Talbot effects" [Abstract:] "Berry and Klein (J. Mod. Opt. (1997) 43 2139-2164) showed that the Talbot effects in classical optics are naturally explained by Gauss sums studied in number theory. Their result was based on Helmholtz equation. In this article, we explain the effect based on Fresnel integral also by Gauss sums. 
These two explanations are shown to agree with by the reciprocity law of Gauss sums. The relation between this agreement and the wave-particle complementarity is also discussed." S. Matsutani, "Gauss optics and Gauss sum on an optical phenomena" (preprint 03/2008) [abstract:] "In the previous article (Found Phys. Lett. 16 325-341), we showed that Gauss reciprocity is connected with the wave and particle complementary. In this article, we revise the previous investigation by considering a relation between the Gauss optics and the Gauss sum based upon the recent studies of the Weil representation for the finite group." S. Matsutani, "p-adic difference-difference Lotka-Volterra equation and ultra-discrete limit", Int. J. Math. and Math. Sci. 27 (2001) 251-260 S. Matsutani, "Lotka-Volterra equation over a finite ring $\mathbb{Z}/p^N \mathbb{Z}$", J. Phys. A 34 (2001) 10737-10744 [abstract:] "The discrete Lotka-Volterra equation over $p$-adic space was constructed since $p$-adic space is a prototype of spaces with non-Archimedean valuations and the space given by taking the ultra-discrete limit studied in soliton theory should be regarded as a space with the non-Archimedean valuations given in my previous paper (Matsutani, S 2001 Int. J. Math. Math. Sci.). In this paper, using the natural projection from a $p$-adic integer to a ring $\mathbb{Z}/p^N \mathbb{Z}$, a soliton equation is defined over the ring. Numerical computations show that it behaves regularly." R. De Luca, G.Gargiulo and F. Romeo, "Number theory implications on physical properties of elementary cubic networks of Josephson junctions", Phys. Rev. B 68 (2003) 092511 [abstract:] "Number theory concepts are used to investigate the periodicity properties of the voltage vs applied flux curves of elementary cubic networks of Josephson junctions. It is found that equatorial gaps appearing on the unitary sphere, on which points representing the directions in space for which these curves show periodicity are collected, can be understood by means of Gauss condition on the sum of the squares of three integers." V. Varadarajan, "Some remarks on arithmetic physics" (Abstract) "There have been some recent speculations on connections between quantum theory and modern number theory. At the boldest level these suggest that there are two ways of viewing the quantum world, the usual and the arithmetic, which are in some sense complementary. At a more conservative level they suggest that there is much mathematical interest in examining structures which are important in quantum theory and analyze to what extent they make sense when the real and complex fields are replaced by the more unconventional fields and rings, like finite or nonarchimedean fields and adele rings, that arise in number theory. This paper explores some aspects of these questions." V.S.Varadarajan, "Arithmetic quantum physics: why, what and whither", Proc. Steklov Inst. Math. 245 (2004) 258-265. Z. Rudnick, "Value distribution for eigenfunctions of desymmetrized quantum maps" "We study the value distribution and extreme values of eigenfunctions for the "quantized cat map". This is the quantization of a hyperbolic linear map of the torus. In a previous paper it was observed that there are quantum symmetries of the quantum map - a commutative group of unitary operators which commute with the map, which we called "Hecke operators". 
The eigenspaces of the quantum map thus admit an orthonormal basis consisting of eigenfunctions of all the Hecke operators, which we call "Hecke eigenfunctions". In this note we investigate suprema and value distribution of the Hecke eigenfunctions. For prime values of the inverse Planck constant N for which the map is diagonalizable modulo N (the "split primes" for the map), we show that the Hecke eigenfunctions are uniformly bounded and their absolute values (amplitudes) are either constant or have a semi-circle value distribution as N tends to infinity. Moreover in the latter case different eigenfunctions become statistically independent. We obtain these results via the Riemann hypothesis for curves over a finite field (Weil's theorem) and recent results of N. Katz on exponential sums. For general N we obtain a nontrivial bound on the supremum norm of these Hecke eigenfunctions." M.C. Gutzwiller, "Stochastic behavior in quantum scattering", Physica D: Nonlinear Phenomena 7 (1983) 341-355 [abstract:] "A 2-dimensional smooth orientable, but not compact space of constant negative curvature with the topology of a torus is investigated. It contains an open end, i.e. an exceptional point at infinite distance, through which a particle or a wave can enter or leave, as in the exponential horn of certain antennas or loud-speakers. In the Poincaré model of hyperbolic geometry, the solutions of Schrödinger's equation for the reflection of a particle which enters through the horn are easily constructed. The scattering phase shift as a function of the momentum is essentially given by the phase angle of Riemann's zeta function on the imaginary axis, at a distance of 1/2 from the famous critical line. This phase shift shows all the features of chaos, namely the ability to mimic any given smooth function, and great difficulty in its effective numerical computation. A plot shows the close connection with the zeros of Riemann's zeta function for low values of the momentum (quantum regime) which gets lost only at exceedingly large momenta (classical regime?). Some generalizations of this approach to chaos are mentioned." K. Bitar, "A study of the Riemann zeta function" "Using moment integrals over the zeta function we were ... able to derive expressions for the distribution of the absolute value of the zeta function and its logarithm. These turn out to be expressible as inverse Mellin transforms over well known functions. Knowing these distributions allows the use of the zeta function in evaluating the path integrals for quantum mechanical systems. We have tested this on simple systems such as the anharmonic oscillator with good results. Furthermore, since the zeta function is an analytic function with known properties its use in these applications may lead to a definition of the path integral in the continuum." K. Bitar, "Path integrals and Voronin's theorem on the universality of the Riemann zeta function" J. Twamley and G.J. Milburn, "The quantum Mellin transform", New J. Phys. 8 (2006) 328 [abstract:] "We uncover a new type of unitary operation for quantum mechanics on the half-line which yields a transformation to "Hyperbolic phase space". We show that this new unitary change of basis from the position x on the half line to the Hyperbolic momentum $p_\eta$, transforms the wavefunction via a Mellin transform onto the critical line $s=1/2-ip_\eta$.
We utilise this new transform to find quantum wavefunctions whose Hyperbolic momentum representation approximate a class of higher transcendental functions, and in particular, approximate the Riemann Zeta function. We finally give possible physical realisations to perform an indirect measurement of the Hyperbolic momentum of a quantum system on the half-line." W. Merkel, H. Mack, W.P. Schleich, E. Lutz, G.G. Paulus, B. Girard, "Chirping a two-photon transition in a multi-state ladder" (preprint 02/2007) [abstract:] "We consider a two-photon transition in a specific ladder system driven by a chirped laser pulse. In the weak field limit, we find that the excited state probability amplitude arises due to interference of multiple quantum paths which are weighted by quadratic phase factors. The excited state population has the form of a Gauss sum which plays a prominent role in number theory." J.C. Phillips, "Microscopic origin of collective exponentially small resistance states" (preprint, 03/03) [abstract:] "The formation of "zero" (exponentially small) resistance states (ESRS) in high mobility two-dimensional electron systems (2DES) in a static magnetic field B and subjected to strong microwave (MW) radiation has attracted great theoretical interest. These states appear to be associated with a new kind of energy gap $\Delta$. Here I show that the energy gap $\Delta$ is explained by a microscopic quantum model that involves the Prime Number Theorem, hitherto reserved for only mathematical contexts. The model also contains the zeroes of the zeta function, and explains the physical origin of the Riemann hypothesis." D. Kouzoudis, "Heisenberg s = 1/2 ring consisting of a prime number of atoms", Journal of Magnetism and Magnetic Materials 173 (1997) 259-265 [abstract:] "In this work it will be shown that the dimensionality of the eigenvalue problem of a Heisenberg s = 1/2 ring with a prime number N of atoms can be reduced by a factor of N. This makes small systems such as N = 5 and 7 particularly easy to solve analytically for the case of nearest-neighbor interactions, without the use of Bethe's ansatz, as well as for the general case of couplings beyond nearest neighbors. Exact expressions are given for both the magnon dispersion relations and the eigenvectors." I. Antoniou and Z. Suchanecki, "Quantum systems with fractal spectra", Chaos, Solitons and Fractals 14 (2002) 799-807 [abstract:] "We study Hamiltonians with singular spectra of Cantor type with a constant ratio of dissection and show strict connections between the decay properties of the states in the singular subspace and the algebraic number theory. More specifically, we study the decay properties of free n-particle systems and the computability of decaying and non-decaying states in the singular continuous subspace." A. Napoli and A. Messina, "An application of the arithmetic Euler function to the construction of nonclassical states of a quantum harmonic oscillator", Reports on Mathematical Physics 48 (2001) 159-166 [abstract:] "All quantum superpositions of two equal intensity coherent states exhibiting infinitely many zeros in their Fock distributions are explicitly constructed and studied. Our approach is based on results from number theory and, in particular, on the properties of arithmetic Euler function. The nonclassical nature of these states is briefly pointed out. Some interesting properties are brought to light." S.
Ouvry, "Random Aharonov-Bohm vortices and some funny families of integrals" (preprint 02/05) [abstract:] "A review of the random magnetic impurity model, introduced in the context of the integer Quantum Hall effect, is presented. It models an electron moving in a plane and coupled to random Aharonov-Bohm vortices carrying a fraction of the quantum of flux. Recent results on its perturbative expansion are given. In particular, some funny families of integrals show up to be related to the Riemann $\zeta(3)$ and $\zeta(2)$." B. Basu-Mallick, T. Bhattacharyya and D. Sen, "Novel multi-band quantum soliton states for a derivative nonlinear Schrödinger model" (preprint 07/03) [abstract:]"We show that localized N-body soliton states exist for a quantum integrable derivative nonlinear Schrödinger model for several non-overlapping ranges (called bands) of the coupling constant \eta. The number of such distinct bands is given by Euler's \phi-function which appears in the context of number theory. The ranges of \eta within each band can also be determined completely using concepts from number theory such as Farey sequences and continued fractions. We observe that N-body soliton states appearing within each band can have both positive and negative momentum. Moreover, for all bands lying in the region \eta > 0, soliton states with positive momentum have positive binding energy (called bound states), while the states with negative momentum have negative binding energy (anti-bound states)." B. Basu-Mallick, T. Bhattacharyya and D. Sen, "Multi-band structure of the quantum bound states for a generalized nonlinear Schrödinger model" (preprint 02/05) [abstract:] "By using the method of coordinate Bethe ansatz, we study N-body bound states of a generalized nonlinear Schrödinger model having two real coupling constants c and \eta. It is found that such bound states exist for all possible values of c and within several nonoverlapping ranges (called bands) of \eta. The ranges of \eta within each band can be determined completely using Farey sequences in number theory. We observe that N-body bound states appearing within each band can have both positive and negative values of the momentum and binding energy." A.Z. Li and W.G. Harter, "Quantum revivals of Morse oscillators and Farey–Ford geometry" (preprint 08/2013) [abstract:] "Analytical eigensolutions for Morse oscillators are used to investigate quantum resonance and revivals and show how Morse anharmonicity affects revival times. A minimum semi-classical Morse revival time T_min-rev found by Heller is related to a complete quantum revival time T_rev using a quantum deviation parameter that in turn relates Trev to the maximum quantum beat period T_max-beat. Also, number theory of Farey and Thales-circle geometry of Ford is shown to elegantly analyze and display fractional revivals. Such quantum dynamical analysis may have applications for spectroscopy or quantum information processing and computing." E Pelantová, Š. Starosta and M. Znojil, "Markov constant and quantum instabilities" (preprint 10/2015) [abstract:] "For a qualitative analysis of spectra of a rectangular analogue of Pais–Uhlenbeck quantum oscillator several rigorous methods of number theory are shown productive and useful. These methods (and, in particular, a generalization of the concept of Markov constant known in Diophantine approximation theory) are shown to provide an entirely new mathematical insight in the phenomenologically relevant occurrence of spectral instabilities. 
Our results may inspire methodical innovations ranging from the description of the stability properties of metamaterials and of the so called crypto-unitary quantum evolution up to the clarification of the mechanisms of the occurrence of ghosts in quantum cosmology." R. Jozsa, "Notes on Hallgren's efficient quantum algorithm for solving Pell's equation" (preprint 02/03) "Pell's equation is $x^2 - dy^2 = 1$ where d is a square-free integer and we seek positive integer solutions x, y > 0. Let (x',y') be the smallest solution (i.e. having smallest $A = x' + y'\sqrt{d}$). Lagrange showed that every solution can easily be constructed from A so given d it suffices to compute A. It is known that A can be exponentially large in d so just to write down A we need exponential time in the input size log d. Hence we introduce the regulator R = ln A and ask for the value of R to n decimal places. The best known classical algorithm has sub-exponential running time O(exp(sqrt(log d)), poly(n)). Hallgren's quantum algorithm gives the result in polynomial time O(poly(log d),poly(n)) with probability 1/poly(log A). The idea of the algorithm falls into two parts: using the formalism of algebraic number theory we convert the problem of solving Pell's equation into the problem of determining R as the period of a function on the real numbers. Then we generalise the quantum Fourier transform period finding algorithm to work in this situation of an irrational period on the (not finitely generated) abelian group of real numbers. These notes are intended to be accessible to a reader having no prior acquaintance with algebraic number theory; we give a self contained account of all the necessary concepts and we give elementary proofs of all the results needed. Then we go on to describe Hallgren's generalisation of the quantum period finding algorithm, which provides the efficient computational solution of Pell's equation in the above sense." [A classical continued-fraction sketch of the fundamental solution and regulator appears at the end of this listing.] J. H. Hannay and M. V. Berry, "Quantization of linear maps on a torus - Fresnel diffraction by a periodic grating", Physica D: Nonlinear Phenomena 1 (1980) 267-290 "Quantization on a phase space q, p in the form of a torus (or periodized plane) with dimensions $\Delta q$, $\Delta p$ requires that Planck's constant take one of the values $h = \Delta q \Delta p/N$, where N is an integer. Corresponding to a linear classical map T of points q, p is a unitary operator U mapping quantum states that are periodic in q and p; the construction of U involves techniques from number theory. U has eigenvalues $\exp(i\theta)$. The 'eigenangles' $\theta$ must be multiples of $2\pi/n(N)$, where n(N) is the lowest common multiple of the lengths of the classical 'cycles' mapped under T by those rational points in q, p which are multiples of $\Delta q/N$ and $\Delta p/N$ (i.e. n(N) is the 'period of T mod N'), at least for odd N. If T is hyperbolic, n is a very erratic function of N, and the classical limit $N \to \infty$ is very different from the 'Bohr-Sommerfeld' behaviour for parabolic maps. The degeneracy structure of the eigenangle spectrum is related to the distribution of cycle lengths. Computation of the quantal Wigner function shows that eigenstates of U do not correspond to individual cycles." [A short computation of the 'period of T mod N' for the cat map is sketched at the end of this listing.] M.V. Berry and P. Shukla, "Tuck's incompressibility function: statistics for zeta zeros and eigenvalues" (preprint 07/2008) [abstract:] "For any function $D(x)$ that is real for real $x$, positivity of Tuck's function $Q(x)=D'^2(x)/(D'^2(x)-D''(x) D(x))$ is a condition for the absence of the complex zeros close to the real axis.
Study of the probability distribution $P(Q)$, for $D(x)$ with $N$ zeros corresponding to eigenvalues of the Gaussian unitary ensemble (GUE), supports Tuck's observation that large values of $Q$ are very rare for the Riemann zeros. $P(Q)$ has singularities at $Q=0$, $Q=1$ and $Q=N$. The moments (averages of $Q^m$) are much smaller for the GUE than for uncorrelated random (Poisson-distributed) zeros. For the Poisson case, the large-$N$ limit of $P(Q)$ can be expressed as an integral with infinitely many poles, whose accumulation, requiring regularization with the Lerch transcendent, generates the singularity at $Q=1$, while the large-$Q$ decay is determined by the pole closest to the origin. Determining the large-$N$ limit of $P(Q)$ for the GUE seems difficult." J. Lagarias, "The Schrödinger operator with Morse potential on the right half line" (preprint 12/2007) [abstract:] "This paper studies the Schr\"{o}dinger operator with Morse potential on a right half line $[u, \infty)$ and determines the Weyl asymptotics of eigenvalues for constant boundary conditions. It obtains information on zeros of the Whittaker function $W_{\kappa, \mu}(x)$ for fixed real parameters $\kappa, x$, with $x$ positive, viewed as an entire function of the complex variable $\mu$. In this case all zeros lie on the imaginary axis, with the exception, if $k<0$, of a finite number of real zeros. We obtain an asymptotic formula for the number of zeros of modulus at most $T$ of form $N(T) = (2/\pi) T \log T + f(u) T + O(1)$. Some parallels are noted with zeros of the Riemann zeta function." T. Okazaki, "AdS2/CFT1, Whittaker vector and Wheeler–De Witt equation" (preprint 10/2015) [abstract:] "We study the energy representation of conformal quantum mechanics as the Whittaker vector without specifying classical Lagrangian. We show that a generating function of expectation values among two excited states of the dilatation operator in conformal quantum mechanics is a solution to the Wheeler–DeWitt equation and it corresponds to the AdS2 partition function evaluated as the minisuperspace wave function in Liouville field theory. We also show that the dilatation expectation values in conformal quantum mechanics lead to the asymptotic smoothed counting function of the Riemann zeros." R.L. Monaco and W.A. Rodrigues, Jr., "New integral representation of the solutions of the Schrödinger equation with arbitrary potentials", Physics Letters A 179 (1993) 235-238 [abstract:] "We present a new method for solving the Schrödinger equation with arbitrary potentials. The solution is given in terms of a Fourier-like integral representation which involves a universal function (Rk(z)) for the Schrödinger equation. The integral representation follows from number theory together with some results from the partition theory of operational calculus. The new method can be used to solve any linear differential equation and also can be extended to solve linear partial differential equations." I.I. Iliev, "Riemann zeta function and hydrogen spectrum", Electronic Journal of Theoretical Physics 10 (2013) 111–134 [abstract:] "Significant analytic and numerical evidence, as well as conjectures and ideas connect the Riemann zeta function with energy-related concepts. The present paper is devoted to further extension of this subject. The problem is analyzed from the point of view of geometry and physics as wavelengths of hydrogen spectrum are found to be in one-to-one correspondence with complex-valued positions. 
A Zeta Rule for the definition of the hydrogen spectrum is derived from well-known models and experimental evidence concerning the hydrogen atom. The Rydberg formula and Bohr's semiclassical quantization rule are modified. The real and the complex versions of the zeta function are developed on that basis. The real zeta is associated with a set of quantum harmonic oscillators with the help of relational and inversive geometric concepts. The zeta complex version is described to represent continuous rotation and parallel transport of this set within the plane. In both cases we derive the same wavelengths of hydrogen spectral series subject to certain requirements for quantization. The fractal structure of a specific set associated with $\zeta(s)$ is revealed to be represented by a unique box-counting dimension." A. Sowa, "Encoding spatial data into quantum observables" (preprint 09/2016) [abstract:] "The focus of this work is a correspondence between the Hilbert space operators on one hand, and doubly periodic generalized functions on the other. The linear map that implements it, referred to as the Q-transform, enables a direct application of the classical Harmonic Analysis in a study of quantum systems. In particular, the Q-transform makes it possible to reinterpret the dynamic of a quantum observable as a (typically nonlocal) dynamic of a classical observable. From this point of view we carry out an analysis of an open quantum system whose dynamics are governed by an asymptotically harmonic Hamiltonian and compact type Lindblad operators. It is established that the initial value problem of the equivalent nonlocal but classical evolution is well posed in the appropriately chosen Sobolev spaces. The second set of results pertains to a generalization of the basic Q-transform and highlights a certain type of asymptotic redundancy. This phenomenon, referred to as the broadband redundancy, is a consequence of a well-known property of the zeros of the Riemann zeta function, namely, the uniform distribution modulo one of their ordinates. Its relevance to the analysis of quantum dynamics is only a special instance of its utility in harmonic analysis in general. It remains to be seen if the phenomenon is significant also in the physical sense, but it appears well-justified—in particular, by the results presented here—to pose such a question." L. Campos Venuti, "The best quasi-free approximation: reconstructing the spectrum from ground state energies" (preprint 01/2011) [abstract:] "The sequence of ground state energy density at finite size, e_{L}, provides much more information than usually believed. Having at disposal $e_L$ for short lattice sizes, we show how to re-construct an approximate quasi-particle dispersion for any interacting model. The accuracy of this method relies on the best possible quasi-free approximation to the model, consistent with the observed values of the energy $e_L$. We also provide a simple criterion to assess whether such a quasi-free approximation is valid. Perhaps most importantly, our method is able to assess whether the nature of the quasi-particles is fermionic or bosonic together with the effective boundary conditions of the model. The success and some limitations of this procedure are discussed on the hand of the spin-1/2 Heisenberg model with or without explicit dimerization and of a spin-1 chain with single ion anisotropy. A connection with the Riemann Hypothesis is also pointed out." H. Suchowski and D.B. 
Uskov, "Complete population transfer in 4-level system via Pythagorean triple coupling" (preprint 11/2009) [abstract:] "We describe a relation between the requirement of complete population transfer in a four-mode system and the generating function of Pythagorean triples from number theory. We show that complete population transfer will occur if ratios between coupling coefficients exactly match one of the Pythagorean triples $(a; b; c)$ in $Z$, $c^{2} = a^{2} + b^{2}$. For a four-level ladder system this relation takes a simple form $(V_{12}; V_{23}; V_{34}) ~ (c; b; a)$, where coefficients $V_{ij}$ describe the coupling between modes. We find that the structure of the evolution operator and the period of complete population transfer are determined by two distinct frequencies. A combination of these frequencies provides a generalization of the two-mode Rabi frequency for a four-mode system." J. LaChapelle, "Evidence of a Gamma distribution for prime numbers" (preprint 07/2013) [abstract:] "If the occurrence of prime numbers is a random process, then analogy with quantum systems suggests that a gamma distribution governs the primes. Consequently, postulating underlying gamma statistics in the context of functional integration, more-or-less standard heuristic arguments from quantum mechanics allows to derive analytic expressions of several average counting functions associated with prime numbers. The expressions are certain sums of incomplete gamma functions that are closely related to logarithmic-type integral functions — which in turn are well-known to give the asymptotic dependence of the various counting functions up to error terms. The relatively broad success of quantum heuristics applied to functional integrals in general along with the excellent agreement of the subsequent analytic expressions obtained for the average counting functions provide strong evidence of a gamma distribution for prime numbers." M. Hage-Hassan, "A note on quarks and numbers theory" (preprint 02/2013) [abstract:] "We express the basis vectors of Cartan fundamental representations of unitary groups by binary numbers. We determine the expression of Gel'fand basis of SU(3) based on the usual subatomic quarks notations and we represent it by binary numbers. By analogy with the mesons and quarks we find a new property of prime numbers." C. Castro, "The Riemann Hypothesis is a consequence of CT-invariant quantum mechanics" (submitted to J. Phys. A, 02/2007) [abstract:] "The Riemann's hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form $s_n =1/2 + i lambda_n$. By constructing a continuous family of scaling-like operators involving the Gauss-Jacobi theta series, and by invoking a novel CT-invariant Quantum Mechanics, involving a judicious charge conjugation $C$ and time reversal $T$ operation, we show why the Riemann Hypothesis is true." This follows earlier attempts: S. Albeverio, R. Cianci, N. De Grande-De Kimpe, A. Khrennikov, "p-Adic probability and an interpretation of negative probabilities in quantum mechanics", Russian J. Math. Phys. 6 (1999) 3-19. A. Khrennikov, "p-Adic probability interpretation of Bell's inequality paradoxes", Phys. Lett. A 200 (1995) 119-223 A. Khrennikov, "p-Adic probability distribution of hidden variables", Physica A 215 (1995) 577-587 A. Khrennikov, "p-Adic stochastic hidden variable model", J. Math. Phys. 39 No. 
3 (1998) 1388-1402
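As a quick illustration of the Planat-Anselmi-Solé entry above, the following sketch (plain Python 3, standard library only; the Euler-Mascheroni constant is hard-coded, and the expected output is simply the set quoted in the abstract, not an independent proof of it) computes the Dedekind psi function by trial-division factorization and lists the dimensions q up to 10000 for which $\psi(q)/q > e^{\gamma}\log\log q$.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant (math.gamma is the Gamma function, not this)

def prime_factors(q):
    """Distinct prime factors of q by trial division (fine for small q)."""
    factors, d = set(), 2
    while d * d <= q:
        while q % d == 0:
            factors.add(d)
            q //= d
        d += 1
    if q > 1:
        factors.add(q)
    return factors

def dedekind_psi(q):
    """psi(q) = q * prod_{p | q} (1 + 1/p), computed exactly in integers."""
    result = q
    for p in prime_factors(q):
        result = result * (p + 1) // p
    return result

satisfied = [q for q in range(2, 10001)
             if dedekind_psi(q) / q > math.exp(GAMMA) * math.log(math.log(q))]
print(satisfied)  # per the abstract, this should be [2, 3, 4, 5, 6, 8, 10, 12, 18, 30]
```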
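The Shor references above reduce factoring to order finding. The sketch below covers only the classical side of that reduction and is not Shor's algorithm itself: the multiplicative order r of a modulo N is found here by brute force (the step a quantum computer would perform with the quantum Fourier transform), after which factors are read off from gcd(a^{r/2} ± 1, N) whenever r is even and a^{r/2} is not congruent to -1 mod N. It is meant only for small odd composite N that are not prime powers.

```python
import math
import random

def classical_order(a, N):
    """Multiplicative order of a mod N by brute force (requires gcd(a, N) = 1)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order_finding(N, max_tries=20):
    """Classical post-processing of order finding, as in Shor's reduction."""
    for _ in range(max_tries):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g              # lucky guess: a already shares a factor with N
        r = classical_order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:                # a^(r/2) is not -1 mod N
                p = math.gcd(y - 1, N)
                if 1 < p < N:
                    return p, N // p
    return None

print(factor_via_order_finding(15))   # e.g. (3, 5)
print(factor_via_order_finding(21))   # e.g. (3, 7)
```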
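The $2^{N-1}$ coprime bipartitions counted in the Revzen-Mann-Zak abstract are just as easy to enumerate: collect the maximal prime-power divisors of M and split that set in every possible way. A small sketch, with M = 360 chosen here purely for illustration:

```python
from itertools import combinations

def prime_power_parts(M):
    """Return M as a list of maximal prime-power divisors, e.g. 360 -> [8, 9, 5]."""
    parts, d = [], 2
    while d * d <= M:
        if M % d == 0:
            q = 1
            while M % d == 0:
                q *= d
                M //= d
            parts.append(q)
        d += 1
    if M > 1:
        parts.append(M)
    return parts

def coprime_bipartitions(M):
    """All unordered factorizations M = M1 * M2 with gcd(M1, M2) = 1."""
    pp = prime_power_parts(M)
    found = set()
    for r in range(len(pp) + 1):
        for subset in combinations(pp, r):
            M1 = 1
            for q in subset:
                M1 *= q
            found.add((min(M1, M // M1), max(M1, M // M1)))
    return sorted(found)

pairs = coprime_bipartitions(360)   # 360 = 2^3 * 3^2 * 5, so N = 3 distinct primes
print(len(pairs), pairs)            # 4 = 2^(N-1) unordered pairs, including (1, 360)
```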
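Jozsa's notes above concern Pell's equation and its regulator. A minimal classical sketch, using the standard continued-fraction expansion of sqrt(d) (and therefore impractical precisely in the regime where Hallgren's quantum algorithm matters, since x and y grow exponentially), finds the fundamental solution and the regulator R = ln(x + y sqrt(d)).

```python
import math

def pell_fundamental_solution(d):
    """Fundamental solution of x^2 - d*y^2 = 1 for a positive non-square integer d,
    found by testing the convergents of the continued fraction of sqrt(d)."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, den, a = 0, 1, a0
    h_prev, h = 1, a0          # numerators of successive convergents
    k_prev, k = 0, 1           # denominators of successive convergents
    while h * h - d * k * k != 1:
        m = den * a - m        # standard recurrence for the CF of sqrt(d)
        den = (d - m * m) // den
        a = (a0 + m) // den
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

x, y = pell_fundamental_solution(13)     # (649, 180): 649^2 - 13*180^2 = 1
R = math.log(x + y * math.sqrt(13))      # the regulator ln(x + y*sqrt(d))
print(x, y, R)
```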
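The 'period of T mod N' in the Hannay-Berry abstract, i.e. the smallest n with T^n congruent to the identity mod N, can be tabulated directly; since T^n fixes every lattice point exactly when it is the identity mod N, this coincides with the lowest common multiple of the classical cycle lengths. The sketch below (assuming a 2x2 map with determinant coprime to N, as for the cat map) shows how erratically n(N) varies with N.

```python
def mat_mult_mod(A, B, N):
    """Product of two 2x2 integer matrices, reduced mod N."""
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % N,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % N],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % N,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % N]]

def period_mod_N(T, N):
    """Smallest n >= 1 with T^n = identity mod N (finite, since det T = 1 here)."""
    identity = [[1, 0], [0, 1]]
    A, n = [[x % N for x in row] for row in T], 1
    while A != identity:
        A = mat_mult_mod(A, T, N)
        n += 1
    return n

cat = [[2, 1], [1, 1]]   # Arnold's cat map, a hyperbolic torus map
print([period_mod_N(cat, N) for N in range(2, 21)])  # n(N) jumps around irregularly with N
```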
HYLE--International Journal for Philosophy of Chemistry, Vol. 11, No. 2 (2005), pp. 101-126. Copyright © 2005 by HYLE and Valentin N. Ostrovsky

Towards a Philosophy of Approximations in the ‘Exact’ Sciences

Valentin N. Ostrovsky*

Abstract: The issue of approximations is mostly neglected in the philosophy of science, and sometimes misinterpreted. The paper demonstrates that approximations are in fact at the core of some recent discussions in the philosophy of chemistry: on the shape of molecules, the Born-Oppenheimer approximation, the role of orbitals, and the physical explanation of the Periodic Table of Elements. The ontological and epistemological significance of approximations in the exact sciences is analyzed. The crucial role of approximations in generating qualitative images and comprehensible models is emphasized. A complementarity relation between numerically ‘exact’ theories and explanatory approximate approaches is claimed.

Keywords: Approximations in quantum chemistry, complementarity, shape of molecules, orbitals, Born-Oppenheimer approximation, Periodic Table.

1. Introduction

The issue of approximations appears, explicitly or implicitly, in many discussions in the philosophy of chemistry. Do molecules have a shape? Can orbitals be observed in experiments? Is the physical explanation of the Periodic Table of Elements really an explanation? All these subjects involve an analysis of the role of approximations. For instance, Garcia-Sucre & Bunge (1981) argued, "the Born-Oppenheimer approximation, although an artifact, does represent some important objective properties of quantum-mechanical systems, among them their geometry." How is that possible – an artifact representing some important objective properties? Is it by accident? Is this situation peculiar to the Born-Oppenheimer approximation? Could it be clarified or even remedied by a change of terminology, as suggested recently by Del Re 2003 (note 11)?

We write "theorem" instead of "approximation" because the latter name has misled some researchers into believing that the Born-Oppenheimer study has no physical content: actually, it is the proof that quantum mechanics is compatible with the separation of nuclear motion from electronic motions as revealed by observed molecular spectra; and novelties are only found when two hypersurfaces cross.

We meet a similar situation in the recent discussion on the status and observability of orbitals. According to Scerri (2001),

Of course, the orbital model remains enormously useful as an approximation and lies in the heart of much of quantum chemistry but it is just that – a model, without physical significance, as all computational chemists and physicists are aware.

Is this again by chance – a model without physical significance lying in the heart of quantum chemistry? Moreover, Scerri (2001) explains the experimentally obtained images of orbitals by Zuo et al. 1999, "I suggest that any similarities between the reported images and textbook orbitals may be completely coincidental". Is all that not too much coincidence for the ‘exact’ sciences? The examples illustrate that the discussions are not about some marginal technical details but about the very heart of quantum chemistry. Therefore, the meaning and significance of approximations in science deserve a deeper analysis from ontological and epistemological perspectives than they have received before.[1] Are approximations necessary or can they be avoided in order to make a science really exact?
Are they arbitrary and subjective (artifacts)? How can they be linked to something observable? These and other related issues are analyzed in this paper by further developing aspects of a previous paper (Ostrovsky 2001). 2. Approximations in physics: an insider’s view Although physics is considered an exact science, any practicing physicist knows that everything in physics is approximate. The prominent theoretical physicist A.B. Migdal starts his Qualitative Methods in Quantum Theory (1989) as follows: No problem in physics can ever be solved exactly. We always have to neglect the effect of various factors which are unimportant for the particular phenomenon we have in mind. It then becomes important to be able to estimate the magnitude of the quantities we have neglected. Moreover, before calculating a result numerically it is often necessary to investigate the phenomenon qualitatively, that is, to estimate the order of magnitude of the quantities we are interested in and to find out as much as possible about the general behavior of the solution. For the last dozen years theoretical physics has undergone strong changes. Under the influence of the theory, new fields of mathematics started being used and developed by theorists. Computational theoretical physics acquired a particular importance. Nevertheless, despite mathematization of physics, qualitative methods became even more important than before elements of the theory. They are sort of mathematical analog of the image-bearing mentality of sculptors and poets, feeding the intuition. I believe that now more than before, a beginning theoretician should master qualitative methods of reasoning. This is not some marginal opinion, but an authoritative judgment of an outstanding professional. The English edition of Migdal’s book was printed by one of the most authoritative publishing houses in the exact sciences, Addison-Wesley. In 1989 and 2000 the book reappeared in the series Advanced Book Classics. Its author, Professor A.B. Migdal, was a full member of the USSR Academy of Science, member of L. D. Landau Institute of Theoretical Physics and a Landau Prize Laureate. The book was translated from Russian by Anthony J. Leggett who became in 2003 the recipient of the Nobel Prize in physics. Migdal’s views are universally accepted by the community of physicists. Laypeople might be confused by such a statement as: ‘In quantum mechanics one can obtain an exact solution only for the hydrogen atom, but not for a multi-electron atom.’ In fact, such formulations contain implicit assumptions that are shared by specialists. In this particular example, it means that an exact solution is obtainable for the non-relativistic Schrödinger equation of the hydrogen atom. But the Schrödinger equation itself is an approximation: it does not account for relativistic effects. Strictly speaking, there is not such an object in nature[2] as a non-relativistic Schrödinger atom. It is a model, or an approximation, that allows calculating results that match only approximately experimental data.[3] An exact solution for the hydrogen atom can also be obtained from the Dirac equation that takes the relativity theory into account. It ensures a better agreement with experiment (for instance, by describing the fine structure of the energy levels), but again, it is an approximation. One need to take into account the size and structure of the atomic nuclei to improve the results. The Dirac equation does not account for the atomic interaction with the electromagnetic field. 
If one decides to go further and achieve higher accuracy (e.g., to describe the Lamb shift of levels), one has to turn to quantum electrodynamics. Even the latter theory does not provide an ‘exact’ equation to be solved. It only allows calculating properties of atoms and ions at some order of approximation over small parameters that characterize relativistic effects. Thus, physics is nothing else than a hierarchy of approximations, without a single exact equation or result. This is not a pitiful temporary drawback that might be removed in the course of time. It will continue forever, since it reflects the essence of the approach of physics to describing nature. First of all, the laws of physics are not given a priori, but are always experimentally tested with only some precision. Second, there are some inevitable approximations. Any researcher must select a piece of the universe to be studied and described (for instance, an atom, or a planetary system) and, by approximation, must neglect the rest of the world. Only some cosmological theories claim to avoid these limitations, but, of course, they contain an immense number of other approximations. Third, even if some more exact theory is known, it still makes a deep sense to resort to approximations, not only for pragmatic reasons, but also for epistemological reasons. Approximations immensely enrich our qualitative picture of nature. This aspect will be further discussed in Section 4, but it is worthwhile to indicate here that the basic models of chemistry (such as molecular shape, see Section 3.2) are not universal, but arise from appropriate approximations. Apparently the laws of conservation in physics have a somewhat special status. Some of them, initially considered strict, later proved to be only approximate, as the parity conservation. The most important and widely known one is the energy conservation law that seems to remain unshaken. However, this law has a special character as emphasized by Feynman (1992). As our knowledge of nature expands, new forms of energy are embraced by the law to obtain the total energy that is conserved. Thus, energy conservation actually means that up to now we have always managed to find new terms to be added to keep the total energy constant. This availability is, of course, a deeply rooted principle of nature. People interested in really exact results and statements should turn to mathematics rather than to physics. Mathematics is not a natural science, albeit widely applied in the natural sciences. Mathematics works with abstract constructions that should be internally consistent, i.e. without logical contradictions. No other restrictions are imposed. Mathematicians construct a logically non-contradictory ‘universe’ and work with it. They need not care if this universe is the one we live in. For instance, a mathematician is ready to consider a space of arbitrary dimensionality n. While this is extremely useful as a mathematical technique, a physicist is faced with fact that we live in a space with n=3, with all its peculiarities. Physicists cannot construct their universe; they have to study the only one available. Thus, no physical theory can be blamed for using approximations because, in fact, all theories do that. The only question is how the approximation in a specific theory or a specific application is justified. To develop a proper approximation and to be aware of the limits of its applicability is an important element of defining the qualification of a physicist. 
This skill cannot be put in the form of an algorithm, which is one of the reasons why a physicist cannot be replaced by a computer. The great chain of approximations bears a deep epistemological meaning that is frequently unrecognized or underestimated. 3. Approximations in physics: a view from the outside Some nonphysicists seem to have radically different ideas about approximations. They adhere to an image of the ideal and immaculate exact science that does not resort to approximations. Since real science does not fit the ideal image but widely employs all kinds of approximations, some of its approaches and results are looked upon with skepticism, suspicion, and distrust. The issue of approximations is important for chemistry. It is at the center of many philosophical discussions in chemistry, as mentioned in the Introduction, so that a proper philosophical understanding of approximations is particularly important here. Below we first discuss some specific, albeit vitally important, approximations. 3.1. Born-Oppenheimer approximation An issue much discussed in the recent literature is the problem of molecular shape. In quantum mechanics a multi-particle system generally does not possess such a property as a definite shape. However, a shape might be ascribed to a molecule within the Born-Oppenheimer approximation. The latter is instrumental in the quantum theory of molecules and therefore plays a very important role in quantum chemistry. In particular, chemical reactions that are not accompanied by a change of the electronic states are described within this approximation. Some authors exhibit deep dissatisfaction with the facts that chemistry is actually based upon approximations (and hence that more general theories exist) and that molecular shape is not an absolute but a transient property with a limited domain of applicability. Garcia-Sucre and Bunge (1981) call the Born-Oppenheimer approximation an artifact. They do not elucidate the meaning of this term, but the context suggests that an artifact is something human-made and unrelated to nature. However, ‘human-made’ is not alien to science, nor does it mean unrelated to nature. Take, for instance, the ‘exact’ Schrödinger equation basic to non-relativistic quantum mechanics. It was suggested by Schrödinger, not by nature. It was intensively used by other human beings. Nature does not solve the Schrödinger equation; it does not know anything about the wave function. Instead, it seems that nature acts like an old-fashioned analogue computer, without resort to digitization. All science was created by humans in the pursuit of describing and understanding nature. In this sense all science is an artifact. Because the term ‘artifact’ is applied also to the material objects produced by humans, we may distinguish science and similar products by saying that they are ideal artifacts. Is there any principal difference between the two ideal artifacts of an ‘exact’ wave function and its approximate Born-Oppenheimer version? As discussed above, an exact solution of the Schrödinger equation describes something non-existent, as some philosophers would say. Actually this terminology is misleading, since in fact such an ‘exact’ wave function provides a good, physically justified approximation. However, the same might be said about wave functions obtained within the Born-Oppenheimer approximation. 
Quantum chemists use the Schrödinger equation in the domain where it is appropriate, although this equation is not exact (since it does not include relativistic effects) and is not even the most accurate one known. Some researchers go beyond the Schrödinger equation and find interesting and chemically significant relativistic effects (see, e.g., Pyykko 1988). Others go beyond the Born-Oppenheimer approximation and call this non-Born-Oppenheimer chemistry (Jasper et al. 2004). Thus, there is no principal difference between using the ‘exact’ (actually approximate) Schrödinger equation and the Born-Oppenheimer approximation. The difference is, first, in the numerical accuracy that can be ensured, and, second, in the possibility of developing a qualitative interpretation and understanding by different approximations. These two features are in a complementary relation, as discussed below in Section 4. Moreover, the term ‘artifact’ gives the impression of something artificial and subjective, not directly related to nature. This meaning is misleading. The Born-Oppenheimer approximation directly reflects the specific nature of molecules as quantum systems, namely, the fact that molecules consist of heavy particles (atomic nuclei) and light particles (electrons). The ratio of masses governs the accuracy of the approximation. The constitution of molecules and the ratio of masses are objective properties that in no way depend on the researcher’s will. In this regard, the Born-Oppenheimer approximation is dictated by nature, in a similar sense as the quantum properties of microparticles are. While myriads of approximations are feasible a priori, only a few of them are valid (applicable). This is not by accident, but because the latter ones reflect some important features of nature. These approximations reflect nature just as the ‘exact’ equations do, albeit in a different way. They reflect the more qualitative side of nature, whereas more exact theories tend to reflect quantitative aspects; but both sides are objective and not invented by researchers. Del Re (2003) has suggested switching to a more acceptable terminology and talking about the Born-Oppenheimer theorem instead of approximation. The theorem could read as: ‘In the limit me/M → 0 the Born-Oppenheimer scheme of calculations provides an exact result, and thus nuclear and electronic motions are completely separated.’ (Here, me is the electron mass and M is the characteristic mass of the atomic nuclei.) The formulation could even be proved in a mathematically rigorous way. However, the problem is that in reality the ratio me/M is not zero, although fairly small (me/M ≈ 1/1837 if the proton mass is chosen for M); this value is given by nature and cannot be varied. Therefore, the Born-Oppenheimer scheme for finite me/M inevitably remains an approximation, although it is well supported by the Born-Oppenheimer theorem. The example shows that, even if some exact mathematical results are available, they do not allow avoiding approximations in practical physical or chemical applications. It might be mentioned that on a somewhat deeper level the ratio of the characteristic velocities of the particles is physically more relevant than the ratio of masses. The smallness of the velocity ratio serves as a basis for the adiabatic approximation, which is in principle different from the Born-Oppenheimer approximation, although close to it in some respects. In molecules the characteristic velocities of electrons are usually much higher than those of atomic nuclei. 
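As a purely numerical aside (a minimal sketch, not the author’s; the rounded nuclear masses below are assumed illustrative values in units of the electron mass), the smallness of the parameters just quoted is easy to tabulate; the quarter power anticipates the expansion parameter discussed in note [4]:

# Rough illustration of the Born-Oppenheimer smallness parameters quoted above.
NUCLEAR_MASS = {"H": 1836.0, "C": 21870.0, "O": 29160.0}  # approx. nuclear mass / electron mass

for atom, nuclear_mass in NUCLEAR_MASS.items():
    ratio = 1.0 / nuclear_mass     # m_e / M, fixed by nature
    kappa = ratio ** 0.25          # expansion parameter of the Born-Oppenheimer scheme (note [4])
    print(f"{atom}: m_e/M = {ratio:.2e}, (m_e/M)^(1/4) = {kappa:.3f}")
# m_e/M is of order 1e-3 to 1e-4, but the expansion parameter (m_e/M)^(1/4)
# is only about 0.08-0.15 -- small, yet fixed by nature and never exactly zero.

The electron-to-nucleus velocity contrast mentioned in the preceding sentence is likewise controlled by a power of this same small ratio.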
However, in some molecular states the electrons are highly excited and might have velocities comparable to those of the nuclei or even lower. For these highly excited states the Born-Oppenheimer approximation becomes invalid. Such molecular states are less important to chemistry, although they play a significant role in atomic physics. Deviations from the Born-Oppenheimer approximation also occur when there are several equilibrium configurations of atomic nuclei (i.e. several minima on the potential surface) separated by potential barriers of moderate height. In this situation non-rigid molecules emerge, which are related to the issue of molecular shape discussed in the next subsection. The presence of several minima with the same depth is inevitable if a molecule contains two or more identical atomic nuclei. The permutation of these particles physically corresponds to the tunneling between different potential wells. The rate of such processes is usually very low, which explains why related effects are extremely small. In any case, they can be described within the general framework of the adiabatic approximation, so that the first principles of quantum mechanics are not violated. The manifestations of the Born-Oppenheimer approximation are apparent in experimental observations, also outside of chemistry. In molecular spectra we see vibro-rotational bands, and not just chaotic sets of lines that would appear in the spectra of general multi-particle systems. This is visible evidence that the Born-Oppenheimer (approximate) separation of nuclear and electronic motion is a feature of nature and not some wishful invention of researchers. Of course, to understand that the band character of a spectrum has this meaning requires some scientific qualification. But this is inevitable in modern science. Below (Section 3.4) we return to the issue of the observability of ideal artifacts. After this elucidation one might finally agree with Garcia-Sucre and Bunge (1981) in calling the Born-Oppenheimer approximation an artifact: an artifact (i.e. science created by human beings) does represent some important objective properties of nature. This is exactly what science is about, and there is nothing particular about the Born-Oppenheimer approximation here. Of course, some deeper questions might be pursued further. For instance, the eminent physicist E. Wigner (1995) was puzzled by "the unreasonable effectiveness of mathematics in the natural sciences". But the issue of the Born-Oppenheimer approximation does not present anything specific in this respect. 3.2. Molecular shape A subject closely related to the Born-Oppenheimer approximation is the issue of molecular shape. As already mentioned, in quantum theory a molecule might be ascribed a definite shape by using the Born-Oppenheimer approximation. Within this approximation one has first to replace atomic nuclei by force centers fixed in space and then to solve the quantum problem of the molecular electrons for varying sets of nuclear coordinates. The solutions provide a potential surface that depends parametrically on the coordinates of the atomic nuclei. The next step involves locating minima on the potential surface that indicate the (equilibrium) positions of nuclei in a molecule. At this stage of the approximate construction, a definite molecular shape emerges. The status of molecular shape has induced much concern among philosophers. 
For instance, Ramsey (1997) remarks, "… shape is widely thought to be a physical as well as a chemical attribute of the world …" The paper contains an interesting discussion, but the cited statement expresses the origin of many philosophical misunderstandings. Indeed, the term ‘attribute’ is usually understood to describe some indispensable property of matter. The most popular examples are space and time: matter is invariably described in terms of space and time; but in no way does this apply to shape. The situation was well recognized already in antiquity: solid bodies have a shape and a fixed volume, while liquids possess only a volume, but no definite shape. As for gases, they have neither shape nor intrinsic volume, but fill any volume available. This trivial counterexample invalidates such statements as "Classically every physical thing has some geometry or other, but in the quantum theory the notions of spatial structure, shape, and size seem to become hazy if not outright inapplicable" (Garcia-Sucre & Bunge 1981). There is no need to resort to modern quantum physics to discover that some material entities (liquids or gases) do not have a shape of their own, but that the shapes are dictated by the environment (vessels). Interestingly, the analogy with molecules might be pursued further. When a molecule freely rotates, its field is averaged and the shape is not manifest. For instance, there are dipole molecules (such as H2O), but a freely rotating molecule (in a stationary state with definite values of rotational quantum numbers) cannot possess a dipole moment. When an external field is applied, or some other molecule approaches, a molecule becomes oriented in space, and its shape or dipole moment becomes clearly exhibited. The fact that the molecular shape is recovered only under the perturbation by some external agent has induced some hesitation among philosophers, but we see from our example that such a situation is not unusual even in elementary classical physics. Of course, the analogy is incomplete, since a molecule possesses some (maybe hidden) shape of its own while a gas or a liquid fits any shape. But since shape is not an attribute, it is not surprising that the molecular shape might remain latent in some situations (freely rotating molecule) while in other contexts it plays a crucial physico-chemical role (for instance, in the X-ray structural analysis of molecules oriented for some reason, for example by the surroundings in a crystal). Since shape is obviously a transient property in the macroworld, there is no reason to anticipate that the situation would be different in the microworld. Many phenomena in chemistry are well understood in terms of rigid structures of atoms with definite shapes. However, this cannot be a reason for treating shape as an absolute property in the realm of molecules. Physics clearly shows the limited applicability of the notion of shape in systems of several quantum particles. A shape emerges if two or more particles (nuclei) have masses much larger than the masses of other particles (electrons); in this situation the Born-Oppenheimer approximation is valid, see Section 3.1.[4] A shape arises as a result of an approximation, and this is a common situation in the structure of the ‘exact’ sciences. A multitude of physically very useful and appealing concepts arise as a result of approximations to ‘exact’ physical equations; but they are not applicable to the most general systems or situations. 
Making some approximation-based concepts absolute without justification is a dangerous pitfall, in both practical and philosophical regards. On the contrary, the recognition of the approximate character of concepts does not denigrate them, but reminds us of the existence of applicability limits. Approximation-induced concepts remain illuminating and constructive, although one has to bear in mind the limitations of their use. In the case of chemistry, the limitations might be inferred from physics. This is a typical situation, since physics treats the basic properties of matter in a very broad scope of conditions (potentially it claims to treat matter in any situation), whereas chemistry focuses on a limited range of conditions and studies the subject matter in more detail, especially concentrating on the structure and transformation of compounds. Chemical compounds cannot exist under certain conditions, for instance, in hot plasmas in the interiors of stars. Therefore it is not surprising that such a property as shape gradually loses its significance in some situations, outside the scope of chemistry. Atomic and molecular physics is a scientific discipline that studies atoms and molecules from a broader perspective, beyond that of chemistry. The outlook provided by this branch of science is useful when philosophical problems of chemistry are analyzed (see some further comments in Ostrovsky 2003a). 3.3. Orbitals Orbitals appear in the theory that provides approximate solutions for Schrödinger equations of systems with a number of interacting particles larger than two. Many textbooks provide detailed descriptions of the theoretical scheme. A brief exposition suitable for general discussion is given in Ostrovsky 2001, 2003b, and 2004, and will not be repeated here. However, it can be clearly stated that, contrary to some claims, the scheme to construct orbitals lies fully within modern quantum theory (without resort to classical trajectories) and does not violate its general principles, such as the indistinguishability of electrons. The key physical approximation in the scheme is that any electron moves in the mean field produced by the averaged motion of the other electrons and the nuclei. On the one hand, this physical image can be cast in the mathematical form of equations; on the other hand, it is very useful for developing explanatory patterns for many phenomena in atomic and molecular physics as well as in chemistry. Methodologically, the approximation is developed along the lines normally used in theoretical physics. It has no particular features that would justify the introduction of a special term, like ‘floating model’. The notion of orbitals has attracted much philosophical attention in recent years. Scerri (2000) describes the situation in theoretical physics and quantum chemistry as follows: According to accepted current theory atomic orbitals serve merely as basis sets – that is, as types of coordinate systems that can be used to expand mathematically the wave function of any particular physical system. Thus, it is sometimes said that the continuing value of orbitals lies in their serving as a basis set, although the orbital model is an approximation in a many-electron system. The problem is that these two statements contradict each other. The same object of a theory cannot simultaneously serve as a basis and as an approximation. A basis in a Hilbert space is analogous to a coordinate frame in geometry. 
If we consider a point on a plane, we can characterize its position in rectangular, polar, parabolic, elliptic, etc. coordinate frames. All the frames provide equivalent information, and none of them is approximate;[6] one frame can only be more convenient than the others, depending on the particular problem. The origin of the misconception lies in confusing basis functions ηj(r) (which in principle are arbitrary) with orbitals φ(r) that are expressed via the basis functions:

φ(r) = Σj cj ηj(r)                                         (1)

The expansion coefficients cj are found by solving approximate equations; for instance, the Hartree-Fock equations based on the mean field approximation. The equations depend on the specifics of a physical system (molecule) under consideration and thus bear the basic physical information about it (for instance, the number of particles, the type of interaction between them, the presence of external fields, etc.). So do the orbitals. One can replace the basis set ηj(r) by some other set, which results in a different set of coefficients cj, but the orbitals φ(r) remain the same. The latter statement is mathematically exact when both basis sets are complete and thus infinitely large. In practice the basis sets are finite, such that computational chemists or physicists have to check the convergence. This is a purely technical business, inevitable in any application of numerical mathematics to a real problem – the case of orbitals presents nothing special in this respect. Once the distinction between basis functions and orbitals is clarified, the puzzling situation described above is resolved: the basis functions ηj(r) are indeed ‘without physical significance’ and might be chosen at the researcher’s convenience. However, the orbitals φ(r) obtained via solution of physical (albeit approximate) equations ‘lie in the heart of much of quantum chemistry’, ‘as all computational chemists and physicists are aware’. It is true that, "the term ‘orbital’ is a highly generic one. It is used to describe hydrogenic orbitals, Gaussian orbitals, natural orbitals, spin orbitals, Hylleraas orbitals, Kohn-Sham orbitals, and so on" (Scerri 2001). Sometimes the terminology might be too loose and thus misleading, blurring the distinction between basis functions ηj(r) and physical orbitals φ(r). For instance, ‘Gaussian orbitals’ are in fact always basis functions. For physical orbitals, it does not matter if they are constructed as a superposition, according to equation (1), of Gaussian basis functions, or if some other functions, say, Slater functions, are employed for this purpose. To non-specialists that distinction is not obvious and could lead to unjustified blanket statements such as ‘it does not matter whose orbitals are selected from the modern palette of choices since none of them refer’. Furthermore, there are no grounds to say that "the scientific term ‘orbital’ is strictly non-referring with the exception of when it applies to the hydrogen atom or other one-electron system". In fact, as already indicated (Section 2), the Schrödinger orbitals, strictly speaking, are not exact even for the one-electron hydrogen atom, and the Dirac orbitals are not exact either. Therefore both the hydrogenic orbitals (i.e. the hydrogenic wave functions) and the orbitals in a multi-electron atom are approximations. In this regard, the term ‘orbitals’ is in both cases ‘strictly non-referring’, although that terminology is hardly appropriate, because it underestimates the physically justified approximation. 
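The point made around equation (1) can be mimicked numerically. In the following sketch (Python with NumPy; the target ‘orbital’, the Gaussian exponents and the least-squares fit are all assumptions chosen purely for illustration) the coefficients cj change completely when the basis set is changed, while the resulting function hardly changes at all:

import numpy as np

# The 'orbital' is a hydrogen-like 1s radial function, used purely for illustration.
r = np.linspace(0.01, 10.0, 400)
phi_target = 2.0 * np.exp(-r)

def gaussian_basis(exponents):
    # Columns are Gaussian basis functions eta_j(r) = exp(-a_j * r**2).
    return np.column_stack([np.exp(-a * r**2) for a in exponents])

basis_A = gaussian_basis([0.15, 0.5, 2.0, 8.0])    # two different, arbitrary basis sets
basis_B = gaussian_basis([0.25, 1.0, 4.0, 16.0])

c_A, *_ = np.linalg.lstsq(basis_A, phi_target, rcond=None)
c_B, *_ = np.linalg.lstsq(basis_B, phi_target, rcond=None)

print("coefficients differ:", np.round(c_A, 3), "vs", np.round(c_B, 3))
print("functions nearly coincide, max difference =",
      float(np.max(np.abs(basis_A @ c_A - basis_B @ c_B))))

The analogous statement for real Hartree-Fock orbitals becomes exact only in the limit of complete basis sets, as noted above.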
Along these lines, it is worthwhile to correct such statements as ‘atomic orbitals are mathematical constructs’, in order to make them acceptable. Orbitals are not mathematical constructs, since they bear physical information; they are constructs of theoretical physics, or ideal artifacts in the sense discussed above. In this respect they are not worse than ‘exact’ wave functions.[7] In which sense then do orbitals exist? Here one can turn to the paper by Ogilvie (1990) entitled ‘There are no such things as orbitals’. In different terms, but equivalently, the author’s viewpoint might be cast as: orbitals are ideal artifacts. Then the preceding discussion of ideal artifacts fully applies. Orbitals do not exist in nature, just as ‘exact’ wave functions or the Schrödinger equation do not exist in nature: these are all creations of the human mind. Orbitals appear as a result of approximations, just as ‘exact’ wave functions (solutions of the Schrödinger equation) do. Better approximations are known in both cases, which ensure improved numerical results for quantitative comparison with experiments. Nevertheless, orbitals are important and in wide use for several reasons. First, they reflect some important qualitative features of nature and thus provide an instructive physico-chemical insight. Second, they ensure reasonably good quantitative descriptions because of that. Third, orbitals technically serve as a convenient basis for further quantitative refinement of theory. Orbitals are not the result of wishful thinking of theoreticians, but stem from a very physical idea, namely, that electron motion proceeds largely as if each electron moved in the mean field of the other electrons and the atomic nuclei. It is important to stress that the orbital picture provides a useful guideline for developing the numerical schemes. Some of the most accurate numerical schemes do not explicitly use the orbital picture and rely on the ‘brute force’ of computers. The highest numerical accuracy is achieved in this way (for simpler atoms and molecules), but the qualitative understanding is inevitably lost. This is a manifestation of the complementarity discussed in more detail below (Section 4). The orbital approximation plays a key role in the quantum explanation of the Periodic Table of Elements. This application of orbitals was thoroughly discussed in previous publications (Ostrovsky 2001, 2003a, 2003b, 2004). Here, I only want to indicate that attempts to dismiss the validity of the modern quantum explanation of the Periodic Table have mostly been based on the mere observation that the orbital picture used in this explanation is approximate. These arguments have been rejected, since an explanation requires the creation of a qualitative image, which is usually done by using approximations (see Section 4). 3.4. Approximations and observability Now I turn to another important question: can orbitals be observed? This is actually an instance of the more general question: can ideal artifacts be observed? Of course, the manifold of ideal artifacts needs to be limited: for instance, a centaur is an ideal artifact beyond the scope of our discussion. Here we discuss only physical ideal artifacts, which is just another name for approximations. As ideal entities, they cannot be observed in the most direct sense. At the same time, if we consider a valid physical approximation as being based in nature, it is manifested via phenomena of nature, and in this sense it is observable. I will call this semi-direct observability. 
It is precisely in this respect that the Born-Oppenheimer approximation or molecular shapes are observable, as discussed in Sections 3.1 and 3.2. In our everyday life we observe effects and phenomena that are directly related to scientific ideal artifacts. Consider, for example, a shadow. Do shadows exist? Indeed, an unambiguously defined shadow exists only in geometrical optics, which is an approximate theory. The advanced theory of wave optics provides a better approach according to which an absolute shadow does not exist because of diffraction. In other words, it is impossible to define the boundaries of a shadow rigorously, since diffraction fringes appear near the boundaries. For a spectacular presentation of this situation we refer to the figure at the beginning of chapter 10 in a standard textbook of optics by Hecht (2002). It shows a shadow of a human hand holding a dime, illuminated by monochromatic laser light that allows discerning the fringes at the edges of this macroscopic shadow. The lower part of the figure shows the same phenomenon in the microworld, with electrons diffracted on a zinc oxide crystal. Diffraction phenomena depend on various parameters (the light wavelength, the size of the obstacle, the position of the observer), but in principle the phenomenon persists, whereas geometrical optics with its well-defined shadows is only an approximation. Thus, in physical terms a shadow is an approximation. (‘There are no such things as shadows’, Ogilvie would say). In everyday life we observe shadows and have no problem identifying them, which is often due to the low resolution of our visual sense. This fact clearly demonstrates that a reasonable, physically justified approximation might be used to describe something real (within the limits of its applicability) and might be perceived by direct observation. The essence of this example is not so far from chemistry as one might imagine. It concerns the relation between the classical (geometrical) description and more general theories that include wave (quantum) features. Orbitals have systematically been observed for a long time, but in the energy representation.[8] For instance, in the measured photoabsorption cross sections the prominent peaks appear as a result of photoionization from a particular orbital in an atom or molecule.[9] Consider, for example, figures 1 and 2 in Chung 2004.[10] They show the cross section of the photoionization (i.e. essentially the yield of photoelectrons) of a lithium atom. When the photon energy is high (Eph of about 60 eV), the photoionization of valence electrons has a very low yield that depends smoothly on Eph. Superimposed on this background are sharp, high peaks that are interpreted in terms of photoionization via intermediate resonance states. Each of these states corresponds to the excitation of two atomic electrons to various unoccupied orbitals, as detailed in the figures. The doubly excited states eventually decay with the emission of an electron that contributes to the photoelectron yield. Thus, the explanation of a prominent structure in the experimental observation is achieved solely within the orbital picture; and there is no way to do so without it. In this sense we can say that orbitals are observed in the experimental data, albeit in this case on the energy scale, or in the energy representation. In quantum mechanics a physical system is described by a wave function that can be represented in various ways. The space coordinate representation is probably the one most often employed. 
It provides standard probability densities in the coordinate space.[11] Along with the coordinate representation, the momentum representation of a wave function is frequently employed in theory. Various experiments directly measure the electron momentum distribution, which is represented by probability densities in the momentum space. In many cases, energy spectra provide the most convenient and direct way to describe a physical system. Nowadays virtually nothing is directly observed in physical experiments, but complicated experimental devices provide ‘raw’ data that need to be processed.[12] There is no fundamental reason to prefer an observation in the coordinate representation to an observation in the momentum or energy representation; and in the latter representation, as indicated already, the orbitals have long been observed. The observation of orbitals in conventional coordinate space can be inferred from a recent experiment (Zuo et al. 1999). Much philosophical criticism followed. Meanwhile the imaging of orbitals by various experimental techniques has become commonplace (Feng et al. 2000, Litvinyuk et al. 2000, Brion et al. 2002, Itatani et al. 2004). I will not go into details of the interpretation of these experiments. The experiments were carried out using sophisticated state-of-the-art techniques, and their analysis belongs in a physical or chemical publication rather than a philosophical one. In some particular cases, the interpretation of an experiment can be doubtful; for instance, a critical analysis of the experiments by Zuo et al. (1999) was carried out by Wang and Schwarz (2000a, 2000b) and Zuo et al. (2000). I just want to indicate that the experimental observation of orbitals cannot be rejected on general philosophical grounds,[13] because there is no principal objection to the observation of orbitals based on approximations in quantum theory. 3.5. More on orbitals Scerri (2001) devotes a significant part of his paper to emphasizing the approximate status of orbitals, only to conclude that this aspect is hardly relevant to the reality of orbitals:

[…] the fact that orbitals might only provide an approximation to the motion of many-electron systems is not a sufficient reason for the complete denial that they or something related to orbitals can possibly exist.

Therefore, he puts forward two more arguments to support the idea that orbitals are in principle not observable. However, both arguments refer not only to orbitals, but also to the ‘exact’ Schrödinger wave function. The first argument is related to the well-known fact that the wave function ψ(r) is generally complex-valued, ψ(r) = |ψ(r)| exp[iφ(r)], so that its full description requires information not only on its modulus |ψ(r)|, but also on the phase φ(r). Complex-valued functions appear in quantum mechanics when two (or more) stationary states are populated coherently, or when the system is non-stationary (i.e., when its Hamiltonian is time-dependent), or when a magnetic field is present. It is also known that the phase φ(r) is trivial in the case of stationary (bound) states (which were actually the object of experimental analysis) and in the absence of a magnetic field. The phase depends linearly on time t and not on the electron coordinate r: φ = −Eb t/ħ + a. Here Eb is the bound-state energy and the constant a is independent of r. This constant is insignificant since it does not influence any observable. 
Therefore, the phase can be treated as non-physical and neglected, such that the wave function may be considered a real-valued quantity. A somewhat more complex situation emerges in the case of degeneracy, but even then the consideration can be restricted to real-valued functions. Many experiments probe the electron charge density ρ(r), which is proportional to the probability density |ψ(r)|²: ρ(r) = e |ψ(r)|², where e is the electron charge. Bearing in mind that the wave function phase might be omitted, one has to carry out only a square-root operation, ψ(r) = ± [(1/e) ρ(r)]^(1/2), to restore the wave function from the electron density. Here the symbol ± requires some attention, since in general even a real-valued wave function oscillates around zero and thus is positive or negative in different domains of space. The dividing boundaries are known as the nodal surfaces. Each crossing of a nodal surface means a change of the wave function sign. The nodal surfaces [i.e., the zero-value surfaces for the density ρ(r)] might in principle be determined from experiments. Then the wave function can be fully restored from the observable charge density. This might be considered a semi-direct observation, albeit not a direct observation of ψ(r) in the strict sense. However, as already stressed, in modern experiments virtually nothing is directly observed and some processing of raw data is always required. With this in mind, we may conclude that there are no theoretical obstacles to the semi-direct observation of wave functions of stationary states. The second argument reads:

[…] atomic orbitals are described in a many-dimensional Hilbert space which denies visualization since we can only observe objects in three-dimensional space. [Scerri 2001]

This point reveals a misinterpretation. The Hilbert space theory is a mathematical apparatus that has found useful applications in quantum theory, but which is in no way limited to it. Any regular function, for instance, any function of a coordinate, might be regarded as a function belonging to some Hilbert space. For example, the electron density might be considered as belonging to a Hilbert space, but this in no way precludes its observability. When the Schrödinger equation is solved, the eigenfunctions can be regarded as elements of an infinite-dimensional Hilbert space; but they are simultaneously defined in the conventional three-dimensional space. A more reasonable point to consider is the fact that a wave function is defined in the configurational space. The latter is three-dimensional for a single electron, which allows visualization of the probability distribution. For two electrons the configurational space is already six-dimensional, and the complete probability distribution ρ(r1, r2) = |ψ(r1, r2)|² cannot be visualized.[14] The charge distribution is obtained by integrating the squared wave function over the coordinates of all electrons but one. For an N-electron system the electron density is

ρ(r) = e ∫ dr2 dr3 … drN |ψ(r, r2, r3, …, rN)|²                         (2)

where rj is the coordinate of the jth electron. The formula shows that the wave function cannot be exactly restored from the electron density. In terms of atomic orbitals this is reflected in the fact that in a multi-electron system all the orbitals filled by electrons contribute to the observed charge distribution. In order to separate the contribution of a single orbital, the experimentalists (Zuo et al. 1999) used a special technique critically analyzed in the subsequent discussion (Wang & Schwarz, 2000a, 2000b; Zuo et al. 2000). 
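The sign-restoration argument given a few sentences above is also easy to mimic numerically. In the sketch below (Python with NumPy; the n = 1 harmonic-oscillator state and the crude node-detection rule are assumptions chosen only for illustration) a one-dimensional bound-state wave function is recovered, up to an overall sign, from its ‘measured’ density:

import numpy as np

# Toy model: recover psi from |psi|^2 plus the location of its nodal points.
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
psi = x * np.exp(-x**2 / 2)                 # n = 1 oscillator state (up to normalization)
psi /= np.sqrt(np.sum(psi**2) * dx)

density = psi**2                            # what a density-type measurement would provide
magnitude = np.sqrt(density)                # |psi| recovered from the 'observed' density

# detect nodal points as (near-)zero local minima of the density
nodes = [i for i in range(1, len(x) - 1)
         if density[i] < 1e-12 and density[i] <= density[i-1] and density[i] <= density[i+1]]

signs = np.ones_like(x)                     # flip the sign after each node
for k, i in enumerate(nodes):
    signs[i:] = (-1.0) ** (k + 1)
psi_restored = signs * magnitude

print("node(s) found near x =", [round(float(x[i]), 2) for i in nodes])
print("overlap with the original wave function:",
      round(float(np.sum(psi_restored * psi) * dx), 4))

The overlap comes out as ±1, i.e. the wave function is recovered up to its overall sign. For a genuine many-electron density, equation (2) shows that such a reconstruction is no longer unique, which is exactly the point made in the text.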
The details of that experimental controversy are beyond the scope of the present study, however. To conclude this section, it should be stressed that the orbital approximation, as any other approximate or ‘exact’ theory, has its limitations. The applicability of the orbital picture reflects objective properties of atomic and molecular states, which are not universal. For instance, for some doubly (or multiply) excited states the electron motion is strongly correlated and the mean field picture does not hold even as a first-order approximation (for a bibliography see Prudov & Ostrovsky 1998). Some examples of the orbital picture breakdown were discussed previously (Ostrovsky 2001, 2003b); often they belong to atomic physics rather than to chemistry. However, the applicability domain is still large enough for orbitals to be ‘in the heart of most of quantum chemistry’. 3.6. Rejecting the existence of orbitals There are various possibilities to reject the existence of orbitals on philosophical grounds. To start with, some philosophical systems deny the reality of an objective material world, i.e. nature. Then the orbitals are rejected as a part of it. Another possibility is based on the philosophical distinction between properties and things, or properties and substances. It is impossible to object to statements like ‘An orbital as such is not observable; what is observable are its properties’. I would like to indicate only that there is nothing special about orbitals. Any physical experiment implies observing (measuring) some properties of the object of study. The object as such is never observed – if one does not hold to the naive view that observing something means seeing it with one’s own eyes. For instance, only the properties of molecules are observed, but not molecules as such – large molecules became accessible to some kind of experimental ‘viewing’ only recently, and in any case this is not viewing with one’s own eyes. Such a situation provides a basis for skepticism that could last for a long time, as the widely known example of the prominent physico-chemist Ostwald shows. Skepticism is a legitimate constituent of the scientific approach; the point is that there is no fundamental difference in this respect between orbitals and molecules (the idea of this particular analogy belongs to W.H.E. Schwarz). Yet another possible type of objection could be as follows. Imagine that someone attributes a peak in the photoelectron spectrum to the ionization from a particular electron orbital, and then quantitatively describes the peak position based on orbital calculations. A skeptic is not convinced but says that from the very beginning the scheme of calculations already presupposes the orbital picture. Again, this argument is not specific to orbitals or any other approximate scheme, but in fact refers to the conventional approach of physics. For instance, when the energy levels of a hydrogen atom are calculated, it is presumed that the stationary states and the energy levels exist and that they correspond to regular solutions of the Schrödinger equation. Note that the fact that the theory operates with occupied (actual states) and non-occupied (potential states) electron orbitals is also not specific to the orbital approximation. ‘Exact’ quantum mechanics considers a variety of stationary states of any quantum system (for instance, an atom or a molecule) that are only potentially populated. For an atom in the ground state, all the excited states are potential states that might be excited under external perturbation.
3.7. Further examples of approximations In this subsection we give two further examples of approximations that seem to be of interest in the present context. According to modern theory, a chemical substance as common as water is only approximately stable. "Indeed, let us consider the system consisting of ten electrons, ten protons, and eight neutrons. These constituents can produce a water molecule or a neon atom with an 18Ne nucleus" (Belyaev et al. 2001). The probability of such a molecular-nuclear transition from the water molecule to the neon atom is expected to be very small from general considerations. However, it is enhanced due to the presence of a particular resonance state in the 18Ne nucleus. At present it is difficult to evaluate the lifetime of water theoretically, and special experimental searches have not succeeded in detecting the reaction. Nevertheless, there is no rigorous conservation law that forbids such a decay, and, as a general rule of quantum mechanics, everything that is not forbidden by strict selection rules proceeds with some probability. In hydrodynamics the approximation of an ideal liquid has been employed for quite some time. It implies the neglect of liquid viscosity and hence of the energy dissipation emerging due to viscosity. Originally this approximation was inspired by its mathematical simplicity and beauty. In reality viscosity becomes important in the boundary layer along the surfaces that limit the liquid flow. Garcia-Ripoll and Perez-Garcia (2001) state, "John von Neumann noticed that most mathematical models of the day [around 1900] did not take viscosity into account and thus could not explain the features of real fluids. He coined the term ‘dry water’ to refer disrespectfully to those idealized models that did not care to take account of dissipation (R. Feynman, 1964). Bose-Einstein condensate represents an experimental realization of such a ‘dry fluid’ or superfluid". This example teaches us that approximations sometimes have a peculiar fate. Starting as mathematical playgrounds, they can eventually find a manifestation in unusual states of matter. This is yet another case of "the unreasonable effectiveness of mathematics in the natural sciences" (Wigner 1995). 4. The epistemological value of approximations A professor of theoretical physics at St. Petersburg State University used to say to his students, ‘Imagine how poor, scarce and insufficient our knowledge would be if we knew only exact wave functions’. At first glance, that appears paradoxical. Indeed, according to quantum theory, the wave function contains all the information about a physical system and allows calculating any physical observable. Nevertheless the saying contains an essential truth. Knowledge of only numerical values is insufficient for understanding, because explanations are most often cast in terms of the qualitative images induced by approximations. The term ‘explanation’ has several meanings. Quite often it is used to denote a deduction from a more general theory. However, it seems that the term ‘prediction’ is more appropriate than ‘explanation’ in this situation. In quantum measurements ‘explanation’ is often understood as a mapping from the quantum physics of the actual system onto the classical point of view of an observer. However, we believe that researchers in quantum mechanics develop a special kind of ‘quantum intuition’ that allows a direct understanding of quantum objects without appeal to classical analogues (see, e.g., Zakhar’ev 1996). 
The ‘exact’ equations for complicated physical systems provide only limited insight. A few general theorems can be proved rigorously, such as probability conservation for the Schrödinger equation, but only very restricted possibilities are available to create qualitative images and patterns. In our pursuit of exploring nature we need both quantitative information and qualitative understanding. It is useless to ask which of the two is more important; both aspects are essential. However, we cannot obtain both fruits in a single approach. When we make our numerical schemes more and more sophisticated, the physical meaning becomes non-evident and only numbers emerge from the computer black box. On the other hand, approximate models provide a qualitative and often semi-quantitative description, though not of the highest precision. Another example of very useful images created by approximations is the theory of chemical exchange reactions (without electronic transitions) viewed in terms of motion along a potential surface. This approach provides much understanding and is a quantitatively reliable tool, although it is based on an approximation, namely the Born-Oppenheimer approximation discussed in Section 3.1. My point is that in the preceding sentence it would be reasonable to replace ‘although’ by ‘because’. As pointed out by Del Re (2003), some researchers believe that the Born-Oppenheimer approximation has ‘no physical content’. My position is just the opposite: physical sense emerges in the framework of approximations. Modern researchers, when obtaining some numbers from their computers, frequently remain dissatisfied and seek the physical sense of the results. While it is difficult to provide a complete definition of what ‘physical sense’ means, it implies, to a significant extent, the capability to interpret the numerical results in terms of simple models and qualitative images. All this comes from approximations and models. Approximations and models are a fully legitimate part of a theory, and not its temporary, abominable, and shameful part. Every textbook in quantum mechanics includes some simple problems, such as bound states in one-dimensional potential wells, scattering on a potential barrier, the harmonic oscillator, the hydrogen atom, etc. Most of these problems are included not because they provide an accurate description of nature, but because they allow students to understand important qualitative quantum concepts, such as the shape of the bound-state wave function, the tunneling phenomenon, the above-barrier reflection, etc. The basic approximations and the simple model problems with easily grasped properties form an appropriate language for developing explanations of more complicated situations. Of course, as with every language, such explanations are addressed to a knowledgeable audience. The current progress in computer techniques makes the complementary relation between calculations and explanation even more important. Niels Bohr first put forward the idea of complementarity on the basis of physics, where the complementarity between coordinate and momentum is expressed by Heisenberg’s uncertainty principle. According to this principle one can measure with arbitrary precision either the coordinate of a particle or its momentum, but not both simultaneously. Bohr realized that this type of relation is very generic. He applied the complementarity concept to a broad variety of fields outside of physics, such as psychology, biology, and anthropology (Bohr 1999). 
This concept is epistemologically significant because it is about a very general pattern of relations between subject and object. As to the complementary pair of numerical calculations and explanations, numerical calculations seek to reproduce a physical object with the highest possible quantitative precision, whereas explanations appeal to a subject and rely on qualitative images (Ostrovsky 2001). This also means that an explanation appeals only to a community of researchers with a common background, which may be different in other communities. The complementary pair numerical calculations/explanations might be considered as a particular implementation of the more general pair quantity/quality. If one’s objective is to obtain the best numerical results, then approximations are something to avoid or to limit as much as possible in the course of scientific progress: fewer approximations provide better numerical output. However, the bare ‘exact’ equations for a complex system provide very limited insight and are barren ground for explanations. Explanatory concepts of high heuristic potential are born out of approximations. They inspire the intuition that is a powerful vehicle for the advancement of science. In Section 3 it was shown that key concepts of chemistry, such as molecular shape or molecular orbitals, directly emerge from approximations. If one seeks explanations, then dropping some approximations might hopelessly destroy the entire framework. This, of course, does not necessarily mean that the same set of explanatory approximations or models would be retained forever. On the path of historical progress the models could be substantially modified or even completely new models could be developed. However, models and approximations remain a substantial and inevitable part of explanations of complex systems, and not some temporary deficiency. 5. Conclusion Thus far I have discussed both objective and subjective features of approximations, and one might argue that approximations have either a subjective character or an objective one and that both cannot be true at the same time. However, the point is that these features are not manifested simultaneously and in the same sense. It is worthwhile to summarize my view once again. In many essential regards there is no basic difference between approximations and ‘exact’ equations. The natural sciences combine objective and subjective sides that are inseparable. On the one hand, the potential goal of science is to reflect nature in the most exact way, which means objectivity. On the other hand, science is created by humans and simply would not exist without the existence of subjects. Therefore science inevitably has subjective aspects. The technical aspects of the formulation of results and their dissemination, the particular ways of advancement in science, the existence of different although mostly complementary approaches – all these bear a strong flavor of subjectivity. Science cannot exist without such notions as understanding or intuition, which are clearly subjective. Science is a kind of interface between objects and subjects, and the same refers to its important part – approximations. The term ‘approximation’ belongs to the well-established and universally accepted terminology of the exact sciences, such that one should not change the terminology by replacing it with other terms, such as ‘theorem’. The latter has a different meaning and cannot substitute for ‘approximation’. 
It is important to develop a proper meaning of the term ‘approximation’ and to appreciate its significance in all aspects of science, including its ontological and epistemological implications. Now I summarize the main points of this work.

• A physical theory should not be blamed for using approximations because approximations are ubiquitous in the ‘exact’ sciences. Only invalid, physically (and mathematically) unjustified approximations discredit a theoretical scheme, and an approximate approach should not inappropriately be extended beyond its applicability domain. Acknowledging the approximate character of a theory or an approach cannot terminate a scientific or philosophical discourse, but is only the beginning. The failure to recognize the approximate character of a notion leads to fallacious absolutization and philosophical confusion.

• A valid approximation is not a researcher’s subjective and voluntaristic construction, but a reflection of nature’s features; it is not inferior to ‘exact’ equations. Approximations reflect the more qualitative side of nature, while ‘exact’ theories tend to characterize its quantitative side. Valid approximations are deeply rooted in nature; in some sense they are observable via characteristic features of natural phenomena.

• The hierarchy of approximations creates a path (and probably a unique one) to scientifically constructed qualitative images, notions, and patterns that emerge from ‘exact’ equations. By basing studies on approximations, semi-quantitative and qualitative approaches are developed, which are invaluable in science, particularly in chemistry. Thus, approximations are the most precious fruits of theory, which should be considered in the philosophy of science.

• The ‘exact’ quantitative approaches and the intuition-inspiring approximations form a complementary pair in the universal sense of Niels Bohr’s complementary relations in nature and society. In this dual relation, the quantitative results represent the more objective side of nature, while the qualitative approximation-induced images rest on the subjective side of the researchers’ interpretation of nature. Very often we progress in science via the development of approximate approaches.

The author is grateful to Yu. N. Demkov, J. B. Greenwood, G. Yu. Kashenock, W. H. E. Schwarz, and R. Vihalemm for useful discussions and to J. Schummer for careful reading of the manuscript and much useful advice.

[1]  Among the notable exceptions I indicate papers by Fock (1936, 1974), Pechenkin (1980), Ramsey (1997), Del Re (2000), and Friedrich (2004).
[2]  It should be recognized that the present author, as a practicing physicist, holds to realism and understands by ‘nature’ an objective reality, as opposed to the subjective observer.
[3]  The distinction between approximations and models is an interesting and sometimes subtle issue not pursued here. However, one aspect could be indicated: approximations are derivable from more general (i.e. more exact) theories, while models are constructed in order to grasp some important features of physical reality. From this point of view, the non-relativistic Schrödinger equation is an approximation (since it is derivable from the Dirac equation) rather than a model.
[4]  The Born-Oppenheimer and adiabatic approximations are often not properly distinguished in the literature. 
In the rigorous sense, the Born-Oppenheimer scheme implies expansion of the total (electronic and nuclear) molecular Hamiltonian in terms of a small parameter that proves to be (me/M)^(1/4), which is much larger than the mere ratio me/M. In the lowest order of the approximation, the atomic nuclei are localized near their equilibrium positions and their motion proceeds in a harmonic oscillator potential. Anharmonicity appears in the higher orders of the approximation. Thus, the genuine Born-Oppenheimer scheme is inconvenient when the strongly anharmonic vibrational motion close to the dissociation limit is considered. Moreover, the scheme is fully inapplicable for the treatment of atom-atom (or atom-molecule, or molecule-molecule) collisions. The adiabatic approximation is devoid of these deficiencies.
[5]  There are some other cases, not related directly to chemistry, where a composite quantum system exhibits some properties that are interpreted in terms of a shape. Some heavy atomic nuclei show rotational structures in their energy spectra, which is evidence of the non-spherical (ellipsoidal) shape of such nuclei. In the nuclei, all the constituent particles (nucleons) have comparable masses, and the spontaneous breaking of spherical symmetry cannot be explained via the Born-Oppenheimer approximation. A vibro-rotational structure was also found in the energy spectra of doubly excited atomic states (see, for instance, Prudov & Ostrovsky 1998 and the bibliography therein). Isolated atoms might have anisotropic properties that also do not rely on the Born-Oppenheimer approximation. For instance, the excited states of the hydrogen atom might possess an electric dipole moment (so-called Stark or parabolic states). For an arbitrary atom the states with a non-zero total angular momentum J and definite projection MJ are magnetic dipoles. The states with J > 1/2 have an electric quadrupole moment, etc. All these anisotropic properties are revealed by the application of weak external fields.
[6]  Note that the truncation of a basis set is an approximation.
[7]  A careless characterization of orbitals as ‘mathematical constructs’ sometimes appears even in the professional physics literature. The most recent example is Itatani et al. 2004. The wording (used cursorily in the abstract) is in contradiction to the content of the paper, which discusses a sophisticated experimental technique employed for the observation of orbitals.
[8]  In quantum mechanics a wave function might be expanded over different basis sets. It is said that the set of expansion coefficients provides a wave function representation in a given basis. Thus, representation is a rigorously defined notion of quantum theory. All the representations contain equivalent information on the wave function. They are related to each other by unitary transformations. Among the most frequently used representations are the coordinate, momentum, and energy representations; the latter employs a basis of eigenfunctions of energy, i.e. of the Hamiltonian operator.
[9]  Note that not only the outer (valence) orbitals, but also the inner-shell orbitals might be probed in this way.
[10]  The choice of this particular recent review-type paper is rather incidental, since observations and calculations of these types of phenomena have been carried out for decades.
[11]  The actual experiment might measure the charge density of an electron cloud, which is proportional to the probability density; see also Section 3.5. 
[12]  In the philosophical literature, some experiments are characterized as theory-laden, implying that they are not trustworthy. Actually almost all serious current experiments are strongly theory-laden. Of course, vicious circles are to be avoided and the applicability of theoretical formulations should be attentively controlled.
[13]  Thus the philosophical criticism of the observability of orbitals was met with skepticism in the physics community.
[14]  The configurational space is used to describe the motion of classical particles. For N particles it has dimensionality 3N. Nevertheless this does not preclude visualization of classical particle motion, because classical objects are sharply localized, in contrast to quantum particles spread out in space.

Belyaev, V.B.; Motovilov, A.K.; Miller, M.B.; Sermyagin, A.V.; Kuznetzov, I.V.; Sobolev, Yu.G.; Smolnikov, A.A.; Klimenko, A.A.; Osetrov, S.B. & Vasiliev, S.L.: 2001, ‘Search for Nuclear Reactions in Water Molecules’, Physics Letters B, 522, 222-6.
Bohr, N.: 1999, Collected Works, Vol. 10 (Complementarity beyond Physics), ed. D. Favrholdt, Elsevier, Amsterdam.
Brion, C.E.; Cooper, G.; Zheng, Y.; Litvinyuk, I.V. & McCarthy, I.E.: 2001, ‘Imaging of Orbital Electron Densities by Electron Momentum Spectroscopy – a Chemical Interpretation of the Binary (e, 2e) Reaction’, Chemical Physics, 70, 13-30.
Chung, K.T.: 2004, ‘Resonances in Atomic Photoionization’, Radiation Physics and Chemistry, 70, 83-94.
Del Re, G.: 2000, ‘Models and Analogies in Science’, Hyle – International Journal for Philosophy of Chemistry, 6, 5-15.
Del Re, G.: 2003, ‘Reaction Mechanisms and Chemical Explanation’, Annals of the New York Academy of Sciences, 988, 133-140.
Feng, R.; Sakai, Y.; Zheng, Y.; Cooper, G. & Brion, C.E.: 2000, ‘Orbital Imaging for the Valence Shell of Sulphur Dioxide: Comparison of EMS Measurements with Near Hartree-Fock Limit and Density Functional Theory’, Chemical Physics, 260, 29-43.
Feynman, R.P. & Leighton, R.B.: 1964, Feynman Lectures on Physics. Electromagnetism and Matter, Addison-Wesley, London.
Feynman, R.P.: 1992, The Character of Physical Laws, Penguin, New York.
Fock, V.: 1936, ‘Printzipial’noe Znachenie Priblizhennykh Metodov v Teoreticheskoi Fizike [Principle Significance of Approximate Methods in Theoretical Physics]’, Uspekhi Fizicheskikh Nauk, 16, 1070-83.
Fock, V.: 1974, ‘Printzipial’naya Rol’ Priblizhennykh Metodov v Fizike’ [Principle Role of Approximate Methods in Physics], in: Filosofskie voprosy fiziki [Philosophical problems in physics], Leningrad State University Publishing House, Leningrad, pp. 3-7.
Friedrich, B.: 2004, ‘Hasn’t it? A Commentary on Eric Scerri’s paper "Has Quantum Mechanics Explained the Periodic Table?", now Published under the Title "Just How Ab Initio is Ab Initio Quantum Chemistry?"’, Foundations of Chemistry, 6, 117-132.
Garcia-Ripoll, J.J. & Perez-Garcia, V.M.: 2001, ‘Vortex Bending and Tightly Packed Vortex Lattices in Bose-Einstein Condensates’, Physical Review A, 64, 053611 (1-7).
Garcia-Sucre, M. & Bunge, M.: 1981, ‘Geometry of a Quantum System’, International Journal of Quantum Chemistry, 19, 83-93.
Itatani, J.; Levesque, J.; Zeidler, D.; Niikura, H.; Pepin, H.; Kieffer, J.C.; Corkum, P.B. & Villeneuve, D.M.: 2004, ‘Tomographic Imaging of Molecular Orbitals’, Nature, 432, 867-71.
Hecht, E.: 2002, Optics, Addison-Wesley, London.
Jasper, A.W.; Kendrick, B.K.; Mead, C.A. 
& Truhlar, D.G.: 2004, ‘Non-Born-Oppenheimer Chemistry: Potential Surfaces, Couplings, and Dynamics’, in: Modern Trends in Chemical Reaction Dynamics: Experiment and Theory, Part I, World Scientific, Singapore, pp. 329-391. Litvinyuk, I.V.; Zheng, Y. & Brion, C.E.: 2000, ‘Valence Shell Orbital Imaging in Adamantane by Electron Momentum Spectroscopy and Quantum Chemical Calculations’, Chemical Physics, 253, 41-50. Migdal, A.B.: 1989, Qualitative Methods in Quantum Theory, Addison-Wesley, New York (1st edition, 1977). Ogilvie, J.F.: 1990, ‘The Origin of Chemical Bonds – There are no Such Things as Orbitals’, Journal of Chemical Education, 67, 280-289. Ostrovsky, V.N.: 2001, ‘What and how Physics Contributes to Understanding the Periodic Law?’, Foundations of Chemistry, 3, 145-182. Ostrovsky, V.N.: 2003a, ‘Physical Explanation of the Periodic Table’, Annals of the New York Academy of Sciences, 988, 182-192. Ostrovsky, V.N.: 2003b, ‘Modern Quantum Look at the Periodic Table of Elements’, in: E.J. Brändas & E.S. Kryachko (eds.), Fundamental World of Quantum Chemistry. A Tribute to the Memory of Per-Olov Löwdin, Vol. 2, Kluwer, Dordrecht, pp. 631-74. Ostrovsky, V.N.: 2004, ‘The Periodic Table and Quantum Physics’, in: D.H. Rouvray & R.B. King (eds.), The Periodic Table: Into the 21st Century, Research Studies Press, Baldock, UK, pp. 331-70. Pechenkin, A.A.: 1980, ‘Priblizhennye Metody v Teorii Fizicheskogo Znaniya (Metodologicheskie Problemy)’ [Approximate Methods in the Theory of Physical Knowledge (Methodological Problems)], in: Fizicheskaya teoriya [Physical Theory], ed. Nauka, Moscow, pp. 136-153. Prudov, N.V. & Ostrovsky, V.N.: 1998, ‘Vibrorotational Structure in Asymmetric Doubly-Excited States’, Physical Review Letters, 81, 285-8. Pyykko, P.: 1988, ‘Relativistic Effects in Structural Chemistry’, Chemical Reviews, 88, 563-94. Ramsey, J.L.: 1997, ‘Molecular Shape, Reduction, Explanation and Approximate Concepts’, Synthese, 111, 233-51. Scerri, E.R.: 2000, ‘Have Orbitals Really been Observed?’, Journal of Chemical Education, 77, 1492-1494 & 79, 310. Scerri, E.R.: 2001, ‘The Recently Claimed Observation of Atomic Orbitals and Some Related Philosophical Issues’, Philosophy of Sciences, 68, (Proceedings), S76-88. Scerri, E.R.: 2003, ‘Löwdin’s Remarks on the Aufbau Principle and a Philosopher’s View of Ab Initio Quantum Chemistry’, in: E.J. Brändas & E.S. Kryachko (eds.), Fundamental World of Quantum Chemistry. A Tribute to the Memory of Per-Olov Löwdin, Vol. 2, Kluwer, Dordrecht, pp. 675-94. Wang, S.G. & Schwarz, W.H.E.: 2000a, ‘On Closed Shell Interactions, Polar Covalence, d Shell Holes and Direct Images of Orbitals: the Case of Cuprite’, Angewandte Chemie International Edition, 39, 1757-61. Wang, S.G. & Schwarz, W.H.E.: 2000b, ‘Final comments on the discussions of "the case of cuprite"’, Angewandte Chemie International Edition, 39, 3794-6. Wigner, E.P.: 1995, ‘The unreasonable effectiveness of mathematics in natural sciences’, in: E.P. Wigner, Philosophical Reflections and Syntheses, Springer, Berlin. Zakhar’ev, B.N., 1996, Uroki Kvantovoi Intuitzii [Lessons of Quantum Intuition], Joint Institute for Nuclear Research, Dubna. Zheng, Y.; Rolke, J.; Cooper, G. & Brion C.E.: 2002, ‘Valence Orbital Momentum Distributions for Dimethyl Sulfide: EMS Measurements and Comparison with Near-Hartree-Fock Limit and Density Functional Theory Calculations’, Journal of Electron Spectroscopy, 123, 377-88. Zuo, J.M.; Kim, M.; O’Keeffe, M. 
& Spence, J.C.H.: 1999 ‘Direct Observation of d-Orbital Holes and Cu-Cu Bonding in Cu2O’, Nature, 401, 49-52. Zuo, J.M.; O’Keeffe, M.; Kim, M. & Spence, J.C.H.: 2000, ‘On Closed Shell Interactions, Polar Covalence, d Shell Holes and Direct Images of Orbitals: the Case of Cuprite. Response to the Essay by S.G. Wang and W.H.E. Schwarz’, Angewandte Chemie International Edition, 39, 3791-4. Valentin N. Ostrovsky: V. Fock Institute of Physics, St Petersburg State University, 198504 St Petersburg, Russia; Valentin.Ostrovsky@pobox.spbu.ru Copyright © 2005 by HYLE and Valentin N. Ostrovsky
The quantum numbers and Pauli's exclusion principle
Each state obtained as a solution of the Schrödinger equation of the hydrogen atom is characterised by a unique combination of the quantum numbers n, l, and m. We have seen that each state can be populated by up to two electrons. Even if we reduce the energy of the system to near absolute zero temperature, not all of the electrons converge on the n=1 state but remain in higher states because the n=1 state can't take more than two electrons. This has led to Pauli's exclusion principle: No two electrons can share the same state (i.e. have the same quantum numbers) at the same time. Therefore, we need another quantum number, spin, to distinguish the two electrons sharing the same hydrogen (n,l,m) state. Here is an overview of the quantum numbers:
Fig.: The main quantum number n determines the size of the probability cloud.
The main quantum number, n, determines the energy of the state (exactly in the case of hydrogen-like atoms; with minor corrections involving the other quantum numbers for multi-electron atoms). It also determines the size of the spherical envelope of the wave function: The higher n, the further the probability cloud stretches out into space. Possible values of n are the positive integers.
Fig.: The angular momentum quantum number l determines the ellipticity of the probability cloud.
The angular momentum quantum number, l, governs the ellipticity of the probability cloud and the number of planar nodes going through the nucleus. In the Fig., red and blue areas indicate regions with opposite sign of the wave function and the black dot represents the position of the nucleus. For a given n, the possible values are integers from 0 up to n-1.
Fig.: The magnetic quantum number m determines the orientation of the probability cloud.
The magnetic quantum number, m, is related to the magnetic moment of any non-spherical electron (probability) distribution. Therefore, the different m states relate to differently orientated orbitals. Which one is which depends on the preferential axis, i.e. the orientation of an external field, if any. All states with the same (n,l) but different m have the same shape. In the Fig., the third one is the same shape as the others but orientated perpendicular to the screen. The values of m for a given (n,l) range from -l to +l.
Fig.: The spin quantum number s distinguishes the two electrons that can fit into an (n,l,m) state.
The spin (quantum number), s, is similar to l in that it refers to an angular momentum component. It can be visualised as the rotation of the electron, thought of as a particle, around its own axis rather than around the nucleus. This is, however, a bit of a crutch: the electron's behaviour has aspects of both particle (such as scattering) and wave (such as diffraction), and the picture of a spinning particle does not give accurate results if carried through quantitatively. The possible values of the spin quantum number are -½ and +½ for electrons.
Bosons and Fermions
Classical objects in great numbers can be treated by classical statistics, also known as Boltzmann statistics. This is the basis of properties such as temperature and pressure, which have little meaning for an individual submicroscopic particle but are properties of an ensemble of such particles. See 2nd year Thermal Physics for a discussion of Boltzmann statistics. Quantum-mechanical particles obey a different sort of statistics which takes into account the discrete, quantised nature of physical properties on very small length scales. 
When the size of the objects making up the ensemble increases, the quantum statistics approach the classical limit, which is the familiar Boltzmann statistics. The classical description is not wrong, but it is not accurate enough to deal with the smallest of objects. There are two different quantum statistics (the gory mathematical detail will be left for 3rd year Condensed Matter Physics). Fermi-Dirac statistics applies to particles with half-integer spin (such as electrons). It is only for these fermions that the exclusion principle applies. Bosons, particles with integer spin, have their own quantum statistics, Bose-Einstein statistics. Bosons have some quite counter-intuitive properties at extremely low temperatures which follow from the fact that all of them can occupy the same lowest-energy state. The helium isotope 4He, for example, is a superfluid, i.e. a fluid that flows without friction. This surprising behaviour is due to the condensation of the 4He atoms in the lowest energy state, a process known as Bose-Einstein condensation.
Directional quantisation
The components of angular momentum are quantised. This applies both to orbital angular momentum (quantum number l) and to spin (quantum number s).
\Delta p\Delta x\approx\hbar; \Delta E\Delta t\approx\hbar; \Delta L_z\Delta\phi\approx\hbar
The uncertainty principle applies to all products of observables whose units combine to [J s] = [kg m² s⁻¹], i.e. the dimension of Planck's constant. The best-known uncertainty principle is the one linking the uncertainties of position, x, and momentum, p. We have also used the energy-time uncertainty principle when discussing energy bands.
Fig.: Definition of angular momentum.
Angular momentum is a vector defined by the cross product \vec{L}=m\vec{r}\times\frac{{\rm d}\vec{r}}{{\rm d}t}. Judging by its units, [kg][m]×[m s⁻¹]=[kg m² s⁻¹], the complementary uncertainty should be dimensionless. It is in fact the uncertainty of the angle of rotation, φ. The uncertainty of the angle must be limited to 2π: the angular momentum has to point somewhere on a circular orbit. Therefore, the uncertainty of the component of the angular momentum perpendicular to the plane of motion must be of the order of Planck's constant. We may therefore conclude that values of Lz are quantised in multiples of h/2π.
Fig.: Directional quantisation.
There are 2l+1 possible orientations of the angular momentum, -l\hbar, -(l-1)\hbar, \dots, (l-1)\hbar, l\hbar, and two possible orientations of the spin, \pm\frac{\hbar}{2}, of an electron. The Fig. shows the situation for the spin: The spin aligns at an angle with the z axis (which could be a preferential axis generated by applying a magnetic field or by a chemical bond). There are two possible orientations (these are often called parallel or up and anti-parallel or down although that's exactly what they are not!). The angle with the z axis is given by the need to distribute the 2l+1 states uniformly over the whole angular range from the +z to the -z direction. For example, in the case of an l=1 angular momentum state, there are three possible orientations, one of which is perpendicular to the z axis, and the other two are at 45° to the positive and negative z axis, respectively. In no case is the angular momentum vector exactly (anti-)parallel to the z axis. If it were, we would know both the angular momentum and the angle precisely, which would defy the uncertainty principle. 
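As an illustration of the counting rules above, the following short sketch (Python is used here purely for illustration; nothing beyond the standard library is assumed) enumerates the allowed combinations of quantum numbers for a given n, checks that each shell can hold 2n² electrons, and evaluates the orientation angles of the angular momentum for l=1 using the standard quantum-mechanical relation cos θ = m/√(l(l+1)) (not derived in the text above); for l=1 this reproduces the 45°, 90° and 135° orientations just described.

```python
import math

def shell_states(n):
    """All allowed (n, l, m, s) combinations for a given main quantum number n."""
    states = []
    for l in range(0, n):              # l = 0, 1, ..., n-1
        for m in range(-l, l + 1):     # m = -l, ..., +l  (2l+1 values)
            for s in (-0.5, +0.5):     # two spin orientations per (n, l, m) state
                states.append((n, l, m, s))
    return states

for n in (1, 2, 3):
    states = shell_states(n)
    assert len(states) == 2 * n**2     # Pauli principle: a shell holds 2n^2 electrons
    print(f"n = {n}: {len(states)} states")

# Orientation of the angular momentum vector with respect to the z axis,
# cos(theta) = m / sqrt(l(l+1)); for l = 1 this gives 135, 90 and 45 degrees.
l = 1
for m in range(-l, l + 1):
    theta = math.degrees(math.acos(m / math.sqrt(l * (l + 1))))
    print(f"m = {m:+d}: theta = {theta:.0f} degrees")
```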
An experimental observation of directional quantisation and spin was devised in the 1920s: the Stern-Gerlach experiment.
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, and where quantum effects cannot be ignored,[1] such as near compact astrophysical objects where the effects of gravity are strong. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics. On the other hand, the other three fundamental forces of physics are described within the framework of quantum mechanics and quantum field theory, radically different formalisms for describing physical phenomena.[2] It is sometimes argued that a quantum mechanical description of gravity is necessary on the grounds that one cannot consistently couple a classical system to a quantum one.[3][4]:11–12 While a quantum theory of gravity may be needed in order to reconcile general relativity with the principles of quantum mechanics, difficulties arise when one attempts to apply the usual prescriptions of quantum field theory to the force of gravity via graviton bosons.[5] The problem is that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity, the most popular approaches being string theory and loop quantum gravity.[6] Although some quantum gravity theories, such as string theory, try to unify gravity with the other fundamental forces, others, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Strictly speaking, the aim of quantum gravity is only to describe the quantum behavior of the gravitational field and should not be confused with the objective of unifying all fundamental interactions into a single mathematical framework. A theory of quantum gravity that is also a grand unification of all known interactions is sometimes referred to as the Theory of Everything (TOE). While any substantial improvement in the present understanding of gravity would aid further work towards unification, the study of quantum gravity is a field in its own right, with various branches having different approaches to unification. One of the difficulties of quantum gravity is that quantum gravitational effects are expected to become important only near the Planck scale, a scale far smaller in distance, and equivalently far larger in energy, than those currently accessible at high energy particle accelerators. Quantum gravity is therefore a primarily theoretical enterprise, and attempts to see quantum gravitational effects in existing experiments remain speculative.[7][8]
Diagram showing where quantum gravity sits in the hierarchy of physics theories
Quantum mechanics and general relativity
The difficulty lies in harmonizing the theory of general relativity, which describes gravitation and applies to large-scale structures such as stars, planets, and galaxies, with quantum mechanics, which describes the other three fundamental forces acting on the atomic scale; when combined, the two frameworks can seem fundamentally incompatible. 
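To give a rough sense of the scales involved, the following short calculation is a minimal sketch (Python, standard library only; the 13 TeV figure is used merely as an illustrative order of magnitude for current collider energies). It derives the Planck length and Planck energy from ħ, G and c and compares the latter with collider energies, illustrating why quantum gravitational effects are expected to lie roughly fifteen orders of magnitude beyond the reach of present accelerators.

```python
import math

# Physical constants in SI units (values rounded)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m s^-1
eV   = 1.602176634e-19   # one electronvolt in joules

l_planck = math.sqrt(hbar * G / c**3)    # Planck length in metres (~1.6e-35 m)
E_planck = math.sqrt(hbar * c**5 / G)    # Planck energy in joules (~2e9 J)
E_planck_GeV = E_planck / eV / 1e9       # the same energy in GeV (~1.2e19 GeV)

lhc_GeV = 1.3e4                          # ~13 TeV: illustrative collider energy
print(f"Planck length ~ {l_planck:.2e} m")
print(f"Planck energy ~ {E_planck_GeV:.2e} GeV")
print(f"Planck energy / collider energy ~ {E_planck_GeV / lhc_GeV:.1e}")
```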
On the other hand, it has been argued that the structure of general relativity essentially follows inevitably from the quantum mechanics of interacting theoretical spin-2 massless particles called gravitons.[11][12][13][14][15] No concrete proof of gravitons exists, but quantized theories of matter may necessitate their existence.[16] The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity as well. This hypothetical particle is known as the graviton. The predicted find would result in the classification of the graviton as a force particle similar to the photon of the electromagnetic interaction. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. These include string theory, superstring theory, and M-theory. Detection of gravitons would validate these various lines of research to unify quantum mechanics and relativity theory. The dilaton made its first appearance in Kaluza–Klein theory, a five-dimensional theory that combined gravitation and electromagnetism. It also appears in string theory. More recently, however, it has become central to the lower-dimensional many-body gravity problem[19] based on the field-theoretic approach of Roman Jackiw. The impetus arose from the fact that complete analytical solutions for the metric of a covariant N-body system have proven elusive in general relativity. To simplify the problem, the number of dimensions was lowered to 1+1, i.e. one spatial dimension and one temporal dimension. This model problem, known as R=T theory,[20] as opposed to the general G=T theory, was amenable to exact solutions in terms of a generalization of the Lambert W function. In addition, the field equation governing the dilaton, derived from differential geometry, takes the form of a Schrödinger equation and is therefore amenable to quantization.[21] This combines gravity, quantization, and even the electromagnetic interaction, promising ingredients of a fundamental physical theory. This outcome revealed a previously unrecognized natural link between general relativity and quantum mechanics. It is not yet clear how to generalize this theory to 3+1 dimensions. However, a recent derivation in 3+1 dimensions under the right coordinate conditions yields a formulation similar to the earlier 1+1 case: a dilaton field governed by the logarithmic Schrödinger equation[22] that is seen in condensed matter physics and superfluids. The field equations are amenable to such a generalization, as shown with the inclusion of a one-graviton process,[23] and yield the correct Newtonian limit in d dimensions, but only with a dilaton. Furthermore, the apparent resemblance between the dilaton and the Higgs boson has prompted speculation,[24] though more experimentation is needed to resolve the relationship between these two particles. Finally, since this theory can combine gravitational, electromagnetic, and quantum effects, their coupling could potentially lead to a means of vindicating the theory through cosmology and, perhaps, experiment.
Nonrenormalizability of gravity
However, gravity is perturbatively nonrenormalizable.[4]:xxxvi–xxxviii;211–212[25] For a quantum field theory to be well-defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. 
For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale.
Quantum gravity as an effective field theory
Spacetime background dependence
String theory
Background independent theories
Semi-classical quantum gravity
Problem of Time
A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time.[30] In contrast, general relativity treats time as a dynamical variable which interacts directly with matter and moreover requires the Hamiltonian constraint to vanish,[31] removing any possibility of employing a notion of time similar to that in quantum theory.
Candidate theories
There are a number of proposed quantum gravity theories.[32] Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.[33][34]
String theory
One suggested starting point is ordinary quantum field theories which, after all, are successful in describing the other three basic fundamental forces in the context of the standard model of elementary particle physics. However, while this leads to an acceptable effective (quantum) field theory of gravity at low energies,[27] gravity turns out to be much more problematic at higher energies. For ordinary field theories such as quantum electrodynamics, a technique known as renormalization is an integral part of deriving predictions which take into account higher-energy contributions,[35] but gravity turns out to be nonrenormalizable: at high energies, applying the recipes of ordinary quantum field theory yields models that are devoid of all predictive power.[36]
Loop quantum gravity
Simple spin network of the type used in loop quantum gravity
The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced in 1964 by Roger Penrose in abstract form, as a way to set up an intrinsically quantum mechanical model of spacetime[43], and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields.[44][45] In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.[46][47][48][49]
Other approaches
Experimental tests
See also
5. ^ a b Zee, Anthony (2010). Quantum Field Theory in a Nutshell (second ed.). Princeton University Press. pp. 172, 434–435. ISBN 978-0-691-14034-6. OCLC 659549695.  10. ^ Wald, Robert M. (1994). Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics. 
University of Chicago Press. ISBN 0-226-87027-8.  16. ^ Charles Ginenthal. "Newton, Einstein, and Velikovsky".  17. ^ Weinberg, Steven; Witten, Edward (1980). "Limits on massless particles". Physics Letters B. 96: 59–62. doi:10.1016/0370-2693(80)90212-9.  18. ^ Horowitz, Gary T.; Polchinski, Joseph. "Gauge/gravity duality". In Oriti, Daniele. Approaches to Quantum Gravity. Cambridge University Press. arXiv:gr-qc/0602037 . ISBN 9780511575549. OCLC 873715753.  22. ^ Scott, T.C.; Zhang, Xiangdong; Mann, Robert; Fee, G.J. (2016). "Canonical reduction for dilatonic gravity in 3 + 1 dimensions". Physical Review D. 93 (8): 084017. arXiv:1605.03431 . Bibcode:2016PhRvD..93h4017S. doi:10.1103/PhysRevD.93.084017.  23. ^ Mann, R B; Ohta, T (1997). "Exact solution for the metric and the motion of two bodies in (1+1)-dimensional gravity". Phys. Rev. D. 55 (8): 4723–4747. arXiv:gr-qc/9611008 . Bibcode:1997PhRvD..55.4723M. doi:10.1103/PhysRevD.55.4723.  24. ^ Bellazzini, B.; Csaki, C.; Hubisz, J.; Serra, J.; Terning, J. (2013). "A higgs-like dilaton". Eur. Phys. J. C. 73 (2): 2333. arXiv:1209.3299 . Bibcode:2013EPJC...73.2333B. doi:10.1140/epjc/s10052-013-2333-x.  25. ^ Hamber, H. W. (2009). Quantum Gravitation – The Feynman Path Integral Approach. Springer Publishing. ISBN 978-3-540-85292-6.  26. ^ Distler, Jacques (2005-09-01). "Motivation". Retrieved 2018-02-24.  27. ^ a b c Donoghue, John F. (editor) (1995). "Introduction to the Effective Field Theory Description of Gravity". In Cornet, Fernando. Effective Theories: Proceedings of the Advanced School, Almunecar, Spain, 26 June–1 July 1995. Singapore: World Scientific. arXiv:gr-qc/9512024 . Bibcode:1995gr.qc....12024D. ISBN 981-02-2908-9.  29. ^ Smolin, Lee (2001). Three Roads to Quantum Gravity. Basic Books. pp. 20–25. ISBN 0-465-07835-4.  Pages 220–226 are annotated references and guide for further reading. 31. ^ Novello, Mario; Bergliaffa, Santiago E. (2003-06-11). Cosmology and Gravitation: Xth Brazilian School of Cosmology and Gravitation; 25th Anniversary (1977–2002), Mangaratiba, Rio de Janeiro, Brazil,. Springer Science & Business Media. p. 95. ISBN 978-0-7354-0131-0.  37. ^ An accessible introduction at the undergraduate level can be found in Zwiebach, Barton (2004). A First Course in String Theory. Cambridge University Press. ISBN 0-521-83143-1. , and more complete overviews in Polchinski, Joseph (1998). String Theory Vol. I: An Introduction to the Bosonic String. Cambridge University Press. ISBN 0-521-63303-6.  and Polchinski, Joseph (1998b). String Theory Vol. II: Superstring Theory and Beyond. Cambridge University Press. ISBN 0-521-63304-4.  43. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1054. ISBN 1-57955-008-8.  46. ^ Thiemann, Thomas (2006). "Loop Quantum Gravity: An Inside View". Approaches to Fundamental Physics. Lecture Notes in Physics. 721: 185–263. arXiv:hep-th/0608210 . Bibcode:2007LNP...721..185T. doi:10.1007/978-3-540-71117-9_10. ISBN 978-3-540-71115-5.  50. ^ Rovelli, Carlo (2004). Quantum Gravity. Cambridge University Press. ISBN 0-521-71596-2.  51. ^ Isham, Christopher J. (1994). "Prima facie questions in quantum gravity". In Ehlers, Jürgen; Friedrich, Helmut. Canonical Gravity: From Classical to Quantum. Springer. arXiv:gr-qc/9310031 . Bibcode:1994LNP...434....1I. doi:10.1007/3-540-58339-4_13. ISBN 3-540-58339-4.  55. ^ See ch. 33 in Penrose 2004 and references therein. 56. ^ Aastrup, J.; Grimstrup, J. M. (27 Apr 2015). "Quantum Holonomy Theory". Fortschritte der Physik. 64 (10): 783. 
arXiv:1504.07100 . Bibcode:2016ForPh..64..783A. doi:10.1002/prop.201600073.  58. ^ Hossenfelder, Sabine (2010). "Experimental Search for Quantum Gravity Chapter 5". "Classical and Quantum Gravity: Theory, Analysis and Applications," Edited by V. R. Frignanni, Nova Publishers. 5 (2011). arXiv:1010.3420 . Bibcode:2010arXiv1010.3420H.
Further reading
External links
Friday, March 30, 2012 Jonathan Haidt is on a roll: Consider this gem: I would say that's true for scientific analysis too. Anonymous said... Um, no. If by "scientific analysis" you are referring to economics, then perhaps you are right. Real science, however, has laws that are sacred (think Newton's Law of Motion, Avogadro's Law, Schrödinger equation, etc.) and proposing they are surrounded by a ring of ignorance puts you in the same league as the moon landing denialists, intelligent design advocates, etc. Eric Falkenstein said... I don't want to get too far into semantics, but I think the way he's using sacred here is not which assumption are used, but rather, which somewhat debatable assumptions are used. As virtually no sane person disputes the assumptions you mention, they aren't sacred facts, but simple facts. Anonymous said... The quote makes no sense whatsoever. If that's what you mean by a roll, then go for it.
Science and the Denial of Animal Consciousness Mind-Matter for Animals Matters By David Olivier and Estiva Reus This article was written to be distributed at the International Conference on Animal Sentience organized in March 2005 by Compassion in World Farming. A poster was presented on the same theme. This text has been published in French and in English by the Cahiers antispécistes and also in English in the Journal Between the Species, v.13, #7 (2007). Animal people are usually confident that Cartesianism is something of the past and that modern science clearly establishes that animals are sentient beings. But actually the scientific status of sentience is anything but firmly established. Not only is the subjective point of view absent from current science; it is precluded by construction from our fundamental realms of knowledge. Physics — the mother-science once we reject Cartesian dualism — is currently unable to include sentience in its account of the world. A large part of the philosophy of mind describes a mindless mind, from which subjectivity — feeling, qualia — has been stripped, leaving only in place functional relations. This situation paves the way for discourses in which sentience seems to escape the realm of knowledge to fall into that of private beliefs, which individuals can choose as freely as their religion. This is a real obstacle to having animal sentience taken seriously; as such it has been largely underestimated. We believe it necessary for the movement for animals to understand that it cannot by-pass the “mind-matter” problem. We must not allow the existence and relevance of animal sentience to be denied in the name of science. One path we are contemplating is a “Declaration on Sentience” in which scientists and other thinkers would subscribe to the following assertion: sentience is an objective reality of the world and belongs to the realm of science. Despite the current intractability of the “mind-matter” problem, we do have something on our side. Although we cannot prove the reality of sentience, we can show it impossible for anyone to disbelieve in it — just as no one can really disbelieve in the reality of the physical world. We thus have enough reasons to reject the main ways in which sentience is denied or dismissed in many realms of current philosophy or science. These reasons are based on our own situation as sentient and deliberative beings. For the main currents of philosophy — and for common morality — sentience is a necessary condition for a being to be a moral patient (for utilitarianism it is also a sufficient condition). The vividness with which we are conscious of a being's emotions determines the amount of attention we will give her. A decisive factor for bringing humans to treat other animals ethically is thus the unrestricted recognition of their possession of a mental life, of the fact that they think, desire, feel and so on. The movement for animals is confident in the support it can obtain in this respect from both modern science and modern philosophy: the Cartesian body/soul dualism is almost unanimously rejected, and no one nowadays defends outright the animal-machine doctrine. What is the basis of this confidence in the solidity of our knowledge of sentience? It is the idea that common sense, supplemented with scientific data, no longer leaves any doubt about the reality of consciousness. In a sense, this view is justified. 
We do have an intuitive knowledge of the existence of sentience in others, even if our confidence in asserting its presence dwindles as we move to beings very different from ourselves, such as mussels and jellyfish. The starting point of our knowledge and beliefs about sentience is the personal experience we have of it, i.e. the fact that we ourselves have feelings. Our attribution of a mental life to others is based on analogy, on the physical and behavioral similarities between those beings and ourselves. In this respect, regarding both the physiological and the behavioral levels, the development of the different branches of science undeniably brings considerable support to the assertion of animal sentience. Three branches are particularly significant: • The theory of evolution has evidenced the kinship of all animals (including humans), providing an explanation for both their physical and mental similarities1. • Ethology, with methods that grow ever closer to those of human psychology and sociology, has gathered data on individual and collective behaviors of animals, including cultural behaviors. • Neurobiology has established an ever more precise mapping between parts of the brain and faculties of perception, emotion or action, and has developed comparisons between the nervous systems of animals of different species. All of this is true; nonetheless, the favorable consequences of these scientific results for the animals remain uncertain, for two reasons that have to do with the question of knowledge itself: 1. The first reason is due to the uncertain status generally ascribed to ethics. Science and ethics are generally perceived as being radically different in nature. Science uncovers objective truths, such that are true for all and hold whether or not anyone comes to know them. To ethics on the other hand is often ascribed a lower status. The validity of ethical prescriptions tends to be seen as purely relative, dependent on the subject who defends them. Ethical assertions, as opposed to scientific assertions, are not seen as objective, as having a truth value, beyond the different and incompatible beliefs of different persons about what is right and wrong. Because each ethical prescription is phrased as a universal prescription, and different subjects utter different and incompatible ethical prescriptions, many people fall back on a relativistic point of view. This situation contrasts with the attitude generally accepted concerning science, where the existence of different and incompatible theories is usually seen as an indication of the fact that the truth, necessarily single, simply remains to be discovered. 2. The second reason is that the phenomenon of consciousness is very poorly accounted for at the fundamental levels of our knowledge. Not only is sentience absent from the accounts they give of the world: it is precluded by construction, or, at best, “included” as a fake consciousness or a superfluous consciousness. This void is a permanent threat to the recognition of animal sentience, the existence of which either is flatly denied, or seen as an undecidable and meaningless issue — this last position being often tantamount to its outright denial when it comes to the practical treatment of animals. Alternatively, animal sentience is seen as a non-scientific issue, and hence expelled, much like ethics, from the realm of truth, of objectivity, and confined to that of “personal beliefs”. 
This situation is one effect of our incapacity to correctly deal with the articulation between the mental and non-mental aspects of reality. The mind-matter problem remains currently unsolved and appears in many ways inextricable. Does that leave us completely helpless? In this article, we argue that this is not the case. We believe that there are arguments that allow us to reject the main ways in which consciousness is conjured away in certain branches of philosophy and of science. We have the means to assert that the manner in which these branches deal with sentience cannot really be accepted by anyone. It is important to bring this point to attention, even if we are unable to propose any viable alternative: by gaining recognition for the shortcomings of these theories, we can prevent the disingenuous use of their unsatisfactory treatment of sentience — a treatment that is actually unsatisfactory and unacceptable to anyone — for the purpose of brushing aside all serious consideration for animal sentience. The reasons we are about to put forward in order to reject as unacceptable the manner in which sentience is currently dealt with in certain scientific and philosophical approaches are not a demonstration. Our conclusions are not deduced from known facts by way of rules of logic. But that does not make our argument weak. It can be summed up in the following way: It follows from our condition as sentient, deliberative beings that we have a certain set of inescapable beliefs. The way we are in the world makes it the case that we necessarily hold certain things to be true. Consequently, any theory that is incompatible with these inescapable beliefs cannot be accepted by us as science, as true assertions about the world. After explaining what these inescapable beliefs are, we will show how they are not satisfied by the prevailing or very common conceptions in several branches of knowledge. We will end with an appeal for the animal movement to pay more attention to the mind-matter problem, and to search for ways of approaching it that are favorable to the full recognition of animal sentience. Inescapable beliefs As conscious beings capable of movement, we are brought to make choices as to what actions we will perform among those that appear possible to us. For this, we deliberate and make decisions. We cannot refrain from doing this. As soon as we perceive several paths as open to us, we need reasons to follow one of them rather than the others. This situation is not specific to human beings. Our situation as sentient, deliberative beings — beings that have to choose their acts — implies for us a certain number of inescapable beliefs: we necessarily believe (1) that there is a right answer to the question “What am I to do?”, (2) that there is a world beyond our own selves, and (3) that our deliberations and decisions determine our acts. There is a right answer to the question “What am I to do?”. To deliberate means to search for the right (i.e. correct, true) answer to the question “What am I to do2?”. Ethics can be defined as the theory of the true answer to this question3. Since several different courses of action appear possible to us, and since it appears that the course that will actually be taken is dependent upon our decision, we cannot avoid asking ourselves “What am I to do?”. To search for the correct answer to this question necessarily implies the assumption that there is such a correct answer, whether we will actually find it or not. Otherwise, it would be meaningless to search for it. 
And it is impossible for us not to search for it, since we do have to decide what we are to do. We thus necessarily believe in the reality of an objective truth value for prescriptive assertions, just as we believe in the reality of an objective truth value for the descriptive assertions science deals with. The decisions we take when we are in a situation in which we must decide are not right simply in virtue of their being our decisions; if such was the case, it would be impossible for us to choose. We can regret a decision we have taken, which implies that we believe our choice — our answer to the question “What am I to do?” — to have been wrong, even though it was our answer. We also often feel uncertain about the soundness of the very principles we base our judgements upon. These regrets and doubts imply we attempt to compare our own judgements to other judgements — the objectively correct ones — that we believe exist despite our failure to discover them. We believe prescriptive assertions to have a truth value, like descriptive assertions. While it is possible to profess ethical relativism4, it is impossible to really believe in it. There is a world beyond our own selves: a non-mental reality and other minds. We act because we believe that what we do will change something in the world. The rabbit who runs from the fox towards a burrow believes in the existence of the burrow and in the efficiency of running in getting away from the fox; or, at least, in its efficiency in appeasing his own fear. We believe in the existence of a physical world, at least in the minimal sense of a basis for a causal chain linking our actions to some effect on the feelings of some sentient being — be it only on our own future feelings. We believe in the existence of a non-mental reality and in that of a mental reality, interacting parts of the same world. We believe in causal relations that allow us to affect the other parts of this world through our actions. Because this conviction is inherent in all sentient, deliberative beings, we cannot believe solipsism5 to be true. The only support for solipsism is the fact that our own sentience is the sole thing we have direct knowledge of. This seems to allow the idea that “all that exists is my own mind”. But if we seriously want to build on the principle following which “I can believe to be true only what I feel”, then the only defensible theory is instant solipsism. For I experience no more than my present feelings. But it is impossible for us to believe that the only thing that exists is our present mind. To deliberate would be pointless if we did not believe in the effect of our decisions and actions at least on our future feelings, even though we do not experience these at the moment of the deliberation. We thus necessarily believe in the existence of subjectivities other than our own present mind. Neither can we believe that it is in principle impossible to determine from outside that a being is sentient. In such a case, all ethical behavior would be impossible. We could not search for the right answer to the question “What am I to do?” if we believed from the start that we have no way to discover it. This does not mean that it is always possible to determine with certainty whether or not a given organism is sentient — we know that today we cannot. But we cannot believe the quest for this knowledge to be vain. Furthermore, we have a greater than zero confidence in our present attribution of sentience to the beings that surround us. 
It should be noted that our future subjectivity, relative to our present subjectivity, is in a position just as radically external as is the subjectivity of any other being. It follows that it is no less problematic to believe in our own (future) sentience than in that of someone else. It also follows that, contrary to a common idea, altruism (acting in favour of the future subjectivity of another being) is no less conceivable than egoism (acting in favour of our own future being). If our future feelings can be a true motivation for action, then the same stands for the future feelings of any other being. Our thoughts influence our acts The fact that we are faced with choices implies that we must take decisions, and we can do this only if we believe that our decisions influence our actions. Consequently, in our situation as deliberative beings we cannot hold epiphenomenalism6 to be true. Those who, under the impression that this doctrine is convincing, attempt to argue in its favour, demonstrate, by their very arguments, that they do not themselves believe in it: for if epiphenomenalism were true, the fact that we believe in it could not in any way lead us to argue for it. To assert that one believes in epiphenomenalism is to put oneself in the same position as that woman who is said to have written to Bertrand Russell that solipsism was such a well-founded doctrine that she was surprised that so few people believed in it7. The inescapable beliefs we have listed above are satisfied neither by our current physics nor by much of our current philosophy of mind. Physics without mind or without reality Physics is just one science among many, but the account it can give — or cannot give — of sentience is of special importance. The reason for this is that once we reject dualism, we must regard all that exists and happens in the world as ultimately physical. This does not imply that all forms of knowledge must be elaborated and expounded in the terms of physics, or that it would be advantageous to do so. The study of behavior, psychology, or biology does not require knowledge of the characteristics of all the particles and fields in play in the corresponding phenomena. However, because physics represents a unified description of the same reality that all such phenomena are part of, no field of knowledge can validly make assertions that are incompatible with the truths of physics. But by construction our current conception of physics is unable to account for sentience in an acceptable manner. Classical (non quantum) physics8 Although classical physics no longer claims to form a correct description of reality, it is still perceived as an ideal model for the way physics should be, both by the general public and by scientists, and even by physicists themselves. This is because, in contrast to quantum mechanics, classical physics appears to give an intelligible picture of the world. But this appearance is false: for classical physics is incapable of giving an account of sentience and of satisfying our inescapable beliefs. Classical physics describes the world as a set of numbers evolving in time through a fully determinate and calculable evolution function. A good image is that of billiard-ball physics, where the set of numbers is that of the positions and velocities of all the balls. The knowledge of the state of the world at any given time — of the set of all the positions and velocities — is enough to calculate the state at any other point in time in the future or in the past. 
This is the definition of Laplacian determinism9, which governs classical (non quantum) physics. The Laplacian world of classical physics is not compatible with the beliefs we necessarily have as sentient, deliberative, active beings, for several reasons. We necessarily believe in causality, as a reality of the world. It would make no sense to deliberate about what we are to do, if the outcome of our deliberation were not the cause of some change (or non-change) in the world. Since we inescapably do believe that our deliberations make sense, we also inescapably believe in the reality of causality. But there is no place for causality in a Laplacian world. The states of the world at different times are mutually dependent in the sense that full knowledge of one gives full knowledge of the other (by virtue of the evolution operator), but nothing makes one the cause of the other. Strikingly, real-world versions of Laplacian physics are fully time-symmetrical. Time has no arrow, and the dependency relation between two states is the same whatever the temporal ordering of the states. When we (or another chimpanzee) strike a nut with a stone at moment t1 in order to cause it to break at a future moment t2, we believe our arm movement at t1 is the cause of the nut's breaking at t2. We do not believe that the nut's breaking at t2 caused our striking it at t1, even though one might say it necessarily implied it (since the nut broke at t2, we necessarily struck it at t1). Laplacian determinism involves no more than such relations of necessity, which are not causal relations. There are no causal relations in Laplacian determinism. Neither are there probabilities in a Laplacian world. This implies that the concepts of thermodynamics, such as temperature and entropy, which are based on probabilities and/or counterfactuality, do not describe realities of the world. Thus the attempts to reintroduce an arrow of time and causality through such notions fail. In the Laplacian world of classical physics there is no place for feelings (qualia). The explanation that classical theory purports to give of every event is complete when it has been expressed in terms of the set of positions and velocities of the particles10 of the system and of their evolution. There is no need to postulate sensation. The succession of events starting with the hand touching the burning plate, through the movements of the particles in the nerves and the brain, and ending with the mouth screaming, is complete without any reference to pain. But we know that it is pain that makes us scream; it is pain that we take into account in our deliberations when deciding how to act. In other words, it is pain that has (negative) ethical value. The only way Laplacian physics can make way for pain is as an epiphenomenon. Pain would be an additional event, something caused by the disposition of particles in the brain but unable to cause anything in return — the laws governing those particles are enough to explain the screaming. It would happen in a parallel “mental” universe, a universe affected by, but not affecting, the “material” universe. “How do you know that animals really feel pain? Their screams might be mere reflexes!” is a standard response to pro-animal arguments. Indeed, we cannot know, if Laplacian physics is true. We cannot know for other animals, but neither can we for humans, or even for ourselves — our memories of pain could not themselves have been caused by real pain. 
This is not believable, because it would make all deliberations pointless. If classical physics is necessarily epiphenomenalist regarding sentience, by transitivity the same must be said of neurobiology, insofar as it deals with the physics and chemistry of the brain with no reference to quantum-mechanical aspects11. Neurobiology has accomplished spectacular progress in mapping regions of the brain and the nervous system to different perceptions, forms of memory, emotions and movements. On this basis it has developed an understanding of the similarities and differences between animals, human and non-human. But insofar as it accepts the classical model of the workings of the brain, it is itself necessarily epiphenomenalist. Quantum mechanics Classical physics has been superseded by quantum mechanics. This theory exhibits some results that are properly fantastic, some of which can be seen as indications that something is missing in current science concerning subjectivity. However, quantum mechanics has received to this day no convincing and intelligible interpretation as to what it means about the workings of the world. On the face of it, the standard formulation of quantum mechanical theory implies that the world evolves in a deterministic fashion, as long as it is not measured. If there were no measurements, the world, governed by the sole Schrödinger equation, would be Laplacian. At the moment of measurement, however, the state of the world “jumps” in an indeterministic fashion into a different state, in what is called (for historical reasons) a “wave packet reduction”. The measurement is an operation performed by a conscious operator. This would seem to mean that consciousness — our act of perceiving the state of a system — modifies the system in a way no other physical process is capable of. However, this itself is difficult to accept, since the conscious operator him- or herself is just a physical system, which too should be governed by the usual deterministic Schrödinger equation. The mainstream answer to this contradiction is what is known as the Copenhagen interpretation of quantum mechanics. In this view, what is required of a theory in physics is only that it should correctly predict the results of experiments. Underlying is an operationalist conception of science: physical entities are defined by the operations by means of which they are perceived. Niels Bohr said12: There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we can say about Nature. The subject of physics is thus reformulated as being the study of the results of measurements. No more references are made to an underlying reality, the existence of which is set in doubt, denied, or deemed undeterminable and thus meaningless. This mainstream interpretation of modern physics does make way for the existence of sentience — but at the price of a form of collective solipsism: the only thing that is deemed to exist, or to matter, is the “intersubjective” agreement between the perceptions of the “observers” — who are implicitly defined as humans13, or even, one is tempted to say, as typical quantum physicists! The operationalist position of the Copenhagen interpretation contrasts with realism, which holds that things exist by themselves, independently from the perception we may have of them. 
The fact a patient has fever is not defined by the agreement there may be between doctors about the results of certain procedures called “measurements of temperature”. Our condition as sentient, deliberative beings is such that we inescapably believe in realism, or at least in the existence of some world beyond our own selves, our own immediate perceptions. And it will not do to limit that world to the minds of other humans, as would the “intersubjectivity” approach: there is nothing special about other humans that would make their minds directly perceptible to our own, or that should make their subjective experiences such as pleasure and suffering real and important to us, whereas the subjectivity of non-humans would remain unreal or meaningless. The “intersubjective” approach as it stands is an expression of plain speciesism. And neither can we simply switch for a non-speciesist version of intersubjectivity, since this would require that we know in advance who are the sentient beings in the real world — something that cannot be determined if we do not accept that there is a real world in the first place, and that it should precisely be up to physics to determine. Thus we cannot believe as true the Copenhagen interpretation of quantum mechanics. Where classical physics described a world devoid of subjectivity, modern physics, in its dominant interpretation, leaves room for (human) subjectivity but empties reality of its substance. Neither of the two can satisfy our inescapable beliefs14. Philosophy of mind without mind Most of contemporary philosophy of mind aims at being scientifically inspired. Mind/body dualism is generally rejected, and, indeed, has become difficult to defend. But modern thought has “solved” the problem of the articulation between the mental and the non-mental by simply getting rid of the mental. It has done so quietly. The boxes labeled “mental” and “non-mental” have been left on the shelves. It is just that the box labeled “mental” has been insidiously emptied of its substance. Functionalism is the most common theory nowadays among philosophers of mind. This theory says that mental states are constituted by their causal relations to sensory stimulations, other mental states, and behavior. What makes something a mental state does not depend on its constitution, but rather on the role it plays in the system of which it is a part, in the same way as what makes an object a carburetor or an eye does not depend on what it is made of or the way it was built, but on its function in a motor or an organism. Functionalism has been much inspired by reflections relative to computer science, and also somewhat by the theory of evolution. It has become the dominant theory, because while remaining among the “materialist” (i.e. non mystical) doctrines, it appears immune to the criticism addressed to previous doctrines, particularly to behaviorism15. Behaviorism was marked by a rejection of any reference to psychology, reducing the analysis of behavior to rigid relations between inputs (stimuli) and outputs (movements). This approach is nowadays unanimously rejected. Modern authors have added flexibility and intermediate stages between the input and the output. In contrast to behaviorists, functionalists use mental words such as “desires”, “beliefs” or “intentions”. However, they do so in a way that implies that these words do not refer to anything by themselves. 
They are just labels on certain points in the network of interdependencies that lead from input to output, and are defined exclusively through the relations they entertain with other points. Thus, the desire “not to get wet” can be understood as an element which, in conjunction with other elements such as the sensory input “seeing falling drops” and the belief “I am outside” leads to the action “opening an umbrella”. A functionalist does not posit a sensation as a mental state that exists by itself. A certain element X is defined as a sensation only because it has certain relations with other elements in an explanatory chain, and there is nothing more to it than its having such relations. Thus functionalism is actually a form of neo-behaviorism: the path from the sensory input to the motor output has been purged of any reference to subjectivity (emotions, sensations, preferences and so on experienced by a sentient individual). The words that, in everyday language, refer to qualia16 are present in functionalist texts, but the realities they refer to have been eliminated. Sentient experience per se has been disposed of, as a reality of the world existing independently of the relations it can have with other events. A parallel can be drawn between this way of proceeding and that of classical physics, which has remained the ideal model in science. For classical physics, the world is a set of numbers fully describing each of its parts at a given point in time, and a set of relations (laws) that allow the description at any other point in time to be deduced. Thus, next to a physics that appears to describe only empty things existing solely by their relation to other empty things, has been constructed a philosophy of mind with the same features17. Functionalism allows us to give a mind to non-human animals18. In this respect it is not speciesist. But this does no good to the animals, because such a “mind” has nothing to do with what we ordinarily mean by the word. What is in play is a redefinition of mind, in such a way that what ends up being analyzed is a concept of consciousness from which all consciousness is expunged. The ability to feel emotions, to give them positive or negative value, has been eliminated by construction. Since the reason why sentient beings care to be in one state rather than another is lacking, we have no reason to give any ethical consideration to beings with such a redefined “mind”. Computational functionalism Functionalism deals not with concrete objects, but with relations, independently of their physical basis. A same relation can be “expressed” by a different basis. Many functionalist writings rely heavily on the computer-as-mind analogy, and are inspired by work in artificial intelligence. The brain is conceived of as a computer, and consciousness as a program that runs on the brain. Do machines think and have feelings? Following the above analogy, it seems difficult to answer negatively: it matters not that an algorithm be implemented in a brain of flesh or in a machine made of metal and silicon. For supporters of weak artificial intelligence, a machine, equipped with programs of the right type, simulates thinking; for supporters of strong artificial intelligence, they think. We will not dwell upon this distinction19, for the difficulty is upstream: in the equivalence that is made between mental facts and algorithms20. An algorithm is an abstract object the existence of which is not tied to any particular time or place. 
It is difficult to see how it could constitute or elicit, by itself, a feeling or a thought (or the simulation of a feeling or thought). But it is usually held that it is instead the “execution” of the algorithm that is identical to or elicits thought (or a simulation of thought). The execution consists in applying the algorithm to an initial set of data, or rather to a translation of the data into certain physical states of the machine. The “execution” of the algorithm will then be a finite set of physical events taking place at a precise moment in time. But it is not at all obvious in what way those particular physical events are, in themselves, an execution of that specific algorithm. They could also be described as the execution of any number of other algorithms operating on the same or on another initial set of data. They could also be described without any reference to an algorithm whatsoever. On the other hand, any physical event or succession of events happening in the world (a falling spoon, boiling water...) can be described as the execution of any arbitrary algorithm operating on any arbitrary set of data. All that is necessary for this is to determine an adequate mapping between the successive physical states and the corresponding values of the data. If the execution of an algorithm was enough to produce (was identical with) consciousness, then consciousness, and all possible qualia, would be everywhere, all the time! The conclusion would be an extreme form of panpsychism21. An algorithm is a “recipe” the execution of which happens step by step in an automatic fashion, without any necessity for the material basis that implements it to give it a meaning22. If certain algorithms, or their execution, are or elicit thought or experience, then sentience is redundant. Just as in classical physics, consciousness is either absent or ineffective. In a functionalist perspective, sentience can exist only as an epiphenomenon. But we cannot believe that sentience is an epiphenomenon; we thus cannot believe functionalism to be right. Our conclusion is thus that by virtue of our condition as sentient, deliberative beings, we cannot accept as true a theory that reduces consciousness to the execution of algorithms23. The finalist hijacking of Darwinism Functionalism often takes its inspiration not only from computer science but also from the theory of evolution. It is held that one of the features of consciousness, or part of its definition itself, is the aim it serves: to favour the reproduction of the organism implementing the “mental program”. Whether or not they are associated with functionalism, certain interpretations of the theory of evolution encourage in their own way the discounting of sentience. Darwinism is a scientific theory and, as such, appeals only to efficient causes. It has made the complexity and the transformations of the living world intelligible without being necessarily construed as the implementation of a project, as the execution of a plan; in other words, without teleology. However, it has, from the very start, given birth to a fake copy that obstinately does precisely the contrary. One modern version of this copy has developed along with sociobiology. (It is a deformed version of sociobiology, not a necessary consequence of its methods.) This version appropriates the terms of the theory of evolution and puts them to the service of an adaptationist and finalist interpretation of reality. 
The expression “selfish genes” borrowed from Richard Dawkins has become a privileged vector of this interpretation both in popular and academic texts. Adaptationism is the view that any feature possessed by an organism is necessarily in favour of its “fitness”, natural selection having eliminated all features that were useless or unfavorable in regard to this criterion. Together with adaptationism, finalism has made a come-back, not in the guise of a cosmic watch-maker guiding his creatures, but as a multitude of tiny genies manipulating their survival capsule to attain their one and only goal: flooding the universe with copies of themselves. It is a conception of this kind that inspires the “Darwinian” interpretation of ethics put forward by Michael Ruse and Edward Wilson: ...human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey24. ...we think morally because we are subject to the appropriate epigenetic rules. These predispose us to think that certain courses of action are right and certain courses of action are wrong. The rules certainly do not lock people blindly into certain behaviors. But because they give the illusion of objectivity to morality, they lift us above immediate wants to actions which (unknown to us) ultimately serve our best genetics interests25. Morals is rather a collective illusion instated by the genes to make us “altruists”. Morality, as such, has no greater status as a justification than has any other adaptation, such as eyes, hands or teeth. It is just something that has biological value, and nothing more26. This analysis, as applied here to human morality, should be valid when applied to any feeling or thought liable to influence our actions. It is a position that asserts that all subjects act on the basis of a false consciousness: the true aims of their deeds are unknown to them, not only occasionally, not just because of the inevitable imperfections in their knowledge of reality, but because their consciousness is necessarily false: it must be so for the real aim they are serving to be accomplished. The only genuine goal that exists belongs to a system that is beyond them, while the goals they believe to have are only decoys aimed at luring them to act the way “nature”, “genes” or the “laws of evolution” have planned them to. Can we believe in this position? In virtue of our condition as sentient, deliberative beings we necessarily deliberate before acting. But we could not do so while believing that our consciousness is systematically false, that we are the victims of an illusion that we are unable to counter. This is particularly true in light of the fact that we do not deliberate only about the best means to attain certain pre-determined goals that we could not help seeing as desirable. The most difficult aspect of a decision is often the search for the right answer to the question “What is it that I should want?”; in other words, it is the determination of the goal itself (such is also the most complex part of ethical theory). We cannot start out on such a quest if we believe that it will necessarily lead us to chimerical goals based on illusory reasons, serving unknown to us aims that we cannot grasp. Consequently, we cannot believe in the truth of Ruse and Wilson's theory. These authors claim to have exposed the deceptions of nature and uncovered the hidden goal. 
But if their theory were correct, it would follow that no one could know it, since ignorance is necessary for the accomplishment of the destiny the genes have assigned to their vehicles. So the mere fact that they put forth their theory proves that it is false.

The modern resurgence of evolutionary ethics does not in itself imply a denial of animal sentience — instead, it asserts its existence. Indirectly, however, it reinforces factors that are contrary to the full recognition of its existence and ethical significance. In Ruse and Wilson's approach, the fact that subjects experience feelings and take decisions is not denied, nor is the fact that their thoughts influence their actions. Furthermore, these authors uphold the Darwinian continuity between humans and other animals on the mental level. But because we cannot believe that our own consciousness is perpetually mystified, when we attempt to take such theories seriously, we naturally apply them to others only, throwing doubt on the reality of their consciousness. In the present cultural context, the human/non-human divide comes quickly to mind. The concept of a "consciousness" manipulated by a superior will rapidly conflates with age-old images of animals moved by their "instincts", or with the more modern concept of a "program"; in other words, it suggests that the beings provided with such a consciousness are actually just mechanical automata.

This is why it is important to uncover and reject the fake copy of Darwinism, in its many forms. It is important to point out that the novelty of Darwinism, its deformed versions notwithstanding, was precisely that it made it possible to conceive that life evolved without any preestablished purpose or meaning. The features of all living beings (among which are sentient beings) have causes. The theory of evolution sheds light on what favours the spreading of certain features, and on how a series of inherited elementary mutations can accumulate to the point of bringing about complex organisms. But the causes — which remain unknown — that have allowed sentience to appear, and those — partially explained by evolutionary theory — that have favoured its transmission, are not, in Darwinian theory, acts of purposeful agents with the power and the will to dictate the contents of their creatures' consciousness in order to attain their own goals. Neither "nature", nor "the genes", nor "evolution" carries a meaning, a will, or purposes. Only sentient individuals have such things. It is important to reaffirm the non-finalist character of Darwinism in order to prevent sentient individuals from being dispossessed of these in favour of such entities. For that unjustified displacement weakens our perception of the reality of the desires and emotions of the sole beings who have them. That is one of the processes that make it easy to dismiss animal sentience.

Mind-matter for animals matters

A large body of knowledge relative to sentience is already available (concerning nervous systems, behavior, etc.). Its value cannot be overestimated. However, we today have no idea how to deal with sentience in physical terms. A substantial part of the studies of mind deals with consciousness by redefining it in a manner that strips it of what makes it conscious (subjective experience), or by dismissing it as an illusion. Up to now, the menace that this situation represents for the efforts to better the situation of animals has not been correctly appreciated. Consequently, too little effort has been made to counter it.
Science against animal sentience?

Both current physics and current biology are heavy with latent epiphenomenalism. The superfluous sentience they imply is easily translated into non-sentience whenever non-human animals are concerned. The themes this article deals with may seem to concern only abstract and esoteric issues of philosophy or of science. But they bear upon factors that favor the everyday denial of animal suffering, including studies by "animal welfare" experts that are called upon to inform decisions concerning the treatment of animals. As an example, the following excerpt is from a page on foie gras on the website of the INRA (French National Institute for Agronomical Research), as an answer to the question "Does the act of gavage [force-feeding] cause pain?":

Because of the stimuli that can be linked to it (repeated daily insertion of the feeding-tube into the esophagus, distention of the walls of the esophagus and of the proventriculus, risk of erosion of the mucous membranes, liver steatosis inducing a compression of the viscera), the act of gavage is seen as a prima facie cause of suffering and pain. But first of all, it is implicit that the use of these notions is inappropriate for animals because they imply a psychological element, and that it is consequently preferable to replace them by the concept of nociception. In the case of gavage, the analysis of the signals that might correspond, at the level of the upper digestive tract and of the nervous system, to a visceral nociception (such as inflammation, extravasation, activation of genes) does not allow us to come to a conclusion about their activation27.

This excerpt is typical of a vast body of literature (produced by scientists and animal husbandry professionals) that speaks of animal welfare in a vacuum. From the very start, the subject-matter is dismissed by the assertion that in the case of animals it is inappropriate to use concepts that have any psychological implications.

An enquiry was conducted in 1996-97 by Florence Burgat among researchers specialized in the conditions in which farm animals are reared28. It was a period in which researchers were being asked to reorient their attention towards animal welfare, as a response to a growing "social request" concerning that issue. Several of them clearly expressed their reluctance to work on a subject that they felt was not within the compass of technicians and scientists, declaring for instance: "Behavioral aspects are not objectifiable and we are used to working with measurable data" (p. 119); "The sow cannot move, perhaps she feels well, perhaps she doesn't, you cannot tell" (p. 120); "There is no such thing as an ideal environment, all that there is is the way a given individual adapts to a given environment" (p. 120); "Welfare is not at all a subject for research; instead, adaptation definitely is" (p. 122); "Welfare is not a scientific subject" (p. 122). It is true that the reactions of these researchers can in part be explained by the institutional context: taking animal welfare into consideration upsets their habits because, for years, the quality of their work has been judged only by its contribution to the productivity and profitability of farming methods.
However, it is remarkable that these same researchers were more willing to express emotion or disapproval about the living conditions of farm animals when they were allowed to add that they were expressing no more than “personal feelings” or “a moral point of view”, and not speaking “as scientists”, as if it went without saying that sentience was not part of the scientific (i.e. objective) field of knowledge. It is disturbing to find that sentience is routinely construed as a non-scientific, non-objective issue. Such an assertion is both false and fraught with serious consequences. This situation leaves room for discourses in which sentience seems to escape from the realm of knowledge to fall into that of private belief, which individuals can choose as freely as they do their religion. Furthermore, concerning animal sentience, it is false to say that it is a subject where people spontaneously exhibit a great variety of contrary opinions. The truth is that everyone knows that ducks, rabbits, cows and so on are sentient. But humans have contrived over the millennia many mental tricks to weaken their perception of this fact. These tricks enable barbarity to take place against animals on a frightful scale: they offer us an escape when we are faced with reproaches from others or from our own conscience. A kind of social custom has thus been established that requires us to refrain from setting in doubt the lie according to which, in the absence of material proof to the contrary, many people are spontaneously convinced that animals are not sentient (or hardly sentient). A many-sided myth has been built; a myth that states that the direct perception we have of animal sentience is an illusion of common sense, an illusion dismissed by more rigorous examination aimed at letting us grasp truths behind the misleading appearances29. It is a myth that comes in handy every time we abuse and kill. The fact that science can be called in support of this lie is a real obstacle to efforts to have animal sentience taken seriously; and as such, it has been largely underestimated. In our societies, the name of science holds much authority: to claim that something is not scientific is tantamount to asserting that it is not true. We must thus find a way to overcome this obstacle without taking an anti-scientific stance, which would be neither necessary nor desirable. The subjective is objective We experience feelings. Sentience being precisely this subjective experience, no further proof is needed of its existence. Consciousness is a reality of the world. Since science is knowledge of reality, sentience is within the scope of scientific investigation. “Is this being sentient or not?” is a question which has an answer. It is not a question devoid of meaning, or one that might be answered one way or the other depending on one's personal opinions. Singling out and countering all discourses that attempt to play down sentience as an unscientific or meaningless question is a task that we must take up right now; it is one of the challenges we must meet in order to bring down the ideological fortress that has been built to insure that the interests of sentient beings be effectively ignored. It is necessary for the animal movement to realize that it cannot bypass the “mind-matter” problem. Thinkers who are concerned with the animal issue must become familiar with the literature concerning this thorny problem, while keeping in mind the consequences that are at stake for the animals. 
We must not allow the existence and the ethical significance of animal sentience to be denied in the name of science, or of learned thought in general. If science in its current state is unable, as we have argued, to account for the undeniable reality of consciousness, we must explicitly acknowledge this fact as a shortcoming of our knowledge, rather than allowing it to be used to deny reality whenever it comes in handy to defend speciesist discrimination.

We must raise the consciousness and fluency of animal activists on this issue, and find a way for this to influence the way the general public perceives animal consciousness. One possibility that we are contemplating is the publication of a "Declaration on Sentience" in which scientists and other thinkers would support an assertion such as this:

Sentience is an objective reality of the world, notwithstanding the problems our current physics and philosophy of mind may have in accounting for it. The ethical implications of the objective existence of sentience, when and where it is present, are not to be dismissed in the name of science.

Such a declaration, if it gathers enough support, could change the "climate" in which researchers work. The neglect or denial of sentience would no longer be a standard feature of their works, and would come to be seen instead as a shortcoming that should be acknowledged and, sooner or later, corrected. Other projects are emerging concerning the issue of sentience. Their nascent state (and our own difficulties in meeting the deadlines for this article) explains why we will not say anything more about them here. In general, our sentiment is that the potential contribution of investigating this central fact, that animals (humans included) are sentient, has been underestimated. More work on this question will bear considerable fruit. We believe that this issue actually carries the potential to bring about a profound change in the "world-view" of humans, and consequently in their behavior.

About the authors

Cover of Cahiers #23

David Olivier and Estiva Reus have authored many articles published in the French journal Les Cahiers antispécistes. Issue #23 of the Cahiers is particularly concerned with the question of sentience.

Cover of Cahiers Espèces et Éthique

The issues relative to Darwinism that have been approached in this article are more fully developed in the following collective book (in French): Yves Bonnardel, David Olivier, James Rachels, Estiva Reus, Espèces et éthique: Darwin, une révolution à venir, Tahin-party, Lyon, 2001 (texts available on the publisher's website).

1. Darwin himself describes in detail (in The Descent of Man) the emotions that we find among animals, arguing that not only are they capable of pleasure and pain but also of fear, diffidence, timidity, boredom, curiosity, the desire for approval, astonishment, a taste for lively impressions, love and a sense of beauty. He recognizes in them mental capacities such as attention, memory, imagination, reason, and the formation of general concepts.

2. Or, more correctly, the question should be phrased impersonally: "What is to be done?". A deliberative being must answer this question, even without having any concept of an "I". We choose to retain the personal, less awkward formulation in the text.

3. Any subject who is the author of decisions is an ethical being, since that subject produces, and acts upon, a judgement about what is to be done.
Even if the level of complexity and abstraction the being is capable of attaining in his or her deliberations varies from one individual to another, this fact should lead us to reconsider the strict distinction that is usually made between moral agents (usually only humans, or a subset thereof) and mere moral patients (other animals).

4. Ethical relativism holds that there is no moral truth, no objective right and wrong. Moral judgements emerge from social customs or personal preferences, and there is no single, independent standard by which one can adjudicate between conflicting views about what ought to be done.

5. Solipsism is the view that my mental states are the only reality. All objects, people etc. have no independent existence. They are merely dreams created by my own mind.

6. Epiphenomenalism holds that mental facts are caused by physical facts, but have no effects upon any physical facts. Consciousness is an inefficacious by-product of neural events. It plays no causal role in our behavior.

7. Several versions of this (true?) story can be found on the Web.

8. For a more thorough account of the problem with classical (Laplacian) physics, see (in French) David Olivier, "Le subjectif est objectif", in Les Cahiers antispécistes #23 (Dec. 2003); full text on the Web site

9. After Pierre-Simon de Laplace (1749-1827), French physicist and mathematician. "An intellect which at any given moment knew all the forces that animate Nature and the mutual positions of the beings that comprise it, if this intellect were vast enough to submit its data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom: for such an intellect nothing could be uncertain; and the future just like the past would be present before its eyes." (in Essai philosophique sur les probabilités, 1814).

10. With the addition of fields; but apparently this does not change the nature of the problem.

11. Modern chemistry is, of course, heavily dependent on quantum mechanics, but the quantum-mechanical effects are as much as possible confined to the lower levels, to the explanation of interatomic bonding and levels of molecular energy. The molecules themselves are rarely treated as quantum objects, and their quantum nature plays no role in most applications of chemistry, such as neurobiology.

12. Quoted by Roger G. Newton, The Truth of Science: Physical Theories and Reality, Harvard University Press, Cambridge, Mass., 1997, p. 176, according to

13. In the famous "Schrödinger's cat" thought experiment, the cat is explicitly not seen as capable of performing a measurement on the system. It is unclear whether this means that the cat has no mind, or has one but not of the right kind, or that the existence of a cat's mind is meaningless.

14. An alternative view of quantum mechanics is worth mentioning. The "many worlds" view holds that the world evolves according to the deterministic Schrödinger equation at all times; there are no "wave packet reductions". This allows us to maintain a realist interpretation of physics. Unfortunately, it is also a return to Laplacian physics. The many worlds view has several features of much interest to the problem of sentience, but by itself it does no better than classical, Laplacian physics.

15. It also answers some objections made to identity theory. (The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain.)

16.
The term qualia (singular, quale) refers to the introspectively accessible, phenomenal aspect of our mental lives. It is used to stand for the subjective character of conscious experience, the way it feels to have mental states such as pain, seeing blue, smelling coffee, being angry, etc. 17. Despite its influence, functionalist thought is not the only trend in current philosophy of mind. Actually, philosophers have been the main force criticizing functionalism, precisely because it eliminates sentience, subjective experience. 18. This is often not the case for another trend in philosophy that also aims at dealing with mind while leaving out feelings and emotions: the attempt to define consciousness by logico-linguistic criteria, in virtue of properties that obtain in the case of propositions concerning consciousness. For instance, it is said that consciousness is such that the truth values of propositions concerning it do not depend on the truth values of subordinate propositions. (The proposition “Mary believes it is 5 o'clock” is true or false independently from its being 5 o'clock.) This school, which centers its reflections on verbal properties, leads some of its members to assert that animals do not think or do not feel because they do not have a language, without which they cannot have concepts. In contrast, Joëlle Proust, a functionalist philosopher, has recently dedicated two works to the issue of animal mind: • Joëlle Proust, Comment l'esprit vient aux bêtes. Essai sur la représentation, Gallimard, 1997. • Joëlle Proust, Les animaux pensent-ils?, Bayard, 2003. 19. The debate about whether or not it will be some day possible to create sentient artifacts is not at issue here. 20. An algorithm is a finite set of instructions which, given an initial state, will result in a corresponding end-state. For instance, a cooking recipe is an algorithm. In classical (Laplacian) physics, the set of physical laws is an algorithm that allows the computation of the state at any time t2 knowing the state at another time t1. A computer program is an algorithm that tells what specific steps to perform (in what specific order) to carry out a specified task. It can be described as a set of instructions in the form “If the machine is in the state Sa and receives the input Ib, let it go into state Sc while producing the output Od”. 21. Panpsychism is the view that mind is omnipresent throughout the universe. According to this doctrine, or at least some of its forms, all objects have consciousness: rivers, planets, clocks, molecules... have consciousness. 22. It is this that has brought John Searle to reject approaches that identify the mind with the software and the brain with the hardware, in his famous “Chinese Room” argument: for the computer, he remarks, the objects that are manipulated are not symbols and the rules for the manipulation are not a syntax. The operations that are carried out are perceived as meaningful only from the point of view of conscious subjects, from outside the system. 23. In opposition to functionalists, Roger Penrose, has developed arguments in the case of mathematical understanding which tend to prove that thought is not reducible to an algorithmic process. See his two books on the subject: • Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford Press, Oxford, 1989. • Roger Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford Press, Oxford, 1994. 24. 
Michael Ruse and Edward Wilson (1986), "Moral Philosophy as Applied Science", Philosophy, 61, p. 179.

25. Id., p. 180.

26. Michael Ruse (1993), "Une défense de l'éthique évolutionniste", in Jean-Pierre Changeux (ed.), Fondements naturels de l'éthique, Odile Jacob, Paris, p. 59.

27. From an article published on the website of the INRA on Dec. 15, 2004: Our emphasis.

28. The report of this enquiry is in Florence Burgat, Les animaux d'élevage ont-ils droit au bien-être?, INRA Éditions, Paris, 2001, pp. 105-133.

29. The most famous example is that of Descartes' animal-machine. But the set of ideas that imply a denial of animal sentience extends much further than that. There remains much to do to pinpoint them, for many of them are not explicit in their denial. For instance, in ethics, many writers hold that the distinctive feature defining moral patients is not sentience, but some other character X (culture, language, individuality, freedom, self-awareness...) possessed only by a subset of sentient beings. But they often construe the character X in such a way as to make us unable to imagine a being deprived of it as any more than an automaton.
Bose–Einstein condensate

Schematic Bose–Einstein condensation versus temperature, with the energy diagram.

A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of bosons cooled to temperatures very close to absolute zero (that is, very near 0 K or −273.15 °C). Under such conditions, a large fraction of the bosons occupy the lowest quantum state, at which point macroscopic quantum phenomena become apparent. It is formed by cooling a gas of extremely low density, about one-hundred-thousandth the density of normal air, to ultra-low temperatures. This state was first predicted, generally, in 1924–25 by Satyendra Nath Bose and Albert Einstein.

Velocity-distribution data (3 views) for a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Left: just before the appearance of a Bose–Einstein condensate. Center: just after the appearance of the condensate. Right: after further evaporation, leaving a sample of nearly pure condensate.

Satyendra Nath Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), deriving Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German, and submitted it on Bose's behalf to the Zeitschrift für Physik, which published it. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.[1]) Einstein then extended Bose's ideas to matter in two other papers.[2] The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons, which include the photon as well as atoms such as helium-4 (4He), are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. In 1938 Fritz London proposed BEC as a mechanism for superfluidity in 4He and superconductivity.[3][4]

On June 5, 1995 the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvin (nK).[5] Shortly thereafter, Wolfgang Ketterle at MIT demonstrated important BEC properties. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics.[6] Many isotopes were soon condensed, then molecules, quasi-particles, and photons in 2010.[7]

Critical temperature[edit]

For a uniform three-dimensional gas of non-interacting bosons, the critical temperature is

T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3} \approx 3.3125\,\frac{\hbar^2 n^{2/3}}{m k_B},

where T_c is the critical temperature, n is the particle density, m is the mass per boson, \hbar is the reduced Planck constant, k_B is the Boltzmann constant, and \zeta is the Riemann zeta function (\zeta(3/2) \approx 2.6124).[8] Interactions shift the value, and the corrections can be calculated by mean-field theory. This formula is derived from finding the gas degeneracy in the Bose gas using Bose–Einstein statistics.
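As a rough numerical illustration of the critical-temperature formula above, the short Python sketch below evaluates T_c for rubidium-87. The peak density used here (2.5×10^12 atoms per cm^3) and the helper function name are illustrative assumptions, not figures taken from this article; only the formula itself comes from the text.

```python
import math

# Physical constants (SI)
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
KB = 1.380_649e-23         # Boltzmann constant, J/K
AMU = 1.660_539_066e-27    # atomic mass unit, kg

def bec_critical_temperature(density_m3: float, mass_kg: float) -> float:
    """Ideal-gas BEC critical temperature: T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)."""
    zeta_3_2 = 2.612_375_348_685_488   # Riemann zeta(3/2)
    return (2 * math.pi * HBAR**2 / (mass_kg * KB)) * (density_m3 / zeta_3_2) ** (2 / 3)

# Illustrative numbers (assumptions): a rubidium-87 gas at a peak density of 2.5e12 cm^-3.
m_rb87 = 86.909 * AMU
n = 2.5e12 * 1e6            # convert cm^-3 to m^-3

tc = bec_critical_temperature(n, m_rb87)
print(f"T_c ~ {tc * 1e9:.0f} nK")   # of order tens of nanokelvin at this density
```

The weak n^(2/3) dependence on density is why both raising the density and lowering the temperature help a dilute gas reach degeneracy.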
Einstein's non-interacting gas[edit]

Consider a collection of N noninteracting particles, which can each be in one of two quantum states, |0⟩ and |1⟩. If the two states are equal in energy, each different configuration is equally likely. If we can tell which particle is which, there are 2^N different configurations, since each particle can be in |0⟩ or |1⟩ independently. In almost all of the configurations, about half the particles are in |0⟩ and the other half in |1⟩. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally.

If the particles are indistinguishable, however, there are only N+1 different configurations. If there are K particles in state |1⟩, there are N − K particles in state |0⟩. Whether any particular particle is in state |0⟩ or in state |1⟩ cannot be determined, so each value of K determines a unique quantum state for the whole system.

Suppose now that the energy of state |1⟩ is slightly greater than the energy of state |0⟩ by an amount E. At temperature T, a particle will have a lesser probability to be in state |1⟩ by a factor exp(−E/T). In the distinguishable case, the particle distribution will be biased slightly towards state |0⟩. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most-likely outcome is that most of the particles will collapse into state |0⟩.

In the distinguishable case, for large N, the fraction in state |0⟩ can be computed. It is the same as flipping a coin with probability proportional to p = exp(−E/T) to land tails. In the indistinguishable case, each value of K is a single state, which has its own separate Boltzmann probability. So the probability distribution is exponential:

P(K) = C p^K = C e^{-KE/T}.

For large N, the normalization constant C is (1 − p). The expected total number of particles not in the lowest energy state, in the limit that N → ∞, is equal to

\sum_{K>0} K (1-p) p^K = p/(1-p).

It does not grow when N is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference.

Consider now a gas of particles, which can be in different momentum states labeled |k⟩. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit, the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point, more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state. To calculate the transition temperature at any density, integrate, over all momentum states, the expression for the maximum number of excited particles, p/(1 − p):

N = V \int \frac{d^3k}{(2\pi)^3} \frac{p(k)}{1-p(k)}, \qquad p(k) = e^{-k^2/2mT}.

When the integral is evaluated with factors of k_B and ℏ restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential. In the Bose–Einstein statistics distribution, μ is actually still nonzero for BECs; however, μ is less than the ground-state energy. Except when specifically talking about the ground state, μ can be approximated for most energy or momentum states as μ ≈ 0.

Bogoliubov theory for weakly interacting gas[edit]

Bogoliubov considered perturbations on the limit of dilute gas,[9] finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (T = 0): P = gn^2/2. The original interacting system can be converted to a system of non-interacting particles with a dispersion law.
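That dispersion law is the Bogoliubov spectrum, which behaves like a phonon (linear in k) at long wavelengths and like a free particle (quadratic in k) at short wavelengths. The small sketch below makes the two limits explicit; the units (ℏ = m = 1) and the interaction energy scale gn are illustrative assumptions.

```python
import math

def bogoliubov_energy(k: float, gn: float) -> float:
    """Bogoliubov dispersion for a weakly interacting Bose gas, in units with hbar = m = 1:
    eps(k) = sqrt( (k^2/2) * (k^2/2 + 2*g*n) )."""
    free = k**2 / 2
    return math.sqrt(free * (free + 2 * gn))

gn = 1.0                        # illustrative interaction energy scale (assumption)
speed_of_sound = math.sqrt(gn)  # expected phonon slope: eps ~ c*k with c = sqrt(g*n/m)

for k in (0.01, 0.1, 1.0, 10.0):
    eps = bogoliubov_energy(k, gn)
    print(f"k={k:6.2f}  eps={eps:10.4f}  c*k={speed_of_sound * k:10.4f}  k^2/2={k * k / 2:10.4f}")
# At small k the spectrum follows c*k (phonons); at large k it approaches k^2/2 (free particles).
```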
Gross–Pitaevskii equation[edit]

In the simplest cases, the state of the condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is actually limited to the case of ultracold temperatures, which fits well for most alkali-atom experiments. This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate ψ(r). For a system of this nature, |ψ(r)|^2 is interpreted as the particle density, so the total number of atoms is

N = \int |\psi(\mathbf{r})|^2 \, d^3r.

Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean-field theory, the energy (E) associated with the state ψ(r) is

E = \int d^3r \left[ \frac{\hbar^2}{2m} |\nabla\psi(\mathbf{r})|^2 + V(\mathbf{r}) |\psi(\mathbf{r})|^2 + \frac{1}{2} U_0 |\psi(\mathbf{r})|^4 \right].

Minimizing this energy with respect to infinitesimal variations in ψ(r), and holding the number of atoms constant, yields the Gross–Pitaevskii equation (GPE) (also a non-linear Schrödinger equation):

i\hbar \frac{\partial \psi(\mathbf{r})}{\partial t} = \left( -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{r}) + U_0 |\psi(\mathbf{r})|^2 \right) \psi(\mathbf{r}),

where m is the mass of the bosons, V(r) is the external potential, and U_0 is representative of the inter-particle interactions. In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for a uniform condensate of density n_0):

\varepsilon(p) = \sqrt{ \frac{p^2}{2m} \left( \frac{p^2}{2m} + 2 U_0 n_0 \right) }.

The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for T ≈ 0. It is not applicable, for example, to the condensates of excitons, magnons and photons, where the critical temperature can reach up to room temperature.

Weaknesses of Gross–Pitaevskii model[edit]

The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy.[10] These assumptions are suitable mostly for dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the number of such terms turns out to be infinite, so that the equation becomes essentially non-polynomial. Examples where this can happen are the Bose–Fermi composite condensates,[11][12][13][14] effectively lower-dimensional condensates,[15] and dense condensates and superfluid clusters and droplets.[16] However, it is clear that in the general case the behaviour of a Bose–Einstein condensate can be described by coupled evolution equations for the condensate density, the superfluid velocity and the distribution function of elementary excitations. This problem was solved in 1977 by Peletminskii et al. within a microscopical approach. The Peletminskii equations are valid for any finite temperature below the critical point. Years later, in 1985, Kirkpatrick and Dorfman obtained similar equations using another microscopical approach. The Peletminskii equations also reproduce the Khalatnikov hydrodynamical equations for superfluids as a limiting case.

Superfluidity of BEC and Landau criterion[edit]

The phenomena of superfluidity of a Bose gas and superconductivity of a strongly-correlated Fermi gas (a gas of Cooper pairs) are tightly connected to Bose–Einstein condensation.
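Returning for a moment to the Gross–Pitaevskii description above: the following is a minimal numerical sketch, not a reference implementation, of finding a GPE ground state by imaginary-time split-step relaxation for a 1D condensate in a harmonic trap. The dimensionless units (ℏ = m = ω = 1), grid size and interaction strength g are all illustrative assumptions.

```python
import numpy as np

# Imaginary-time split-step relaxation to the 1D GPE ground state in a harmonic trap.
# Dimensionless units: hbar = m = omega = 1; g is an assumed effective 1D coupling.
N_GRID, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N_GRID, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N_GRID, d=dx)

g = 5.0                       # illustrative interaction strength (absorbs the atom number)
V = 0.5 * x**2                # harmonic trap
dt = 1e-3                     # imaginary-time step

psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize to 1

for _ in range(20000):
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))   # half kinetic step
    psi *= np.exp(-dt * (V + g * np.abs(psi)**2))                    # potential + interaction step
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))   # half kinetic step
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)                      # restore normalization

# Chemical potential of the converged state: mu = <psi| T + V + g|psi|^2 |psi>
t_psi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
mu = np.real(np.sum(np.conj(psi) * (t_psi + (V + g * np.abs(psi)**2) * psi)) * dx)
print(f"chemical potential mu ~ {mu:.3f} (units of hbar*omega)")
```

Real-time propagation with the same split-step factorization (dt replaced by i·dt) is a common way to study dynamics numerically, which is relevant to the superfluid behaviour, vortices and solitons discussed in the following sections.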
Under corresponding conditions, below the temperature of phase transition, these phenomena were observed in helium-4 and different classes of superconductors. In this sense, the superconductivity is often called the superfluidity of Fermi gas. In the simplest form, the origin of superfluidity can be seen from the weakly interacting bosons model. Experimental observation[edit] Superfluid He-4[edit] In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase at low temperature, which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate). The first "pure" Bose–Einstein condensate was created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They cooled a dilute vapor of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT condensed sodium-23. Ketterle's condensate had a hundred times more atoms, allowing important results such as the observation of quantum mechanical interference between two different condensates. Cornell, Wieman and Ketterle won the 2001 Nobel Prize in Physics for their achievements.[17] A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work.[18] Lithium has attractive interactions, causing the condensate to be unstable and collapse for all but a few atoms. Hulet's team subsequently showed the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed. Velocity-distribution data graph[edit] In the image accompanying this article, the velocity-distribution data indicates the formation of a Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: spatially confined atoms have a minimum width velocity distribution. This width is given by the curvature of the magnetic potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. 
This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein.[19]

Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin and form condensates. Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. In 1999 condensation was demonstrated in antiferromagnetic TlCuCl3,[20] at temperatures as large as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and the greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature,[21][22] with optical pumping. Excitons, electron-hole pairs, were predicted to condense at low temperature and high density by Boer et al. in 1961. Bilayer system experiments first demonstrated condensation in 2003, by Hall voltage disappearance. Fast optical exciton creation was used to form condensates in sub-kelvin Cu2O from 2005 on. Polariton condensation was detected in a 5 K quantum well microcavity.

Peculiar properties[edit]

As in many other systems, vortices can exist in BECs. These can be created, for example, by "stirring" the condensate with lasers, or by rotating the confining trap. The vortex created will be a quantum vortex. These phenomena are allowed for by the non-linear term in the GPE. As the vortices must have quantized angular momentum, the wavefunction may have the form

\psi(\mathbf{r}) = \phi(\rho, z)\, e^{i\ell\theta},

where \rho, z and \theta are as in the cylindrical coordinate system, and \ell is the angular quantum number. This is particularly likely for an axially symmetric (for instance, harmonic) confining potential, which is commonly used. The notion is easily generalized. To determine \phi(\rho, z), the energy of \psi(\mathbf{r}) must be minimized, according to the constraint \psi(\mathbf{r}) = \phi(\rho, z)\, e^{i\ell\theta}. This is usually done computationally; however, in a uniform medium the analytic form

\phi = \sqrt{n}\, \frac{x}{\sqrt{2 + x^2}}, \qquad x = \rho/\xi,

where n is the density far from the vortex and \xi is the healing length of the condensate, demonstrates the correct behavior and is a good approximation.

A singly charged vortex (\ell = 1) is in the ground state, with its energy \varepsilon_v given by

\varepsilon_v = \pi n \frac{\hbar^2}{m} \ln\!\left(\frac{b}{\xi}\right),

where b is the farthest distance from the vortex considered. (To obtain an energy which is well defined it is necessary to include this boundary b.) For multiply charged vortices (\ell > 1) the energy is approximated by

\varepsilon_v \approx \ell^2 \pi n \frac{\hbar^2}{m} \ln\!\left(\frac{b}{\xi}\right),

which is greater than that of \ell singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated that they are metastable states, so they may have relatively long lifetimes.

Closely related to the creation of vortices in BECs is the generation of so-called dark solitons in one-dimensional BECs. These topological objects feature a phase gradient across their nodal plane, which stabilizes their shape even in propagation and interaction. Although solitons carry no charge and are thus prone to decay, relatively long-lived dark solitons have been produced and studied extensively.[23]
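A quick way to see why the multiply charged vortices mentioned above tend to decay is to compare, using the logarithmic energy estimate quoted in this section, the energy of one vortex of charge l with that of l separate singly charged vortices. The ratio b/ξ used below is an arbitrary illustrative value; only the comparison matters.

```python
import math

def vortex_energy(charge: int, b_over_xi: float) -> float:
    """Energy of a straight vortex line of the given charge, in units of pi*n*hbar^2/m,
    using the estimate eps ~ charge^2 * ln(b/xi) quoted in the text."""
    return charge**2 * math.log(b_over_xi)

b_over_xi = 100.0   # illustrative ratio of system size to healing length (assumption)

for ell in (1, 2, 3):
    single = vortex_energy(ell, b_over_xi)
    split = ell * vortex_energy(1, b_over_xi)   # ell separate singly charged vortices
    print(f"l={ell}: one l-charged vortex = {single:6.1f},  "
          f"{ell} singly charged vortices = {split:6.1f}")
# For l >= 2 the single multiply charged vortex always costs more energy,
# so it is energetically favourable for it to split into unit vortices.
```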
Attractive interactions[edit]

Experiments led by Randall Hulet at Rice University from 1995 through 2000 showed that lithium condensates with attractive interactions could stably exist up to a critical atom number. Quench cooling the gas, they observed the condensate to grow, then subsequently collapse as the attraction overwhelmed the zero-point energy of the confining potential, in a burst reminiscent of a supernova, with an explosion preceded by an implosion.

Further work on attractive condensates was performed in 2000 by the JILA team of Cornell, Wieman and coworkers. Their instrumentation now had better control, so they used naturally attracting atoms of rubidium-85 (having a negative atom–atom scattering length). Through a Feshbach resonance involving a sweep of the magnetic field causing spin-flip collisions, they lowered the characteristic, discrete energies at which rubidium bonds, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among wave-like condensate atoms. When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud.[17] Carl Wieman explained that under current atomic theory this characteristic of Bose–Einstein condensates could not be explained, because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean-field theories have been proposed to explain it. Most likely they formed molecules of two rubidium atoms;[24] the energy gained by this bond imparts velocity sufficient to leave the trap without being detected.

Current research[edit]

Unsolved problem in physics: How do we rigorously prove the existence of Bose–Einstein condensates for general interacting systems?

Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile. The slightest interaction with the external environment can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas.[citation needed]

Bose–Einstein condensates composed of a wide range of isotopes have been produced.[28] Cooling fermions to extremely low temperatures has created degenerate gases, subject to the Pauli exclusion principle. To exhibit Bose–Einstein condensation, the fermions must "pair up" to form bosonic compound particles (e.g. molecules or Cooper pairs). The first molecular condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT.
Jin quickly went on to create the first fermionic condensate composed of Cooper pairs.[29] In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second.[clarification needed], using a superfluid.[30] Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates: details are discussed in Nature.[31] Researchers in the new field of atomtronics use the properties of Bose–Einstein condensates when manipulating groups of identical cold atoms using lasers.[32] Further, BECs have been proposed by Emmanuel David Tannenbaum for anti-stealth technology.[33] The effect has mainly been observed on alkaline atoms which have nuclear properties particularly suitable for working with traps. As of 2012, using ultra-low temperatures of 10−7 K or below, Bose–Einstein condensates had been obtained for a multitude of isotopes, mainly of alkali metal, alkaline earth metal, and lanthanide atoms (7Li, 23Na, 39K, 41K, 85Rb, 87Rb, 133Cs, 52Cr, 40Ca, 84Sr, 86Sr, 88Sr, 174Yb, 164Dy, and 168Er). Research was finally successful in hydrogen with aid of special methods. In contrast, the superfluid state of 4He below 2.17 K is not a good example, because the interaction between the atoms is too strong. Only 8% of atoms are in the ground state near absolute zero, rather than the 100% of a true condensate. The bosonic behavior of some of these alkaline gases appears odd at first sight, because their nuclei have half-integer total spin. It arises from a subtle interplay of electronic and nuclear spins: at ultra-low temperatures and corresponding excitation energies, the half-integer total spin of the electronic shell and half-integer total spin of the nucleus are coupled by a very weak hyperfine interaction. The total spin of the atom, arising from this coupling, is an integer lower value. The chemistry of systems at room temperature is determined by the electronic properties, which is essentially fermionic, since room temperature thermal excitations have typical energies much higher than the hyperfine values. See also[edit] 1. ^ "Leiden University Einstein archive". 27 October 1920. Retrieved 23 March 2011.  2. ^ Clark, Ronald W. (1971). Einstein: The Life and Times. Avon Books. pp. 408–409. ISBN 0-380-01159-X.  3. ^ F. London (1938). "The λ-Phenomenon of liquid Helium and the Bose–Einstein degeneracy". Nature. 141 (3571): 643–644. Bibcode:1938Natur.141..643L. doi:10.1038/141643a0.  4. ^ London, F. Superfluids Vol.I and II, (reprinted New York: Dover 1964) 5. ^ 6. ^ Levi, Barbara Goss (2001). "Cornell, Ketterle, and Wieman Share Nobel Prize for Bose–Einstein Condensates". Search & Discovery. Physics Today online. Archived from the original on 24 October 2007. Retrieved 26 January 2008.  7. ^ J. Klaers; J. Schmitt; F. Vewinger & M. Weitz (2010). "Bose–Einstein condensation of photons in an optical microcavity". Nature. 468 (7323): 545–548. arXiv:1007.4088free to read. Bibcode:2010Natur.468..545K. doi:10.1038/nature09567. PMID 21107426.  8. ^ (sequence A078434 in the OEIS) 9. ^ N. N. Bogoliubov (1947). "On the theory of superfluidity". J. Phys. (USSR). 11: 23.  10. ^ Beliaev, S. T. Zh. Eksp. Teor. Fiz. 34, 417–432 (1958) [Soviet Phys. JETP 7, 289 (1958)]; ibid. 34, 433–446 [Soviet Phys. JETP 7, 299 (1958)]. 11. ^ M. 
Schick (1971). "Two-dimensional system of hard-core bosons". Phys. Rev. A. 3 (3): 1067–1073. Bibcode:1971PhRvA...3.1067S. doi:10.1103/PhysRevA.3.1067.  12. ^ E. Kolomeisky; J. Straley (1992). "Renormalization-group analysis of the ground-state properties of dilute Bose systems in d spatial dimensions". Phys. Rev. B. 46 (18): 11749–11756. Bibcode:1992PhRvB..4611749K. doi:10.1103/PhysRevB.46.11749. PMID 10003067.  13. ^ E. B. Kolomeisky; T. J. Newman; J. P. Straley & X. Qi (2000). "Low-dimensional Bose liquids: Beyond the Gross-Pitaevskii approximation". Phys. Rev. Lett. 85 (6): 1146–1149. arXiv:cond-mat/0002282free to read. Bibcode:2000PhRvL..85.1146K. doi:10.1103/PhysRevLett.85.1146. PMID 10991498.  14. ^ S. Chui; V. Ryzhov (2004). "Collapse transition in mixtures of bosons and fermions". Phys. Rev. A. 69 (4): 043607. Bibcode:2004PhRvA..69d3607C. doi:10.1103/PhysRevA.69.043607.  15. ^ L. Salasnich; A. Parola & L. Reatto (2002). "Effective wave equations for the dynamics of cigar-shaped and disk-shaped Bose condensates". Phys. Rev. A. 65 (4): 043614. arXiv:cond-mat/0201395free to read. Bibcode:2002PhRvA..65d3614S. doi:10.1103/PhysRevA.65.043614.  16. ^ A. V. Avdeenkov; K. G. Zloshchastiev (2011). "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent". J. Phys. B: At. Mol. Opt. Phys. 44 (19): 195303. arXiv:1108.0847free to read. Bibcode:2011JPhB...44s5303A. doi:10.1088/0953-4075/44/19/195303.  17. ^ a b "Eric A. Cornell and Carl E. Wieman — Nobel Lecture" (PDF).  18. ^ C. C. Bradley; C. A. Sackett; J. J. Tollett & R. G. Hulet (1995). "Evidence of Bose-Einstein condensation in an atomic gas with attractive interactions" (PDF). Phys. Rev. Lett. 75 (9): 1687–1690. Bibcode:1995PhRvL..75.1687B. doi:10.1103/PhysRevLett.75.1687. PMID 10060366.  19. ^ Baierlein, Ralph (1999). Thermal Physics. Cambridge University Press. ISBN 0-521-65838-1.  20. ^ T. Nikuni; M. Oshikawa; A. Oosawa & H. Tanaka (1999). "Bose–Einstein condensation of dilute magnons in TlCuCl3". Phys. Rev. Lett. 84 (25): 5868–71. arXiv:cond-mat/9908118free to read. Bibcode:2000PhRvL..84.5868N. doi:10.1103/PhysRevLett.84.5868. PMID 10991075.  21. ^ S. O. Demokritov; V. E. Demidov; O. Dzyapko; G. A. Melkov; A. A. Serga; B. Hillebrands & A. N. Slavin (2006). "Bose–Einstein condensation of quasi-equilibrium magnons at room temperature under pumping". Nature. 443 (7110): 430–433. Bibcode:2006Natur.443..430D. doi:10.1038/nature05117. PMID 17006509.  22. ^ Magnon Bose Einstein Condensation made simple. Website of the "Westfählische Wilhelms Universität Münster" Prof.Demokritov. Retrieved 25 June 2012. 23. ^ C. Becker; S. Stellmer; P. Soltan-Panahi; S. Dörscher; M. Baumert; E.-M. Richter; J. Kronjäger; K. Bongs & K. Sengstock (2008). "Oscillations and interactions of dark and dark–bright solitons in Bose–Einstein condensates". Nature Physics. 4 (6): 496–501. arXiv:0804.0544free to read. Bibcode:2008NatPh...4..496B. doi:10.1038/nphys962.  24. ^ M. H. P. M. van Putten (2010). "Pair condensates produced in bosenovae". Phys. Lett. A. 374 (33): 3346–3347. Bibcode:2010PhLA..374.3346V. doi:10.1016/j.physleta.2010.06.020.  25. ^ Gorlitz, Axel. "Interference of Condensates (BEC@MIT)". Retrieved 13 October 2009.  26. ^ Z. Dutton; N. S. Ginsberg; C. Slowe & L. Vestergaard Hau (2004). "The art of taming light: ultra-slow and stopped light". Europhysics News. 35 (2): 33–39. Bibcode:2004ENews..35...33D. doi:10.1051/epn:2004201.  27. 
^ "From Superfluid to Insulator: Bose–Einstein Condensate Undergoes a Quantum Phase Transition". Retrieved 13 October 2009.  28. ^ "Ten of the best for BEC". 1 June 2005.  29. ^ "Fermionic condensate makes its debut". 28 January 2004.  30. ^ Cromie, William J. (18 February 1999). "Physicists Slow Speed of Light". The Harvard University Gazette. Retrieved 26 January 2008.  31. ^ N. S. Ginsberg; S. R. Garner & L. V. Hau (2007). "Coherent control of optical information with matter wave dynamics". Nature. 445 (7128): 623–626. doi:10.1038/nature05493. PMID 17287804.  32. ^ P. Weiss (12 February 2000). "Atomtronics may be the new electronics". Science News Online. 157 (7): 104. doi:10.2307/4012185. JSTOR 4012185. Retrieved 12 February 2011.  33. ^ Tannenbaum, Emmanuel David (1970). "Gravimetric Radar: Gravity-based detection of a point-mass moving in a static background". arXiv:1208.2377free to read [physics.ins-det].  Further reading[edit] External links[edit]
(This is a simple question, with likely a rather involved answer.)

What are the primary obstacles to solving the many-body problem in quantum mechanics? Specifically, if we have a Hamiltonian for a number of interdependent particles, why is solving for the time-independent wavefunction so hard? Is the problem essentially just mathematical, or are there physical issues too?

The many-body problem of Newtonian mechanics (for example gravitational bodies) seems to be very difficult, with no general solution for $n \ge 3$. Is the quantum mechanical case easier or more difficult, or both in some respects?

In relation to this, what sort of approximations/approaches are typically used to solve a system composed of many bodies in arbitrary states? (We do of course have perturbation theory, which is sometimes useful, though not in the case of strong coupling/interaction. Density functional theory, for example, applies well to solids, but what about arbitrary systems?) Finally, is it theoretically and/or practically impossible to simulate high-order phenomena such as chemical reactions and biological functions precisely using Schrödinger's quantum mechanics, or even QFT (quantum field theory)?

(Note: this question is largely intended for seeding, though I'm curious about answers beyond what I already know too!)

Why do you restrict it to quantum problems? –  Cedric H. Nov 4 '10 at 23:19

You could say restrict, but in many ways it's generalising! In any case, the problem is rather different for quantum mechanics, and certainly more interesting I find. –  Noldorin Nov 4 '10 at 23:30

5 Answers

[Accepted answer] First let me start by saying that the $N$-body problem in classical mechanics is not computationally difficult to approximate a solution to. It is simply that in general there is no closed-form analytic solution, which is why we must rely on numerics. For quantum mechanics, however, the problem is much harder. This is because in quantum mechanics the state space required to represent the system must be able to represent all possible superpositions of particles. While the number of orthogonal states is exponential in the size of the system, each has an associated phase and amplitude, which even with the most coarse-grained discretization will lead to a double exponential in the number of possible states required to represent it. Thus in quantum systems you need $O(2^{2^n})$ variables to reasonably approximate any possible state of the system, versus only $O(2^n)$ required to represent an analogous classical system. Since we can represent $2^m$ states with $m$ bits, to represent the classical state space we need only $O(n)$ bits, versus $O(2^n)$ bits required to directly represent the quantum system. This is why it is believed to be impossible to simulate a quantum computer in polynomial time, but Newtonian physics can be simulated in polynomial time.

Calculating ground states is even harder than simulating the systems. Indeed, in general finding the ground state of a classical Hamiltonian is NP-complete, while finding the ground state of a quantum Hamiltonian is QMA-complete.
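To make the scaling concrete, here is a small illustrative sketch (not part of the original answer) of the memory a dense state vector of $n$ spin-1/2 particles requires ($2^n$ complex amplitudes), compared with the $n$ bits of an analogous classical configuration:

```python
# Illustrative sketch: memory for a dense quantum state vector of n spin-1/2
# particles (2^n complex amplitudes, 16 bytes each as complex128) versus a
# classical configuration of the same particles (one bit per spin).
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n               # number of orthogonal basis states
    quantum_gib = amplitudes * 16 / 2 ** 30
    print(f"n = {n:2d}: {amplitudes:>16d} amplitudes (~{quantum_gib:14.1f} GiB) "
          f"vs {n} classical bits")
```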
An informative answer. Interestingly, I've heard that (universal) quantum computers should be able to simulate other quantum computers in polynomial time, just that classical computers can't. –  Noldorin Nov 5 '10 at 16:14

Yes, that is true. Calculating ground states seems to be beyond their reach though. –  Joe Fitzsimons Nov 5 '10 at 16:39

Ah I see. I suppose calculating ground states and performing complete quantum simulations of systems typically cover different ranges of application, however, so it's not all bad news. Anyway, cheers for the detail; you seem to be very knowledgeable on the subject; the answer is yours. –  Noldorin Nov 9 '10 at 3:22

@Noldorin: Thanks. I only know this stuff because I have spent quite a while working in this exact field. By the way, ground states are to some extent less relevant because the systems for which it is computationally hard to calculate the ground state (at least on a QC) don't cool efficiently either. –  Joe Fitzsimons Nov 9 '10 at 3:50

"The N-body problem in classical mechanics is not computationally difficult to approximate a solution to." This is somewhat misleading. The systems are typically chaotic, so they're fundamentally impossible to predict, for the same reason that long-term weather forecasts are fundamentally impossible. "Calculating ground states is even harder than simulating the systems." This may be true in some abstract sense, but it is not true in a practical sense. For example, it's relatively straightforward to get a good approximation to the ground state of a nucleus using the Hartree-Fock method. –  Ben Crowell Aug 16 '13 at 15:06

The answer is fairly simple: the classical N-body problem has its solution in $6N$ one-dimensional functions of time, while the quantum N-body problem has its solution in a single complex function that is $3N$-dimensional (not counting spin and similar stuff). It is then no wonder that one can find analytical solutions only for trivial problems, or must make $N$ huge and escape into statistical mechanics. And yes, this is purely a problem of mathematical complexity. From a modelling point of view, exact solving also seems hopeless, with a memory complexity of $\mathcal{O}(K^{3N})$ alone.

For the rest of the answer I will restrict myself to quantum chemistry/materials science, since this is the most exploited region -- this means we are now talking about atoms. First of all, atoms have small and very heavy nuclei, which can thus be treated as almost stationary sources of electrostatic potential; this reduces the problem to electrons only (the Born-Oppenheimer approximation). Now, there are two main routes to follow: Hartree-Fock or Density Functional Theory. In HF, one roughly represents the many-body wavefunction as a combination of some standard basis functions -- then one can optimize their contributions to get minimal energy, while using an extended Hamiltonian to adjust for the effects of such an approximation. In DFT, one, encouraged by the Hohenberg-Kohn theorems, reduces the many-body wavefunction to the electron probability density field (3-dimensional), and accordingly reduces the Schrödinger equation terms to density functionals (and there approximations are applied). Next, it can either be solved directly as this 3D field, or in the Kohn-Sham way, which is pretty much Hartree-Fock for DFT (one represents the density with basis functions). People sometimes make something analytical here, but those are mostly theories made to support computational approaches.

And finally your last question: those approximate methods (but still ab initio -- there are no experimental parameters there) do predict things like chemical reactions, various spectra and other measurable quantities; accuracy is problematic though.
Biology is mostly out of reach because of the time scale; at least there are hybrid methods able to mix, for instance, the classical simulation of protein motion with a quantum simulation of the binding site when it is squeezed enough that something quantum, like an enzymatic reaction, can take place.

Looks like a pretty good answer, I'll read it properly tomorrow. In any case, it's important to make clearer that although the "properties of the solution" are "fairly simple", the solutions themselves are certainly not! –  Noldorin Nov 5 '10 at 1:37

Note that H-F and DFT are the main approximation techniques for the quantum many-body problem, though neither are "well-controlled approximations" in the sense that they are used as the first term in a convergent expansion to the actual solution. And I'm not sure what level of computational complexity they reduce the problem to, though that's an important question. –  j.c. Nov 5 '10 at 14:42

@j.c. Those are approximated theories rather than approximated ways of solving equations. The reduction of complexity is obvious -- a 3N-dimensional function becomes a vector of parameters in the case of HF, or a 3-dimensional field in the case of DFT. –  mbq Nov 5 '10 at 17:23

In addition to what mbq said, it might be interesting to know that things get really funny in relativistic quantum mechanics, that is, using the Klein-Gordon and the Dirac equation (but without the "second" quantization of Quantum Field Theory). There, there is one wave function per kind of particle, so no matter how many particles of one kind you consider, the only thing that changes is the field itself. You only get more degrees of freedom by actually adding another kind of particle. Of course, since fermions require spinors, you may end up with other computational issues then...

The problem here, of course, is that the field modes are continuous variables. –  Joe Fitzsimons Nov 5 '10 at 6:47

By which I mean the problem with simulating the system, not a problem with your answer. –  Joe Fitzsimons Nov 5 '10 at 7:04

Yeah, I was curious as to whether QFT actually makes things easier in some respect. It's a tricky scenario. –  Noldorin Nov 5 '10 at 16:15

It definitely can only make things harder, as you can encode a discrete system in the CV but not necessarily the other way around. –  Joe Fitzsimons Nov 5 '10 at 16:38

Noldorin: QFT would probably make things even more complicated; I was only wondering whether the unquantized relativistic QM equations would yield an advantage over the non-QFT Schrödinger equation, but as @Joe mentions, this may not be the case... –  Tobias Kienzler Nov 8 '10 at 8:00

On a more abstract level, the problem is linearity versus non-linearity. It's straightforward to solve a number of linear equations, and they always yield an analytic answer. However, non-linear equations produce chaotic behaviour, which cannot be generalised in most cases. As an example, the 3-body Newtonian problem involves $\binom{3}{2} = 3$ non-linear equations (one per pair of bodies); the nonlinearity comes from the $1/r^2$ force law. And 3 non-linear relations are the minimum requirement for a chaotic system. Similarly, quantum mechanics involves a large number of non-linear equations: given a set of 3 electrons, each will repel the others via a non-linear relation, and with even more complexity than the Newtonian problem, where all things are known and determinable.
So, the simple answer is that the problem is mathematics that can't be solved for the general case, which results from the physics, and that the quantum case is indeed worse than the classical one.

The many-body equation is immensely difficult to study, both classically and quantum-mechanically. The late John Pople, of Northwestern University, won a Nobel Prize in 1998 for his numerical models of wave functions of atoms, developing a theoretical basis for their chemical properties. Here is a link:

Thanks for the info. I may just have to read some of Pople's papers some day, out of curiosity. :) –  Noldorin Nov 7 '10 at 1:14
Wednesday, 31 August 2016

Even More Teaching Time in Mathematics!

The Riksdag has said yes to the Government's proposal to further increase the total teaching time in mathematics in compulsory school by 105 hours, from 1020 to 1125 hours, this after the time was already increased by 120 hours in 2013. The total teaching time in all subjects is 6785 hours, which means that every sixth school day, or nearly a whole day every week, is to be devoted to mathematics throughout all 9 years of compulsory school. The legislative referral behind the decision argues as follows:

1. Mathematics is one of three subjects required for eligibility for all national programmes in upper secondary school.
2. Basic knowledge of mathematics is also a prerequisite for getting through many university programmes.
3. For the individual pupils it is of great importance that they acquire the mathematical knowledge they will need in working life or in further studies.
4. That they have such knowledge is important for society at large as well.
5. Much indicates, however, that Swedish pupils' knowledge of mathematics has deteriorated during the 2000s.
6. As reported in the memorandum, there is international research supporting the connection between increased teaching time and learning outcomes.
7. No change of the syllabus or of the knowledge requirements in mathematics is intended on account of the increase in teaching time.

The logic appears to be that if yet more time is devoted to a syllabus/teaching with documented poor results, then the results will improve. Who could have come up with such a preposterous proposal?

Sverker Lundin offers an explanation in Who wants to be scientific, anyway?: Mathematics (or science) has become the new religion of modernity now that the old one has lain down to die, a religion that no adult really believes in and very few practise, but a religion that it has become fashionable and politically correct to profess in the name of modernity, though only in "interpassive" form, with defenceless schoolchildren as the recipients of the sermon.

In this mock play there are never enough rituals for displaying one's firm faith, and so the number of hours in mathematics will continue to increase, while the results continue to sink, and it only becomes more and more important, both for the individual pupils and for society at large, that the mathematical knowledge needed in school is actually taught in school.

The new 105 hours are primarily to go to the middle years (grades 4-6), while the 120 hours added in 2013 were aimed mainly at the early years (grades 1-3). This reflects a widespread notion that something fundamental has gone wrong in early mathematics teaching, though it is unclear what, and that if only this early mistake, unclear what it is, is avoided or quickly corrected through extra hours, then everything will go so much better. But a one-sided hunt for avoiding the first mistake, unclear which it is, will of course mean that there is not much time left for advancement in later grades, but perhaps that does not matter so much...
Monday, 15 August 2016

New Quantum Mechanics 19: 1st Excitation of He

Here are results for the first excitation of the Helium ground state into a 1S2S state, with excitation energy = 0.68 = 2.90 - 2.22, to be compared with the observed 0.72:

Sunday, 14 August 2016

New Quantum Mechanics 18: Helium Ground State Revisited

Concerning the ground state and ground state energy of Helium, the following clarification can be made: Standard quantum mechanics describes the ground state of Helium as $1S^2$, with a 6d wave function $\psi (x1,x2)$ depending on two 3d Euclidean space coordinates $x1$ and $x2$ of the form

• $\psi (x1,x2) = C\exp(-Z\vert x1\vert )\exp (-Z\vert x2\vert )$,       (1)

with $Z = 2$ the kernel charge and $C$ a normalising constant. This describes two identical spherically symmetric electron distributions as the solution of a reduced Schrödinger equation without electronic repulsion potential, with a total energy $E = -4$, way off the observed $-2.903$. To handle this discrepancy between model and observation, the following corrections in the computation of total energy are made, while keeping the spherically symmetric form (1) of the ground state as the solution of a reduced Schrödinger equation:

1. Including the Coulomb repulsion energy of (1) gives $E = -2.75$.
2. Changing the kernel attraction to $Z = 2 - 5/16$, claiming screening, gives $E = -2.85$.
3. Changing the Coulomb repulsion by inflating the wave function to depend on $\vert x1 - x2\vert$ can give at best $E = -2.903724...$, to be compared with the precise observation of $-2.903385$ according to the NIST atomic database, thus with a relative error of $0.0001$. Here the dependence on $\vert x1 - x2\vert$ of the inflated wave function, upon integration with respect to $x2$, reduces to a dependence on only the modulus of $x1$. Thus the inflated, non-spherically-symmetric wave function can be argued to anyway represent two spherically symmetric electronic distributions.

We see that a spherically symmetric ground state of the form (1) is claimed to have the correct energy, by suitably modifying the computation of energy so as to give a perfect fit with observation. This kind of physics has been very successful and convincing (in particular to physicists), but it may be that it should be subject to critical scientific scrutiny. The ideal in any case is a model with a solution which ab initio, in direct computation, has the correct energy, not a model with a solution which has the correct energy only if the computation of energy is changed by some ad hoc trick until it matches.

The effect of the fix according to 3. is to introduce a correlation between the two electrons to the effect that they tend to appear on opposite sides of the kernel, thus avoiding close contact. Such an effect can be introduced by angular weighting in (1), which can reduce the electron repulsion energy but at the expense of increasing the kinetic energy by angular variation of wave functions with global support, and then seemingly without sufficient net effect. With the local support of the wave functions of the new model, meeting with a homogeneous Neumann condition (more or less vanishing kinetic energy), such an increase of kinetic energy is not present and a good match with observation is obtained.
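As a check on the numbers in points 1 and 2 of the list above, here is a minimal sketch (my own illustration, using the textbook variational energy for a single effective charge $Z$) that reproduces $E = -2.75$ and $E \approx -2.85$:

```python
# Textbook variational energy (Hartree units) for helium with trial wave
# function exp(-Z(r1+r2)):  E(Z) = Z^2 - 2*Z_nuc*Z + (5/8)*Z
Z_nuc = 2.0

def E(Z):
    return Z**2 - 2.0 * Z_nuc * Z + (5.0 / 8.0) * Z

print(E(2.0))               # -2.75, point 1: unscreened Z = 2
Z_scr = Z_nuc - 5.0 / 16.0  # minimiser of E(Z), the "screened" charge 2 - 5/16
print(Z_scr, E(Z_scr))      # 1.6875, about -2.848, point 2
```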
Friday, 12 August 2016

New Quantum Mechanics 17: The Nightmare of the Multi-Dimensional Schrödinger Equation

Once Schrödinger had formulated his equation for the Hydrogen atom with one electron, and with great satisfaction observed an amazing correspondence to experimental data, he faced the problem of generalising his equation to atoms with many electrons. The basic problem was the generalisation of the Laplacian to the case of many electrons, and here Schrödinger took the easy route (in the third of the Four Lectures on Wave Mechanics delivered at the Royal Institution in 1928) of a formal generalisation introducing a set of new independent space coordinates and an associated Laplacian for each new electron, thus ending up with a wave function $\psi (x1,...,xN)$ for an atom with $N$ electrons depending on $N$ 3d spatial coordinates $x1$,...,$xN$.

Already Helium, with a Schrödinger equation in 6 spatial dimensions, then posed a severe computational problem, which Schrödinger did not attempt to solve. With a resolution of $10^2$ for each coordinate, an atom with $N$ electrons gives a discrete problem with $10^{6N}$ unknowns, which already for Neon with $N = 10$ is an astronomically large number ($10^{60}$). The easy generalisation thus came with the severe side-effect of giving a computationally hopeless problem, and thus, from a scientific point of view, a meaningless model.

To handle the absurdity of the $3N$ dimensions, rescue steps were then taken by Hartree and Fock to reduce the dimensionality by restricting wave functions to be linear combinations of products of one-electron wave functions $\psi_j(xj)$ with global support:

• $\psi_1(x1)\times\psi_2(x2)\times ....\times\psi_N(xN)$,

to be solved computationally by iterating over the one-electron wave functions. The dimensionality was further reduced by postulating ad hoc that only fully symmetric or anti-symmetric wave functions (in the variables $(x1,...,xN)$) would describe physics, adding ad hoc a Pauli Exclusion Principle along the way to help the case. But the dimensionality was still large, and to get results in correspondence with observations required ad hoc trial-and-error choice of one-electron wave functions in the Hartree-Fock computations setting the standard.

We thus see an easy generalisation into many dimensions followed by a very troublesome rescue operation stepping back from the many dimensions. It would seem more rational not to give in to the temptation of easy generalisation, and in this sequence of posts we explore such a route.

PS In the second of the Four Lectures Schrödinger argues against an atom model in terms of charge density by comparing with the known Maxwell's equations for electromagnetics in terms of electromagnetic fields, which work so amazingly well, with the prospect of a model in terms of energies, which is not known to work.
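A quick numerical illustration of the counts discussed above (my own sketch, not from the post), comparing the full-grid problem with the much smaller number of unknowns in a Hartree-type product ansatz of the kind taken up in the next post:

```python
# Unknowns for the full N-electron wave function on a grid with 10^2 points per
# coordinate (3N coordinates), versus a product of N one-electron functions.
resolution = 100                          # 10^2 grid points per coordinate
for N in (1, 2, 10):
    full_exponent = 6 * N                 # full wave function: (10^2)^(3N) = 10^(6N) unknowns
    product = N * resolution ** 3         # Hartree-type ansatz: N separate 3d functions
    print(f"N = {N:2d}: full wave function 10^{full_exponent} unknowns, "
          f"product ansatz {product:,} unknowns")
```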
Thursday, 11 August 2016

New Quantum Mechanics 16: Relation to Hartree and Hartree-Fock

The standard computational form of the quantum mechanics of an atom with $N$ electrons (Hartree or Hartree-Fock) seeks solutions to the standard multi-dimensional Schrödinger equation as linear combinations of wave functions $\psi (x1,x2,...,xN)$, depending on $N$ 3d space coordinates $x1$,...,$xN$, of product form:

• $\psi (x1,x2,...,xN)=\psi_1(x1)\times\psi_2(x2)\times ....\times\psi_N(xN)$,

where the $\psi_j$ are globally defined electronic wave functions, each depending on a single space coordinate $xj$.

The new model takes the form of a non-standard free boundary Schrödinger equation in a wave function $\psi (x)$ of sum form:

• $\psi (x)=\psi_1(x)+\psi_2(x)+....+\psi_N(x)$,

where the $\psi_j(x)$ are electronic wave functions with local support on a common partition of 3d space with common space coordinate $x$. The difference between the new model and Hartree/Hartree-Fock is evident and profound. A big trouble with electronic wave functions having global support is that they overlap and so demand an exclusion principle and a new physics of exchange energy. The wave functions of the new model do not overlap, and there is no need of any exclusion principle or exchange energy.

PS Standard quantum mechanics comes with new forms of energy such as exchange energy and correlation energy. Here correlation energy is simply the difference between the experimental total energy and the total energy computed with Hartree-Fock, and thus is not a physical form of energy as suggested by the name, but simply a computational/modelling error.

Wednesday, 10 August 2016

New Quantum Mechanics 15: Relation to "Atoms in Molecules"

Atoms in Molecules, developed by Richard Bader, is a charge density theory based on basins of attraction of atomic kernels, with boundaries characterised by a vanishing normal derivative of the charge density. This connects to the homogeneous Neumann boundary condition identifying the separation between electrons in the new model under study in this sequence of posts. Atoms in Molecules is focussed on the role of atomic kernels in molecules, while the new model primarily (so far) concerns electrons in atoms.

New Quantum Mechanics 14: $H^-$ Ion

Below are results for the $H^-$ ion with two electrons and a proton. The ground state energy comes out as -0.514, slightly below the energy -0.5 of $H$, which means that $H$ is slightly electronegative and thus, by acquiring an electron into $H^-$, may react with $H^+$ to form $H2$ (with ground state energy -1.17), as one possible route to the formation of $H2$. Another route is covered in this post, with two H atoms being attracted to form a covalent bond. The two electron wave functions of $H^-$ occupy half-spherical domains (depicted in red and blue) and meet at a plane with a homogeneous Neumann condition satisfied on both sides.

Sunday, 7 August 2016

New Quantum Mechanics 13: The Trouble with Standard QM

Standard quantum mechanics of atoms is based on the eigenfunctions of the Schrödinger equation for a Hydrogen atom with one electron, named "orbitals", which form the elements of the Aufbau or build-up of many-electron atoms in the form of s, p, d and f orbitals of increasing complexity, see below. These "orbitals" have global support, which has led to the firm conviction that all electrons must have global support and so have to be viewed as always being everywhere and nowhere at the same time (as a basic mystery of qm beyond the conception of human minds). To handle this strange situation Pauli felt forced to introduce his exclusion principle, while strongly regretting ever having come up with such an idea, even in his Nobel Lecture:

• Of course in the beginning I hoped that the new quantum mechanics, with the help of which it was possible to deduce so many half-empirical formal rules in use at that time, will also rigorously deduce the exclusion principle.
• Instead of it there was for electrons still an exclusion: not of particular states any longer, but of whole classes of states, namely the exclusion of all classes different from the antisymmetrical one.
• The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems to me unavoidable.

In my model electrons have local support and occupy different regions of space, and thus have physical presence. Besides, the model seems to fit with observations. It may be that this is the way it is.

The trouble with (modern) physics is largely the trouble with standard QM, the rest of the trouble being caused by Einstein's relativity theory. Here is recent evidence of the crisis of modern physics: The LHC "nightmare scenario" has come true.

Here is a catalogue of "orbitals" believed to form the Aufbau of atoms:

And here is the Aufbau of the periodic table, which is filled with ad hoc rules (Pauli, Madelung, Hund, ...) and exceptions from these rules:

Saturday, 6 August 2016

New Quantum Mechanics 12: H2 Non Bonded

Here are results for two hydrogen atoms forming an H2 molecule at kernel distance R = 1.4, at a minimal total energy of -1.17, and a non-bonded molecule for larger distances, approaching full separation for R larger than 6-10 at a total energy of -1. The results fit quite well with the table data listed below. The computations were made (on an iPad) in cylindrical coordinates with rotational symmetry around the molecule axis, on a mesh of 2 x 400 along the axis and 100 in the radial direction. The electrons are separated by a plane perpendicular to the axis through the molecule center, with a homogeneous Neumann boundary condition for each electron's half-space Schrödinger equation. The electronic potentials are computed by solving a Poisson equation in full space for each electron.

PS To capture the approach of the energy to -1 as R becomes large, in particular the (delicate) $R^{-6}$ dependence of the van der Waals force, requires a (second-order) perturbation analysis, which is beyond the scope of the basic model under study, with its $R^{-1}$ dependence of kernel and electronic potential energies.

TABLE II (L. Wolniewicz, "Relativistic energies of the ground state of the hydrogen molecule", J. Chem. Phys. 99, 1851 (1993)): Born-Oppenheimer total energies E of the ground state of the hydrogen molecule, for two hydrogen atoms separated by a distance R (bohr):

R        E
0.20      2.197803500
0.30      0.619241793
0.40     -0.120230242
0.50     -0.526638671
0.60     -0.769635353
0.80     -1.020056603
0.90     -1.083643180
1.00     -1.124539664
1.10     -1.150057316
1.20     -1.164935195
1.30     -1.172347104
1.35     -1.173963683
1.40     -1.174475671
1.45     -1.174057029
1.50     -1.172855038
1.60     -1.168583333
1.70     -1.162458688
1.80     -1.155068699
2.00     -1.138132919
2.20     -1.120132079
2.40     -1.102422568
2.60     -1.085791199
2.80     -1.070683196
3.00     -1.057326233
3.20     -1.045799627
3.40     -1.036075361
3.60     -1.028046276
3.80     -1.021549766
4.00     -1.016390228
4.20     -1.012359938
4.40     -1.009256497
4.60     -1.006895204
4.80     -1.005115986
5.00     -1.003785643
5.20     -1.002796804
5.40     -1.002065047
5.60     -1.001525243
5.80     -1.001127874
6.00     -1.000835702
6.20     -1.000620961
6.40     -1.000463077
6.60     -1.000346878
6.80     -1.000261213
7.00     -1.000197911
7.20     -1.000150992
7.40     -1.000116086
7.60     -1.000090001
7.80     -1.000070408
8.00     -1.000055603
8.50     -1.000032170
9.00     -1.000019780
9.50     -1.000012855
10.00    -1.000008754
11.00    -1.000004506
12.00    -1.000002546
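As a simple check on the bond length quoted above, a few lines of Python (my own illustration, not part of the original post) that locate the minimum of the tabulated curve:

```python
# Locate the minimum of the tabulated Born-Oppenheimer curve E(R) near the bond length.
data = [(1.30, -1.172347104), (1.35, -1.173963683), (1.40, -1.174475671),
        (1.45, -1.174057029), (1.50, -1.172855038)]   # rows near the minimum, from the table above
R_min, E_min = min(data, key=lambda row: row[1])
print(R_min, E_min)   # 1.4 -1.174475671, the kernel distance and energy quoted in the post
```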
Wednesday, 3 August 2016

New Quantum Mechanics 11: Helium Mystery Resolved

The modern physics of quantum mechanics born in 1926 was a towering success for the Hydrogen atom with one electron, but already Helium with two electrons posed difficulties, which have (truth be told) never been resolved. The result is that prominent physicists always pride themselves on stating that quantum mechanics cannot be understood, only followed, to the benefit of humanity, like a religion:

• I think I can safely say that nobody understands quantum mechanics. (Richard Feynman, in The Character of Physical Law (1965))

Textbooks and tables list the ground state of Helium as $1S^2$, with two spherically symmetric electrons (the S) with opposite spin in a first shell (the 1), named parahelium. The energy of a $1S^2$ state according to basic quantum theory is equal to -2.75 (Hartree), while the observed ground state energy is -2.903. To handle this apparent collapse of basic quantum theory, the computation of energy is changed by introducing a suitable perturbation away from spherical symmetry which delivers the wanted result of -2.903, while maintaining that the ground state still is $1S^2$.

Of course, this does not make sense, but since quantum mechanics is not "anschaulich" or "visualisable" (as required by Schrödinger) and therefore cannot be understood by humans, this is not a big deal. By a suitable perturbation the desired result can be reached, and we are not allowed to ask any further questions, following the dictate of Dirac: shut up and calculate.

New Quantum Mechanics resolves the situation as follows: The ground state is predicted to be a spherically (half-)symmetric continuous electron charge distribution, with each electron occupying a half-space and the electrons meeting at a plane (free boundary) where the normal derivative of each electron charge distribution vanishes. The result of ground state energy computations according to earlier posts shows close agreement with the observed -2.903:

Notice the asymmetric electron potential and the resulting slightly asymmetric charge distribution with polar accumulation. The model shows a non-standard electron configuration, which may be the true one (if there is anything like that).
Why I think the Foundational Research Institute should rethink its approach
by Mike Johnson

I. What is the Foundational Research Institute?
   What I like about FRI
   What is FRI's research framework?
II. Why do I worry about FRI's research framework?
   Objection 1: Motte-and-bailey
   Objection 2: Intuition duels
   Objection 3: Convergence requires common truth
   Objection 5: The Hard Problem of Consciousness is a red herring
   Objection 6: Mapping to reality
   McCabe concludes that, metaphysically speaking,
   Objection 7: FRI doesn't fully bite the bullet on computationalism
   Objection 8: Dangerous combination
   Three themes which seem to permeate FRI's research are: (1) Suffering is the thing that is bad.
III. QRI's alternative
   But is it right?
   What we've built with QRI's framework
IV. Closing thoughts

Mike Johnson, Qualia Research Institute

My sources for FRI's views on consciousness:
   Flavors of Computation are Flavors of Consciousness:
   Is There a Hard Problem of Consciousness?
   Consciousness Is a Process, Not a Moment
   How to Interpret a Physical System as a Mind
   Dissolving Confusion about Consciousness
Debate between Brian & Mike on consciousness:
Max Daniel's EA Global Boston 2017 talk on s-risks:
Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:
The Internet Encyclopedia of Philosophy on functionalism:
Gordon McCabe on why computation doesn't map to physics:
Luke Muehlhauser's OpenPhil-funded report on consciousness and moral patienthood:
Scott Aaronson's thought experiments on computationalism:
My work on formalizing phenomenology:
My colleague Andrés's work on formalizing phenomenology:
A parametrization of various psychedelic states as operators in qualia space:
A brief post on valence and the fundamental attribution error:

Connectome-Specific Harmonic Waves on LSD

The harmonics-in-connectome approach to modeling brain activity is a fascinating paradigm. I am privileged to have been at this talk at the 2017 Psychedelic Science conference. I'm extremely happy to find out that MAPS already uploaded the talks. Dive in! Below is a partial transcript of the talk. I figured that I should get it in written form in order to be able to reference it in future articles. Enjoy!

[After a brief introduction about harmonic waves in many different kinds of systems... at 7:04, Selen Atasoy]: We applied the [principle of harmonic decomposition] to the anatomy of the brain. We made them connectome-specific. So first of all, what do I mean by the human connectome? Today, thanks to recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex. And the set of all of the different connections is called the connectome. Now, because we know the equation governing these harmonic waves, we can extend this principle to the human brain by simply solving the same equation on the human connectome instead of a metal plate (Chladni plates) or the anatomy of the zebra. And if you do that, we get a set of harmonic patterns, this time emerging in the cortex. And we decided to call these harmonic patterns connectome harmonics. And each of these connectome harmonic patterns is associated with a different frequency.
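[Aside, not part of the talk: mathematically, these connectome harmonics are obtained as eigenmodes of a Laplacian defined on the connectome. A minimal sketch of that kind of computation, with a random symmetric matrix standing in for real connectome data, might look as follows.]

```python
# Minimal sketch: "harmonic modes" of a connectivity matrix, computed as
# eigenvectors of its graph Laplacian. A random symmetric matrix stands in
# for a real connectome (which would come from diffusion imaging).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 50))
A = (A + A.T) / 2           # symmetric "connectivity" weights
np.fill_diagonal(A, 0.0)

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian
eigenvalues, eigenvectors = np.linalg.eigh(L)

# eigenvectors[:, k] is the k-th harmonic pattern; its eigenvalue plays the
# role of a (squared) spatial frequency: low k = smooth, high k = fine-grained.
print(eigenvalues[:5])
```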
And because they correspond to different frequencies they are all independent, and together they give you a new language, so to speak, to describe neural activity. So in the same way the harmonic patterns are building blocks of these complex patterns we see on animal coats, these connectome harmonics are the building blocks of the complex spatio-temporal patterns of neural activity. Describing and explaining neural activity by using these connectome harmonics as brain states is really not very different than decomposing a complex musical pieces into its musical notes. It’s simply a new way of representing your data, or a new language to express it. What is the advantage of using this new language? So why not use the state-of-the-art conventional neurimaging analysis methods? Because these connectome harmonics, by definition are the vibration modes, but applied to the anatomy of the human brain, and if you use them as brain states to express neural activity we can compute certain fundamental principles very easily such as the energy or the power. The power would be the strength of activation of each of these states in neural activity. So how strongly that particular state contributes to neural activity. And the energy would be a combination of this strength of activation with the intrinsic energy of that particular brain state, and the intrinsic energy comes from the frequency of its vibration (in the analogy of vibration). So in this study we looked at the power and the energy of these connectome harmonic brain states in order to explore the neural correlates of the LSD experience. We looked at 12 healthy participants who received either 75µg of LSD (IV) or a placebo, over two sessions. These two sessions were 14 days apart in counter-balanced order. And the fMRI scans consisted of 3 eyes-closed resting states scans, each lasting 7 minutes, in the first and the third scan the participants were simply resting, eyes closed, but in the second scan they were also listening to music. And after each scan, the participants rated the intensity of certain experiences. So if you look at, firstly, at the total power and the total energy of each of these scans under LSD and placebo, what we see is that under LSD both the power as well as the energy of brain activity increases significantly. And if we compute the probability of observing a certain energy value on LSD or placebo, what we see is that the peak of this probability distribution clearly shoots towards high energy values under LSD. And that peak is even slightly higher in terms of probability when the subjects were listening to music. So if we interpret that peak as, in a way, the characteristic energy of a state, you can see that it shifts towards higher energy under LSD, and that this effect is intensified when listening to music. And then we asked, which of these brain states, which of these frequencies, were actually contributing to this energy increase. So we partitioned the spectrum of all of these harmonic brain states into different parts and computed the energy of each of these partitions individually. So in total we have around 20,000 brain states. And if you look at the energy differences in LSD and placebo, what we find is that for a very narrow range of low frequencies actually these brain states were decreasing their energy on LSD. But for a very broad range of high frequencies, LSD was inducing an energy increase. So this says that LSD alters brain dynamics in a very frequency-selective manner. 
And it was causing high frequencies to increase their energy. So next we looked at whether these changes we are observing in brain activity are correlated with any of the experiences that the participants themselves were having in that moment. If you look at the energy changes within the narrow range of low frequencies, we found that the energy changes in that range significantly correlated with the intensity of the experience of ego dissolution. The loss of subjective self. And very interestingly, the same range of energy change within the same frequency range also significantly correlated with the intensity of emotional arousal, whether the experience was positive or negative. This could be quite relevant for studies looking into potential therapeutic applications of LSD. Next, when we look at a slightly higher range of frequencies, what we found was that the energy changes within that range significantly correlated with the positive mood. In brief, this suggests that it’s rather the low frequency brain states which correlated with ego dissolution or with emotional arousal, and it’s the activity of higher frequencies that is correlated with the positive experiences. Next, we wanted to check the size of the repertoire of active brain states. And if you look at the probability of activation for any brain state (so we are not distinguishing for any frequency brain states), what we observe is that the probability of a brain state being silent (zero contribution), actually decreased under LSD. And the probability of a brain state contributing very strongly, which corresponds to the tails of these distributions, were increased under LSD. So this suggests that LSD was activating more brain states simultaneously. And if we go back to the music analogy that we used in the beginning, that would correspond to playing more musical notes at the same time. And it’s very interesting, because studies that have looked at improvising, those who have looked at jazz improvisation, show that improvising jazz musicians play significantly more musical notes compared to memorized play. And this is what we seem to be finding under the effect of LSD. That your brain is actually activating more of these brain states simultaneously. And it does so in a very non-random fashion. So if you look at the correlation across different frequencies. Like at the co-activation patterns, and their activation over time. You may interpret it as the “communication across various frequencies”. What we found is that for a very broad range of the spectrum, there was a higher correlation across different frequencies in their activation patterns under LSD compared to placebo. So this really says that LSD is actually causing a reorganization, rather than a random activation of brain states. It’s expanding the repertoire of active brain states, while maintaining -or maybe better said- recreating a complex but spontaneous order. And in the musical analogy it’s really very similar to jazz improvisation, to think about it in an intuitive way. Now, there is actually one particular situation when dynamical systems such as the brain, and systems that change their activity over time, show this type of emergence of complex order, or enhanced improvisation, enhanced repertoire of active states. And this is when they approach what is called criticality. Now, criticality is this special type of behavior, special type of dynamics, that emerges right at the transition between order and chaos. When these two (extreme) types of dynamics are in balance. 
And criticality is said to be “the constantly shifting battle zone between stagnation and anarchy. The one place where a complex system can be spontaneous, adaptive, and alive” (Waldrop 1992). So if a system is approaching criticality, there are very characteristic signatures that you would observed in the data, in the relationships that you plot in your data. And one of them is -and probably the most characteristic of them- is the emergence of power laws. So what does that mean? If you plot one observable in our data, which for example, in our case would be the maximum power of a brain state, in relationship to another observable, for example, the wavenumber, or the frequency of that brain state, and you plot them in logarithmic coordinates, that would mean that if they follow power laws, they would approximate a line. And this is exactly what we observe in our data, and surprisingly for both LSD as well as for placebo, but with one very significant and remarkable difference: because the high frequencies increase their power on LSD, this distribution follows this power law, this line, way more accurately under LSD compared to placebo. And here you see the error of the fit, which is decreasing. This suggests that LSD shoots brain dynamics further towards criticality.  The signature of criticality that we find in LSD and in placebo is way more enhanced, way more pronounced, under the effect of LSD. And we found the same effect, not only for the maximum power, but also for the mean power, as well as for the power of fluctuations. So this suggests that the criticality actually may be the principle that is underlying this emergence of complex order, and this reorganization of brain dynamics, and which leads to enhanced improvisation in brain activity. So, to summarize briefly, what we found was that LSD increases the total power as well as total energy of brain activity. It selectively activates high frequency brain states, and it expands the repertoire or active brain states in a very non-random fashion. And the principle underlying all of these changes seems to be a reorganization of brain dynamics, right at criticality, right at the edge of chaos, or just as the balance between order and chaos. And very interestingly, the “edge of chaos”, or the edge of criticality, is said to be where “life has enough stability to sustain itself, and enough creativity to deserve the name of life” (Waldrop 1992). So I leave you with that, and thank you for your attention. [Applauses; ends at 22:00, followed by Q&A] ELI5 “The Hyperbolic Geometry of DMT Experiences” I wrote the following in response to a comment on the r/RationalPsychonaut subreddit about this DMT article I wrote some time ago. The comment in question was: “Can somebody eli5 [explain like I am 5 years old] this for me?” So here is my attempt (more like “eli12”, but anyways): In order to explain the core idea of the article I need to convey the main takeaways of the following four things: 1. Differential geometry, 2. How it relates to symmetry, 3. How it applies to experience, and 4. How the effects of DMT turn out to be explained (in part) by changes in the curvature of one’s experience of space (what we call “phenomenal space”). 1) Differential Geometry If you are an ant on a ball, it may seem like you live on a “flat surface”. However, let’s imagine you do the following: You advance one centimeter in one direction, you turn 90 degrees and walk another centimeter, turn 90 degrees again and advance yet another centimeter. 
Logically, you just "traced three edges of a square", so you cannot be in the same place from which you departed. But let's say that you somehow do happen to arrive at the same place. What happened? Well, it turns out the world in which you are walking is not quite flat! It's very flat from your point of view, but overall it is a sphere! So you ARE able to walk along a triangle that happens to have three 90-degree corners. That's what we call a "positively curved space". There the angles of triangles add up to more than 180 degrees. In flat spaces they add up to 180. And in "negatively curved spaces" (i.e. "negative Gaussian curvature" as talked about in the article) they add up to less than 180 degrees.

Eight 90-degree triangles on the surface of a sphere

So let's go back to the ant again. Now imagine that you are walking on some surface that, again, looks flat from your restricted point of view. You walk one centimeter, then turn 90 degrees, then walk another, turn 90 degrees, etc. for a total of, say, 5 times. And somehow you arrive at the same point! So now you traced a pentagon with 90-degree corners. How is that possible? The answer is that you are now in a "negatively curved space", a kind of surface that in mathematics is called "hyperbolic". Of course it sounds impossible that this could happen in real life. But the truth is that there are many hyperbolic surfaces that you can encounter in your daily life. Just to give an example, kale is a highly hyperbolic 2D surface ("H2" for short). It's crumbly and very curved. So an ant might actually be able to walk along a regular pentagon with 90-degree corners if it's walking on kale (cf. Too Many Triangles).

An ant walking on kale may infer that the world is an H2 space.

In brief, hyperbolic geometry is the study of spaces that have this quality of negative curvature. Now, how is this related to symmetry?

2) How it relates to symmetry

As mentioned, on the surface of a sphere you can find triangles with 90-degree corners. In fact, you can partition the surface of a sphere into 8 regular triangles, each with 90-degree corners. Now, there are also other ways of partitioning the surface of a sphere with regular shapes ("regular" in the sense that every edge has the same length, and every corner has the same angle). But the number of ways to do it is not infinite. After all, there's only a handful of regular polyhedra (which, when "inflated", are equivalent to the ways of partitioning the surface of a sphere in regular ways).

If you instead want to partition a plane in a regular way with geometric shapes, you don't have many options. You can partition it using triangles, squares, and hexagons. And in all of those cases, the angles at each of the vertices will add up to 360 degrees (e.g. six triangles, four squares, or three corners of hexagons meeting at a point). I won't get into Wallpaper groups, but suffice it to say that there are also only a limited number of ways of breaking down a flat surface using symmetry elements (such as reflections, rotations, etc.).

Regular tilings of 2D flat space

Hyperbolic 2D surfaces, on the other hand, can be partitioned in regular ways in infinitely many different ways! This is because we no longer have the constraints imposed by flat (or spherical) geometries, where the angles of shapes must add up to a certain number of degrees.
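To make the angle counting above concrete, here is a small sketch (my own illustration) that classifies a tiling by regular n-gons, with k of them meeting at each corner, according to whether the corner angles fall short of, exactly fill, or exceed 360 degrees:

```python
# Classify a regular tiling {n, k}: k regular n-gons meeting at each vertex.
# The corner angle of a flat regular n-gon is (n - 2) * 180 / n degrees.
def classify(n, k):
    total = k * (n - 2) * 180.0 / n
    if total < 360:
        return "spherical (wraps onto a sphere, like a regular polyhedron)"
    if total == 360:
        return "flat (an ordinary tiling of the plane)"
    return "hyperbolic (only fits on a negatively curved surface)"

print(classify(3, 4))   # four triangles per corner  -> spherical (the 8-triangle sphere above)
print(classify(6, 3))   # three hexagons per corner  -> flat (honeycomb)
print(classify(7, 3))   # three heptagons per corner -> hyperbolic
```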
As mentioned, on hyperbolic surfaces the corners of triangles add up to less than 180 degrees, so you can fit more than 6 corners of equilateral triangles at one point (and depending on the curvature of the space, you can fit up to an infinite number of them). Likewise, you can tessellate the entire hyperbolic plane with heptagons.

Hyperbolic tiling: each of the heptagons is just as big (i.e. this is a projection of the real thing)

On the flip side, if you see a regular partitioning of a surface, you can infer what its curvature is! For example, if you see that a surface is entirely covered with heptagons, three on each of the corners, you can be sure that you are seeing a hyperbolic surface. And if you see a surface covered with triangles such that there are only four triangles on each joint, then you know you are seeing a spherical surface. So if you train yourself to notice and count these properties in regular patterns, you will indirectly also be able to determine whether the patterns inhabit a spherical, flat, or hyperbolic space!

3) How it applies to experience

How does this apply to experience? Well, in sober states of consciousness one is usually restricted to seeing and imagining spherical and flat surfaces (and their corresponding symmetric partitions). One can of course look at a piece of kale and think "wow, that's a hyperbolic surface", but what is impossible to do is to see it "as if it were flat". One can only see hyperbolic surfaces as projections (i.e. where we make regular shapes look irregular so that they can fit on a flat surface), or we end up contorting the surface in a crumbly fashion in order to fit it in our flat experiential space. (Note: even sober phenomenal space happens to be based on projective geometry; but let's not go there for now.)

4) DMT: Hyperbolizing Phenomenal Space

In psychedelic states it is common to experience whatever one looks at (or, with more stunning effects, whatever one hallucinates in a sensorially-deprived environment such as a flotation tank) as slowly becoming more and more symmetric. Symmetrical patterns are attractors in psychedelia. It's common for people to describe their acid experiences as "a kaleidoscope of colors and meaning". We should not be too quick to dismiss these descriptions as purely metaphorical. As you can see from the article Algorithmic Reduction of Psychedelic States, as well as PsychonautWiki's Symmetrical Texture Repetition, LSD and other psychedelics do in fact "symmetrify" the textures you experience!

What gravel might look like on 150 mics of LSD (Source)

As it turns out, this symmetrification process (what we call "lowering the symmetry detection/propagation threshold") does allow one to experience any of the possible ways of breaking down spherical and flat surfaces in regular ways (in addition to also enabling the experience of any wallpaper group!). Thus the surfaces of the objects one hallucinates on LSD (especially for closed-eye visuals) are usually carpeted with patterns that have either spherical or flat symmetries (e.g. seeing honeycombs, square grids, regular triangulations, etc.; or seeing dodecahedra, cubes, etc.).

17 wallpaper symmetry groups

Only on very high doses of classic psychedelics does one start to experience objects that have hyperbolic curvature. And this is where DMT becomes very relevant.
Vaping it is one of the most efficient ways of achieving "unworldly levels of psychedelia": On DMT the "symmetry detection threshold" is reduced to such an extent that any surface you look at very quickly gets super-saturated with regular patterns. Since (for reasons we don't understand) our brain tries to incorporate whatever shape you hallucinate into the scene as part of the scene, the result of seeing too many triangles (or heptagons, or whatever) is that your brain will "push them into the surfaces" and, in effect, turn those surfaces into hyperbolic spaces. Yet another part of your brain (or system of consciousness, whatever it turns out to be) recognizes that "wait, this is waaaay too curved somehow, let me try to shape it into something that could actually exist in my universe". Hence, in practice, if you take between 10 and 20 mg of DMT, the hyperbolic surfaces you see will become bent and contorted (similar to the pictures you find in the article) just so that they can be "embedded" (a term that roughly means "to fit some object into a space without distorting its properties too much") into your experience of the space around you.

But then there's a critical point at which this is no longer possible: even the most contorted embeddings of the hyperbolic surfaces you experience cannot fit any longer in your normal experience of space on doses above 20 mg, so your mind has no other choice but to change the curvature of the 3D space around you! Thus when you go from "very high on DMT" to "super high on DMT" it feels like you are traveling to an entirely new dimension, where the objects you experience do not fit any longer into the normal world of human experience. They exist in H3 (hyperbolic 3D space). And this is in part why it is so extremely difficult to convey the subjective quality of these experiences. One needs to invoke mathematical notions that are unfamiliar to most people; and even then, when they do understand the math, the raw feeling of changing the damn geometry of your experience is still a lot weirder than you could ever anticipate.

Anybody else want to play hyperbolic soccer? Humans vs. Entities, the match of the eon!

Note: The original article goes into more depth

Now that you understand the gist of the original article, I encourage you to take a closer look at it, as it includes content that I didn't touch in this ELI5 (or 12) summary. It provides a granular description of the 6 levels of DMT experience (Threshold, Chrysanthemum, Magic Eye, Waiting Room, Breakthrough, and Amnesia), many pictures to illustrate the various levels as well as the particular emergent geometries, and a theoretical discussion of the various algorithmic reductions that might explain how the hyperbolization of phenomenal space takes place based on combining a series of simpler effects together.

Principia Qualia: Part II – Valence

Extract from Principia Qualia (2016) by my colleague Michael E. Johnson (from Qualia Research Institute). This is intended to summarize the core ideas of chapter 2, which proposes a precise, testable, simple, and so far science-compatible theory of the fundamental nature of valence (also called hedonic tone or the pleasure-pain axis; what makes experiences feel good or bad).

VII. Three principles for a mathematical derivation of valence

We've covered a lot of ground with the above literature reviews, and in synthesizing a new framework for understanding consciousness research. But we haven't yet fulfilled the promise about valence made in Section II: to offer a rigorous, crisp, and relatively simple hypothesis about valence. This is the goal of Part II. Drawing from the framework in Section VI, I offer three principles to frame this problem:
But we haven’t yet fulfilled the promise about valence made in Section II- to offer a rigorous, crisp, and relatively simple hypothesis about valence. This is the goal of Part II. Drawing from the framework in Section VI, I offer three principles to frame this problem: ​ 1. Qualia Formalism: for any given conscious experience, there exists- in principle- a mathematical object isomorphic to its phenomenology. This is a formal way of saying that consciousness is in principle quantifiable- much as electromagnetism, or the square root of nine is quantifiable. I.e. IIT’s goal, to generate such a mathematical object, is a valid one. 2. Qualia Structuralism: this mathematical object has a rich set of formal structures. Based on the regularities & invariances in phenomenology, it seems safe to say that qualia has a non-trivial amount of structure. It likely exhibits connectedness (i.e., it’s a unified whole, not the union of multiple disjoint sets), and compactness, and so we can speak of qualia as having a topology. More speculatively, based on the following: (a) IIT’s output format is data in a vector space, (b) Modern physics models reality as a wave function within Hilbert Space, which has substantial structure, (c) Components of phenomenology such as color behave as vectors (Feynman 1965), and (d) Spatial awareness is explicitly geometric, …I propose that Qualia space also likely satisfies the requirements of being a metric space, and we can speak of qualia as having a geometry. Mathematical structures are important, since the more formal structures a mathematical object has, the more elegantly we can speak about patterns within it, and the closer our words can get to “carving reality at the joints”. ​ 3. Valence Realism: valence is a crisp phenomenon of conscious states upon which we can apply a measure. –> I.e. some experiences do feel holistically better than others, and (in principle) we can associate a value to this. Furthermore, to combine (2) and (3), this pleasantness could be encoded into the mathematical object isomorphic to the experience in an efficient way (we should look for a concise equation, not an infinitely-large lookup table for valence). […] I believe my three principles are all necessary for a satisfying solution to valence (and the first two are necessary for any satisfying solution to consciousness): Considering the inverses: If Qualia Formalism is false, then consciousness is not quantifiable, and there exists no formal knowledge about consciousness to discover. But if the history of science is any guide, we don’t live in a universe where phenomena are intrinsically unquantifiable- rather, we just haven’t been able to crisply quantify consciousness yet. If Qualia Structuralism is false and Qualia space has no meaningful structure to discover and generalize from, then most sorts of knowledge about qualia (such as which experiences feel better than others) will likely be forever beyond our empirical grasp. I.e., if Qualia space lacks structure, there will exist no elegant heuristics or principles for interpreting what a mathematical object isomorphic to a conscious experience means. But this doesn’t seem to match the story from affective neuroscience, nor from our everyday experience: we have plenty of evidence for patterns, regularities, and invariances in phenomenological experiences. Moreover, our informal, intuitive models for predicting our future qualia are generally very good. 
This implies our brains have figured out some simple rules-of-thumb for how qualia is structured, and so qualia does have substantial mathematical structure, even if our formal models lag behind.

If Valence Realism is false, then we really can’t say very much about ethics, normativity, or valence with any confidence, ever. But this seems to violate the revealed preferences of the vast majority of people: we sure behave as if some experiences are objectively superior to others, at arbitrarily-fine levels of distinction. It may be very difficult to put an objective valence on a given experience, but in practice we don’t behave as if this valence doesn’t exist.

VIII. Distinctions in qualia: charting the explanation space for valence

Sections II-III made the claim that we need a bottom-up quantitative theory like IIT in order to successfully reverse-engineer valence, Section VI suggested some core problems & issues theories like IIT will need to address, and Section VII proposed three principles for interpreting IIT-style output:

1. We should think of qualia as having a mathematical representation,
2. This mathematical representation has a topology and probably a geometry, and perhaps more structure, and
3. Valence is real; some things do feel better than others, and we should try to explain why in terms of qualia’s mathematical representation.

But what does this get us? Specifically, how does assuming these three things get us any closer to solving valence if we don’t have an actual, validated dataset (“data structure isomorphic to the phenomenology”) from *any* system, much less a real brain? It actually helps a surprising amount, since an isomorphism between a structured (e.g., topological, geometric) space and qualia implies that any clean or useful distinction we can make in one realm automatically applies in the other realm as well. And if we can explore what kinds of distinctions in qualia we can make, we can start to chart the explanation space for valence (what ‘kind’ of answer it will be). I propose the following four distinctions which depend on only a very small amount of mathematical structure inherent in qualia space, which should apply equally to qualia and to qualia’s mathematical representation:

1. Global vs local
2. Simple vs complex
3. Atomic vs composite
4. Intuitively important vs intuitively trivial

Takeaways: this section has suggested that we can get surprising mileage out of the hypothesis that there will exist a geometric data structure isomorphic to the phenomenology of a system, since if we can make a distinction in one domain (math or qualia), it will carry over into the other domain ‘for free’. Given this, I put forth the hypothesis that valence may plausibly be a simple, global, atomic, and intuitively important property of both qualia and its mathematical representation. Reverse-engineering the precise mathematical property that corresponds to valence may seem like finding a needle in a haystack, but I propose that it may be easier than it appears. Broadly speaking, I see six heuristics for zeroing in on valence:

A. Structural distinctions in Qualia space (Section VIII);
B. Empirical hints from affective neuroscience (Section I);
C. A priori hints from phenomenology;
D. Empirical hints from neurocomputational syntax;
E. The Non-adaptedness Principle;
F. Common patterns across physical formalisms (lessons from physics).

None of these heuristics determine the answer, but in aggregate they dramatically reduce the search space.
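To restate the three principles from Section VII compactly before going through the individual heuristics (the notation here is mine, not Johnson’s): write q(e) for the mathematical object isomorphic to an experience e, living in a qualia space Q. Qualia Formalism says that q(e) exists; Qualia Structuralism says that Q carries non-trivial structure, at minimum a topology and plausibly a metric d satisfying

d(x,y) \ge 0, \quad d(x,y) = 0 \iff x = y, \quad d(x,y) = d(y,x), \quad d(x,z) \le d(x,y) + d(y,z)

and Valence Realism says that there is a map V : Q \to \mathbb{R} assigning to each experience how good it holistically feels. In these terms, the hypothesis being built up is that V is a simple, global, atomic, intuitively important property of q(e) with a concise closed form, rather than an infinitely-large lookup table.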
IX.A: Structural distinctions in Qualia space (Section VIII):

In the previous section, we noted that the following distinctions about qualia can be made: Global vs local; Simple vs complex; Atomic vs composite; Intuitively important vs intuitively trivial. Valence plausibly corresponds to a global, simple, atomic, and intuitively important mathematical property.

[…]

Music is surprisingly pleasurable; auditory dissonance is surprisingly unpleasant. Clearly, music has many adaptive signaling & social bonding aspects (Storr 1992; McDermott and Hauser 2005), yet if we subtract everything that could be considered signaling or social bonding (e.g., lyrics, performative aspects, social bonding & enjoyment), we’re still left with something very emotionally powerful. However, this pleasantness can vanish abruptly, and even reverse, if dissonance is added. Much more could be said here, but a few of the more interesting data points are:

1. Pleasurable music tends to involve elegant structure when represented geometrically (Tymoczko 2006);
2. Non-human animals don’t seem to find human music pleasant (with some exceptions), but with knowledge of what pitch range and tempo their auditory systems are optimized to pay attention to, we’ve been able to adapt human music to get animals to prefer it over silence (Snowdon and Teie 2010).
3. Results suggest that consonance is a primary factor in which sounds are pleasant vs unpleasant in 2- and 4-month-old infants (Trainor, Tsang, and Cheung 2002).
4. Hearing two of our favorite songs at once doesn’t feel better than just one; instead, it feels significantly worse.

More generally, it feels like music is a particularly interesting case study by which to pick apart the information-theoretic aspects of valence, and it seems plausible that evolution may have piggybacked on some fundamental law of qualia to produce the human preference for music. This should be most obscured with genres of music which focus on lyrics, social proof & social cohesion (e.g., pop music), and performative aspects, and clearest with genres of music which avoid these things (e.g., certain genres of classical music).

X. A simple hypothesis about valence

To recap, the general heuristic from Section VIII was that valence may plausibly correspond to a simple, atomic, global, and intuitively important geometric property of a data structure isomorphic to phenomenology. The specific heuristics from Section IX surveyed hints from a priori phenomenology, hints from what we know of the brain’s computational syntax, introduced the Non-adaptedness Principle, and noted the unreasonable effectiveness of beautiful mathematics in physics to suggest that the specific geometric property corresponding to pleasure should be something that involves some sort of mathematically-interesting patterning, regularity, efficiency, elegance, and/or harmony. We don’t have enough information to formally deduce which mathematical property these constraints indicate, yet in aggregate these constraints hugely reduce the search space, and also substantially point toward the following: given a mathematical object isomorphic to a system’s phenomenology, the property which corresponds to how pleasant it is to be that system may be that object’s symmetry.

XI. Testing this hypothesis today

In a perfect world, we could plug many people’s real-world IIT-style datasets into a symmetry detection algorithm and see if this “Symmetry in the Topology of Phenomenology” (SiToP) theory of valence successfully predicted their self-reported valences. Unfortunately, we’re a long way from having the theory and data to do that.
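Before getting to those tests, note that the distinction between “consonant” and “dissonant” patterns that they rely on can be made quantitative quite cheaply. The following toy score (my own crude heuristic, written for illustration only; it is not the consonance measure of Chon 2008 cited below, nor anything from the original paper) rewards frequency pairs whose ratio is close to a simple integer ratio and penalizes detuning, so the “harmonic” TMS frequency set proposed below comes out ahead of its slightly detuned counterpart:

from fractions import Fraction
from itertools import combinations

def pair_consonance(f1, f2, max_den=16):
    # How well the frequency ratio is approximated by a small-integer ratio:
    # 0.5 for a perfect octave (2/1), less for more complex or detuned ratios.
    ratio = max(f1, f2) / min(f1, f2)
    approx = Fraction(ratio).limit_denominator(max_den)
    detuning = abs(ratio - float(approx))
    simplicity = 1.0 / (approx.numerator * approx.denominator)
    return simplicity / (1.0 + 100.0 * detuning)

def set_consonance(freqs):
    # Average pairwise consonance of a whole set of stimulation frequencies.
    pairs = list(combinations(freqs, 2))
    return sum(pair_consonance(a, b) for a, b in pairs) / len(pairs)

harmonic = [1, 2, 4, 6, 8, 12, 16, 24, 36, 48, 72, 96, 148]
detuned = [1.01, 2.01, 3.98, 6.02, 7.99, 12.03, 16.01, 24.02, 35.97, 48.05, 72.04, 95.94, 147.93]
print(set_consonance(harmonic), set_consonance(detuned))  # the harmonic set scores higher

Nothing below depends on this particular formula; it just makes “harmonic vs. detuned” concrete.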
But if we make two fairly modest assumptions, I think we should be able to perform some reasonable, simple, and elegant tests on this hypothesis now. The two assumptions are:

1. We can probably assume that symmetry/pleasure is a more-or-less fractal property: i.e., it’ll be evident on basically all locations and scales of our data structure, and so it should be obvious even with imperfect measurements. Likewise, symmetry in one part of the brain will imply symmetry elsewhere, so we may only need to measure it in a small section that need not be directly contributing to consciousness.

2. We can probably assume that symmetry in connectome-level brain networks/activity will roughly imply symmetry in the mathematical-object-isomorphic-to-phenomenology (the symmetry that ‘matters’ for valence), and vice-versa. I.e., we need not worry too much about the exact ‘flavor’ of symmetry we’re measuring.

So, given these assumptions, I see three ways to test our hypothesis:

1. More pleasurable brain states should be more compressible (all else being equal). Symmetry implies compressibility, and so if we can measure the compressibility of a brain state in some sort of broad-stroke fashion while controlling for degree of consciousness, this should be a fairly good proxy for how pleasant that brain state is.

2. Highly consonant/harmonious/symmetric patterns injected directly into the brain should feel dramatically better than similar but dissonant patterns. Consonance in audio signals generally produces positive valence; dissonance (e.g., nails-on-a-chalkboard) reliably produces negative valence. This obviously follows from our hypothesis, but it’s also obviously true, so we can’t use it as a novel prediction. But if we take the general idea and apply it to unusual ways of ‘injecting’ a signal into the brain, we should be able to make predictions that are (1) novel, and (2) practically useful. TMS is generally used to disrupt brain functions by oscillating a strong magnetic field over a specific region to make those neurons fire chaotically. But if we used it on a lower-powered, rhythmic setting to ‘inject’ a symmetric/consonant pattern directly into parts of the brain involved directly with consciousness, the result should produce good feeling, or at least, much better valence than a similar dissonant pattern. Our specific prediction: direct, low-power, rhythmic stimulation (via TMS) of the thalamus at harmonic frequencies (e.g., @1hz+2hz+4hz+6hz+8hz+12hz+16hz+24hz+36hz+48hz+72hz+96hz+148hz) should feel significantly more pleasant than similar stimulation at dissonant frequencies (e.g., @1.01hz+2.01hz+3.98hz+6.02hz+7.99hz+12.03hz+16.01hz+24.02hz+35.97hz+48.05hz+72.04hz+95.94hz+147.93hz).

3. More consonant vagus nerve stimulation (VNS) should feel better than dissonant VNS. The above harmonics-based TMS method would be a ‘pure’ test of the ‘Symmetry in the Topology of Phenomenology’ (SiToP) hypothesis. It may rely on developing custom hardware and is also well outside of my research budget. However, a promising alternative method to test this is with consumer-grade vagus nerve stimulation (VNS) technology. Nervana Systems has an in-ear device which stimulates the Vagus nerve with rhythmic electrical pulses as it winds its way past the left ear canal. The stimulation is synchronized with either user-supplied music or ambient sound. This synchronization is done, according to the company, in order to mask any discomfort associated with the electrical stimulation.
The company says their system works by “electronically signal[ing] the Vagus nerve which in turn stimulates the release of neurotransmitters in the brain that enhance mood.” This explanation isn’t very satisfying, since it merely punts the question of why these neurotransmitters enhance mood, but their approach seems to work, and based on the symmetry/harmony hypothesis we can say at least something about why: effectively, they’ve somewhat accidentally built a synchronized bimodal approach (coordinated combination of music+VNS) for inducing harmony/symmetry in the brain. This is certainly not the only component of how this VNS system functions, since the parasympathetic nervous system is both complex and powerful by itself, but it could be an important component. Based on our assumptions about what valence is, we can make a hierarchy of predictions:

1. Harmonious music + synchronized VNS should feel the best;
2. Harmonious music + placebo VNS (unsynchronized, simple pattern of stimulation) should feel less pleasant than (1);
3. Harmonious music + non-synchronized VNS (stimulation that is synchronized to a different kind of music) should feel less pleasant than (1);
4. Harmonious music + dissonant VNS (stimulation with a pattern which scores low on consonance measures such as (Chon 2008)) should feel worse than (2) and (3);
5. Dissonant auditory noise + non-synchronized, dissonant VNS should feel pretty awful.

We can also predict that if a bimodal approach for inducing harmony/symmetry in the brain is better than a single modality, a trimodal or quadrimodal approach may be even more effective. E.g., we should consider testing the addition of synchronized rhythmic tactile stimulation and symmetry-centric music visualizations. A key question here is whether adding stimulation modalities would lead to diminishing or synergistic/accelerating returns.

Qualia Computing

Attending the 2017 Psychedelic Science Conference

Why Care About Psychedelics?

List of Qualia Computing Psychedelic Articles
1) Psychophysics For Psychedelic Research: Textures
2) State-Space of Drug Effects
3) How to Secretly Communicate with People on LSD
6) Algorithmic Reduction of Psychedelic States
7) Peaceful Qualia: The Manhattan Project of Consciousness
8) Getting closer to digital LSD
9) Generalized Wada-Test
10) Psychedelic Perception of Visual Textures

Hard to summarize.

The Binding Problem

[Our] subjective conscious experience exhibits a unitary and integrated nature that seems fundamentally at odds with the fragmented architecture identified neurophysiologically, an issue which has come to be known as the binding problem. For the objects of perception appear to us not as an assembly of independent features, as might be suggested by a feature based representation, but as an integrated whole, with every component feature appearing in experience in the proper spatial relation to every other feature. This binding occurs across the visual modalities of color, motion, form, and stereoscopic depth, and a similar integration also occurs across the perceptual modalities of vision, hearing, and touch. The question is what kind of neurophysiological explanation could possibly offer a satisfactory account of the phenomenon of binding in perception? One solution is to propose explicit binding connections, i.e. neurons connected across visual or sensory modalities, whose state of activation encodes the fact that the areas that they connect are currently bound in subjective experience.
However this solution merely compounds the problem, for it represents two distinct entities as bound together by adding a third distinct entity. It is a declarative solution, i.e. the binding between elements is supposedly achieved by attaching a label to them that declares that those elements are now bound, instead of actually binding them in some meaningful way. Von der Malsburg proposes that perceptual binding between cortical neurons is signalled by way of synchronous spiking, the temporal correlation hypothesis (von der Malsburg & Schneider 1986). This concept has found considerable neurophysiological support (Eckhorn et al. 1988, Engel et al. 1990, 1991a, 1991b, Gray et al. 1989, 1990, 1992, Gray & Singer 1989, Stryker 1989). However although these findings are suggestive of some significant computational function in the brain, the temporal correlation hypothesis as proposed, is little different from the binding label solution, the only difference being that the label is defined by a new channel of communication, i.e. by way of synchrony. In information theoretic terms, this is no different than saying that connected neurons posses two separate channels of communication, one to transmit feature detection, and the other to transmit binding information. The fact that one of these channels uses a synchrony code instead of a rate code sheds no light on the essence of the binding problem. Furthermore, as Shadlen & Movshon (1999) observe, the temporal binding hypothesis is not a theory about how binding is computed, but only how binding is signaled, a solution that leaves the most difficult aspect of the problem unresolved. I propose that the only meaningful solution to the binding problem must involve a real binding, as implied by the metaphorical name. A glue that is supposed to bind two objects together would be most unsatisfactory if it merely labeled the objects as bound. The significant function of glue is to ensure that a force applied to one of the bound objects will automatically act on the other one also, to ensure that the bound objects move together through the world even when one, or both of them are being acted on by forces. In the context of visual perception, this suggests that the perceptual information represented in cortical maps must be coupled to each other with bi-directional functional connections in such a way that perceptual relations detected in one map due to one visual modality will have an immediate effect on the other maps that encode other visual modalities. The one-directional axonal transmission inherent in the concept of the neuron doctrine appears inconsistent with the immediate bi-directional relation required for perceptual binding. Even the feedback pathways between cortical areas are problematic for this function due to the time delay inherent in the concept of spike train integration across the chemical synapse, which would seem to limit the reciprocal coupling between cortical areas to those within a small number of synaptic connections. The time delays across the chemical synapse would seem to preclude the kind of integration apparent in the binding of perception and consciousness across all sensory modalities, which suggests that the entire cortex is functionally coupled to act as a single integrated unit. — Section 5 of “Harmonic Resonance Theory: An Alternative to the ‘Neuron Doctrine’ Paradigm of Neurocomputation to Address Gestalt properties of perception” by Steven Lehar To conduct the experiment you need: 3. 
A phenomenal puzzle (as described above).

Core Philosophy
1. Catalogue the entire state-space of consciousness

David Pearce on the “Schrodinger’s Neurons Conjecture”

Critically, molecular matter-wave interferometry can in principle independently be used to test the truth – or falsity – of this conjecture (see: https://www.physicalism.com/#6). In a word, decoherence. Too quick. Let’s step back. We shall see.

[Content Warnings: Psychedelic Depersonalization, Fear of the Multiverse, Personal Identity Doubts, Discussion about Quantum Consciousness, DMT entities, Science]

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.
– Emily Dickinson

Is it for real?

A sizable percentage of people who try a high dose of DMT end up convinced that the spaces they visit during the trip exist in some objective sense; they either suspect, intuit or conclude that their psychonautic experience reflects something more than simply the contents of their minds. Most scientists would argue that those experiences are just the result of exotic brain states; the worlds one travels to are bizarre (often useless) simulations made by our brain in a chaotic state. This latter explanation space forgoes alternate realities for the sake of simplicity, whereas the former envisions psychedelics as a multiverse portal technology of some sort. Some exotic states, such as DMT breakthrough experiences, do typically create feelings of glimpsing foundational information about the depth and structure of the universe. Entity contact is frequent, and these seemingly autonomous DMT entities are often reported to have the ability to communicate with you. Achieving a verifiable contact with entities from another dimension would revolutionize our conception of the universe. Nothing would be quite as revolutionary, really. But how to do so? One could test the external reality of these entities by asking them to provide information that cannot be obtained unless they themselves held an objective existence. In this spirit, some have proposed to ask these entities complex mathematical questions that would be impossible for a human to solve within the time provided by the trip. This particular test is really cool, but it has the flaw that DMT experiences may themselves trigger computationally-useful synesthesia of the sort that Daniel Tammet experiences. Thus even if DMT entities appeared to solve extraordinary mathematical problems, it would still stand to reason that it is oneself who did it and that one is merely projecting the results into the entities. The mathematical ability would be the result of being lucky in the kind of synesthesia DMT triggered in you. A common overarching description of the effects of psychedelics is that they “raise the frequency of one’s consciousness.” Now, this is a description we should take seriously whether or not we believe that psychedelics are inter-dimensional portals. After all, promising models of psychedelic action involve fast-paced control interruption, where each psychedelic would have its characteristic control interrupt frequency. And within a quantum paradigm, Stuart Hameroff has argued that psychedelic compounds work by bringing up the quantum resonance frequency of the water inside our neurons’ microtubules (perhaps going from megahertz to gigahertz), which he claims increases the non-locality of our consciousness.
In the context of psychedelics as inter-dimensional portals, this increase in the main frequency of one’s consciousness may be the key that allows us to interact with other realities. Users describe a sort of tuning of one’s consciousness, as if the interface between one’s self and the universe underwent some sudden re-adjustment in an upward direction. In the same vein, psychedelicists (e.g. Rick Strassman) frequently describe the brain as a two-way radio, and then go on to claim that psychedelics expand the range of channels we can be attuned to. One could postulate that the interface between oneself and the universe that psychonauts describe has a real existence of its own. It would provide the bridge between us as (quantum) monads and the universe around us; and the particular structure of this interface would determine the selection pressures responsible for the part of the multiverse that we interact with. By modifying the spectral properties of this interface (e.g. by drastically raising the main frequency of its vibration) with, e.g. DMT, one effectively “relocates” (cf. alien travel) to other areas of reality. Assuming this interface exists and that it works by tuning into particular realities, what sorts of questions can we ask about its properties? What experiments could we conduct to verify its existence? And what applications might it have? The Psychedelic State of Input Superposition Once in a while I learn about a psychedelic effect that captures my attention precisely because it points to simple experiments that could distinguish between the two rough explanation spaces discussed above (i.e. “it’s all in your head” vs. “real inter-dimensional travel”). This article will discuss a very odd phenomenon whose interpretations do indeed have different empirical predictions. We are talking about the experience of sensing what appears to be a superposition of inputs from multiple adjacent realities. We will call this effect the Psychedelic State of Input Superposition (PSIS for short). There is no known way to induce PSIS on purpose. Unlike the reliable DMT hyper-dimensional journeys to distant dimensions, PSIS is a rare closer-to-home effect and it manifests only on high doses of LSD (and maybe other psychedelics). Rather than feeling like one is tuning into another dimension in the higher frequency spectrum, it feels as if one just accidentally altered (perhaps even broke) the interface between the self and the universe in a way that multiplies the number of realities you are interacting with. After the event, the interface seems to tune into multiple similar universes at once; one sees multiple possibilities unfold simultaneously. After a while, one somehow “collapses” into only one of these realities, and while coming down, one is thankful to have settled somewhere specific rather than remaining in that weird in-between. Let’s take a look at a couple of trip reports that feature this effect: [Trip report of taking a high dose of LSD on an airplane]: So I had what you call “sonder”, a moment of clarity where I realized that I wasn’t the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they’re the main character in their own movies. That’s when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. 
And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt. For every hooker I fucked, I would live the life of a fucked hooker. And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”. — Reddit user I_DID_LSD_ON_A_PLANE, in r/BitcoinMarkets (why there? who knows). Further down on the same thread, written by someone else: [A couple hours after taking two strong hits of LSD]: Fast-forward to when I’m peaking hours later and I find myself removed from the timeline I’m in and am watching alternate timelines branch off every time someone does something specific. I see all of these parallel universes being created in real time, people’s actions or interactions marking a split where both realities exist. Dozens of timelines, at least, all happening at once. It was fucking wild to witness. Then I realize that I don’t remember which timeline I originally came out of and I start to worry a bit. I start focusing, trying to remember where I stepped out of my particular universe, but I couldn’t figure it out. So, with the knowledge that I was probably wrong, I just picked one to go back into and stuck with it. It’s not like I would know what changed anyway, and I wasn’t going to just hang out here in the whatever-this-place-is outside of all of them. Today I still sometimes feel like I left a life behind and jumped into a new timeline. I like it, I feel like I left a lot of baggage behind and there are a lot of regrets and insecurities I had before that trip that I don’t have anymore. It was in a different life, a different reality, so in this case the answer I found was that it’s okay to start over when you’re not happy with where you are in life. — GatorAutomator Let us summarize: Person X takes a lot of LSD. At some point during the trip (usually after feeling that “this trip is way too intense for me now”) X starts experiencing sensory input from what appear to be different branches of the multiverse. For example, imagine that person X can see a friend Y sitting on a couch in the corner. Suppose that Y is indecisive, and that as a result he makes different choices in different branches of the multiverse. If Y is deciding whether to stand up or not, X will suddenly see a shadowy figure of Y standing up while another shadowy figure of Y remains sitting. Let’s call them Y-sitting and Y-standing. 
If Y-standing then turns indecisive about whether to drink some water or go to the bathroom, X may see one shadowy figure of Y-standing getting water and a shadowy figure of Y-standing walking towards the bathroom, all the while Y-sitting is still on the couch. And so it goes. The number of times per second that Y splits and the duration of the perceived superposition of these splits may be a function of X’s state of consciousness, the substance and dose consumed, and the degree of indecision present in Y’s mind. The two quotes provided are examples of this effect, and one can find a number of additional reports online with stark similarities. There are two issues at hand here. First, what is going on? And second, can we test it? We will discuss three hypotheses to explain what goes on during PSIS, propose an experiment to test the third one (the Quantum Hypothesis), and provide the results of such an experiment. Hard-nosed scientists may want to skip to the “Experiment” section, since the following contains a fair amount of speculation (you have been warned).

Three Hypotheses for PSIS: Cognitive, Spiritual, Quantum

In order to arrive at an accurate model of the world, one needs to take into account both the prior probability of each hypothesis and the likelihood it assigns to the available evidence. Even if one of your priors is extremely strong (e.g. a strong belief in materialism), it is still rational to update one’s probability estimates of alternative hypotheses when new relevant evidence is provided. The difficulty often comes from finding experiments where the various hypotheses generate very different likelihoods for one’s observations. As we will see, the quantum hypothesis has this characteristic: it is the only one that would actually predict a positive result for the experiment.

The Cognitive Hypothesis

The first (and perhaps least surreal) hypothesis is that PSIS is “only in one’s mind”. When person X sees person Y both standing up and staying put, what may be happening is that X is receiving photons only from Y-standing and that Y-sitting is just a hallucination that X’s inner simulation of her environment failed to erase. Psychedelics intensify one’s experience, and this is thought to be the result of control interruption. This means that inhibition of mental content by cortical feedback is attenuated. In the psychedelic state, sensory impressions, automatic reactions, feelings, thoughts and all other mental contents are more intense and longer-lived. This includes the predictions that you make about how your environment will evolve. Not only is one’s sensory input perceived as more intense, one’s imagined hypotheticals are also perceived more intensely. Under normal circumstances, cortical inhibition makes our failed predictions quickly disappear. Psychedelic states of consciousness may be poor at inhibiting these predictions. In this account, X may be experiencing her brain’s past predictions of what Y could have done overlaid on top of the current input that she is receiving from her physical environment. In a sense, she may be experiencing all of the possible “next steps” that she simply intuited. While these simulations typically remain below the threshold of awareness (or just above it), in a psychedelic state they may reinforce themselves in unpredictable ways. X’s mind never traveled anywhere and there is nothing really weird going on.
X is just experiencing the aftermath of a specific failure of information processing concerning the inhibition of past predictions. Alternatively, very intense emotions such as those experienced on intense ego-killing psychedelic experiences may distort one’s perception so much that one begins to suspect that one is perhaps dead or in another dimension. We can posit that the belief that one is not properly connected to one’s brain (or that one is dying) can trigger even stronger emotions and unleash a cascade of further distortions. This positive feedback loop may create episodes of intense confusion and overlapping pieces of information, which later might be interpreted as “seeing splitting universes”. The Spiritual Hypothesis Many spiritual traditions postulate the existence of alternate dimensions, additional layers of reality, and hidden spirit pathways that connect all of reality. These traditions often provide rough maps of these realities and may claim that some people are able to travel to such far-out regions with mental training and consciousness technologies. For illustration, let’s consider Buddhist cosmology, which describes 31 planes of existence. Interestingly, one of the core ideas of this cosmology is that the major characteristic that distinguishes the planes of existence is the states of consciousness typical of their inhabitants. These states of consciousness are correlated with moral conditions such as the ethical quality of their past deeds (karma), their relationship with desire (e.g. whether it is compulsive, sustainable or indifferent) and their existential beliefs. In turn, a feature of this cosmology is that it allows inter-dimensional travel by changing one’s state of consciousness. The part of the universe one interacts with is a function of one’s karma, affinities and beliefs. So by changing these variables with meditation (or psychedelic medicine) one can also change which world we exist in. An example of a very interesting location worth trying to travel to is the mythical city of Shambhala, the location of the Kalachakra Tantra. This city has allegedly turned into a pure land thanks to the fact that its king converted to Buddhism after meeting the Buddha. Pure lands are abodes populated by enlightened and quasi-enlightened beings whose purpose is to provide an optimal teaching environment for Buddhism. One can go to Shambhala by either reincarnating there (with good karma and the help of some pointers and directions at the time of death) or by traveling there directly during meditation. In order to do the latter, one needs to kindle one’s subtle energies so that they converge on one’s heart, while one is embracing the Bodhisattva ethic (focusing on reducing others’ suffering as a moral imperative). Shambhala may not be in a physical location accessible to humans. Rather, Buddhist accounts would seem to depict it as a collective reality built by people which manifests on another plane of existence (specifically somewhere between the 23rd and 27th layer). In order to create a place like that one needs to bring together many individuals in a state of consciousness that exhibits bliss, enlightenment and benevolence. A pure land has no reality of its own; its existence is the result of the states of consciousness of its inhabitants. Thus, the very reason why Shambhala can even exist as a place somewhere outside of us is because it is already a potential place that exists within us. 
Similar accounts of a wider cosmological reality can be found elsewhere (such as Hinduism, Zoroastrianism, Theosophy, etc.). These accounts may be consistent with the sort of experiences having to do with astral travel and entity contact that people have while on DMT and other psychedelics in high doses. However, it seems a lot harder to explain PSIS with an ontology of this sort. While reality is indeed portrayed as immensely vaster than what science has shown so far, we do not really encounter claims of parallel realities that are identical to ours except that your friend decided to go to the bathroom rather than drink some water just now. In other words, while many spiritual ontologies are capable of accommodating DMT hyper-dimensional travel, I am not aware of any spiritual worldview that also claims that whenever two things can happen, they both do in alternate realities (or, more specifically, that this leads to reality splitting). The only spiritual-sounding interpretation of PSIS I can think of is the idea that these experiences are the result of high-level entities such as guardians, angels or trickster djinns who used your LSD state to teach you a lesson in an unconventional way. The first quote (the one written by Reddit user I_DID_LSD_ON_A_PLANE) seems to point in this direction, where the so-called Karma God is apparently inducing a PSIS experience and using it to illustrate the idea that we are all one (i.e. Open Individualism). Furthermore, the experience viscerally portrays the way that this knowledge should impact our feelings of self-importance (by creating a profound feeling of sonder). This way, the tripper may develop a lasting need to work towards peace, wisdom and enlightenment for the benefit of all sentient beings. Life as a learning experience is a common trope among spiritual worldviews. It is likely that the spiritual interpretations that emerge in a state of psychedelic depersonalization and derealization will depend on one’s pre-existing ideas of what is possible. The atonement of one’s sins, becoming aware of one’s karma, feeling our past lives, realizing emptiness, hearing a dire mystical warning, etc. are all ideas that already exist in human culture. In an attempt to make sense- any sense- of the kind of qualia experienced in high doses of psychedelics, our minds may be forced to instantiate grandiose delusions drawn from one’s reservoir of far-out ideas. On a super intense psychedelic experience in which one’s self-models fail dramatically and one experiences fear of ego dissolution, interpreting what is happening as the result of the Karma God judging you and then giving you another chance at life can viscerally seem to make a lot of sense at the time. The Quantum Hypothesis For the sake of transparency I must say that we currently do not have a derivation of PSIS from first principles. In other words, we have not yet found a way to use the postulates of quantum mechanics to account for PSIS (that is, assuming that the cognitive and spiritual hypothesis are not the case). That said, there are indeed some things to be said here: While a theory is missing, we can at least talk about what a quantum mechanical account of PSIS would have to look like. I.e. we can at least make sense of some of the features that the theory would need to have to predict that people on LSD would be able to see the superposition of macroscopic branches of the multiverse. Why would being on acid allow you to receive input from macroscopic environments that have already decohered? 
How could taking LSD possibly prevent the so-called collapse of the wavefunction? You might think: “well, why even think about it? It’s simply impossible because the collapse of the wavefunction is an axiom of quantum mechanics and we know it is true because some of the predictions made by quantum mechanics (such as QED) are in agreement with experimental data up to the 12th decimal point.” Before jumping to this conclusion, though, let us remember that there are several formulations of quantum mechanics. Both the Born rule (which determines the probability of seeing different outcomes from a given quantum measurement) and the collapse of the wavefunction (i.e. that any quantum state other than the one that was measured disappears) are indeed axiomatic for some formulations. But other formulations actually derive these features and don’t consider them fundamental. Here is Sean Carroll explaining the usual postulates that are used to teach quantum mechanics to undergraduate audiences:

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.
3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

In contrast, here is what you need to specify for the Everett (Multiple Worlds) formulation of quantum mechanics:

1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.

And that’s it. As you can see this formulation does not employ any collapse of the wavefunction, and neither does it consider the Born rule as a fundamental law. Instead, the wavefunction is thought to merely seem to collapse upon measurement (which is achieved by nearly diagonalizing its components along the basis of the measurement; strictly speaking, neighboring branches never truly stop interacting, but the relevance of their interaction approaches zero very quickly). Here the Born rule is derived from first principles rather than conceived as an axiom. How exactly one can derive the Born rule is a matter of controversy, however. Currently, two very promising theoretical approaches to do so are Quantum Darwinism and the so-called Epistemic Separability Principle (ESP for short, a technical physics term not to be confused with Extra Sensory Perception). Although these approaches to deriving the Born rule are considered serious contenders for a final explanation (and they are not mutually exclusive), they have been criticized for being somewhat circular. The physics community is far from having a consensus on whether these approaches truly succeed. Is there any alternative to either axiomatizing or deriving the apparent collapse and the Born rule? Yes, there is an alternative: we can think of them as regularities contingent upon certain conditions that are always (or almost always) met in our sphere of experience, but that are not a universal fact about quantum mechanics.
Macroscopic decoherence and Born rule probability assignments work very well in our everyday lives, but they may not hold universally. In particular -and this is a natural idea to have under any view that links consciousness and quantum mechanics- one could postulate that one’s state of consciousness influences the mind-body interaction in such a way that information from one’s quantum environment seeps into one’s mind in a different way. Don’t get me wrong; I am aware that the Born rule has been experimentally verified with extreme precision. I only ask that you bear in mind that many scientific breakthroughs share a simple form: they question the constancy of certain physical properties. For example, Einstein’s theory of special relativity worked out the implications of the fact that the speed of light is observer-independent. In turn this makes the passage of time of external systems observer-dependent. Scientists had a hard time believing Einstein when he arrived at the conclusion that accelerating our frame of reference to extremely high velocities could dilate time. What was thought to be a constant (the passage of time throughout the universe) turned out to be an artifact of the fact that we rarely travel fast enough to notice any deviation from Newton’s laws of motion. In other words, our previous understanding was flawed because it assumed that certain observations did not break down in extreme conditions. Likewise, maybe we have been accidentally ignoring a whole set of physically relevant extreme conditions: altered states of consciousness. The apparent wavefunction collapse and the Born rule may be perfectly constant in our everyday frame of reference, and yet variable across the state-space of possible conscious experiences. If this were the case, we’d finally understand why it seems so hard to derive the Born rule from first principles: it’s impossible. Succinctly, the Quantum Hypothesis is that psychedelic experiences modify the way one’s mind interacts with its quantum environment in such a way that the world does not appear to decohere any longer from one’s point of view. Our ignorance about the non-universality of the apparent collapse of the wavefunction is just a side effect of the fact that physicists do not usually perform experiments during intense life-changing entheogenic mind journeys. But for science, today we will. Deriving PSIS with Quantum Mechanics Here we present a rough (incomplete) sketch of what a possible derivation of PSIS from quantum mechanics might look like. To do so we need three background assumptions: First, conscious experiences must be macroscopic quantum coherent objects (i.e. ontologically unitary subsets of the universal wavefunction, akin to super-fluid helium or Bose–Einstein condensates, except at room temperature). Second, people’s decision-making process must somehow amplify low-level quantum randomness into macroscopic history bifurcations. And third, the properties of our quantum environment* are in part the result of the quantum state of our mind, which psychedelics can help modify. This third assumption brings into play the idea that if our mind is more coherent (e.g. is in a super-symmetrical state) it will select for wavefunctions in its environment that themselves are more coherent. In turn, the apparent lifespan of superpositions may be elongated long enough so that the quantum environment of one’s mind receives records from both Y-sitting and Y-standing as they are overlapping. Now, how credible are these three assumptions? 
That events of experience are macroscopic quantum coherent objects is an explanation space usually perceived as pseudo-scientific, though a sizable number of extremely bright scientists and philosophers do entertain the idea very seriously. Contrary to popular belief, there are legitimate reasons to connect quantum computing and consciousness. The reasons for making this connection include the possibility of explaining the causal efficacy of consciousness, finding an answer to the palette problem with quantum fields and solving the phenomenal binding problem with quantum coherence and panpsychism. The second assumption claims that people around you work as quantum Random Number Generators. That human decision-making amplifies low-level quantum randomness is thought to be likely by at least some scientists, though the time-scale on which this happens is still up for debate. The brain’s decision-making is chaotic, and over the span of seconds it may amplify quantum fluctuations into macroscopic differences. Thus, people around you making decisions may result in splitting universes (e.g. “[I] am watching alternate timelines branch off every time someone does something specific.” – GatorAutomator’s quote above). Presumably, this assumption would also imply that during PSIS not only people but also physics experiments would lead to apparent macroscopic superposition. With regards to the third assumption: widespread microscopic decoherence is not, apparently, a necessary consequence of the postulates of quantum mechanics. Rather, it is a very specific outcome of (a) our universe’s Hamiltonian and (b) the starting conditions of our universe, i.e. Pre-Inflation/Eternal Inflation/Big Bang. (A Ney & D Albert, 2013). In principle, psychedelics may influence the part of the Hamiltonian that matters for the evolution of our mind’s wavefunction and its local interactions. In turn, this may modify the decoherence patterns of our consciousness with its local environment and- perhaps- ultimately the surrounding macroscopic world. Of course we do not know if this is possible, and I would have to agree that it is extremely far-fetched. The overall picture that would emerge from these three assumptions would take the following form: both the mental content and raw phenomenal character of our states of consciousness are the result of the quantum micro-structure of our brains. By modifying this micro-structure, one is not only altering the selection pressures that give rise to fully formed experiences (i.e. quantum darwinism applied to the compositionality of quantum fields) but also altering the selection pressures that determine which parts of the universal wave-function we are entangled with (i.e. quantum darwinism applied to the interactions between coherent objects). Thus psychedelics may not only influence how our experience is shaped within, but also how it interacts with the quantum environment that surrounds it. Some mild psychedelic states (e.g. MDMA) may influence mostly the inner degrees of freedom of one’s mind, while other more intense states (e.g. DMT) may be the result of severe changes to the entanglement selection pressures and thus result in the apparent disconnection between one’s mind and one’s local environment. Here PSIS would be the result of decreasing the rate at which our mind decoheres (possibly by increasing the degree to which our mind is in a state of quantum confinement). 
In turn, by boosting one’s own inner degree of quantum superposition one may also broaden the degree of superposition acceptable at the interface with one’s quantum environment. One could now readily take in packets of information that have a wider degree of superposition. In the right circumstances, this may result in one’s mind experiencing information seemingly coming from alternate branches of the multiverse. In other words, the trick to PSIS in both the Quantum and the Spiritual Hypotheses is the same (though for different reasons): travel to other dimensions by being the change that you wish to see in the world. You need to increase your own degree of quantum coherence so that you become capable of interacting with a more coherent quantum environment. If this were the case it would call for a conceptual revolution. We would stop thinking of “our universe” as the “place we inhabit” and instead think of it as “the quantum environment our consciousness can interface with”. Perhaps life as we know it exists in a quantum niche that can give rise to embodied biological beings, but the universal wavefunction also contains alien state-spaces in which the patterns of entanglement work in different ways and lead to different evolutionary histories. Our minds evolved to interact with the particular environment that leads to the sustenance of our everyday macroscopic world. Our consciousness is shaped to fit in here, and it has a symbiotic relationship with the body that hosts it. Now, how do we go about testing the quantum hypothesis? Following previous work on psychophysics for psychedelic research, the main idea will be to show that people on psychedelics can achieve something that sober individuals can’t. In this case, the task is designed to be impossible if one is only receiving input from one world (as predicted by the cognitive hypothesis). Only if PSIS is the result of simultaneously experiencing input from the several nearby branches that correspond to different outcomes of quantum events can the task be successfully performed.

The Experiment

The task in question involves three little white squares on a black background. At each time step (~16 milliseconds) the squares move one pixel down, and some number of pixels horizontally. While the downward movement is constant, the change in the horizontal direction is chosen randomly. The horizontal movement of one of the squares is being randomized on the fly with a Quantum Random Number Generator (qRNG**). We call this the quantum square. The horizontal movement of the other two squares follows a path that was determined at the beginning of each trial by a Classical Random Number Generator (cRNG***). We will refer to these two squares as the classical squares. The task for the participant is to identify which of the squares is the “odd one out” (i.e. the quantum square). Here is an infographic that explains the experimental setup:

In order to make the horizontal movement seem very chaotic and, at times, even dramatic, it is randomized by choosing a horizontal change of either -4, -1, 1 or 4 pixels. By using a jump of 4 pixels (rather than e.g. 2) the squares sometimes seem to advance very quickly in one direction. The rationale for doing this was to make the predicted effect (assuming the Quantum Hypothesis) more noticeable – we want low probability paths that are extremely different from the typical path and in turn make it easy to distinguish between single paths and the superposition of many of the possible paths at once.
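The actual implementation lives in the Github repository and runs in a browser; the Python sketch below only mirrors the trial logic as described above, with names of my own choosing, and with the quantum source stubbed out by a pseudorandom call where a real run would fetch numbers from a qRNG:

import random

STEP_MS = 16                # ~16 ms per frame, as described above
H_MOVES = [-4, -1, 1, 4]    # possible horizontal jumps, in pixels

def classical_path(n_steps, rng):
    # Classical squares: the whole horizontal path is fixed at trial start (cRNG).
    return [rng.choice(H_MOVES) for _ in range(n_steps)]

def quantum_move(qbits):
    # Quantum square: each step is decided on the fly from two fresh quantum bits.
    return H_MOVES[qbits & 0b11]

def run_trial(n_steps=300, seed=0):
    rng = random.Random(seed)                       # classical RNG
    paths = [classical_path(n_steps, rng) for _ in range(2)]
    x = {"classical_0": 0, "classical_1": 0, "quantum": 0}
    for t in range(n_steps):
        x["classical_0"] += paths[0][t]
        x["classical_1"] += paths[1][t]
        qbits = rng.getrandbits(2)                  # placeholder: a real run fetches quantum bits over the network
        x["quantum"] += quantum_move(qbits)
        # every square also drops one pixel per step; vertical position is omitted here
    return x

print(run_trial())

To a classical observer (or to a video recording) the three resulting paths are statistically indistinguishable; the whole point of the experiment is to ask whether a tripping observer can nevertheless tell which one is the quantum square.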
Critically, the starting position of the quantum square (whether left, center or right) for each of the trials during an experiment is obtained with a cRNG at the very beginning of the experiment. Why not use a qRNG? If we randomized the position of the quantum square using a qRNG we might be sabotaging ourselves: participants might see a superposition of classical and quantum squares on each of the three positions and thus not see any square behave differently from the other two. Thus, for the experiment to work (under the assumption that PSIS is explained by the Quantum Hypothesis) the only aspect that should be randomized using a qRNG is the horizontal displacement of the quantum square. Since the numbers coming from the qRNG need to be requested over the internet, it is also crucial that we try to obtain fresh quantum random numbers as frequently as possible without messing up the flow of the experiment. Thus, a request for new quantum random numbers is made as often as possible (usually every 1 to 2 seconds). Since internet connections, server response time and network reliability vary from moment to moment, it is inevitable that once in a while we will have to stop the downward movement of all three squares as we wait for the next batch of random numbers. In principle this should have no negative effect at all on the ability to choose the odd-one-out. This problem merely interrupts the flow of the experiment, but it should not invalidate the results. We kept track of the number of trials that were interrupted for this reason (which you will find recorded in the “Network” column of the table of results down below).

Participants were instructed to:

1. Clone the Github code for the experiment, and open it up in a Chrome browser.
2. Make sure that they have a reliable high-speed internet connection.
3. Record the pertinent facts (time of experiment, substance, dose and time of intake).
4. Use a sensorially-minimized room for the experiment (such as a closed bathroom), turn off the lights, ask anyone who is in the house to be quiet and mind their own business, and wear earplugs.
5. Meditate for a couple of minutes before conducting the experiment and try to induce PSIS voluntarily.
6. Choose the odd-one-out (i.e. the quantum square) on each trial, for a total of at least 30 trials.

Under this setup, what should the participants expect to see if the Quantum Hypothesis is correct?

Predicted Psychedelic Perception

The pictures above illustrate what the experiment looks like to three different observers. On the left we see what a sober participant would see at a given time during a trial. The squares are moving downwards but since we are taking a snapshot in time they seem to be static. The image in the center shows what we would expect out of a generic psychedelic experience. In brief, the typical control interrupt action of psychedelics (i.e. tracers) effectively allows you to see more clearly the path that the squares have traced over time, but it would not allow you to notice any difference between the classical and quantum squares. The image on the right shows what we would expect to see if the Quantum Hypothesis is correct and PSIS is actually about perceiving nearby branches of the Everett multiverse. Notice how the center square is visibly different from the other two: it consists of the superposition of many alternative paths the square took in slightly different branches.
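To make the right-hand prediction concrete, here is another small illustrative sketch (again mine, not part of the experiment’s code): it generates one realized path for the quantum square together with the bundle of alternative paths it would have taken in nearby branches; the predicted “psychedelic view” is essentially this widening envelope of superposed paths, as opposed to the single path a sober observer (or the video recording) would contain:

import random

H_MOVES = [-4, -1, 1, 4]

def one_path(n_steps, rng):
    x, xs = 0, [0]
    for _ in range(n_steps):
        x += rng.choice(H_MOVES)
        xs.append(x)
    return xs

# One realized path: what actually gets recorded.
realized = one_path(120, random.Random(42))

# Predicted view under the Quantum Hypothesis: the overlay of many nearby branches.
branches = [one_path(120, random.Random(seed)) for seed in range(50)]
envelope = [(min(p[t] for p in branches), max(p[t] for p in branches)) for t in range(121)]
print(realized[-1], envelope[-1])  # a single endpoint vs. the spread of superposed endpoints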
Implications of a Positive Result: Quantum Mind, Everett Rescue Missions and Psychedelic Cryptography

It is worth noting that if one can indeed reliably distinguish between the quantum and the classical squares, this would have far-reaching implications. It would indeed confirm that our minds are macroscopic quantum coherent objects and that psychedelics influence their pattern of interactions with their surrounding quantum environment. It would also provide strong evidence in favor of the Everett interpretation of quantum mechanics (in which all possibilities are realized). Moreover, we would not only have a new perspective on the fundamental nature of the universe and the mind, but the discovery would just as well suggest some concrete applications. Looking far ahead, a positive outcome would encourage research on possible ways to achieve inter-dimensional travel, and in turn on instantiating pan-Everettian rescue missions to reduce suffering elsewhere in the multiverse. The despair of confirming that the quantum multiverse is real might be evened out by the hope of finally being able to help sentient beings trapped in Darwinian environments in other branches of the universal wavefunction. Looking much closer to home, a positive result would lead to a breakthrough in psychedelic cryptography (PsyCrypto for short), where spies high on LSD would obtain the ability to read information that is secretly encoded in public light displays. Moreover, this particular kind of PsyCrypto would be impervious to discovery after the fact. Even if given an arbitrary amount of time and resources to analyze a video recording of the event, it would not be possible to determine which of the squares was being guided by quantum randomness. Unlike other PsyCrypto techniques, this one cannot be decoded by applying psychedelic replication software to video recordings of the transmission.

Three persons participated in the experiments: S (self), A, and B. [A and B are anonymous volunteers; for more information read the legal disclaimer at the end of this article]. Participant S (me) tried the experiment both sober and after drinking 2 beers. Participant A tried the experiment sober, on LSD, on 2C-B, and on a combination of the two. And participant B tried the experiment both sober and on DMT. The total number of trials recorded for each of the conditions is: 90 for the sober state, 275 for 2C-B, 60 for DMT, 120 for LSD and 130 for the LSD/2C-B combo. The overall summary of the results is: chance-level performance outcomes for all conditions. You can find the breakdown of results for all experiments in the table shown below, and you can download the raw csv file from the Github repository.

Columns from left to right: Date; State (of consciousness); Dose(s); T (time); #Trials (number of trials); Correct (number of trials in which the participant made the correct choice); Percent correct (100*Correct/#Trials); Participants (S=Self, A/B=anonymous volunteers); Requests / Second (server requests per second); Network (the number of times that a trial was temporarily paused while the browser was waiting for the next batch of quantum random numbers); Notes (by default the squares left a dim trail behind them and this was removed in two trials; by default the squares were 10×10 pixels in size, but a smaller size was used in some trials).

I thought about visualizing the results in a cool graph at first, but after I received them I realized that it would be pointless.
Not a single experiment reached a statistically significant deviation from chance level.**** Who is interested in seeing a bunch of bars representing chance-level outcomes? Null results are always boring to visualize.

In addition to the overall performance in the task, I also wanted to hear a qualitative assessment from the participants: did they notice any difference between the three squares? Was there any feeling that one of them was behaving differently from the other two? This is what they responded when I asked them: "I could never see any difference between the squares, so it felt like I was making random choices" (from A) and "DMT made the screen look like a hyper-dimensional tunnel and I felt like strange entities were watching over me as I was doing the experiment, and even though the color of the squares would fluctuate randomly, I never noticed a single square behaving differently than the other two. All three seemed unique. I did feel that the squares were being controlled by some entity, as if with an agency of their own, but I figured that was made up by my mind." (from B). When I asked them whether they noticed anything similar to the image labeled "Psychedelic view as predicted by the Quantum Hypothesis" (shown above), they both said "no". It is noteworthy that neither participant reported an experience of PSIS during the experiments.

Even without an explicit and noticeable experience of superposition, PSIS may turn out to be a continuum rather than a discrete either-or phenomenon. If so, we might still expect to see some deviation from chance. This would be analogous to how in blindsight people report not being able to see anything and yet perform better than chance in visual recognition tasks. That said, the effect sizes of blindsight and of other psychological effects in which information is processed unbeknownst to the participant tend to be very small. Thus, in order to confirm that quantum PSIS is happening below the threshold of awareness we may require a much larger number of samples (though still far fewer than what we would need if we were aiming to use the experiment to conduct psi research, with or without psychedelics, again due to the extremely small effect sizes involved).

Why did the experiment fail?

The first possibility is that the Quantum Hypothesis is simply wrong (perhaps because it requires false assumptions to work). Second, perhaps we were simply unlucky and PSIS was not triggered during the experiments: the set, setting, and dosages used may have failed to produce the desired effect (even if the state does indeed exist out there). And third, the experiment itself may be flawed: the roughly second-long delay between the qRNG measurements and their effect on the screen may be too large to produce the effect. In the current implementation (and taking network delays into account), the average delay between the moment the quantum measurement was conducted and the moment it appeared on the computer screen as horizontal movement was 0.9 seconds (usually in the range of 0.4 to 1.4 seconds, given an average lag of 0.5 seconds due to number buffering plus about 400 milliseconds of network time). This problem would be easily sidestepped by using an on-site qRNG, i.e. hardware directly connected to the computer (as is common in psi research). To minimize the delay even further, the outcomes of the quantum measurements could be delivered directly to your brain via neuroimplants.
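As an aside, to put a rough number on the "much larger number of samples" that a sub-threshold effect would require, here is a back-of-envelope power calculation. It is only a sketch: the assumed sub-threshold hit rate of 38% (versus the chance rate of 1/3) is purely illustrative, and it uses the standard normal approximation for a one-tailed test at α = 0.05 with 80% power.

```javascript
// Back-of-envelope sample size for detecting a small deviation from chance.
// Assumptions (illustrative, not measured): true hit rate p1 = 0.38 vs chance p0 = 1/3,
// one-tailed alpha = 0.05 (z = 1.645), power = 0.80 (z = 0.84), normal approximation.
function requiredTrials(p0, p1, zAlpha = 1.645, zBeta = 0.84) {
  const num = zAlpha * Math.sqrt(p0 * (1 - p0)) + zBeta * Math.sqrt(p1 * (1 - p1));
  return Math.ceil(Math.pow(num / (p1 - p0), 2));
}

console.log(requiredTrials(1 / 3, 0.38)); // on the order of several hundred trials
```

Under these illustrative assumptions one would need several hundred trials per condition, i.e. considerably more than the 30-trial blocks used here, which is the sense in which a sub-threshold effect would demand a much larger study.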
If psychedelic experiences do make you interact with other realities, I would like to know about it with a high degree of certainty. The present study was admittedly a very long shot, but in my judgment it was totally worth it.

As Bayesians, we reasoned that since the Quantum Hypothesis can lead to a positive result in this experiment while the Cognitive Hypothesis cannot, a positive result should make us update our credence in the Quantum Hypothesis upward a great deal, and a negative result should make us update in the opposite direction. That said, the probability should not go all the way to zero, since a negative result could still be accounted for by participants failing to experience PSIS, and/or by the delay between the quantum measurement and the moment it influences the movement of the square on the screen being too large. Future studies should try to minimize these two possible sources of failure: first, by researching methods to reliably induce PSIS, and second, by minimizing the delay between branching and sensory input.

In the meantime, we can at least tentatively conclude that something along the lines of the Cognitive Hypothesis is the most likely explanation. In this light, PSIS turns out to be the result of a failure to inhibit predictions. Despite losing their status as suspected inter-dimensional portal technology, psychedelics remain a crucial tool for qualia research. They can help us map out the state-space of possible experiences, allow us to identify the computational properties of consciousness, and maybe even allow us to reverse engineer the fundamental nature of valence.

[Legal Disclaimer]: Both participants A and B contacted me some time ago, soon after the Qualia Computing article How to Secretly Communicate with People on LSD made it to the front page of Hacker News and was linked by SlateStarCodex. They are both experienced users of psychedelics who take them about once a month. They expressed their interest in performing the psychophysics experiments I had designed, and in doing so while under the influence of psychedelic drugs. I do not know these individuals personally (nor do I know their real names, locations or even their genders). I have never encouraged these individuals to take psychedelic substances, and I never gave them any compensation for their participation in the experiment. They told me that they take psychedelics regularly no matter what, and that my experiments would not be the primary reason for taking them. I never asked them to take any particular substance, either; they just said "I will take substance X on day Y, can I have some experiment for that?" I have no way of knowing (1) whether the substances they claim to have taken are actually what they think they are, (2) whether the dosages were accurately measured, and (3) whether the data they provided is accurate and has not been manipulated. That said, they did explain that they have tested their materials with chemical reagents, and that they are experienced enough to tell the difference between similar substances. Since there is no way to verify these claims without compromising their anonymity, please take the data with a grain of salt.

* In this case, the immediate environment would actually refer to the quantum degrees of freedom surrounding our consciousness within our brain, not the macroscopic exterior vicinity such as the chair we are sitting on or the friends we are hanging out with. In this picture, our interaction with that vicinity is mediated by many layers of indirection.
** The experiment used the Australian National University Quantum Random Numbers Server. By calling their API every 1 to 2 seconds we obtain truly random numbers that feed the x-displacement of the quantum square. This is an inexpensive and readily available way to magnify decoherence events into macroscopic splitting histories in the comfort of your own home.

*** In this case, Javascript's Math.random() function. Unfortunately, the RNG algorithm varies from browser to browser. It may be worthwhile to go for a browser-independent implementation in the future in order to guarantee a uniform, high-quality source of classical randomness.

**** As calculated with a one-tailed binomial test with null probability equal to 1/3. The threshold of statistical significance at the p < 0.05 level is 15/30 correct responses, and for p < 0.001 we need at least 19/30. The best score that any participant managed to obtain was 14/30.
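For reference, the one-tailed p-values behind the last footnote can be recomputed with a few lines of code. This is a sketch of an exact binomial tail computation (not taken from the experiment's repository); the function and variable names are illustrative.

```javascript
// One-tailed binomial test against the null probability of guessing (1/3).
// Returns P(X >= k) for X ~ Binomial(n, p): the probability of doing at least
// as well as the observed score by pure chance.
function binomialTailP(k, n, p = 1 / 3) {
  // Log of the binomial coefficient, to avoid overflow for larger n.
  const logChoose = (m, j) => {
    let s = 0;
    for (let i = 1; i <= j; i++) s += Math.log(m - j + i) - Math.log(i);
    return s;
  };
  let tail = 0;
  for (let x = k; x <= n; x++) {
    tail += Math.exp(logChoose(n, x) + x * Math.log(p) + (n - x) * Math.log(1 - p));
  }
  return tail;
}

console.log(binomialTailP(14, 30)); // best observed score; per the footnote, not significant at p < 0.05
console.log(binomialTailP(15, 30)); // the quoted p < 0.05 threshold
```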