08a6703ad9593a03
What is Quantum Entanglement? Part 1: Waves and particles

If you follow science, or science fiction, to any degree, great or small, you’ve probably heard the term “quantum entanglement” before. You may also have heard it referred to as “spooky action at a distance,” and understand that it somehow involves a weird connection between separated quantum particles that can “communicate,” in a sense, over long distances instantaneously. You may have read that quantum entanglement is a key aspect in proposed technologies that could transform society, namely quantum cryptography and quantum computing.

But it is difficult for a non-physicist to learn more about quantum entanglement than this, because even understanding it in a non-technical sense requires a reasonably thorough knowledge of how quantum mechanics works. In writing my recently-published textbook on Singular Optics, however, I had to write a summary of the relevant physics for a chapter on the quantum aspects of optical vortices. I realized that, with some modification, this summary could serve as an outline for a series of non-technical blog posts on the subject; so here we are!

It will take a bit of work to really get at the heart of the problem; in this first post, I attempt to outline the early history of quantum physics, which will be necessary to understand what quantum entanglement is, why it is important, and why it has caused so much mischief for nearly 100 years!

Small disclaimer: though I am a physicist, I am not an expert on the weirder aspects of quantum physics, which have many pitfalls in understanding for the unwary! There is the possibility that I may flub some of the subtle parts of the explanation. This post is, in fact, an exercise for me to test my understanding and ability to explain things. I will revise anything that I find is horribly wrong.

Near the end of the 19th century, there was a somewhat broad perception that the science of physics was complete; that is, there were no more important discoveries to be made. This is encapsulated perfectly in an 1894 statement by Albert Michelson: “… it seems probable that most of the grand underlying principles have been firmly established … An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.”

By 1900, the universe seemed to be well-described as a duality. Matter consisted of discrete particles (atoms), whose motion could be described by Newton’s laws of motion and law of gravitation, and light consisted of waves, whose evolution could be described by Maxwell’s equations for electromagnetism. In short: matter was made of particles, light was made of waves, and that covered everything that we observed. We will, in shorthand, call this “classical physics” going forward.

But there were still a number of mysteries that were perplexing and unsolved at the time. One mystery was the nature of atoms: atoms clearly had some sort of structure, because they absorbed and emitted light at isolated frequencies (colors), but what was that structure? There was much speculation in the early years of the 20th century related to this.

Fraunhofer’s 1814 drawing of the spectrum of sunlight. The dark lines in the lower color image aren’t mistakes; they’re discrete colors of light that are absorbed by atoms at the sun’s surface.

Another unsolved mystery was the origin of the phenomenon known as the photoelectric effect.
In short: when light shines onto a metal surface under the right conditions, it can kick off electrons, as illustrated crudely below. However, the photoelectric effect didn’t seem to work as classical physics predicted it would. The energy of electrons being kicked off of the metal didn’t increase with the brightness of the light beam, as one would expect from the classical theory; it increased with the frequency of the light. If the light was below a certain frequency, no electrons at all would be kicked off. The brightness of the light beam only increased the number of electrons ejected.

The puzzle was solved by none other than Albert Einstein. In a 1905 paper, he argued that the photoelectric effect could be explained if light not only behaved as a wave but also as a stream of particles, later dubbed photons, each of which has an energy proportional to frequency. Higher frequency photons therefore transfer more energy to the ejected electrons. Also, a brighter light beam has more photons in it, resulting in more electrons getting ejected.

This was the first illustration of the concept of wave-particle duality: the idea that light has a dual nature as a wave and a stream of particles. Depending on the circumstances, sometimes the wave properties are dominant, sometimes the particle properties are; sometimes, both must be taken into account.

Einstein’s argument was a profound one, and answered other questions that had been troubling physicists for a number of years. For instance, the shape of the upper curve in Fraunhofer’s spectrum above, which shows the relative brightness of the different colors of sunlight, is known as a blackbody spectrum. It can be shown that the shape of the curve arises from the particle nature of light. Einstein won the 1921 Nobel Prize in Physics for his work on the photoelectric effect, which provided clear evidence that there was still more to understand about the fundamentals of physics.

So light, which was long thought to only be a wave, turns out to also be a particle! One might naturally wonder if the reverse is true: might matter, long thought to consist of particles, also have wave properties? This was the idea that occurred to French physicist and PhD candidate Louis de Broglie in the early 1920s. De Broglie put forth this hypothesis in his 1924 PhD dissertation, and though his work was considered radical at the time, the wave nature of electrons was demonstrated in 1927 in what is now known as the Davisson-Germer experiment.

The idea that electrons have wave properties resolved other physics mysteries. Remember the question about the structure of the atom? The first major piece of the puzzle to be found was the experimental discovery of the atomic nucleus in 1911 by Ernest Rutherford and his colleagues. It naturally followed that electrons must orbit the atomic nucleus, much like planets orbit the sun, but this still did not explain why atoms would only absorb and emit light at distinct frequencies.

In 1913, Danish physicist Niels Bohr solved the problem by introducing new physics. In the Bohr model of the atom, electrons are only allowed to orbit the nucleus with discrete values of orbital angular momentum, and can only release or absorb light by “jumping” between these discrete orbits. The orbits are labeled by an integer index n, as illustrated below.
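Stated compactly, for those who like to see the rule in symbols (the notation here is mine, not from the post’s figures): Bohr’s condition restricts the electron’s orbital angular momentum to

L = mvr = nħ,   n = 1, 2, 3, …,

where ħ = h/2π is the reduced Planck constant. As the next paragraphs explain, de Broglie’s matter waves account for this rule: fitting a whole number of wavelengths λ = h/p around an orbit of radius r requires nλ = 2πr, which is exactly pr = nħ.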
Bohr’s model reproduced exactly the emission and absorption spectrum of hydrogen, and was viewed as a major step in understanding atomic structure.

The Bohr model. A photon is emitted when an electron “jumps” from one orbit to another.

But why would electrons only orbit with those discrete values of angular momentum? This was a question that the physics of the time could not answer, and was in essence an unexplained assumption in Bohr’s model. It so happened that de Broglie’s hypothesis, that electrons have wave properties, provided the explanation! De Broglie realized that, if the electron acted like a wave, then those waves could only “fit” around the nucleus when an integer number of wavelengths fit in an orbit. A rough illustration of this is below.

Visualization of de Broglie waves around an atom. Each more distant electron orbit has one extra “hump” in the electron wave.

Louis de Broglie was actually inspired by a very mundane example: a vibrating string! As he recalled in his Nobel lecture,

“On the other hand the determination of the stable motions of the electrons in the atom involves whole numbers, and so far the only phenomena in which whole numbers were involved in physics were those of interference and of eigenvibrations. That suggested the idea to me that electrons themselves could not be represented as simple corpuscles either, but that a periodicity had also to be assigned to them too.”

This is an experiment you can try at home with a string or a phone cord! Though you can shake a string at any frequency you want, there are only certain special isolated frequencies that will feel natural, known as resonance frequencies.

First few resonance modes of a string with fixed ends. Each mode has one more “hump” than the previous one.

So, by 1924, physicists were aware that both matter and light possess a dual nature as waves and particles. However, the situation between matter and light was not entirely equal. Since James Clerk Maxwell’s work in the 1860s, physicists had a set of equations, known as Maxwell’s equations, that could be used to describe how a light wave evolves in space and time. But nobody had yet derived an equation or set of equations that could describe how the wave associated with matter evolves.

This was a challenge undertaken by Austrian physicist Erwin Schrödinger in 1925, soon after de Broglie had suggested matter has wave properties. Within a year, using very clever arguments and intuition, Schrödinger derived an equation that accurately modeled the wave properties of electrons, now known as the Schrödinger equation. With the Schrödinger equation, it became possible to quantitatively model the behavior of electrons in an atom and accurately predict how they would absorb and emit light.

So, by 1927, the new quantum theory of light and matter was in reasonably good shape: both light and matter were known to have a dual wave-particle nature, and the particle nature of light and the wave nature of matter had each been experimentally confirmed. This is summarized in the table below, for convenience.

            Particle nature               Wave nature
Light:      photoelectric effect (1905)   interference (Young, 1803)
Matter:     classical mechanics           matter waves (1927), but what is “waving”?

But there was one big puzzle on the matter side, as illustrated in the lower right corner: what, exactly, is “waving”? In other words, what does the wave part of a matter wave represent? In water waves, it is the water “waving” up and down, and conveying energy through this motion. In sound waves, it is the molecules of the air “waving” forward and back.
In light waves, it is the electric and magnetic fields that are “waving.” But there was no obvious way to interpret what was doing the waving in a matter wave.

It turns out that the initial answer to the question, which would be formulated right around the years 1926-1927, would lead to some very strange philosophical implications of the new quantum theory. This will be discussed in part 2 of this series of posts!

19 Responses to What is Quantum Entanglement? Part 1: Waves and particles

1. Richard Cunliff says: I need part 2
2. Doug Cook says: I am going to share this with my 10 year old Grandson… He will get it. Been looking for “Quantum Physics for Kids” or QP for Dummies. Doug Cook
3. Jay Gillette says: Cliff hanger!
4. jaime garavito says: Good job. I hope second part will come soon.
5. This post is awesome in every single aspect. Can’t wait for part 2!
6. Cy Coote says: Brilliant! So accessible, thank you.
7. Rob Giunta says: Good so far. When is the next post?
8. Rob Giunta says: When is the next post?
9. Raman Kohli says: Great post! looking forward to Part 2 🙂
10. Walter says: Nice. A partical on Tuesday and a wave on Thursday.
11. Cam Hough says: Awesome post! In the fourth figure (de Broglie waves), you say higher modes correspond to larger distances from the atom. What is preventing higher modes from existing on every orbital level? For example, couldn’t you “fit” three, four, five, etc. humps on the n=2 circle?
    • The answer, I believe, is a mix of classical and quantum. A larger number of humps = a shorter wavelength = more momentum. A higher momentum particle in the same orbit, however, will not be in a stable orbit.
12. Christine says: And we breathe it.
13. S.danny says: Very good explanations.
14. NoOne says: Reblogged this on Transcendence and commented: For the QM freaks, here’s something awesome…
15. Brandon says: It is extremely difficult for three dimensional beings that see two dimensionally to understand fourth, fifth, etc. dimensional cubes or “masses.” This is a good first step for people who are interested in exercising their perception of our Universe. I remind people, constantly; not that long ago (considering the age of our planet) it was a “fact” that the Earth was flat and if you sailed in one direction… you would fall off the edge of the Earth. We now know that the Earth is not flat, but how long will it take for the majority to understand that it is not only three dimensional, but also… simultaneously… fourth, fifth, etc. dimensional?
16. Jitendra Dhanky says: I salute you with thanks! You have shared your rare gift to put across the interconnectedness of the complex in simple, clear, coherent terms that a layperson can understand and say “Now I can see” and “Now I understand.”
17. Dr. Nabarun Moitra says: You have inadvertently forgotten the Father of it all — Max Plank. He deserves at least a footnote!
b3e80d5ae23c7fa7
Spatiotemporal Aspects of Nonlinear Wave Propagation in Multimode Fiber

Nonlinear wave propagation in optical fiber is a rich and fascinating subject. From a purely scientific point of view, fibers provide convenient and reproducible experimental settings for a broad range of nonlinear dynamical processes, and qualitatively new phenomena are still being discovered. The major advances of the past 20 years are dominated by phenomena that occur in single-mode fibers (SMF). However, in multimode waveguides, light encounters an environment with dimensionality that exceeds 1D but is effectively below the 3D of free space. Wave propagation depends on the optical guiding characteristics and the excitation conditions. The complexity of the problem increases once nonlinear effects come into play: all the eigenmodes (which may number in the thousands) tend to strongly affect each other through nonlinear processes. Rich and complex dynamics are possible in nonlinear multimode environments.

For access to our massively-parallel numerical solver for the general multimode nonlinear Schrödinger equation:
Github link
Paper link

For any questions regarding our code, please contact Zimu Zhu. Thanks!

Fiber Lasers that Generate Ultrashort Light Pulses

Short-pulse optical techniques have had major scientific and technological impact. Some of the fastest processes in nature can be observed directly with ultrashort (picosecond or femtosecond) optical pulses, and researchers now perform measurements with attosecond time resolution. In parallel, efforts are underway to apply ultrafast lasers in areas with broader societal impact, such as manufacturing and health care.

Ultrafast science is dominated by solid-state lasers, which are outstanding laboratory tools. Inexpensive, robust instruments will enable applications in a much broader range of settings. Fiber lasers offer major practical advantages owing to their waveguide nature, efficient power handling, and low cost. These advantages are thoroughly exploited in high-average-power continuous-wave lasers. However, generation of pulses with high peak power remains a serious challenge owing to uncontrolled nonlinear effects, which in turn arise from strong confinement in the waveguide medium.

We figure out new ways for light to propagate inside optical fibers, allowing stable pulses to form in spite of (or indeed, because of) strong nonlinear effects. Our work spans the spectrum from analytic theory to experimental laser construction: we start by finding solutions to the nonlinear partial differential equations that govern pulse propagation, and then we try to build something in the lab that will support the desired solutions. Currently, much of our motivation comes from the needs of scientists who are trying to do sub-cellular imaging deep in biological tissue. By studying and harnessing nonlinear optical effects, we can design lasers that emit shorter pulses, higher powers, new colors, and more.

Video Resources:

“Spacetime instability of light in multimode optical fiber”
Trying real hard to be accessible, we describe our recent work trying to understand the behavior of intense pulses of light in multimode optical fibers, motivated by high power fiber lasers and telecommunications. The work highlighted here is described in the paper in Nature Photonics.

“Spatiotemporal Dynamics of Optical Pulse Propagation in Multimode Fibers”
In this OSA webinar (originally streamed on 6/22/2016), Frank discusses several areas of our ongoing work in multimode optical fiber.
The slides can be downloaded here.

“Interdimensional Nonlinear Optics in Multimode Fibers”
As part of Cornell’s Summer Graduate Student STEM Colloquium (7/18/2016), Logan Wright gives an informal talk on nonlinear optics in multimode fibers, aimed at the educated non-specialist:

You should love optical fibers. They are an integral part of the internet-age global infrastructure. In industry and medicine, lasers based on optical fiber are a rapidly growing market, providing unprecedented cost:performance and reliability. All these technologies, based on single-mode fiber, are nearing their fundamental limits, and panic over the anticipated ‘capacity crunch’ is building. In other words, we’re screwed. Fortunately, something called ‘multimode fiber’ has the potential for unparalleled unscrewing. In this talk, I will explain what multimode fibers are, and review recent work studying how short pulses of light behave inside them. I will explain what I mean by ‘interdimensional’ and show that nonlinear optical pulses called ‘multimode solitons’ can provide a basis for understanding complex 4-D nonlinear wave behavior in MMFs. In some limits, their dynamics can be understood using a classical “fat duck on a trampoline” model. Finally, I will discuss prospects for multimode fibers in several important applications, including things like high-speed internet, neural networks, and ridiculously (!!!) high-power lasers.
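For readers who want a concrete feel for the equations this work revolves around, the sketch below integrates the simplest single-mode limit of the nonlinear Schrödinger equation with the standard split-step Fourier method. It is a toy illustration only, not the massively-parallel multimode solver linked above, and every parameter value in it is invented for the example.

```python
import numpy as np

# Minimal split-step Fourier integrator for the scalar, single-mode nonlinear
# Schrodinger equation  dA/dz = -i(beta2/2) d^2A/dt^2 + i*gamma*|A|^2*A.
# Toy single-mode sketch; all numbers are illustrative, not from the papers.

def ssfm(A, dt, dz, nz, beta2, gamma):
    """Propagate the envelope A(t) over a distance nz*dz (symmetrized scheme)."""
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)        # angular frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)       # half-step of dispersion
    for _ in range(nz):
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinearity, full step
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
    return A

# A fundamental soliton (gamma * P0 * T0**2 = |beta2|) propagates unchanged.
t = np.linspace(-20, 20, 2**12)
A0 = 1.0 / np.cosh(t)                                   # P0 = T0 = 1
Az = ssfm(A0.astype(complex), dt=t[1] - t[0], dz=1e-3, nz=2000,
          beta2=-1.0, gamma=1.0)
```

With the parameters chosen so that the soliton condition holds, the sech-shaped input should exit essentially unchanged; detuning them shows dispersive or nonlinear reshaping. The full multimode problem couples many such equations through cross-mode nonlinear terms, which is where the rich spatiotemporal dynamics come from.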
d086a53a6d863d88
Quantum mechanics

From Wikipedia, the free encyclopedia

Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[2] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light.

These studies were followed by Michael Faraday's 1838 discovery of cathode rays, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[3] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or "energy elements") precisely matched the observed patterns of black-body radiation.

In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies, and underestimated the radiance at low frequencies. Later, Max Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.

Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert A. Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[4] This phase is known as the old quantum theory.

According to Planck, each energy element E is proportional to its frequency ν:

E = hν,

where h is Planck's constant.

Planck is considered the father of the quantum theory.

Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[5] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizeable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.

The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory".
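As a quick worked example of the Planck relation (the numbers here are chosen for illustration; they are not from the article): green light has a frequency of roughly ν ≈ 6×10¹⁴ Hz, so each of its photons carries an energy

E = hν ≈ (6.626×10⁻³⁴ J·s) × (6×10¹⁴ s⁻¹) ≈ 4.0×10⁻¹⁹ J ≈ 2.5 eV,

comparable to the spacing between atomic energy levels, which is why visible light can drive atomic transitions and the photoelectric effect.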
Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.

The other exemplar that led to quantum mechanics was the study of electromagnetic waves, such as visible and ultraviolet light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or "quanta", Albert Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon) with a discrete quantum of energy that was dependent on its frequency.[6] Einstein was able to use the photon theory of light to explain the photoelectric effect, for which he won the 1921 Nobel Prize in Physics. This led to a theory of unity between subatomic particles and electromagnetic waves, in which particles and waves are neither simply particles nor waves but have certain properties of each. This originated the concept of wave-particle duality.

While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and large organic molecules.[7]

The word quantum derives from the Latin, meaning "how great" or "how much".[8] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest. The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[9] Some fundamental aspects of the theory are still actively studied.[10]

Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were described solely by classical mechanics, electrons would not stably "orbit" the nucleus: orbiting electrons emit radiation (due to their circular motion) and would eventually collide with the nucleus through this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, "smeared", probabilistic, wave-particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[11]

Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.

Mathematical formulations

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[12] David Hilbert,[13] John von Neumann,[14] and Hermann Weyl,[15] the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors").
Formally, these reside in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system) that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes.

Each observable is represented by a Hermitian (more precisely, self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.

In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities for where the particle might be found and for the momentum it might have. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates).

Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wavefunction collapse, a controversial and much-debated process[22] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wavefunction collapsing into each of the possible eigenstates.

For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[18] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[23]

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time.[24] During a measurement, on the other hand, the change of the initial wavefunction into another, later wavefunction is not deterministic; it is unpredictable (i.e., random).
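To make the eigenstate and collapse language concrete, here is a small Python sketch (my own toy example, not part of the article) that applies the Born rule to a spin-1/2 observable represented by a Hermitian matrix:

```python
import numpy as np

# Toy example of the Born rule (illustrative sketch, not from the article).
# Observable: the Pauli-Z spin operator, a Hermitian 2x2 matrix.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary normalized state vector in C^2.
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)

# Eigen-decomposition of the Hermitian observable.
eigvals, eigvecs = np.linalg.eigh(sigma_z)   # columns of eigvecs are eigenstates

# Born rule: P(a_i) = |<e_i|psi>|^2.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
for a, p in zip(eigvals, probs):
    print(f"outcome {a:+.0f} observed with probability {p:.2f}")

# "Collapse": sample an outcome, then replace the state by that eigenvector.
i = np.random.choice(len(eigvals), p=probs)
psi_after = eigvecs[:, i]
```

Running it prints the two possible outcomes, -1 and +1, with probabilities 0.64 and 0.36 for this particular state; the final lines mimic the collapse onto the eigenvector belonging to the sampled outcome.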
A time-evolution simulation can be seen here.[25][26]

Mathematically equivalent formulations of quantum mechanics

There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics: matrix mechanics (invented by Werner Heisenberg)[29] and wave mechanics (invented by Erwin Schrödinger).[30]

Interactions with other scientific theories

The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space, and that observables of that system are Hermitian operators acting on that space, although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers. That is, whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, in the high-energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.

Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[35] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[36] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[37] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum coherence is an essential difference between classical and quantum theories, as illustrated by the Einstein-Podolsky-Rosen (EPR) paradox, an attempt to disprove quantum mechanics by an appeal to local realism.[38] Quantum interference involves adding together probability amplitudes, whereas classical "waves" involve adding together intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[39] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero), at which quantum behavior may manifest itself macroscopically.[40]

Relativity and quantum mechanics

Einstein himself is well known for rejecting some of the claims of quantum mechanics.
While clearly contributing to the field, he did not accept many of the more "philosophical consequences and interpretations" of quantum mechanics, such as the lack of deterministic causality. He is famously quoted as saying, in response to this aspect, "God does not play dice". He also had difficulty with the assertion that a single subatomic particle can occupy numerous areas of space at one time. However, he was also the first to notice some of the apparently exotic consequences of entanglement, and used them to formulate the Einstein-Podolsky-Rosen paradox in the hope of showing that quantum mechanics had unacceptable implications if taken as a complete description of physical reality. This was 1935, but in 1964 it was shown by John Bell (see Bell inequality) that, although Einstein was correct in identifying seemingly paradoxical implications of quantum mechanical nonlocality, these implications could be experimentally tested. Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement.

According to the paper of J. Bell and the Copenhagen interpretation (the common interpretation of quantum mechanics by physicists since 1927), and contrary to Einstein's ideas, quantum mechanics cannot be at the same time a "realistic" theory and a "local" theory. The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner, although the two particles can be an arbitrary distance apart. However, this effect does not violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is used in high-security commercial applications in banking and government.

Attempts at a unified field theory

The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory,[45] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10¹⁴ GeV the three aforementioned forces are fused into a single unified field.[46] Beyond this "grand unification," it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10¹⁹ GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.

One of the leading authorities continuing the search for a coherent theory of everything (TOE) is Edward Witten, a theoretical physicist who formulated the groundbreaking M-theory, which is an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is actually an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are, at lower energies, completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.

Another popular theory is loop quantum gravity (LQG), a theory that describes the quantum properties of gravity.
It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space where space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of the photons in the quantum theory of electromagnetism, or the discrete levels of the energy of the atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10⁻³⁵ m. According to the theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but also space itself, has an atomic structure. Loop quantum gravity was first proposed by Carlo Rovelli.

Philosophical implications

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[47] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[48]

The Copenhagen interpretation, due largely to the Danish theoretical physicist Niels Bohr, remains the quantum mechanical formalism that is currently most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations.

Quantum mechanics has had enormous[52] success in explaining many of the features of our world. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).

Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and the magnitudes of the energies involved.[53] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.
Quantum tunneling is vital to the operation of many devices, even the simple light switch, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells.

While quantum mechanics primarily applies to the atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale; superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[54] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this basic fundamental process of the plant kingdom.[55] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers.

Free particle

For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wavefunction that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wavefunction, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position (or, stated in mathematical terms, a generalized position eigenstate, or eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[56] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[57]

3D confined electron wave functions for each eigenstate in a quantum dot. Here, rectangular and triangular-shaped quantum dots are shown. Energy states in rectangular dots are more "s-type" and "p-type". However, in a triangular dot, the wave functions are mixed due to confinement symmetry.

Step potential

The potential in this case is given by:

V(x) = \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases}
The stationary solutions on either side of the step are superpositions of left- and right-moving plane waves:

\psi_1(x) = \frac{1}{\sqrt{k_1}} \left(A_\rightarrow e^{i k_1 x} + A_\leftarrow e^{-i k_1 x}\right), \quad x < 0,

\psi_2(x) = \frac{1}{\sqrt{k_2}} \left(B_\rightarrow e^{i k_2 x} + B_\leftarrow e^{-i k_2 x}\right), \quad x > 0,

where the wave vectors are related to the energy via

k_1 = \sqrt{2mE/\hbar^2} \quad \text{and} \quad k_2 = \sqrt{2m(E - V_0)/\hbar^2}.

Rectangular potential barrier

Particle in a box

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written[58]

-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} = E\psi.

With the differential operator defined by

\hat{p}_x = -i\hbar\frac{d}{dx},

the previous equation is evocative of the classic kinetic energy analogue,

\frac{1}{2m} \hat{p}_x^2 = E,

with state \psi in this case having energy E coincident with the kinetic energy of the particle. The general solutions are

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},

or, from Euler's formula,

\psi(x) = C \sin(kx) + D \cos(kx).

The infinite potential walls require \psi(0) = \psi(L) = 0. The condition at x = 0 gives D = 0. At x = L, \psi(L) = C \sin(kL) = 0 requires kL to be a multiple of \pi, so k = n\pi/L with n a positive integer, and the energies are quantized:

E_n = \frac{n^2 \pi^2 \hbar^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.

Finite potential well

The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, as the wavefunction is not pinned to zero at the walls of the well. Instead, the wavefunction must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well.

Harmonic oscillator

This problem can be treated either by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

\psi_n(x) = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{-\frac{m\omega x^2}{2\hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots,

where H_n are the Hermite polynomials,

H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left(e^{-x^2}\right),

and the corresponding energy levels are

E_n = \hbar\omega\left(n + \frac{1}{2}\right).

This is another example illustrating the quantization of energy for bound states.

References

1. van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society 64, Part 2: 95-99.
4. http://www.ias.ac.in/resonance/December2010/p1056-1059.pdf
7. "Quantum interference of large organic molecules". Nature.com. Retrieved April 20, 2013.
9. http://mooni.fccj.org/~ethall/quantum/quant.htm
10. Compare the list of conferences presented here.
11. Oocities.com at the Wayback Machine (archived October 26, 2009).
13. D. Hilbert, Lectures on Quantum Theory, 1915-1927.
17. "Heisenberg - Quantum Mechanics, 1925-1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.
19. "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.
20. Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.
21. Dict.cc
22. "Topics: Wave-Function Collapse". Phy.olemiss.edu. 2012-07-27. Retrieved 2012-08-18.
23. "Collapse of the wave-function". Farside.ph.utexas.edu. Retrieved 2012-08-18.
24. "Determinism and Naive Realism : philosophy". Reddit.com. 2009-06-01. Retrieved 2012-08-18.
25. Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15.
26. Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.
28. "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15.
29. "Quantum Physics: Werner Heisenberg Uncertainty Principle of Quantum Mechanics. Werner Heisenberg Biography". Spaceandmotion.com. 1976-02-01. Retrieved 2012-08-18.
30. http://th-www.if.uj.edu.pl/acta/vol19/pdf/v19p0683.pdf
32. http://ocw.usu.edu/physics/classical-mechanics/pdf_lectures/06.pdf
34. Carl M. Bender, Daniel W. Hook, Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th].
37. "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.
38. A. Einstein, B. Podolsky, and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47, 777 (1935).
39. "Between classical and quantum" (PDF). Retrieved 2012-08-19.
40. (See macroscopic quantum phenomena, Bose-Einstein condensate, and quantum machine.)
41. "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.
42. http://assets.cambridge.org/97805218/29526/excerpt/9780521829526_excerpt.pdf
44. Stephen Hawking, "Gödel and the end of physics".
45. "Life on the lattice: The most accurate theory we have". Latticeqcd.blogspot.com. 2005-06-03. Retrieved 2010-10-15.
49. "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.
50. "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.
51. John Cramer, "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics 58, 647-688, July (1986).
52. See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13).
53. Linus Pauling and E. Bright Wilson, Introduction to Quantum Mechanics with Applications to Chemistry. Books.google.com. 1985-03-01. ISBN 9780486648712. Retrieved 2012-08-18.
55. "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.
57. Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. Books.google.com. ISBN 9789812708991. Retrieved 2012-08-18.
58. Derivation of particle in a box, chemistry.tidalswan.com.
7e3aa32d816055fd
The Full Wiki

Quantum mechanics

Quantum mechanics (QM) is a set of principles describing the physical reality at the atomic level of matter (molecules and atoms) and the subatomic (electrons, protons, and even smaller particles). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation ("wave-particle duality").

Quantum mechanics is a mathematical description of reality, like any scientific model. Some of its predictions and implications go against the "common sense" of how humans see a set of bodies (a system) behave. This isn't necessarily a failure of QM; it's more a reflection of the fact that humans understand space and time on larger scales (e.g., centimetres, seconds) rather than on much smaller ones.

QM says that the most complete description of a system is its wavefunction, which is just a number that varies with time and place. One can derive things from the wavefunction, such as the position of a particle or its momentum. Yet the wavefunction describes probabilities, and some pairs of physical quantities that classical physics would assume to be fully defined together for a system are not simultaneously given definite values in QM. It is not that the experimental equipment is not precise enough; the two quantities in question just really aren't defined at the same time by the Universe. For instance, location and velocity just do not exist simultaneously for a body (this is called the Heisenberg uncertainty principle, expressed by the relation Δx·Δp ≥ ħ/2).

Certain systems, however, do exhibit quantum mechanical effects on a larger scale; superfluidity (the frictionless flow of a liquid at temperatures near absolute zero) is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena such as black-body radiation and the stability of electron orbitals. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures. Even so, classical physics often can be a good approximation to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. (However, some open questions remain in the field of quantum chaos.)

The word quantum is Latin for "how great" or "how much." In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest. The discovery that waves have discrete energy packets (called quanta) that behave in a manner similar to particles led to the branch of physics that deals with atomic and subatomic systems which we today call quantum mechanics. It is the underlying mathematical framework of many fields of physics, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, particle physics, and nuclear physics.

The foundations of quantum mechanics were established during the first half of the twentieth century by Werner Heisenberg, Max Planck, Louis de Broglie, Albert Einstein, Niels Bohr, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli, David Hilbert, and others. Some fundamental aspects of the theory are still actively studied.

Quantum mechanics is essential to understand the behavior of systems at atomic length scales and smaller.
For example, if classical mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus, making stable atoms impossible. However, in the natural world the electrons normally remain in an uncertain, non-deterministic "smeared" (wave-particle wave function) orbital path around or "through" the nucleus, defying classical electromagnetism.

Quantum mechanics was initially developed to provide a better explanation of the atom, especially the spectra of light emitted by different atomic species. The quantum theory of the atom was developed as an explanation for the electron's remaining in its orbital, which could not be explained by Newton's laws of motion and by Maxwell's laws of classical electromagnetism.

In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function (sometimes referred to as orbitals in the case of atomic electrons), and more generally, by elements of a complex vector space. This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with arbitrary accuracy. For instance, electrons may be considered to be located somewhere within a region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds," may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate a particle once its conjugate variable is specified.

The other exemplar that led to quantum mechanics was the study of electromagnetic waves such as light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or quanta, Albert Einstein exploited this idea to show that an electromagnetic wave such as light could be described by a particle called the photon with a discrete energy dependent on its frequency. This led to a theory of unity between subatomic particles and electromagnetic waves called wave-particle duality, in which particles and waves were neither one nor the other, but had certain properties of both. While quantum mechanics describes the world of the very small, it also is needed to explain certain "macroscopic quantum systems" such as superconductors and superfluids.

Broadly speaking, quantum mechanics incorporates four classes of phenomena that classical physics cannot account for: (I) the quantization (discretization) of certain physical quantities, (II) wave-particle duality, (III) the uncertainty principle, and (IV) quantum entanglement. Each of these phenomena is described in detail in subsequent sections.
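As a toy numerical illustration of class (III), the sketch below (my own example, not part of the original page; units chosen so that ħ = 1) builds a Gaussian wave packet and checks that the product of its position and momentum spreads sits at the minimum allowed by the Heisenberg relation, Δx·Δp = ħ/2:

```python
import numpy as np

# Numerical check (units: hbar = 1) that a Gaussian wave packet saturates
# the Heisenberg uncertainty relation.  Grid size and sigma are arbitrary.
hbar = 1.0
x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
sigma = 2.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position spread from |psi(x)|^2.
prob_x = np.abs(psi) ** 2
mean_x = np.sum(x * prob_x) * dx
var_x = np.sum((x - mean_x) ** 2 * prob_x) * dx

# Momentum spread from the Fourier transform |phi(p)|^2.
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx) * hbar
prob_p = np.abs(np.fft.fft(psi)) ** 2
prob_p /= prob_p.sum()                    # discrete normalization
mean_p = np.sum(p * prob_p)
var_p = np.sum((p - mean_p) ** 2 * prob_p)

print(np.sqrt(var_x * var_p))             # -> ~0.5, i.e. hbar/2
```

Feeding any non-Gaussian packet into the same calculation yields a strictly larger product, in line with Δx·Δp ≥ ħ/2.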
The history of quantum mechanics began essentially with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis of Max Planck that any energy is radiated and absorbed in quantities divisible by discrete "energy elements" E, such that each of these energy elements is proportional to the frequency ν with which they each individually radiate energy, as defined by the following formula:

E = hν = ħω,

where h is Planck's constant (and ħ = h/2π). Planck insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. However, at that time this appeared not to explain the photoelectric effect (1839), i.e., that shining light on certain materials can function to eject electrons from the material. In 1905, basing his work on Planck's quantum hypothesis, Albert Einstein postulated that light itself consists of individual quanta. These later came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing and testing, and thus, the entire field of quantum physics.

Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to a very high degree of accuracy. Thus, the current logic of the correspondence principle between classical and quantum mechanics is that all objects obey the laws of quantum mechanics, and classical mechanics is just a quantum mechanics of large systems (or a statistical quantum mechanics of a large collection of particles). The laws of classical mechanics thus follow from the laws of quantum mechanics in the limit of large systems or large quantum numbers. However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

The main differences between classical and quantum theories have already been mentioned above in the remarks on the Einstein-Podolsky-Rosen paradox. Essentially the difference boils down to the statement that quantum mechanics is coherent (addition of amplitudes), whereas classical theories are incoherent (addition of intensities). Thus, such quantities as coherence lengths and coherence times come into play. For microscopic bodies the extension of the system is certainly much smaller than the coherence length; for macroscopic bodies one expects that it should be the other way round. An exception to this rule can occur at extremely low temperatures, when quantum behavior can manifest itself on more "macroscopic" scales (see Bose-Einstein condensate).

This is in accordance with the following observations: many "macroscopic" properties of "classical" systems are direct consequences of the quantum behavior of their parts. For example, the stability of bulk matter (which consists of atoms and molecules that would quickly collapse under electric forces alone), the rigidity of this matter, and its mechanical, thermal, chemical, optical and magnetic properties are all results of the interaction of electric charges under the rules of quantum mechanics.
While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical "Newtonian" physics remain accurate in predicting the behavior of surrounding ("large") objects, of the order of the size of large molecules and bigger, at velocities much smaller than the velocity of light. There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics: matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions using probability distributions; that is, it gives the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement; hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" can be roughly translated from German as inherent or characteristic). In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it only provides probability distributions for where the particle might be found and for what momentum it might have. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that the position and the momentum cannot both be measured with full precision at the same time. However, one can measure the position alone of a moving free particle, creating an eigenstate of position with a wavefunction that is sharply peaked (a Dirac delta) at a particular position x and zero everywhere else. If one performs a position measurement on such a wavefunction, the result x will be obtained with 100% probability (full certainty). This is called an eigenstate of position (mathematically more precise: a generalized position eigenstate (eigendistribution)). If the particle is in an eigenstate of position, then its momentum is completely unknown.
On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown. In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously become an eigenstate (or generalized eigenstate) of that observable. This process is known as wavefunction collapse, a process whose status is still debated. It involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When one measures the position of the particle, it is impossible to predict the result with certainty. It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x. Wave functions can change as time progresses. An equation known as the Schrödinger equation describes how wave functions change in time, playing a role similar to that of Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity, like a classical particle with no forces acting on it. However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain. This also has the effect of turning position eigenstates (which can be thought of as infinitely sharp wave packets) into broadened wave packets that are no longer position eigenstates. Some wave functions produce probability distributions that are constant or independent of time; for example, in a stationary state of constant energy, time drops out of the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus. (Note that only the lowest angular momentum states, labeled s, are spherically symmetric.) The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, the Schrödinger equation makes a definite prediction of what the wavefunction will be at any later time. During a measurement, by contrast, the change of the wavefunction into another one is not deterministic but unpredictable, i.e., random. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic of the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments.
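The drift and spreading of the free wave packet described above can be verified numerically. The following is a minimal sketch (my own illustration, in units where hbar = m = 1, with assumed grid and packet parameters) that evolves a Gaussian packet with the split-step Fourier method:

import numpy as np

# Minimal sketch: free-particle wave packet evolution (hbar = m = 1, assumed parameters).
N, L = 2048, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)      # position grid
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)               # matching momentum grid

sigma0, k0 = 2.0, 1.0                              # initial width and mean momentum
psi = (2*np.pi*sigma0**2)**(-0.25) * np.exp(-x**2/(4*sigma0**2) + 1j*k0*x)

dt, steps = 0.05, 400
kick = np.exp(-1j * (k**2/2) * dt)                 # free evolution, H = p^2/2m, in k-space
for _ in range(steps):
    psi = np.fft.ifft(kick * np.fft.fft(psi))

prob = np.abs(psi)**2
mean = np.trapz(x*prob, x)
width = np.sqrt(np.trapz((x - mean)**2 * prob, x))
print(mean, width)   # the center drifts at velocity k0; the width grows well beyond sigma0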
In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Interpretations of quantum mechanics have been formulated that do away with the concept of "wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.

Mathematical formulation

In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac and John von Neumann, the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system), well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called complex projective space. The exact nature of this Hilbert space depends on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just C^2, the product of two complex planes. Each observable is represented by a Hermitian (more precisely: self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian, the operator corresponding to the total energy of the system, generates the time evolution. The inner product between two state vectors is a complex number known as a probability amplitude. During a measurement, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitude between the initial and final states. The possible results of a measurement are the eigenvalues of the operator, which explains the choice of Hermitian operators, for which all the eigenvalues are real. We can find the probability distribution of an observable in a given state by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. It turns out that analytic solutions of Schrödinger's equation are available for only a small number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions. For instance, in the method known as perturbation theory, one uses the analytic results for a simple quantum mechanical model to generate results for a more complicated model related to the simple model by, for example, the addition of a weak potential energy.
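The spectral-decomposition recipe above can be shown in a few lines. Here is a minimal sketch (my own toy two-level example, not from the text) computing measurement probabilities from the eigenvectors of a Hermitian observable:

import numpy as np

# Minimal sketch: Born-rule statistics from the spectral decomposition of an observable.
sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Hermitian observable (Pauli x)
state = np.array([1.0, 0.0], dtype=complex)        # state vector |0>

evals, evecs = np.linalg.eigh(sx)                  # eigenvalues = possible outcomes
for val, vec in zip(evals, evecs.T):
    amp = np.vdot(vec, state)                      # probability amplitude <eigvec|state>
    print(f"outcome {val:+.0f}: probability {abs(amp)**2:.2f}")   # 0.50 and 0.50

print("expectation:", np.real(np.vdot(state, sx @ state)))        # 0.0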
Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak deviations from classical behavior. The deviations can then be calculated based on the classical motion. This approach is important for the field of quantum chaos. An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and final states; this is the quantum-mechanical counterpart of action principles in classical mechanics.

Interactions with other scientific theories

The fundamental rules of quantum mechanics are very general. They assert that the state space of a system is a Hilbert space and that the observables are Hermitian operators acting on that space, but they do not tell us which Hilbert space or which operators to use. These must be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system moves to higher energies or, equivalently, larger quantum numbers. In other words, classical mechanics is simply the quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can therefore start from an established classical model of a particular system and attempt to guess the underlying quantum model that gives rise to the classical model in the correspondence limit. When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using the classical Coulomb potential -\frac{e^2}{4\pi\epsilon_0}\frac{1}{r}. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed.
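As an illustration of the perturbation theory just mentioned, here is a minimal sketch (my own example, in units hbar = m = 1, with an assumed weak "tilt" potential) estimating the first-order energy shift <n|V|n> for the particle-in-a-box states treated in full below:

import numpy as np

# Minimal sketch of first-order perturbation theory: level n shifts by about <n|V|n>.
L_box = 1.0
x = np.linspace(0.0, L_box, 2001)

def box_state(n):
    return np.sqrt(2.0/L_box) * np.sin(n*np.pi*x/L_box)   # unperturbed eigenfunctions

V = 0.1 * x                                        # assumed weak linear perturbation
for n in (1, 2, 3):
    E0 = (n*np.pi)**2 / 2.0                        # unperturbed energy (hbar = m = 1)
    shift = np.trapz(box_state(n)**2 * V, x)       # first-order correction <n|V|n>
    print(f"n={n}: E0={E0:.3f}, shift={shift:.4f}")

For this particular tilt the shift comes out the same (0.1 times L/2) for every level, because the mean position <x> equals L/2 in all box states.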
The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of the subnuclear particles: quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory known as electroweak theory, by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity, the most accurate theory of gravity currently known, and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

The particle in a one-dimensional potential energy box is the simplest example in which constraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain interval and infinite potential energy everywhere outside that interval. For the one-dimensional case in the x direction, the time-independent Schrödinger equation can be written as

-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.

The general solutions are

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},

or, from Euler's formula,

\psi(x) = C \sin kx + D \cos kx.

The presence of the walls of the box determines the values of C, D, and k. At each wall (x = 0 and x = L), \psi = 0. Thus when x = 0,

\psi(0) = 0 = C\sin 0 + D\cos 0 = D,

and so D = 0. When x = L,

\psi(L) = 0 = C\sin kL.

C cannot be zero, since this would conflict with the Born interpretation. Therefore \sin kL = 0, and so kL must be an integer multiple of π:

k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots

The quantized energies are then

E_n = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.

Attempts at a unified field theory

As of 2009, the quest for unifying the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), currently the most accurately tested physical theory, has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong forces into an "electrostrong" force. Current predictions state that at around 10^14 GeV the three aforementioned forces fuse into a single unified field. Beyond this "grand unification," it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.

Relativity and quantum mechanics

Main articles: Quantum gravity and Theory of everything

Even though the defining postulates of both Einstein's theory of general relativity and quantum theory are indisputably supported by rigorous and repeated empirical evidence, and even though they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven resistant to incorporation within one cohesive model. Einstein himself is well known for rejecting some of the claims of quantum mechanics.
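Plugging illustrative numbers into the final formula above, E_n = n^2 h^2 / (8mL^2), gives a feel for the scale (a minimal sketch with assumed values: an electron in a 1 nm box):

# Minimal sketch: particle-in-a-box energy levels for an electron in a 1 nm box.
h = 6.62607015e-34        # Planck's constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # joules per electron-volt
L = 1e-9                  # assumed box width: 1 nanometer

for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n={n}: {E_n/eV:.3f} eV")   # roughly 0.38, 1.50, 3.39 eV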
While clearly contributing to the field, he did not accept the more philosophical consequences and interpretations of quantum mechanics, such as the lack of deterministic causality and the assertion that a single subatomic particle can occupy numerous areas of space at one time. He also was the first to notice some of the apparently exotic consequences of entanglement, and used them to formulate the Einstein-Podolsky-Rosen paradox in the hope of showing that quantum mechanics had unacceptable implications. This was in 1935; in 1964, John Bell showed (see Bell inequality) that the EPR assumptions, locality completed by hidden variables, lead to predictions that differ from those of quantum mechanics and can therefore be tested experimentally, and the experiments came down against Einstein's philosophical assumptions. According to the work of Bell and the Copenhagen interpretation (the common interpretation of quantum mechanics by physicists for decades), and contrary to Einstein's ideas, quantum mechanics is

• neither a "realistic" theory (since quantum measurements do not reveal pre-existing properties, but rather prepare properties),

• nor a local theory (essentially because the state vector |\psi\rangle determines the probability amplitudes at all sites simultaneously, |\psi\rangle \to \psi(\mathbf{r}) for all \mathbf{r}).

The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments in which one can measure the state of one particle and instantaneously change the state of its entangled partner, although the two particles can be an arbitrary distance apart; however, this effect does not violate causality, since no transfer of information happens. These experiments are the basis of some of the most topical applications of the theory, such as quantum cryptography, which has been on the market since 2004 and works well, although at small distances, typically ≤ 1000 km. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology and in physicists' search for an elegant "theory of everything." Thus, resolving the inconsistencies between the two theories has been a major goal of twentieth- and twenty-first-century physics. Many prominent physicists, including Stephen Hawking, have labored in the attempt to discover a theory underlying everything, combining not only different models of subatomic physics, but also deriving the universe's four forces (the strong force, electromagnetism, the weak force, and gravity) from a single force or phenomenon. One of the leading minds in this field is Edward Witten, a theoretical physicist who formulated the groundbreaking M-theory, an attempt at unifying the supersymmetric string theories. Quantum mechanics has had enormous success in explaining many of the features of our world. The individual behavior of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons and others) can often only be satisfactorily described using quantum mechanics. Quantum mechanics has strongly influenced string theory, a candidate for a theory of everything (see reductionism), and the multiverse hypothesis. It is also related to statistical mechanics. Quantum mechanics is important for understanding how individual atoms combine covalently to form chemicals or molecules. The application of quantum mechanics to chemistry is known as quantum chemistry.
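The quantitative content of Bell's result can be checked directly. Here is a minimal sketch (my own example; the standard CHSH measurement angles are assumed) showing that a Bell state gives S = 2*sqrt(2) ≈ 2.83, above the local-realist bound of 2:

import numpy as np

# Minimal sketch: CHSH value for a Bell state exceeds the local-realist bound of 2.
def spin_along(theta):
    # spin measurement along angle theta in the x-z plane
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # Bell state (|00>+|11>)/sqrt(2)

def corr(a, b):
    M = np.kron(spin_along(a), spin_along(b))             # joint observable A(a) x B(b)
    return np.real(np.vdot(phi, M @ phi))                 # correlation <A(a) B(b)>

a0, a1, b0, b1 = 0.0, np.pi/2, np.pi/4, -np.pi/4          # standard CHSH angles
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print(S)   # ~2.828 = 2*sqrt(2)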
(Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much. Most of the calculations performed in computational chemistry rely on quantum mechanics. Much of modern technology operates at a scale where quantum effects are significant. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to develop quantum cryptography, which would allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum states over arbitrary distances. In many devices, even the simple light switch, quantum tunneling is vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up, in the case of the light switch, of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling to erase their memory cells.

Philosophical consequences

Since its inception, the many counter-intuitive results of quantum mechanics have provoked strong philosophical debate and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated. Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement (this dislike is the source of his famous quote, "God does not play dice with the universe"). Einstein held that there should be a local hidden variable theory underlying quantum mechanics and that, consequently, the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the EPR paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local realistic theories. Experiments have been performed confirming the accuracy of quantum mechanics, thus demonstrating that the physical world cannot be described by local realistic theories. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view. The Everett many-worlds interpretation, formulated in 1957, holds that all the possibilities described by quantum theory simultaneously occur in a "multiverse" composed of mostly independent parallel universes. This is not accomplished by introducing some new axiom into quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet: all the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. (Such a superposition of consistent state combinations of different systems is called an entangled state.)
While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe (i.e., the consistent state contribution to the aforementioned superposition) that we inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, the parallel universes will never be accessible to us. This inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away toward the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was measured originally. Not only is this completely impractical, but even if one could theoretically do it, it would destroy any evidence that the original measurement took place (including the physicist's memory).
Quantum Mechanics: Hydrogen Atom

By Dragica Vasileska (Arizona State University) and Gerhard Klimeck (Purdue University)

The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the "orbitals") are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: the eigenstates of the Hamiltonian (i.e., the energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, l and m (integers). The "angular momentum" quantum number l = 0, 1, 2, ... determines the magnitude of the angular momentum. The "magnetic" quantum number m = −l, ..., +l determines the projection of the angular momentum on the (arbitrarily chosen) z-axis. In addition to mathematical expressions for the total angular momentum and the angular momentum projection of wavefunctions, an expression for the radial dependence of the wave functions must be found. It is only here that the details of the 1/r Coulomb potential enter (leading to Laguerre polynomials in r). This leads to a third quantum number, the principal quantum number n = 1, 2, 3, .... The principal quantum number in hydrogen is related to the atom's total energy. Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to n − 1, i.e., l = 0, 1, ..., n − 1. Due to angular momentum conservation, states of the same l but different m have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same n but different l are also degenerate (i.e., they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms, which have an (effective) potential differing from the 1/r form (due to the presence of the inner electrons shielding the nuclear potential). Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the z-axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This also explains why the choice of z-axis for the directional quantization of the angular momentum vector is immaterial: an orbital of given l and m obtained for another preferred axis z′ can always be represented as a suitable superposition of the various states of different m (but same l) that have been obtained for z.

Researchers should cite this work as follows: Dragica Vasileska and Gerhard Klimeck (2008), "Quantum Mechanics: Hydrogen Atom," http://nanohub.org/resources/4993.
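The counting rules for these quantum numbers can be spelled out in a few lines of code. A minimal sketch (plain bookkeeping of the rules above, nothing else assumed):

# Minimal sketch: enumerate hydrogen quantum numbers and count shell degeneracies.
for n in (1, 2, 3):
    orbitals = [(n, l, m) for l in range(n)             # l = 0, 1, ..., n-1
                          for m in range(-l, l + 1)]    # m = -l, ..., +l
    # each orbital holds two spin projections, so a shell holds 2*n^2 states
    print(f"n={n}: {len(orbitals)} orbitals, {2*len(orbitals)} states with spin")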
No-communication theorem

In quantum information theory, a no-communication theorem is a result which gives conditions under which instantaneous transfer of information between two observers is impossible. These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox or the violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that the failure of local realism does not lead to what could be referred to as "spooky communication at a distance" (in analogy with Einstein's labeling of quantum entanglement as "spooky action at a distance"). We will illustrate this result for the setup of Bell tests, in which two observers Alice and Bob perform local observations on a common bipartite system.

Theorem. In a Bell test, the statistics of Bob's measurements are unaffected by anything Alice does locally.

To prove this, we use the statistical machinery of quantum mechanics, namely density states and quantum operations. Alice and Bob perform measurements on a system S whose underlying Hilbert space is H = H_A \otimes H_B. We also assume everything is finite dimensional, to avoid convergence issues. The state of the composite system is given by a density operator on H. Any density operator \sigma on H is a sum of the form

\sigma = \sum_i T_i \otimes S_i,

where T_i and S_i are operators on H_A and H_B which, however, need not themselves be states on the subsystems (that is, non-negative and of trace 1). In fact, the claim holds trivially for separable states: if the shared state \sigma is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is that no communication can be achieved via a shared entangled state. Any measurement Alice performs locally can be described by a quantum operation P acting on \sigma, of the form

P(\sigma) = \sum_k (V_k \otimes I_{H_B})^* \, \sigma \, (V_k \otimes I_{H_B}),

where the V_k are called Kraus matrices and satisfy \sum_k V_k V_k^* = I_{H_A}. The factor (V_k \otimes I_{H_B}) expresses the fact that Alice's measurement apparatus does not interact with Bob's subsystem. Suppose the combined system is prepared in state \sigma, and assume, for purposes of argument, a non-relativistic situation. Immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is \operatorname{tr}_{H_A}(P(\sigma)). One can directly calculate this state:

\operatorname{tr}_{H_A}(P(\sigma)) = \operatorname{tr}_{H_A}\left( \sum_k (V_k \otimes I_{H_B})^* \sigma (V_k \otimes I_{H_B}) \right)
= \operatorname{tr}_{H_A}\left( \sum_k \sum_i V_k^* T_i V_k \otimes S_i \right)
= \sum_i \sum_k \operatorname{tr}(V_k^* T_i V_k) \, S_i
= \sum_i \sum_k \operatorname{tr}(T_i V_k V_k^*) \, S_i
= \sum_i \operatorname{tr}\left( T_i \sum_k V_k V_k^* \right) S_i
= \sum_i \operatorname{tr}(T_i) \, S_i
= \operatorname{tr}_{H_A}(\sigma).

Some comments:

• Notice that once time evolution operates on the density state, the calculation in the proof fails. In the case of the (non-relativistic) Schrödinger equation, which has infinite propagation speed, the above analysis will of course fail for positive times. Clearly, the importance of the no-communication theorem for positive times is for relativistic systems.

• The no-communication theorem thus says that shared entanglement alone cannot be used to transmit quantum information. Compare this with the no-teleportation theorem, which states that a classical information channel cannot transmit quantum information.
(By "transmit" we mean transmission with full fidelity.) However, quantum teleportation schemes utilize both resources to achieve what is impossible for either alone.
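The theorem can also be checked numerically. Here is a minimal sketch (my own illustration, assuming Alice measures her qubit of a Bell pair in the computational basis) confirming that Bob's reduced density matrix is unchanged by her local operation:

import numpy as np

# Minimal sketch: Alice's local measurement leaves Bob's reduced state untouched.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state (|00>+|11>)/sqrt(2)
sigma = np.outer(phi, phi.conj())                           # density operator on H_A x H_B

def bob_state(rho):
    # partial trace over Alice's qubit
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

kraus = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]          # projective measurement on H_A
I2 = np.eye(2)
post = sum(np.kron(V, I2) @ sigma @ np.kron(V, I2).conj().T for V in kraus)

print(bob_state(sigma))   # I/2 before Alice measures
print(bob_state(post))    # still I/2 afterwards: no signal reaches Bob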
Global Schrödinger maps in dimensions d ≥ 2: small data in the critical Sobolev spaces. (English) Zbl 1233.35112

Two global results for the initial-value problem for the Schrödinger map equation,

\partial_t \phi = \phi \times \Delta \phi on \mathbb{R}^d \times \mathbb{R}, \quad \phi(0) = \phi_0, \quad \phi : \mathbb{R}^d \times \mathbb{R} \to \mathbb{S}^2, \quad d \ge 2,

are proved. In order to formulate these results, we have to introduce some notation. As usual, H^\sigma denotes the Sobolev space on \mathbb{R}^d. For Q \in \mathbb{S}^2,

H_Q^\sigma = \{ f : \mathbb{R}^d \to \mathbb{R}^3 ; \, |f(x)| = 1 \text{ a.e.}, \, f - Q \in H^\sigma \}, \qquad \|f\|_{H_Q^\sigma} = \|f - Q\|_{H^\sigma}.

The space H_Q^\infty is the intersection of all spaces H_Q^\sigma, and for f \in H^\infty, \|f\|_{\dot H^\sigma} is the homogeneous Sobolev norm. The first theorem proved in the paper states that if d ≥ 2 and Q \in \mathbb{S}^2, then there exist \varepsilon_0(d) > 0 and a constant C > 0 such that for any \phi_0 \in H_Q^\infty with \|\phi_0 - Q\|_{\dot H^{d/2}} \le \varepsilon_0(d), there exists a unique solution \phi = S_Q(\phi_0) \in C(\mathbb{R}; H_Q^\infty) of the initial-value problem, with

\|\phi(t) - Q\|_{\dot H^{d/2}} \le C \|\phi_0 - Q\|_{\dot H^{d/2}} \quad \text{for all } t \in \mathbb{R},

and for any T \in [0, \infty) and \sigma \in \mathbb{Z}_+,

\sup_{t \in [-T, T]} \|\phi(t)\|_{H_Q^\sigma} \le C(\sigma, T, \|\phi_0\|_{H_Q^\sigma}).

The second theorem provides uniform bounds (on \mathbb{R}) for \|\phi(t)\|_{H_Q^\sigma} and asserts that the operator S_Q admits continuous extensions S_Q : B_{\varepsilon_0}^{\sigma_1} \to C(\mathbb{R}; \dot H^\sigma \cap \dot H_Q^{d/2-1}) for any \sigma \in [d/2, \sigma_1] and some \varepsilon_0(d, \sigma_1) \in (0, \varepsilon_0(d)), where

B_\varepsilon^\sigma = \{ \phi \in \dot H_Q^{d/2-1} \cap \dot H^\sigma ; \, \|\phi - Q\|_{\dot H^{d/2}} \le \varepsilon \}.

The first theorem was proved in dimensions d ≥ 4 in [I. Bejenaru, A. D. Ionescu, and C. E. Kenig, Adv. Math. 215, No. 1, 263-291 (2007; Zbl 1152.35049)]. In order to overcome the difficulties which appear, especially in the case d = 2, the authors of the paper under review use a sum of Galilei transforms of the lateral L_e^{p,q} spaces and the caloric gauge.

MSC: 35K45 Systems of second-order parabolic equations, initial value problems; 35B65 Smoothness and regularity of solutions of PDE; 35Q41 Time-dependent Schrödinger equations, Dirac equations.
Thursday, January 29, 2009

AWT and definition of intelligence

In AWT, a correct (i.e. physically relevant) definition of intelligence is rather important, as it can give us a clue about the direction of the psychological time arrow. From a certain perspective, every free particle appears to be quite an intelligent "creature," because it can unmistakably find the path of optimal potential gradient even inside a highly dimensional field where the interactions of many particles overlap mutually. Whereas a single particle is rather "silly," able to follow just a narrow density gradient, complex multidimensional fluctuations of Aether can follow complex gradients and can even, to a certain extent, avoid a wrong path or obstacles. They are "farseeing" and "intelligent." Note that the traveling of a particle along a density gradient leads to its gradual dissolution and "death": the same forces which keep the particle in motion will lead to its gradual disintegration. The ability of people to make correct decisions in such a fuzzy environment is usually connected with social intelligence. We can say that the motion of a particle is fully driven by its "intuition." Particles can react quickly in many time dimensions symmetrically (congruently), whereas their ability to interact with the future (i.e. their ability to make predictions) remains very low, in accordance with the low (but nonzero) memory capacity of a single gradient particle. Nested clusters of many particles are the more clever, the more hidden dimensions they are formed by. Electrochemical waves of neural system activity should form a highly nested system of energy density fluctuations. Nevertheless, if we consider intelligence as "an ability to obtain new abilities," then the learning ability and memory capacity of single-level density fluctuations still remain very low. Every particle has a surface gradient from the perspective of a single level of particle fluctuations, so it has a memory (compacted space-time dimensions) as well. Therefore, for a single object, we can postulate the number of nested dimensions inside the object as a general criterion of intelligence. The highly compactified character of the neural network enables people to handle a deep level of mutual implications, i.e. manifolds of causal space defined by implication tensors of high order. Such a definition remains symmetrical, i.e. invariant with respect to both intuitive behavior driven by parallel logic and conscious behavior driven by sequential logic. Every highly condensed system becomes chaotic, because the intelligent activities of individual particles are temporary and compensate each other mutually. In this way, the behavior of human civilization doesn't differ very much from the behavior of a dense gas, as we can see from the history of wars and economic crises, for instance. The ability of people to steer the evolution of their own society is still quite limited in general. We can consider such an ability as a criterion of social self-awareness. The process of phase transition corresponds to the learning phase of a multi-particle system. An interesting point is that individual members of such systems may not be aware of an incoming phase transition, because their space-time expands (the environment becomes more dense) together with these intelligent artifacts. At a certain moment the environment becomes more conscious (i.e. negentropic) than the particle system formed by it, and a phase transition will occur.
The well-known superfluidity and superconductivity phenomena, followed by the formation of a boson condensate, can serve as a physical analogy of sectarian community formation, separated from the needs and feedback of the rest of society. Members of such a community can be characterized internally by their high level of censorship (a total reflection phenomenon with respect to information spreading) and by a superfluous homogeneity of individual stance distribution, followed by rigidity and fragility of their opinions (i.e. by the duality of odd and even derivatives in space and time) from the outside perspective. AWT explains how even subtle forces of interest between individuals crowded around common targets gradually cumulate into the emergence of irrational behavior. Because such an environment becomes more dense, space-time dilation occurs there, and everything appears OK from the intrinsic perspective. As a result, nobody in a sectarian community will realize that he has just lost control over the situation. For example, the people preparing the LHC experiments cannot be accused of evil motives: they just want to do some interesting measurements at the LHC, finish their dissertations, make some money in an attractive job, raise children, learn French, and so on. Just innocent wishes all the time, am I right? But as a whole, their community has omitted serious precautionary principles in the hope that a successful end justifies the means. The particle model explains how even subtle forces of interest between individuals crowded around common targets gradually cumulate into the emergence of irrational behavior. For example, nobody in this community has taken care about the difference between charged and neutral black holes in their ability to swallow surrounding matter. As a result, no member of such a community realizes the consequences of his behavior until the very end. And this is quite silly and unconscious behavior, indeed.

AWT and LHC safety risk

The LHC "black hole" issue, disputed (1, 2, 3) and recently reopened (1, 2, 3), is a manifestation of the previously discussed fact that every closed community undeniably becomes sectarian and separated from the needs of the rest of society, like a singularity, by a total reflection mechanism. The ignorance of fundamental ideas (Heim theory) or discoveries (cold fusion, surface superconductivity, "antigravity") on behalf of risky and expensive LHC experiments illustrates the increasing gap between the priorities of the physics community and the interests of the rest of society. The power of human inquisitiveness is the problem here: as we know from history, scientists as a whole never care about morality, just about technical difficulties. If they can do something, then they will do it, sooner or later, undeniably. No matter whether it's a nuclear weapon, a genetically engineered virus and/or a collider. What makes trouble at the moment is that the results of such experiments can threaten the whole civilization. We should know about this danger of human nature, and we should be prepared to suffer the consequences. Max Tegmark's "quantum suicide" experiment doesn't say how large a portion of the original system can survive the experiment. So, what's the problem with the planned LHC experiments? Up to this day, no relevant analysis evaluating all possible risks and their error bars is publicly available. The existing safety analyses and reports (1, 2) are very rough and superficial, as they don't consider important risk factors and scenarios, like the formation of charged black holes or the surface tension phenomena of dense particle clusters.
There is an obstinate tendency to start the LHC experiments without such an analysis and to demonstrate the first successful results even without a thorough testing phase. Because the load of the accelerator was impatiently increased over 80% of nominal capacity during the first days, a substantial portion of the cooling system crashed due to a massive spill (100 tons) of expensive helium, and the monitoring systems of the whole LHC are undergoing extensive upgrade and replacement to avoid an avalanche-like propagation of the same problem through the whole accelerator tube in the future. Up to these days, the public has no relevant and transparent data about the probability of supercritical black hole formation during the expected LHC lifetime, nor about the main factors which could increase the total risk above an acceptable level, in particular the risks associated with:

1. The extreme asymmetry of head-to-head collisions, during which black holes of zero momentum/speed could be formed, so they would have a lot of time to interact with the Earth compared to natural protons from cosmic rays. The collision geometry has no counterpart in nature, as it is a product of long-term human evolution, not of natural processes.

2. The avalanche-like character of multi-particle collisions. If some piece of exotic matter appears in the accelerator line, then the whole content of the LHC will feed it with new matter incoming from both directions at nearly luminal speed, i.e. much faster than in the collisions of natural cosmic rays occurring in the stratosphere.

3. The proximity of a dense environment. Compared to stratospheric collisions of cosmic rays, the metastable products of LHC collisions could be trapped by the gravitational field of the Earth and interact with it in a long-term fashion. Some models consider that a black hole could move in the Earth's core for years without notice, thus changing the Earth into a time bomb for future generations.

4. The formation of charged and magnetic black holes. As we know from theory, real black holes should always exhibit nonzero charge and a magnetic field as a result of their fast surface rotation. While the force constant of the electromagnetic force is about 10^39 times stronger than that of the gravitational interaction (and the force constant of the nuclear force is even much higher), omitting such a possibility from the security analysis is just an illustration of the deep incompetence of high energy physics, and it looks more like intention than mere omission. This is not so surprising, as introducing such a risk into the safety analysis would increase the LHC risk estimates by many orders of magnitude, making the experiments unfeasible in the eyes of society.

5. The formation of dense clusters of quite common neutral particles, which are stable well outside the LHC energy range (presumably neutrons). This risk is especially relevant for the ALICE experiment, consisting of head-to-head collisions of heavy atomic nuclei, during which a large number of free neutrons could be released in the form of so-called neutron fluid. The signs of tetra-neutron existence apparently support this hypothesis. The neutron fluid would stabilize neutrons against decay by its strong surface tension, in a way analogous to the neutrons inside neutron stars. The risk of neutron fluid formation is connected to a possible tendency to expel protons from atomic nuclei in contact with the neutron fluid, thus changing them into droplets of another neutron fluid by an avalanche-like mechanism, which was originally proposed for the strangelet risk of the LHC.
6. Surface tension effects of large dense particle clusters, like the various gluonium and quarkonium states, which can stabilize even unstable forms of matter, like neutral mesons and other hadrons, up to levels where they can interact with ordinary matter by the mechanism described above, forming further dense particle clusters, so-called strangelets (a sort of tiny quark star, originally proposed by Ed Witten). Evidence for such states was recently confirmed for tetra- and pentaquark exotic states. By AWT, the surface tension phenomena are related to the dark matter and supersymmetry effects observed unexpectedly at Fermilab (formation of dimuon states well outside the collider pipe), as we can explain later. If this connection is confirmed, we shouldn't worry about strangelet formation anymore, simply because we have observed it already!

Compared to black hole formation, the risks of strangelets and neutron fluid are connected not to the collapse of the Earth into a gravitational singularity, but to the release of a vast amount of energy (comparable to that of thermonuclear fusion), during which most of the matter would be vaporized and expelled into cosmic space by the pressure of a giant flash of accretion radiation. As I explained already, cosmic ray arguments aren't very relevant to the highly asymmetric LHC collision geometry, so it is pointless to repeat them again and again. This geometry, not the energy scale, is what makes the LHC collisions so unique and orthogonal to extrapolations based on highly symmetrical thermodynamics. It is a product of very rare human evolution. The whole of AWT is just about the probability of various symmetries. So we are required to reconsider the LHC experiments in a much deeper, publicly available and peer-reviewed security analysis. We should simply apply the scientific method even to the security analysis of scientific experiments: no less, no more. In my opinion, these objections are trivial and mostly evident, but no safety analysis has considered them so far, for an apparent reason: not to threaten the launch of the LHC. So now we can just ask who is responsible for this situation, and for the lack of persons responsible for a relevant safety analysis of an LHC project of €7 billion in total cost. Safety is the main concern of the LHC experiments. You can be perfectly sure that the LHC experiments are safe because of many theories; after all, the main purpose of these experiments is to verify those very theories. Isn't the only purpose of the LHC, at the very end, to verify its own safety? Is it really enough for everybody?

Tuesday, January 27, 2009

AWT and Bohmian mechanics

This post is a reaction to recent comments by L. Motl (1, 2, reactions) concerning the Bohm interpretation of quantum mechanics (QM), in particular the concept of Louis de Broglie's pilot wave (implicate/explicate order is discussed here). Bohm's holistic approach (he was a proponent of Marxist ideas) enabled him to see the general consequences of this concept considerably deeper than the aristocratic de Broglie did. It is not surprising that Bohm's interpretation has a firm place in AWT interpretations of various concepts, in particular the causal topology of implications and the famous double-slit experiment. After all, we have presented a mechanical analogy of the double-slit experiment (DSE) already (videos), so it is evident that QM can be interpreted by classical wave mechanics without problem.

Single-particle interference observed for macroscopic objects

AWT considers the pilot wave an analogy of the Kelvin waves formed during an object's motion through a particle environment.
The original AWT explanation of the double-slit experiment is that every fast-moving particle creates undulations of the vacuum foam around it, in the same way a fish swimming beneath the water surface does, in analogy to the de Broglie wave. These undulations are oriented perpendicular to the particle's direction of motion, and they can interfere with both slits whenever the particle passes through one of them. The Aether foam temporarily becomes more dense under shaking, thus mimicking the mass/energy equivalence of relativity and the probability density function of quantum mechanics at the same moment. The constructive interference creates fan-shaped paths of denser vacuum foam, which the particle wave follows preferentially, being focused by the denser environment, thus creating interference patterns at the target. By AWT, the de Broglie wave, and even the quantum wave itself, are real physical artifacts. The fact that they cannot be observed directly using a light wave follows from Bose statistics: surface waves penetrate each other, so they cannot be used to observe one another. But by Hardy's theorem, weak (gravitational or photon coupling) measurement of an object's location without violating the uncertainty principle is possible. What we can observe is just the gravitational lensing effect of the density gradients (as described by the probability function) induced by these waves in the vacuum foam through the thickening effect during shaking. Another question is whether the pilot wave concept supplies deeper insight, or even other testable predictions, than, for example, the time-dependent Schrödinger equation does. In my opinion it doesn't; it is, if anything, a subset of the information contained in the classical QM formalism. This doesn't mean that in certain situations the pilot-wave formalism cannot supply a useful shortcut for a formal solution (in the same way as, for example, Bohr's atom model), whereas in other cases it can become more difficult to apply than other interpretations.

Monday, January 26, 2009

AWT and definition of observable reality

When comparing contemporary physical theories, a natural question emerges immediately: if AWT is proclaimed to be more general than, for example, various quantum field or quantum gravity theories, shouldn't it lead to even more solutions than these theories can supply? And if vagueness is the main objection against these theories, why should we care about AWT, after all? The truth is that AWT can lead to a virtually infinite number of solutions, because even in a quite limited particle system the number of possible states increases extremely fast. But AWT introduces a gradient-driven concept of reality, which is probability driven. Many results of particle-particle collisions simply aren't probable, because they're too rare. Therefore we can see only density gradients inside a dense particle system, not the particles or intermediate states as such. The concept of gradient-driven reality is apparently anthropocentric, but it can be derived from the AWT concept independently, because only artifacts which were created by the long-term evolution of a high number of mutations, i.e. by causal time events, can interact with reality in a gradient-driven way. The probability-based approach built on particle statistics brings a rather strict restriction on the number of possible solutions of every fuzzy theory. String theorists are aware of this opportunity, so they're trying to apply a statistical approach to the landscape of string theory predictions as well.
But because the number of predictions of string theory (~10^500) roughly corresponds to the number of particle states inside the observable portion of the Universe, such an approach is phenomenologically identical to AWT if we simply omit the whole intermediate step related to the tedious string theory formalism (which serves only as a random number generator) and apply Boltzmann statistics to these states directly. In this way, AWT wins over formal theories in simplicity (i.e. by the Occam's razor criterion), just because it introduces a gradient-driven definition of observable reality into physics, thus reducing the number of possible observable states in it: every object can be observed if and only if it contains some space-time gradient from a sufficiently general perspective. For example, the (movement of) density gradients inside condensing supercritical vapor can be observed, while the molecules (and their motion) cannot. The single Aether concept, i.e. the material conditional (antecedent), is sufficient for such a decision if we apply an observability criterion (consequent), thus introducing the basic implication vector on which AWT is based: if the Universe is formed by a chaotic particle environment, then every fluctuation evolved/emerged in it via a (number of) causal events would see only the (same number of) causal gradients of it (and we can predict the appearance of this observable reality in a unique way). In this way, we always see exactly the part of the Universe which served for our evolution (space-time emergence), and the observable scope of reality expands gradually. This is the way Bohm's implicate/explicate order may be understood in the context of AWT, because the implication vector defines a time arrow of causal space-time curvature and its subsequent compactification. The testability of the AWT intrinsic perspective is provided by a nonscalar implication vector, which is based on a nonsingular (neither zero nor infinite) order of the axiomatic tensor. Outside of this perspective, AWT remains inherently a tautology, which follows from the fact that no assumption can consider itself or, less generally, that no object of observation can serve both as the means and as the subject of the same observation at the same point of space and time. The Aether concept itself remains a tautology, as it cannot be proven by observation and causal logic without violating this logic in a less or more distant perspective, in the same way as the God concept. It can easily be demonstrated that many conceptual problems of contemporary science simply follow from the fact that scientists have no clue what is observable and what is not, because of the lack of a relevant definition of observable reality. In this way, many possible combinations would simply disappear from testable predictions if we applied the gradient-driven statistics, or the Lagrange/Hamilton mechanics which is based on it. In particular, the misinterpretation of the results of the Michelson-Morley experiment follows simply from the fact that scientists didn't realize that the motion of an environment isn't observable by the waves of that environment. The refusal of de Broglie/Bohmian mechanics is a misunderstanding of the same category: scientists didn't realize that the de Broglie wave cannot be observed (so easily) by a light wave, being a wave of the same environment, so the lack of experimental evidence for the de Broglie wave cannot serve as evidence against Bohmian mechanics.
AWT, emergence and Hardy's paradox

Recently, fundamental experimental evidence of Hardy's paradox was given, which basically means that quantum mechanics is no longer a purely statistics-based theory following Bell inequalities. The non-formal understanding of this paradox is easy: if no combination of mutually commutable quantities can be measured with certainty, how can we be sure about that? Might some combination exist which violates such uncertainty? In this way the uncertainty principle of quantum mechanics violates itself in the background, thus enabling so-called "weak" measurements. This was demonstrated recently for the case of entangled photon pairs; it can serve as evidence that even photons have a distinct "shape", which is a manifestation of the photon's rest mass. This is because the explicit formulation of quantum mechanics neglects gravity phenomena and the rest mass concept in the background: by the Schrödinger equation, every particle should gradually dissolve into the whole Universe, which violates everyday observations, indeed. Such behavior is effectively prohibited by the acceleration following from the omni-directional expansion of the Universe, i.e. the gravity potential, so that every locatable particle has a nonzero surface curvature and is conditionally stable at the human scale. From the nested character of Aether fluctuations it follows that not just a single level of "weak" measurement should be achievable here. After all, the fact that we can interact with other people and objects without complete entanglement can serve as evidence that "weak" observation is very common at the human scale. By AWT, every strictly causal theory violates itself in a more or less distant perspective due to emergence phenomena. While the classical formulation of general relativity remains seemingly self-consistent (being strictly based on a single causality arrow), deeper analysis reveals that the derivation of the Einstein field equations neglects the stress-energy tensor contribution (Yilmaz, Heim, Bekenstein and others) which results from mass-energy equivalence. This approach makes relativity an implicit and infinitely fractal theory, in the same way as quantum mechanics (its AdS/CFT dual theory). For example, gravitational lensing, the multiple event horizons of charged black holes and/or the dark matter phenomena can serve as evidence of spontaneous symmetry breaking of time arrows and a manifestation of quantum uncertainty and supersymmetry within relativity. This uncertainty leads to a landscape of many solutions for every quantum field or quantum gravity theory based on a combination of mutually inconsistent (i.e. different) postulates. Such behavior follows Gödel's incompleteness theorems, by which the formal proof of rules valid for sufficiently large sets of natural numbers becomes more difficult than the rules themselves, thus remaining unresolvable by their very nature. This is a consequence of emergence, which introduces a principal dispersion into the observation of large causal objects and/or phenomena; it cannot be avoided, or such artifacts wouldn't be observable anymore. In this way, every strictly formal (i.e. sequential-logic-based) proof of a natural law becomes violated in a more or less distant perspective, following the "More is Different" theorem. AWT demonstrates that this emergence is accompanied by causal (i.e.
transversal wave based) energy spreading through a large system of scale-invariant symmetry fluctuations (unparticles), which behave like soap foam with respect to the spreading of light and make it possible to observe the universe (and all objects inside it) from an extrinsic and an intrinsic perspective simultaneously. The mutual interference of these two perspectives leads to the quantization of observable reality, which is intrinsically chaotic and extrinsically causal by its very nature. In this connection it's useful (and sometimes entertaining) to follow the deductions of formally thinking theorists, like Lubos Motl, whose strictly formal thinking leads him into deep contradiction/confrontation with common sense and, occasionally and undeniably, with the whole rest of the world. It may appear somewhat paradoxical that precisely a fanatic proponent of string theory, which introduced the duality concept into physics, has such deep problems with dual/plural thinking. This paradox is still logical, though, if we realize how complex string theory is and how strictly formal the thinking required for its comprehension is. In this way, the "emergence group" of dense Aether theory makes understanding observable reality a quite transparent and easy task at a sufficiently general level. That still doesn't mean there isn't a lot left to understand at the deeper levels dedicated to the individual formal theories.
Saturday, February 1, 2014

snow, wind, and avalanches

I <3 pow

Freeriding is arguably the most fun thing to do on a snowboard. But as the proverb has it: no risk, no fun. There is always a looming threat from avalanches. Although judging avalanche danger is today based on a lot of scientific knowledge, allowing for proper risk assessments and decision strategies (see, for instance, Werner Munter), there is always a residual risk. Avalanches are very complex phenomena, depending on a web of factors, like temperature, slope orientation and steepness, terrain, vegetation, snowpack, ... A very difficult variable to deal with is wind. Heavy winds during snowfall can pack incredible amounts of snow at very specific exposures. And windy conditions after the last snowfall can result in very local hot spots. Often only experience can help here. Recently, we had to deal with this. In order to reach the side of the mountain we planned on descending, there was some windpacked powder to deal with. Between the three of us, we triggered four avalanches. Luckily they were all small and superficial - but you never know. Interestingly, the final couloirs greeted us with epic pow, very different in quality to the other slopes...

Perhaps the greatest safety accomplishment of the last years has been the introduction of avalanche airbags. A simple idea based on increasing the volume associated with the freerider. In an avalanche, understood as granular media moving under the influence of gravity, larger particles tend to travel to the surface. This is vital for survival, as being rescued within about 20 minutes results in a very good survival rate, which drops significantly after that.

One last thing. If you are "lucky" enough to be close to the tear line where the avalanche rips away from the slope, you have a few seconds left to do the right thing. Next to deploying the airbag you can actually try and ride out of the avalanche. When the snow silently crumbles around you, it's like surfing! Your board actually carries you, and if you are not distracted by the dynamics of everything around you moving, you can focus on a sideways exit. This happened to me here: Not sure how easy this is on skis though, as you can see here, here and here (note the effect of the airbag - the last guy didn't have one; those must have been long 5 1/2 minutes). Watch the pros struggling: 1, 2, 3, 4, 5. And try not to do this, after you decide to gun it. And then there's these guys: 1, 2. Please, don't be one of those people who turn up with no safety equipment or say stuff like, "but I've never seen an avalanche come down on this slope" or "hey, there were already some tracks, no big deal"! And finally, why bother? Why expose yourself to unnecessary risk? Because it is so much fun, that's why:) Safe and awesome freeriding!

Wednesday, November 6, 2013

old posts

This is a collection of old blog posts, going back to 2006. For some strange reason I thought it would be a good idea to have two blogs. They have been migrated here.

a philosophy of science primer - part III

• part I: some history of science and logical empiricism,
• part II: problems of logical empiricism, critical rationalism and its problems.

After the unsuccessful attempts to found science on common sense notions, as seen in the programs of logical empiricism and critical rationalism, people looked for new ideas and explanations.
The Kuhnian View

Thomas Kuhn's enormously influential work on the history of science is called The Structure of Scientific Revolutions. He rejected the idea that science is an incremental process accumulating more and more knowledge. Instead, he identified the following phases in the evolution of science:

• prehistory: many schools of thought coexist and controversies are abundant,
• history proper: one group of scientists establishes a new solution to an existing problem which opens the doors to further inquiry; a so-called paradigm emerges,
• paradigm-based science: unity in the scientific community on what the fundamental questions and central methods are; generally a problem-solving process within the boundaries of unchallenged rules (analogous to solving a Sudoku),
• crisis: more and more anomalies and limits appear; questioning of established rules,
• revolution: a new theory and weltbild takes over, solving the anomalies, and a new paradigm is born.

Another central concept is incommensurability, meaning that proponents of different paradigms cannot understand the other's point of view because they have diverging ideas and views of the world. In other words, every rule is part of a paradigm and there exist no trans-paradigmatic rules. This implies that such revolutions are not rational processes governed by insights and reason. In the words of Max Planck (the founder of quantum mechanics; from his autobiography): "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

Kuhn gives additional blows to a commonsensical foundation of science with the help of Norwood Hanson and Willard Van Orman Quine:

• every human observation of reality contains an a priori theoretical framework,
• underdetermination of belief by evidence: any evidence collected for a specific claim is logically consistent with the falsity of the claim,
• every experiment is based on auxiliary hypotheses (initial conditions, proper functioning of apparatus, experimental setup, …).

People slowly started to realize that there are serious consequences in Kuhn's ideas and in the problems faced by the logical empiricists and critical rationalists in establishing a sound logical and empirical foundation of science:

• postmodernism,
• constructivism, or the sociology of science,
• relativism.

Modernism describes the development of Western industrialized society since the beginning of the 19th century. A central idea was that there exist objective true beliefs and that progression is always linear. Postmodernism replaces these notions with the belief that many different opinions and forms can coexist and all find acceptance. Core ideas are diversity, differences and intermingling. In the 1970s it is seen to enter scientific and cultural thinking. Postmodernism has taken a bad rap from scientists after the so-called Sokal affair, where the physicist Alan Sokal got a nonsensical paper published in a journal of postmodern cultural studies by flattering the editors' ideology with nonsense that sounds good. Postmodernism has been associated with scepticism and solipsism, next to relativism and constructivism. Notable scientists identifiable as postmodernists are Thomas Kuhn, David Bohm and many figures in the 20th-century philosophy of mathematics, as well as Paul Feyerabend, an influential philosopher of science. To quote the Nobel laureate Steven Weinberg on Kuhnian revolutions:

Constructivism excludes objectivism and rationality by postulating that beliefs are always subject to a person's cultural and theological embedding and inherent idiosyncrasies.
It also goes under the label of the sociology of science. In the words of Paul Boghossian (in his book Fear of Knowledge: Against Relativism and Constructivism): Constructivism about rational explanation: it is never possible to explain why we believe what we believe solely on the basis of our exposure to the relevant evidence; our contingent needs and interests must also be invoked. The proponents of constructivism go further: […] all beliefs are on a par with one another with respect to the causes of their credibility. It is not that all beliefs are equally true or equally false, but that regardless of truth and falsity the fact of their credibility is to be seen as equally problematic. From Barry Barnes' and David Bloor's Relativism, Rationalism and the Sociology of Knowledge. In its radical version, constructivism fully abandons objectivism:

• Objectivity is the illusion that observations are made without an observer (from the physicist Heinz von Foerster; my translation)
• Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning (from the philosopher Ernst von Glasersfeld; my translation)

In addition, radical constructivism proposes that perception never yields an image of reality but is always a construction of sensory input and the memory capacity of an individual. An analogy would be the submarine captain who has to rely on instruments to indirectly gain knowledge of the outside world. Radical constructivists are motivated by modern insights gained from neurobiology. Historically, Immanuel Kant can be understood as the founder of constructivism. On a side note, the bishop George Berkeley went as far as to deny the existence of an external material reality altogether. Only ideas and thought are real.

Another consequence of the foundations of science lacking commonsensical elements, and of the ideas of constructivism, can be seen in the notion of relativism. If rationality is a function of our contingent and pragmatic reasons, then it can be rational for a group A to believe P, while at the same time it is rational for a group B to believe the negation of P. Although, as a philosophical idea, relativism goes back to the Greek Protagoras, its implications are unsettling for the Western mind: anything goes (as Paul Feyerabend characterizes his idea of scientific anarchy). If there is no objective truth, no absolute values, nothing universal, then a great many of humanity's centuries-old concepts and beliefs are in danger. It should, however, also be mentioned that relativism is prevalent in Eastern thought systems, and is, for example, found in many Indian religions. In a similar vein, pantheism and holism are notions which are much more compatible with Eastern thought systems than Western ones. Furthermore, John Stuart Mill's arguments for liberalism appear to also work well as arguments for relativism:

• fallibility of people's opinions,
• opinions that are thought to be wrong can contain partial truths,
• accepted views, if not challenged, can lead to dogmas,
• the significance and meaning of accepted opinions can be lost in time.

From his book On Liberty. But could relativism possibly be true?
Consider the following hints:

• Epistemological
• problems with perception: synaesthesia, altered states of consciousness (spontaneous, mystical experiences and drug induced),
• psychopathology describes a frightening amount of defects in the perception of reality and one's self,
• people suffering from psychosis or schizophrenia can experience a radically different reality,
• free will and neuroscience,
• synthetic happiness,
• cognitive biases.
• Ontological
• nonlocal foundation of quantum reality: entanglement, delayed choice experiment,
• illogical foundation of reality: wave-particle duality, superpositions, uncertainty, intrinsic probabilistic nature, time dilation (special relativity), observer/measurement problem in quantum theory,
• discreteness of reality: quanta of energy and matter, constant speed of light,
• nature of time: not present in fundamental theories of quantum gravity, symmetrical,
• arrow of time: why was the initial state of the universe very low in entropy?
• emergence, self-organization and structure formation.

In essence, perception doesn't necessarily say much about the world around us. Consciousness can fabricate reality. This makes it hard to be rational. Reality is a really bizarre place. Objectivity doesn't seem to play a big role. And what about the human mind? Is this at least a paradox-free realm? Unfortunately not. Even what appears to be a consistent and logical formal thought system, i.e., mathematics, can be plagued by fundamental problems. Kurt Gödel proved that in every consistent (non-contradictory) system of mathematical axioms rich enough to yield the elementary arithmetic of whole numbers, there exist statements which can be neither proven nor disproved within the system. So logical axiomatic systems are incomplete. As an example, Bertrand Russell encountered the following paradox: let R be the set of all sets that do not contain themselves as members. Is R an element of itself or not? If you really accede to the idea that reality and the perception of reality by the human mind are very problematic concepts, then the next puzzles are:

• why has science been so fantastically successful at describing reality?
• why is science producing amazing technology at breakneck speed?
• why is our macroscopic, classical level of reality so well behaved, and why does it appear so normal although it is based on quantum weirdness?
• are all beliefs justified given the believer's biography and brain chemistry?

a philosophy of science primer - part II

Continued from part I.

The Problems With Logical Empiricism

The programme proposed by the logical empiricists, namely that science is built of logical statements resting on an empirical foundation, faces central difficulties. To summarize:

• it turns out that it is not possible to construct pure formal concepts that solely reflect empirical facts without anticipating a theoretical framework,
• how does one link theoretical concepts (electrons, utility functions in economics, inflational cosmology, Higgs bosons, …) to experiential notions?
• how does one distinguish science from pseudo-science?

Now this may appear a little technical and not very interesting or fundamental to people outside the field of the philosophy of science, but it gets worse:

• inductive reasoning is invalid from a formal logical point of view!
• causality defies standard logic!

This is big news. So, just because I have witnessed the sun going up every day of my life (single observations), I cannot say it will go up tomorrow (general law). Observation alone does not suffice, you need a theory.
But the whole idea here is that the theory should come from observation. This leads to the dead end of circular reasoning. But surely causality is undisputable? Well, apart from the problems coming from logic itself, there are extreme examples to be found in modern physics which undermine the common sense notion of a causal reality: quantum nonlocality, the delayed choice experiment. But challenges often inspire people, so the story continues…

Critical Rationalism

OK, so the logical empiricists faced problems. Can't these be fixed? The critical rationalists believed so. A crucial influence came from René Descartes' and Gottfried Leibniz's rationalism: knowledge can have aspects that do not stem from experience, i.e., there is an immanent reality to the mind. The term critical refers to the fact that insights gained by pure thought cannot be strictly justified but only critically tested against experience. Ultimate justifications lead to the so-called Münchhausen trilemma, i.e., one of the following:

• an infinite regress of justifications,
• circular reasoning,
• dogmatic termination of reasoning.

The most influential proponent of critical rationalism was Karl Popper. His central claims were in essence:

• use deductive reasoning instead of induction,
• theories can never be verified, only falsified.

Although there are similarities with logical empiricism (empirical basis, science as a set of theoretical constructs), the idea is that theories are simply invented by the mind and are temporarily accepted until they can be falsified. The progression of science is hence seen as an evolutionary process rather than a linear accumulation of knowledge. Sounds good, so what went wrong with this ansatz?

The Problems With Critical Rationalism

In a nutshell:

• basic formal concepts cannot be derived from experience without induction; how can they be shown to be true?
• deduction turns out to be just as tricky as induction,
• what parts of a theory need to be discarded once it is falsified?

To see where deduction breaks down, there is a nice story by Lewis Carroll (the mathematician who wrote the Alice in Wonderland stories): What the Tortoise Said to Achilles. If deduction goes down the drain as well, not much is left to ground science on notions of logic, rationality and objectivity. Which is rather unexpected of an enterprise that in itself works amazingly well employing just these concepts.

Explanations in Science

And it gets worse. Inquiries into the nature of scientific explanation reveal further problems. The analysis here is based on Carl Hempel's and Paul Oppenheim's formalisation of scientific inquiry in natural language. Two basic schemes are identified: deductive-nomological and inductive-statistical explanations. The idea is to show that what is being explained (the explanandum) is to be expected on the grounds of these two types of explanations. The first tries to explain things deductively in terms of regularities and exact laws (nomological). The second uses statistical hypotheses and explains individual observations inductively. Albeit very formal, this inquiry into scientific inquiry is very straightforward and commonsensical. Again, the programme fails:

• it can't explain singular causal events,
• it is asymmetric (a change in the air pressure explains the readings on a barometer; the barometer, however, doesn't explain why the air pressure changed),
• many explanations it licenses are irrelevant,
• as seen before, inductive and deductive logic is controversial,
• how should probability theory be employed in the explanation?

So what next?
What are the consequences of these unexpected and spectacular failings of the simplest premises one would wish science to be grounded on (logic, empiricism, causality, common sense, rationality, …)? The discussion is ongoing and isn't expected to be resolved soon. See part III.

a philosophy of science primer - part I

Naively one would expect science to adhere to two basic notions:

• common sense, i.e., rationalism,
• observation and experiments, i.e., empiricism.

Interestingly, both concepts turn out to be very problematic if applied to the question of what knowledge is and how it is acquired. In essence, they cannot be seen as a foundation for science. But first a little history of science…

Classical Antiquity

The Greek philosopher Aristotle was one of the first thinkers to introduce logic as a means of reasoning. His empirical method was driven by gaining general insights from isolated observations. He had a huge influence on thinking within the Islamic and Jewish traditions, next to shaping Western philosophy and inspiring thinking in the physical sciences.

Modern Era

Nearly two thousand years later, not much had changed. Francis Bacon (the philosopher, not the painter) made modifications to Aristotle's ideas, introducing the so-called scientific method, in which inductive reasoning plays an important role. He paved the way for a modern understanding of scientific inquiry. At approximately the same time, Robert Boyle was instrumental in establishing experiments as the cornerstone of the physical sciences.

Logical Empiricism

So far so good. By the early 20th century the notion that science is based on experience (empiricism) and logic, and that knowledge is intersubjectively testable, had had a long history. The philosophical school of logical empiricism (or logical positivism) tried to formalise these ideas. Notable proponents were Ernst Mach, Ludwig Wittgenstein, Bertrand Russell, Rudolf Carnap, Hans Reichenbach and Otto Neurath. Some main influences were:

• David Hume's and John Locke's empiricism: all knowledge originates from observation; nothing can exist in the mind which wasn't before in the senses,
• Auguste Comte's and John Stuart Mill's positivism: there exists no knowledge outside of science.

In this paradigm (see Thomas Kuhn a little later) science is viewed as a building comprised of logical terms resting on an empirical foundation. A theory is understood as having the following structure: observation -> empirical concepts -> formal notions -> abstract law. Basically a sequence of ever higher abstraction. This notion of unveiling laws of nature by starting with individual observations is called induction (the other way round, starting with abstract laws and ending with a tangible factual description, is called deduction; see further along). And here the problems start to emerge. See part II.

Stochastic Processes and the History of Science: From Planck to Einstein

How are the notions of randomness, i.e., stochastic processes, linked to theories in physics, and what have they got to do with options pricing in economics? How did the prevailing world view change from 1900 to 1905? What connects the mathematicians Bachelier, Markov, Kolmogorov and Ito to the physicists Langevin, Fokker, Planck and Einstein, and to the economists Black, Scholes and Merton?
The Setting

• Science up to 1900 was in essence the study of solutions of differential equations (Newton's heritage);
• This was very successful, e.g., Maxwell's equations: four differential equations describing everything about (classical) electromagnetism;
• Prevailing world view:
• Deterministic universe;
• Initial conditions plus the solution of a differential equation yield a certain prediction of the future.

Three Pillars

By the end of the 20th century, it became clear that there are (at least?) two additional aspects needed for a more complete understanding of reality:

• Inherent randomness: statistical evaluations of sets of outcomes of single observations/experiments;
• Quantum mechanics (Planck 1900; Einstein 1905) contains a fundamental element of randomness;
• In chaos theory (e.g., Mandelbrot 1963) non-linear dynamics leads to a sensitivity to initial conditions which renders even simple differential equations essentially unpredictable;
• Complex systems (e.g., Wolfram 1983), i.e., self-organization and emergent behavior, best understood as outcomes of simple rules.

Stochastic Processes

• Systems which evolve probabilistically in time;
• Described by a time-dependent random variable;
• The probability density function describes the distribution of the measurements at time t;
• Prototype: the Markov process. For a Markov process, only the present state of the system influences its future evolution: there is no long-term memory.

Examples:

• Wiener process (or Einstein-Wiener process, or Brownian motion):
• Introduced by Bachelier in 1900;
• Continuous (in t and in the sample path);
• Increments are independent and drawn from a Gaussian normal distribution;
• Random walk:
• Discrete steps (jumps), continuous in t;
• Becomes a Wiener process in the limit of the step size going to zero.

To summarize, there are three possible characteristics:

1. Jumps (in the sample path);
2. Drift (of the probability density function);
3. Diffusion (widening of the probability density function).

[Figure: probability density function showing drift and diffusion.]

But how to deal with stochastic processes?

The Micro View

Einstein:
• Presented a theory of Brownian motion in 1905;
• New paradigm: stochastic modeling of natural phenomena; statistics as an intrinsic part of the time evolution of a system;
• Mean-square displacement of a Brownian particle is proportional to time;
• The equation for the Brownian particle is similar to a diffusion (differential) equation.

Langevin:
• Presented a new derivation of Einstein's results in 1908;
• First stochastic differential equation, i.e., a differential equation containing a "rapidly and irregularly fluctuating random force" (today described by a random variable);
• Solutions of the differential equation are random functions.

However, there was no formal mathematical grounding until 1942, when Ito developed stochastic calculus:

• Langevin's equations are interpreted as Ito stochastic differential equations using Ito integrals;
• The Ito integral is defined to deal with the non-differentiable sample paths of random functions;
• The Ito lemma (the stochastic generalization of the chain rule) is used to solve stochastic differential equations.
• The Markov process is a solution to a simple stochastic differential equation;
• The celebrated Black-Scholes option pricing formula rests on a stochastic differential equation driven by Brownian motion.
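To make the micro view concrete, here is a minimal sketch in Python (standard library only; the drift, volatility, horizon and path counts are made-up values) that integrates the geometric Brownian motion SDE dS = mu*S*dt + sigma*S*dW, the process underlying the Black-Scholes model, with the simple Euler-Maruyama scheme. It is an illustration of the ideas above, not a reference implementation.

```python
import math
import random

def euler_maruyama_gbm(s0, mu, sigma, t_max, n_steps, rng):
    """Simulate one path of geometric Brownian motion,
    dS = mu*S*dt + sigma*S*dW, with the Euler-Maruyama scheme."""
    dt = t_max / n_steps
    s = s0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment ~ N(0, dt)
        s += mu * s * dt + sigma * s * dw    # drift term + diffusion term
    return s

rng = random.Random(42)
# Made-up parameters: 5% drift, 20% volatility, one-year horizon, 252 steps.
finals = [euler_maruyama_gbm(100.0, 0.05, 0.20, 1.0, 252, rng)
          for _ in range(10000)]
mean = sum(finals) / len(finals)
print("simulated mean final value: %.2f (analytic: %.2f)"
      % (mean, 100.0 * math.exp(0.05 * 1.0)))
```

As a quick sanity check that drift and diffusion are implemented correctly, the sample mean of the final values should land close to the analytic expectation S0*exp(mu*T).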
The Fokker-Planck Equation: Moving To The Macro View

• The Langevin equation describes the evolution of the position of a single "stochastic particle";
• The Fokker-Planck equation describes the behavior of a large population of "stochastic particles";
• Formally: the Fokker-Planck equation gives the time evolution of the probability density function of the system as a function of time;
• Results can often be derived more directly using the Fokker-Planck equation than using the corresponding stochastic differential equation;
• The theory of Markov processes can be developed from this macro point of view.

A toy numerical check of this micro/macro correspondence is sketched at the end of this post.

The Historical Context

Bachelier:
• Developed a theory of Brownian motion (the Einstein-Wiener process) in 1900 (five years before Einstein, and long before Wiener);
• Was the first person to use a stochastic process to model financial systems;
• Essentially his contribution was forgotten until the late 1950s;
• Black, Scholes and Merton's publication in 1973 finally gave Brownian motion its break-through in finance.

Planck:
• Founder of quantum theory;
• 1900 theory of black-body radiation;
• Central assumption: electromagnetic energy is quantized, E = hν;
• In 1914 Fokker derives an equation on Brownian motion which Planck proves;
• Applies the Fokker-Planck equation as a quantum mechanical equation, which turns out to be wrong.

Kolmogorov:
• In 1931 Kolmogorov presented two fundamental equations on Markov processes;
• It was later realized that one of them was actually equivalent to the Fokker-Planck equation.

Einstein:
1905 "Annus Mirabilis" publications. Fundamental paradigm shifts in the understanding of reality:

• Photoelectric effect:
• Explained by giving Planck's (theoretical) notion of energy quanta a physical reality (photons),
• Further establishing quantum theory,
• Winning him the Nobel Prize;
• Brownian motion:
• First stochastic modeling of natural phenomena,
• The experimental verification of the theory established the existence of atoms, which had been heavily debated at the time,
• Einstein's most frequently cited paper, in the fields of biology, chemistry, earth and environmental sciences, life sciences and engineering;
• Special theory of relativity: the relative speeds of the observers' reference frames determine the passage of time;
• Equivalence of energy and mass (follows from special relativity): E = m c^2.

Einstein was working at the Patent Office in Bern at the time and submitted his Ph.D. to the University of Zurich in July 1905.

Later Work:
• 1915: general theory of relativity, explaining gravity in terms of the geometry (curvature) of space-time;
• Planck also made contributions to general relativity;
• Although having helped to found quantum mechanics, Einstein fundamentally opposed its probabilistic implications: "God does not throw dice";
• Dreams of a unified field theory:
• Spent his last 30 years or so trying (unsuccessfully) to extend the general theory of relativity to unite it with electromagnetism;
• Kaluza (1921) and Klein (1926) elegantly managed to do this by developing general relativity in five space-time dimensions;
• Today there is still no empirically validated theory able to explain gravity and the (quantum) Standard Model of particle physics, despite intense theoretical research (string/M-theory, loop quantum gravity);
• In fact, one of the main goals of the LHC at CERN (officially operational on the 21st of October 2008) is to find hints of such a unified theory (supersymmetric particles, higher dimensions of space).
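Here is the promised toy check of the micro/macro correspondence (a hedged sketch with arbitrary coefficients, not any historical calculation): an ensemble of Langevin-type paths dx = mu*dt + sqrt(2*D)*dW is simulated, and the resulting histogram is compared with the solution of the corresponding Fokker-Planck equation dp/dt = -mu*dp/dx + D*d2p/dx2, which for these coefficients is a Gaussian with mean mu*t (drift) and variance 2*D*t (diffusion).

```python
import math
import random

mu, D, t = 0.5, 0.25, 2.0
n_paths, n_steps = 20000, 200
dt = t / n_steps
rng = random.Random(1)

# Micro view: ensemble of Langevin paths dx = mu*dt + sqrt(2*D)*dW.
finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += mu * dt + math.sqrt(2.0 * D) * rng.gauss(0.0, math.sqrt(dt))
    finals.append(x)

# Macro view: the Fokker-Planck solution, a Gaussian with
# mean mu*t and variance 2*D*t.
def fp_density(x):
    var = 2.0 * D * t
    return math.exp(-(x - mu * t) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Compare empirical bin frequencies with the analytic density.
bin_width = 0.5
for lo in [-1.0, 0.0, 1.0, 2.0]:
    hi = lo + bin_width
    empirical = sum(lo <= v < hi for v in finals) / (n_paths * bin_width)
    print("bin [%.1f, %.1f): simulated %.3f, Fokker-Planck %.3f"
          % (lo, hi, empirical, fp_density(lo + bin_width / 2)))
```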
What are Laws of Nature?

• Regularities/structures in a highly complex universe
• Allow for predictions
• Dependent on only a small set of conditions (i.e., independent of the very many conditions which could possibly have an effect)

…but why are there laws of nature, and how can these laws be discovered and understood by the human mind?

No One Knows!

• G.W. von Leibniz in 1714 (Principes de la nature et de la grâce):
• Why is there something rather than nothing? For nothingness is simpler and easier than anything
• E. Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", 1960:
• […] the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and […] there is no rational explanation for it
• […] it is not at all natural that "laws of nature" exist, much less that man is able to discover them
• […] the two miracles of the existence of laws of nature and of the human mind's capacity to divine them
• […] fundamentally, we do not know why our theories work so well

In a Nutshell

• We happen to live in a structured, self-organizing, and fine-tuned universe that allows the emergence of sentient beings (anthropic principle)
• The human mind is capable of devising formal thought systems (mathematics)
• Mathematical models are able to capture and represent the workings of the universe

See also this post: in a nutshell.

The Fundamental Level of Reality: Physics

Mathematical models of reality are independent of their formal representation: invariance and symmetry

• Classical mechanics: invariance of the equations under transformations (e.g., time => conservation of energy)
• Gravitation (general relativity): geometry and the independence of the coordinate system (covariance)
• The other three forces of nature (unified in quantum field theory): the mathematics of symmetry and a special kind of invariance

See also these posts: fundamental, invariant thinking.

Towards Complexity

• Physics was extremely successful in describing the inanimate world in the last 300 years or so
• But what about complex systems comprised of many interacting entities, e.g., the life and social sciences?
• "The rest is chemistry"; C. D. Anderson in 1932, echoing the success of a reductionist approach to understanding the workings of nature after having discovered the positron
• "At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary […]. Psychology is not applied biology, nor is biology applied chemistry"; P. W. Anderson in 1972, pointing out that knowledge about the constituents of a system doesn't reveal any insights into how the system will behave as a whole; so it is not at all clear how you get from quarks and leptons via DNA to a human brain…

Complex Systems: Simplicity

The Limits of Physics

• Closed-form solutions to analytical expressions are mostly only attainable if non-linear effects (e.g., friction) are ignored
• Not too many interacting entities can be considered (e.g., the three-body problem)

The Complexity of Simple Rules

• S. Wolfram's cellular automaton rule 110: neither completely random nor completely repetitive (see the sketch below)
• "[The] results [simple rules give rise to complex behavior] were so surprising and dramatic that as I gradually came to understand them, they forced me to change my whole view of science […]"; S. Wolfram reminiscing on his early work on cellular automata in the 80s ("New Kind of Science", pg. 19)
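To give a feel for how much structure a one-line rule can generate, here is a minimal sketch of the rule 110 cellular automaton mentioned above (the grid width, step count and single-cell initial condition are arbitrary choices):

```python
def rule110_step(cells):
    """One synchronous update of the elementary cellular automaton rule 110
    (periodic boundary conditions)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((110 >> pattern) & 1)  # the rule number doubles as the lookup table
    return out

width, steps = 64, 30
cells = [0] * (width - 1) + [1]  # single live cell on the right
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = rule110_step(cells)
```

The printed triangle of interacting structures is neither periodic nor random; rule 110 is even known to be Turing complete.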
Complex Systems: The Paradigm Shift

• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior
• The shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers (only this bottom-up approach to simulating complex systems has been fruitful; all top-down efforts have failed: try programming swarming behavior, ant foraging, pedestrian/traffic dynamics, … not with simple local interaction rules but with a centralized, hierarchical setup!)
• Understanding the complex system as a network of interactions (graph theory), where the complexity (or structure) of the individual nodes can be ignored
• Challenge: how does the macro behavior emerge from the interaction of the system elements on the micro level?

See also these posts: complex, swarm theory, complex networks.

Laws of Nature Revisited

So are there laws of nature to be found in the life and social sciences?

• Yes: scaling (or power) laws
• "Complex, collective phenomena give rise to power laws […] independent of the microscopic details of the phenomenon. These power laws emerge from collective action and transcend individual specificities. As such, they are unforgeable signatures of a collective mechanism"; J.P. Bouchaud in "Power-laws in Economy and Finance: Some Ideas from Physics", 2001

Scaling Laws

Scaling-law relations characterize an immense number of natural patterns (from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences), prominently in the form of

• scaling-law distributions
• scale-free networks
• cumulative relations of stochastic processes

A scaling law, or power law, is a simple polynomial functional relationship

f(x) = a x^k   <=>   Y = (X/C)^E

Scaling laws

• lack a preferred scale, reflecting their (self-similar) fractal nature
• are usually valid across an enormous dynamic range (sometimes many orders of magnitude)

See also these posts: scaling laws, benford's law.

Scaling Laws In FX

• Event counts related to price thresholds
• Price moves related to time thresholds
• Price moves related to price thresholds
• Waiting times related to price thresholds

[Figure: an FX scaling law.]

Scaling Laws In Biology

So-called allometric laws describe the relationship between two attributes of living organisms as scaling laws:

• The metabolic rate B of a species is proportional to its mass M: B ~ M^(3/4)
• The heartbeat (or breathing) rate T of a species is proportional to its mass: T ~ M^(-1/4)
• The lifespan L of a species is proportional to its mass: L ~ M^(1/4)
• Invariants: all species have the same number of heart beats in their lifespan (roughly one billion)

[Figure: allometric scaling law (G. West).]

G. West (et al.) proposes an explanation of the 1/4 scaling exponents, which follow from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimized by natural selection: organisms effectively function in four spatial dimensions even though they physically exist in three.

• The natural world possesses structure-forming and self-organizing mechanisms leading to consciousness capable of devising formal thought systems which mirror the workings of the natural world
• There are two regimes in the natural world: basic fundamental processes and complex systems comprised of interacting agents
• There are two paradigms: analytical vs.
algorithmic (computational)
• There are 'miracles' at work:
• the existence of a universe following laws leading to stable emergent features
• the capability of the human mind to devise formal thought systems
• the overlap of mathematics and the workings of nature
• the fact that complexity emerges from simple rules
• There are basic laws of nature to be found in complex systems, e.g., scaling laws

animal intelligence

We're glimpsing intelligence throughout the animal kingdom.

[Photo: copyright Vincent J. Musi, National Geographic.]

A dog with a vocabulary of 340 words. A parrot that answers "shape" if asked what is different, and "color" if asked what is the same, while being shown two items of different shape and the same color. An octopus with a "distinct personality" that amuses itself by shooting water at plastic-bottle targets (the first reported invertebrate play behavior). Lemurs with calculating abilities. Sheep able to recognize faces (of other sheep and humans) long term and to discern moods. Crows able to make and use tools (in tests, even out of materials never seen before). Human-dolphin communication via an invented sign language (with a simple grammar). Dolphins' ability to correctly interpret, on the first occasion, instructions given by a person displayed on a TV screen. This may only be the tip of the iceberg… Read the article Animal Minds in National Geographic's March 2008 edition. Ever think about vegetarianism?

complex networks

The study of complex networks was sparked at the end of the 90s by two seminal papers, describing their universal

• small-world property [1],
• and scale-free nature [2] (see also this older post: scaling laws).

[Figures: the same network visualized with weights and vertex values (left) and as a binary, unweighted network (right).]

Today, networks are ubiquitous: phenomena in the physical world (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological systems (e.g., neural networks, epidemiology, food webs, gene regulation), and social realms (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) are best understood if characterized as networks. The explosion of this field of research was and is coupled with the increasing availability of

• huge amounts of data, pouring in from neurobiology, genomics, ecology, finance and the World-Wide Web, …,
• computing power and storage facilities.

The new paradigm states that a complex system is best understood if it is mapped to a network, i.e., the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model the flock's swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.) Only in recent years has the attention shifted from this topological level of analysis (either links are present or not) to incorporating the weights of links, giving their strength relative to each other. Albeit harder to tackle, these networks are closer to the real-world systems they model. However, there is still one step missing: the vertices of the network can also be assigned a value, which acts as a proxy for some real-world property that is coded into the network structure. The two plots above illustrate the difference when the same network is visualized [3] using weights and values assigned to the vertices (left) or simply plotted as a binary (topological) network (right)…
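The growth process behind the scale-free nature reported in [2], preferential attachment, is easy to sketch. The toy version below (network size, links per new node and the printed degrees are arbitrary choices, not the code behind the cited papers) uses the standard trick of sampling uniformly from a list of edge endpoints, which is equivalent to picking nodes with probability proportional to their degree:

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m, rng):
    """Grow a network in which each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    # 'endpoints' stores every edge endpoint; uniform sampling from it
    # is equivalent to degree-proportional sampling of nodes.
    edges, endpoints = [], []
    for i in range(m + 1):          # small fully connected seed of m+1 nodes
        for j in range(i):
            edges.append((i, j))
            endpoints += [i, j]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:     # m distinct degree-proportional targets
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

rng = random.Random(7)
degree = Counter()
for a, b in preferential_attachment(20000, 2, rng):
    degree[a] += 1
    degree[b] += 1

# Expect hubs: node counts fall off roughly as a power law P(k) ~ k^-3.
hist = Counter(degree.values())
for k in [2, 4, 8, 16, 32]:
    print("degree %2d: %5d nodes" % (k, hist.get(k, 0)))
```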
[1] Strogatz S. H. and Watts D. J., 1998, Collective Dynamics of 'Small-World' Networks, Nature, 393, 440–442.
[2] Albert R. and Barabasi A.-L., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
[3] Cuttlefish Adaptive NetWorkbench and Layout.

cool links…

think statistics are boring, irrelevant and hard to understand? well, think again. two examples of visually displaying important information in an amazingly cool way:

territory size shows the proportion of all people living on less than or equal to US$1 in purchasing power parity a day. displays a large collection of world maps, where territories are re-sized on each map according to the subject of interest. sometimes an image says more than a thousand words…

want to see the global evolution of life expectancy vs. income per capita from 1975 to 2003? and additionally display the CO2 emission per capita? choose indicators from areas as diverse as internet users per 1'000 people to contraceptive use amongst adult women and watch the animation. gapminder is a fantastic tool that really makes you think…

work in progress…

Some of the stuff I do all week…

Complex Networks

Visualizing a shareholder network: The underlying network visualization framework is JUNG, with the Cuttlefish adaptive networkbench and layout algorithm (coming soon). The GUI uses Swing.

Stochastic Time Series

Scaling laws in financial time series: A Java framework allowing the computation and visualization of statistical properties. The GUI is programmed using SWT.

plugin of the month

The Firefox add-on Gspace allows you to use Gmail as a file server:

tech dependence…

Because technological advancement is mostly quite gradual, one hardly notices it creeping into one's life. Only if these high-tech commodities were instantly removed would one realize how dependent one has become. A random list of 'nonphysical' things I wouldn't want to live without anymore:

• everything you ever wanted to know — and much more
• Google (e.g., news, scholar, maps, webmaster tools, …): basically the internet;-)
• Web 2.0 communities: your virtual social network
• towards the babel fish
• recommendations from the fat tail of the probability distribution
• Web browsers (e.g., Firefox): your window to the world
• Version control systems (e.g., Subversion): get organized
• CMS (e.g., TYPO3): disentangle content from design on your web page and more
• LaTeX typesetting software (btw, this is not a fetish;-): the only sensible and aesthetic way to write scientific documents
• Wikies: the wonderful world of unstructured collaboration
• Blogs: get it out there
• Java programming language: truly platform independent and with nice GUI toolkits (SWT, Swing, GWT); never want to go back to C++ (and don't even mention C# or .net)
• Eclipse IDE: how much fun can you have while programming?
• MySQL: your very own relational database (the next level: db4o)
• PHP: ok, Ruby is perhaps cooler, but PHP is so easy to work with (e.g., integrating MySQL and web stuff)
• Dynamic DNS: let your home computer be a node of the internet
• Web server (e.g., Apache 2): open the gateway
• CSS: ok, if we have to go with HTML, this helps a lot
• VoIP (e.g., Skype): use your bandwidth
• P2P (e.g., BitTorrent): pool your network
• Video and audio compression (e.g., MPEG, MP3, AAC, …): information theory at its best
• Scientific computing (R, Octave, gnuplot, …): let your computer do the work
• Open source licenses (Creative Commons, Apache, GNU GPL, …): the philosophy!
• Object-oriented programming paradigm: think design patterns
• Rich Text editors: online WYSIWYG editing, no messing around with HTML tags
• SSH network protocol: secure and easy networking
• Linux shell programming ("grep", "sed", "awk", "xargs", pipes, …): old school Unix from the 70s
• E-mail (e.g., IMAP): oops, nearly forgot that one (which reminds me of something I really, really could do without: spam)
• Graylisting: reduce spam
• Debian (e.g., Kubuntu): the basis for it all
• apt-get package management system: a universe of software at your fingertips
• Compiz Fusion window manager: just to be cool…

It truly makes one wonder how all this cool stuff can come for free!!!

climate change 2007

Confused about the climate? Not sure what's happening? Exaggerated fears or impending cataclysm? A good place to start is a publication by Swiss Re. It is done in a straightforward, down-to-earth, no-bullshit and sane manner. The source to the whole document is given at the bottom.

Executive Summary

The Earth is getting warmer, and it is a widely held view in the scientific community that much of the recent warming is due to human activity. As the Earth warms, the net effect of unabated climate change will ultimately lower incomes and reduce public welfare. Because carbon dioxide (CO₂) emissions build up slowly, mitigation costs rise as time passes and the level of CO₂ in the atmosphere increases. As these costs rise, so too do the benefits of reducing CO₂ emissions, eventually yielding net positive returns. Given how CO₂ builds up and remains in the atmosphere, early mitigation efforts are highly likely to put the global economy on a path to achieving net positive benefits sooner rather than later. Hence, the time to act to reduce these emissions is now.

The climate is what economists call a "public good": its benefits are available to everyone and one person's enjoyment and use of it does not affect another's. Population growth, increased economic activity and the burning of fossil fuels now pose a threat to the climate. The environment is a free resource, vulnerable to overuse, and human activity is now causing it to change. However, no single entity is responsible for it or owns it. This is referred to as the "tragedy of the commons": everyone uses it free of charge and eventually depletes or damages it. This is why government intervention is necessary to protect our climate. Climate is global: emissions in one part of the world have global repercussions. This makes an international government response necessary. Clearly, this will not be easy. The Kyoto Protocol for reducing CO₂ emissions has had some success, but was not considered sufficiently fair to be signed by the United States, the country with the highest volume of CO₂ emissions. Other voluntary agreements, such as the Asia-Pacific Partnership on Clean Development and Climate – which was signed by the US – are encouraging, but not binding. Thus, it is essential that governments implement national and international mandatory policies to effectively reduce carbon emissions in order to ensure the well-being of future generations.

The pace, extent and effects of climate change are not known with certainty. In fact, uncertainty complicates much of the discussion about climate change. Not only is the pace of future economic growth uncertain, but also the carbon dioxide and equivalent (CO₂e) emissions associated with economic growth.
Furthermore, the global warming caused by a given quantity of CO₂e emissions is also uncertain, as are the costs and impact of temperature increases. Though uncertainty is a key feature of climate change and its impact on the global economy, this cannot be an excuse for inaction. The distribution and probability of the future outcomes of climate change are heavily weighted towards large losses in global welfare. The likelihood of positive future outcomes is minor and heavily dependent upon an assumed maximum climate change of 2 °C above the pre-industrial average. The probability that a "business as usual" scenario – one with no new emission-mitigation policies – will contain global warming at 2 °C is generally considered negligible. Hence, the "precautionary principle" – erring on the safe side in the face of uncertainty – dictates an immediate and vigorous global mitigation strategy for reducing CO₂e emissions.

There are two major types of mitigation strategies for reducing greenhouse gas emissions: a cap-and-trade system and a tax system. The cap-and-trade system establishes a quantity target, or cap, on emissions and allows emission allocations to be traded between companies, industries and countries. A tax on, for example, carbon emissions could also be imposed, forcing companies to internalize the cost of their emissions to the global climate and economy. Over time, quantity targets and carbon taxes would need to become increasingly restrictive as targets fall and taxes rise. Though both systems have their own merits, the cap-and-trade policy has an edge over the carbon tax, given the uncertainty about the costs and benefits of reducing emissions. First, cap-and-trade policies rely on market mechanisms – fluctuating prices for traded emissions – to induce appropriate mitigating strategies, and have proved effective at reducing other types of noxious gases. Second, caps have an economic advantage over taxes when a given level of emissions is required. There is substantial evidence that emissions need to be capped to restrict global warming to 2 °C above pre-industrial levels, or a little more than 1 °C compared to today. Given that the stabilization of emissions at current levels will most likely result in another degree's rise in temperature, and that current economic growth is increasing emissions, the precautionary principle supports a cap-and-trade policy. Finally, cap-and-trade policies are more politically feasible and palatable than carbon taxes. They are more widely used and understood and they do not require a tax increase. They can be implemented with as much or as little revenue-generating capacity as desired. They also offer business and consumers a great deal of choice and flexibility. A cap-and-trade policy should be easier to adopt in a wide variety of political environments and countries.

Whichever system – cap-and-trade or carbon tax – is adopted, there are distributional issues that must be addressed. Under a quantity target, allocation permits have value and can be granted to businesses or auctioned. A carbon tax would raise revenues that could be recycled, for example, into research on energy-efficient technologies. Or the revenues could be used to offset inefficient taxes or to reduce the distributional aspects of the carbon tax.
Source: "The economic justification for imposing restraints on carbon emissions", Swiss Re, Insights, 2007; PDF

scaling laws

Scaling-law relations characterize an immense number of natural processes, prominently in the form of

1. scaling-law distributions,
2. scale-free networks,
3. cumulative relations of stochastic processes.

A scaling law, or power law, is a simple polynomial functional relationship, i.e., f(x) depends on a power of x. Two properties of such laws can easily be shown:

• a logarithmic mapping yields a linear relationship,
• scaling the function's argument x preserves the shape of the function f(x), a property called scale invariance.

See (Sornette, 2006).

Scaling-Law Distributions

Scaling-law distributions have been observed in an extraordinarily wide range of natural phenomena: from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences; see (Newman, 2004). It is truly amazing that such diverse topics as

• the size of earthquakes, moon craters, solar flares, computer files, sand particles, wars and price moves in financial markets,
• the number of scientific papers written, citations received by publications, hits on webpages and species in biological taxa,
• the sales of music, books and other commodities,
• the population of cities,
• the income of people,
• the frequency of words used in human languages and of occurrences of personal names,
• the areas burnt in forest fires,

are all described by scaling-law distributions. First used to describe the observed income distribution of households by the economist Pareto in 1897, this universal law has had some of its possible underlying mechanisms uncovered by recent advances in the study of complex systems. However, there is as yet no real understanding of the physical processes driving these systems. Processes following normal distributions have a characteristic scale given by the mean of the distribution. In contrast, scaling-law distributions lack such a preferred scale. Measurements of scaling-law processes yield values distributed across an enormous dynamic range (sometimes many orders of magnitude), and for any section one looks at, the proportion of small to large events is the same. Historically, the observation of scale-free or self-similar behavior in the changes of cotton prices was the starting point for Mandelbrot's research leading to the discovery of fractal geometry; see (Mandelbrot, 1963). It should be noted that although scaling laws imply that small occurrences are extremely common whereas large instances are quite rare, these large events nevertheless occur much more frequently than a normal (or Gaussian) probability distribution would suggest. For Gaussian distributions, events that deviate from the mean by, e.g., 10 standard deviations (called "10-sigma events") are practically impossible to observe. For scaling-law distributions, extreme events have a small but very real probability of occurring. This fact is summed up by saying that the distribution has a "fat tail" (in the terminology of probability theory and statistics, distributions with fat tails are said to be leptokurtic or to display positive kurtosis), which greatly impacts the risk assessment. So although most earthquakes, price moves in financial markets, intensities of solar flares, … will be very small, the possibility that a catastrophic event will happen cannot be neglected.
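The contrast between thin and fat tails is easy to see numerically. The sketch below (with an arbitrarily chosen Pareto exponent alpha = 2 and scale x_min = 1, not a calibrated model) compares the tail probability P(X > x) of a standard Gaussian with that of a scaling-law (Pareto) distribution:

```python
import math

def gauss_tail(x):
    """P(X > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pareto_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) = (x_min / x)^alpha for a Pareto distribution, x >= x_min."""
    return (x_min / x) ** alpha

for x in [3.0, 5.0, 10.0]:
    print("deviation %4.1f: Gaussian tail %.2e, Pareto tail %.2e"
          % (x, gauss_tail(x), pareto_tail(x)))
```

At a deviation of 10, the Gaussian tail is of order 10^-23, i.e. practically impossible, while the Pareto tail is still of order 10^-2: small but very real.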
Scale-Free Networks

Another modern research field marked by the ubiquitous appearance of scaling-law relations is the study of complex networks. Many different phenomena in the physical (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological (e.g., neural networks, epidemiology, food webs, gene regulation), and social (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) worlds can be understood as network based. In essence, the links and nodes are abstractions describing the system under study via the interactions of the elements comprising it. In graph theory, the degree of a node (or vertex), k, describes the number of links (or edges) the node has to other nodes. The degree distribution gives the probability distribution of degrees in a network. For scale-free networks, one finds that the probability that a node in the network connects with k other nodes follows a scaling law. Again, this power law is characterized by the existence of highly connected hubs, whereas most nodes have small degrees. Scale-free networks are

• characterized by high robustness against the random failure of nodes, but susceptible to coordinated attacks on the hubs, and
• thought to arise from a dynamical growth process, called preferential attachment, in which new nodes favor linking to existing nodes with high degrees.

It should be noted that another prominent feature of real-world networks, namely the so-called small-world property, is separate from a scale-free degree distribution, although scale-free networks are also small-world networks (Strogatz and Watts, 1998). For small-world networks, although most nodes are not neighbors of one another, most nodes can be reached from every other node by a surprisingly small number of hops or steps. Most real-world complex networks - such as those listed at the beginning of this section - show both scale-free and small-world characteristics. Some general references include (Barabasi, 2002), (Albert and Barabasi, 2001), and (Newman, 2003). The emergence of scale-free networks in the preferential attachment model is described in (Albert and Barabasi, 1999). An alternative explanation to preferential attachment, introducing non-topological values (called fitness) assigned to the vertices, is given in (Caldarelli et al., 2002).

Cumulative Scaling-Law Relations

Next to distributions of random variables, scaling laws also appear in collections of random variables, called stochastic processes. Prominent empirical examples are financial time series, where one finds empirical scaling laws governing the relationship between various observed quantities. See (Guillaume et al., 1997) and (Dacorogna et al., 2001).

Albert R. and Barabasi A.-L., 2001, Statistical Mechanics of Complex Networks.
Barabasi A.-L., 2002, Linked — The New Science of Networks, Perseus Publishing, Cambridge, Massachusetts.
Caldarelli G., Capocci A., Rios P. D. L., and Munoz M. A., 2002, Scale-free Networks without Growth or Preferential Attachment: Good get Richer.
Dacorogna M. M., Gencay R., Müller U. A., Olsen R. B., and Pictet O. V., 2001, An Introduction to High-Frequency Finance, Academic Press, San Diego, CA.
Guillaume D. M., Dacorogna M. M., Dave R. D., Müller U. A., Olsen R. B., and Pictet O. V., 1997, From the Bird's Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily Foreign Exchange Markets, Finance and Stochastics, 1, 95–129.
Albert R. and Barabasi A.-L., 2001, Statistical Mechanics of Complex Networks.
Barabasi A.-L., 2002, Linked — The New Science of Networks, Perseus Publishing, Cambridge, Massachusetts.
Caldarelli G., Capoccio A., Rios P. D. L., and Munoz M. A., 2002, Scale-free Networks without Growth or Preferential Attachment: Good get Richer.
Dacorogna M. M., Gencay R., Müller U. A., Olsen R. B., and Pictet O. V., 2001, An Introduction to High-Frequency Finance, Academic Press, San Diego, CA.
Guillaume D. M., Dacorogna M. M., Dave R. D., Müller U. A., Olsen R. B., and Pictet O. V., 1997, From the Bird’s Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily Foreign Exchange Markets, Finance and Stochastics, 1, 95–129.
Mandelbrot B. B., 1963, The Variation of Certain Speculative Prices, Journal of Business, 36, 394–419.
Newman M. E. J., 2003, The Structure and Function of Complex Networks.
Newman M. E. J., 2004, Power Laws, Pareto Distributions and Zipf’s Law.
Sornette D., 2006, Critical Phenomena in Natural Sciences, Series in Synergetics, Springer, Berlin, 2nd edition.
Watts D. J. and Strogatz S. H., 1998, Collective Dynamics of ‘Small-World’ Networks, Nature, 393, 440–442.

See also this post: laws of nature.

swarm theory

National Geographic’s July 2007 edition: Swarm Theory

benford’s law

In 1881 a result was published (by the astronomer Simon Newcomb), based on the observation that the first pages of logarithm books, used at that time to perform calculations, were much more worn than the other pages. The conclusion was that computations of numbers that started with 1 were performed more often than others: if d denotes the first digit of a number, the probability of its appearance is equal to log(1 + 1/d). The phenomenon was rediscovered in 1938 by the physicist F. Benford, who confirmed the “law” for a large number of random variables drawn from geographical, biological, physical, demographical, economical and sociological data sets. It even holds for randomly compiled numbers from newspaper articles.

Specifically, Benford’s law, or the first-digit law, states that a number has first digit 1 with probability 30.1%, 2 with 17.6%, 3 with 12.5%, 4 with 9.7%, 5 with 7.9%, 6 with 6.7%, 7 with 5.8%, 8 with 5.1% and 9 with 4.6%. In general, the leading digit d ∈ [1, …, b−1] in base b ≥ 2 occurs with probability log_b(d + 1) − log_b(d) = log_b(1 + 1/d).

First explanations of this phenomenon, which appears to defy common notions of probability, focused on its logarithmic nature, which implies a scale-invariant or power-law distribution. If the first digits have a particular distribution, it must be independent of the measuring system, i.e., conversions from one system to another don’t affect the distribution. (This requirement that physical quantities are independent of a chosen representation is one of the cornerstones of general relativity, where it is called covariance.) So the common-sense requirement that the dimensions of arbitrary measurement systems shouldn’t affect the measured physical quantities is summarized in Benford’s law. In addition, the fact that many processes in nature show exponential growth is also captured by the law, which assumes that the logarithms of numbers are uniformly distributed.

So how come one nonetheless observes random variables following normal and scaling-law distributions? In 1996 the phenomenon was mathematically rigorously proven: if one repeatedly chooses different probability distributions and then randomly chooses a number according to each distribution, the resulting list of numbers will obey Benford’s law. Hence the law reflects the behavior of distributions of distributions. Benford’s law has been used to detect fraud in insurance, accounting or expenses data, where people forging numbers tend to distribute their digits uniformly.
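The “distribution of distributions” result is easy to check numerically. A minimal sketch (in Python; the choice of lognormally distributed scales is an arbitrary assumption standing in for “many different distributions”):

```python
# A quick numerical check of Benford's law: draw numbers from many
# randomly chosen distributions and tally the leading digits.
import math
import random
from collections import Counter

counts = Counter()
for _ in range(100_000):
    scale = random.lognormvariate(0, 4)   # pick a random distribution...
    x = random.uniform(0, scale)          # ...then a random number from it
    digit = int(f"{x:e}"[0])              # leading digit of the mantissa
    if digit > 0:
        counts[digit] += 1

total = sum(counts.values())
for d in range(1, 10):
    observed = 100 * counts[d] / total
    predicted = 100 * math.log10(1 + 1 / d)
    print(d, round(observed, 1), round(predicted, 1))
```

The observed percentages come out close to the 30.1%, 17.6%, 12.5%, … sequence above, even though no single distribution in the mixture is Benford-distributed on its own.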
There is an interesting observation or conjecture to be made from the Metaphysics Map in the post what can we know?, concerning the nature of infinity.

The Finite

Many observations reveal a finite nature of reality:
• Energy comes in finite parcels (quantum mechanics)
• The knowledge one can have about quanta is a fixed value (uncertainty)
• Energy is conserved in the universe
• The speed of light has the same constant value for all observers (special relativity)
• The age of the universe is finite
• Information is finite and hence can be coded into a binary language

Newer and more radical theories propose:
• Space comes in finite parcels
• Time comes in finite parcels
• The universe is spatially finite
• The maximum entropy in any given region of space is proportional to the region’s surface area and not its volume (this leads to the holographic principle, stating that our three-dimensional universe is a projection of physical processes taking place on a two-dimensional surface surrounding it)

So finiteness appears to be an intrinsic feature of the Outer Reality box of the diagram. There is in fact a movement in physics subscribing to the finiteness of reality, called Digital Philosophy. Indeed, this finiteness postulate is a prerequisite for an even bolder statement, namely, that the universe is one gigantic computer (a Turing-complete cellular automaton), where reality (thought and existence) is equivalent to computation. As mentioned above, the self-organizing, structure-forming evolution of the universe can be seen to produce ever more complex modes of information processing (e.g., storing data in DNA, thoughts, computations, simulations and perhaps, in the near future, quantum computations). There is also an approach to quantum mechanics focusing on information, stating that an elementary quantum system carries (is?) one bit of information. This can be seen to lead to the notions of quantization, uncertainty and entanglement.

The Infinite

It should be noted that zero is infinity in disguise: if one lets the denominator of a fraction go to infinity, the result is zero. Historically, zero was discovered in the 3rd century BC in India and was introduced to the Western world by Arabian scholars in the 10th century AD. As ordinary as zero appears to us today, the great Greek mathematicians didn’t come up with such a concept. Indeed, infinity is something intimately related to formal thought systems (mathematics). Irrational numbers have an infinite number of digits. There are two measures of infinity: countability and uncountability. The former refers to infinite sequences such as 1, 2, 3, … For the latter, starting from 1.0 one can’t even reach 1.1, because the interval between 1.0 and 1.1 contains more numbers than any infinite list could ever enumerate. In geometry, points and lines are idealizations of dimension zero and one, respectively.

So it appears as though infinity resides only in the Inner Reality box of the diagram.

The Interface

If it should be true that we live in a finite reality, with infinity only residing within the mind as a concept, then there should be some problems if one tries to model this finite reality with an infinity-harboring formalism. Perhaps this is indeed so. In chaos theory, the sensitivity to initial conditions (butterfly effect) can be viewed as the problem of measuring numbers: the measurement can only have a finite degree of accuracy, whereas the numbers have, in principle, an infinite number of decimal places.
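A minimal illustration of this point (in Python; the logistic map in its chaotic regime is a standard stand-in for such systems, not an example taken from the text):

```python
# Sensitivity to initial conditions: iterate the logistic map
# x -> r*x*(1-x) with r = 4.0 (its chaotic regime) from two starting
# points that differ only in the tenth decimal place.
r = 4.0
x, y = 0.2, 0.2 + 1e-10
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 9:
        print(step + 1, abs(x - y))
# The separation grows roughly exponentially: after a few dozen steps
# the two trajectories are completely decorrelated, so any finite
# measurement of the initial state loses all predictive power.
```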
In quantum gravity (the, as yet, unsuccessful merger of quantum mechanics and gravity) many of the inherent problems of the formalism could be bypassed when a theory was proposed (string theory) that replaced (zero-dimensional) point particles with one-dimensionally extended objects. Later incarnations, called M-theory, allowed for multidimensional objects. In the above-mentioned information-based view of quantum mechanics, the world appears quantized because the information retrieved by our minds about the world is inevitably quantized.

So the puzzle deepens. Why do we discover the notion of infinity in our minds, while all our experiences and observations of nature indicate finiteness?

medical studies

medical studies often contradict each other. results claiming to have “proven” some causal connection are confronted with results claiming to have “disproven” the link, or vice versa. this dilemma affects even reputable scientists publishing in leading medical journals. the topics are diverse:
• high-voltage power supply lines and leukemia [1],
• salt and high blood pressure [1],
• heart diseases and sport [1],
• stress and breast cancer [1],
• smoking and breast cancer [1],
• praying and higher chances of healing illnesses [1],
• the effectiveness of homeopathic remedies and natural medicine,
• vegetarian diets and health,
• low frequency electromagnetic fields and electromagnetic hypersensitivity [2].

basically, this is understood to happen for three reasons:
• i.) the bias towards publishing positive results,
• ii.) incompetence in applying statistics,
• iii.) simple fraud.

publish or perish. in order to guarantee funding and secure the academic status quo, results are selected by their chance of being published. an independent analysis of the original data used in 100 published studies exposed that roughly half of them showed large discrepancies between the aims originally stated by the researchers and the reported findings, implying that the researchers simply skimmed the data for publishable material [3]. this proves fatal in combination with ii.), as every statistically significant result can occur (by definition) by chance in an arbitrary distribution of measured data. so if you only look long enough for arbitrary results in your data, you are bound to come up with something [1].
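as a toy illustration of that last point (a sketch in python, not taken from any of the cited studies; the sample sizes and the 0.05 threshold are arbitrary conventional choices):

```python
# Run many hypothesis tests on pure noise: at the conventional p < 0.05
# threshold, roughly 5% of them come out "significant" by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(200):
    a = rng.normal(size=30)          # "placebo" group: pure noise
    b = rng.normal(size=30)          # "treatment" group: pure noise
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1
print(false_positives)   # ~10 spurious "discoveries" out of 200 tests,
                         # with no real effect anywhere in the data
```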
often, due to budget reasons, the numbers of test persons for clinical trials are simply too small to allow for statistical relevance. ref. [4] showed, among other things, that the smaller the studies conducted in a scientific field, the less likely the research findings are to be true. statistical significance - often evaluated by some statistics software package - is taken as proof without considering the plausibility of the result. many statistically significant results turn out to be meaningless coincidences after accounting for the plausibility of the finding [1]. one study showed that one third of frequently cited results fail a later verification [1]. another study documented that roughly 20% of the authors publishing in the magazine “nature” didn’t understand the statistical method they were employing [5].

iii.) a.) two thirds of the clinical biomedical research in the usa is supported by the industry - twice as much as in 1980 [1]. it was shown that in 1000 studies done in 2003, the nature of the funding correlated with the results: 80% of industry-financed studies had positive results, whereas only 50% of the independent research reported positive findings. it could be argued that the industry has a natural propensity to identify effective and lucrative therapies. however, the authors show that many impressive results were only obtained because they were compared with weak alternative drugs or placebos [6].

iii.) b.) quoted from: “Andrew Wakefield (born 1956 in the United Kingdom) is a Canadian trained surgeon, best known as the lead author of a controversial 1998 research study, published in the Lancet, which reported bowel symptoms in a selected sample of twelve children with autistic spectrum disorders and other disabilities, and alleged a possible connection with MMR vaccination. Citing safety concerns, in a press conference held in conjunction with the release of the report Dr. Wakefield recommended separating the components of the injections by at least a year. The recommendation, along with widespread media coverage of Wakefield’s claims, was responsible for a decrease in immunisation rates in the UK. The section of the paper setting out its conclusions, known in the Lancet as the “interpretation” (see the text below), was subsequently retracted by ten of the paper’s thirteen authors. In February of 2004, controversy resurfaced when Wakefield was accused of a conflict of interest. The London Sunday Times reported that some of the parents of the 12 children in the Lancet study were recruited via a UK attorney preparing a lawsuit against MMR manufacturers, and that the Royal Free Hospital had received £55,000 from the UK’s Legal Aid Board (now the Legal Services Commission) to pay for the research. Previously, in October 2003, the board had cut off public funding for the litigation against MMR manufacturers. Following an investigation of The Sunday Times allegations by the UK General Medical Council, Wakefield was charged with serious professional misconduct, including dishonesty, due to be heard by a disciplinary board in 2007. In December of 2006, the Sunday Times further reported that in addition to the money given to the Royal Free Hospital, Wakefield had also been personally paid £400,000 which had not been previously disclosed by the attorneys responsible for the MMR lawsuit.”

wakefield had always only expressed his criticism of the combined triple vaccination, supporting single vaccinations spaced in time. the british tv station channel 4 exposed in 2004 that he had applied for patents for the single vaccines. wakefield dropped his subsequent slander action against the media company only in the beginning of 2007. as mentioned, he now awaits charges for professional misconduct. however, he has left britain and now works for a company in austin, texas. it has been uncovered that other employees of this us company had received payments from the same attorney preparing the original lawsuit [7].

should we be surprised by all of this? next to the innate tendency of human beings to be incompetent and unscrupulous, there is perhaps another level that makes this whole endeavor special. the inability of scientists to conclusively and reproducibly uncover findings concerning human beings is maybe better appreciated if one considers the nature of the subject under study. life, after all, is an enigma, and the connection linking the mind to matter is elusive at best (i.e., the physical basis of consciousness). the body’s capability to heal itself, i.e., the placebo effect and the resulting need for double-blind studies, is indeed very bizarre.
however, there are studies questioning if the effect exists at all ;-)

taken from  (consult also for the corresponding links to the sources cited below):

[1] This article in the magazine issued by the Neue Zürcher Zeitung, by Robert Matthews
[2] C. Schierz; Projekt NEMESIS; ETH Zürich; 2000
[3] A. Chan (Center of Statistics in Medicine, Oxford) et al.; Journal of the American Medical Association; 2004
[4] J. Ioannidis; “Why Most Published Research Findings Are False”; University of Ioannina; 2005
[5] R. Matthews, E. García-Berthou and C. Alcaraz, as reported in this “Nature” article; 2005
[6] C. Gross (Yale University School of Medicine) et al.; “Scope and Impact of Financial Conflicts of Interest in Biomedical Research”; Journal of the American Medical Association; 2003
[7] H. Kaulen; “Wie ein Impfstoff zu Unrecht in Misskredit gebracht wurde” (“How a Vaccine Was Wrongly Brought into Disrepute”); Deutsches Ärzteblatt; Jg. 104; Heft 4; 26 January 2007

in a nutshell

Science, put simply, can be understood as working on three levels:
• i.) analyzing the nature of the object being considered/observed,
• ii.) developing the formal representation of the object’s features and its dynamics/interactions,
• iii.) devising methods for the empirical validation of the formal representations.

To be precise, level i.) lies more within the realm of philosophy (e.g., epistemology) and metaphysics (i.e., ontology), as notions of origin, existence and reality appear to transcend the objective and rational capabilities of thought. The main problem being: “Why is there something rather than nothing? For nothingness is simpler and easier than anything.” [1]

In the history of science the above-mentioned formulation made the understanding of at least three different levels of reality possible:
• a.) the fundamental level of the natural world,
• b.) inherently random phenomena,
• c.) complex systems.

While level a.) deals mainly with the quantum realm and cosmological structures, levels b.) and c.) comprise mostly biological, social and economic systems.

a.) Fundamental

Many natural sciences focus on a.i.) fundamental, isolated objects and interactions, use a.ii.) mathematical models which are a.iii.) verified (falsified) in experiments that check the predictions of the model - with great success: “The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it.” [2]

b.) Random

Often the nature of the object b.i.) being analyzed is in principle unknown. Only statistical evaluations of sets of outcomes of single observations/experiments can be used to estimate b.ii.) the underlying model, and b.iii.) test it against more empirical data. This is often the approach taken in the fields of social sciences, medicine, and business.

c.) Complex

Moving to c.i.) complex, dynamical systems, and c.ii.) employing computer simulations as a template for the dynamical process, unlocks a new level of reality: mainly the complex and interacting world we experience at our macroscopic length scales in the universe. Here two new paradigms emerge:
• the shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers,
• simple rules giving rise to complex behavior: “And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.” [3]

However, things are not as clear anymore.
What is the exact methodology, how does it relate to underlying concepts of ontology and epistemology, and what is the nature of these computations per se? Or, within the formulation given above, i.e., iii.c.): what is the “reality” of these models, i.e., what do the local rules determining the dynamics in the simulation have to say about the reality of the system c.i.) they are trying to emulate?

There are many coincidences that enabled the structured reality we experience on this planet to have evolved: exact values of fundamental constants (initial conditions), emerging structure-forming and self-organizing processes, the possibility of (organic) matter to store information (after being synthesized in supernovae!), the right conditions on earth for harboring life, the emergent possibilities of neural networks to establish consciousness and sentience above a certain threshold, …

Interestingly, there are also many circumstances that allow the observable world to be understood by the human mind:
• the mystery allowing formal thought systems to map to patterns in the real world,
• the development of the technology allowing for the design and realization of microprocessors,
• the bottom-up approach to complexity identifying a micro level of simple interactions of system elements.

So it appears that the human mind is intimately interwoven with the fabric of reality that produced it. But where is all this leading? There exists a natural extension to science which fuses the notions from levels a.) to c.), namely
• information and information processing,
• formal mathematical models,
• statistics and randomness.

Notably, it comes from an engineering point of view, deals with quantum computers, and comes full circle back to level i.), the question about the nature of reality: “[It can be shown] that quantum computers can simulate any system that obeys the known laws of physics in a straightforward and efficient way. In fact, the universe is indistinguishable from a quantum computer.” [4]

At first blush the idea of substituting reality with a computed simulation appears rather ad hoc, but in fact it does have potentially falsifiable notions:
• the discreteness of reality, i.e., the notion that continuity and infinity are not physical,
• the reality of the quantum realm should be contemplated from the point of view of information, i.e., the only relevant reality subatomic quanta manifest is that they register one bit of information: “Information is physical.” [5]

[1] von Leibniz, G. W., “Principes de la nature et de la grâce” (“Principles of Nature and Grace”), 1714
[2] Wigner, E. P., “Symmetries and Reflections”, MIT Press, Cambridge, 1967
[3] Wolfram, S., “A New Kind of Science”, Wolfram Media, pg. 19, 2002
[4] Lloyd, S., “Programming the Universe”, Random House, pgs. 53-54, 2006
[5] Landauer, R., Nature, 335, 779-784, 1988

See also: “The Mathematical Universe” by M. Tegmark.

Related posts: laws of nature.

what can we know?

Put bluntly, metaphysics asks simple albeit deep questions:
• Why do I exist?
• Why do I die?
• Why does the world exist?
• Where did everything come from?
• What is the nature of reality?
• What is the meaning of existence?
• Is there a creator or omnipotent being?

Although these questions may appear idle and futile, they seem to represent an innate longing for knowledge of the human mind. Indeed, children can and often do pose such questions, only to be faced with the resignation or impatience of adults.
To make things simpler and tractable, one can focus on the question “What can we know?”. When you wake up in the morning, you instantly become aware of your self, i.e., you experience an immaterial inner reality you can feel and probe with your thoughts. Upon opening your eyes, a structured material outer reality appears. These two undeniable facts are enough to sketch a small metaphysical diagram:

Focusing on the outer reality or physical universe, there exists an underlying structure-forming and self-organizing process starting with an initial singularity or Big Bang (an extremely low entropy state, i.e., high order, giving rise to the arrow or direction of time). Due to the exact values of physical constants in our universe, this organizing process yields structures eventually giving birth to stars, which, at the end of their lifecycle, explode (supernovae), allowing for nuclear reactions to fuse heavy elements. One of these heavy elements brings with it novel bonding possibilities, resulting in a new pattern: organic matter. Within a couple of billion years, the structure-forming process gave rise to a plethora of living organisms. Although each organism would die after a short lifespan, the process of life as a whole continued in a sustainable equilibrium state and survived a couple of extinction events (some of which eradicated nearly 90% of all species).

The second law of thermodynamics states that the entropy of the universe is increasing, i.e., the universe is becoming an ever more unordered place. It would seem that the process of life creating stable and ordered structures violates this law. In fact, complex structures spontaneously appear where there is a steady flow of energy from a high-temperature input source (the sun) to a low-temperature external sink (the earth). So pumping a system with energy leads it to a state far from thermodynamic equilibrium, which is characterized by the emergence of ordered structures.

Viewed from an information-processing perspective, the organizing process suddenly experienced a great leap forward. The brains of some organisms had reached a critical mass, allowing for another emergent behavior: consciousness.

The majority of people in industrialized nations take a rational and logical outlook on life. Although one might think this is an inevitable mode of awareness, it actually is a cultural imprinting, as there exist other civilizations putting far less emphasis on rationality. Perhaps the divide between Western and Eastern thinking illustrates this best. Whereas the former is locked in continuous interaction with the outer world, the latter focuses on the experience of an inner reality. A history of meditation techniques underlines this emphasis on the nonverbal experience of one’s self. Thought is either totally avoided, or the mind is focused on repetitive activities, in effect deactivating it.

Recall from fundamental that there are two surprising facts to be found. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. And on the other hand, real complex behavior can be emulated by computer simulations following simple laws (the computers themselves are an example of technological advances made possible by the successful modelling of nature by formal thought systems).

This conceptual map allows one to categorize a lot of stuff in a concise manner. Also, the interplay between the outer and inner realities becomes visible.
However, the above-mentioned questions remain unanswered. Indeed, more puzzles appear. So as usual, every advance in understanding just makes the question mark bigger…

Continued here: infinity?

invariant thinking…

Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of general relativity and the standard model (of particle physics). This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should only allow the most minimal reference to be made to this formal system. I.e., reality likes to be translated into a language that doesn’t explicitly depend on its own peculiarities (coordinates, number bases, units, …). This is a pretty obvious idea and allows for physical laws to be universal. But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: I will only allow myself to be coded into a formal framework that makes no reference to itself whatsoever? Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant? Does this imply “invariance” even with respect to our thinking? How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking?

Taken from this newsgroup message. See also: fundamental.

While physics has had an amazing success in describing most of the observable universe in the last 300 years, the formalism appears to be restricted to the fundamental workings of nature. Only solid-state physics attempts to deal with collective systems, and only thanks to the magic of symmetry is one able to deduce fundamental analytical solutions. In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus. This also means that the interactions of entities become an integral part of the formalism. Some ideas should illustrate the situation:
• Most calculations in physics are idealizations and neglect dissipative effects like friction
• Most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear
• The analytical solution of three gravitating bodies in classical mechanics, given their initial positions, masses, and velocities, cannot be found; it turns out to be a chaotic system which can only be simulated in a computer (see the sketch after this list); however, there are an estimated hundred billion galaxies in the universe
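A minimal sketch of what “simulated in a computer” means here (in Python; G = 1 and the masses, time step and initial conditions are arbitrary illustrative choices): the equations of motion are simply marched forward in small time steps, since no closed-form solution exists.

```python
# Three gravitating bodies, integrated step by step (G = 1).
import numpy as np

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # positions
vel = np.array([[0.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])   # velocities
mass = np.array([1.0, 0.1, 0.1])
dt = 0.001

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += mass[j] * d / np.linalg.norm(d) ** 3
    return acc

for _ in range(10_000):          # march forward to t = 10
    vel += accelerations(pos) * dt
    pos += vel * dt
print(pos)   # tiny changes in the inputs eventually yield
             # completely different trajectories
```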
Systems Thinking

Systems theory is an interdisciplinary field which studies relationships of systems as a whole. The goal is to explain complex systems, which consist of a large number of mutually interacting and interwoven parts, in terms of those interactions. A timeline:
• Cybernetics (50s): Study of communication and control, typically involving regulatory feedback, in living organisms and machines
• Catastrophe theory (70s): Phenomena characterized by sudden shifts in behavior arising from small changes in circumstances
• Chaos theory (80s): Describes the behavior of non-linear dynamical systems that under certain conditions exhibit a phenomenon known as chaos (sensitivity to initial conditions, regimes of chaotic and deterministic behavior, fractals, self-similarity)
• Complex adaptive systems (90s): The “new” science of complexity which describes emergence, adaptation and self-organization; employing tools such as agent-based computer simulations

In systems theory one can distinguish between three major hierarchies:
• Suborganic: Fundamental reality, space and time, matter, …
• Organic: Life, evolution, …
• Metaorganic: Consciousness, group dynamical behavior, financial markets, …

However, it is not understood how one can traverse the following chain: bosons and fermions -> atoms -> molecules -> DNA -> cells -> organisms -> brains. I.e., how to understand phenomena like consciousness and life within the context of inanimate matter and fundamental theories. See, e.g., systems view.

Category Theory

The mathematical theory called category theory is a result of the “unification of mathematics” in the 40s. A category is the most basic structure in mathematics and is a set of objects and a set of morphisms (maps). A functor is a structure-preserving map between categories. This dynamical systems picture can be linked to the notion of formal systems mentioned above: physical observables are functors, independent of a chosen representation or reference frame, i.e., invariant and covariant.

Object-Oriented Programming

This paradigm of programming can be viewed in a systems framework, where the objects are implementations of classes (collections of properties and functions) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys certain rules (encapsulation, inheritance, polymorphism, …). Some advantages of this integral approach to software development:
• Easier to tackle complex problems
• Allows natural evolution towards complexity and better modeling of the real world
• Reusability of concepts (design patterns) and easy modifications and maintenance of existing code
• Object-oriented design has more in common with natural languages than other (i.e., procedural) approaches

Algorithmic vs. Analytical

Perhaps the shift of focus in this new world view can best be understood when one considers the paradigm of complex systems theory:
• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior: emergence, structure-formation, self-organization, adaptive behavior (learning), …

This allows a departure from the equation-based description to models of dynamical processes simulated in computers. This is perhaps the second miracle involving the human mind and the understanding of nature. Not only does nature work on a fundamental level akin to formal systems devised by our brains, the hallmark of complexity appears to be coded in simplicity (“simple sets of rules give complexity”), allowing computational machines to emulate its behavior.
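As a concrete taste of “simple sets of rules give complexity”, here is a minimal sketch (in Python) of one of the simplest such systems, an elementary cellular automaton of the kind studied by Wolfram: a single line of cells, each updated from its two nearest neighbors by one fixed 8-entry lookup rule.

```python
# Elementary cellular automaton rule 110: one live cell plus an
# 8-entry update rule generates remarkably intricate patterns.
RULE = 110
cells = [0] * 64 + [1]                  # start from a single live cell

for _ in range(30):
    print("".join(".#"[c] for c in cells))
    padded = [0] + cells + [0]          # fixed dead boundary
    cells = [
        # neighborhood (left, center, right) indexes a bit of RULE
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]
```

Thirty printed generations already show the characteristic mixture of regular and irregular structure; nothing in the three-line update rule hints at it.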
complex systems

It is very interesting to note that in this paradigm the focus is on the interaction, i.e., the complexity of the agent can be ignored. That is why the formalism works for chemicals in a reaction, ants in an anthill, humans in social or economical organizations, … In addition, one should also note that simple rules - the epitome of deterministic behavior - can also give rise to chaotic behavior. The emerging field of network theory (an extension of graph theory, yielding results such as scale-free topologies, small-world phenomena, etc., observed in a stunning variety of complex networks) is also located at this end of the spectrum of the formal descriptions of the workings of nature. Finally, to revisit the analytical approach to reality, note that in the loop quantum gravity approach, space-time is perceived as a causal network arising from graph updating rules (spin networks, which are graphs associated with group-theoretic properties), where particles are envisaged as ‘topological defects’ and geometric properties of reality, such as dimensionality, are defined solely in terms of the network’s connectivity pattern.

See also: list of open questions in complexity theory.

What is science?

• Science is the quest to capture the processes of nature in formal mathematical representations

So “math is the blueprint of reality” in the sense that formal systems are the foundation of science. In a nutshell:
• Natural systems are a subset of reality, i.e., the observable universe
• Guided by thought, observation and measurement, natural systems are “encoded” into formal systems
• Using logic (rules of inference) in the formal system, predictions about the natural system can be made (decoding)
• Checking the predictions against the experimental outcome gives the validity of the formal system as a model for the natural system

Physics can be viewed as dealing with the fundamental interactions of inanimate matter. For a technical overview, go here.

math models

• Mathematical models of reality are independent of their formal representation

This leads to the notions of symmetry and invariance. Basically, this requirement gives rise to nearly all of physics.

Classical Mechanics

Symmetry, understood as the invariance of the equations under temporal and spatial transformations, gives rise to the conservation laws of energy, momentum and angular momentum. In layman’s terms this means that the outcome of an experiment is unchanged by the time and location of the experiment and the motion of the experimental apparatus. Just common sense…

Mathematics of Symmetry

The intuitive notion of symmetry has been rigorously defined in the mathematical terms of group theory.

Physics of Non-Gravitational Forces

The three non-gravitational forces are described in terms of quantum field theories.
These in turn can be expressed as gauge theories, where the parameters of the gauge transformations are local, i.e., differ from point to point in space-time. The Standard Model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries.

Physics of Gravity

Gravity is the only force that can’t be expressed as a quantum field theory. Its symmetry principle is called covariance, meaning that in the geometric language of the theory describing gravity (general relativity) the physical content of the equations is unchanged by the choice of the coordinate system used to represent the geometrical entities.

To illustrate, imagine an arrow located in space. It has a length and an orientation. In geometric terms this is a vector; let’s call it a. If I want to compute the length of this arrow, I need to choose a coordinate system, which gives me the x-, y- and z-axis components of the vector, e.g., a = (3, 5, 1). So starting from the origin of my coordinate system (0, 0, 0), if I move 3 units in the x direction (left-right), 5 units in the y direction (forwards-backwards) and 1 unit in the z direction (up-down), I reach the end of my arrow. The problem is now that, depending on the choice of coordinate system - meaning the orientation and the size of the units - the same arrow can look very different: a = (3, 5, 1) = (0, 23.34, -17). However, every time I compute the length of the arrow in meters, I get the same number, independent of the chosen representation. In general relativity the vectors are replaced by multidimensional equivalents called tensors, and the common-sense requirement that calculations involving tensors do not depend on how I represent the tensors in space-time is covariance.

It is quite amazing, but there is only one more ingredient needed in order to construct one of the most aesthetic and accurate theories in physics. It is called the equivalence principle and states that the gravitational force is equivalent to the forces experienced during acceleration. This may sound trivial, but it has very deep implications.
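A small numerical check of the arrow example above (a sketch in Python; the rotation angle is an arbitrary choice): rotating the coordinate axes changes the components of a but never its length.

```python
# The components of a vector depend on the coordinate system,
# but its length does not.
import numpy as np

a = np.array([3.0, 5.0, 1.0])

theta = 0.7                       # rotate the axes by any angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

a_rotated = R @ a
print(a_rotated)                                       # different components...
print(np.linalg.norm(a), np.linalg.norm(a_rotated))    # ...identical length
```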
Physics of Condensed Matter

This branch of physics, also called solid-state physics, deals with the macroscopic physical properties of matter. It is one of physics’ first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory: namely, the complexity of the problems can be reduced using symmetry in order for analytical solutions to be found. Technically, the symmetry groups are boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors and quasi-crystals (interestingly, they have fractal properties!). In the superconducting phase, the wave function becomes symmetric.

The Success

It is somewhat of a miracle that the formal systems the human brain discovers/devises find their match in the workings of nature. In fact, there is no reason for this to be the case, other than that it is the way things are. The following two examples should underline the power of this fact, where new features of reality were discovered solely from the requirements of the mathematical model:
• In order to unify electromagnetism with the weak force (two of the three non-gravitational forces), the theory postulated two new elementary particles: the W and Z bosons. Needless to say, these particles were hitherto unknown, and it took 10 years for technology to advance sufficiently in order to allow their discovery.
• The fusion of quantum mechanics and special relativity led to the Dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. Four years after the formulation of the theory, antimatter was experimentally discovered.

The Future…

Despite this success, modern physics is still far from being a unified, paradox-free formalism describing all of the observable universe. Perhaps the biggest obstacle lies in the last missing step to unification. In a series of successes, forces appearing to be independent phenomena turned out to be facets of the same formalism: electricity and magnetism were united in the four Maxwell equations; as mentioned above, electromagnetism and the weak force were merged into the electroweak force; and finally, the electroweak and strong forces were united in the framework of the standard model of particle physics. These forces are all expressed as quantum (field) theories. There is only one observable force left: gravity.

The efforts to quantize gravity and devise a unified theory have taken a strange turn in the last 20 years. The problem is still unsolved; however, the mathematical formalisms engineered for this quest - namely string/M-theory and loop quantum gravity - have had a twofold impact:
• A new level in the application of formal systems is reached. Whereas before, physics relied on mathematical branches that were developed independently from any physical application (e.g., differential geometry, group theory), string/M-theory is actually spawning new fields of mathematics (namely in topology).
• These theories tell us very strange things about reality:
• Time does not exist on a fundamental level
• Space and time per se become quantized
• Space has more than three dimensions
• Another breed of fundamental particles is needed: supersymmetric matter

Unfortunately no one knows if these theories are hinting at a greater reality behind the observable world, or if they are “just” math. The main problem being the fact that any kind of experiment to verify the claims appears to be out of reach of our technology…
Why Stuff is Hard

Why is stuff hard? That is, how can matter become solid, instead of just floating through us (or into the center of the earth) like a ghost? It might seem like a silly question, that the burden of explanation ought to be on exceptions to the rule, such as holograms and optical illusions. But as I learned about matter from a particle physics perspective, I became increasingly perplexed that this stuff ever manages to condense itself into anything concrete. I carried this question around with me for years before finding the explanation at the end of this article; I’m surprised how rarely it is addressed in detail.

Why stuff might be soft

Fundamental particles of matter, according to Sir Isaac Newton centuries ago, or Democritus millennia ago, are hard, solid shapes that can be stacked and stuck together (with little hooks, in Democritus’s theory). News graphics of particle physics reactions suggest a similar picture, rendering electrons and quarks as shaded spheres emanating from a billiards collision. For most of our history, we have conceived of matter as something which occupies space exclusively, with an inclination toward defining its reality by its impenetrability. When Macbeth saw an intangible knife floating before him,

Art thou not, fatal vision, sensible
To feeling as to sight? or art thou but
A dagger of the mind, a false creation
Proceeding from the heat-oppressed brain?

it was either imaginary or conjured by witches. When Samuel Johnson heard Berkeley’s theory that all physical objects exist only in the mind as ideas, he pronounced, “I refute it thus!” and kicked a large stone. He could not have imagined how many neutrinos, cosmic rays, and (very likely) dark matter particles were pouring through his body at that instant.

Today, we define matter using quantum field theory, the culmination of the first 50 years of trying to understand quantum mechanics. Quantum field theory is a framework, rather than a single theory, only making predictions when given a set of fundamental fields and interactions. The goal of particle physics is to identify these inputs (or, if that doesn’t work, improve upon the framework). A classical field, in this language, is a function from every point in space-time to a number, spinor, vector, tensor, or some other structure of numbers. The first fields discussed in earnest were the electric field and the magnetic field, both of which map points in space-time to 3-component vectors. These fields are manifestly real (they make telegraphs work) even though they fill all of space, permeating all matter, as well as each other. The crowning demonstration of the reality of these fields came when Maxwell predicted the existence of radio in 1873 (13 years before radio waves were first produced in the laboratory) as self-perpetuating waves in the electromagnetic field.

In quantum field theory, all matter consists of self-perpetuating waves in one of several quantum fields: the up-quark field, the down-quark field, the neutrino field, etc. These fields fill all of space— an empty vacuum is simply a region without waves. In modern language, we might call Maxwell’s electromagnetic field the photon field, the field of photon particles.

A quantum field differs from a classical field in that it is a probability distribution over classical fields (plus a “phase,” an angle in an abstract space, which is not important for this discussion). This probability distribution is constrained by a differential equation called the Schrödinger equation, which often restricts energy to a discrete set of values.
In particular, the energy of a standing wave, a particle at rest, is forced to be an integer multiple of the particle mass. If we want to add waves to the electron field (that is, make electrons), we can only add 0.511 MeV (one electron), 1.022 MeV (two electrons), or 1.533 MeV (three electrons), etc. The very fact that matter comes in particles of fixed mass, rather than a mushy continuum, is a consequence of quantum mechanics! The quantum field is therefore both more free and more constrained than the classical field, as illustrated below. The energy in a quantum field can’t be any arbitrary value, but it can be several restricted values at the same time.

(Figure: a classical field versus a quantum field.)

Derivation of quantized particles with minimal prerequisites

This is an aside, but if you’d like to see where this quantization comes from, the following derivation only requires first-year differential equations. In the simplest case of a real-valued field with no interactions, the Schrödinger equation is

\displaystyle i\sqrt{ \frac{\partial^2}{\partial t^2} - \nabla^2 }\;\Psi \;=\; -\frac{1}{2}\left( \frac{\partial^2}{\partial\phi^2} - m^2\phi^2 \right)\Psi

where \Psi is the quantum field, (the square root of) a probability distribution over the 5-dimensional space t, x, y, z, \phi, where \phi is the classical field value. (We could think of the classical field as being a single point in that 5-dimensional space. That’s equivalent to a function from 4 dimensions to a real number.) The m^2\phi^2 term is the potential energy: an energy cost that penalizes large values of |\phi| (Einstein’s equivalence between mass and energy).

If we divide both sides by \Psi and assume that \Psi factorizes into a function of space-time multiplied by a function of the classical field value, this differential equation becomes separable (justifying the assumption). The left-hand side of the equation would then only depend on t, x, y, z and the right-hand side would only depend on \phi. Therefore, both sides must equal a constant, suggestively called M for mass. The left-hand side is a wave equation in space and time with energy and momentum related by

\displaystyle\sqrt{E^2 - |\vec{p}|^2} = M

and the right-hand side becomes

\displaystyle\left( m^2\phi^2 - 2M \right) \psi(\phi) = \frac{\partial^2 \psi}{\partial\phi^2}

with \psi(\phi) being the factor of \Psi depending on \phi only. This equation is hard to satisfy; this is what constrains the allowed values of M to a discrete set of multiples of m. The solution is

\displaystyle \psi_n(\phi) = \exp\left( -\frac{m}{2}\phi^2 \right) H_n\left(\sqrt{m}\,\phi\right)\;\;\mbox{only if}\;\;M = \left( n + \frac{1}{2} \right)m

where H_n are Hermite polynomials for integers n. Excitations of the real-valued quantum field are therefore waves with \sqrt{E^2 - |\vec{p}|^2} (the solution to the left-hand side) constrained to be \left( n+\frac{1}{2} \right) m (to solve the right-hand side). The actual field values are spread over a continuum on both sides of zero— if the energy is single-valued, the field amplitude cannot be. This is Heisenberg’s uncertainty principle in the field theory context.
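As a numerical sanity check (not part of the original derivation), one can rewrite the right-hand-side equation as -\frac{1}{2}\psi'' + \frac{1}{2}m^2\phi^2\psi = M\psi, discretize it on a grid, and confirm that the eigenvalues M indeed come out as (n + 1/2)m. A minimal sketch in Python (grid size and range are arbitrary choices):

```python
# Finite-difference check that the allowed masses are M = (n + 1/2) m.
import numpy as np

m = 1.0
phi = np.linspace(-8, 8, 1200)
h = phi[1] - phi[0]

# -(1/2) d^2/dphi^2 as a tridiagonal matrix, plus the (1/2) m^2 phi^2 well
main = 1.0 / h**2 + 0.5 * m**2 * phi**2
off = -0.5 / h**2 * np.ones(len(phi) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

M = np.linalg.eigvalsh(H)[:5]
print(M)   # approximately 0.5, 1.5, 2.5, 3.5, 4.5 -- i.e. (n + 1/2) m
```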
Getting back to Why stuff might be soft

Given this picture of matter as waves, it’s hard to imagine how it could ever coalesce into something solid. In fact, the above example doesn’t. These waves pass through each other, doubling the energy in the region where they superimpose, returning to their original shapes as they continue on their way. It was a simple example without interactions; a more realistic treatment would include extra terms that allow energy to flow from one field to another, in the same way that vibrational energy flows from a cello string to its sounding board to the air in a concert hall. A field without interactions vibrates in isolation, unable to be heard. This is nearly the case for the neutrino field: there is ten times as much mass in neutrinos as in heavy elements like carbon and oxygen (which is like saying there are four times as many ants as humans, by mass), but they interact so weakly with our matter that a few detections per day is a good rate for a ton-scale detector. Neutrinos do not form solid structures.

Particle physicists have identified four fundamental interactions in nature:
• Electromagnetism: charged particles attract or repel each other by exciting the photon field, to which they are both coupled.
• Weak Nuclear force: particles change species by de-exciting one field and exciting two others in their place: e.g. a down quark becomes an up and a W^-. This is how many radioactive isotopes decay.
• Strong Nuclear force: holds quarks and nuclei together with gluons, rather than photons; very short-range.
• Gravitation: the medium of exchange is the metric of space-time itself. Gravitons are virtual excitations of the curvature of space-time, treated as a quantum mechanical field.

The “contact force” that keeps solids from pushing through each other is obviously not derived from gravity, and it acts over distances which are too large to be related to the Strong Nuclear force. The Weak Nuclear force is too weak, and it would turn electrons into neutrinos anyway. So we’re left with electromagnetism.

Explanation #1: Electromagnetic force makes stuff hard

Electromagnetism is responsible for nearly all macroscopic phenomena, the major exception being gravity. It is certainly the reason small things stick together: neutral atoms can be polarized and attract each other at short distances, even though they each have zero total charge. Many molecules, like water, have permanent electric dipoles, which make water bead up into drops and crawl up the edges of a glass beaker. Water’s dipole and oil’s lack of a strong dipole are together responsible for all the hydrophilic/hydrophobic mechanisms in biology, such as keeping our cells from bursting open.

But it’s not clear that electromagnetism can be solely responsible for holding things apart. I have never heard a description of exactly how electromagnetism is supposed to do it, and there are some general facts about electromagnetism that seem to preclude its being responsible for the contact force. The simplest way to hold things apart is to make them out of like charges, since like charges repel electrostatically (that is, without magnetism). Ignoring for the moment that ordinary matter is resolutely neutral, any residual charges being immediately screened by humidity or punished with an electric shock, there’s a theorem by Samuel Earnshaw which states that charged particles cannot be electrostatically trapped. Solids are in a state of stable equilibrium: the (electrostatic) attractive forces must be balanced by the repulsive contact force to keep them from collapsing to a point. The particles in a solid are trapped in Earnshaw’s sense, so electrostatic forces can’t be the reason for it.

More likely, contact forces would be due to electrically polarized atoms or molecules. I wrestled with this for a long time, trying to make a model that works.
The problem is that polarized particles should attract each other, except for unusual special cases. As two neutral atoms approach, the positive parts of one lean toward the negative parts of the other, minimizing the distance between the unlike charges and maximizing the distance between the like charges, making the total force attractive. It is possible for molecules to have permanent dipoles, but then they can simply rotate themselves to minimize distance between unlike charges, becoming attractive again. In biology, huge molecules can use repulsive polarization to their advantage, largely because they can root themselves relative to the object they want to repel. But this can’t be the reason so many simple substances solidify. Magnetism always comes in dipoles, so the same arguments apply.

I’m fairly convinced that contact forces cannot be due to electromagnetism alone, though I don’t have a proof that rules out all possible mechanisms. The puzzling thing is that I have heard “electromagnetism” (with no further explanation) cited as the origin of contact forces in several reputable physics popularizations, one of them being Brian Greene on Nova. In our case study, we will see that electromagnetism is involved in holding metals together, but it is not responsible for the repulsive contact force.

Explanation #2: The Pauli exclusion principle makes stuff hard

Closer to the heart of the matter is Pauli’s exclusion principle, which states, roughly, that “two identical particles cannot occupy the same state at the same time.” That sounds like the solution to our problem, given as an axiom! It is the second explanation that I have heard in popular presentations of physics, always discouragingly unspecific. We have reason to be wary— this is the effect which “becomes significant” when matter is crushed in white dwarf stars. Could it also be responsible for balsa wood?

Derivation of Pauli’s exclusion principle

To see more clearly what this principle states, we should return to our formulation of matter as a quantum field. Last time, I skirted past the fact that the quantum field is the square root of a probability distribution, with a phase. The function maps classical field configurations to an “amplitude,” A, which is a complex number with a normalization property when it is squared. |A|^2, or A^*A, is interpreted as probability density.

\displaystyle \int A^*(x) A(x) \, dx = 1

(This “amplitude” is a new word. It is not the amplitude of the classical field— sorry! To use my notation from an earlier section, A is \Psi(t,x,y,z,\phi), not \phi(t,x,y,z).) The phase of the complex number is lost when A is squared, but it is relevant when two waves superimpose, because their relative phase determines whether they add constructively or subtract destructively.

Pauli’s exclusion principle applies to spinor fields, not fields of real numbers or vectors. Spinors are mathematical objects which are negated by 2\pi rotations. Vectors, which I assume you’re more familiar with, are unaffected by rotation by 2\pi. Imagine rotating a teacup 360 degrees— if it’s a vector, you get the same teacup back, but if it’s a spinor, you get minus a teacup (which, if squared, is a teacup squared in either case). For concreteness, we can represent spinors with matrices.
A vector living in 3-dimensional space is a 3-tuple that is rotated by applying this transformation:

\displaystyle \left(\begin{array}{c} x' \\ y' \\ z' \end{array}\right) = \left(\begin{array}{c c c} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{array} \right) \left(\begin{array}{c} x \\ y \\ z \end{array} \right)

while a spinor living in 3-dimensional space is a 2-tuple that is rotated by applying this transformation:

\displaystyle \left(\begin{array}{c} x' \\ y' \end{array} \right) = \left(\begin{array}{c c} \cos\frac{\theta}{2} - i \sin\frac{\theta}{2} & 0 \\ 0 & \cos\frac{\theta}{2} + i \sin\frac{\theta}{2} \end{array} \right) \left(\begin{array}{c} x \\ y \end{array} \right)

Note that x', y' \to -x, -y when \theta \to 2\pi for spinors. (These are both special cases of rotation around the z axis, and the above spinor representation applies only to spinors in the z axis. A spinor only lives in one axis, with the two components interpreted as “up” and “down.”)
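A quick numerical check of these two transformation rules (a sketch in Python; it verifies nothing beyond the matrix algebra above):

```python
# A 2*pi rotation returns a vector to itself but negates a spinor.
import numpy as np

theta = 2 * np.pi

R_vector = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

# diag(cos(t/2) - i sin(t/2), cos(t/2) + i sin(t/2)) = diag(e^-it/2, e^it/2)
R_spinor = np.array([[np.exp(-1j * theta / 2), 0.0],
                     [0.0, np.exp(1j * theta / 2)]])

v = np.array([1.0, 2.0, 3.0])
s = np.array([1.0 + 0j, 0.0 + 0j])

print(R_vector @ v)   # [1. 2. 3.]          -- the same vector
print(R_spinor @ s)   # [-1.+0.j  0.+0.j]   -- minus the spinor (up to rounding)
```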
Consequences of Pauli's exclusion principle

So what happens if two spinor-particles merely get close to each other? If there are no other fields whose coupling is strong enough to drain their mass-energy away, the spinor-particles can't disappear. The total probability density must be 1, so one of them must change its state, either by changing spin from (1,0) to (0,1) or by entering a higher-energy state.

[Figures: an open vice-grip, and a closed vice-grip with excited particles]

Imagine a row of spinor-particles, all at rest, lined up along the x axis. If we crush them in a vice-grip, we will encounter no resistance until the length of the row is halved, because the particles will happily overlap each other in the remaining space, selecting opposite spins. But as we twist the crank further, the particles will be forced to overlap each other with the same spin. The only way they can do that is by climbing into higher and higher states of kinetic energy. (High-energy states are standing waves with more wiggles.) At last, when the vice-grip is one particle wide, every state from the ground state up to state number N/2 will be filled with two of the N particles. We provided the energy needed to push the particles into the upper states with the handle of the vice-grip, and it felt to us like a resisting force. Force is defined F = -dE/dx, so the Pauli exclusion principle really did exert a force against our hand as we turned the crank (sometimes called the exchange force). But the Pauli exclusion principle isn't one of the four fundamental forces! Gravity, Electromagnetism, and the Strong and Weak Nuclear interactions are put into quantum field theory by hand; the Pauli exclusion principle is a derived consequence of the rotation of spinors, independent of interactions! How can a principle exert a force? This is not the only force-which-is-not-a-fundamental-interaction. The random motions of air molecules inside a balloon exert a force on the balloon's surface, even though they are freely-streaming particles, without interactions (ideally). Balloons and vice-grips are made of charged particles, so we might expect electromagnetism to be a distant cause, but it doesn't need to be. Imagine, instead of a vice-grip, that the particles are enclosed in a small, toroidal universe with finite volume. If the volume shrinks for some reason, the particles will resist, even if there are no interactions in the Schrödinger equation at all. Thus, the Pauli exclusion principle can provide a resisting force that acts qualitatively like the contact force from freshman physics. But is it big enough for balsa wood?
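The vice-grip story can be made quantitative with the textbook particle-in-a-box energies. Below is a minimal sketch (Python; the particle count, box sizes, and function names are my own illustrative choices): N spinor-particles fill the levels E_n \propto n^2/L^2 two at a time, and the resisting force F = -dE/dL grows rapidly as the box shrinks.

```python
import numpy as np

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M = 9.1093837015e-31      # electron mass, kg (an arbitrary choice of particle)

def ground_state_energy(n_particles, box_length):
    """Total energy of spin-1/2 particles in a 1D infinite square well.
    Each level E_n = (hbar*pi*n)^2 / (2*m*L^2) holds two particles,
    one spin up and one spin down, by the exclusion principle."""
    levels = np.arange(1, n_particles // 2 + 1)
    return np.sum(2 * (HBAR * np.pi * levels)**2 / (2 * M * box_length**2))

N = 100
for L in [2e-9, 1e-9, 0.5e-9]:          # squeeze the box
    dL = 1e-13
    E = ground_state_energy(N, L)
    F = -(ground_state_energy(N, L + dL) - E) / dL   # F = -dE/dL > 0
    print(f"L = {L:.1e} m:  E = {E:.2e} J,  resisting force F = {F:.2e} N")
```

Since E scales as 1/L^2, the resisting force scales as 1/L^3: halving the box multiplies the push-back by eight, which is the qualitative behavior of a contact force.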
Case study: what makes metal hard?

I decided one day that I had gone long enough without knowing why things are hard, so I went to the library to find out. The answer lies in "that other branch of physics," namely, everything but particle physics/cosmology. The majority of physicists start with protons, neutrons, and electrons and derive from these everything we encounter in our macroscopic lives, including tangibility. I expected "Why things are hard" to be a chapter in a standard Solid State Physics textbook. At least, I expected it to come in a "Basic Properties" section, before applications to transistors. I never found a general answer to my question, and I suspect that the exact answer might differ in the details for every material. The Pauli exclusion principle is probably involved at some level for all of them, though it could be obfuscated by other effects. From my reading, I was able to derive the hardness of one simple, but undeniably hard, material: metal. I need to apply some approximations, and only carry out my calculation to the nearest order of magnitude, but a rough agreement with the measured hardness of metal gives me some confidence that this is the correct explanation.

Many metals can be described by the following picture: a lattice of atomic nuclei, all the same element, surrounded by several filled shells of electrons. Electrons are spinor-particles, so they obey Pauli's exclusion principle: only one can fill a given orbital+spin energy level. The innermost shell accepts exactly 2 electrons (purely determined by spin), the next takes 8, the next 18, etc. Beyond the filled shells, the atoms may require 3 to 6 more electrons to be neutral, but this is not enough to fill a shell. These "valence" electrons are loosely bound and roam from nucleus to nucleus. The nuclei attract each other electrostatically, because although most of their charge is screened by the filled shells and the valence electrons, not all of it is. This is the attractive force that keeps metal from flying apart. The repulsive force that balances it, and resists the force of my swinging fist, derives from the dependence of the valence electrons' energy on the size of the metal object, highly amplified by the Pauli exclusion principle. It's worth noting that elements with no valence electrons are all formless gases.

To derive the hardness of metal, we must consider how the internal energy responds to crushing, just like the example of the vice-grip on a line of particles. The quantity that measures three-dimensional crushing is called the bulk modulus,

\displaystyle B = -V\frac{dP}{dV} = V\frac{d^2E}{dV^2}

also known as 1/\kappa, the inverse compressibility. (Note the sign: since P = -dE/dV, the two expressions agree.) We need to calculate the energy of the metal as a function of volume. Normal humans cannot crush metal to such an extent that the nuclei or their inner shells are threatened, so the valence electrons' energy is the only component that matters. The potential energy that the valence electrons feel is a regular lattice of 1/|r| wells, the Coulomb potential due to each nucleus. But do we need all of this detail? Back-of-the-envelope calculations of electron wavelengths yield 10 nm at the smallest, which is tens to hundreds of times the interatomic spacing. The valence electrons will therefore see a smoothed potential that looks remarkably like the infinite square well at the beginning of most Quantum Mechanics books. For the most part, the nuclei just keep the valence electrons from wandering away.

[Figure: approximation of the potential energy]

The solution to the Schrödinger equation for an infinite square well is sinusoidal with zero amplitude along the edges. An electron in the ground state is one big wave that fills the entire metal conductor: if the metal object is, say, a skyscraper, that's an enormous electron! Or consider the electron that sits in the metallic hydrogen core of Jupiter. Fundamental particles really are waves; their sizes are not intrinsic. The three-dimensional infinite square-well problem is solved in detail on the "Fermi Sea" Wikipedia page, in exactly our context: valence electrons in metal. The energy of a given state is

\displaystyle E(n_x,n_y,n_z) = \frac{\hbar^2\pi^2}{2m}\left(\frac{{n_x}^2}{{L_x}^2} + \frac{{n_y}^2}{{L_y}^2} + \frac{{n_z}^2}{{L_z}^2}\right)

where we will ignore dimensionless constants of order one. If the electrons did not obey Pauli's exclusion principle, the total energy of N electrons would be

\displaystyle E_{\mbox{\scriptsize tot}}=\frac{\hbar^2}{mV^{2/3}}N

because they would all be in the ground state (barring excitation due to thermal noise). Since electrons are spinor-particles, the exclusion principle applies and the electrons fill one state each, from the ground state up to the N^{\mbox{\scriptsize th}} state. (Thermal excitation only blurs the top few levels.) We can integrate for the total energy by representing n_x,n_y,n_z as the first octant of a sphere; this is all worked out in detail on the above-mentioned Wikipedia page. The total energy is actually

\displaystyle E_{\mbox{\scriptsize tot}}=\frac{\hbar^2}{mV^{2/3}}N^{5/3}

The 5/3 power, applied to typical numbers of valence electrons (10^{23}), makes a difference of a factor of 10^{15} in total energy. Metal would be much squishier (and denser) without it. Now we can calculate the bulk modulus.
\displaystyle B=V\frac{d^2E}{dV^2}=V\frac{N^{5/3}\hbar^2}{mV^{8/3}}=\frac{N^{5/3}\hbar^2}{V^{5/3}m}= \left(\frac{N}{V}\right)^{5/3}\times 10^{-38} \mbox{ N m}^3

Armed with this prediction, I confronted the Periodic Table and was immediately overwhelmed by the qualitatively different kinds of metals. The transition metals, including some of the most familiar such as iron, gold, and tin, don't fit the simple picture I presented at the beginning of this derivation because they have two unfilled shells, and the other shell can be fairly large (holds 18). I don't know how many of these electrons to call "valence." The non-transition metals are divided into semi-metallic "metalloids" and post-transition "poor metals." I found the best agreement in the Boron and Carbon families, an indication that unaccounted-for systematic effects are lurking among the data, preferring certain electron configurations over others. Here are the data for the Boron and Carbon families, with an asterisk marking the poor metals.

Element     Nuclei/V (\times 10^{27}/\mbox{m}^3)   Valence electrons   Prediction (\times 10^{7}\mbox{ N}/\mbox{m}^2)   Measured (\times 10^{7}\mbox{ N}/\mbox{m}^2)   Ratio (meas/pred)
Boron           130    3    21000    32000    1.52
Silicon          48    4     6400    10000    1.56
Aluminum*        59    3     5600     7600    1.36
Thallium*        34    3     2200     4300    1.95
Tin*             36    4     4000     5800    1.45
Lead*            32    4     3300     4600    1.40

The fact that the ratio is not 1.0 is no surprise: we ignored constants of order unity. What is interesting is (a) the prediction and the measurement are the same order of magnitude, indicating that this mechanism really can explain most of the effect, and (b) there's a correlation between the measurement and the prediction: the values vary by a factor of 7 from Thallium to Boron, but their ratios vary by less than a factor of 1.5. So what happened to this being the effect which "becomes significant" in white dwarfs? It's certainly significant in ordinary matter! It just isn't a factor in normal stars, because stars are so hot that the kinetic energy of their electrons isn't limited to the minimum-energy states. When stars cool into white dwarfs, they become more like metal. Because I'm honest, here are the rest of the non-transition metals.

Arsenic          45    5     8300     2200    0.27
Antimony         32    5     4700     4200    0.89
Tellurium        28    6     5100     6500    1.27
Bismuth*         28    5     3800     3100    0.82

(No data for Gallium.) Including these, we can see variations of a factor of 7 in the ratio. Perhaps there isn't anything special about the Boron and Carbon families; it could have more to do with metalloids versus poor metals. There are way too few elements to do a statistical analysis; to find out what's really going on here, I would need to learn Chemistry.
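The prediction column can be reproduced directly from the formula above. A minimal check in Python (the script is mine; it uses the order-of-magnitude constant 10^{-38} N m^3 exactly as in the formula, and the densities and measured values copied from the table):

```python
# Order-of-magnitude bulk modulus: B = (N/V)^(5/3) * 1e-38 N*m^3,
# with N/V the number density of valence electrons.
C = 1e-38

# (element, nuclei/V in units of 1e27 per m^3, valence electrons,
#  measured B in units of 1e7 N/m^2) -- values copied from the table
elements = [("Boron", 130, 3, 32000), ("Silicon", 48, 4, 10000),
            ("Aluminum", 59, 3, 7600), ("Lead", 32, 4, 4600)]

for name, nuclei, valence, measured in elements:
    n_density = nuclei * 1e27 * valence        # valence electrons per m^3
    predicted = n_density**(5/3) * C / 1e7     # in units of 1e7 N/m^2
    print(f"{name:9s} predicted ~{predicted:6.0f}, measured {measured:6d}, "
          f"ratio {measured/predicted:.2f}")
```

Running this reproduces the table's prediction column to within rounding.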
Now it's time to back up: have we answered our question? We have just explained (roughly, at least) how metal resists being crushed, assuming that we have the power to move the edges of the square-well potential. But why would pushing a metal face move the edge of the potential? That potential is set by the nuclei: couldn't I push my finger ghost-like into the metal, leaving the nuclei, and therefore the square-well potential, fixed? (The metal's nuclei and the nuclei in my finger are both very small; they won't collide.) Here's how I think about it: suppose we push a block of metal with a metal finger, all the same substance. The block plus the finger can be considered one object, and if they could interpenetrate, that would be the same as saying that the block-plus-finger object is shrinking, and the bulk modulus would be sure to prevent that! My finger is not made of metal, but it has the same effect. This still amazes me, because the outermost electrons in my finger are not free-roaming valence electrons; they form a wide variety of configurations, and yet it all still works: solids don't interpenetrate.

Conclusion: which explanation was right?

As we have seen, Pauli's exclusion principle has more to do with why things are hard than electromagnetism. But its role is not particularly simple, and it isn't enough to say that "two particles can't sit in the same place at the same time," because they can and they do. The electrons (or really, electron-waves) in a metal all fill the whole structure, but at strictly different energy levels. And even this isn't the direct cause of the contact force but an amplifier of it: the direct cause is the fact that the energies of all the electron states scale with the size of the box they are forced to live in. One could point out that it is electromagnetism that keeps them in the box, but if that means that electromagnetism wins, it wins by an enormous technicality. The disturbing thing about this picture is that it is not general. The exact reason that one material, like metal, is hard is not necessarily the reason that another material, like my finger, is somewhat hard. This might account for the vast diversity in pliability and texture in nature, but it makes me wonder if there's a more general way of looking at it that I'm missing, or if such a complicated rule has an exception. Could some highly engineered substance be made intangible, like recent materials designed for invisibility? That's a tricky problem, because the same force that resists outside pressure is the force that keeps metal from collapsing. To make an intangible solid, one must find a way to balance the attractive force of the substance's internal particles without being influenced by penetration from outside. We could do this if there were new fields in the Schrödinger equation which are strongly coupled to each other but weakly coupled to our fields. That would be a ghost universe, with planets we could orbit but never land on; in fact, we could orbit inside them! This line of reasoning is unfair, because you can't make new technology by rewriting the laws of physics. There is some value to thinking about it, though, because dark matter is a field with very little coupling to our own matter. This has been firmly established with gravitational observations (we could definitely orbit it), but not directly, with physical detectors (emphasizing the point that it couples very weakly). We don't know if there are new strong interactions felt only by dark matter, which would be necessary to make dark planets, and all indications so far suggest that the vast majority of dark matter is softer than gas. But suppose that there are different kinds of stable dark matter particles, and only a small fraction of them interact strongly with themselves. This could be enough to make a planet here and there… We'll find out when the dark matter people living in the center of the earth send up a satellite to discover, "What's beyond the Mantle?"

17 Responses to "Why Stuff is Hard"

1. Kea Says: Don't worry – it's a fantastic post. We're just speechless at your blogging skills. I especially liked the point about quantized masses.

2. Rueben Says: …way too long an explanation for us dummies. "If you can't explain it to your grandmother — you don't understand it." {– A. Einstein}
3. Jonathan Vos Post Says: I think this sweeps some hard problems under a soft rug. We know that crystals of various substances and crystallographic point groups exist and are stable in 3+1 dimensional space (x,y,z,t). But only in the past decade or so can we mathematically suggest why that stability is so. Most solid matter is not crystalline. Except for little bits of tooth and bone, we ourselves are squishy wet soft stuff. Why is soft matter stable? Why are DNA and RNA and proteins in protoplasm stable? Even more to the point: what is the actual structure of liquid water, and why is it stable? The question of loops versus strings in water were debated in the past couple of years between Los Alamos people and other people. Why is glass hard? These are serious questions.

4. Kea Says: Agreed, Jonathan, but would you mind clarifying the mysterious comment:

5. jpivarski Says: Hi all, thanks for your comments! I had to choose a level for this article, and I chose to write it for interested mathematicians, and myself five years ago. I can't write about the specifics of soft matter, glass, and water, because I don't know much about them. However, I also want to understand this subject in general: it's disturbing that most things in the world feel solid, but we need a separate explanation for each of them. I think I've found a truly general argument (see my next post), and that one necessarily glosses over details. I, too, would like to learn if water is stringy (or loopy). Is it?

6. Kea Says: It's neither stringy or loopy because these are failed attempts at QG.

7. Michael D. Cassidy Says: Thanks for this post, though there are parts that went by me, it was wonderful to read.

9. Abubakar Mahre Says: I am a student from the department of Geological engineering from Kaduna Polytechnic, Nigeria. On a project, trying to know what makes a material hard. But couldn't find any meaning.

11. John Heath Says: Knowing what an electron is and by what means it likes a positron but not another electron would go a long way towards answering the question "why stuff is hard". No answer on my end but it would be nice to know what an electron is other than .511 MeV and the smoke and mirrors of quantum probabilities. Where is the beef?
Sensors 2011, 11(3), 2426–2446; doi:10.3390/s110302426

Article

Modeling of Nonlinear Aggregation for Information Fusion Systems with Outliers Based on the Choquet Integral

Kuo-Lan Su 1,*, You-Min Jau 2 and Jin-Tsong Jeng 3

1 Department of Electrical Engineering, National Yunlin University of Science & Technology, 123 University Road, Section 3, Douliou, 64002 Yunlin, Taiwan
2 Graduate School of Engineering Science and Technology, National Yunlin University of Science & Technology, 123 University Road, Section 3, Douliou, 64002 Yunlin, Taiwan
3 Department of Computer Science & Information Engineering, National Formosa University, Wunhua Road, Huwei Township, 64632 Yunlin, Taiwan
* Author to whom correspondence should be addressed; Tel.: +886-5-5342601 ext. 4248; Fax: +886-5-5342065.

Received: 25 December 2010 / Revised: 25 January 2011 / Accepted: 15 February 2011 / Published: 25 February 2011

© 2011 by the authors; licensee MDPI, Basel, Switzerland.

Abstract: Modern information fusion systems essentially associate decision-making processes with multi-sensor systems. Precise decision-making processes depend upon aggregating useful information extracted from large numbers of messages or large datasets; meanwhile, distributed multi-sensor systems, which employ several geographically separated local sensors, are required to provide sufficient messages or data with similar and/or dissimilar characteristics. These kinds of information fusion techniques have been widely investigated and used for implementing several information retrieval systems. However, the results obtained from information fusion systems vary in different situations, and performing intelligent aggregation and fusion of information from a distributed multi-source, multi-sensor network is essentially an optimization problem. A flexible and versatile framework which is able to solve complex global optimization problems is a valuable alternative to traditional information fusion. Furthermore, because of the highly dynamic and volatile nature of the information flow, a swift soft computing technique is imperative to satisfy the demands and challenges. In this paper, a nonlinear aggregation based on the Choquet integral (NACI) model is considered for information fusion systems that include outliers under inherent interaction among feature attributes. The estimation of interaction coefficients for the proposed model is also performed via a modified algorithm based on particle swarm optimization with quantum-behavior (QPSO) and the high-breakdown-value estimator, least trimmed squares (LTS). From simulation results, the proposed MQPSO algorithm with LTS (named LTS-MQPSO) readily corrects the deviations caused by outliers and swiftly achieves convergence in estimating the parameters of the proposed NACI model for information fusion systems with outliers.

Keywords: information fusion; multi-sensor systems; Choquet integral; particle swarm optimization with quantum-behavior; least trimmed squares

1. Introduction

In the modern world, to make optimum decisions in economics, industry, science, aeronautics, manufacturing, traffic control, and many other military and civilian applications, we are extremely dependent on useful and crucial information which is drawn from messages or data via transformation, classification and/or other processing. Therefore, multi-sensor systems providing these messages or data are becoming increasingly important in meeting the goals of optimum decision-making.
Besides, a feasible model to elaborate on information fusion and a soft computing technique to perform the heavy computations required are also critical. Within the consideration of a feasible model, traditionally, the most common forms are the weighted average model and the linear regression model. These models are all linear and assume that there is no interaction among feature attributes (i.e., input information). However, in many real-world systems, the inherent interaction among feature attributes must be considered circumspectly, and these kinds of systems are essentially non-additive systems. Hence, a nonlinear aggregation based on a nonlinear integral (NANI) model with respect to a non-additive set function is a powerful way of coping with these kinds of systems. In general, the Choquet integral is the most frequent form of the nonlinear integral, and some literature proposing its use exists [1–4]. Liu et al. [1] proposed a NACI model derived from one of the following three kinds of fuzzy supports: the bespoke fuzzy support, the sample relative fuzzy support and the response correlative fuzzy support. This model deals with the interaction among feature attributes based on the correlation in statistics. Wang et al. proposed the original [2] and weighted [3,4] NACI models to deal with information with numerical and categorical feature attributes, respectively. In fact, the weighted NACI model is the generalized form of the original one. In these two models, the interaction among the feature attributes toward the objective attributes (i.e., outputs) is described by non-additive set functions and is essentially derived from the co-relationship in the statistics. Although the weighted NACI model is successful in describing the interaction among hybrid feature attributes, more parameters have to be estimated than in the original NACI model: for a system with n-dimensional feature attributes, there are 2^n + n parameters that must be determined, and the number of parameters obviously increases exponentially with the dimension of the feature attributes. Exactly determining these parameters is essentially an optimization problem, and the basic idea consists of making the residuals as small as possible. Residuals here are defined as the difference between what is actually observed and what is estimated. To minimize residuals, traditionally, the least squares (LS) method is introduced, and typically it achieves a remarkable estimation under circumstances where all attributes are uncontaminated. Unfortunately, in real-world applications these feature and objective attributes are always subject to outliers. That is, outliers may occur for various reasons, such as erroneous measurements or data with a heavy-tailed distribution function. Whenever outliers exist, they always cause a serious deviation in what is estimated. Within the outlier detection literature [5–7], the least trimmed squares (LTS) estimator and the least median of squares (LMS) estimator are the most popular ways of eliminating the effects caused by outliers. The LTS estimator not only possesses a high breakdown value but also several advantages over the LMS estimator; therefore, in this study we have focused our efforts on the LTS estimator to eliminate the interference from outliers. That is, we propose a feasible model able to effectively reject outliers, which is also a contribution of this paper to the fuzzy integral problem.
Confirming the feasible model, and following the previous analysis, the next challenge is to efficiently and swiftly estimate the model's parameters so that they satisfy specific criteria. That is, a timesaving soft computing technique is necessary for an information fusion system with contaminated attributes. In the literature, there are many outstanding soft computing techniques that qualify for this task, such as neural networks (NN) [8], genetic algorithms (GA) [9], and ant colony optimization (ACO) [10]. Particle swarm optimization with quantum-behavior (QPSO), an improved version of the traditional particle swarm optimization (PSO) [11], is one of the powerful choices [12,13]. In the QPSO algorithm, particles are bounded in the search range just like electrons moving in a quantum well; meanwhile, according to the uncertainty principle, a particle's position and velocity cannot be determined simultaneously. Hence, the information of a particle in quantum space is depicted by probabilities (i.e., a wave function), and the dynamic behavior of a particle is widely divergent and dominated by the Schrödinger equation. The QPSO algorithm ensures the congregation of the particle swarm without losing randomness. Within the QPSO algorithm, particles can appear at any position of the whole searched space with a certain probability. This algorithm offers high performance in single-mode systems because of its swift convergence. However, in multi-mode optimization systems particles usually fall into local extreme states and then exhibit premature convergence. In order to make use of the merit of quick convergence and conquer the premature convergence of the traditional PSO, we proposed a QPSO algorithm with the elitist crossover mechanism of the GA (named MQPSO) in our previous work [14] and demonstrated superior performance to the GA in estimating model parameters. In this paper, we improve the MQPSO algorithm proposed in our previous work to manipulate systems with outliers. That is, the mechanism of the LTS estimator is introduced to eliminate deviations caused by outliers and enhance the robustness of the MQPSO algorithm. To distinguish it, the revised MQPSO algorithm is named LTS-MQPSO. The most significant improvement is that the LTS-MQPSO algorithm combines the concepts of simulated annealing (SA) and the GA within the QPSO algorithm to achieve global search and overcome prematurity in the optimization process, respectively; meanwhile, the LTS estimator is also performed to eliminate the interference from outliers. In order to verify the proposed LTS-MQPSO algorithm, a numerical example is also performed in this study. From the results of the experiment, the proposed LTS-MQPSO algorithm is able to acquire reasonable parameters for the NACI model and make quite precise decisions. The rest of the paper is organized as follows: in Section 2, we introduce the NACI model and characterize the information fusion system. In Section 3, the least trimmed squares estimator and the QPSO algorithm are briefly described. In Section 4, we propose the LTS-MQPSO algorithm in detail. Section 5 presents the results of the numerical simulation, and the paper is concluded in Section 6.
2. The NACI Model and Information Fusion System Characterization

In traditional linear aggregations, the most frequent model used to describe the relation between the feature attributes X and the objective attribute Y is the Lebesgue-like integral [15]:

\displaystyle Y = \kappa_0 + \kappa_s \int f \, d\upsilon + e_r \qquad (1)

where \kappa_0 is a constant, \kappa_s is a scaling factor, the integrand f represents observations over the scope of the feature attributes X, \upsilon is an additive measure which indicates the relative contribution of each element of the feature attributes, and e_r is the error term, which has the form of a normally distributed random perturbation with zero mean and variance \sigma^2. This linear model always performs a good approximation under the fundamental assumption that there is no interaction among feature attributes. However, in many real-world systems, the inherent interaction among feature attributes must be considered circumspectly. To reasonably describe the inherent interaction among feature attributes, Wang and Klir [16,17] proposed a regular non-additive set function \mu named the normalized general measure (NGM). The NGM is defined on the power set P(X) of the feature attributes, and its formal definition can be expressed as:

\displaystyle \mu(\emptyset) = 0, \quad \mu(X) = 1 \qquad (2)

\displaystyle \forall A, B \in P(X) \mbox{ with } A \subseteq B: \ \mu(A) \le \mu(B) \qquad (3)

Besides, a nonlinear integral is also introduced to aggregate the feature attributes. That is, whenever we deal with information fusion systems where the information possesses some inherent interactions, the nonlinear integral with respect to the NGM is the most reasonable tool. In practical applications, there are many kinds of nonlinear integrals, such as the Choquet integral [18], the Sugeno integral [19], the Wang integral [20], and so on. The Sugeno integral, by definition, is similar to logical operations and thus is not an extension of the Lebesgue-like integral. Although the Sugeno integral is very timesaving to perform, it cannot be precisely inverted, and this is a fatal defect. On the other hand, the Wang integral has been shown to possess remarkable properties; however, it is rather complex and quite time-consuming to perform. Those are the main reasons why the Choquet integral is adopted in this paper. The Choquet integral with respect to the NGM is defined as follows:

\displaystyle \int f \, d\mu = \int_0^{\infty} \mu(F_\alpha) \, d\alpha \qquad (4)

where f = \{f(x_1), f(x_2), \ldots, f(x_n)\} is a non-negative measurable function with n dimensions on X, and F_\alpha = \{x \mid f(x) \ge \alpha, x \in X\}, \alpha \in [0, \infty), is called the \alpha-cut set of the function f. Since X is a finite set, the values of the measurable function f can be sorted as:

\displaystyle \min_{1 \le i \le n} f(x_i) = f(x_1^*) \le f(x_2^*) \le \cdots \le f(x_n^*) = \max_{1 \le i \le n} f(x_i) \qquad (5)

where \{x_1^*, x_2^*, \ldots, x_n^*\} is a permutation of \{x_1, x_2, \ldots, x_n\}. Then, the discrete form of the Choquet integral with respect to the NGM defined above can be expressed as:

\displaystyle \int f \, d\mu = \sum_{i=1}^{n} \left( f(x_i^*) - f(x_{i-1}^*) \right) \mu(\{x_i^*, x_{i+1}^*, \ldots, x_n^*\}), \quad \mbox{with } f(x_0^*) = 0 \qquad (6)

Compared to the linear aggregation model shown in Equation (1), \mu(\{x_i\}) represents the relative strength of the contribution to the objective attribute Y by a single feature attribute x_i, and \mu(A), A \in P(X), represents the joint relative strength of the contribution to Y by the feature attribute set A.
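For concreteness, Equation (6) is straightforward to implement. A minimal Python sketch (the dictionary-based representation of μ and the toy numbers are my own illustrative choices, not from the paper):

```python
import numpy as np

def choquet_integral(f, mu):
    """Discrete Choquet integral of f = [f(x_1), ..., f(x_n)] with respect
    to a set function mu, given as {frozenset_of_indices: value}."""
    f = np.asarray(f, dtype=float)
    order = np.argsort(f)                 # indices sorted so f ascends
    total, f_prev = 0.0, 0.0
    for k, idx in enumerate(order):
        subset = frozenset(order[k:])     # {x_i*, x_{i+1}*, ..., x_n*}
        total += (f[idx] - f_prev) * mu[subset]
        f_prev = f[idx]
    return total

# Toy example with n = 2; mu must satisfy mu({}) = 0 and mu(X) = 1.
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
print(choquet_integral([0.2, 0.7], mu))   # 0.2*1.0 + (0.7 - 0.2)*0.5 = 0.45
```

When μ is additive this reduces to an ordinary weighted sum; the interaction between attributes enters precisely through the values μ assigns to the non-singleton subsets.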
In addition, to simultaneously deal with observations with categorical attributes and numerical attributes, the NACI model relating the hybrid attributes X to the objective attribute Y can be expressed by the following formula [4]:

\displaystyle y = c + q \int (\omega f) \, d\mu + e_r \qquad (7)

where c and q are constants, \int (\omega f) \, d\mu is the Choquet integral of the function \omega f with respect to the NGM \mu, and the vector \omega = (\omega_1, \omega_2, \ldots, \omega_n) is an n-dimensional weighting vector used for coping with categorical attributes, i.e., for balancing the units among the various attributes. It satisfies the following constraint:

\displaystyle 0 < \omega_i \le 1 \ \mbox{ and } \ \max_{1 \le i \le n} \omega_i = 1, \quad i = 1, 2, \ldots, n \qquad (8)

In the NACI model, the constants c and q, the vector \omega and the NGM \mu are all parameters of the model. In total there are 2^n + n unknown parameters, and this number increases exponentially with the dimension of the feature attributes. In order to complete the NACI model, these model parameters have to be determined in advance. This is the so-called training state of the NACI model. In training, associating Equation (7) with the available observations constitutes an over-determined system with the Choquet integral, so an analytic solution for the model parameters cannot be obtained exactly. Furthermore, the constants c and q are essentially different from the other parameters, which are governed by the Choquet integral. Therefore, a dual optimization procedure must be performed simultaneously; meanwhile, a performance index of the optimization, J (called the fitness function), is introduced and expressed as:

\displaystyle J = \min \{ e^2 \} = \min \left\{ \sum_{j=1}^{k} \left( y_j - c - q \int (\omega f_j) \, d\mu \right)^2 \right\} \qquad (9)

where k is the number of available observations for the training state. Because the kernel of this performance index is the LS estimator, it always suffers from atypical observations which arise from outliers in real-world systems. That is, the LS method deviates seriously in estimating the model's parameters when outliers are present. Hence, it is also a major objective of this study to propose a feasible method for resolving this issue. The proposed method has to achieve not only precise model parameters but also a remarkable capability of rejecting outliers. In general, these kinds of problems are called robust regression, and many high-breakdown-value regression estimators have been proposed for them [6,7]. For reasons of simplicity and efficiency, the LMS and the LTS are the more popular regression estimators in scientific applications. Furthermore, the LTS estimator possesses not only the same breakdown value as the LMS, but also several additional merits: for instance, its objective function is smoother and its statistical efficiency is better. Therefore, we base the treatment of outliers on the LTS method, and thus Equation (9) is revised as:

\displaystyle J = \min \left\{ \sum_{j=1}^{h} \left( y_j^* - c - q \int (\omega f_j^*) \, d\mu \right)^2 \right\} \qquad (10)

where y_j^* and f_j^* are a permutation of the observations under the best model parameters and h is the trimming parameter of the LTS estimator. Block diagrams of the proposed structure for the training state and for the information fusion system are shown in Figures 1 and 2, respectively. In Figure 1, the block named MQPSO receives the differences between the observed and estimated objective attributes while the termination criterion is not yet satisfied; meanwhile, the parameters of the NACI model are updated based on these differences.
Another block, named LTS, is used for filtering out the atypical observations, and the trimming parameter h of the LTS estimator is also revised according to the best global parameters found so far. Besides, the block named "Non-additive systems with outliers" is the system under consideration; that is, it is the source of the training data (observations) used for modeling the NACI. The block named "Subset of observations" represents the observations after the LTS; that is, the "Subset of observations" is also a non-additive system, but different from the raw observations (the non-additive system with outliers). In Figure 2, the block named "feature attributes of information" depicts the continued observations over a period in which the decision profile (DP) is produced. Associating the DP with the model parameters acquired in the training state, the decision can usually be made precisely. Besides, the block named "decision by majority" guarantees that we are always able to make a correct decision in a lightly contaminated environment.

3. The LTS Estimator and the QPSO Algorithm

The LTS estimator is formulated as:

\displaystyle \min \left\{ \sum_{i=1}^{h} r^2(d_i) \right\}, \quad \mbox{with } r^2(d_1) \le r^2(d_2) \le \cdots \le r^2(d_h) \le \cdots \le r^2(d_k) \qquad (11)

where d_i is the i-th observation, r^2(d_i) is the i-th smallest squared residual, k is the number of observations and h is the number of data points which are not trimmed from the data set. In robust regression analysis [6], the breakdown point (the maximum tolerance to outliers) of any equivariant regression estimator satisfies:

\displaystyle \mbox{Breakdown Point} \le \frac{1}{k} \left( \left\lfloor \frac{k - \zeta}{2} \right\rfloor + 1 \right) \qquad (12)

where \zeta is the dimension of the variables. Intuitively, the breakdown point is bounded above by 50%. The maximum breakdown point of Equation (12) is actually attained with h = \lfloor (k + \zeta + 1)/2 \rfloor in a multiple regression system, and the solution of Equation (11) always exists. Of course, one can achieve the optimal solution by considering \binom{k}{h} ordinary least squares problems, one for each subset of \{1, 2, \ldots, k\} with h elements, and selecting the best one among all candidates. Obviously, this is laborious and impractical for real-world systems with large numbers of observations. In order to cope with a great number of observations, the FAST-LTS method was proposed [7]. Its major distinguishing features are the initial h-subset, the C-step and the nested extensions. By and large, the initial h-subset is a preselection mechanism to confirm that a clean h-subset \{d_1, d_2, \ldots, d_h\} drawn from all observations can be attained. The C-step is a recursive procedure used to increase the accuracy of the estimated model parameters. This recursive procedure estimates model parameters \theta_{ini} with the LS estimator based on a clean h-subset \{d_1^*, d_2^*, \ldots, d_h^*\} created by the initial h-subset procedure. Then, the new squared residuals and a new h-subset \{d_1^{new}, d_2^{new}, \ldots, d_h^{new}\} are acquired in turn. With the new h-subset, the estimate \theta_{new} is more accurate than \theta_{ini}. Repeating these procedures, a set of precise model parameters \theta can be achieved.
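To make the C-step concrete, here is a minimal sketch for an ordinary linear regression (Python with NumPy; the paper applies the same idea to the NACI model, and the function name, toy data and single random start are my own simplifications, whereas the real FAST-LTS uses many starts):

```python
import numpy as np

def c_step(X, y, h, subset, n_iter=20):
    """Iterate the C-step: fit least squares on the current h-subset,
    then keep the h observations with the smallest squared residuals."""
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        residuals_sq = (y - X @ beta) ** 2
        new_subset = np.argsort(residuals_sq)[:h]   # h smallest residuals
        if set(new_subset) == set(subset):          # converged
            break
        subset = new_subset
    return beta, subset

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(0, 1, 100)])
y = 5.0 + 1.2 * X[:, 1] + rng.normal(0, 0.01, 100)
y[:10] += 3.0                                       # 10% gross outliers
beta, subset = c_step(X, y, h=75,                   # h = 75%, as in the paper
                      subset=rng.choice(100, 75, replace=False))
print(beta)   # close to [5.0, 1.2] despite the outliers
```

Each C-step provably does not increase the trimmed objective of Equation (11), which is why the iteration converges to a local optimum of the LTS criterion.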
For a small to moderate data size k, these two procedures work well and do not take much time. When the number of observations is large, for instance k > 600, the performance of these two procedures is poor and they take much more time. To deal with this situation, the procedure named nested extension is introduced. In nested extensions, the data is partitioned into many subsets, and the initial h-subset and the C-step are applied to each subset. Next, each subset, with \lambda feasible solutions, is extended to the full set of observations and the C-step procedure is performed repeatedly. Finally, an optimal solution that satisfies the specific desired accuracy is achieved. After drawing observations without contamination, a proper soft computing technique is essential and can help us to efficiently estimate the parameters of the NACI model. In the literature there are many outstanding soft computing techniques that qualify for this work. The QPSO algorithm is one of these soft computing techniques, and it possesses significant global and local search abilities. In the QPSO algorithm, particles move in a quantum multi-dimensional space, and the state of a particle is depicted by a normalized wave function \Psi(\rho, t), i.e., the probability amplitude of the position where the particle is present; further, |\Psi(\rho, t)|^2 is interpreted as the corresponding probability density function, which satisfies:

\displaystyle \int_{\mbox{\scriptsize whole space}} |\Psi(\rho, t)|^2 \, d\rho = 1 \qquad (13)

where \rho denotes the n-dimensional coordinates. That is, a single particle with mass m is subjected to the influence of a potential field V(\rho, t) in the quantum space, and the wave function is governed by the Schrödinger equation:

\displaystyle i \hbar \frac{\partial}{\partial t} \Psi(\rho, t) = -\frac{\hbar^2}{2m} \nabla^2 \Psi(\rho, t) + V(\rho, t) \Psi(\rho, t) \qquad (14)

where \hbar is the reduced Planck constant and \nabla^2 is the Laplacian operator. In an environment with a potential field, the particles are attracted toward the center of the field through the optimization process, and this attraction leads to the global optimum. Based on the assumption that the attractive potential field is time-independent (the so-called stationary state), the solution of the time-independent Schrödinger equation has the form [21]:

\displaystyle \Psi(\rho, t) = \varphi(\rho) \exp(-i \omega t) \qquad (15)

where \omega has the dimensions of an angular frequency. In theory, any type of potential well can describe this system, which is bounded and attracted by a potential field. However, the simplest one is the delta potential well, for which the potential field is given by:

\displaystyle V(\rho) = -\gamma \, \delta(\rho) \qquad (16)

where \gamma is a positive number proportional to the "depth" of the potential well. The meaning of Equation (16) is that the depth is infinite at the origin and zero elsewhere. For the sake of simplicity, the solution of the time-independent Schrödinger equation for this system in one-dimensional space is considered and expressed as:

\displaystyle Q(z) = |\varphi(z)|^2 = \frac{1}{L} e^{-2|z|/L}, \quad L = \frac{\hbar^2}{m \gamma} \qquad (17)

where Q(z) is the probability density function for measuring a particle's state and L is the characteristic length of the delta potential well. L specifies the search scope of a particle and is called "Creativity" or "Imagination." In order to obtain the precise position of a particle, the Monte Carlo method is used to simulate the procedure whereby the quantum state collapses to the classical state. After this effort, the particle's position can be expressed as:

\displaystyle \varphi_i = pf_{cnt}(i) \pm \frac{L}{2} \ln(1/u), \quad i = 1, 2, \ldots, NP \qquad (18)

where NP is the number of particles in the population, u is a random number uniformly distributed on [0,1], and pf_{cnt} is the center of the potential field, proposed by Clerc and Kennedy [22] and defined as:

\displaystyle pf_{cnt}(i) = \frac{c_1 p_i^{loc} + c_2 p^{gol}}{c_1 + c_2}, \quad i = 1, 2, \ldots, NP \qquad (19)

where c_1, c_2 are constriction coefficients and p_i^{loc}, p^{gol} are the best position of the i-th particle and the global best position found so far. In order to improve the performance of the QPSO algorithm, Sun et al.
[13] employ a Mainstream Thought Point (also named the Mean Best Position, mbest) to evaluate the parameter L. However, to extend the global search of the QPSO algorithm, the mbest is modified, and these two parameters can then be expressed in the following form:

\displaystyle mbest = \left[ \frac{1}{NP} \sum_{i=1}^{NP} \varphi_{i,1}, \ \frac{1}{NP} \sum_{i=1}^{NP} \varphi_{i,2}, \ \ldots, \ \frac{1}{NP} \sum_{i=1}^{NP} \varphi_{i,n} \right] \qquad (20)

\displaystyle L = 2 \beta \, |mbest - \varphi_i| \qquad (21)

where \beta is a creative coefficient which is used to adjust the convergence speed of the individual particles and the performance of the QPSO algorithm. Hence, the particle's position can be updated in each iteration by:

\displaystyle \varphi_{i+1} = pf_{cnt}(i) \pm \beta \, |mbest - \varphi_i| \, \ln(1/u) \qquad (22)

4. The LTS-MQPSO Algorithm

In empirical applications, however, the QPSO algorithm usually exhibits a stagnating phenomenon when searching for the global optimal solution in multi-mode problems and systems. Meanwhile, it is also strongly influenced by the creative coefficient \beta. In order to remedy these defects, the updating mechanism of the creative coefficient \beta in the MQPSO algorithm proposed in our previous work is revised. That is, the modified MQPSO algorithm combines the QPSO algorithm with mechanisms of the SA and the GA to achieve global search and to overcome the premature convergence of the traditional PSO in the optimization process. Two significant improvements are introduced in the modified MQPSO algorithm: the nonlinear updating of the creative coefficient \beta in the manner of the SA, and the instantaneous monitoring of the convergence of the optimization procedure. In the QPSO algorithm, the creative coefficient \beta is set to a large number at the beginning and adjusted decreasingly as the optimization proceeds. Such a mechanism effectively ensures that a global search is performed at the beginning and that convergence is finally achieved. In general, the decreasing rate of \beta is linear, but a nonlinear revision according to the convergence of the optimization process is more reasonable and feasible. In the modified MQPSO algorithm, a nonlinear revising mechanism similar to the SA algorithm is introduced and expressed as:

\displaystyle \beta = \beta_{ini} - \Delta\beta \left( 1 + \exp(-\Delta fit) \right)^{-1} \qquad (23)

where \Delta\beta is the step length of \beta, \Delta fit is the changing rate of the optimal estimate so far and \beta_{ini} is the initial value of \beta. A typical curve of \beta as adjusted by \Delta fit is shown in Figure 3. The other improvement of the modified MQPSO algorithm is the mechanism to overcome prematurity. Inspired by the mechanisms of mutation and elite crossover in the GA, an index of conquering stagnation (named ECM, an abbreviation of Elite Crossover and Mutation) is used to monitor the status of the optimization procedure in the modified MQPSO algorithm. That is, during the optimization procedure, the modified MQPSO algorithm preserves each distinct p^{gol}; meanwhile, the index of conquering stagnation, ECM, is set to zero whenever p^{gol} is updated. Of course, ECM increases by one whenever p^{gol} is unchanged. Before finishing the current iteration, the modified MQPSO algorithm judges whether ECM exceeds a specified criterion. If it does, the modified MQPSO algorithm lets the new population be the collected p^{gol} instead of the original population (all of it, or the worse particles) and instantaneously resets ECM to zero. For observations without outliers, the MQPSO algorithm offers better performance in estimating parameters than the GA [14].
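A minimal sketch of the position update, Equations (19)-(22), together with the β schedule of Equation (23), in Python (the names, the random draw of c1 and c2, and the toy swarm are my own illustrative assumptions, not the paper's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_schedule(beta_ini, d_beta, d_fit):
    """Nonlinear annealing of the creative coefficient, Eq. (23)."""
    return beta_ini - d_beta / (1.0 + np.exp(-d_fit))

def update_position(phi, p_loc, p_gol, mbest, beta):
    """One quantum-behaved update of a single particle, Eqs. (19)-(22)."""
    c1, c2 = rng.uniform(0.0, 1.0, 2)                  # constriction coefficients
    pf_cnt = (c1 * p_loc + c2 * p_gol) / (c1 + c2)     # Eq. (19)
    u = rng.uniform(0.0, 1.0, size=phi.shape)
    sign = rng.choice([-1.0, 1.0], size=phi.shape)     # the +/- in Eq. (22)
    return pf_cnt + sign * beta * np.abs(mbest - phi) * np.log(1.0 / u)

# Toy usage: a swarm of 5 particles in 3 dimensions.
swarm = rng.uniform(0.0, 1.0, (5, 3))
mbest = swarm.mean(axis=0)                             # Eq. (20)
beta = beta_schedule(beta_ini=1.0, d_beta=0.5, d_fit=0.01)
print(update_position(swarm[0], swarm[0], swarm[2], mbest, beta))
```

The heavy tail of ln(1/u) is what lets a particle occasionally jump far from pf_cnt, preserving global search even as β is annealed down.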
Because the kernel of the fitness estimation is the LS estimator, the MQPSO algorithm always deviates seriously in contaminated circumstances. Therefore, the LTS estimator is introduced to sieve out the observations without contamination. The proposed LTS-MQPSO algorithm is given below, and its flow chart is shown in Figure 4.

Step 1: Randomly initialize the population of particles with dimension 2^n − 2 + n and then evaluate their fitness values by Equation (10).
Step 2: Sort the particles according to their fitness values and then initialize p_i^{loc} and p^{gol}.
Step 3: Perform the LTS estimator to sieve out the h observations without contamination.
Step 4: Calculate pf_{cnt}, mbest and L by (19), (20) and (21), respectively.
Step 5: Select (24) or (25) with equal probability to update \varphi_i:

\displaystyle \varphi_i(t+1) = pf_{cnt}(i) - \beta \, \mbox{norm2}(mbest - \varphi_i(t)) \, \ln(1/u) \qquad (24)

\displaystyle \varphi_i(t+1) = pf_{cnt}(i) + \beta \, \mbox{norm2}(mbest - \varphi_i(t)) \, \ln(1/u) \qquad (25)

where norm2(ps_1 − ps_2) denotes the distance between ps_1 and ps_2.
Step 6: Evaluate the fitness values of all particles based on (10).
Step 7: According to the fitness values evaluated in Step 6, update p_i^{loc}.
Step 8: Check whether the maximum number of iterations is reached or the termination criterion is satisfied. If yes, go to Step 11; otherwise, perform the next step.
Step 9: Check whether p^{gol} has been updated. If p^{gol} has been updated, set ECM to 0, perform the LTS procedure, and go to Step 3. If p^{gol} is unchanged, increase ECM by 1 and perform the next step.
Step 10: Check whether the maximum ECM is reached. If yes, replace \varphi(t+1) with the collected p^{gol} and go to Step 4; otherwise, keep \varphi(t+1) and go to Step 4.
Step 11: Check whether p^{gol} should be updated, and then output the results.

5. Numerical Simulation and Results

The multi-sensor-based intelligent security robot (ISR) [23] consists of six subsystems, namely the sensor system, remote supervision system, software development system, image system, obstacle avoidance system and motion planning system. These subsystems acquire and preliminarily process sensory signals, and the sensory data is then transmitted through interface devices to the main controller (IPC) for further treatment. The hierarchical structure of the sensory systems used for the ISR is shown in Figure 5. In the fire detection subsystem and the intruder detection subsystem, the sensory data is transmitted through a digital input/output interface. That is, these two subsystems only send a decision, made by an information fusion system, to the IPC of the ISR. However, a wrong decision is often made whenever the sensory signal is contaminated with outliers. In this simulation, we focus our attention on the fire detection subsystem. This subsystem is constituted by environmental sensors, which include flame sensors, smoke sensors and temperature sensors. It is suitable for demonstrating and verifying the effectiveness and feasibility of the proposed information fusion system shown in Figures 1 and 2. Prior to performing the numerical simulation, the principles of these three sensors are briefly described. In the smoke sensor module, the kernel is a TG135 ionization smoke sensor. When smoke occurs, an ionizing radioactive source is brought close to the plates and the air itself is ionized; in other words, it generates a tiny current. For the flame sensor module, the R2868 ultraviolet sensor is used to detect the flame. Its peak sensitivity wavelength is 200 nm and its sensing wavelength range is 185–260 nm.
For the temperature sensor module, the AD590 semiconductor sensor is adopted to detect the temperature of a fire. This sensor has a positive temperature coefficient of about 0.7, and its linearity is within 0.5% over a temperature range from −65 °C to 150 °C. The standard output of the AD590 is 1 μA/K. In general, these sensory signals are all tiny and have to be converted to a standardized voltage output by an amplifier circuit. Besides, the relations between the input sensory signals and the output voltage signals must be linearized by tuning the calibration circuits. Finally, the sensory signals, converted to binary digital signals, are transmitted to the IPC. In this experiment, these three modules are integrated together, and the resulting 3-in-1 fire detection sensor is shown in Figure 6. Because the sensory signal is tiny, it always suffers from outliers, and this causes wrong outputs. Fortunately, these outliers generally only last an instant, and we are able to eliminate them by considering the interactions among continuous samples. For the sake of simplicity, an artificial observation profile which simulates four continuous sampled data points with normalization is made, and the model's parameters are estimated with the proposed LTS-MQPSO algorithm in the training state. All simulations are implemented in the Matlab environment and conducted on a PC with an Intel Core 2 Duo P8400 CPU and 4 GB of RAM.

Example: The original model parameters are set as: c = 5, q = 1.2, ω = {0.67, 0.3, 1, 0.43}, μ = {0.2, 0.12, 0.35, 0.4, 0.56, 0.5, 0.6, 0.3, 0.45, 0.38, 0.6, 0.73, 0.9, 0.83, 1} and h = 75%. Then, we randomly create 400 4-dimensional feature attributes with 10% random contamination to produce the training data shown in Table 1, where y_true are the original objective attributes and y_cont are the contaminated objective attributes (the contaminated values are marked in Table 1). In this example, the termination criteria of the program are that the iterations reach a maximum of 1,500 or that the mean square error is less than 10^{-5}. After performing the proposed LTS-MQPSO algorithm many times, the average results of the estimated model parameters, together with comparisons, are shown in Tables 2–4. In addition, plots of the training data and estimated results are shown in Figures 7–10. In Figure 7, a comparison between the contaminated (red line) and the estimated (blue dashed line) objective attributes is shown. These two curves nearly overlap except at the points where outliers are present. To clearly show the outlier-rejection performance, a zoom-in of the portion circled with a dotted line is shown in Figure 8. As shown in Figure 8, the LTS-MQPSO algorithm is able to identify outliers and reject them. In Figure 9, a comparison between the original (red line) and the estimated (blue dashed line) objective attributes is shown. These two curves overlap almost everywhere. To distinguish them, a zoom-in of the portion circled with a dotted line is shown in Figure 10. As shown in this figure, the difference between the original and the estimated objective attributes is less than 10^{-4}. Besides, it is clear that the LTS-MQPSO algorithm is able to make quite precise estimates of the model's parameters.

6. Conclusions

In this paper, the NACI model in association with the LTS-MQPSO algorithm is considered and developed to deal with non-additive systems with outliers.
Whenever atypical observations are present, a parameter estimation method based on the LS estimator is no longer feasible. Therefore, replacing the LS estimator with the LTS estimator is an excellent alternative. That is, we successfully integrate the mechanisms of the SA and the GA into the QPSO algorithm to estimate the parameters of the NACI model; meanwhile, the LTS estimator is also introduced to filter out outliers before performing the modified MQPSO algorithm. From the simulation results, the proposed LTS-MQPSO algorithm can precisely estimate the parameters of the NACI model for observations contaminated with outliers; meanwhile, it still maintains high coincidence between the estimated and original objective attributes.

Acknowledgments

This work was supported by the National Science Council of Taiwan, R.O.C. under Grants NSC99-2221-E-224-060 and NSC96-2221-E-150-070-MY3.

References

1. Liu, H.C.; Tu, Y.C.; Chen, C.C.; Weng, W.S. The Choquet integral with respect to λ-measure based on γ-support. In Proceedings of the 7th International Conference on Machine Learning and Cybernetics, Kunming, China, 12–15 July 2008; pp. 3602–3606.
2. Wang, Z.; Leung, K.S.; Wong, M.L.; Fang, J.; Xu, K. Nonlinear nonnegative multi-regressions based on Choquet integrals. Int. J. Approx. Reasoning 2000, 25, 71–87.
3. Xu, K.; Wang, Z.; Wong, M.L.; Leung, K.S. Discover dependency pattern among attributes by using a new type of nonlinear multi-regression. Int. J. Intell. Syst. 2001, 16, 949–962.
4. Leung, K.S.; Wong, M.L.; Lam, W.; Wang, Z.; Xu, K. Learning nonlinear multi-regression networks based on evolutionary computation. IEEE Trans. Syst. Man Cybern. 2002, 32, 630–643.
5. Chuang, C.C.; Su, S.F.; Hsiao, C.C. The annealing robust backpropagation (ARBP) learning algorithm. IEEE Trans. Neural Network 2000, 11, 1067–1077.
6. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; Wiley: New York, NY, USA, 1987.
7. Rousseeuw, P.J.; Van Driessen, K. Computing LTS regression for large data sets. Data Min. Knowl. Discov. 2006, 12, 29–45.
8. Chiang, K.W.; Chang, H.W. Intelligent sensor positioning and orientation through constructive neural network-embedded INS/GPS integration algorithms. Sensors 2010, 10, 9252–9285.
9. Lee, H.W.; Azid, I.H.A. Neuro-genetic optimization of the diffuser elements for applications in a valveless diaphragm micropumps system. Sensors 2009, 9, 7481–7497.
10. Yang, J.; Xu, M.; Zhao, M.; Xu, B. A multipath routing protocol based on clustering and ant colony optimization for wireless sensor networks. Sensors 2010, 10, 4521–4540.
11. Ko, C.N.; Chang, Y.P.; Wu, C.J. An orthogonal-array-based particle swarm optimizer with nonlinear time-varying evolution. Appl. Math. Comput. 2007, 191, 272–279.
12. Sun, J.; Feng, B.; Xu, W.B. Particle swarm optimization with particles having quantum behavior. In Proceedings of the CEC 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; pp. 325–331.
13. Sun, J.; Feng, B.; Xu, W.B. A global search strategy of quantum-behaved particle swarm optimization. In Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1–3 December 2004; pp. 111–116.
14. Jau, Y.M.; Wu, C.J.; Jeng, J.T. A fast parameters estimation for nonlinear multi-regressions based on Choquet integral with quantum-behaved particle swarm optimization. Artif. Life Robot. 2010, 15, 199–202.
15. Halmos, P.R. Measure Theory; Van Nostrand: New York, NY, USA, 1967.
16. Wang, Z.; Klir, G.J. Fuzzy Measure Theory; Plenum Press: New York, NY, USA, 1992.
17. Wang, Z.; Klir, G.J. Generalized Measure Theory; Springer: New York, NY, USA, 2008.
18. Choquet, G. Theory of capacities. Ann. Inst. Fourier 1953, 5, 131–295.
19. Sugeno, M. Theory of Fuzzy Integrals and Applications. Ph.D. Thesis, Tokyo Institute of Technology, Tokyo, Japan, 1974.
20. Wang, Z.; Leung, K.S.; Wong, M.L.; Fang, J. A new type of nonlinear integrals and the computational algorithm. Fuzzy Sets Syst. 2000, 112, 223–231.
21. Cohen-Tannoudji, C.; Diu, B.; Laloë, F. Quantum Mechanics; Wiley-Interscience: Hoboken, NJ, USA, 2006; Volume 1.
22. Clerc, M.; Kennedy, J. The particle swarm: Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
23. Luo, R.C.; Su, K.L. Autonomous fire-detection system using adaptive sensory fusion for intelligent security robot. IEEE/ASME Trans. Mechatron. 2007, 12, 274–281.

Figures and Tables

Figure 1. Block diagram of the proposed structure for the parameter estimation of the NACI model via MQPSO and LTS in the training state.
Figure 2. Block diagram of the proposed structure for information fusion systems.
Figure 3. A typical curve of the creative coefficient β as affected by the changing rate of the optimal estimation Δfit; the horizontal axis is on a logarithmic scale.
Figure 4. Flow chart of the proposed LTS-MQPSO algorithm.
Figure 5. The hierarchical structure of the sensory systems used for the ISR.
Figure 6. The 3-in-1 fire detection sensor used for the fire detection subsystem of the ISR.
Figure 7. Results for the contaminated objective attributes and the estimated objective attributes.
Figure 8. Zoom-in of the curve marked by a dotted circle in Figure 7.
Figure 9. Results for the original objective attributes and the estimated objective attributes.
Figure 10. Zoom-in of the curve marked by a dotted circle in Figure 9.

Table 1. Training data with and without contamination used for verifying the proposed NACI model and LTS-MQPSO algorithm (* marks a contaminated objective attribute; rows 21–384 are omitted, as in the original).

  #    x1     x2     x3     x4     y_true    y_cont
 01   0.760  0.900  0.790  0.930  5.67253   5.67253
 02   0.930  0.210  0.440  0.260  5.39280   5.39280
 03   0.680  0.850  0.070  0.750  5.28559   5.28559
 04   0.260  0.940  0.900  0.030  5.49294   5.49294
 05   0.860  0.790  0.630  0.690  5.56252   5.56252
 06   0.670  0.120  0.420  0.710  5.41806   5.41806
 07   0.190  0.760  0.460  0.210  5.30678   5.30678
 08   0.920  0.180  0.400  0.040  5.33158   5.33158
 09   0.650  0.680  0.450  0.920  5.48550   3.80599*
 10   0.050  0.710  0.210  0.010  5.13252   5.13252
 11   0.290  0.640  0.600  0.290  5.39438   5.39438
 12   0.270  0.040  0.940  0.650  5.60023   5.60023
 13   0.970  0.400  0.080  0.290  5.25339   5.25339
 14   0.450  0.460  0.890  0.810  5.64319   5.64319
 15   0.730  0.020  0.910  0.330  5.58932   5.58932
 16   0.650  0.120  0.160  0.210  5.21480   5.21480
 17   0.490  0.790  0.150  0.910  5.31462   5.31462
 18   0.530  0.900  0.820  0.720  5.62050   5.62050
 19   0.170  0.580  0.070  0.620  5.17635   6.84732*
 20   0.870  0.820  0.790  0.030  5.50912   5.50912
 …
 385  0.820  0.120  0.250  0.680  5.35888   5.35890
 386  0.480  0.240  0.160  0.200  5.19002   5.19003
 387  0.960  0.110  0.830  0.140  5.55050   5.55042
 388  0.880  0.370  0.210  0.660  5.35336   5.35336
 389  0.710  0.650  0.350  0.330  5.34275   5.34284
 390  0.420  0.840  0.130  0.430  5.23067   5.23077
 391  0.680  0.650  0.390  0.400  5.36962   5.36974
 392  0.330  0.320  0.810  0.640  5.55438   5.55440
 393  0.650  0.790  0.340  0.710  5.40430   5.40440
 394  0.840  0.030  0.250  0.580  5.34598   6.35911*
 395  0.300  0.940  0.480  0.520  5.39375   5.39379
 396  0.770  0.770  0.200  0.810  5.36578   5.36589
 397  0.520  0.980  0.760  0.010  5.44782   5.44787
 398  0.580  0.920  0.870  0.900  5.68322   5.68323
 399  0.790  0.910  0.720  0.370  5.53678   5.53670
 400  0.370  0.560  0.890  0.210  5.52618   5.52621

Table 2. The average results of model parameters estimated by the proposed LTS-MQPSO algorithm.
Parameter   Estimated   Original
μ1          0.202630    0.20
μ2          0.119595    0.12
μ1,2        0.352183    0.35
μ3          0.399180    0.40
μ1,3        0.561244    0.56
μ2,3        0.498894    0.50
μ1,2,3      0.601077    0.60
μ4          0.298500    0.30
μ1,4        0.451280    0.45
μ2,4        0.379223    0.38
μ3,4        0.601223    0.60
μ1,2,4      0.728169    0.73
μ1,3,4      0.900233    0.90
μ2,3,4      0.828266    0.83
μ1,2,3,4    1.000000    1.00
ω1          0.661194    0.67
ω2          0.299558    0.30
ω3          1.000000    1.00
ω4          0.430799    0.43
c           4.999999    5.00
q           1.202177    1.20

Data     Estimated   Contaminated   Original
d(1)     5.67262     5.67253        5.67253
d(2)     5.39267     5.39280        5.39280
d(3)     5.28556     5.28559        5.28559
d(4)     5.49297     5.49294        5.49294
d(5)     5.56263     5.56252        5.56252
d(6)     5.41794     5.41806        5.41806
d(7)     5.30686     5.30678        5.30678
d(8)     5.33147     5.33158        5.33158
d(9)     5.48563     3.80599        5.48550
d(10)    5.13246     5.13252        5.13252
...
d(391)   5.36962     5.36974        5.36974
d(392)   5.55438     5.55440        5.55440
d(393)   5.40430     5.40440        5.40440
d(394)   5.34598     6.35911        5.34591
d(395)   5.39375     5.39379        5.39379
d(396)   5.36578     5.36589        5.36589
d(397)   5.44782     5.44787        5.44787
d(398)   5.68322     5.68323        5.68323
d(399)   5.53678     5.53670        5.53670
d(400)   5.52618     5.52621        5.52621

Table 3. The average results of objective attributes estimated by the LTS-MQPSO, the LTS-MQPSO-LB and the MQPSO algorithms.

Data     Original   Contaminated   LTS-MQPSO   LTS-MQPSO-LB   MQPSO
d(1)     5.67253    5.67250        5.67253     5.67356        5.68501
d(2)     5.39280    5.39280        5.39280     5.39319        5.33520
d(3)     5.28559    5.28560        5.28559     5.28591        5.31600
d(4)     5.49294    5.49290        5.49294     5.49304        5.46360
d(5)     5.56252    5.56250        5.56252     5.56237        5.61565
d(6)     5.41806    5.41810        5.41806     5.42090        5.38345
d(7)     5.30678    5.30680        5.30678     5.30770        5.32974
d(8)     5.33158    3.68524        5.33158     5.33157        5.24508
d(9)     5.48550    5.48550        5.48550     5.48373        5.52952
d(10)    5.13252    5.13250        5.13252     5.13101        5.17121
d(11)    5.39440    5.39440        5.39438     5.39677        5.42818
d(12)    5.60020    5.60020        5.60020     5.60104        5.58952
d(13)    5.25340    5.25340        5.25353     5.25332        5.41922
d(14)    5.64320    5.64320        5.64318     5.64338        5.66970
d(15)    5.58930    5.58930        5.58939     5.58941        5.52346
...
d(386)   5.30790    5.30790        5.30781     5.309731       5.31757
d(387)   5.47460    5.47460        5.47471     5.476158       5.47223
d(388)   5.30380    5.30380        5.30377     5.303276       5.25468
d(389)   5.59290    5.59290        5.59300     5.594471       5.61154
d(390)   5.57300    5.57300        5.57296     5.573292       5.55016
d(391)   5.36974    5.46760        5.36974     5.464645       5.37275
d(392)   5.55440    5.52800        5.55440     5.529859       5.52020
d(393)   5.40440    5.38750        5.40440     5.387559       5.44484
d(394)   5.34591    5.29230        5.34591     5.29274        5.33559
d(395)   5.39379    7.25114        5.39379     5.337529       5.30333
d(396)   5.36589    5.19230        5.36589     5.192462       5.20087
d(397)   5.44787    5.30860        5.44787     5.307585       5.28206
d(398)   5.68323    5.61440        5.68323     5.614041       5.61254
d(399)   5.53670    5.65030        5.53670     5.651546       5.65586
d(400)   5.52621    5.53400        5.52621     5.533216       5.44066

Table 4. The average results of model parameters estimated by the LTS-MQPSO, the LTS-MQPSO-LB and the MQPSO algorithms.
Parameter     Original   LTS-MQPSO     LTS-MQPSO-LB   MQPSO
μ1            0.20       0.202630      0.220264       0.999814
μ2            0.12       0.119595      0.101039       0.205399
μ1,2          0.35       0.352183      0.355931       0.000001
μ3            0.40       0.399180      0.446918       0.209780
μ1,3          0.56       0.561244      0.617110       0.000134
μ2,3          0.50       0.498894      0.542382       0.044604
μ1,2,3        0.60       0.601077      0.660831       0.002942
μ4            0.30       0.298500      0.249854       0.251532
μ1,4          0.45       0.451280      0.430170       0.000009
μ2,4          0.38       0.379223      0.337117       0.000087
μ3,4          0.60       0.601223      0.578540       0.289889
μ1,2,4        0.73       0.728169      0.718593       0.389799
μ1,3,4        0.90       0.900233      0.905735       0.347362
μ2,3,4        0.83       0.828266      0.827270       0.000376
μ1,2,3,4      1.00       1.000000      1.000000       1.000000
ω1            0.67       0.661194      0.689136       0.163266
ω2            0.30       0.299558      0.330364       0.163267
ω3            1.00       1.000000      1.000000       1.000000
ω4            0.43       0.430799      0.584927       0.269410
c             5.00       4.999999      4.999316       5.1152898
q             1.20       1.202177      1.075533       2.0519762
MSE           -          8.0154e-005   0.0018         0.455776
Elapsed time  -          1059 seconds  1496 seconds   1580 seconds
This is something which I suspect is written up in introductory books on mathematical physics, if I only knew where to look. Suppose I have some parameters $t_1$, ..., $t_k$ ranging over a neighborhood in $\mathbb{R}^k$. I also have $k$ matrix-valued functions of the $t$'s: $H_1(t_1, \ldots, t_k)$, ..., $H_k(t_1, \ldots, t_k)$. These obey both $$[H_i, H_j]=0 \quad (\ast)$$ and $$[\partial_i+H_i, \partial_j+H_j] =0 \quad (\dagger).$$ For those who don't like the language of connections, we can expand $(\dagger)$ as $\partial H_j/\partial t_i - \partial H_i/\partial t_j + [H_i, H_j]=0$ or, in the presence of $(\ast)$, as $$\frac{\partial H_i}{\partial t_j} = \frac{\partial H_j}{\partial t_i}.$$

Equation $(\ast)$ tells us that, assuming the $H_i$ are individually diagonalizable, we can find $u(t)$, a simultaneous eigenvector for all the $H_i$: $$H_i(t) u(t) = \lambda_i(t) u(t). \quad (\ast\ast)$$ Equation $(\dagger)$ tells us that the vector-valued PDE $$\frac{\partial v}{\partial t_i} + H_i v=0 \quad (\dagger \dagger)$$ will have a unique solution $v(t_1, t_2, \ldots, t_k)$ for any initial value.

I'm pretty sure there is supposed to be a relation between the solutions to $(\ast \ast)$ and $(\dagger \dagger)$. What is the right statement, and what is the keyword to read about this situation?

Motivation: I'm trying to work through the papers of Varchenko, Scherbak and others on the KZ equation. I think it would really clear my head to just see this scenario described abstractly, without all the details of which operators they are thinking about.

$\def\mg{\mathfrak{gl}_n}$ Edit to spell out the relation. Let $V_1$, $V_2$, ..., $V_n$ be representations of $\mg$. So $U(\mg)^{\otimes n}$ acts on $V_1 \otimes V_2 \otimes \cdots \otimes V_n$. Let $\Omega \in U(\mg) \otimes U(\mg)$ be the Casimir. (Note: The element I learned to call the Casimir was a central element $c$ in $U(\mg)$. In terms of that element, $\Omega = \Delta(c) - c \otimes 1 - 1 \otimes c$.) Let $\Omega_{ij}$ be $\Omega$ acting in positions $i$ and $j$. For generic parameters $z_1$, ..., $z_n$, define $H_i = \sum_{j \neq i} \Omega_{ij}/(z_i-z_j)$. Then, as I understand it, the KZ equation is $(\partial_i + H_i) v(z_1, \ldots, z_n)=0$, where $v$ is a function valued in $V_1 \otimes V_2 \otimes \cdots \otimes V_n$. The $H_i$'s obey both $(\ast)$ and $(\dagger)$ (a nice exercise). And people seem to be very interested in solving both $(\ast \ast)$ ("diagonalizing the action of the Gaudin subalgebra") and $(\dagger \dagger)$ ("solving the KZ equation"). So I was hoping to understand how they relate, and why.
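(For the record, here is the $(\dagger)$ half of that exercise, worked out from the definitions above; it uses only the symmetry of $\Omega$. Since $U(\mg)$ is cocommutative, $\Omega = \Delta(c) - c\otimes 1 - 1\otimes c$ is invariant under the flip, so $\Omega_{ij} = \Omega_{ji}$. Only the $k=j$ term of $H_i$ depends on $z_j$, hence
$$\frac{\partial H_i}{\partial z_j} \;=\; \frac{\partial}{\partial z_j}\,\frac{\Omega_{ij}}{z_i - z_j} \;=\; \frac{\Omega_{ij}}{(z_i-z_j)^2} \;=\; \frac{\Omega_{ji}}{(z_j-z_i)^2} \;=\; \frac{\partial H_j}{\partial z_i},$$
which, together with $(\ast)$, gives $(\dagger)$. The commutativity $(\ast)$ is the genuinely nontrivial half.)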
Usually one considers the limit of "connections" to "h" (geometric optics or short-wave asymptotics). So sections of the flat connection are constructed as series whose first term is made of eigenvectors. Technically you put a constant "k" in front of d/dt, and as k goes to zero you can forget about d/dt. This is the "cortical level" in the KZ story. – Alexander Chervov Apr 25 '12 at 5:06

"Critical level" – Alexander Chervov Apr 25 '12 at 5:07

Thanks! But I'm pretty sure that the interesting stuff in, for example, 1102.5368, 0910.4690 or 1004.3253 is all happening without sending $\hbar$ to $0$. – David Speyer Apr 25 '12 at 5:22

You are welcome. I quite agree that there is much interesting stuff around the KZ and the Gaudin model, but still, when I was working on this I did not see a way to construct solutions of KZ from Gaudin hamiltonians in a somewhat "nice"/"explicit" form. Except one very strange case, which we discuss at arxiv.org/abs/0711.2236, page 15, section "4.1.1 Application to the Knizhnik-Zamolodchikov equation". – Alexander Chervov Apr 25 '12 at 6:20

I looked at the papers you mention; still, I did not see an explicit relation between the question you ask and what is discussed there. Maybe I was not looking very carefully. – Alexander Chervov Apr 25 '12 at 6:21

4 Answers

Accepted answer:

Hi David, I think there is indeed a relation, which I learned precisely from papers of Varchenko among others. All of this is rather classical and can be found e.g. in the Etingof–Frenkel–Kirillov book "Lectures on Representation Theory and Knizhnik-Zamolodchikov Equations".

The fact that the $H_i$ satisfy this stronger condition is equivalent to saying that for any parameter $\kappa$ the operators $\kappa \partial_i+H_i$ also satisfy ($\dagger$). Hence you can take an asymptotic expansion of solutions as $\kappa \rightarrow 0$ on some neighbourhood $D$ of some $z_0$, of the form $$e^{S(z)/\kappa} (f_0(z)+O(\kappa))$$ where $S$ is a scalar-valued function. Then you can show that, assuming the $H_i(z)$ are simultaneously diagonalizable, $f_0$ is a common eigenvector of them, with eigenvalues $\partial_i S$. Conversely, given a common eigenvector at some $z_0$ you can construct an asymptotic solution.

So the usual trick, widely used in the study of the KZ equation, is to also take some asymptotic limit w.r.t. the variables $z_i$ in such a way that eigenvectors are "easy" to find. The standard example in the KZ case is the asymptotic zone $$|z_i-z_1| \ll |z_j -z_1|\quad \text{if}\quad i < j $$ for which, up to some change of variable, the equation can be written $$\kappa \partial_i f= \left( \Omega_i/u_i + \text{reg}\right)f\quad i=1\dots n-1$$ where $\Omega_i=\sum_{k < i} \Omega_{k,i+1}$ and $\text{reg}$ is regular at $u=0$. Then given some common eigenvector $v$ of the $\Omega_i$ with eigenvalues $\mu_i$ there exists a unique solution of the form $$\left(\prod u_i^{\mu_i/\kappa}\right)(v+r(u))$$ where $r(u)$ is regular at $u=0$ and $r(0)=0$.

I'm not very familiar with D-modules (and by the way I would be happy if someone expands on this), but you can rephrase it as follows: viewing $\kappa$ as a formal variable leads to a filtration on the algebra of differential operators on $V$ (the vector space acted on by the $H_i$) which in turn is nothing but the usual filtration by the degree of differential operators. Taking the associated graded turns the differential equation into its symbol equation, whose solutions are clearly common eigenvectors of the $H_i$. So I'm rather confident that you can say that the spectrum of the $H_i$ over all common eigenvectors is the characteristic variety of the D-module of solutions of the differential equation you started with.
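(To spell out the leading-order step in this answer; a sketch, in the convention $\kappa\partial_i f = H_i f$ used for the asymptotic-zone equation above. Substituting $f = e^{S/\kappa}(f_0 + O(\kappa))$ gives
$$\kappa\,\partial_i f = e^{S/\kappa}\big[(\partial_i S)\, f_0 + O(\kappa)\big] = H_i\, e^{S/\kappa}\big[f_0 + O(\kappa)\big],$$
so at order $\kappa^0$ one gets $H_i f_0 = (\partial_i S)\, f_0$: exactly $(\ast\ast)$ with $\lambda_i = \partial_i S$.)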
I can't comment on the case of several operators $H_i$, but for a single operator, the eigenvector equation $(**)$ $$ H(\tau) \psi_\tau = \lambda(\tau) \psi_\tau $$ and the time-dependent Schrödinger equation $(\dagger\dagger)$ $$ \left(i\frac{\partial}{\partial t} - H(t)\right) \psi(t) = 0 $$ are related by the adiabatic theorem. Not sure if that's what you are looking for, but I would be very surprised if your setting didn't have a similar intuition.

Essentially, the idea of the adiabatic theorem is the following: the eigenvector equation describes, for each parameter $\tau$, an instantaneous eigenvector $\psi_\tau$. This gives a solution $\psi_\tau(t) = e^{-it\lambda(\tau)} \psi_\tau$ to the "instantaneous" Schrödinger equation $$ \left(i\frac{\partial}{\partial t} - H(\tau)\right) \psi_\tau(t) = 0 $$ where the Hamiltonian $H$ is considered at a fixed time $\tau$. Now, if the Hamiltonian $H(t)$ varies very "slowly" in time, then it is reasonable to expect that the full Schrödinger equation will essentially follow the solutions to the "instantaneous" Schrödinger equation(s). First it evolves like a solution of the instantaneous equation with $H(0)$, then with $H(\Delta t)$ a small time step later, and so on. This can be made precise by rescaling time to $\tau=t/T$ and obtaining an asymptotic expansion $$ \psi(t) = e^{-i\int \lambda(\tau)\,dt}\, \psi_\tau + \mathcal O(1/T) $$ in the limit $T\to \infty$ and in the $L^2$ sense. More details can be found wherever you can find details about the adiabatic theorem.

Without further assumptions, it does not seem that much can be said. Consider the case k=1. You are asking for a connection between the eigenvalue problem for H(t) and the equation dv/dt+Hv=0. But time-dependent linear ODE systems cannot in general be related to the eigenvalue problem. On the other hand, the condition $\partial H_i/\partial t_j=\partial H_j/\partial t_i$ implies that $H_i=\partial K/\partial t_i$ for some $K$. Let us now strengthen your assumptions and assume that the $H_i$ commute not only with each other, but also with $K$. Then solutions of ($\dagger\dagger$) can be written as $\exp(-K(t))w$ for fixed $w$, and solutions of ($**$) can be written as $u=\partial v/\partial t_i$, where $v$ is an eigenfunction of $K$.

To expand on Greg's answer regarding the adiabatic theorem: you are looking for situations where the adiabatic evolution is exact. This is the case for a Hamiltonian of the form $H = i\left[\frac{\partial P}{\partial t},P\right]$ where $P$ is a projector onto your chosen instantaneous eigenstate. This comes from T. Kato, J. Phys. Soc. Jpn. 5, 435 (1950).
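(A minimal numerical illustration of the adiabatic statement in the answers above; everything here, names and numbers alike, is my own sketch, not from the thread. A 2x2 Hamiltonian whose ground state rotates by 90 degrees over a total time T: the slower the sweep, the closer the evolved state stays to the instantaneous eigenvector.)

    import numpy as np
    from scipy.integrate import solve_ivp

    def H(t, T):
        # H(t) = R(theta) diag(-1, 1) R(theta)^T with theta = (pi/2) * t / T:
        # the ground state rotates from (1, 0) at t = 0 to (0, 1) at t = T.
        th = 0.5 * np.pi * t / T
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        return R @ np.diag([-1.0, 1.0]) @ R.T

    def final_overlap(T):
        # Integrate i dpsi/dt = H(t) psi, split into real and imaginary parts.
        def rhs(t, z):
            psi = z[:2] + 1j * z[2:]
            dpsi = -1j * (H(t, T) @ psi)
            return np.concatenate([dpsi.real, dpsi.imag])
        z0 = np.array([1.0, 0.0, 0.0, 0.0])        # ground state of H(0)
        sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-9, atol=1e-9)
        psi_T = sol.y[:2, -1] + 1j * sol.y[2:, -1]
        gs_T = np.array([0.0, 1.0])                # ground state of H(T)
        return abs(np.vdot(gs_T, psi_T))           # 1 means perfect tracking

    for T in (1.0, 10.0, 100.0):
        print(T, final_overlap(T))   # overlap approaches 1 as T grows, O(1/T) error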
Talk:Double-slit experiment/Archive 7

Proposal to split this into two articles, one concerned with the classical optics model, and the other the quantum mechanical one

At the moment, it seems to me that this article is a mish-mash of classical optics, and discussions of the quantum mechanical interpretation of the experiment. I propose to separate the two aspects, and to this end, will start work on the classical optics article. At the moment, this will be on my user page. Comments/views welcome. Epzcaw (talk) 16:01, 29 May 2011 (UTC)

The double-slit experiment proved classical optics wrong. A page on classical physics and the double slit would be a page on the birth of quantum physics. Kevin Baastalk 01:30, 30 May 2011 (UTC)

I don't understand your comment. Young's experiment was an important part of the evidence used to justify the use of wave theory to model the propagation of light, starting in the early nineteenth century and continuing till today, as opposed to the particle theory originally advanced by Newton - see quotations below. It still works very well - I am sure that you will find the designers of lenses and other optical systems find it perfectly adequate in their work, and many current optics books still use it. I have to hand the 4th volume of Wiley-VCH's Encyclopaedia of Optics, 2004, and at least 10 of 19 articles use either ray or wave optics in their discussions. Surely this means that you cannot say that 'classical optics is wrong'. See the following:

Heavens and Ditchburn, 'Insight into Optics', page 38: "In the eighteenth century, the wave theory was neglected. It did not gain acceptance until the experiment of Young (1773-1829) together with the work of Fresnel (1788-1827) who applied the theory to a wide range of phenomena"

Born and Wolf, Principles of Optics, 1999, pages 287 and 290: "The prize (Paris Academy) was awarded to Jean Augustin Fresnel (1788-1827) whose treatment was based on the wave theory, and was the first of a series of investigations which, in the course of a few years, were to discredit the corpuscular theory completely. The substance of his memoir consisted of a synthesis of Huygens' envelope construction with Young's principle of interference" (p. xxvii); "The earliest experimental arrangement for demonstrating the interference of light is due to Young" (page 290)

Stephen Mason, A History of Science: "... Young performed an experiment in which two light waves were allowed to overlap and interfere, producing alternate light and dark bands..... Young instanced such phenomena as evidence for the wave theory of light" (page 469)

Penguin Dictionary of Physics: "Interference ..... These and similar fringes (Young's fringes) are readily explicable on wave theory, and were used by Fresnel and Young as evidence to establish wave theory"

My suggestion is that an explanation of the double slit experiment using classical optics only would be useful, and should be separated from the discussion of the implications of the experiment considered from a quantum mechanical point of view. Can you provide a reference source which says that Young's experiment proved that classical optics was wrong? Epzcaw (talk) 11:40, 30 May 2011 (UTC)

The wave theory of light _is_ the quantum mechanical view of light. It is only a matter of realizing certain repercussions of this view that one discovers "quantum mechanics".
Though it wasn't until a similar experiment was done with electrons that quantum physics really hit its prime. Kevin Baastalk 14:56, 10 July 2011 (UTC)

Classical optics is no more wrong than Newtonian mechanics. Both provide models of how the world works in certain conditions and are accurate enough to be used in designing many of the things we use: cars, planes, bridges, etc. are designed using Newtonian mechanics, and optical devices (cameras, spectrometers, laser scanners, etc.) are designed using classical optics. Of course, much of our world is also designed using QM (e.g. electronics) and relativity (satellites, particle colliders, etc.), so each model has its place. So I still don't understand what you mean!! Epzcaw (talk) 17:40, 10 July 2011 (UTC)

I don't understand how your statement about how classical physics is a useful approximation of more exact physics in any way relates to what I said, so I have no foothold on your confusion. Kevin Baastalk 17:52, 10 July 2011 (UTC)

Classical physics is not "wrong" any more than quantum physics is "right". Both are models created by humans to try to represent what they observe in the world. They both work well in particular circumstances. What about the renormalization problem in quantum mechanics? Does this make all quantum mechanics wrong? I guess, like in many arguments, we are not really talking about the same thing. Epzcaw (talk) 18:23, 10 July 2011 (UTC)

We seem to be talking past each other, by no fault of our own. I mean that historically the double-slit experiment was a big step on the road to our current quantum-physics understanding of physics. Hell, when they did an analogue of the experiment for electrons, that was ground-breaking. Regarding renormalization in quantum mechanics: does it make it wrong? If so, then our understanding of limits in regard to calculus and our understanding of probability are fatally flawed. So in short: no, there never was a problem with renormalization. The problem was in the thinking, with some misconceptions some people had due to a shallow understanding of the math and/or poor spatial reasoning. Is there a problem with triangles in that their angles can exceed 180 degrees when put on a curved surface? No. Kevin Baastalk 19:43, 11 July 2011 (UTC)

Truce? I'm sure we're on the same side really! But I love classical wave light theory, and used it a lot in my working life, so I guess I feel I must defend its honour!!

Must quantum particles go through one slit or another?

The article is concerned with "apparatus that can determine which slit a photon passes through." Is there any reliable reference that justifies the assumption that the photon must go through one slit or another, rather than through both at once and portions of the "thin plate" along the edges of the slits besides? User:Fartherred from (talk) 23:44, 6 July 2011 (UTC)

That's kind of the point. You only get interference if you let the photon propagate as a wave, as opposed to collapsing it onto one slit or the other as a particle. Dicklyon (talk) 23:53, 6 July 2011 (UTC)

I don't know what you mean by "thin plate." There are three levels on which you can look at this kind of phenomena: (1) empirical experience, (2) equations, (3) interpretations in various human languages that talk about what the equations tell us about how the Universe really is. The empirical experience is quite straightforward.
You can easily build your own double-slit apparatus with some plastic railroad track, some automatic pencil lead, some black electician's tape, some glue, and a moderately inexpensive laser pointer. You will see what everybody sees. But, why does it happen? On the empirical level you can demonstrate that closing off one slit or the other drastically changes the results. You can, if you want to take the trouble, make experiments with different slit widths, different distances between the slits, different distances between laser and double-slit barrier, and between double-slit barrier and the detection screen (white paper pinned to the wall). You can even, I guess, invest in some fancy darkroom and expensive electronic "photographic film," put neutral density filter after neutral density filter in front of the laser until you get the output so damped down that only one photon is getting registered at a time on the CCD "film." Again, you will reliably see what has been verified in lab after lab, time after time. So, on the empirical level, you can get a very complete, consistent (as long as you do not get too much experimental error from things like jostling the laser), record that will agree with what other experimenters had found. If this were not the case, if somebody came up with credible results where an ordinary double-slit apparatus did not deliver the expected results, it would cause quite a stir. So what you know about, empirically, is what you did to set up the experiment (most important factors being slit dimensions), and what came out. You can get very familiar with your apparatus, checking time laser was activated and time flash was detected on the screen (with the slit barrier out of the way, for instance). So you have empirical information about when and where a photon was emitted, and empirical information about when and where a photon was detected. You will also know the frequency of photons characteristic of your laser. As far as I know, that is all you can know about on an empirical level. Everything else that anybody claims to know about the experiment is based on creations of human beings that we hope will be a very reliable predictive guide for us. Richard Feynman says that the photon goes from laser to detection screen by every possible path. Most people insist on their naive sense of how things in the universe work and say that the photon must be going through one slit or the other. Why does it matter, then, whether the other slit is open? Well, it is because quantum effects occur in a "non-local" universe, i.e., a universe in which things (such as slits) do not have to be in touch with each other to "know about" each other and to influence how "each of them" (which is really one of them in some sense) acts. Or it's because of "guide waves." Or it's because of .... It's easier to discount this theory and say that "light is a wave, it spreads out wide enough to encompass both slits, and it goes through both of them," but it is harder (for me at least) to say that about electrons. A single electron heads out from a cathode, comes to the electrical equivalent of a double slit, and goes through both slits. Really? An electron has mass. Does the mass split somehow? If we do the same experiment with a buckyball, which has 60 carbon atoms, you are saying that it is a wave that goes through both slits and all that mass also goes both ways? Incredible! But we are talking about quantum mechanics, so maybe that is exactly the best way to explain it. 
So we have, for starters, the idea of a particle that goes through the slit by one path, by two paths, by "all possible paths" (which I suspect must be infinite in number), or maybe the slits are in this universe and the photon or the electron in motion are not in this universe until they "materialize" at the far end of the experiment. Who knows?!

There are "slits in time" versions of this experiment in which a photon can be emitted at either the peak or the trough of a sine-wave current applied to a photon-emitting device. By some kind of experimental trickery that I can't remember at the moment, the experimenters can contrive to have one situation in which the experiment begins at the highest electrical potential (the peak) and goes through the descent, the zero-potential point and then through further descent to the bottom of the trough, and back up to the first half of a peak, where it ends. So one of these extreme values (for instance, +4 V or -4 V) touches off a photon. But it could be either one of them, and they are separated in time. Interference is produced because, as it were, the photon sent off at the peak of the electrical potential interferes with the photon sent off at the trough of the electrical potential, and they are out of phase with each other so they interfere. Are we really talking about two photons, the possibilities of two photons, one photon and the mere fact that it could have been another photon a split second later, or what??? Again, we have no idea of what is really going on. (If they have the experiment start with zero volts and end at zero volts, there is only one 4 V point encountered in this one run, so the photon must be emitted at a single point in time. No interference will be noted.)

There have been different sets of equations used to describe/predict this experiment. Basically, there are pre-quantum and post-quantum theories. Before quantum mechanics, the theories treated light as a classical wave. Huygens had the basic math needed to account for the interference pattern in a highly accurate way. (If there were errors due to inadequacies of the equations, experimenters would have had to have exquisite lab instruments to sort these mistakes out from mere experimental error.) Equations that are derived on a quantum-theoretical basis have slightly different predictions, if I remember correctly. But they don't tell us what the particle "does." They just tell us what to expect at the detection screen.

If I remember correctly, Dirac had a set of equations that was created in such a way that the equations could be solved to deliver information about a particle that would be consistent with the data supplied, or, alternatively, could be solved to deliver information about the wave that would be consistent with the data supplied. You could get either kind of result, depending on how you set things up. But that reflects the experimental situation. You can get information about photons in terms of their wave characteristics, or in terms of their particle characteristics, but not at the same time. The double-slit experiment is neat because it requires computing the results of wave characteristics of the laser-produced photons at the double-slit apparatus, but it requires computing the results of particle characteristics at the detection screen -- yet it refuses to give us anything more than probabilities regarding where the particle impacts will be observed.
Back to your question regarding a "reliable reference that justifies the assumption that the photon must go through one slit or another," I think you could troll through Google and find assertions by people who "ought to know," but I don't think they can justify any such assertion. It would be a major coup if somebody could do that, and a hot topic of debate if anybody seriously tried to prove it. I'm pretty sure it's one of those "you can't get there from here" situations. P.S. Here is something on a slightly different, but related, topic that you may find useful: P0M (talk) 01:12, 7 July 2011 (UTC) Thanks for the response. It seems that there is some notion that photons and electrons can each one at a time go through both slits. This could be documented by a reference without eliminating the notion that people have of particles going through one slit or the other. Of course, the universe might be not only weirder than I imagine but also weirder than I can imagine. That is close to a quote but I don't know what article it would fit into if I could find the source. The 80th &81st words of the article are "thin plate." It would be possible to provide more technical information about an example experimental set-up without turning the article into a forbidden how-to article. The fancy mathematical notation that can be used is good for some people, not me. Perhaps both sorts of information can be offered without one being dependent on on the other. The way that physics labs with good budgets make things, I guess, was to take a thin sheet of brass and something like a tiny rotary saw for a Dremel drill set, and physically cut slits into the plate. Some people have advocated exposing a sheet of photographic film to light, developing it (yielding a sheet of plastic with a thin black layer on the emulsion side, and then scratching two parallel lines in the black layer. I tried lots of these methods and concluded that most likely the people who gave the advice for such "easy" methods had never tried to do the work themselves. My first successful attempt was done by gluing the smallest diameter brads I could find to a kind of plastic railing and then building side barriers on beyond those "slit walls" using black electrician's tape. The main problem with that method was that the brads may look straight to the naked eye, but when you look closely you will discover that they all bend a little. Also, my brads were shiny, and could reflect damaging laser light directly into my eyes -- not good! I think you may have in mind something that has bothered me a little -- that is the fact that the brass plate (or whatever substitute one may use) is not of zero thickness, so there may be some effects due to light bouncing off those very narrow "walls" to each of the slits. I would like to experiment with that idea, but it ignores the basic requirement (often not mentioned) for the ideal double-slit instrumentation which is that the "wave fronts" coming at the double slits should be parallel to the surface of the barrier with the slits in it. In the early days, sunlight was used, and because the distance from the lab to the sun was so great, the curvature of a circle drawn with the sun at its center and the earth at its circumference was so near to being flat that nobody could see that the actually curved line was not a straight line. So what hit the double slits was effectively "flat," and there was therefore no possibility of hitting the sides of the slits a glancing blow. 
(Imagine the difference between one machine gun on a tripod swinging back and forth a little as it shot bullets toward two open windows, and one hundred rifles with barrels welded together shooting one hundred bullets toward the same two windows. No bullet will hit the window frames a glancing blow, because all bullet trajectories would be perpendicular to the double-window barrier.) Stay tuned. It is not appropriate to make a "how to" article here, but I can make one elsewhere and post a URL here.

It seems that in the realm of electrons and photons there is no such thing as a sharp boundary with particle on one side and not on the other. So, how large must the electron be extended to slip some of its substance through both slits and the barrier between them? That barrier is mainly empty space, with some electrons and nuclei holding each other at arm's length (see below) anyway. True it is that sheets of solid brass are mostly vacuum, but there are plenty of electrons whizzing around, and no straight-line paths through anything except the thinnest of gold leaf or something like it that is found to be translucent if not transparent.

The slit width is significant in that it has to be greater than the wavelength of the light directed at the slit, or else the light cannot pass. Examine the window in the door of a microwave oven. The microwaves are just photons of a frequency pretty far below that of the red light we can see, and even below the infrared light that we use to heat things. That means that their wavelength is greater than that of visible light. So visible light can go through the metal screen in the microwave oven door, but microwaves cannot go through it -- which is good because it prevents sensitive parts of the cook from getting cooked from the inside out. Electrons have extremely short wavelengths, which is why we use electron microscopes when we want to make well-defined images of very small things. So by ordinary, macro-world logic, the wavelength of electrons ought to require very small slits positioned very close together. The things that are actually used for making "double slits" are crystalline structures that have the "slits" built in as part of that crystalline structure. (I'm using my imagination a bit here because I don't think I've ever seen a very detailed explanation of how the lab apparatus is set up, what crystals are used, etc.) Anyway, the electron does not have to be "extended." It has its characteristic wavelength (remember that "wave" and "particle" are both analogies, and rather poor ones at that, made between things on atomic and sub-atomic levels and the bullets and ocean waves that we see in everyday life), so to make an apparatus that will produce interference the experimenters must make a device with "slits" that are tailored to the electron's wavelength.

On the internet I have found many explanations without numbers for the double-slit experiment; contradictory, interesting, clear, impenetrable and otherwise. Numbers for dimensions and voltages are less common. User:Fartherred from (talk) 04:47, 7 July 2011 (UTC)

In the article there should be a sort of empirical formula relating wavelength, slit width, slit separation, etc. and the characteristics of the interference fringes produced. If one knows the dimensions of the slits and the distance between the slits and the detection screen, along with the distances between fringes in the interference pattern, you can actually use that information to measure the wavelength of the light being used.
I can tell you that the most recent apparatus I used had a center "post" the width of a piece of mechanical pencil lead, I think it was 0.07 inch diameter lead, and the slits were on the order of 0.01 inch in width (about the thickness of a dollar bill). I got a nice bright interference pattern that was convenient to photograph at around ten feet from the barrier with the double slits in it, but I could project it on a wall twenty feet away, where it would be much more spread out (greater distance between bright bands) but naturally also much dimmer. The voltages I mentioned were just dummy numbers. My guess would be that voltages would be different depending on the kind of circuitry involved, just as you can buy computer chips that require 5 volts to operate, and other computer chips that do the same job but are differently fabricated and only require a lower voltage. It's a little like asking what voltage an electric blanket requires. If you buy one fabricated for the U.S. market, it will be designed to get warm when fed 120 volts AC. If you take that item to China or anywhere else where the household voltage is 240 volts, you will get a very hot blanket, and hopefully one that burns itself out before it sets the bed on fire. P0M (talk) 09:15, 7 July 2011 (UTC)

After plowing through the article some more I find it better than I thought it was. I will need more time to be able to comment intelligently on the article, if ever. User:Fartherred from (talk) 05:57, 7 July 2011 (UTC)

Following through the math of the classical physics model (the Huygens stuff) will help you understand what is going on. Graphing out where the wave fronts will be at different distances from the double slits will also help. You can see where the two "wave fronts" will reinforce each other and where they will cancel each other. There are good simulations on-line that let you use a virtual double-slit apparatus with which you can change slit widths and distances, light frequencies (light wavelengths), and then see how those figures will affect the interference pattern those settings would produce. Finally, you can build your own double-slit apparatus (but be sure not to stare into the laser because you might burn holes in your retinas if you did) and see the real thing instead of schematic diagrams of the apparatus and phenomena. P0M (talk) 09:15, 7 July 2011 (UTC)

Arm's length is a variable unit of measure, in this case the length of the arm of an electron in an atom. The size of the 2s orbital or the 6s orbital, for neutral atoms or ions, can all be considered arm's length. I do not do much experimenting. I need the details of experiments to interpret the results. I learned of the wavefront explanation for refraction by prisms and for diffraction about 44 years ago. I could perhaps dust the cobwebs off of my memories, but I do not doubt the internal self-consistency of the model. I never really used the matrix manipulations or differential equations that I learned for much of anything. Any notation beyond college sophomore calculus is likely to cause me to skip the section until I learn more (perhaps a long time). I would bet that if two-state superposition has any use as an explanation of the universe, it should show up as a result of experiments that can be demonstrated to someone of my mathematical sophistication. If more complex notation cannot be dispensed with, I suspect the situation that prevailed with Ptolemaic astronomy.
People using ever more complex mathematical tricks to reconcile their pet theory with the real world. I have not made up my mind yet. J.B.S. Haldane wrote: "My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose." (Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p. 286. Emphasis in the original) This shows that even a communist can sometimes do something worthwhile. User:Fartherred from (talk) 00:41, 8 July 2011 (UTC)

See for a simulation of the experiments that you could perform for yourself with a little trouble. The classical equation linking the slit separation s, the wavelength of light λ, the distance from the slits to the screen D, and the fringe width x (the distance between the centers of the observed bands of light) is:

$$\lambda / s = x / D$$

Note that the math doesn't say anything about "the wave" or "the particle" or "the anything" going through one slit or the other or the two of them. The math and the simulations represent what you will see under various conditions. Everything else is a sort of narrative that humans impose on the situation to make it appear to make sense to them. P0M (talk) 01:52, 8 July 2011 (UTC)

The same result is obtained (not surprisingly) when you use the Englert–Greenberger duality relation, which gives a detailed treatment of the mathematics of double-slit interference in the context of quantum mechanics. "We have in particular D=0 for two symmetric holes and D=1 for a single aperture (perfect distinguishability). In the far field of the two pinholes the two waves interfere and produce fringes. The intensity of the interference pattern at a point y in the focal plane is given by $I(y)\propto 1+V\cos(p_y d/\hbar+\phi)$, where $p_y = (h/\lambda)\sin(\alpha)$ is the momentum of the particle along the y direction, $\phi=\mathrm{Arg}(C_A)-\mathrm{Arg}(C_B)$ is a fixed phase shift, and d is the separation between the two pinholes. The angle α from the horizontal is given by $\sin(\alpha)\simeq \tan(\alpha)=y/L$, where L is the distance between the aperture screen and the far-field analysis plane."

Fringes occur each time $p_y d/\hbar$ varies by $2\pi$. We can calculate the angle $\alpha_f$ subtended by one fringe as follows:
$$\frac{p_y d}{\hbar} = \frac{h}{\lambda}\sin(\alpha_f)\,\frac{d}{\hbar} = \frac{2\pi d \sin\alpha_f}{\lambda} = 2\pi,$$
so that
$$\sin\alpha_f = \frac{\lambda}{d}.$$
The fringe spacing is then given by
$$y_f = L\sin\alpha_f = \frac{\lambda L}{d},$$
which is the same expression as above, just in different notation.

I have been able to see double-slit fringes by cutting two slits in a piece of cardboard with a Stanley knife (separation a bit less than a mm, but I haven't measured it exactly), illuminating with a laser pointer and viewing at about 2 m. You need to have fairly dim lighting, but not total darkness, to see them. Epzcaw (talk) 08:55, 8 July 2011 (UTC)
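(A quick numeric check of $y_f = \lambda L/d$ for a setup like the one just described; the wavelength and slit separation below are assumed values, not measurements:)

    # Small-angle fringe spacing: y_f = wavelength * L / d.
    wavelength = 650e-9   # typical red laser pointer, ~650 nm (assumed)
    d = 0.5e-3            # slit separation: "a bit less than a mm" -> 0.5 mm (assumed)
    L = 2.0               # viewing distance, ~2 m

    y_f = wavelength * L / d
    print(f"fringe spacing = {y_f * 1e3:.1f} mm")   # -> 2.6 mm, comfortably visible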
If you are still interested, you can find a "how I did it" article on constructing a double-slit apparatus here: There is a trick I had not thought of that will enable an experimenter to make parallel scratches in flashed photographic negatives: just sandwich a sheet or two of paper between two razor blades and guide your cut with a straight edge. P0M (talk) 08:59, 6 August 2011 (UTC) P0M (talk) 06:36, 9 July 2011 (UTC)

POM and Epzcaw have been more helpful than I could expect. Profound concepts are touched and useful details included in the discussion. I still have hope of making comments related to the article when I have digested this material and that left at [[User_talk:]]. Fartherred (talk) 11:40, 20 July 2011 (UTC)

There is one more thing that may be of interest to you. Some people have done experiments in which two lasers are used, one for each slit. The result is that most of the time there is no interference, but occasionally two wave-functions reach the detection screen at close enough to the same time that they interfere. Dirac thought that a photon could only interfere with itself, but it appears that he was wrong. (It's a little like two marksmen shooting at the same bullseye and having their bullets collide just as their noses touch the paper, I guess. It would not happen very often.) If you decide to do your own double-slit experiment, be sure to follow the regulations for laser use posted on your laser. Some people have published stuff about using green lasers, which might be a mistake since the shorter the wavelength, the more damaging power each photon packs. I would stick with red lasers of low power. Anyway, I can only tell you how I did it. Following safety precautions is entirely your responsibility. P0M (talk) 03:03, 21 July 2011 (UTC)

Remove "Three slit experiment" section

I have now read this paper, and it does not say "In 1926 Max Born proposed that as a consequence of the quantum mechanics, only two slits would produce the familiar results of the double-slit experiment, while three or more slits would not". What it does say is "Therefore, by Born's rule and its square exponent, interference always occurs in pairs of possibilities and is defined as the deviation from the classical additivity of the probabilities of mutually exclusive events (2)." The authors are not just referring to Born, but to the conventional interpretation of quantum mechanics, i.e. that the probability is the squared modulus of the sum of the wave functions, and the only interference terms are cross terms between pairs of individual waves, just as in classical wave theory. What the authors of this paper were looking for was second-order interference terms, where the probability includes terms which are products of three terms. Such terms would not, of course, occur in a two-slit experiment, because there are only two terms, and this is the reason for doing a three-slit experiment. I don't think this is therefore relevant to the double-slit experiment (it might merit an article of its own), and it will certainly confuse readers who are new to the subject, so I propose to remove it, unless someone can persuade me otherwise. Epzcaw (talk) 17:06, 2 August 2011 (UTC)
"The divorce decree specified that the antique horseshoe should remain nailed to the house that became the sole property of the wife." It sounds like you are actually trying to indicate that the EPI makes more specific the Copenhagen interpretation, i.e., adds specifications to it. P0M (talk) 14:27, 7 August 2011 (UTC) Thank you. It is very expressive example. But EPI really develops the Copenhagen interpretation, because: 1. in the Copenhagen interpretation the concept of continuously existing particle remains. Hence, there is a question, through which slit the particle has passed? EPI considers a particle to be an ensemble of dot events connected by probabilities. And at an interference there probability of the event defining this particle passes through slits, but not the particle. 2. some variants of the Copenhagen interpretation consider wave function to be a certain physical essence. The Copenhagen interpretation postulates that this function submits to the equations of the quantum theory. In EPI this function represents a characteristic of the 4-vector of density of probability of dot event ( ,pp. 1-3). It is proved (G. Quznetsov, Progress in Physics, v.2, (2009), pp.96-106) that this function submits to such equations. Thus, In EPI there is no division of Universe on quantum and classical parts, as in the Copenhagen interpretation. And there is no problem of the collapse of wave function. Then, maybe that: "EPI develops this interpretation, considering a particle to be an ensemble of dot events connected by probabilities."? Gqu (talk) 08:27, 8 August 2011 (UTC) I find the above material to be beyond the scope of this article. It may be a useful addition to the Copenhagen interpretation article, but this article hardly offers a comprehensive run-down of the many interpretations. Even the Interpretations of quantum mechanics article doesn't cover every sub-interpretation. I feel like this one is too new to the literature to be included, certainly in this article. -Jordgette (talk) 19:10, 8 August 2011 (UTC) Maybe the best thing to do is to put a link to the article on EPI down below this article in the "see also" slot.P0M (talk) 20:37, 8 August 2011 (UTC) I have added a template regarding the double slit experiment with electrons. Such section does not explain with clarity why in despite of firing one electron there is an interference pattern. Thanks --Camilo Sanchez (talk) 20:44, 17 September 2011 (UTC) I've added something that may fulfill your request.P0M (talk) 02:37, 18 September 2011 (UTC) This template has returned, despite two clarifying changes being made. Let's try to determine what is still unclear about the section. The probability of any point on the screen being hit by an individual electron depends on the point's distances to the the two slits. If the point is equidistant to the two slits, that point has the highest probability of being hit by any individual electron, and therefore corresponds to a maximum, whereas slightly to the left or right that probability is lower and may correspond to a minimum. These probability relationships repeat across the screen, with the greatest maximum at the center. This is why an interference pattern eventually develops when many individual electrons are built up on the screen. Should that be spelled out as such in the section? -Jordgette [talk] 22:47, 9 October 2011 (UTC) I suspect that spelling it out will not help. 
I suspect that spelling it out will not help. The problem is not with the idea of additions of probabilities, but with the idea that there are probabilities involved at all. Before it was removed, the template complained that there is "no explanation on why there is an interference pattern when one electron is fired." It appears that Camilo Sanchez is somehow getting the wrong information out of what we have written, because there is no interference pattern when only one electron is fired. The interference pattern gradually builds up, as is well shown by the video on the lab experiment performed in Japan. Any one electron will appear somewhere on the screen, and most of the time any electron will show up on what will become one of the bright bands.

Of course it doesn't "make sense" that this should happen, since something with mass must presumably be somewhere at any time during its trip from the emitter to the target screen, and so it looks like it ought to be going through one slit or the other (or on one side or another of a charged wire). So it doesn't "make sense" that the presence of the other possible path could have any effect on the trajectory of the electron, since the electron "was never there" and consequently "the other path cannot be a causal factor" (at least in a universe that believes in no action at a distance). Nevertheless, the universe does not seem to give an electrical panel punch-out for what we think. Maybe the electron goes by two paths, or by all possible paths, or maybe there are not really two paths in a non-local universe.

It seems that there is another kind of complementarity involved in attempts to explain what happens between observable events (the brief pulsing of some apparatus that kicks out an electron, and some change at a highly localized spot on the detection screen). Either we talk as though there is an electron that takes one path and a "something" that carries a copy of the probabilities that takes the other path, or we get rid of the ghostly and probably entangled twin of the electron and talk about one electron -- but then we have a non-local connection such that what we ordinarily regard as two slits are in effect a single slit, passage through which has a bizarre effect on the single electron.

The treatment by Gunn Quznetsov offers a way around those equally unappealing alternatives, but it involves the infinite regress of saying that a "wave" that is nothing other than the probabilities for mass, momentum, position, etc., etc. itself has a trajectory (i.e., it must itself have a position), and yet if the position where the electron shows up is a function of a probability, the position (or the trajectory, if you prefer) of the "wave" ought itself to be governed by a probability.

If I'm right, then any model that humans make to explain things like electron interference patterns will not be a complete and satisfying account of what happens. No Tinkertoy model will be a fully satisfactory substitute for the real thing. There is in fact no explanation for why an interference pattern will eventually form when enough single electrons have been fired at a detection screen. So Camilo Sanchez is asking for the impossible. Or maybe there really is some way that the Schrödinger equation can be deduced from some kind of string theory??? But I don't think that there is a "reason" for why string theory is true, even if it is true.
At the level of investigation represented in the Double-slit experiment Wikipedia article, all we can really say is that we can make statistically valid predictions of what will happen, but we cannot explain why it happens. Weird though it is, it's just the way the universe works. I don't know whether the Double-slit experiment article is the right place to discuss "scientific theory," "models," "useful fictions," etc. However, an understanding of these issues would certainly be useful in a society that seems more and more to vilify science and also to want to give orders to people about what they ought to believe. P0M (talk) 06:53, 10 October 2011 (UTC)

Ok, I understand that is a difficult question. So maybe the reader should be told that there is no explanation? I mean, for the most part, if we are talking about one particle being fired at a time and then over time forming the bands that are visible when light waves go through the slits, the reader is going to want to know the reason why one particle behaves as a wave. After all, is the electron going through one slit or through both? Basically, what I am trying to get at here is: can we tell the reader why the particle is behaving like a wave, or whether it is a wave? I mean, maybe you guys know about quantum theory, but that is a basic question that is not being answered in the article. I think it's just responsible to answer it. --Camilo Sanchez (talk) 05:59, 11 October 2011 (UTC)

The reader has certainly been told that there is no explanation in other articles (e.g., Introduction to quantum mechanics). Maybe you are right that the issue needs to be brought up again in this article. Understanding quantum mechanics is like drinking from a fire hose. I've just this moment turned from reading The Quantum Challenge by George Greenstein and Arthur Zajonc. "Is the electron going through one slit or through both?" Either way you answer, you will find evidence to show that you are wrong. It amounts to the basic question, "Is light a particle, or is it a wave?" The only answer that even begins to be satisfactory is to say that it is something other than either one of those familiar things, and that if we do one kind of experiment we can get it to show up behaving like a particle (the photons always hit the detection screen in one spot for each one, and they don't "wash across the screen"), but if we do another kind of experiment we can get it to show up behaving like a wave. By doing the double-slit experiment we get a "two for the price of one." The experiment would not work if each single photon did not behave like a wave at the double-slit barrier, and the experiment would not work if each single photon did not behave like a particle when it hit the detection screen. But do not believe me. Get The Quantum Challenge and let that book hammer out the result.

You might also be helped by Fritjof Capra's book, The Tao of Physics. He is a physicist who has studied Eastern philosophies and Buddhism. His main point is that in learning Buddhism we have to give up lots of ideas that seem perfectly reasonable to us, and replace them with ideas that sound like nonsense. In the Prajnaparamita Sutras there are several places where the Buddha is quoted as starting a sentence with ordinary human notions such as the idea that each human is a discrete entity, and then ending the sentence in a way that destroys the ordinary notion and replaces it with a correct Buddhist understanding.
For instance, since there are not really any discrete entities called "human beings," the Buddha says, "As no beings have I brought salvation to millions of human beings over the course of time." (That's not an exact quotation, or even an exact quotation of one English translation, but i think you can get the idea.) Maybe reading Flatland would help. The author imagined a two dimensional world with inhabitants like people drawn in a comic strip. They can only be aware of things on the surface of their two-dimensional world. Then somebody in our world comes upon them and starts casting shadows on the two-dimensional world. They can see the shadows. The man starts to use his hand to make various shadow forms. One looks like a fox that is opening and closing his mouth. Then the man turns his hand another way and the same hand looks like something entirely different to the Flatlanders. We are a little like that when trying to look at light, electrons, etc. They "turn" one way and look like a particle, then they "turn" another way and look like a wave. And, one thing which we tend to forget, most of the time they are not "throwing a shadow on our world" at all. Maybe in some sense humans would need to be able to grow into another dimension to really perceive photons, electrons, etc., as they really are. In the Dao De Jing and the Zhuang Zi we are introduced to the idea that although the universe is real, the way humans experience and understand that universe is severely limited. We work by imposing things that we build on small volumes of the Universe. For instance, we have the equivalent of a plaster cast we made of something. We label that hunk of plaster "starfish," and carry it around with us. If we pick up a snail it will not fit into the plaster cast. But if we find another starfish it may fit well enough that we say, "I think I just found another starfish." But we constantly get into trouble because our plaster cast of a horse will also fit a zebra pretty well. If we get used to tractable horses and identify a zebra as a horse then we may get ourselves killed when we try to ride it. So from the Daoist point of view we have one "plaster cast" that we have labeled "wave," and another "plaster cast" that we have labeled "particle." Nobody would ever mistake one of these casts for the other. But we grab a photon and we find that it fits right into the "particle" cast, but also that it fits right into the "wave" cast. Now we are really in trouble. We have to understand that the cast is not the starfish, horse, zebra, particle, wave, etc. It's just something we cobbled together. Jill Bolte Taylor wrote a book called My Stroke of Insight about what happened to her when she had a stroke and lost the ability to use concepts. She prefers to talk about "language" instead of concepts, but that's just a choice of words issue. Anyway, on a radio interview she once said, "Language is the tool by which we construct our world, and by which we understand our world." Quantum mechanics seems to me to do a good job of reminding us that concepts are only as good as we make them. As somebody once said, "The map is not the territory." So if you believe the map that says there is a bridge across a chasm and the bridge has recently fallen down, you may drive your car over the edge. P0M (talk) 07:36, 11 October 2011 (UTC) Needs addition of University of Toronto experiment All I have to say is wow. 
No mention of the somewhat recent monumental experiment that shows how they can know which slit it traveled through without destroying the pattern. — Preceding unsigned comment added by (talk) 23:16, 9 March 2012 (UTC)

See for a report and a link. This is not the only experiment that shows that one can get partial information about quantum events by doing things that make particles partially "show up" in the real world. Has anyone published on an experiment that does the calcite crystal kind of trick on single particles? P0M (talk) 03:25, 10 March 2012 (UTC)

Scale Question

Does all matter behave this way in the double-slit experiment when reduced to its smallest state? Many of the experiments I've seen compare the behavior of grouped objects to the behavior of individual objects, like a single photon compared to a body of water. Would a single water molecule behave in the same manner as the photon? --IronMaidenRocks (talk) 04:52, 2 April 2012 (UTC)

Water molecules are smaller than buckyballs, and buckyballs will interfere with themselves, so presumably if one could fire a single water molecule at an appropriate double-slit apparatus it would behave in such a way that it would contribute to a wet interference fringe on a detection screen. The problem, I would guess, is that it is more difficult to control a water molecule than it is to control a buckyball. Another problem might be that unless you are operating in a vacuum, water molecules are all over the place. If you found a water molecule on a detection screen it might be difficult to show that it wasn't just a stray. On the other hand, buckyballs are not very common. Generally speaking, the problem with doing an experiment with larger things is how to keep them from interacting with the environment before they interfere with themselves at a detection screen. An electron moving through a CRT, from cathode to screen, is unlikely to hit one of the oxygen molecules that did not get evacuated while the CRT was being made. But a tiny pith ball falling from the top of the inside of the CRT to the bottom might easily hit (or be hit by) some of the molecules of gas that were not successfully evacuated when the tube was made. A sparrow in its own special shielded spaceship on the way to the moon would still get hit by all sorts of cosmic rays along the way. What I am trying to get at is the idea that the larger something is, the more likely it is to get "measured." When the double-slit experiment is done with light there is no guarantee that every photon will avoid getting "measured" after it is emitted by the laser or whatever the experimenters are using to provide themselves with photons heading in the right direction. You can actually confirm this fact for yourself if you have access to a little laser-pointer-based double-slit apparatus. The laser will light up the whole region around where the double slits are located, including the tiny vertical patch between the two slits. So some of the photons that the experimenters hoped would go through both slits went through neither. Instead, those photons got "measured" by converting to elevated electron orbits in the atoms of whatever the double-slit board was made of. (I wonder how I would feel if I were put in a space suit and aimed at two narrow slits in a steel wall, and the technicians in charge said that I might interfere with myself and show up at a number of places, most of which were covered by space ships in waiting, but that it was perhaps more likely that I would splat on the double-slit barrier.)
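A rough side calculation may help here. It is not the decoherence point I am making above, but it is a second reason that the apparatus gets absurd for large objects: the de Broglie wavelength, λ = h/(mv), sets the scale of the slits you would need, and for anything macroscopic it is ludicrously small. The masses and speeds below are ballpark values I am assuming purely for illustration, not figures from any of the experiments mentioned on this page:

```python
# Back-of-envelope de Broglie wavelengths: lambda = h / (m * v).
# All masses and speeds are rough, assumed values for illustration only.
h = 6.626e-34  # Planck's constant, J*s

things = [
    ("electron in a CRT (~10 keV)", 9.11e-31, 6.0e7),
    ("buckyball C60 in a beam",     1.2e-24,  2.0e2),
    ("single water molecule",       3.0e-26,  6.0e2),
    (".22 caliber bullet",          2.6e-3,   3.3e2),
]

for name, mass_kg, speed_m_s in things:
    wavelength = h / (mass_kg * speed_m_s)
    print(f"{name:30s} lambda ~ {wavelength:.1e} m")
```

The electron and the buckyball come out in the picometer range, which is small but reachable with cleverly made gratings; the bullet comes out many orders of magnitude below the size of an atomic nucleus, so no conceivable pair of slits would show its fringes, even before the "getting measured along the way" problem is taken into account.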
The experimenter doesn't have to worry that not all the photons get through. After all, photons are cheap to produce, and losing a few along the way doesn't ruin the experiment. But I suspect that the bigger the particle being used, the more are lost by their running into the wrong thing and getting "measured." If a buckyball is "measured" by being hit by a stray ray of light that reveals that the buckyball was where it got hit and not somewhere else, then it is no longer in a state where it has two superimposed psi-functions. The psi-functions have collapsed. I suspect that experimenters have to fire many more buckyballs in an experiment to get enough to go through to produce satisfactory results. I think there has to be a law of diminishing returns. How many .22 caliber bullets would you have to fire dead center at an appropriate double-slit apparatus to get one that would succeed in going through both slits? If one bullet ever made it to the detection screen the suspicion would surely be that the gun barrel was wearing out and shooting to the side occasionally, or that a bullet was imperfect and veered to one side or the other, or... Back to your original question: it isn't really a question of aggregation (since buckyballs will self-interfere). It is a question of "size" as measured by how hard it is to keep the particle being fired at a double-slit apparatus from interacting with something that will spoil that run of the experiment. So any zero-rest-mass particles should work. Any single atoms should work. And any molecules that aren't "too big" to escape detection should work. I think that about covers everything. Matter, energy, and what else? Anything like a body of water, even as small as a test tube full of water, is going to be too big to miss. Something will hit it and decohere it before it gets from the double-slit apparatus to wherever the detection screen is. But I can't help thinking of science-fictional situations. What would it be like to be fired into some kind of trajectory that would have equal chances of going around both sides of a black hole, and therefore actually going around both sides of it, only to emerge into ordinary reality when the psi-functions associated with both of me would merge, interfere, and make me show up at one place or another depending on where the wave function collapsed? Clearly it is late at night or I wouldn't be thinking such nonsense. P0M (talk) 06:05, 12 April 2012 (UTC)

It feels, to me, like scientists are quick to assume what's actually happening. If we say the reason for canceling the wave effect is that the observer can only see from one dimension of time, we would assume that he would not be able to see the wave pattern at all. Does the act of not observing the passage of the photon pass the observer into a dimension where both things happened at once? We would assume that, as in your experiment where we are the photon, we would not feel that we had been in two places at once. We would experience having gone through one or the other. It would seem to me that there's some more reasonable underlying reason for the perceived 'self-reactive' pattern. Perhaps a minor nature of matter that we don't yet know about and cannot yet measure. Something so delicate that it might interfere with one single particle of matter, but having the effect lost when attempts are made to measure the paths.
--IronMaidenRocks (talk) 09:07, 14 April 2012 (UTC)

{obligatory qualification that this page is not a general discussion forum for the topic, but} I personally think that any attempt to generalize on "what's really real" in the material world, such as what's happening to such-and-such photons, comes down to interpreting the information we get from the world and the inevitable pitfalls of doing so (see Blind men and an elephant). This is because everything that is observed ultimately derives from interacting sets of information, new information entering into relations with existing information, etc. That's the direction physics is going: relational information theory (see Holographic principle and Relational quantum mechanics). Features of the world such as time, spatial separation, particles, measured properties of matter, etc., emerge from this informational realm, as formalized by conscious analytical brains with a highly developed symbolic language. Even consciousness -- what is consciousness if not a sequence of comparisons between sets of information in the information-rich context of the brain? But, by stopping at the level of particles, fields, and forces, there is a limit to the useful conclusions we can draw about how the world is put together, and progress in physics ceases. But hey, what do I know.... -Jordgette [talk] 23:04, 14 April 2012 (UTC)

I don't know whether the article can be any clearer on the universality of quantum mechanics. Maybe the article needs to say that our current understanding is that all effects (waves at the beach, etc.) are grounded in the way nature is, as described at the quantum level. It might be possible to predict how mobs of people would behave if we knew enough about the psychology of individual humans, but if we had to begin with the behavior of mobs and use that knowledge to try to learn about the psychology of individual humans we would be faced with a daunting task. Physics started with the mass behavior of things in the universe that we experience as "rigid rods," "beams of light," etc. All our thinking, all our direct experience, involves things that are on that scale. It would be difficult for scientists not to think in terms of what is "really" going on, because that is where they came into this movie. They were looking at things that they perceived as really happening, and they were trying to explain why they happened. Niels Bohr insisted over and over again that scientists should not go beyond assertions about what they could see to make assertions about things that they could not see. If you do a double-slit experiment you input a certain minimum amount of energy into a cleverly devised laser device that only emits one quantum of energy at a time, and you receive the same set amount of energy at the detection screen. You know where it started out and you know where it ended up. It seems perfectly natural to assert that it must have been someplace at any time between the aforesaid two events. According to the way Bohr thought about things, when some people asserted that there was some "minor nature of matter that we don't yet know about and cannot yet measure" that accounts for where the photon shows up, they were stating things for which they had no evidence. Your idea is the same idea that Einstein had, and it is generally called the idea of "hidden variables" (i.e., things that we would like to know the value of, the measure of, but that are still hidden to us, and perhaps are always going to be hidden to us).
Bohr thought that Einstein was "quick to assume what's actually happening." For a long time people thought that there would never be a way to be sure that there really wasn't something that a photon carried along with it from the minute it left the laser, or maybe from the minute that it encountered the double-slit apparatus, that would determine exactly where it was going to show up. But Bell showed that the local "hidden variable" people were wrong. This story is long, complicated, and has a weird plot. So it is difficult to take it all in at one time. Nevertheless, it would not give proper respect to the scientists who have tried to be responsible about working things out to call them hasty. Einstein tried more than once to show Bohr that he was wrong, and Bohr defeated Einstein with logic and mathematics. Bohr didn't call Einstein an idiot when Einstein challenged his new quantum mechanics, and Einstein didn't call Bohr a fool when Bohr defeated him. Einstein just went back and figured out another challenge. And so it went until they both died. Their followers carried forth with variations of those two basic contentions until Bell came along. Even now, some people still look for loopholes in Bell's proof. Things should be that way. But nobody is irresponsibly jumping to conclusions or holding up his/her own view as dogma. P0M (talk) 05:33, 16 April 2012 (UTC)

But why would the photons ever interact with each other over these two separate realities? In every thought experiment I've conducted, two realities intersecting is inconceivable. The observer is always locked into the one; it's a self-defeating mind game. The very reason why we can't measure the path of the photon is because, purportedly, we can't intersect these realities. Why, then, is such dual-reality activity being observed on the impact end? Why are the path nature and the impact nature of the photon so different in ability to be measured? --IronMaidenRocks (talk) 03:45, 19 April 2012 (UTC)

I don't understand what you mean by "the photons" "interact[ing] over these two separate realities." In the purest form of the experiment there is only one photon going through the apparatus during one time interval. I think I understand the part where you say: "The very reason why we can't measure the path of the photon is because, purportedly, we can't intersect these realities. Why, then, is such dual-reality activity being observed on the impact end? Why are the path nature and the impact nature of the photon so different in ability to be measured?" You need to take out the "purportedly," and then you will have what the quantum mechanics people have been trying to tell us. Take out "purportedly" because you can't shine a photon on a photon and gather up in your retina the reflected photon you directed toward the target photon and so "see" the target photon. I had a friend in Singapore who wrote to me that his apartment complex was being invaded by red flying things that showed up out of nowhere on the walls and other surfaces, that could move very fast, and evidently they moved too fast for the eye to follow them because he never really saw them flying from one place to another. I finally convinced him that there was some joker with a laser pointer at work. He would not have been fooled for very long by a regular flashlight, because part of the light comes out toward the observer even if the main focus of the light is on a wall nearby, so the observer of a spot on the wall could always easily find the source of the light.
The thing about photons is that a whole intense beam of them can go past you an inch in front of your nose and you will not see anything. (If the beam is intense enough, enough dust in the air may reflect a tiny part of the beam out toward someone standing to the side, but usually only barns or other very dusty places have enough dust in the air to let that happen.) The photon is a little like the ghost Quaspar. When Quaspar is on the move s/he has dematerialized and does not exist on this plane of existence. So you cannot find Quaspar in his/her dematerialized state in the mundane world. The only way you can see this ghost is to put a ghost catcher out and hope that Quaspar will get caught in it. The ghost catcher can be a piece of photographic film, a sheet of white paper, a CCD camera, etc. Wherever Quaspar materializes, s/he will do so at a single point. But from then on Quaspar ceases to haunt the universe. If anybody sees a scintillation at the point where Quaspar met his/her end, that must be because another (phoenix) photon has been generated where Quaspar terminated. We don't know whether, while still dematerialized, Quaspar goes through one slit, the other slit, or both slits. All that we know is that the width and separation of these slits make a mathematically definable difference in where Quaspar might show up, and that if one slit is closed off then the whole experience will change. What you call the "path nature" (and what I call the part of the total event that pertains to "being out of touch with this universe") and what you call the "impact nature" (and what I call the part of the total event that pertains to "materializing at a definite time and place") are two aspects of a single total event. Whenever you attempt to determine something about the "path nature" (e.g., what Quaspar is doing between laser and detection screen), you will inevitably "measure" Quaspar and make him/her have an "impact nature." The answer to any "path nature" attempted question is always an "impact nature" answer. The main kicker in your questions is the repeated word "why." There are probably quite a lot of "why" questions that cannot be given an immediate answer. Why does π = 3.14159...? Why isn't it equal to 3.1400000 or 3.1500000? There may be an answer to that question, but it won't be found in the realm of regular mathematics or geometry. We have to invent non-Euclidean geometry to even make sense of the idea that π might have some other value. And then we have to figure out some explanation for the form of geometry that our universe corresponds to, for why the universe is that way and not some other way. Trying to understand why photons have a dual and complementary wave-particle nature is trying to understand why the universe is the way it is and not some other way. There has been a great deal of interest in "string theory" because it appears to be an entryway into some kind of physics that will supply some answers to these "why" questions on a deeper level than just, "because that is the one and only way that things work." Brian Greene has a recent book on the subject of "strings" and "branes." But, before you try to tackle that book, I would recommend (on the basis of my efforts to bring clarity to my own thoughts) that you let yourself become accustomed to the concrete results of empirical physics such as the double-slit experiment. You are already aware of the questions that bug the physicists.
Just be aware that they (generally) did not reject the evidence of their senses because of the preconceptions they brought with them from the macro world. So they had to say, "Well, if that's really the way photons behave in this universe, I guess I'd better get used to it and try to figure out the implications for other parts of my understanding of the universe." Starting with Niels Bohr, the great masters have all said something like, "If you think any of this makes sense, then you obviously have not even gotten to first base in understanding what we are seeing in the physics lab. If it isn't weird it isn't quantum." I've noticed that physicists like George Greenstein are pretty good at following the via negativa, and telling readers of their books what we are not justified in asserting about what is "really" going on. P0M (talk) 06:51, 19 April 2012 (UTC)

So, the photon has a different nature until it's measured. This doesn't necessarily mean there are alternate realities, but simply that we don't know how the photon behaves until something measures it? If that's the case, I feel significantly less confused. Maybe I understand it less now, because it makes sense. --IronMaidenRocks (talk) 19:04, 19 April 2012 (UTC)

"we don't know how the photon behaves until something measures it?" -- That's basically it, yes. John Wheeler was fond of saying that a photon doesn't even exist (in the "particle" sense anyway) until it interacts with something. Although that is a metaphysical position -- it is an assertion that goes beyond present empirical experiment -- it's as good a way of confronting the issue as any. -Jordgette [talk] 22:19, 19 April 2012 (UTC)

Now I'm even more intrigued. The photon is presumably still under the effects of standard mechanics to some extent; it's not everywhere, and it's behaving quite reasonably within the guidelines of our natural universe as we understand them. Still, I can't figure out why it would appear to be interfering with itself; nothing outside the 'many worlds' theory seems to come close to being an explanation. Here's a crazy idea relating the problem to computer science; I came up with it while trying to rationalize why the particle doesn't appear just anywhere, but only in relation to variables applied to it (velocity, mass, etc.): The photon isn't really traveling. It's being placed by a physics-processing computer running all scenarios, which places the photon, when measured, according to the most logical path. The 50/50 split at the slit causes a paradox in the pathing logic of the program, because both paths are equally likely. This causes it to run scenarios where the photon is going through both holes at once. The program chooses the 'both holes' scenario because this is the only situation which doesn't cause a processing deadlock. Interfering with either slit will change the perfect-split scenario, causing the physics engine to behave normally. --IronMaidenRocks (talk) 07:43, 21 April 2012 (UTC)

Leibniz was ahead of you by several centuries. All individuals (which he called "monads") are totally isolated from all other monads. They have no real relationships. All relations among monads are mediated by God and occur outside of what we imagine to be space and time. The acts of will (I will hit the letter "a" on my keyboard) of each monad are perceived (read) by God. God then updates the states of all monads who need to be updated (the fruit fly sitting on the "a" key sees a finger approaching), and so forth and so on.
(If this account reminds you of the movie The Matrix, there is probably a reasonable explanation.) The physics model speaks of psi-wave fronts emerging from both slits and propagating toward the detection screen. They overlap. Their potential results are computable (or, by your model, computed), and when they reach the detection screen they are ordinarily resolved by delivering the photon of energy to some electron. There has to be an electron ready to have its energy state changed or nothing will happen. That's why a photon doesn't scintillate in a pure vacuum. (We would miss seeing the stars if this rule changed.) When electrons are available, the choice of which place to show up is determinate only in the sense that the photon won't show up where there is a 0 probability of its showing up. Where it does show up is a random selection among all of the possibilities. For instance, there might be a spot well away from the center of the detection screen for which there is only a 0.01% probability of a photon showing up. But one must show up there if you wait a while. By your model there would have to be another part of the "physics processing computer" (or Leibniz's God) that would pick at random one of the possible positions for that individual photon to show up. But the computer would have to have a true random number generator and also some way to jigger the percentages of "payout." The problem is that algorithms do not produce true randomness, and the mind of God is presumably not randomly fluctuating in some way. When you start with certainty and try to use it to introduce uncertainty, then you get paradoxes or infinite regression. When you start with uncertainty, then you need to look at ways in which uncertainties can generate something that looks like certainty. In the case of physics, the certainty can be explained by "arrow of time" arguments that show how the probability is very high that an egg balanced on the tip of a broken-off cane of bamboo will fall and break, but the probability is very low that the broken egg, hit by forces generated somehow in the ground in which the bamboo is rooted, will be reassembled and then catapulted back to its position atop the bamboo stalk. So it is, I think, always going to be easier to deal with a system that simply observes the regular rules of probabilities in things like photon self-interference and computes those probabilities for individual events and for very many runs of the same experiment. (You won't see interference fringes at once in a double-slit experiment involving single-photon deliveries stretched out over some extended period of time, and you will never know where a photon will show up next. At best you will know where a photon will not show up.) Imagine that you had your physics processing computer, and that it laid out a CGI "movie" of a double-slit experiment. The CGI movie would show the laser being set up, the barrier screen being set up, the detection screen being set up, etc. Then it would show a virtual physicist pushing a virtual button that allowed the virtual laser to have just enough energy to deliver one virtual photon. Then it would show a virtual photon appearing on the virtual detection screen. Nothing needs to "happen" between the virtual laser and the virtual detection screen. They are only lighted pixels of a computer screen anyway. All that has to happen is that the computer follows its own rules for how light should be transmitted when two slits are involved.
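To make that concrete, here is a minimal sketch (in Python, with made-up numbers for the wavelength, slit separation, slit width, and screen distance, since nothing here comes from any real apparatus) of what such a physics processing computer would have to do: compute the standard far-field probability distribution for two slits, then "place" each virtual photon at one random spot drawn from that distribution. Notice that it has to fall back on a pseudo-random number generator, a point I will come back to in a moment:

```python
import numpy as np

# Assumed (made-up) apparatus numbers: red laser, two narrow slits,
# detection screen 1 meter away.
wavelength = 650e-9   # meters
d = 1.0e-4            # slit separation, meters
a = 2.0e-5            # slit width, meters
L = 1.0               # distance to the detection screen, meters

# Positions on the screen; small-angle approximation sin(theta) ~ x/L.
x = np.linspace(-0.02, 0.02, 2001)
theta = x / L

# Far-field (Fraunhofer) double-slit intensity: a two-slit interference
# term multiplied by the single-slit diffraction envelope.
alpha = np.pi * d * np.sin(theta) / wavelength
beta = np.pi * a * np.sin(theta) / wavelength
envelope = np.sinc(beta / np.pi) ** 2      # np.sinc(u) = sin(pi*u)/(pi*u)
intensity = np.cos(alpha) ** 2 * envelope

# Normalize to a probability distribution over screen positions.
prob = intensity / intensity.sum()

# "Fire" virtual photons one at a time: each one shows up at a single
# random spot drawn from the interference distribution.  The fixed seed
# makes the point about algorithms: rerun it and you get the identical
# "random" sequence, unlike a real quantum experiment.
rng = np.random.default_rng(seed=42)
hits = rng.choice(x, size=1000, p=prob)

# One hit tells you almost nothing; only the accumulated histogram of
# many single-photon runs reveals the fringe pattern.
counts, edges = np.histogram(hits, bins=50)
print(counts)
```

Each individual virtual photon lands at one point, and only the histogram built up over many runs shows the fringes, which is just what the single-photon experiments report.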
There is no real energy, only virtual energy. The virtual energy cannot go anywhere, but the software calculates where it would probably show up. Unfortunately, this model would be determinate because a computer works on algorithms. Even the so-called "chaos" math works out exactly the same way every time you work the math, providing that you start with exactly the same numbers. To get a truly random number we need to go to quantum processes. The computer would show a w% chance of a photon showing up in the central band of an interference fringe, x% of it showing up on the adjacent left-side band, y% of it showing up on the adjacent right-side band, z% probability of a photon showing up on the next band out from the center on the left side, and so forth. But then, for this run of the virtual experiment, which band would the virtual photon be made to show up at? To get a real random choice I think you need to get out of the computer and into a real-world quantum event generator. On top of that, your computer would have to keep track of how many virtual photons were made to appear on each band, and avoid any band getting too many or too few virtual hits. Where is this information kept if the universe we believe we inhabit is nothing but a CGI picture made of floating pixels on some non-real computer screen? Why did Leibniz need his monads anyway? Why did God not just keep memory records for each monad and work out its story the way a human author might imagine an entire novel on the scale of Crime and Punishment before ever writing it down? Why do we need a "physical world" if there is a computer to compute every change? On the other hand, in what sense does the computer exist if it is not a part of what we perceive to be the physical universe? Your model has explanatory power in the sense that many models do. It says, essentially, "Forget about the supposed physical location of the photon in this experiment. Forget that humans conceive of a photon as a tiny particle. Just calculate what happens to a wave front of a certain frequency that emerges from the snout end of a laser, approaches double slits of certain dimensions, passes one component through each of the two slits, combines these two wavefronts by superposition as they approach the detection screen, and then hits the screen, which forces the photon to show up somewhere." What bugs people like Einstein is that there is no way to decide which "selection" of fringe band any given photon makes. They believe that it cannot happen without a determinate cause. "There has to be a reason why it came here to this fringe rather than going to some other fringe." P0M (talk) 19:22, 21 April 2012 (UTC)

Thanks for the well-thought-out reply, Patrick; especially the part about Leibniz. It does sound quite similar. However, my idea is not that there are no physical relationships, but that physical reactions are being calculated and performed by this 'physics computer'. It's not selecting random outcomes, but rather selecting the most logical scenarios based on the 'loaded physics program'. For example, in my thought experiment, a photon with mass 0, velocity 0, etc., is illogical and will be read anywhere that is measured - or, perhaps, nowhere. Would this particle register only once or engulf the whole universe? Introduce some variables to the equation. Mass and velocity are determinables, so the physics engine knows logically where photons with such variables will go.
A photon with such hard-set variables will not 'ping' off every other photon and interfere, because they all have set variables which make such random collisions unlikely - the double-slit scenario is an exception, because the computer is doing something different to work around a paradox. I start to stretch for 'whys', because it looks like I'm fooling myself by perspective and becoming too deterministic in my thinking. But why do physics happen? The phenomenon seen in the double-slit experiment might give us a unique window there. It will certainly tell us more accurately how light works. How important it could be to keep considering this, where others would push it aside and say "we can't know". Something else I've been considering is time relative to the experiment. It would seem that when anything measures the photon, it is measured relative to time. For example, a photon is registered when measured whether or not a sentient observer is looking at it. We might be able to check and observe how much time had passed from when the photon was registered to when we read the measurement. That is, all of this exchange of photons occurs whether anyone is there to see it or not. Any deterministic process is unreasonable, for otherwise that force would have simply picked between one of the two 50/50 scenarios. The computer in my experiment would have to be an inanimate force, not capable of making its own moral judgement. However, I would still reason that the 'logical evolution' of taking the 'both paths' scenario is a deterministic way of avoiding a paradox of some sort, whether in my scenario or not, otherwise a paradox simply would have occurred. But back onto time and the experiment: working on this macro level, one is drawn closer to the idea that time might be, in some way, relative and perhaps a vector of mass. But I guess that's getting off-topic. --IronMaidenRocks (talk) 00:48, 22 April 2012 (UTC)

Leibniz and his ideas about what space and time are led, by way of Kant, to Einstein's ideas about space and time. I do not believe that Leibniz was right. I am not even sure that I am right about Leibniz. But here is one of the truths that emerged from thinking about thinking during the time between Leibniz and the present: Nature does its own thing, and humans try to impose their own conceptual schemes, theories, models—call them what you will—on nature to explain nature in some way that gives us the ability to predict things better. "We control nature by obeying her," as Bacon said. So we have to be able to understand what nature is really up to, not what we would like nature to be doing. Richard Feynman said that the double-slit experiment is a sort of prime example or central nugget of all of the mystery of quantum mechanics. Anybody with a pocket laser and something to make little parallel slits in a sheet of thin metal can observe the phenomenon. We can watch nature doing its own thing, and we can sort of slow it down by reducing the rate of photon production until only one photon is going onto the detection screen at a time. (Doing so will take somewhat greater resources.) But all we get out of experience can be reduced to some simple equations that tell us what but do not tell us why. Theories can be "confirmed" in the sense that we do more and more experiments and keep getting good results from some theory, but theories cannot be proven to be correct because there is always the next run of the experiment, the next test of the theory.
Swan number one is white, so is number two, so is the millionth, and then somebody goes to Australia and the theory that all swans are white falls apart. Progress comes when a theory fails to be confirmed and people have to scramble to account for the instances that do not fit the old theory. But the result is not a "true theory." Instead it is another "convenient fiction," that is, something that came out of human creativity and that is provisionally reliable enough to let us build MRI machines or whatever else we want to do. The models that we build (mental or physical) have utility, but they do not promise to be anything other than convenient fictions. Ideas about what might be happening, but that pertain to things for which we cannot gain evidence, are sometimes valued because they make us feel better, or maybe because they suggest other experiments that we could be doing, but they are all speculation. Einstein's speculation was that there are "hidden variables" that account for where the photon will ultimately show up, and that will account for other quantum events, but it could not be proven because by Einstein's own admission/construction they were "hidden," i.e., we have no empirical information whatsoever about them. He presumed them to exist. Then John Bell came along and proved that even though nobody knows what is "really" going on, Einstein could not be right. Try to get more context for understanding this experiment. You might start with the Introduction to quantum mechanics article, where some of the history of things that forced people to drop old ideas is given. In the dawn years of the 20th century, people learned that position and velocity are indeterminate, i.e., the closer you get to pinning one of this pair of measurements down for, e.g., an electron, the farther off the mark you will be on what the other one should be. The group of physicists around Niels Bohr maintained that a particle has neither position nor momentum (mass times velocity) until they are measured, and you can't measure them at the same time. The best you can do is to measure one and then measure the other as soon as possible thereafter. But measuring the first will always affect the results of measuring the other. So position and momentum are not simultaneously determinate. In relativity theory it was discovered that while light is energy and therefore does not have rest mass, energy can be converted into mass, and mass can be converted into energy. What time is, and what space is, are issues that go back at least as far as St. Augustine. (And I've mentioned already the contributions of Leibniz and Kant.) Einstein brought them together in the idea of space-time, another part of the context that deserves to be studied without, at first, bringing in the complications of quantum theory (or complicating quantum theory by bringing in relativistic effects and how to account for their influences). The presence of mass affects the "passage" of time, i.e., clocks close to a huge mass like a sun or a black hole will tick at a different rate from clocks of the same design stuck out in interstellar space somewhere. But to understand that stuff you would have to learn about Einstein's general relativity. This discussion page is not really the appropriate place to discuss all of these ins and outs.
An article on a single topic such as the double-slit experiment is more complete than it would need to be if it were part of a general treatment of quantum mechanics for the beginning reader, but it offers only indirect guidance to related topics. To understand this article thoroughly one would have to follow all the linked articles and maybe all the articles linked to those articles. You might find it very helpful to read Introducing Quantum Theory by J.P. McEvoy and Oscar Zarate. There are lots of other good books, but this one has the advantage of being reasonably short and yet reasonably complete, so it is easier to get an overview of the entire field in which the double-slit experiment is embedded. That way many of the assertions in this article that may appear dogmatic or unreasonable would be shown to have been arrived at by generations of responsible physicists as the result of having had some sense beaten into them by Mother Nature, who does not always like their "convenient fictions," especially when considered by humans to be "truth." P0M (talk) 06:11, 22 April 2012 (UTC)

Might this solve it?

I just watched another video about it, and then it occurred to me that these videos might be wrong. The videos are true for whatever happens after the slit: interference, yes; bright spots based on probability, yes, no problem. But before we hit the slit, light (not even from a laser) won't behave like a straight line (the way it is often shown in simplified form). Before it enters the slit we also have these same radial waves of probability; as a result, a single photon doesn't choose a side of a slit, it's a probability that passes through both slits. It also doesn't need to "know" if the other slit is open; that's just its situation. (For example, it also doesn't need to know where the wall is, yet it still doesn't pass through.) At a single slit, the same radial waves of probability, "like a drawn circle": one part hits first, and probability collapses to a single point. With a single slit, it won't pass like a straight line (we like to think of it that way, sure, but it's not our mind that runs physics; it's nature's work). — Preceding unsigned comment added by (talk) 07:47, 20 April 2012 (UTC)

The way the situation is modeled in mathematics, a probability wave emerges from the laser as a wave front that does not curve around. (The other way to get this kind of wave is to pick a wave whose center of radiation is so far away that the circumference of the wave that reaches the observer is so huge that any small part of it is almost perfectly a straight line.) That wave reaches both of the slits and is still essentially flat. When it goes through a single slit it is diffracted, and so you will see a sort of three-part fringe pattern just from going through one slit. When there are two slits there are two such patterns. They interfere, i.e., the probabilities of where a photon will show up are determined by the interaction of the two probability waves. So in one sense light does not behave like a straight line because, at least in the model, what leaves the laser is a surface moving away from the laser. Take the barrier with its slits away for the moment and ask why the laser puts a tiny spot of light on the screen even when the laser and the screen are feet or yards apart. The probability that the photons will hit the spot diametrically opposite the laser is extremely high, and the probability that they will hit elsewhere falls off rapidly on all sides.
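For reference, this is one of those simple equations that tell us what without telling us why. In the standard far-field (Fraunhofer) treatment, with slit separation d, slit width a, wavelength λ, and θ the angle away from the straight-through direction (all of these being whatever values a particular apparatus happens to have), the probability of a photon showing up goes as

\[
I(\theta) \;\propto\; \cos^{2}\!\left(\frac{\pi d \sin\theta}{\lambda}\right)\,
\operatorname{sinc}^{2}\!\left(\frac{\pi a \sin\theta}{\lambda}\right),
\qquad \operatorname{sinc}(u) \equiv \frac{\sin u}{u}.
\]

The cosine factor is the two-slit interference, and the sinc factor is the single-slit diffraction envelope; close one slit and the cosine factor drops out, leaving only the envelope.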
When a single slit is put in the center, then the wave is affected by the way it propagates through that narrow place. (Huygens had this figured out a long time ago.) But the probability for the photon to show up at the center of the target is still highest, and there are also fairly high probabilities for two side bands to show up. So where the light shows up appears to be at the end of a straight line when there is not any barrier to its passage, and still appears nearly like a straight line when there is a single slit in its way. Where the probability wave hits first does not determine where the photon shows up. Otherwise, the diffraction effect would not show up. P0M (talk) 21:38, 20 April 2012 (UTC)

At or on the screen

The "on the screen" .... "at the screen" language in the first paragraph is confusing (at least to me). It seems impossible that the two different patterns are occurring simultaneously "on" the screen; and yet, what could "at" mean if not "on"? Greg P. Hodes, Ph.D. — Preceding unsigned comment added by (talk) 22:34, 26 April 2012 (UTC)

I've changed "at the screen" to "on the screen" -- meaning "on" in the sense of "spilled coffee on the table," i.e., it's a real physical position. There are two patterns superimposed on the screen (as well as elsewhere), but they are patterns of probabilities, so they are not visible. They produce a single visible pattern as a result of their interaction. P0M (talk) 00:42, 27 April 2012 (UTC)

Slits to blame for interference pattern

There is no mention of the possibility of the slit device itself causing the interference. It is possible, and highly probable, that the particles are bouncing off the edges of the inner walls of the slit, and so, mathematically according to the size, shape, and depth of the slits, causing the pattern to show up. If you toss a ball towards a double slit repeatedly, some balls will bounce off the inner walls of the slit and be sent flying to the left, some to the right, and some to the central area. It is my theory that the slit itself is causing the interference, that a particle is not in two places at once, and that it is not interfering with itself. If someone credible could confirm what I am talking about, or refute it, then please do. Freegen (talk) 05:44, 20 June 2012 (UTC)

If what you suggest were happening, then people would have noticed that the interference patterns were changing depending on how thick the barrier was made. If you have straight-line "bullets" bouncing off the sides of the window frame, you have to imagine a "rifle" that shoots inaccurately, sometimes veering left, sometimes veering right, sometimes managing to go straight and then hitting the middle ground between slits. Then you would get some bullets going straight through and some being reflected in one direction (e.g., going from heading left into the left slit to going right coming out of it). So you ought to get something like this: ||| ||||||| |||. I'm sure that people have been aware of the possibility you suggest going all the way back to Young and the people who were discussing his experiments in his own time. The math would not support the regular bands and predictable distances that depend on slit width and slit separation, anyway. P0M (talk) 16:21, 20 June 2012 (UTC)

Another demonstration that that is not the explanation: if you block one slit, eliminating the interference pattern, the intensity of light (number of bullets) reaching the screen will decrease in some areas and increase in others.
This is the proof that the light waves from the two slits are interacting. If it were only "bullets" bouncing off the edges of the slits, closing one slit could only decrease the number of bullets reaching a given area of screen, not increase it. --ChetvornoTALK 20:00, 20 June 2012 (UTC)

You don't have to have slits to get two-beam interference - a single light beam can be split into two beams travelling in different directions by beam-splitters - they will then produce interference fringes if they overlap, e.g., Michelson and Mach–Zehnder interferometers, the Fresnel biprism, etc. If you block off one path, the interference fringes disappear just as when you block off one of the double slits. The geometry of the fringes follows the same rules as two overlapping beams generated by a double slit. This sort of interferometer can be set up so that there is only one photon travelling through it at any time, and fringes will still be observed. This cannot be explained by slit edge effects. Epzcaw (talk) 19:55, 15 August 2012 (UTC)

Shouldn't a mention be made of Everett and DeWitt's "many worlds interpretation"? — Preceding unsigned comment added by (talk) 03:54, 1 August 2012 (UTC)

I'm not sure how useful it would be, as many worlds doesn't offer a unique elucidation of wave–particle duality, as far as I know. The only thing I can think of is that in other worlds, one or both slits may be closed, and in those worlds there is no interference. -Jordgette [talk] 22:20, 1 August 2012 (UTC)

Figure showing double slit fringes

Double slit fringes with sodium light illumination

I suggest that this would illustrate (maybe even illuminate) double-slit interference a lot more clearly than the current figure, which shows a combination of interference and diffraction. Single and double slit 4.jpg Or this one, which is a modified version of the existing one. Epzcaw (talk) 19:40, 15 August 2012 (UTC)

The original purpose of the double image was to head off the frequent confusion between "diffraction pattern" and "interference pattern" by making the difference concrete. I would therefore prefer the modified version of the existing one. Originally I used a picture of a cruder pair of patterns made the traditional way, and in a way I preferred that pair because it showed readers a pattern that they could reproduce for themselves. The first new photo is so artifact-free that it begins to look manufactured in Inkscape or something. It's too beautiful. P0M (talk) 15:45, 18 August 2012 (UTC)

Substituted second figure. I have modified the text to take the change into account. Epzcaw (talk) 19:45, 23 August 2012 (UTC)

Slow GIF animation

There's a big and slow GIF animation of a double-slit simulation, causing longer load times and a bit of lag when scrolling past. Weaktofu (talk) 03:22, 16 February 2013 (UTC)

The GIF animation is my own work, and it is not very performant. If it would be accepted, I can publish two static PNGs and a hyperlink to a YouTube or a Wiki video source. (talk) 23:16, 22 February 2013 (UTC)

I'm a bit curious about the simulation: how is it constructed (i.e., what equations describe it)? Is the wave packet demonstrated a solution to the Klein–Gordon equation, or how does it work? I assume the walls are chosen to be completely reflective? ----ChrisLHC (talk) 14:46, 29 May 2013 (UTC)

Dropping "s/he" construct

When this block quote was added to the article, the pronouns were modified to "he/she". Later on, another editor changed them to "s/he".
But as this passage is a direct quote, I am changing the pronouns back to their original masculine form, as originally published by Časlav Brukner and Anton Zeilinger in their 2002 paper. Link to the full paper. The quoted passage is on page 3, second paragraph, starting "Just to follow our example". — Preceding unsigned comment added by Ryanrs (talkcontribs) 08:53, 16 April 2013 (UTC)

No-one else sees the obvious invalidities presented by observations from these experiments?

I mean come on. It's very clear that the observer, which in these cases always does something to truthfully interact with the experiment, doesn't collapse the wave functions "just by observing," as the leaps in logic presented by the scientists running these experiments would have us believe. This is a major issue with scientists. They get stumped by often very obvious things, like the fact that the interaction of physical (electromagnetism included) properties of the so-called "observers" themselves could very easily cancel a wave-function property just by bombarding the experiment with interference. Where's my Nobel Prize? No thanks. Could scientists just stop overlooking these simple things in an attempt to, oh I dunno, try and look cool? And no, I'm not even saying that there are cases where the simple act of observing without interference (you let me know when that's truly possible) might actually cause completely different outcomes from when the experiment is not observed. All I'm really saying is that, for the most part, from what I know, this kind of true experiment hasn't happened yet. And when it does, all these people who believe that the simple act of observing was just a simple act of observing will realise that there was probably no way to even do this kind of experiment truthfully until that point in time. I would look forward to the results of that experiment. As our understanding increases, however, the double-slit experiment, at least, will be one of the first supposed quantum mechanical demonstrations to show, via true interference-less observation, a wave pattern not collapsing so easily. Our brains alone amplify our computer-logic-rendered thoughts to do things like cause our body to move. There's obviously a lot of detection going on here. And at some point there's an electromagnetic charge generated. So it's no wonder even focused thoughts have shown interference on double slits. Essentially double slits are just our most sensitive detectors. Quantum unexplainable collapse of wave function my foot! Get back to work! (previously unsigned) Zoele (talk) 20:37, 2 May 2013 (UTC) (edited) Zoele (talk) 20:54, 2 May 2013 (UTC)

When you say bombarding the experiment with interference, I think what you mean is bombarding the experiment with environmental photons. They've already thought of that; it's called decoherence. -Jordgette [talk] 22:33, 2 May 2013 (UTC)

That's a nice long read; I'll get to that eventually, but I have a concern with the title of it and the relevance of what you've said to what I've written. Thanks for your points. It is great to know that they may have thought about what I'm talking about, but clearly you don't even know exactly what I'm talking about, because I said exactly what I was talking about. The assumption that any photons generated even from the process of thinking would actually interfere with this experiment directly is what you are implying, and that is not what I am implying at all.
There are, however, other forms of radiation (after all, it must be some form of radiation) that are interacting with the experiment to cause a change. And the source doesn't have to be human thought, but could easily be it (and it has been proven to be an effector of the double-slit phenomenon), and the other obvious candidate is the "observer". There are more than enough sources of interference present in all that can be considered the observer, and one would have to create an observer that doesn't interfere with such a highly sensitive experiment before getting close to saying that there is actually a quantum phenomenon taking place here. After all, this is one major hole, and you don't need to be a quantum physicist to see it. Logic, also a science that many are familiar with (said many void here, except myself as far as I can tell), can easily shed photons on this massive hole for all to see. (talk) 23:39, 29 May 2013 (UTC)

I think the "change" you speak of is caused simply by the setup of the experiment. Set it up a different way, with a different slit separation for example, and the wave function will "collapse" in a different manner such that the interactions are consistent with the laws of physics. That aspect of QM isn't controversial or interesting. And, there are experiments called weak measurements that do what you describe, and no results have been found that are inconsistent with the predictions of QM. -Jordgette [talk] 18:19, 30 May 2013 (UTC)

I suppose I wasn't clear enough. I will describe what I'm talking about again. There is an experiment in which a beam of "particles" (electrons, but usually a laser) is fired from the source emitter at a plate with 2 tiny slits in it. Now the experiment reports 2 different findings, depending on whether it is "observed" or not. This is specifically what I am referring to. Yes, I understand weak measurements have been tested, but I don't know what theory of quantum mechanics "predicts" that when we observe phenomena such as this, they will collapse the wave function. It doesn't matter, because (and I know it has a name, and whatever name it has) it simply does not take into account what I have stated above. That being: if we observe the results of the experiment without "observers" to run tests on the beam mid-experiment, and then run the test again with the "observers" and see different results at the collector/detection screen behind, then yes, we can say that the observer is obviously affecting the experiment. The quantum leap being made here is that we are assuming there is no interference being caused by the observers, and that what is therefore happening is a quantum collapse of the wave function that is a given property of the particles fired. Anyway, it is perfectly fine that we make these leaps so long as the word "theory" is tagged on. There are still so many possible forms of interference. Even if one were to "unplug" the observer, having left it in its position, and the results again show us the wave function having not collapsed, then the PROPER assumption would be to see how the ELECTRICAL CURRENT is affecting the experiment. Obviously. As I said earlier, there are many forms of radiation that can be to blame. And it could even be a complex feedback or discharge effect caused by the moving electrons in the area creating magnetic fields. Do you understand what I am saying? Zoele (talk) 15:46, 31 May 2013 (UTC)

I'm sorry, I don't. What is this electrical current, or these other forms of radiation, you are proposing?
Sounds like speculation or original research. There's no place in Wikipedia for that. -Jordgette [talk] 18:08, 31 May 2013 (UTC)

You may be misinterpreting the language, Zoele. In QM, "observation" means "interaction". In your double-slit example, in order to "observe" the electron to determine which slit it went through, something has to interact (collide) with the electron. For example, light photons have to scatter off the electron to determine its position. The momentum change caused by the collision destroys the coherence of the wavefunction ("collapsing" it), so the interference pattern is lost. --ChetvornoTALK 19:59, 31 May 2013 (UTC)

Chetvorno understands. He also clarified it for me. Jordgette, you are a bit too defensive of Wikipedia here. Nothing I spoke about was speculation. The only thing I didn't realise was that they consider any observation an interaction in quantum physics. Light having to impact an electron to figure out where it's going. However, there are still many other methods of observation that you yourself, Jordgette, linked to under weak measurements. These are also observations of a sort, and their electromagnetic signatures are almost nonexistent. We can observe the wavefunction almost as it happens with such methods. Again, the broad issue is the generality of stating that simply observing is enough to collapse a wave function, when in truth, observations are inherently removed from their subjects' happenings. For example, if you watch a car crash happening before you, your photons shouldn't be affecting the car crash any more than the photons reflected off of anything else in the area. Now, at the quantum level, due to Heisenberg, yeah, we can't actually see an electron (well, we can see the path it took) or a photon (we can't determine its position), which is in fact what this whole experiment is truthfully reminding us. The way that it is still erroneously presented is that general observations, which is a generality and can allude to any observation, including the type that would not have the following effect, collapse wave functions. This is simply invalid to state like this. Thanks for the clarification, Chetvorno. Jordgette, open your mind. This isn't speculation; this is questioning a scientific report. Something Wikipedians often assume they're immune from, due to their proprietary standards and organization. And just for you, Jordgette: the observers they would be using that WOULD affect these experiments often have power supplies running through them which, by nature, being close to the experiment, could affect the experiment near the observation. A magnetic detector that can sense electrons moving through a slit will be projecting a magnetic field which will have to be altered for it to know something has passed through. This might be a strong enough effect to actually collapse the wave function. Call it speculation, but theory is just as speculative, and it's all over Wikipedia. You don't know what you're talking about, Jordgette. Zoele (talk) 09:36, 4 June 2013 (UTC)

Do you have any suggestions for improving this article, or are you going to just continue giving your personal opinion of quantum mechanics and various Wikipedia editors? The standard for Wikipedia is verifiability. If you state "theories" that are not in the literature, then it's speculative original research on your part. I'm sorry, but that is unambiguous. -Jordgette [talk] 20:37, 4 June 2013 (UTC)

Not to be rude, but I feared I might get such overly defensive responses.
"just continue giving your personal opinion of quantum mechanics and various Wikipedia editors?" Yes. I might JUST be giving my personal opinion of an error I see here. Are you going to sit there and point this out or do something about it? Or are you OK with the error and lack of clarity remaining? Are you going to tell me you realised all along that there was interference from the special observers used to detect where photons and electrons would be traveling before and or after the plate with the 2 slits? Are you going to tell me you knew this all along? Because I would assume, that unless you were a quantum physicist, anyone else who actually understood what was being said here (myself) would realise that the part about this phenomenon IS NOT clear. Now, seeing as how you have the time to colour your name and give so many stool samples about how to use Wikipedia and this article in its so called "verifiable" (as if published articles have NEVER included utter BS or pure personal theory before) state, then how about you realise how important this might be for ALL THOSE INTERESTED WHO ARE NOT QUANTUM PHYSICISTS NOR PEOPLE WHO JUST HAPPENS TO REALISE THAT THE SPECIFIC OBSERVATION METHODS AT THE SUB ATOMIC LEVEL CAUSE SO MUCH INTERFERENCE THAT THEY REGISTER AS MAJOR DISTURBANCES IN THE FORCES GOVERNING HAPPENINGS AT THAT LEVEL AND SUBSEQUENTLY RESULT IN DIFFERENT OBSERVABLE (IN TERMS OF OBSERVATIONS AT THE OPTICAL LEVEL) OUTCOMES OF EXPERIMENTS AT THE OPTICAL LEVEL. Might just be important. Since this your baby, what are YOU gonna do about it Jordgette?Zoele (talk) 01:19, 5 June 2013 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── Zoele, the standard for Wikipedia is Verifiability: "Wikipedia does not publish original research. Its content is determined by previously published information rather than the beliefs or experiences of its editors. Even if you're sure something is true, it must be verifiable before you can add it." I haven't seen any links to previously published sources, that would back up your definition of interference, so it is difficult to know how to go about improving the article. If you don't like the verifiability aspect of Wikipedia, consider starting a blog on the topic. -Jordgette [talk] 02:41, 5 June 2013 (UTC) Jordgette is right, Zoele. As far as I know there is no WP:RS that says the current theory of the double slit experiment is in error, as you imply above. But this is not the place to talk about that. This talk page is for discussing changes to the article (see WP:TALK). Do you have any specific changes you'd like to suggest? This is not the venue for discussing anything else. And the abuse of other editors has to stop (WP:TALKNO). --ChetvornoTALK 02:54, 5 June 2013 (UTC) What about undeniable biased favouratism? I'll be more careful henceforth but it was Jordgette who crossed the line first here anyway. It can be argued that my statements are not speculation and that not everything in this article is verifiable. Improper interpretations are clear throughout this article. In terms of the exact experiment the point of it truthfully, should be to show or explain the phenomenon demonstrated in its trials using sourced information. Some understanding of what is happening should be made rather than the leap that is : "it just collapses the wave function" - or something as simple as that, featured in the article. Assuming I am suddenly not allowed to point out obvious invalidities is just as rude. 
"You must first backflip before you can say what you are saying". The point of me saying this here was to bring it to attention. You have both now sadly made it clear you are biased against this discussion topic in terms of what it is bringing to light. Continue to defend the mistake. You could champion it on your own and even take all the credit (I really could care less) but I will not submit to the concept of having to visit my local physicist, get him to create an article, just so I can have proof that there is an obvious oversight in the grammar and word choice in certain key areas in this article. Ctrl-F the following: "That aspect of QM isn't controversial or interesting" "Sounds like speculation or original research. There's no place in Wikipedia for that" I'm not perfect but if I can't free-speech in a talk page on topic, I guess no one can right guys? Let it be known that Chetvorno's most recent reply applies directly to the both of us for I too feel abused by the quotes above, and I quote "And the abuse of other editors has to stop (WP:TALKNO)." Fine by me. Chetvorno, can you do the leg work for the fix I'm implying here? You seem to be a great wiki user. Thx in advanceZoele (talk) 21:49, 5 June 2013 (UTC) It's not going to happen. There is no leg work to be done, because no such articles exist, and for good reason: all "intuitive" interpretations of quantum mechanisms that have ever been put forward disagree with experiment. In science, that means they are wrong. Therefore your initial premise, that physicists ignore "obvious" or intuitive interpretations of quantum mechanics merely to "look cool," is just incorrect. -Jordgette [talk] 22:02, 6 June 2013 (UTC) Was talking to Chetvorno at the end but whatever. What I'm saying is, run an experiment with a test that does not interfere with the electrons yet is able to somehow detect where they're going and the result will not show a quantum collapse. Looks like you were right afterall Jordgette. Much earlier you linked towards articles talking about weak measurements. And it was my fault, I didn't realise that you were right on with your reference. They have thought of "it" and the issue now would be, to alter the conclusion presented on Wikipedia or, if wikipedia is just referencing (which it is to an extent), the original articles should have their contents updated to reflect that the simple act of observering isn't enough. I'm not saying all articles or even specific articles. But I know some do and the proliferation on Wikipedia and beyond of this incorrect wording is depressing. Is it wrong for me to feel like someone being put to death for a valid concept that the current masses are so opposed to?Zoele (talk) 03:10, 17 June 2013 (UTC) Zoele, you can't address anyone exclusively on this page; anyone is free to comment on what you write, just as you are free to comment. --ChetvornoTALK 11:30, 17 June 2013 (UTC) There is no way to do what you have been so egregiously asking people to do: to "detect where they're going" without "interfering" with the particles. In QM, "interfering" is necessary to "detect where they're going". However, there are experiments that interfere "less". That's what the 'weak measurement' experiments you mentioned do. There are experiments that only give partial information about which slit the particle went through: "that particle went through the lefthand slit with 75% probability"; or "20 of the last 100 particles went through the left slit but we don't know which ones". 
In that case, the interference pattern 'partially' disappears! It's not either-or: in between perfect knowledge and no knowledge of which slit it went through, there is a range of partial knowledge specified by probabilities. As the experiment produces more and more accurate information on "which slit", the interference pattern slowly fades from view, until an experiment that can determine with no error "which slit" the particle went through produces no interference pattern at all. But you can see that the tradeoff between particles and waves cannot be escaped. There is no free lunch. Every bit of additional information about the particle's trajectory is paid for by less visibility of the interference fringes. This has all been worked out by mainstream physicists and is specified by the Englert–Greenberger duality relation, and is at least mentioned in the article, although it is not very clearly described. Did you read the full article? --ChetvornoTALK 11:30, 17 June 2013 (UTC)

3 times. Like I said, I partially agree, but there are still ways to detect without interference. We just need new methods of doing so without interference that would result in such findings; the wording is poor, but the article is sufficient. It can't be that hard anyway. You could even use impact vectors. Going to read this article for the 4th time. I'll check out that Englert–Greenberger duality relation. In general this article makes fewer leaps than the general talk surrounding this "phenomenon" of sorts. Anyway, peace. Zoele (talk) 20:09, 23 June 2013 (UTC)
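For reference, the duality relation cited in the discussion above has a standard form (Greenberger and Yasin's inequality, sharpened by Englert):

    D² + V² ≤ 1,

where D is the which-path distinguishability and V = (Imax − Imin)/(Imax + Imin) is the fringe visibility, with equality holding for pure states. Full which-path information (D = 1) forces V = 0, no fringes; zero which-path information permits V = 1, full-contrast fringes; everything in between trades one quantity against the other, exactly as described in the comment above.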
Is randomness based on lack of knowledge, or is the behavior of the universe truly random? In other words, are the allegations by EPR about hidden variables in QM justifiable? What evidence can disprove/prove EPR?

The question is very vast and more philosophical than physical in nature. It also mixes up some concepts, like randomness, probability and determinism. It needs a clear exposition of the various terms and a narrowing down of the question, like "Are current models of the universe deterministic?"; otherwise the question is too broad and not really about physics specifically. In the meantime, I advise reading this text to clear up ideas a bit. –  Raskolnikov Dec 30 '10 at 14:51

The question body doesn't quite make sense... besides, is this asking about randomness in general, or non-determinism in quantum mechanics specifically? In the former case it's off topic and in the latter case it really needs to be reworded. –  David Z Dec 30 '10 at 23:20

Does the sun rise in the east? Do eggs harden when boiled? You have to be a little more precise in your question. Otherwise you're just being poetic. –  user346 Jan 4 '11 at 15:47

4 Answers

This is a very general question, and can be answered from several perspectives. I shall try to give an overview so you can perhaps research the areas that interest you a bit more. Firstly, the most fundamental interpretation of probability (as considered by most mathematicians) is Bayesian probability. This effectively states that probability measures the state of knowledge of an observer. This view has interesting ties with physics, in particular quantum mechanics. One could consider the random outcome of a QM measurement (wavefunction collapse) from a frequentist approach, but it is often more appealing philosophically to consider it as a state of knowledge. (The famous thought experiment of Schrödinger's cat is a good example - until one opens the box, we can only say it is an "alive-dead" cat!) Interestingly, Bayesian probability does not explicitly preclude determinism (or non-determinism). Our current understanding of quantum mechanics does, however. In other words, even knowing perfectly the state of a system at a given time, we cannot predict the state of the system at a future time. This most famously upset Albert Einstein, who spent many years of his life looking for a more fundamental deterministic theory - a so-called hidden-variables theory. Since then, however, we have learnt of Bell's theorem, which implies the non-existence of local hidden variables, suggesting that there is no more fundamental theory that "explains away" the non-determinism of QM. This is however a very contentious issue, and in any case does not rule out the existence of non-local hidden variable theories - the most famous of which is Bohm's interpretation. In summary, this issue is far from settled, and creates a lot of contention between different groups of physicists as well as philosophers today.

The "local" weasel in Bell's theorem leaves open the possibility that everything is predestined, but makes it in principle impossible to ever know all the underlying data needed to resolve any given "random" event (because it might be stored in some location outside your historical lightcone and still matter. Grrrr...). So there is a metaphysical escape for those who don't like God throwing dice, but us mere mortals are stuck with randomness in our lives.
–  dmckee Dec 30 '10 at 21:34

Well, the Schrödinger equation is deterministic, so the whole of QM is a nonlocal hidden variable theory. –  mbq Dec 31 '10 at 10:28

@mbq: The Schrödinger equation might be deterministic, but wavefunction collapse is not. The operators representing quantum observables project onto a single eigenvalue of the observable. The information about the rest of the unobserved state is lost, and the loss happens randomly. –  Jerry Schirmer Jan 1 '11 at 17:44

@Jerry This is because (as always) measurement ruins isolation -- yet the larger system composed of measured system and measuring system stays deterministic. –  mbq Jan 1 '11 at 17:52

@Sklivvz Sure, but there is no contradiction here. –  mbq Jan 1 '11 at 23:04

There is a fundamental randomness in the universe, but we can often treat things as deterministic. For example, we can accurately predict the path of a projectile provided we know the initial velocity and the gravitational acceleration. However, every measurement has uncertainty due to the accuracy and precision of the instruments used to make the measurements. From these measurements, our predictions also have uncertainty. Uncertainty becomes a fundamental problem at extremely small scales. You should read up on the Uncertainty Principle for a detailed explanation of this, but I will attempt to put it simply. To make a measurement, you actually have to interact with the object. For example, to see in the dark you may use a torch. This will shine light at objects, which will be scattered and reflected, and your eyes will detect the reflected light. Here light interacts with the object you are observing. At a large scale this doesn't change much, but at extremely small scales the energy carried by light is enough to change the system significantly. So the action of observing necessarily implies that you are changing the system, so that you can never measure something exactly. It is important to realise that this is a fundamental law of the universe, not just that our equipment is not good enough. I recommend searching for Dr Quantum videos - it is an animated series that explains these concepts. Due to these limitations, we have to model things like the position of a particle as a probability distribution. Another important thing with regards to determinism is radioactive decay. We can predict very well how much of a radioactive substance will decay in a certain time. It is simply an exponential decay. However, if we extract a single atom, we have no idea when it will decay. This is completely random - the decay of an atom is indeterministic and not at all affected by environmental factors. Again, our models are reduced to probability.

I think there is another level at which this question is being asked. Randomness of a symbol string means there is no formal data compression algorithm that reduces the string to some small form. The shortest description of a string is the level at which its complexity is reduced to an extremum, such that it can be executed by a Turing machine which halts. This is the Kolmogorov complexity of a symbol string. So to emulate the string there exists some Turing machine which has a "tape" and a stack, where the complexity of the string can't be significantly more (longer length, more bits, etc.) than that of the Turing machine, to guarantee a halting condition. A set of $n$ coin tosses will produce $N~=~2^n$ possible binary configurations.
For $n$ large, a smaller percentage of the $N$ binary strings are likely to satisfy this condition. The Kolmogorov complexity is then a form of the Chaitin halting probability, which is itself not computable in general, which does give a bound on the number of strings which are "halting." This leads to the indefinability of randomness. To define randomness you need some algorithm which can compute that a string is random. So given a string $S$ there must exist a Turing machine which determines $Rand(S)~=~T\vee F$. However, that is not mathematically possible. This means randomness is not computable.

Ok, what if we were to use some sort of metaphysical observation (no measurement) and then use this observation to choose when to measure, thus perhaps partially aligning the collapse with some higher order? I would assume the wavefunction would collapse according to the same 'random' order, but perhaps because of the partial alignment we could somehow influence the outcome of said measurement.
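To make the compression picture concrete (a sketch of my own, not from the answers above): true Kolmogorov complexity is uncomputable, exactly as the answer argues, but any general-purpose compressor gives a computable upper bound on it. A minimal Python illustration:

    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        """zlib-compressed length: a computable upper-bound proxy for
        Kolmogorov complexity (the true quantity is uncomputable)."""
        return len(zlib.compress(data, level=9))

    structured = b"01" * 5000       # a highly regular 10,000-byte string
    random_ish = os.urandom(10000)  # OS entropy: incompressible in practice

    print(compressed_size(structured))  # tiny: the pattern compresses away
    print(compressed_size(random_ish))  # ~10,000: no shorter description found

The regular string compresses to a few dozen bytes; the entropy string does not compress at all. Note that failure to compress never proves randomness - that is precisely the indefinability claim above.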
New Text Document

The flashcards below were created by user anti207 on FreezingBlue Flashcards.

1. Schrödinger: the Schrödinger equation.
2. James Chadwick: bombarded light nuclei with alpha particles and caused them to eject neutrons.
3. Antoine Henri Becquerel: discovered radioactivity (1896).
4. Dobereiner: triads, as they became known, were groups of three elements having very similar properties, in which the atomic mass of the central element is nearly the average (arithmetic mean) of that of the other two elements.
5. De Chancourtois: arranged the elements by atomic mass along a spiral wrapped around a metal cylinder, which he called the telluric helix because tellurium was the element in the middle of the plot. When the elements were so arranged, elements with similar properties were found to lie in columns running parallel to the axis of the cylinder.
6. Robert Millikan: "oil drop" experiment; Millikan was able to determine the charges on the droplets.
7. Eugen Goldstein: CANAL RAYS; led to the discovery of the proton.
8. Ernest Rutherford:
• Gold foil is bombarded by energetic alpha particles (with a positive charge) coming from a radioactive source => no plum pudding; the NUCLEUS.
• NUCLEAR MODEL OF THE ATOM: an atom is largely empty space and consists of a small, "massive," positively charged nucleus surrounded by small electrons (akin to a microscopic solar system).
• Predicted the existence of the neutron.
9. John Dalton: Law of Multiple Proportions: if two elements form more than one compound between them, then the ratios of the masses of the second element which combine with a fixed mass of the first element will be small whole-number (rational) ratios.
10. William Crookes: observed that the fluorescent spot could be moved by a magnet, determined that the "rays" moved from the cathode (-) to the anode (+), and coined the term "cathode rays".
11. George Johnstone Stoney: proposed that these cathode rays were in fact negatively charged particles, which he named ELECTRONS.
12. J.J. Thomson:
• Performed an ingenious experiment of balancing (or cancelling) deflection of the rays by an electric field with an equal but opposite deflection due to a magnetic field; then, using the laws of physics, he determined the charge:mass ratio (e/m) for these electrons.
• "Plum pudding" model of the atom.
13. Lord Rayleigh
14. Pauli: PAULI EXCLUSION PRINCIPLE: no two electrons in the same atom can have EXACTLY the same set of four quantum numbers.
15. Hund: HUND'S RULE OF MAXIMUM MULTIPLICITY (spin): electrons remain unpaired as much as possible within a set of orbitals with equal energies (these orbitals are said to be DEGENERATE).
16. Sir Karl Popper: hypothetico-deductive method, based on FALSIFIABILITY, the potential to show that something is false by observation or experiment.
17. Francis Bacon: the inductive method. A SCIENTIFIC LAW is a concise and generally applicable statement of a scientific principle.
18. Louis de Broglie: postulated that particles could act in a wave-like manner.
19. Davisson and Germer: scattered electrons off of a nickel crystal, and observed in the scattered intensity profile diffraction patterns (regularly alternating light and dark spots) --- a WAVE phenomenon.
20. John Newlands: classified the elements (by arranging them in order of increasing atomic mass) into groups with analogous properties; the "Law of Octaves".
21. Julius Lothar Meyer: periodic table (not a good one =)).
22. Max Planck: the atoms (oscillators) in the heated body cannot emit energy in any arbitrary amount (as they could according to classical physics); instead, the radiant energy could only be emitted in discrete bundles called QUANTA, with energy E = hν.
23. Albert Einstein: used Planck's quantum formula to explain the photoelectric effect, in particular the existence of the threshold frequency ν₀.
24. Johann Jakob Balmer: first formula for the hydrogen emission spectrum.
25. Johannes Rydberg: general formula for the hydrogen emission spectrum.
26. Niels Bohr: was able to theoretically derive the EMPIRICAL Rydberg constant.
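For reference, the formulas that cards 22-25 allude to but do not state (standard forms, supplied here): Planck's quantum of energy is

    E = hν;

Einstein's photoelectric equation is

    KE(max) = hν - hν₀,

which is why no electrons are ejected below the threshold frequency ν₀; and the Rydberg formula for the hydrogen emission lines is

    1/λ = R_H (1/n₁² - 1/n₂²),  n₂ > n₁,

of which Balmer's series is the special case n₁ = 2.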
Bill Hammel

The modern "pictures" of Euclidean geometry are incongruous with its modern analytical Cartesian understanding. Some details of this are explored, along with historical paths that led to this unfortunate situation which infects even topology and logic, and the mathematics upon which these depend. Spinors, in particular, are creatures of analytic geometry, and do not arise intrinsically from either quantum theory or relativity. There is a long way to go in understanding the fundamental aspects of "simple" Euclidean geometry. This is also a long-winded explanation of why and how spin-1/2 particles must be understood, in terms of classical Euclidean geometry, to be pointlike, and even classically irreducible: why supposed classical models based on angular momentum, though interesting, are doomed to failure as genuine models of the mathematical physics of spin-1/2. Spin-1/2 is already a matter of classical geometry and does not need attached fairy tales to explain it. Points of physical space are not well modeled by Euclidean points, even in a completely classical physics or Euclidean geometry. This is not something I had wanted, much less expected to show. Up to now, I had resisted the idea that an electron could be pointlike; now, I understand better the concept and picture of "point": it is not what I was taught in my mathematics or physics classes. A physical point is Q-smeared, and, even without any Q-smearing by intrinsic uncertainty, possesses algebraic structure. This should put an end to the vast number of physics papers over many decades devoted to such projects of explaining putatively quantum or relativistic particle spin in terms of constipated pictures of Euclidean geometry. Spin is already a matter of classical Euclidean geometry seen from a Cartesian point of view. It requires no applications of quantum or relativistic physics.

Classical physics and mathematics can be seen to have at least two different beginnings. Where perhaps most physicists' minds go is to the beginnings of Greek natural philosophy around 600 BCE, as in the traditions of classicists. The beginnings might equally be said to start with the Sumerians and Egyptians, around 3000 BCE, continued in the Egypto+Sumer-Akkad-Babylonian tradition which contains a good amount of the mathematics and astronomy that influences our mathematics and time keeping even to this day (360 degrees in a circle, 60 seconds in a minute, 60 minutes in an hour and 24 hours in a day, with ancient calendars having 360 days in a canonical year). That older line also contains a good amount of mathematics and science that was either ignored or forgotten by the Greeks, much of which involves number systems and their calculational techniques, many of which were lost, forgotten or ignored from Greek mathematics on. Archeologists (one might coin "archeoepistemologists") have begun to recover some of them.

[Should you think, as a pure mathematician, that the representational system of numbers is somehow irrelevant to the abstract body of pure mathematics, I invite you to attempt algorithms for decimals and long division using only Roman numerals. Mathematics always comes down to actually computing numbers that correspond to measurements, and that is even more so in physics. Proofs by construction are far more fecund than proofs by Reductio ad Absurdum.
It is completely cool to know that solutions to certain problems exist; it is better to have a specific solution in hand that can be constructed by algorithm; such things inevitably involve numerical calculation.]

A true beginning to point out with the Greeks is that mathematical logic distinguishes propositions that are "provably true." There is nothing that I am aware of predating the ideas of logic from Thales of Miletus (ca. 624 BCE - 547 BCE) and Pythagoras of Samos (ca. 572 BCE - 497 BCE). The famous Pythagorean Theorem was known and understood a millennium before Pythagoras. This is one of the first connections known between geometry and numbers that Descartes deals with many centuries later. What physics (astronomy) and mathematics may have originally arisen in other earlier cultures around or even before the Egypto-Sumerian line seems, at the moment, to be lost to us through lack of written records. On the basis of some Greek and Roman historical writings, there are possibilities in both Vedic and Keltic cultures. What after all were the circular megalithic structures of, e.g., Stonehenge all about? No written records are available that tell us anything about them, so many interesting and curious stories can be made of them, guessing by their current structures, and how their structures might have been at various times in the past.

In the later Greek classical era, the Greeks invented a formal philosophical system of manipulation of linguistic symbols that we call "logic", the grand exponent of which is Aristotle (384 BCE - 322 BCE) [Internet Encyclopedia of Philosophy; Wikipedia]. A great observer of the world, he attempted to organize and make sense of the world he saw, and in so doing laid the foundations of logic, physics, psychology and literary criticism, among others. Continuing the ancient line of thought contained in Greek (γεομετρια), he understood geometry of space as a part of the description of physical reality and not simply as an abstraction.

We now speak of deductive and inductive logic, the latter relying on a proper understanding of probability and statistics; but it is not difficult to understand that the very creation of deductive logic was had by inductive means: it did not somehow descend from heaven magically, as Athena was born fully grown from the head of Zeus by a simple stroke of the axe of Hephaestus. Analogously, the priest class of ancient Sumer developed the mythology that their written cuneiform writing was given by the gods, but its development over centuries belies that manipulative fantasy. It is also not difficult to understand that deductive logic needed the sense of a spoken language that was evolved culturally: neither did language descend magically from heaven, nor did its written forms, by which one generation can communicate to the next with more accuracy and precision than oral tradition provides. This is not to say that oral traditions are then made irrelevant; the opposite of that conclusion is an important point being made here. Precision and accuracy can also often be better propagated by an oral and interactive process, teacher to student. Some understandings cannot be, and are not, written with precision and accuracy - as masters and good students of Zen Buddhism would understand immediately. They are induced thought patterns outside of language, and certainly outside of written symbols of language.
The object is, of course, to grasp the language and the symbols by which it is represented in terms of a conceptual level of understanding. The writing down of certain things was even forbidden in some cultures, perhaps because it degraded the art of memory. We are now so dependent on the writing of things because no human being can remember the volume of what is called knowledge, even in a relatively small rarefied area, e.g., analytic number theory. If one thinks of areas cut out with a more broad sword, like mathematics, physics or philosophy, the memory requirements are almost beyond conception, much less human abilities. But the oral traditions have never disappeared, though they seem to be disappearing now when we need them desperately. Scientists and scholars tread narrow paths through forests of knowledge to get to an unknown frontier, and it is taking longer and longer to do that. Elder guides are needed, and they are disappearing for too many various reasons.

Euclid (ca. 325 BCE - 265 BCE) summarized and organized the subject of Greek geometry in his systematic "Elements" on the subject, which also happened to include the foundations of number theory, and which became the foundation of the teaching of mathematics that persists to this day, but with about a 1000-year hiatus of the oral teacher-student tradition which can be traced historically as being essential in all the arts and sciences. I was rather amazed at one time to realize that my musical composition pedigree in these relationships goes back to Haydn. Over the centuries of this new, derivative teaching, there arose a set of "pictures" of "Euclidean geometry" that are purported faithful descriptors of Euclid's combined axioms of points and lines (extended to surfaces) with an Aristotelean manipulation of the linguistics involved. This was not a mathematics that was as potently symbolic (and linguistically specialized) as is modern mathematics. Mathematics has become so much of a language in itself that modern mathematicians can make sense to each other using standard written mathematical language without being able to speak each other's human language.

Mathematicians have long since extended the general ideas and axiomatics of geometrical pictures from 2 to n dimensions [Sommerville 1958] and even to the curved spaces which quite literally leave Euclidean geometry specifically flat, beginning with Bolyai (1802-1860) and Lobachevsky (Lobachewsky) (1792-1856), who simply began questioning Euclid's 5th axiom and its logical independence. The conceptual breakthrough to generally curved spaces reaches an apex in the work of Bernhard Riemann (1826-1866).

The important ideas of René Descartes (1596-1650) [Internet Encyclopedia of Philosophy] finally combined the notions of ancient Euclidean geometry, together with the developed pictures or descriptors, with the older line of numerical calculations leading to the Islamic developments in algebra. It is unfortunate that the novel and inventive Descartes became intertwined with both continuing Western theism (spirit converted to law and logic) and the equally pernicious escape from it called "deism" (spirit converted to a machine). Neither supports reality, and both have the same level of epistemological enlightenment - commensurate with that of the tooth fairy, a product of Scholastic "thinking", the ultimate perversion of Aristotelean logic as instigated in Western theologized philosophy by Thomas Aquinas.
There are those who would disagree with the perversion part; I do not care. See, perhaps, Sketches in the History of Western Philosophy for an academic and detailed statement of the historical realities.

The Islamic line of mathematics and science, rather probably stemming from the Sumerian line, was introduced to the Western world during the time of the so-called "Reconquista" (718-1492) [Wikipedia] and the fall of Toledo in 1085 CE. (The Reconquista is one of those great lies of history: there was no taking back of Iberian lands by Christians, since the land was occupied by Vandals, whose name has wrongly been taken pejoratively into English.) Islam Spain and the history of technology

At the same time, many of the destroyed works of the ancient Greeks, perhaps especially those of Aristotle, were also rediscovered, mostly in their Arabic translations. The continuing scholarship of Islamic civilization from both the Greek and Sumerian lines was essential to the "catching up", as it should be said, of previously destroyed knowledge in the West.

Artistically and intellectually speaking, the translation from the mostly unknown (to the West) Greek into Latin of the Corpus Hermeticum, commissioned by Cosimo de Medici in 1440, is the seminal, and mostly ignored, turning point of Western culture that, together with the contents of the great library at Toledo, leaves Western culture not still wallowing like pigs in mud holes. [There is a great deal of nonsense now surrounding the Corpus; beware.] See Marsilio Ficino (1433-1499). The cultural turning point was not simply a matter of the decline of medieval sociology and the rise of an economic middle class, as many "history" textbooks would have one believe. Seek simplicity - and distrust it. The Renaissance was indeed, as the name says, a rebirth of and rediscovery of culture, art and science that had been deliberately suppressed and destroyed. Why would anyone think that all had been reclaimed? Modern cultures, regardless of mother tongue, have adopted the rather perverse notion that language, and its methodical manipulations, is somehow the faithful encapsulator of all thought; what rubbish.

These materials learned through the Renaissance were "rediscoveries" of not only forgotten, but suppressed and destroyed knowledge of and from the ancient world. Much is still lost, unknown and most probably irreclaimable. "Dark Ages" are where the history has been destroyed; there are a number of them, in various cultures. These are eras of glossed-over history, all too gently placed bookmarks that ignore the fact that both history and knowledge are too frequently lost, and all too often by purposeful destruction.

During the ages from Euclid and Archimedes (287 BCE - 212 BCE) to the Renaissance and its intellectual origins in the library of Toledo, the languages, texts and translations of ancient texts had to some extent been regained, but the thought patterns were lost, and remained lost. They are still lost, in the same sense that the arts, the art of science or the arts of engineering can be lost. The continuing lines of understanding, thought and "tricks of the trade" that pass only from teacher to student had not been merely severed; that line had been forcibly annihilated. We can only guess at what came before. The teaching of geometry is the subject, and it was a resuscitated subject, raised from the dead many centuries after Euclid.
(One could say the same of reason, logic and Aristotelean philosophy, philosophy generally, optics, biology, and a scientific concept generally, not to mention simple curiosity.) There is no available record of how our cultured ancestors may have thought about their formalisms, or no record of their results. We have documents on ancient Greek musical theory, but have little idea of what the practices were, and there is also little idea of what the nouns actually symbolized. What was, really, what we call a "mode"? Only the destroyed line of student-to-teacher connections might have told. Words can have subtle, and not so subtle, distinctions in their use, depending on where and when they are used. The word "raga" has rather different meanings in Northern and Southern traditions of classical Indian musical theory: it does not signify by itself a unified and well defined concept, as it might appear to a westerner.

Possibly, if that line of connections had not been broken, it would not have taken 1000 years for Descartes to have caught up with the connection of geometric thought with the beginnings of algebraic thought that is also, at least conceptually but not exclusively, connected with the schools of mathematics and astronomy that existed in India (Kerala) at least as far back as 310 BCE with Indian mathematics - and I suspect the existence goes considerably further back. As a matter of lost connections, one might simply notice that Elamite is a Dravidian language [Wikipedia], as is Malayalam, the language of Kerala, intimately related to Tamil.

A point of this small history is that when the ancient mathematics and science had to be reconstructed from manuscripts and translations of translations, the thought patterns that lay in back also had to be reconstructed - which is almost to say fabricated from thin air (or fat air?). While many of the early writings can be read literally and translated, the corpus of interpretive thought in back of them is lost to us.

There are two pertinent peculiar parts of this story. One is that the pictorial descriptors are fabricated, much later than the Greek mathematical texts, and that we mistake them as truly representing Greek thought. The second, which is truly rather amazing, is that these fabricated descriptors have for the most part dominated all mathematical and physical thought, even beyond the work of Descartes, despite the fact that this combining of algebra and geometry opens up entirely new worlds of mathematics and actually does, in unequivocal terms, lead to quite different descriptors of geometry.

It might be worth mentioning that while Descartes did indeed work out the utility of using algebra to solve geometric problems in La Géométrie, an appendix to his Discourse on Method, he did not conceive of analytic geometry, or even the celebrated "Cartesian plane", as we know them today. Descartes' Life and Works Arguably, Fermat (1601-1665) had as much to do with such developments. They were not exactly on friendly terms, Descartes being the arrogant prick that he was. Even well before this, Al-Mahani (ca. 820-880) was engaged in the solution of geometric problems by algebraic means. Relationships between the algebra of Al-Khwarizmi (ca. 780-850), number theory and geometry were in fact signal aspects generally of this much earlier Arabic mathematics, which itself was well aware of prior mathematical and astronomical works from India.
The idea that geometry was a theory of the physical world and not merely some cute mathematical structure was part of ancient Greek physics, and that idea remains in modern physics, without bothering to explore the new worlds of physical geometry opened up by Descartes, et al. The word itself, (γεομετρια), meaning "earth measurement", should be clue enough. This is not to say that mathematicians have not done the exploring, because they have; I simply mention algebraic geometry and geometric algebra along with the writings of W. K. Clifford (1845-1929) [Wikipedia; The Work of W.K. Clifford], B. Riemann (1826-1866) and A. Einstein (1879-1955) [Wikipedia] that lead to the Clifford-Riemann-Einstein program of geometrizing physics, which is quite impossible without understanding connections between algebra and geometry. We do not get a conceptual leg up on this program if we noodle about with wrong and restricted pictures that ignore and defy those very connections.

Physicists' notions of space and time geometry, with the small exception of relativistic spacetime, have not changed much (in operative consensus) to include the Cartesian relations in centuries. Because of this, physicists, perhaps, have strange and antiquated working pictures and notions of space and time that produce confusion and paradox where there should be none. The clearest example of such confusion is with the idea of spinor (which we devoutly wish should have the universal spelling "spinnor", so that it is truly pronounced as it is spelled), over which much fuss and many incantations have been said repeatedly, both relativistically and quantum mechanically. This mystical nonsense and confusion continues in physics textbooks to this day. Spinors, and indeed Clifford algebras [Wikipedia] (an article with incisive mathematical particularity), have absolutely nothing to do with relativity theory or quantum theory per se. They are purely aspects of classical geometry as algebraically suggested in principle by Descartes, and they can be developed with no additional physical assumptions, through a little more thought and even less formal algebra.

Clifford algebras can arise in several different ways, all of which have origins in classical Cartesian-Euclidean geometry. One way is by simply considering the factorization of a quadratic form, which is after all exactly what Dirac (1902-1984) (Paul A.M. Dirac - Biography and Nobel Lecture) did to derive his equation for the electron with spin-½ from the Klein-Gordon equation [Wikipedia]. However it may seem, there is nothing magical here. The idea is routinely propagated that the electron spin is somehow a relativistic phenomenon [Dirac equation - Wikipedia], when very clearly it is not: the very same kind of factorization of the Laplace operator (a quadratic differential operator) [Laplace's equation - Wikipedia] in the nonrelativistic Schrödinger equation [Wikipedia] can be done, and it yields the Pauli algebra [Pauli matrices - Wikipedia], an irreducible representation of the Lie algebra [Wikipedia] su(2) [Special unitary group - Wikipedia], whose irreducible representations describe quantum spin [Spin (physics) - Wikipedia] generally, as it is understood in an Euclidean space of three dimensions. One can, in fact, pull this very same "factorization trick" in any inner product space over R or C in any dimension, and with any signature, and so discuss the spin representations of SO(p, q) and SU(p, q), and associated Clifford algebras generally.
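The factorization trick is easy to see concretely. A minimal numerical sketch (my illustration, not Hammel's text), using the Pauli matrices to exhibit the quadratic form |p|² as the square of the linear form σ·p:

    import numpy as np

    # Pauli matrices: generators of the Clifford algebra of Euclidean 3-space
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    p = np.array([0.3, -1.2, 2.5])             # an arbitrary momentum vector
    sigma_dot_p = p[0]*sx + p[1]*sy + p[2]*sz  # the linear "square root"

    # (sigma.p)^2 = |p|^2 I: the quadratic form factorizes exactly
    assert np.allclose(sigma_dot_p @ sigma_dot_p, np.dot(p, p) * np.eye(2))

The identity holds because the Pauli matrices anticommute and square to the identity, which is precisely the defining relation of a Clifford algebra; nothing quantum or relativistic is used anywhere.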
Physically, Dirac's equation, which takes the 2x2 ad hoc Pauli algebra to the 4x4 Dirac algebra, merely adds the difficulty of ± signs for E by virtue of taking a square root; both the relativistic Klein-Gordon equation and the nonrelativistic Schrödinger equation factorize to expose spin-½. Moreover, both these equations are equations expressing a conservation of energy, as are all the core equations of physics. Regarding Dirac's factorization, we might just as well have written

    px² + py² + pz² - E²/c² = -m²c²

and factorized the quadratic form of the LHS of the equation. The appropriate Clifford algebra would still have materialized by purely algebraic means in a nonrelativistic context. Now figure out what that means - *geometrically*. This is not difficult if you transcend the erroneous idea that mathematical points are structureless. (Physical "points" as irreducible geometric atomics are even more complicated.)

There are many ways of conceptualizing spinors (a hint, perhaps, to their necessary ubiquity); one is as "square root of a vector", which seems only to be reasonable in a 3-dim space. Spinors can also be understood structurally as minimal (one-sided) ideals in Clifford algebras, becoming the elements of the carrier space of the representations of the Clifford algebras, and then of orthogonal Lie groups. There is no magic; just simple algebra that has nothing to do with quantum mechanics, and has also nothing to do with relativity theory. Returning to the idea of square root of a vector, and generalizing via axial vectors to bivectors, understand spinors as eigendirections of antisymmetric forms that represent bivectors. I thank R. M. Kiehn for this understanding and connection.

Any student of quantum physics who has looked at the meanings of spinors has encountered the "belt trick", nicely explained by John Baez in week61 of his always enlightening "Finds in Mathematical Physics". It is an explicitly macro topological phenomenon, not a matter of quantum fiddling. There is nothing either relativistic or quantum mechanical about this; it is a macroscopic, yet non-global topological property of the physical E3 in which we seem to exist. We seem to have developed, on the basis of our false descriptors, the idea that topology is only considered as local or global, and that there is no significant understanding in between. The belt trick actually puts the lie to that assumption, since it exists, is demonstrable and *is* in between.

With regard to the rotational symmetry of E3, the appropriate connected Lie group, SO(3), is *doubly* connected, and this is true for any SO(n), n > 2. This may have something to do with the mathematical existence of the spinor representations of rotations; on the other hand, the Lie groups SU(n) also generally have spin representations which are extensions by complexification from SO(n). An important aspect of spinor representations is a confluent isomorphism between associated Lie algebras, e.g., the isomorphism between the Lie algebras so(3) and su(2). But, while the associated spin group for SO(3) happens to have a nice confluence with a classical SU(2), this is not the state of affairs with SO(n) for n > 3. Spin groups are generally not classical Lie groups; the case of SO(3) is an isolated accident, but given the stress that this relationship is given, it can lead to gross misunderstandings of the mathematical reality.
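The double connectivity, and the way SU(2) covers SO(3) twice, can be checked numerically in a few lines (my sketch, under the standard conventions; not from the original text): a 2π rotation closes a loop in SO(3) but lands on minus the identity in SU(2), and only a 4π rotation closes the loop there - the group-level content of the belt trick.

    import numpy as np

    def so3_z(theta):
        """Rotation of E3 about the z-axis by angle theta."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def su2_z(theta):
        """The SU(2) element covering so3_z(theta): exp(-i*theta*sigma_z/2)."""
        return np.diag([np.exp(-1j*theta/2), np.exp(1j*theta/2)])

    two_pi = 2 * np.pi
    assert np.allclose(so3_z(two_pi), np.eye(3))      # SO(3): back to identity
    assert np.allclose(su2_z(two_pi), -np.eye(2))     # SU(2): minus identity!
    assert np.allclose(su2_z(2 * two_pi), np.eye(2))  # only 4*pi closes the loop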
[This description does not answer all questions, nor does it pretend that all relevant questions can be answered by existing mathematics or by me; many geometric questions of the Cartesian viewpoint remain.]

There is nothing magical about any of this; it is all "classical" (but unfinished and possibly lost) mathematical understanding, and also, most interestingly, a new understanding of classical physics. Physical reality does seem to take advantage of possibilities of the mathematical model that combines the logic of pure Euclidean geometry with the natural algebraic extensions by Descartes - which is interesting, bemusing and almost annoying because of the simplicity. If you can get the mathematics understood, the physics is not far behind. This is a bit more Platonic than I was prepared for.

What is the problem, and why does this stuff seem mysterious? It *seems* that way because we have paid entirely too much attention to, and placed entirely too much stock in, those long-ago fabricated descriptors, and authoritarian textbooks. We have become bovinely wedded, in a 1000-year disconnection of thought patterns, to various descriptorial notions of Euclidean geometry that refuse the Cartesian understandings, e.g., the idea that "physical" points have no structure - because that is how we have been taught to picture them, and *not* because the mathematics tells us so; in fact, the mathematics tells us otherwise. It is also the model that our vision supports. The question is: what does the physics tell us about the mathematical models of geometry and the pictures that we "glue on" to them? The physics tells us that the developed Cartesian mathematics is quite right, and that the simplistic Medieval geometrical pictures that do not faithfully represent the Cartesian-Euclidean formulation in algebraic mathematics are wrong; they are overly simplistic to the point of serious mathematical error in physical theory. Why should there be a problem? The only problem is that we have persisted in "picturing" Euclidean geometry wrongly while the mathematics has told us rightly that our pictures are wrong, and have been, for well over a century. The correct ideas have been available all along; they have simply been ignored. They need to be refigured and reinstalled in the minds of young mathematicians and mathematical/theoretical physicists.

These ideas are not at all new, and were essentially understood by W. R. Hamilton [Biography of Hamilton] and J. C. Maxwell (1831-1879), though they did not enunciate them so forcefully: it was an embryonic time for the resurgence of these concepts, and they were unsure. In the first edition of Maxwell's great treatise on electromagnetics (1873), he did indeed flirt with the idea of formulating electromagnetic (EM) theory in terms of Hamilton's quaternions [William Rowan Hamilton (1805-1865), Wikipedia; Quaternion, Wikipedia], and did a number of translations of the equations into a quaternionic language. In the following edition of 1884, by Oliver Heaviside (1850-1925), the quaternionic sections were removed, and this probably was a great infortuity. They were almost side notes in the general progression done in Gibbsian vector language [Josiah Willard Gibbs (1839-1903), Wikipedia], and may have seemed to get in the way. O. Heaviside edited like some modern movie editors who have the strange idea that "moving the action along" is somehow always more important than actually making sense in telling a cohesive story.
You can get Maxwell's first edition from good libraries (interlibrary loan) and see that I have described the historical and mathematical realities reasonably accurately. It should be noted, however, that Maxwell did write a manuscript in 1870 on the application of quaternions to electromagnetism that is reprinted in Vol. II of Maxwell's collected works. Maxwell was not thinking in field-theoretic terms, and did assume a physically real aether with classically physical properties, and had the mathematical machinery available for waves in an elastic medium, both vector and scalar parts, as given in [Morse 1953], pp. 142-144, where it is shown that superluminal scalar waves are possible only if the medium is compressible. If spacetime is actually compressible, it should only be so at energies that we have not yet achieved, somewhere in the neighborhood of greater than 250 GeV.

True historical reality is much like the reality of physics in that we never get to perceive it directly, but only indirectly through constructed models and theory applied to them. Since historical models are rarely deniable and often fabricated, they have an innate dubiousness, and do not suggest a convergence on any truth, and so should be taken only with the proverbial grain of salt. There are those who have claimed that Maxwell originally cast EM theory in quaternions, that the equations were somehow more general, and that some sort of conspiratorial suppression of the "real truth" was engaged in, particularly by Heaviside. They speak wrongly; the quaternionic equations were not more general, and Heaviside simply did not understand the purpose of quaternions; nor was Maxwell particularly clear on their significance, supremely careful mathematician and scientist though he was. Written history does have a clearly conspiratorial aspect, but this one is pure confabulated nonsense. The first edition of Maxwell's treatise on EM used Gibbsian vectors throughout, with quaternionic afterthoughts, and, as Maxwell himself shows in this treatise, the quaternionic formulation is isomorphic to his own vector formulation - which to a large extent missed some of the very interesting points of modern EM formulations by David Hestenes (1933-) [Wikipedia] in terms of Clifford algebras, or, as Hestenes would put it, in "geometric calculus". Further developments in understanding classical EMT can be found in the works of [E. J. Post 1963] and R. M. Kiehn: Maxwell Theory and Differential Forms.

Prof. Kiehn also gives another way of understanding spinors, as eigendirections of an antisymmetric matrix, that has implications for the understanding of the Maxwell equations and their spinorial solutions. I seem to remember also an earlier work of Penrose on spinorial solutions to the Maxwell equations related to his twistor theory [Wikipedia], in the Journal of Mathematical Physics, but I have not yet rediscovered the citation.

Maxwell's use of quaternions is considerably more messy and less elegant than modern formulations, showing, I would think, that Maxwell was greatly intrigued by quaternions, but that he was also not exactly facile with them; on the other hand, neither was anybody else. Hamilton himself continued to search for their logic and meaning through to his death. Quaternions are not a part of the general education of mathematicians, even today; a pitiable condition. Maxwell was definitely not talking about functions of a quaternionic variable as, say, noncommutative extensions of the theory of analytic functions of a complex variable.
This mathematical theory of noncommutative analysis was not yet a developed subject at the time of Maxwell, and is only now a subject of research. See also Notes on Noncommutative Geometry.

Quaternions (for DIM=4) and Clifford algebras have now elucidated a generalized meaning of EM theory, as has also the approach to the Maxwell equations through differential forms. EM is clearly a topological (not metric) theory that expresses itself in antisymmetric forms (not symmetric forms). You can express the same sorts of equations on discrete, simplicial networks [Notes on Simplicial Homology], using coboundary operators [Sorkin 1975], or Kuratowski closure operators. RMK Articles: Specials and Freeform Index Page [R. M. Kiehn]

One of the extended meanings of the Cartesian-Euclidean understanding is that the geometry of physical space and time should most generally be understood in at least locally complex coordinates, in addition to the noncommutative Cliffordian and nonlinear spinor structures associated with physical "points". Physical points do indeed have physical structure, and that structure seems best described by both a complex and Cliffordian nature. This is an understanding that I believe Clifford himself had, but that has been only sporadically picked up on. In building classical spinors on an E3, e.g., by starting with Élie Cartan's map of E3 to Hermitean 2x2 matrices [Contexts for Spinor Algebra], we discover the Pauli matrices, and also, in the building of spinors from null vectors, that the real vector coordinates of E3 must embrace a complexification that is meaningfully geometric. One might also simply recall the "Cayley-Klein" parameters in E3 and their purely classical origins, and uses. This is mathematics that begins in the 19th century, before the physical quantum and relativistic theories - and has everything to do with spinors, and the Pauli algebra su(2) of the covering group of SO(3).

A similar pattern of necessary extensions from the old Euclidean descriptors of E3 to their complexification and Cliffordization can happen in any E(p,q), p+q=n. It might be worth noting that when a real Clifford algebra Cl(p,q) associated to an E(p,q) is complexified, the inner product signature (p,q) is essentially lost, since the generating basis elements, with suitable factors of 'i', can all be legitimately made to square to ±1, uniformly; and that while real Clifford algebras have "periodicity structure" mod 8, complex Clifford algebras have periodicity structure mod 2, and so have a simpler structure theory.
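The Cartan map just mentioned is worth displaying explicitly (standard form; my addition to the text). A real vector of E3 goes to a Hermitean 2x2 matrix,

    (x, y, z) ↦ X = xσx + yσy + zσz = [[z, x - iy], [x + iy, -z]],

with det X = -(x² + y² + z²) and X² = (x² + y² + z²)·I. A null vector, x² + y² + z² = 0 (necessarily complex if nonzero), is precisely one whose matrix X is singular, and Cartan parametrizes such null vectors by a spinor (ξ, η):

    x = ξ² - η²,   y = i(ξ² + η²),   z = -2ξη,

which satisfies x² + y² + z² = 0 identically. In this precise sense the spinor is a "square root" of a (null) vector, and the complexification of the coordinates is forced on purely classical grounds.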
There is much more there, physically; it is the case that we have not been looking at physical geometry with this necessary mathematical understanding, and have become confused, making the seemingly mysterious mystical, when the mystery is rather a concocted illusion in the first place. Instead, Wheeler's notions of "pregeometry" and "geometry without geometry" have taken hold in physics, mostly because J. A. Wheeler (1911-2008) [Wikipedia] was a brilliant guy, and has shown the way in many areas. Anybody can miss the obvious, and a conceptual elision does not detract from Wheeler's genius. The return and connection with Wheeler's ideas might be through the concepts of topology and new understandings in differential forms that go beyond the usual language of tensor analysis of classical fields worked out by R. M. Kiehn. See the wealth of this at Cartan's Corner, and more particularly at RMK Articles: Specials and Freeform Index Page.

The error of the old pictures can be, and likely is, besides theoretical and philosophical inertia, a continued error of language: after all this time, i = sqrt(-1) is still called "imaginary". The complex field is the smallest algebraically closed field. There is nothing imaginary or mystical about this, though it seemed there was at the time of its invention or calling into being; yet the designation "imaginary" remains, and befuddles students even now. Why still "imaginary"? History! Tradition! Toscanini described tradition as "the last bad performance". Enough said?

We also prejudice our thinking simply by saying that a point "has dimension 0", and conclude from that its alleged structurelessness, and triviality. This picture is based on erroneously reconstructed Euclidean geometry. Physical "points" are not the same as mathematical points. The technomages of mathematics should already have had enough fun messing with the heads of the acolytes on that score, and equally so in the matter of "classical geometry". Both a classical Euclidean point as understood through Descartes, and a classical *physical* point, do have structure, and it is both complex and spinorial/Cliffordian. This means that classical physical points, having spinorial structure, are, in a language of quantum theory, Fermionic objects. This is not a trivial conclusion, or understanding. See, e.g., Introduction to quantum set theory and its (incomplete) sequel Set theory, quantum set theory & Clifford algebras. In the last reference, the exploration suggests that even a classical point in a space of 3 dimensions is well described by the 8-dimensional Lie algebra gl(2, C), and that a classical point in 4-dimensional classical spacetime (of which we are not thoroughly convinced) should be described (perhaps, within a topological neighborhood) by the 16-complex-dimensional space of the Lie algebra gl(4, C).

Perhaps, to beat a dead horse in the matter of complex numbers being required: in any formulation of QM, which always needs an expression of interfering alternative outcomes, a complex structure cannot be avoided [Mackey 1968]. One cannot, e.g., in Q statistical mechanics, willy-nilly separate q-space from p-space; they must be considered together. Then the complex structure on phase space is unavoidable, as it is also essentially unavoidable in QM. The essential noncommutativity of Q phase space is also unavoidable; so it presents itself as a prototypical noncommutative geometry.
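A one-line sanity check of where that noncommutativity bites (a standard textbook argument, added here for concreteness; not in the original text): the canonical commutation relation is

    [q, p] = iħ·1,

and in any finite dimension n the trace of a commutator vanishes, tr[q, p] = 0, while tr(iħ·1) = iħn ≠ 0. So the CCR have no finite-dimensional representation at all and, by the Wielandt-Wintner theorem, no representation by bounded operators either: at least one of q, p must be unbounded - which is exactly the parenthetical caveat about unboundedness in the next paragraph.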
It is often suggested in QM texts that the complex projective Hilbert space is somehow the essential element of QM; this is patently wrong on two counts. First, all separable (all one needs for QM) infinite-dimensional Hilbert spaces are isometrically isomorphic; it is so then also for the associated projective Hilbert spaces. This leaves no room for distinguishing physically different systems. Second, and complementarily, the physics is actually captured in the structure of the algebra of "observables" corresponding to a *-algebra of linear operators (some, necessarily unbounded, if you believe in CCR) acting on the Hilbert space, or more likely on a common domain within the Hilbert space of both the q and p operators. In any case, the kinematics is defined by the operator algebra, and the dynamics is defined by a semigroup of operators acting on the *-algebra. The Hilbert space can actually be conceptually eliminated by hiking it up into the C*-algebra, using projection operators, as the boundary (set of extremal points) of the forward cone of the algebra, whose interior describes the density operators of states in quantum statistical mechanics. The algebra of operators containing both the observables and the states of the system is the thing, not the Hilbert space; a point made long ago, and I think originally by I. Segal. This does not seem to have caught on.

Yet another clue from physics that the fundamental picture tools of Euclidean geometry need to be reconsidered is in the concept of supersymmetry [Supersymmetry - Wikipedia]. The distinctions among Bosonic (symmetry of interchange) and Fermionic (antisymmetry of interchange) particles with their different statistical behaviors are separate symmetries. Though there is little in the way of physical evidence to suggest it, it would be completely cool to have a theoretical framework which combines these two "particle types" into simply "particles". This did lead to the idea and investigation of superalgebras; it turns out that Clifford algebras are, in fact, superalgebras. Viewing Euclidean geometry in proper Cartesian fashion then provides evidence on the mathematical level for the essential validity of superalgebras which express supersymmetry. While physical evidence suggests the mathematical language in which physics is expressed, it is also true that the language suggests how to view the meaning of the physical evidence and mathematical language. Implications in both directions are operative here.

Perhaps it is no great shock to see that these extended Cartesian descriptors of physical geometry fit nicely into Klein's Erlanger program for geometry, where a geometry is specified by the action of a group on a space together with a set of geometric invariants of the group action.

One way of understanding "structure of a point" in a classical sense of pictures goes like this: in E3, for example, any point can be considered a Euclidean ball with antipodal points identified. Physically and classically, we would not observe this substructure directly, and space would appear to be conforming to the old picture. Passage of anything "through the ball" is not apparent from the outside. Such a ball is topologically equivalent to an SO(3) manifold, or a "contracted toroid with a half twist": take a finite cylinder of radius r, score it with lines parallel to its central axis. Holding one end fixed, twist the other by π radians. Now, glue the two disc ends together.
There are "geodesics" on the surface of this solid toroid which must go around the central hole of the "doughnut" twice before returning to the starting point. Take the radius of the circle that is now the centroid of the Urcylinder to be R. If we contract the toroid's major radiu R→0, allowing the substance of the toroid to pass through itself, the result is a ball with its antipodal points identified. The manifold of the Lie group SO(3) can be understood as the same thing by allowing two of its angular parameters to have range [-π/2, +π/2] and [0, 2π] covering a spherical surface, and a third with range [0, π] that as a radial coordinate with the two surface coordinates finally determines a ball in E3 of radius π with antipodal points identified. Think Cayley-Klein parameters. If one thinks of a physical space being made up of such structured physical points, an idea resembling Bose-Einstein condensation of the space itself is not too far behind - given enough points. Simply put two such projective spheres in contact, and consider the identifications; continue the process. How many such classical angelic points may dance on the surface of another? As many as want to, within some finite limit? Limitations on that arise when points acquire quantum bulk, and some principle (Pauli, e.g.) that holds them apart, and from this notion of dimension can arise in the way that dimension is related to the "Kissing Problem", of how many unit spheres in n dimensions can be tangent to a central unit sphere. [Conway 1993] If the general concepts of geometry need some rethinking, then most probably do the general concepts of its abstraction, topology. Keep in mind that a unitary quantum theory is spoken of in terms of a projective Hilbert space, and its sphere where antipodal points are identified. There are fundamental geometric problems with combining the Poincaré symmetry of a relativistic spacetime and the internal symmetries of the Standard Model [Wikipedia] in elementary particle physics. The suggestion is, of course, that these problems stem from ignoring the necessities of the very classical geometry upon which the physical model is predicated. This extended way of seeing E3 involves to start, only an additional structure of points that is physically consistent with the primitive Scholastic descriptors. One only gets to "see" such structure physically when the quantum (fine grained) nature of physical geometry is probed or taken into account, and the classical points are fuzzed according to the basic necessities of quantum theory. Within the Brouwer-Urysohn concept of dimension, [Hurewicz 1948] one passes hierarchically in three dimensions from point to line (path), to plane (surface) to volume. We have already discussed the necessary complication of the concept of Euclidean "point" from a classical viewpoint. From a bottom up viewpoint, the obvious suggestion is that a classical 0-dimensional geometrical point, regardless of the space of which it is a member, should be replaced mathematically and conceptually with the smallest complex Clifford algebra. The question, of course, is what is the "smallest"? In one sense, the smallest is simply the complex numbers; but these are commutative, while the fundamental conceptions of quantum theory are noncommutative. So, the next and only possible choice of smallest "quantum point" is the complex Clifford algebra of complex dimension 4, and real dimension 8, which happens to be represented by the algebra of complex 2x2 matrices. 
This also happens to be the defining and lowest dimensional faithful IRREP of the Lie algebra gl(2, C). This geometrical viewpoint implies that all fermionic spin 1/2 particles are essentially geometrically irreducible, and "quantum pointlike". Thus, explanations of their existence in terms of wrong Euclidean pictures will fail. If one seeks further explanation, it must be found within a quantum theoretical language, under the constraint that this quantum point with structure is still essentially irreducible, from the viewpoints both of classical geometry and of quantum theory. The further subtleties are in what quantum theory may contribute beyond the classical geometry. The new business is then the concept of line/path, which has to do with either or both the fitting together of quantum points, or, from an opposed direction, the boundaries of quantum surfaces. I will come back to the "line problem" soon, I hope.

Whether or not I live to complete this concept in terms of Brouwer dimensions greater than zero, this is not some sort of new "nutsy" stuff: it is rather the digging out of aspects of the mathematical understanding of Euclidean spaces that "should" have been understood and well known centuries ago. Seeing this now, after all this studious time has passed, I am quite a bit mortified, and feel like an idiot for not having seen the obvious years ago. There is nothing at all peculiar, even in modern days, and maybe especially in modern days, about rediscoveries in mathematics, physics and other sciences. The nicely attributed "Ising Model" of spontaneous magnetization as a phase transition in statistical mechanics comes easily to mind. Ising model - Wikipedia. Ising originally used the concept of nearest neighbor interactions of spin 1/2 entities in a lattice to explain spontaneous magnetization. He worked out the model in one dimension and found that there was no such behavior. The model was generally abandoned for a while, but revived by Heisenberg. The first problem turned out to be that in one dimension there are not enough (2) nearest neighbors, while in two dimensions the 4 nearest neighbors of a square lattice are enough, and certainly so in three dimensions with the 6 nearest neighbors of a cubic lattice. Onsager, a chemist, showed in a most clever but tedious way that in two dimensions spontaneous magnetization does indeed exist; just solving the problem and showing this was very difficult, and no one had done it before Onsager. [Huang 1963], and references therein.
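A tiny sketch of Ising's 1D result recalled above; this is my own illustration in Python, not the author's, and it assumes the standard closed-form transfer-matrix solution of the infinite nearest-neighbor chain. The per-spin magnetization vanishes as the field is removed, at any nonzero temperature:

```python
# Exact magnetization of the infinite 1D Ising chain from the largest
# eigenvalue of its 2x2 transfer matrix (standard closed form).
import numpy as np

def magnetization_1d(beta, J, h):
    """Per-spin magnetization; beta = 1/kT, coupling J, external field h."""
    return np.sinh(beta * h) / np.sqrt(np.sinh(beta * h)**2 + np.exp(-4 * beta * J))

beta, J = 2.0, 1.0                 # a fairly low temperature
for h in [1e-1, 1e-3, 1e-6]:
    print(f"h = {h:.0e}:  m = {magnetization_1d(beta, J, h):.6f}")
# m -> 0 as h -> 0: no spontaneous magnetization in one dimension,
# in contrast to Onsager's two-dimensional result.
```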
The models of the physics of the universe in which we live have been highly restricted by the barriers of concepts based on the illusions of our common perceptual neurology, so much so that our intellectual concepts even neglect primitive errors that obviously contradict reality. We rely entirely too much on direct visual perception. Because we are so physically large, we have no perception of the ultimately small. Our visual perception is only of molecular order. Discovery of the quantum regime has broken one barrier of what is possible and what is also necessary in the small for our own existence. Were it not for the existence of quantum indeterminacies, this mysterious universe in which we live could not exist: it would be a dead thing that would not allow the birth of anything new beyond its primordial existence. Time could not have any meaning, or exist in any sense.

We construct a 3+1 dimensional existence because of our size and our inherited neurological substructure, and base our mathematics and mathematical models of reality on those things; we have "discovered" molecules, atoms and certain elementary particles. What else may be discovered? Even logically and mathematically it is clear that neither infinitesimals nor infinities can exist in reality; yet all current physics is based on these 19th century interpolational delusions. Why? Because that monumental hierarchy of mathematics and mathematical physics exists, and it is how we were instructed to think, despite the fact that it is clearly wrong. All of physics has manifestly gone awry, beginning with the incompatibilities of quantum theory and relativity theory. Those began about a century ago! Ptolemaic epicycles come to mind.

This is a minimal reconstruction of geometrical pictures, but there is no reason to suspect that complex Clifford algebras are the end of the story. Spinors in dimensions higher than 3 are more complicated, though still connected with Clifford algebras. There are at least further algebraic and geometric connections with division algebras, Lie, Jordan and Malcev algebras. This is still an open door to epiphanies in the physics of space. In considering the various possible connections from complex Clifford algebras for space itself, one should be mindful of the "time" concept and its multiplicities of meanings, but also of the tantalizing SU(3)xSU(2)xU(1) symmetry of the "standard theory" of elementary particle physics, and of the fact that while EMT is an essentially topological expression of space and time, concerning antisymmetric tensorial entities, general relativity theory is a metric theory concerning symmetric tensorial entities. Symmetry and antisymmetry are algebraic qualia, independent of "pictures". To interpolate a mathematical structure between the complex Clifford algebra at a physical point and the mathematical point of a continuous manifold containing the metrical substance of GR, we will need something akin to a pseudo-Hermitian complex manifold, where the complex structures in the local tangent spaces are not necessarily integrable. This pseudo-Hermitian complex manifold need not even be a manifold as such: it could be discrete in the quantum sense. Both EMT and GR can be done on simplicial complexes, but the 0-dimensional structureless mathematical point must be replaced with a physical point modeled on its most elementary level by a complex Clifford algebra.

"The truth points to itself" -- Kosh Naranek

Many thanks to Prof. R. M. Kiehn for very helpful discussion, for asking pregnant questions, and for leading me to his work that made so many connections for me. Thanks also to Mitch Smith for prior discussions on continua, models and ancient mathematics, to Richard B. Carter for encouraging me to read Descartes, and for old and seminal discussion, to Elihu Lubkin, to Leonard Parker and Dale Snider for discussion and comment on the geometrical meanings of spinors, and last, and certainly least, to my old physics teacher Harvey Kramer for repeatedly and angrily telling me that every unorthodox thing I thought was perfectly insane, including my naïve reinvention and working out of fractional differential calculus, which was a new (useless and irritating) idea to him. It is so wonderful to have knowledgeable teachers; in total, I think there were five or six: two were in musical composition. Others were mentors of one kind or another, Don Gelman, for one, who guided me as an undergrad.
Love on their heads.

"We live for the one; we die for the one." -- Zathras

"Zathras is used to being beast of burden to other people's needs. Very sad life, ... probably have very sad death, but at least there is symmetry."

© August 2006 by Bill Hammel (bhammel@graham.main.nc.us).
Quantum probability
From Wikipedia, the free encyclopedia

Quantum probability was developed in the 1980s as a noncommutative analog of the Kolmogorovian theory of stochastic processes.[1][2][3][4][5] One of its aims is to clarify the mathematical foundations of quantum theory and its statistical interpretation.[6][7] A significant recent application to physics is the dynamical solution of the quantum measurement problem,[8][9] by giving constructive models of quantum observation processes which resolve many famous paradoxes of quantum mechanics. Some recent advances are based on quantum filtering[10] and feedback control theory as applications of quantum stochastic calculus.

Orthodox quantum mechanics

Orthodox quantum mechanics has two seemingly contradictory mathematical descriptions: 1. deterministic unitary time evolution (governed by the Schrödinger equation) and 2. stochastic (random) wavefunction collapse. Most physicists are not concerned with this apparent problem. Physical intuition usually provides the answer, and only in unphysical systems (e.g., Schrödinger's cat, an isolated atom) do paradoxes seem to occur. Orthodox quantum mechanics can be reformulated in a quantum-probabilistic framework, where quantum filtering theory (see Bouten et al.[11][12] for an introduction, or Belavkin, 1970s[13][14][15]) gives the natural description of the measurement process. This new framework encapsulates the standard postulates of quantum mechanics, and thus all of the science involved in the orthodox postulates.

In classical probability theory, information is summarized by the sigma-algebra F of events in a classical probability space (Ω, F, P). For example, F could be the σ-algebra σ(X) generated by a random variable X, which contains all the information on the values taken by X. We wish to describe quantum information in similar algebraic terms, in such a way as to capture the non-commutative features and the information made available in an experiment. The appropriate algebraic structure for observables, or more generally operators, is a *-algebra. A (unital) *-algebra is a complex vector space A of operators on a Hilbert space H that

• contains the identity I, and
• is closed under composition (a multiplication) and adjoint (an involution *): a ∈ A implies a* ∈ A.

A state P on A is a linear functional P : A → C (where C is the field of complex numbers) such that 0 ≤ P(a*a) for all a ∈ A (positivity) and P(I) = 1 (normalization). A projection is an element p ∈ A such that p² = p = p*.

Mathematical definition

The basic definition in quantum probability is that of a quantum probability space, sometimes also referred to as an algebraic or noncommutative probability space.

Definition: Quantum probability space. A pair (A, P), where A is a *-algebra and P is a state, is called a quantum probability space.

This definition is a generalization of the definition of a probability space in Kolmogorovian probability theory, in the sense that every (classical) probability space gives rise to a quantum probability space if A is chosen as the *-algebra of bounded complex-valued measurable functions on it. The projections p ∈ A are the events in A, and P(p) gives the probability of the event p.
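As an illustrative aside (not part of the article), the definition can be exercised in the smallest nontrivial case: A taken as all complex 2x2 matrices, with the state P(a) = tr(ρa) for a chosen density matrix ρ. The choice of Python and of the particular ρ below are assumptions of this sketch:

```python
# A toy quantum probability space (A, P): A = complex 2x2 matrices,
# P(a) = tr(rho a) for a fixed density matrix rho.
import numpy as np

rho = np.array([[0.75, 0.0], [0.0, 0.25]])    # positive, trace 1

def P(a):
    """State on the *-algebra of 2x2 matrices."""
    return np.trace(rho @ a)

I2 = np.eye(2)
assert np.isclose(P(I2), 1.0)                  # normalization: P(I) = 1

# Positivity: P(a* a) >= 0 for an arbitrary a
a = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
assert P(a.conj().T @ a).real >= 0

# An event is a projection p = p^2 = p*; P(p) is its probability.
p = np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.allclose(p @ p, p) and np.allclose(p.conj().T, p)
print("P(p) =", P(p).real)                     # 0.75

# Noncommuting projections distinguish this from a Kolmogorov space:
q = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # projection onto (1,1)/sqrt(2)
print("pq != qp:", not np.allclose(p @ q, q @ p))
```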
References

1. L. Accardi, A. Frigerio, and J. T. Lewis (1982). "Quantum stochastic processes". Publ. Res. Inst. Math. Sci. 18 (1): 97–133. doi:10.2977/prims/1195184017.
2. R. L. Hudson, K. R. Parthasarathy (1984). "Quantum Ito's formula and stochastic evolutions". Comm. Math. Phys. 93 (3): 301–323. doi:10.1007/BF01258530.
3. K. R. Parthasarathy (1992). An Introduction to Quantum Stochastic Calculus. Monographs in Mathematics 85. Basel: Birkhäuser Verlag.
4. D. Voiculescu, K. Dykema, A. Nica (1992). Free Random Variables. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups. CRM Monograph Series 1. Providence, RI: American Mathematical Society.
5. P.-A. Meyer (1993). "Quantum probability for probabilists". Lecture Notes in Mathematics 1538. Berlin: Springer-Verlag.
6. John von Neumann (1929). "Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren". Mathematische Annalen 102: 49–131. doi:10.1007/BF01782338.
7. John von Neumann (1932). Mathematische Grundlagen der Quantenmechanik. Die Grundlehren der Mathematischen Wissenschaften, Band 38. Berlin: Springer.
8. V. P. Belavkin (1995). "A Dynamical Theory of Quantum Measurement and Spontaneous Localization". Russian Journal of Mathematical Physics 3 (1): 3–24. arXiv:math-ph/0512069.
9. V. P. Belavkin (2000). "Dynamical Solution to the Quantum Measurement Problem, Causality, and Paradoxes of the Quantum Century". Open Systems and Information Dynamics 7 (2): 101–129. arXiv:quant-ph/0512187. doi:10.1023/A:1009663822827.
10. V. P. Belavkin (1999). "Measurement, filtering and control in quantum open dynamical systems". Reports on Mathematical Physics 43 (3): A405–A425. arXiv:quant-ph/0208108. doi:10.1016/S0034-4877(00)86386-7.
11. Luc Bouten, Ramon van Handel, Matthew James (2007). "An introduction to quantum filtering". SIAM J. Control Optim. 46 (6): 2199–2241. arXiv:math/0601741. doi:10.1137/060651239.
12. Luc Bouten, Ramon van Handel, Matthew R. James (2009). "A discrete invitation to quantum filtering and feedback control". SIAM Review 51 (2): 239–316. arXiv:math/0606118. doi:10.1137/060671504.
13. V. P. Belavkin (1972/1974). "Optimal linear randomized filtration of quantum boson signals". Problems of Control and Information Theory 3 (1): 47–62.
14. V. P. Belavkin (1975). "Optimal multiple quantum statistical hypothesis testing". Stochastics (Gordon & Breach Sci. Pub.) 1: 315–345. doi:10.1080/17442507508833114.
15. V. P. Belavkin (1978). "Optimal quantum filtration of Markovian signals [In Russian]". Problems of Control and Information Theory 7 (5): 345–360.
Optical and thermodynamic properties of gold metal nanoparticles. Effect of chemical functionalization.
by Marcelo Carignano, Baudilio Tejerina, George C. Schatz
PDF document: AuNanoParticles.pdf (1 MB, uploaded by Marcelo Carignano)

This laboratory is intended to introduce the student to the use of semiempirical electronic structure methods. In particular, the semiempirical methods will be applied to the study of metallic clusters and the interaction of the clusters with discrete molecular systems such as pyridine. The reactivity of the metallic systems will be rationalized in terms of the electron population of the atoms in the cluster. The metal-ligand affinity will be quantitatively estimated.

The noble metal nanoparticles (Cu, Ag and Au) exhibit physical properties that make them unique for scientific and technological applications: electronics, catalysis, biotechnology, spectroscopy, etc. These properties may be modulated by specific chemical treatment of the particles during their synthesis. Hence, the geometrical and electronic characterization of their structures is crucial to understand the mechanisms that control such properties. In this exercise, we will use a theoretical approach to study nanoparticles of gold, including their structural and optical properties and the changes in these properties that arise from chemical functionalization. In order to assess the quality of our calculations, it is common to contrast the results with other theoretical studies based on similar models. Thus, we will consider structures of tetrahedral/pyramidal shape whose geometries (interatomic distances and angles) have been taken from their natural crystal lattices. The affinity of the metal particles for other chemical species will be calculated from the difference in energy between the isolated molecules and the aggregate. Figure 1 shows a schematic representation of the two binding modes of interaction between pyridine and a metal particle of pyramidal shape. In this respect, we will identify the reactive sites of the metal particles and study which mechanism is thermodynamically favored. The formation of the aggregate molecule/cluster will also be characterized by its optical properties (absorption spectrum) and how they correlate with the structure and composition of the complex.

Figure 1: Representation of the two binding modes of a donor ligand (pyridine) to a metal particle of tetrahedral shape. In the S mode the pyridine attaches to a face of the particle, while in the V mode it reacts at the vertices.

For this particular exercise we will use the program CNDO/INDO, accessible at nanoHUB (www.nanohub.org). The atomic coordinates of the species that we will study are available as separate attachments.

1. The CNDO/INDO program interface.

After logging into nanoHUB, select the option My HUB (see Figure 2). On the menu My Tools select the tab All Tools and locate the application CNDO/INDO. If you click on the star next to the name, the application will be marked as Favorite and placed in this folder.

Figure 2: nanoHUB window showing My HUB. The programs are listed on the tab All Tools of the menu My Tools. Selecting the star icon next to the name of the desired tool marks it as Favorite, and a link to it is placed in the Favorites folder. The Launch Tool button starts the application.

At this point you may start the application by clicking on the Launch Tool button. The window with the program's GUI will appear as illustrated in Figure 3.
The Popout feature of the tool (located on the upper right hand side of the window) may be activated by clicking on the button; it will create a window dedicated to the tool on your local computer, which facilitates the work.

Figure 3: Initial window of the CNDO/INDO program interface in the web browser.

If you wish to continue working on the current project from a different location and computer, you may close the browser and even shut down the computer; when you log back in to nanoHUB, all will be as you left it. Even if a job was running, it will continue running in the background or, most likely, will have finished the calculation in the meantime. Do not, however, click on the Close button; it will terminate the current CNDO/INDO session for good.

2. Definition of the model.

2a. Molecular geometry, electric charge, and spin multiplicity.

To illustrate the selection of a molecular model, let's consider for instance the cluster Au20 as a model of a gold nanoparticle of pyramidal shape. The structure, extracted from the experimental X-ray structure of the material, is given in Cartesian coordinates in the file Au20.xyz. Download the file to your computer and then upload the coordinates to the CNDO/INDO program: on the CNDO Job menu, located on the main window of the tool (see Figure 4), select Upload… and follow the instructions to pass the Au20.xyz file to the CNDO/INDO program. The Cartesian coordinates of the cluster will appear in the text box Atomic Positions. The interatomic distances are in Angstroms, so the Units option must be set accordingly. During the computation we will ignore the symmetry of the system, so select option "C1" on the Point Group menu.

Figure 4: Defining the chemical model: atomic coordinates, charge and spin multiplicity.

Once the structure has been entered, the electric charge and the spin multiplicity of the system must be specified. Enter the appropriate values in the Charge and Multiplicity records. In the case of the Au20 cluster we will assume that the particles are neutral (Charge = 0) and that all the electrons are paired with opposite spins.

Figure 5: Structural model for a neutral Au20 cluster.

To complete the description of the system, we have to specify the theoretical approach that we will utilize in the computation. In our study we will calculate the wave function by solving the Schrödinger equation using the approximate, semiempirical method INDO (Intermediate Neglect of Differential Overlap).

2c. Simulation parameters.

The Schrödinger equation is solved iteratively using the Self-Consistent Field (SCF) method. Figure 6 shows the parameters that control the calculation. In the SCF method, the wave function is recomputed cyclically until the associated energy converges; that is, until the difference in energy between two consecutive iterations drops below a predefined threshold. The limit is specified by the parameter Convergence in the Control Parameters tab. In this exercise we will use 10^-5 eV as the energy threshold and SCF Iterations: 200. This means that if the energy has not converged in 200 SCF iterations, the program will stop. It is good practice to keep this parameter small, run the calculation, and observe the SCF trend: if in the first few iterations it oscillates, stop the calculation and increase the SCF damping factor (parameter Shift in Control Parameters). Then rerun the calculation. For small systems, the solution is generally found in fewer steps.
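After downloading the Output Log, the SCF trend can also be inspected offline. The helper below is purely illustrative: the exact wording of the energy lines in the CNDO/INDO log may differ, so the search pattern (and the example filename) are assumptions to be adapted:

```python
# Extract SCF energies from a downloaded log file and inspect convergence.
import re

def scf_energies(log_path, pattern=r"total energy\s*=?\s*(-?\d+\.\d+)"):
    """Return all energies matching `pattern`, in file order."""
    energies = []
    with open(log_path) as f:
        for line in f:
            m = re.search(pattern, line, flags=re.IGNORECASE)
            if m:
                energies.append(float(m.group(1)))
    return energies

# energies = scf_energies("Au20_output.log")   # hypothetical filename
# diffs = [abs(b - a) for a, b in zip(energies, energies[1:])]
# A steadily shrinking `diffs` indicates convergence; oscillation suggests
# raising the damping (parameter Shift) and restarting, as described above.
```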
If the SCF procedure is converging but needs more iteration steps, it is possible to restart the calculation. See the section Restarting Computations below.

Figure 6: Simulation Control Parameters. The default values are appropriate for most of the calculations, so in general they do not need to be changed.

Analysis of the results

After the computation has finished, a new window will automatically appear showing a picture of the structure (atomic configuration) of the system under study (see Figure 7).

Figure 7: Structure of the particle Au20 showing the atomic labels.

Total energy

In order to verify that the calculation has been successfully completed, we must make sure that the SCF procedure has converged. On the Result menu, select the option Output Log. The window contains the description of the system and the results of the calculation. You may either download its content to your local computer for further reference (advisable), or search for specific information online using the built-in 'Find:' button located at the lower part of the window. For instance, search for the text "total energy"; if the string is found, its first appearance will be highlighted (see Figure 8). In the example, the SCF took 10 iterations to find a solution of the semiempirical (INDO) Hartree-Fock equations. The total energy, the sum of the electronic and nuclear energies, is –192.0721473 eV. A few lines below is the energy that separates the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbitals, 3.6775 eV.

Figure 8: Output Log showing the converged SCF total energy (highlighted) after 10 SCF cycles and the HOMO/LUMO energy gap, 3.6775 eV.

Charge distribution

The reactivity of a molecule may be inferred from its electron density distribution. One way to quantify such a distribution is by means of charge population analysis. The charge at each atomic center is the difference between its nuclear charge (atomic number) and the number of electrons in the orbitals assigned to that center. Figure 9 shows the resulting Orbital Occupancy for the converged wave function. The gold atoms with occupancy less than 1 in the valence atomic orbital 6s have ceded electron density to become part of the cluster and hence will bear a positive charge. If the occupancy is more than 1 electron, however, they will be assigned a partial negative charge. The numerical values obtained for the Au20 system are shown in the column Charge, highlighted in Figure 9. In the log file window, the population analysis section may be located by searching, for instance, for the string charge.

Figure 9: Atomic charge population analysis of the Au20 particle.

Based on the electron population analysis, we may establish an electrostatic criterion to predict which centers of the Au20 will likely react with nucleophilic species and which others will be prone to react with electrophilic species. Although electrostatic interactions may seem a reasonable argument for predicting the initiating step of a reaction, they may not be sufficient: electronic factors are also in effect; they may dominate the overall mechanism and change the direction of the reaction predicted by electrostatic factors alone. In a chemical process, the total energy is the decisive stability criterion between reactants and products.
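The population-analysis arithmetic just described can be written out explicitly. In a valence-only semiempirical method the relevant "nuclear" charge is the valence core charge; the value of 11 used for Au below (5d¹⁰6s¹) and the example occupancies are assumptions for illustration, not output from the actual Au20 run:

```python
# Partial atomic charge = core (valence) charge minus electron population.
valence_charge = {"Au": 11}   # 5d10 6s1 valence shell assumed for Au

def partial_charge(element, population):
    return valence_charge[element] - population

# An atom that has ceded density from its 6s orbital bears a positive charge:
print(partial_charge("Au", 10.85))   # +0.15: an electrophilic site
# An atom that has gained density bears a negative charge:
print(partial_charge("Au", 11.12))   # -0.12: a nucleophilic site
```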
Optical properties

Although the absorption spectrum may also be analyzed numerically by reading the log file (search, for example, for the string wavelength), the tool provides a graphical representation of this data, showing the simulated UV/Vis spectrum as a function of the wavelength. On the 'Result' options menu, select "Absorption spectrum". The result obtained for the Au20 cluster is shown in Figure 10.

Figure 10: Calculated absorption spectrum of pyramidal, neutral Au20 particles.

Restarting Computations

It may happen that the energy does not converge in the specified number of SCF iterations. In these cases, it is possible to restart and continue the calculation from the end of the previous one. To do so, first locate the SCF steps in the Output Log and make sure that the energy shows a convergent trend. Then click on the Input button, which will take you back to the input window, and open the Control Parameters dialog tab as shown in Figure 11. On the Restart option select YES, and for MO Assign pick PREVIOUS.

Figure 11: Restarting incomplete SCF calculations. Set Restart: YES and MO Assign: PREVIOUS.

At this point, the computation may be restarted by clicking on the Simulate button.

The Cartesian coordinates of the structures are available as separate attachments.

A. Nanoparticles.

1. Nucleation. The files M4_Tetrahedral.xyz and M4_Planar.xyz (with M = Au) contain the respective coordinates for the M4 clusters. Using the semiempirical method INDO, calculate the electronic structures and tabulate the total energies of the clusters as a function of the geometry and the nature of the metal M. Based on the results, propose the mechanism of nucleation of the atoms to form the nanoparticle.

B. Electrostatics.

2. Calculate the electronic structure of the Au20 particle and determine the HOMO-LUMO gap.

3. Using the population analysis, identify the atoms of the particle as electron donors (nucleophilic sites) or acceptors (electrophilic sites). Based on these values, predict which sites (V or S) of the metal particles would most likely be attacked by nucleophilic and electrophilic substances.

4. Take note of the calculated dipole moment of the particles (search: moment) and discuss how the values vary with the geometry of the metal clusters.

C. Reactivity and stereochemistry.

5. Write the thermodynamic equations that represent the formation of a complex between the nanoparticle and the ligand (pyridine), and complete them to show the energetics of the association reactions. What conclusions can you draw concerning the dependence of stereochemistry on the nature of the metal?

6. Review the predictions proposed in question 3 concerning the thermochemistry of donor/acceptor interactions. Are electrostatic considerations a sufficient criterion to predict the reactivity of the reactants and the stereochemistry of the products?

Other questions (if appropriate and time allows):

In excess of solvent, the pyridine may saturate the metal particles by occupying all acceptor (electrophilic) sites:

Au20 + 4 Py → [Au20·Py4]

The files Au20_Py_Vertex.xyz and Au20_Py_Surface.xyz contain the Cartesian coordinates of the structures that represent the V and S models for the saturated metallic species.

1. Calculate the respective enthalpies of reaction and determine which of the two species is thermodynamically favored.

2. Based on the theoretical predictions, use the optical properties to propose an experiment that allows the characterization and identification of Au nanoparticles in solutions of pyridine.
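The energy bookkeeping behind questions 5 and 1 above is simple enough to script. This sketch uses the converged Au20 total energy from the worked example; the pyridine and complex energies are placeholders to be filled in with your own converged SCF totals (here INDO total-energy differences stand in for reaction enthalpies, as in the exercise):

```python
# Association energy for  Au20 + 4 Py -> [Au20.Py4]  (negative = favorable).
E_Au20    = -192.0721473              # eV, from the worked example above
E_Py      = None                      # total energy of isolated pyridine
E_complex = {"V": None, "S": None}    # vertex- and surface-bound Au20.Py4

def association_energy(e_complex, e_cluster, e_ligand, n_ligands=4):
    """Delta E = E(complex) - [E(cluster) + n * E(ligand)]."""
    return e_complex - (e_cluster + n_ligands * e_ligand)

# Once all three energies have been computed:
# for mode, e in E_complex.items():
#     print(mode, association_energy(e, E_Au20, E_Py), "eV")
# The more negative Delta E identifies the thermodynamically favored mode.
```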
Auxiliary Files

Au20.xyz (1 KB)
Au20_Py_Surface.xyz (1 KB)
Au20_Py_Vertex.xyz (1 KB)
Au4_Tetrahedral.xyz (180 B)
Planck constant

Values of h: 6.62606896(33)×10^-34 J·s = 4.13566733(10)×10^-15 eV·s = 6.62606896(33)×10^-27 erg·s
Values of ħ: 1.054571628(53)×10^-34 J·s = 6.58211899(16)×10^-16 eV·s

The Planck constant (denoted h), also called Planck's constant, is a physical constant used to describe the sizes of quanta in quantum mechanics. It is named after Max Planck, one of the founders of quantum theory. The Planck constant is the proportionality constant between the energy (E) of a photon and the frequency of its associated electromagnetic wave (ν). This relation between the energy and the frequency is called the Planck relation or the Planck–Einstein equation:

    E = hν

Using the simple relation ν = c/λ between frequency (ν), speed of light (c), and wavelength (λ), the Planck relation becomes

    E = hc/λ

A closely related constant is the reduced Planck constant, sometimes called the Dirac constant. It is equal to the Planck constant divided by (or reduced by) 2π, and is denoted ħ ("h-bar").

Planck hypothesized (correctly, as it later turned out) that some types of energy could not take on any indiscriminate value: instead, the energy must be some multiple of a very small quantity (later to be named a "quantum"). This is counterintuitive in the everyday world, where it is possible to "make things a little bit hotter" or "move things a little bit faster", because the quanta of energy are very, very small in comparison to everyday human experience. Nevertheless, it is impossible, as Planck found out, to explain some phenomena without accepting that energy is discrete: that is to say, like the integers 1, 2, 3, … instead of the line of all possible numbers.

The Planck constant has dimensions of energy multiplied by time, which are also the dimensions of action. In SI units, the Planck constant is expressed in joule seconds (J·s). The dimensions may also be written as momentum multiplied by distance (N·m·s), which are also the dimensions of angular momentum. The value of the Planck constant is:[1]

    h = 6.626 068 96(33)×10^-34 J·s = 4.135 667 33(10)×10^-15 eV·s

The value of the reduced Planck constant is:

    ħ = h/2π = 1.054 571 628(53)×10^-34 J·s = 6.582 118 99(16)×10^-16 eV·s

The two digits between the parentheses denote the standard uncertainty in the last two digits of the value. The figures cited here are the 2006 CODATA recommended values for the constants and their uncertainties. The 2006 CODATA results were made available in March 2007 and represent the best-known, internationally accepted values for these constants, based on all data available as of 31 December 2006. New CODATA figures are scheduled to be published approximately every four years.

The reduced Planck constant

As quantum mechanics was developed, it was found that very often when h appeared in equations, it was divided by 2π. The reduced Planck constant, ħ = h/2π, was introduced to simplify the notation. It is used when frequency is expressed in terms of radians per second ("angular frequency" ω, where ω = 2πν) instead of cycles per second; the energy of a photon with angular frequency ω is given by E = ħω.
Significance of the size of the Planck constant

The numerical value of the Planck constant depends entirely on the system of units used to measure it. When it is expressed in SI units, it is one of the smallest of all constants used in physics. In part, this reflects the fact that, on a scale where energies are measured in joules or kilojoules and times are measured in seconds or minutes, the effects of quantization are themselves very small. However, in part, it is also an artifact of the measuring system. To take one example, green light of a wavelength of 555 nanometres (approximately the wavelength to which human eyes are most sensitive) has a frequency of 540 THz (540×10^12 Hz). Each photon has an energy E = hν = 3.58×10^-19 J. That is still a very small energy in terms of everyday experience, but then everyday experience doesn't deal with individual photons any more than it deals with individual atoms or molecules. To get a more macroscopic view, the energy of one mole of photons can be calculated by multiplying by the Avogadro constant, NA ≈ 6.022×10^23 mol^-1: green light of wavelength 555 nm has an energy of 216 kJ/mol, equivalent to the strength of some types of chemical bond.

When the reduced Planck constant is treated as a conversion factor between phase, in radians, and action, in joule-seconds (as seen in the Schrödinger equation), it may be written with units J/(rad·s). The Planck constant is an atomic-scale constant and, even at the atomic scale, it has a small numerical value simply because frequencies tend to have large numerical values. The electronvolt is an atomic-scale unit of energy: each photon of green light has an energy of 2.23 eV. If time is measured in units which are much smaller than seconds, the numerical value of the Planck constant becomes much larger. Atomic units are one such scale of measurement, in which the units of energy and time are chosen (indeed defined) so that the value of the reduced Planck constant is exactly one.
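The arithmetic of the worked example above is easy to reproduce; this short check (my own illustration in Python, using the 2006 CODATA values quoted in this article) recovers the quoted figures for 555 nm green light:

```python
# Photon energy of 555 nm green light via the Planck relation E = h*nu.
h   = 6.62606896e-34    # J s   (Planck constant, CODATA 2006)
c   = 2.99792458e8      # m/s   (speed of light, exact)
eV  = 1.602176487e-19   # J     (one electronvolt, CODATA 2006)
N_A = 6.02214179e23     # 1/mol (Avogadro constant, CODATA 2006)

lam = 555e-9
nu  = c / lam
E   = h * nu
print(f"frequency:     {nu:.3e} Hz")              # ~5.40e14 Hz (540 THz)
print(f"photon energy: {E:.3e} J = {E/eV:.2f} eV")  # ~3.58e-19 J = 2.23 eV
print(f"per mole:      {E * N_A / 1000:.0f} kJ/mol")  # ~216 kJ/mol
```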
Origins of the Planck constant

Black-body radiation

Main article: Planck's law

[Figure: Wien's law. Intensity of light emitted from a black body at any given frequency; each color is a different temperature. Planck was the first to explain the shape of these curves.]

In the last years of the nineteenth century, Planck was investigating the problem of black-body radiation first posed by Kirchhoff some forty years earlier. It is well known that hot objects glow, and that hotter objects glow brighter than cooler ones. The reason is that the electromagnetic field obeys laws of motion just like a mass on a spring, and can come to thermal equilibrium with hot atoms. When a hot object is in equilibrium with light, the amount of light it absorbs is equal to the amount of light it emits. If the object is black, meaning it absorbs all the light that hits it, then it emits the maximum amount of thermal light too. The assumption that black-body radiation is thermal leads to an accurate prediction: the total amount of emitted energy goes up with the temperature according to a definite rule, the Stefan–Boltzmann law (1879–84). But it was also known that the colour of the light given off by a hot object changes with the temperature, so that "white hot" is hotter than "red hot".

Wilhelm Wien discovered the mathematical relationship between the peaks of the curves at different temperatures by using the principle of adiabatic invariance: at each different temperature, the curve is shifted over according to Wien's displacement law (1893). Wien also made a guess for the spectrum of the object, which was correct at high frequencies but not at low frequencies. It still wasn't clear why the spectrum of a hot object had the form that it has (see diagram).

Planck hypothesized that the equations of motion for light are a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum.[2] However, Planck soon realized that his solution was not unique: there were several different solutions, each of which gave a different value for the entropy of the oscillators.[2] To save his theory, Planck had to resort to using the then controversial theory of statistical mechanics,[2] which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics."[3] One of his new boundary conditions was to treat the energy of the oscillators as composed of a definite number of equal finite parts. With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words,[4] but one which would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now termed "Planck's relation":

    E = hν

Planck was able to calculate the value of h from experimental data on black-body radiation: his result, 6.55×10^-34 J·s, is within 1.2% of the currently accepted value.[2] He was also able to make the first determination of the Boltzmann constant kB from the same data and theory.[5]

Prior to Planck's work, it had been assumed that the energy of a body could take on any value whatsoever – that it was a continuous variable. This is equivalent to saying that the energy element ε (the difference between allowed values of the energy) is zero, and therefore that h is zero. This is the origin of the often-quoted summary that "the Planck constant is zero in classical physics" or that "classical physics is quantum mechanics at the limit that the Planck constant tends to zero". The Planck constant, of course, is never zero, but it is so small compared to most human experience that its existence had been ignored prior to Planck's work.

The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) to convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".[6] Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
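The shape of the curves in the figure, and the displacement of their peaks with temperature, can be reproduced directly from Planck's law. This is an illustrative Python sketch (the choice of solar-like temperature T = 5778 K is my assumption), locating the spectral peak numerically and comparing it with Wien's displacement law:

```python
# Locate the peak of the black-body spectrum and compare with Wien's law.
import numpy as np

h, c, kB = 6.62606896e-34, 2.99792458e8, 1.3806504e-23  # SI, CODATA 2006

def planck_lambda(lam, T):
    """Planck's law: spectral radiance per unit wavelength."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 5778.0                                  # roughly the solar surface
lam = np.linspace(50e-9, 3e-6, 200000)
lam_peak = lam[np.argmax(planck_lambda(lam, T))]
print(f"numerical peak: {lam_peak * 1e9:.0f} nm")
print(f"Wien's law:     {2.8977685e-3 / T * 1e9:.0f} nm")   # b/T, ~502 nm
```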
Photoelectric effect

Main article: Photoelectric effect

The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz,[7] who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902.[8] Contrary to the impression to be gained from many physics textbooks, Einstein didn't perform any notable experiments on the effect himself: however, his 1905 paper[9] discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921,[7] when his predictions had been confirmed by the experimental work of Robert Andrews Millikan.[10] To put it another way, in 1921 at least, Einstein's theories on the photoelectric effect were considered more important than his theory of relativity (a name coined, as it happens, by Max Planck).[7]

Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity. However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light.

The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light,[8] but depends linearly on the frequency;[10] and if the frequency is too low (corresponding to a kinetic energy for the photoelectrons of zero or less), no photoelectrons are emitted at all, however intense the light source.[10] Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.[8] Einstein explained these observations by postulating that light itself consists of quanta, each with an energy proportional to the frequency:

    E = hν

Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light (ν) and the kinetic energy of photoelectrons (E) was shown to be equal to the Planck constant (h).[10]

Atomic structure

Main article: Bohr model

In the Bohr model of the atom, the allowed energy levels of the hydrogen atom are given by

    En = −hc0R∞/n²

where R∞ is an experimentally determined constant (the Rydberg constant) and n is any integer (n = 1, 2, 3, …). Once the electron reached the lowest energy level (n = 1), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant R∞ in terms of other fundamental constants. Bohr also introduced the quantity h/2π, now known as the reduced Planck constant, as the quantum of angular momentum.
At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr-model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values

    J² = j(j+1)ħ²,   j = 0, 1/2, 1, 3/2, …
    Jz = mħ,         m = −j, −j+1, …, j

Uncertainty principle

Main article: Uncertainty principle

The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given a large number of particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum (in the same direction), Δp, obey

    Δx Δp ≥ ħ/2

where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule. In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones of the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂:

    [x̂i, p̂j] = iħδij

where δij is the Kronecker delta.

Unicode reserves codepoints U+210E (ℎ) for the Planck constant and U+210F (ℏ) for the reduced Planck constant.

Physical constants whose values depend on the Planck constant

The following list is based on the 2006 CODATA evaluation;[1] for the constants listed below, more than 90% of the uncertainty is due to the uncertainty in the value of the Planck constant, as indicated by the square of the correlation coefficient (r² > 0.9, r > 0.949). The Planck constant is (with one or two exceptions[12]) the fundamental physical constant which is known to the lowest level of precision, with a relative uncertainty ur of 5.0×10^-8.

Rest mass of the electron

The normal textbook derivation of the Rydberg constant R∞ defines it in terms of the electron mass me and a variety of other physical constants:

    R∞ = me e⁴/(8ε0²h³c0) = me c0 α²/(2h)

However, the Rydberg constant can be determined very accurately (ur = 6.6×10^-12) from the atomic spectrum of hydrogen, whereas there is no direct method to measure the mass of a stationary electron in SI units. Hence the equation for the calculation of me becomes

    me = 2R∞h/(c0α²)

where c0 is the speed of light and α is the fine-structure constant. The speed of light has an exactly defined value in SI units, and the fine-structure constant can be determined more accurately (ur = 6.8×10^-10) than the Planck constant: the uncertainty in the value of the electron rest mass is due entirely to the uncertainty in the value of the Planck constant (r² > 0.999).
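The relation just given is easy to verify numerically. This sketch (my own illustration, with 2006 CODATA input values) recovers the electron rest mass from R∞, h, α and c0:

```python
# Electron rest mass from m_e = 2 * R_inf * h / (c0 * alpha^2).
h     = 6.62606896e-34     # J s  (Planck constant, CODATA 2006)
c0    = 2.99792458e8       # m/s  (speed of light, exact)
alpha = 7.2973525376e-3    # fine-structure constant, CODATA 2006
R_inf = 10973731.568527    # 1/m  (Rydberg constant, CODATA 2006)

m_e = 2 * R_inf * h / (c0 * alpha**2)
print(f"m_e = {m_e:.6e} kg")   # ~9.109382e-31 kg, matching CODATA 2006
```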
Avogadro constant

Main article: Avogadro constant

The Avogadro constant NA is determined as the ratio of the mass of one mole of electrons to the mass of a single electron. The mass of one mole of electrons is the "relative atomic mass" of an electron Ar(e), which can be measured in a Penning trap (ur = 4.2×10^-10), multiplied by the molar mass constant Mu, which is defined as 0.001 kg/mol:

    NA = Mu Ar(e)/me = Mu Ar(e) c0 α²/(2R∞h)

The dependence of the Avogadro constant on the Planck constant (r² > 0.999) also holds for the physical constants which are related to the amount of substance, such as the atomic mass constant. The uncertainty in the value of the Planck constant limits the knowledge of the masses of atoms and subatomic particles when expressed in SI units. It is possible to measure the masses more precisely in atomic mass units, but not to convert them more precisely into kilograms.

Elementary charge

Main article: Elementary charge

Sommerfeld originally defined the fine-structure constant α as:

    α = e²/(4πε0ħc0) = e²c0μ0/(2h)

where e is the elementary charge, ε0 is the electric constant (also called the permittivity of free space), and μ0 is the magnetic constant (also called the permeability of free space). The latter two constants have fixed values in the International System of Units. However, α can also be determined experimentally, notably by measuring the electron spin g-factor ge, then comparing the result with the value predicted by quantum electrodynamics. At present, the most precise value for the elementary charge is obtained by rearranging the definition of α to obtain the following definition of e in terms of α and h:

    e = sqrt(2αh/(μ0c0))

Bohr magneton and nuclear magneton

Main article: Bohr magneton

The Bohr magneton and the nuclear magneton are units which are used to describe the magnetic properties of the electron and of atomic nuclei, respectively. The Bohr magneton is the magnetic moment which would be expected for an electron if it behaved as a spinning charge according to classical electrodynamics. It is defined in terms of the reduced Planck constant, the elementary charge and the electron mass, all of which depend on the Planck constant; the final dependence on h^(1/2) (r² > 0.995) can be found by expanding the variables:

    μB = eħ/(2me) = sqrt(c0α⁵h/(32π²μ0R∞²))

The nuclear magneton has a similar definition, but corrected for the fact that the proton is much more massive than the electron. The ratio of the electron relative atomic mass to the proton relative atomic mass can be determined experimentally to a high level of precision (ur = 4.3×10^-10):

    μN = μB Ar(e)/Ar(p)
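A quick numeric sketch of these definitions (my own illustration; the nuclear magneton is written here with the proton mass directly, which is equivalent to the mass-ratio form above, and all inputs are 2006 CODATA values):

```python
# Bohr magneton mu_B = e*hbar/(2*m_e) and nuclear magneton mu_N = e*hbar/(2*m_p).
hbar = 1.054571628e-34     # J s  (reduced Planck constant, CODATA 2006)
e    = 1.602176487e-19     # C    (elementary charge, CODATA 2006)
m_e  = 9.10938215e-31      # kg   (electron mass, CODATA 2006)
m_p  = 1.672621637e-27     # kg   (proton mass, CODATA 2006)

mu_B = e * hbar / (2 * m_e)
mu_N = e * hbar / (2 * m_p)
print(f"mu_B = {mu_B:.4e} J/T")   # ~9.274e-24 J/T
print(f"mu_N = {mu_N:.4e} J/T")   # ~5.051e-27 J/T
```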
Determination

Method | Value of h (10^-34 J·s) | Relative uncertainty | Refs
Watt balance | 6.62606889(23) | 3.4×10^-8 | [13][14][15]
X-ray crystal density | 6.6260745(19) | 2.9×10^-7 | [16]
Josephson constant | 6.6260678(27) | 4.1×10^-7 | [17][18]
Magnetic resonance | 6.6260724(57) | 8.6×10^-7 | [19][20]
Faraday constant | 6.6260657(88) | 1.3×10^-6 | [21]
CODATA 2006 recommended value | 6.62606896(33) | 5.0×10^-8 | [1]

The nine recent determinations of the Planck constant cover five separate methods. Where there is more than one recent determination for a given method, the value of h given here is a weighted mean of the results, as calculated by CODATA.

In principle, the Planck constant could be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods. The CODATA value quoted here is based on three watt-balance measurements of KJ²RK and one inter-laboratory determination of the molar volume of silicon,[1] but is mostly determined by a 2007 watt-balance measurement made at the U.S. National Institute of Standards and Technology (NIST).[15] Five other measurements by three different methods were initially considered, but not included in the final refinement as they were too imprecise to affect the result.

There are both practical and theoretical difficulties in determining h. The practical difficulties can be illustrated by the fact that the two most accurate methods, the watt balance and the X-ray crystal density method, do not appear to agree with one another. The most likely reason is that the measurement uncertainty for one (or both) of the methods has been estimated too low – it is (or they are) not as precise as is currently believed – but for the time being there is no indication which method is at fault. The theoretical difficulties arise from the fact that all of the methods except the X-ray crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. If these theories are slightly inaccurate – there is no evidence at present to suggest they are – the methods would not give accurate values for the Planck constant. More importantly, the values of the Planck constant obtained in this way cannot be used as tests of the theories without falling into a circular argument. Fortunately, there are other statistical ways of testing the theories, and the theories have yet to be refuted.[1]

Josephson constant

Main article: Josephson effect

The Josephson constant KJ relates the frequency ν of the microwave radiation at a Josephson junction to the potential difference U it generates:

    KJ = ν/U = 2e/h

Combined with the definition of the fine-structure constant given above, this yields the Planck constant:

    h = 8α/(μ0c0KJ²)

Watt balance

Main article: Watt balance

A watt balance measures the product KJ²RK, where RK is the von Klitzing constant of the quantum Hall effect; since KJ²RK = 4/h, the Planck constant follows as

    h = 4/(KJ²RK)

Magnetic resonance

Main article: Gyromagnetic ratio

The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number S (S = 1/2 for protons) and the reduced Planck constant:

    γ′p = μ′p/(Sħ) = 2μ′p/ħ

The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge.
Hence

\mu^{\prime}_{\rm p} = \frac{\mu^{\prime}_{\rm p}}{\mu_{\rm e}} \frac{g_{\rm e} \mu_{\rm B}}{2}.

When the gyromagnetic ratio is measured in "high-field" experiments using conventional electrical units, the result Γ′p−90(hi) is related to the SI value by

\gamma^{\prime}_{\rm p} = \frac{K_{\rm J-90} R_{\rm K-90}}{K_{\rm J} R_{\rm K}} \Gamma^{\prime}_{\rm p-90}({\rm hi}) = \frac{K_{\rm J-90} R_{\rm K-90}\, e}{2} \Gamma^{\prime}_{\rm p-90}({\rm hi}),

where the second equality uses KJRK = (2e/h)(h/e²) = 2/e. Solving for the Planck constant gives

h = \frac{c_0 \alpha^2 g_{\rm e}}{2 K_{\rm J-90} R_{\rm K-90} R_{\infty} \Gamma^{\prime}_{\rm p-90}({\rm hi})} \frac{\mu_{\rm p}^{\prime}}{\mu_{\rm e}}.

Faraday constant

A measurement of the Faraday constant in conventional electrical units, F90, similarly determines the Planck constant:

h = \frac{c_0 M_{\rm u} A_{\rm r}({\rm e})\alpha^2}{R_{\infty}} \frac{1}{K_{\rm J-90} R_{\rm K-90} F_{90}}.

X-ray crystal density

The X-ray crystal density method combines the measured lattice spacing d220 and molar volume Vm(Si) of a silicon crystal:

h = \frac{M_{\rm u} A_{\rm r}({\rm e}) c_0 \alpha^2}{R_{\infty}} \frac{\sqrt{2}d^3_{220}}{V_{\rm m}({\rm Si})}.

Fixing the value of the Planck constant

As mentioned above, the numerical value of the Planck constant depends on the system of units used to describe it. Its value in SI units is known to 50 parts per billion but its value in atomic units is known exactly, because of the way the scale of atomic units is defined. The same is true of conventional electrical units, where the Planck constant (noted h90 to distinguish it from its value in SI units) is given by

h_{90} = \frac{4}{K_{\rm J-90}^2 R_{\rm K-90}}

with KJ−90 and RK−90 being exactly defined constants. Atomic units and conventional electrical units are very useful in their respective fields, because the uncertainty in the final result doesn't depend on an uncertain conversion factor, only on the uncertainty of the measurement itself.

There are a number of proposals to redefine certain of the SI base units in terms of fundamental physical constants.[22] This has already been done for the metre, which is defined in terms of a fixed value of the speed of light. The most urgent unit on the list for redefinition is the kilogram, whose value has been fixed for all science (since 1889) by the mass of a small cylinder of platinum–iridium alloy kept in a vault just outside Paris. While nobody knows if the mass of the International Prototype Kilogram has "changed" since 1889 – and herein lies one of the problems – it is known that small cylinders of Pt–Ir alloy in general (there are many such cylinders in national laboratories around the world) change their mass by several tens of micrograms over such a timescale, however carefully they are stored, and even more so when they have to be taken out and used as mass standards. A change of several tens of micrograms in one kilogram is equivalent to the current uncertainty in the value of the Planck constant in SI units.

The legal process to change the definition of the kilogram is already underway,[22] but no final decision will be made before the next meeting of the General Conference on Weights and Measures in 2011.[23] The Planck constant is a leading contender to form the basis of the new definition, although not the only one.[23] Possible new definitions include "the mass of a body at rest whose equivalent energy equals the energy of photons whose frequencies sum to 135,639,274×10^42 Hz",[24] or simply "the kilogram is defined so that the Planck constant equals 6.62606896×10^−34 J·s". Watt balances already measure mass in terms of the Planck constant: at present, standard mass is taken as "fixed" and the measurement is performed to determine the Planck constant but, were the Planck constant to be fixed in SI units, the same experiment would be a measurement of the mass. The relative uncertainty in the measurement would remain the same.
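Both the conventional value h90 and the proposed photon-based kilogram definition can be checked with a few lines of arithmetic. A minimal sketch follows; the conventional constants KJ−90 = 483597.9 GHz/V and RK−90 = 25812.807 Ω are the exactly defined 1990 values, and everything else is quoted above.

```python
# Cross-check of the conventional Planck constant and the proposed
# photon-based kilogram definition quoted in this article.
K_J90 = 483597.9e9    # Josephson constant, Hz/V (exact by convention)
R_K90 = 25812.807     # von Klitzing constant, ohm (exact by convention)

# h_90 = 4 / (K_J-90^2 * R_K-90), the Planck constant in conventional units
h_90 = 4 / (K_J90**2 * R_K90)
print(f"h_90 = {h_90:.9e} J·s")   # ~6.62606885e-34, close to the SI value

# Photons whose frequencies sum to 135,639,274e42 Hz should carry the
# energy equivalent of one kilogram: m = E/c^2 = h*sum(nu)/c^2.
h, c0 = 6.62606896e-34, 299792458.0
mass = h * 135639274e42 / c0**2
print(f"mass = {mass:.9f} kg")    # ~1.000000005 kg; the small residual is
                                  # rounding of the 9-digit frequency sum
```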
Mass standards could also be constructed from silicon crystals or by other "atom-counting" methods. Such methods require a knowledge of the Avogadro constant, which fixes the proportionality between atomic mass and macroscopic mass but, with a defined value of the Planck constant, NA would be known to the same level of uncertainty as (if not better than) current methods of comparing macroscopic mass.

References

1. Mohr, P. J.; Taylor, B. N.; Newell, D. B. (2008), "CODATA Recommended Values of the Fundamental Physical Constants: 2006", Rev. Mod. Phys. 80: 633–730.
2. Planck, Max (1901), "Ueber das Gesetz der Energieverteilung im Normalspectrum" [English translation: "On the Law of Distribution of Energy in the Normal Spectrum"], Ann. Phys. 309 (3): 553–63, doi:10.1002/andp.19013090310.
4. Kragh, Helge (1999), Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, p. 62, ISBN 0691095523.
6. Previous Solvay Conferences on Physics, International Solvay Institutes, retrieved on 12 December 2008.
7. See, e.g., Arrhenius, Svante (10 December 1922), Presentation speech of the 1921 Nobel Prize for Physics.
8. Lenard, P. (1902), "Ueber die lichtelektrische Wirkung", Ann. Phys. 313 (5): 149–98, doi:10.1002/andp.19023130510.
9. Einstein, Albert (1905), "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt", Ann. Phys. 17: 132–48, doi:10.1002/andp.19053220607.
10. Millikan, R. A. (1916), "A Direct Photoelectric Determination of Planck's h", Phys. Rev. 7: 355–88, doi:10.1103/PhysRev.7.355.
11. Bohr, Niels (1913), "On the Constitution of Atoms and Molecules", Phil. Mag., Ser. 6 26: 1–25.
12. The main exceptions are the Newtonian constant of gravitation G and the gas constant R. The uncertainty in the value of the gas constant also affects those physical constants which are related to it, such as the Boltzmann constant and the Loschmidt constant.
13. Kibble, B. P.; Robinson, I. A.; Belliss, J. H. (1990), "A Realization of the SI Watt by the NPL Moving-coil Balance", Metrologia 27 (4): 173–92, doi:10.1088/0026-1394/27/4/002.
14. Steiner, R.; Newell, D.; Williams, E. (2005), "Details of the 1998 Watt Balance Experiment Determining the Planck Constant", J. Res. Natl. Inst. Stand. Technol. 110 (1): 1–26.
15. Steiner, R. L.; Williams, E. R.; Liu, R.; Newell, D. B. (2007), "Uncertainty Improvements of the NIST Electronic Kilogram", IEEE Trans. Instrum. Meas. 56 (2): 592–96, doi:10.1109/TIM.2007.890590.
16. Fujii, K.; Waseda, A.; Kuramoto, N.; Mizushima, S.; Becker, P.; Bettin, H.; Nicolaus, A.; Kuetgens, U.; Valkiers, S.; Taylor, P.; De Bievre, Paul; Mana, G.; Massa, E.; Matyi, R.; Kessler, E. G., Jr.; Hanke, M. (2005), "Present state of the Avogadro constant determination from silicon crystals with natural isotopic compositions", IEEE Trans. Instrum. Meas. 54 (2): 854–59, doi:10.1109/TIM.2004.843101.
17. Sienknecht, Volkmar; Funck, Torsten (1985), "Determination of the SI Volt at the PTB", IEEE Trans. Instrum. Meas. 34 (2): 195–98, doi:10.1109/TIM.1985.4315300. Sienknecht, V.; Funck, T. (1986), "Realization of the SI Unit Volt by Means of a Voltage Balance", Metrologia 22 (3): 209–12, doi:10.1088/0026-1394/22/3/018. Funck, T. (1991), "Determination of the volt with the improved PTB voltage balance", IEEE Trans. Instrum. Meas. 40 (2): 158–61, doi:10.1109/TIM.1990.1032905.
18. Clothier, W. K.; Sloggett, G. J.; Bairnsfather, H.; Currey, M. F.; Benjamin, D. J.
(1989), "A Determination of the Volt", Metrologia 26 (1): 9–46, doi:10.1088/0026-1394/26/1/003  19. Kibble, B. P.; Hunt, G. J. (1979), "A Measurement of the Gyromagnetic Ratio of the Proton in a Strong Magnetic Field", Metrologia 15 (1): 5–30, doi:10.1088/0026-1394/15/1/002  20. Liu, R.; Liu, H.; Jin, T.; Lu, Z.; Du, X.; Xue, S.; Kong, J.; Yu, B.; Zhou, X.; Liu, T.; Zhang, W. (1995), "A Recent Determination for the SI Values of γ′p and 2e/h at NIM", Acta Metrol. Sin. 16 (3): 161–68  21. Bower, V. E.; Davis, R. S. (1980), "The Electrochemical Equivalent of Pure Silver: A Value of the Faraday Constant", J. Res. Natl. Bur. Stand. 85 (3): 175–91  24. Taylor, B. N.; Mohr, P. J. (1999), "On the redefinition of the kilogram", Metrologia 36 (1): 63–64, doi:10.1088/0026-1394/36/1/11,  • Barrow, John D. (2002). The Constants of Nature; From Alpha to Omega - The Numbers that Encode the Deepest Secrets of the Universe, Pantheon Books. ISBN 0-375-42221-8.  See alsoEdit External linksEdit ast:Constante de Planck bn:প্লাংকের ধ্রুবক bs:Planckova konstanta bg:Константа на Планк ca:Constant de Planck cs:Planckova konstanta da:Plancks konstantet:Plancki konstant el:Σταθερά του Πλανκeo:Konstanto de Planck fa:ثابت پلانکgl:Constante de Planckhr:Planckova konstanta id:Konstanta Planck it:Costante di Planck he:קבוע פלאנק ka:პლანკის მუდმივა lv:Planka konstante lt:Planko konstanta hu:Planck-állandó ml:പ്ലാങ്ക് സ്ഥിരാങ്കം mr:प्लांकचा स्थिरांक ms:Pemalar Planck mn:Планкийн тогтмол nl:Constante van Planckno:Plancks konstant pl:Stała Plancka pt:Constante de Planck ro:Constanta Plancksk:Planckova konštanta sl:Planckova konstanta sr:Планкова константа fi:Planckin vakio sv:Plancks konstant th:ค่าคงตัวของพลังค์ tr:Planck sabiti uk:Стала Планка vi:Hằng số Planck Ad blocker interference detected!
Einstein's Theory of Relativity versus Classical Mechanics

by Paul Marmet

        As seen in chapter one, the size of the hydrogen atom depends directly on the Bohr radius, which itself varies with the mass of the electron. Is that the case for all atoms? And what about molecules and crystals? Before we answer these questions rigorously, let us try to answer them intuitively.

        Consider for example the hydrogen molecule, H2. It is made of two hydrogen atoms sharing their electrons. Since the size of the two hydrogen atoms taken separately varies with the Bohr radius, it would be reasonable to expect the size of the hydrogen molecule to do the same. If the radius of all atoms depended on the Bohr radius, we could apply the same reasoning to all molecules and crystals. Intuitively, we would arrive at the conclusion that the dimensions of matter depend on the Bohr radius. If this were the case, then according to chapter one, the size of any object would be different depending on its location in a gravitational potential. In this appendix, we will see how the dimensions of matter are predicted to vary theoretically. We will first look at all atoms. We will then study molecules, followed by crystals and metals.

        Before we start our study of the dimensions of matter, a comment needs to be made about the Bohr radius and its use. Until now, ao has always been considered a constant because ħ, εo, e and me have been assumed to be constants. With this in mind, most experimentalists present their results in units of bohrs using 1 bohr = ao = 5.29177×10^−11 m [1] (page 349). For an experimentalist, by definition, that numerical value is equal to one bohr unit whether the electron orbit in hydrogen is constant or not.

        For theoretical results, this is different. Theoreticians could decide to give the results of their calculations as a function of ao (i.e. in units of ao) to be able to compare them to the experimentalists' results. For the theoreticians, ao is defined as a combination of parameters. Therefore ao is constant only if all the parameters are constant. One then has to be careful in reading theoretical results and look at the method used to see if there really is a dependence on ao or if it is just a unit. Let us make sure that the physics is not lost in those calculations.

        Most authors do their calculations in atomic units. In those units, me = e = ħ = 1. This means that the unit of mass is the electron mass. When the Schrödinger equation (or the Dirac equation) is expressed in those units, we end up with an equation that seems independent of me. The authors then go on with numerical calculations to solve the equations. But if the mass of the electron is not a constant, then it is not necessarily equal to one in atomic units (with respect to the initial frame of reference). This changes the Schrödinger (or Dirac) equation, which changes its solution, which changes the value of the parameter we are looking for (e.g. the bond length or the radius of an atom in the initial frame of reference). All the results in this appendix being theoretical, we made sure that their dependence on ao was real.

ATOMS

        It is easy to derive the radius of all hydrogenlike atoms by supposing that they are just like a hydrogen atom with an electron orbiting a nucleus of charge Z. According to Levine [1] (page 525): "The average radius of a hydrogenlike atom is proportional to the Bohr radius ao, and ao is inversely proportional to the electron mass."
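The scaling claim is easy to make concrete numerically. The following is a minimal sketch, not from Marmet's text; the SI constant values are assumptions taken from standard tables.

```python
# The Bohr radius a0 = 4*pi*eps0*hbar^2/(m*e^2) is inversely proportional
# to the mass of the orbiting particle: a muon (~207x heavier than the
# electron) gives an "atom" ~207x smaller.
import math

eps0 = 8.854187817e-12   # electric constant, F/m
hbar = 1.054571628e-34   # reduced Planck constant, J·s
e    = 1.602176487e-19   # elementary charge, C
m_e  = 9.10938215e-31    # electron mass, kg

def bohr_radius(m):
    """Bohr radius for a particle of mass m bound to a heavy nucleus."""
    return 4 * math.pi * eps0 * hbar**2 / (m * e**2)

print(f"a0 (electron): {bohr_radius(m_e):.6e} m")        # ~5.29177e-11 m
print(f"a0 (muon):     {bohr_radius(207 * m_e):.6e} m")  # ~207x smaller
```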
        The radius of all other atoms has been well investigated [2, 3] and the results given are proportional to the Bohr radius. The method used in [2] was the Hartree-Fock method [4] and in [3], the Dirac-Fock method, which is just the Hartree-Fock method with relativistic corrections due to the mass of the electron with respect to the nucleus frame of reference. The Dirac-Fock method gives no relativistic correction of the electron mass with respect to an external gravitational potential.

MOLECULES

        The hydrogen molecule is composed of two hydrogen atoms, each made of one electron and one proton. Its positive ion, H2+, made of two protons and one electron, is a system that can easily be solved [1, 5, 6]. Upon finding its wave function and the potential of the nucleus (in the Born-Oppenheimer approximation), it is possible to calculate the distance between the two protons. This gives 2.00ao. (The variational method is used to solve this problem [5]. It uses wave functions of the hydrogen atom which depend on the Bohr radius.) The internuclear distance of a molecule is in direct relationship with the size of that molecule. We see then that the size of the hydrogen molecule ion is proportional to ao.

        This means that when we change the mass of the particle moving about the nucleus, the size of the hydrogen molecule ion also changes. This has already been realized by Levine [1] (page 355): "The negative muon (symbol μ−) is a short-lived (half-life 2×10^−6 s) elementary particle whose charge is the same as that of an electron but whose mass mμ is 207 times me. When a beam of negative muons (produced when ions accelerated to high speed collide with ordinary matter) enters H2 gas, a series of processes leads to the formation of muomolecular ions that consist of two protons and one muon. This species, symbolized by (pμp)+, is an H2+ ion in which the electron has been replaced by a muon. Its Re [the distance between the two protons] is 2.00ħ²/(mμe²) = 2.00ħ²/(207mee²) = (2.00/207) bohr = 0.0051 Å."

        It is about one hundred times smaller than the Bohr radius. If one day we are able to produce a molecule with a proton and an anti-proton, the internuclear distance of that molecule will be amazingly small. It is obvious from this result that the size of the hydrogen molecule ion depends on the electron mass.

        A lot of calculations have been done to find the size of molecules (i.e. the length of the bonds in the molecule) [7, 8, 9]. Some of the molecules studied include F2, Cl2, LiCl, Ni2, HF and HCl. For heavier molecules, the calculations were done using internal relativistic corrections [10, 11, 12] because of the higher mass of the electron. Relativistic corrections due to an external gravitational potential were never taken into account. Some of the molecules studied in this way are N2, N2+, Au2, AuH, AuCl, Cl2, F2, Xe2, Xe2+, TlH and Bi2. The table published by Pyykkö [10] is extensive and covers more than one hundred molecules. All the results cited in the references are in units of ao or in units that are related to ao and are proportional to ao.
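Levine's muonic example above can be reproduced with the same scaling rule. A minimal sketch, assuming only the standard bohr-to-angstrom conversion:

```python
# Internuclear distance of H2+ and of its muonic analogue (p mu p)+:
# the bond length of 2.00 bohr shrinks by the mass ratio of the
# orbiting particle to the electron.
a0_angstrom = 0.529177   # 1 bohr in angstroms
m_ratio = 207            # muon mass / electron mass

Re_H2plus = 2.00 * a0_angstrom            # ordinary H2+, ~1.06 angstrom
Re_muonic = 2.00 * a0_angstrom / m_ratio  # (p mu p)+, ~0.0051 angstrom
print(f"Re(H2+)       = {Re_H2plus:.4f} angstrom")
print(f"Re((p mu p)+) = {Re_muonic:.4f} angstrom")
```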
CRYSTALS AND METALS

        According to Zhdanov [13] (page 201), the equilibrium distance between particles in a crystal is proportional to the equilibrium spacing in a diatomic molecule having the same parameters for the potential energy. (The constant of proportionality depends only on the structure of the crystal.) This means that the size of crystals is proportional to the Bohr radius, since we have seen in the previous section that the size of all molecules (and thus the distance between the nuclei in diatomic molecules) is proportional to the Bohr radius. Furthermore, the same author [13] (pages 208-209) develops an ionic model for metals. According to this model, the atomic radius in a metallic crystal (which is defined as half the shortest interatomic distance) can be expressed in terms of Planck's constant h, Madelung's constant A, the electron mass m, the charge e of the electron and the valency z of the atom. We see then that the size of metals is proportional to the Bohr radius as defined in chapter one.

        It is obvious that the size of all matter is strongly dependent on the Bohr radius and therefore the mass of the electron. Even if relativistic corrections are applied internally using Dirac's calculations, this correction does not take into account the relativistic effect caused by an external gravitational potential. This means that, since every object we know is made of either atoms, molecules, crystals or metals, the results of chapter one concerning the dilation and contraction of the Bohr radius in the hydrogen atom apply to all matter including humans. Finally, we conclude that this dilation or contraction is real.

REFERENCES

[1] Levine, Ira N., Quantum Chemistry, Prentice Hall, Englewood Cliffs, New Jersey, 1991, 629 pages.
[2] Froese Fischer, Charlotte, "Average-Energy-of-Configuration Hartree-Fock Results for the Atoms Helium to Radon", Atomic Data and Nuclear Data Tables, volume 12, page 87, 1973.
[3] Desclaux, J. P., "Relativistic Dirac-Fock Expectation Values for Atoms with Z=1 to Z=120", Atomic Data and Nuclear Data Tables, volume 12, page 311, 1973.
[4] Froese, Charlotte, "Numerical Solution of the Hartree-Fock Equations", Canadian Journal of Physics, volume 41, page 1895, 1963.
[5] Cohen-Tannoudji, Claude, Bernard Diu and Franck Laloë, Mécanique quantique, Hermann, Paris, 1986, 1518 pages.
[6] McWeeny, Roy, Coulson's Valence, Oxford University Press, Oxford, 1979, 435 pages.
[7] Christiansen, Phillip A., Yoon S. Lee and Kenneth S. Pitzer, "Improved Ab Initio Effective Core Potentials for Molecular Calculations", Journal of Chemical Physics, volume 71, number 11, page 4445, 1979.
[8] Noell, J. Oakey, Marshall D. Newton, P. Jeffrey Hay, Richard L. Martin and Frank W. Bobrowicz, "An Ab Initio Study of the Bonding in Diatomic Nickel", Journal of Chemical Physics, volume 73, number 5, page 2360, 1980.
[9] Hay, P. Jeffrey, Willard R. Wadt and Luis R. Kahn, "Ab Initio Effective Core Potentials for Molecular Calculations. II. All-Electron Comparisons and Modifications of the Procedure", Journal of Chemical Physics, volume 68, number 7, page 3059, 1978.
[10] Pyykkö, Pekka, "Relativistic Effects in Structural Chemistry", Chemical Reviews, volume 88, page 563, 1988.
[11] Ziegler, Tom, "Calculation of Bonding Energies by the Hartree-Fock Slater Transition State Method, Including Relativistic Effects", in Relativistic Effects in Atoms, Molecules and Solids, G. L. Malli, editor, Plenum Press, New York, page 421, 1981.
[12] Ermler, Walter C., Richard B. Ross and Phillip A. Christiansen, "Spin-Orbit Coupling and Other Relativistic Effects in Atoms and Molecules", Advances in Quantum Chemistry, volume 19, page 139, 1988.
[13] Zhdanov, G. S., Crystal Physics, Academic Press, London, 1965, 500 pages.
A Quantum Of Theory
Exploring new paths in quantum physics

Quantum Theory – A view from the inside, Part I

The history of science has taught us many things, among them that asking new questions often leads to new insight. Often, these new questions had not been asked before because they seemed too philosophical, unanswerable or even mostly unscientific. Here, I would like to confront you with a question that, at first glance, might seem to fit into these categories. Nevertheless, I will show that discussing this question, specifically applied to quantum theory, leads to deep insight.

In the computer age we have grown very familiar with the concept of simulation. We can simulate practically anything we have understood physically, and we do that for very complicated and large systems like climate models of our planet. Of course, we are using approximations to reality so that our computers can handle the complexity. This, however, is a limitation that we can easily imagine not to exist. The concept of simulation remains the same, even if performed on a hypothetical machine without any practical restrictions. We could think of any consistent set of mathematical rules and simulate it on a computer. In some sense, we would be creating our own universes with the rules that we make up. Some of these simulations might be just complex enough to allow for an internal observer to evolve, an individual that would have an inside view of our simulation. And if we had the means of communicating with him, we could ask him what he is observing.

We will possibly never reach the scientific sophistication that would allow this sort of real experiment, so what is the point of proposing it? The universe of our hypothetical observer is purely mathematical, a list of rules and an initial state, not more. The reality perceived by him must emerge in some way from the mathematical rules. Surely some aspects of his observation will be highly subjective, like the perception of color, taste or anything that just developed by chance without any profound direct connection to material reality as perceived by him. But other aspects of his observation will not be so subjective, but shared by all other hypothetical observers in the same simulated universe. So, the question I would like to ask is: "How does reality as shared by all possible observers emerge from the mathematical rules that describe the universe these observers inhabit?"

Maybe I have already convinced you that the question is not so esoteric after all, but quite certainly not that it is even remotely possible to answer it. How would one distinguish objective features from subjective ones? And would we not have to know about all the emergent structures of the simulated universe first, like atoms and molecules or even brains? I do share the above concerns, but I can also offer a way to circumvent them entirely. Let us assume that our virtual observer is not just any observer, but in fact a physicist who tries to formulate his own mathematical theory of his perceived reality. If he is a good scientist, his theories will only include those aspects of his observation shared by all other observers, and if he is successful his final theory of all things he can observe will be a perfect mathematical description of the objective emergent reality in the virtual universe. This is an extremely helpful assumption, because it allows us to actually talk about mathematical structure instead of a fuzzy and partly psychological concept.
With this we can reformulate the fundamental question as: "What mathematical model does a virtual observer use to describe his perceived reality?" This formulation sounds much more reasonable, and there is some hope that we may find a way to mathematically deduce the emergence of this internal view from the mathematical structure of the universe we simulate.

Does quantum theory have to be interpreted?

Witnessing the ongoing discussion about how quantum theory should be interpreted, and the strong opinions and sometimes even dogmatic arguments, I decided to write a series of blog posts that will try to discuss the issue of interpretation as objectively as I possibly can. I will not specifically try to compare the different mainstream interpretations with each other, but rather explore the requirement of an interpretation at all and the possibility of answering the same fundamental questions using strong scientific rigor instead.

A scientific theory is usually defined as consisting of a mathematical apparatus that allows one to perform calculations of a predictive nature, and a layer of interpretational glue that connects the resulting numbers with measurements that we can actually perform. The separation of measurement and prediction works very well for all classical theories, where observer and experiment can be regarded as entirely separate entities. Quantum theory, however, makes a clean cut between observer and the observed experiment impossible, because after the experiment the two subsystems are interwoven in a very fundamental and complicated way, even if spatially separated. The nonlocal entanglement of the quantum state space does not allow us to use the approximation of objectivity anymore.

Understanding this problem, there are two main approaches to dealing with it. The older one insists on the classical separation and is willing to live with the necessary consequences. The Copenhagen interpretation introduces the Heisenberg cut between quantum and classical domains to recover the notion of an objective observer that can make classical statements about the measurement outcome. And with that cut we also get the interpretational glue back that relates mathematics with measurement results. This happens in the form of the well-known measurement postulate, which includes the Born rule describing the statistical outcome of a measurement.

The approach has several drawbacks, however. Firstly, the location of the Heisenberg cut is more or less arbitrary as long as the observer and the system are well distinguishable, but becomes impossible as soon as this is not the case anymore. Often this does not pose a problem, but it is still a shortcoming, as it keeps us from understanding certain realizable situations. Secondly, the Copenhagen and related interpretations leave us entirely in the dark as to what precisely happens during a measurement. Still, the Copenhagen interpretation is fundamentally scientific, as it focuses on measurements and predictions only, and does not take into account what is not observable.

The other main approach to the problem of observation takes the alternative route. Instead of introducing a cut, everything is taken into account. Experiment and measurement device become one system, which is itself a part of the largest system, the universe. It is then only logical to take the time evolution of undisturbed quantum systems as formulated in the Copenhagen interpretation, the Schrödinger equation, as the evolution law for the universe.
Within this approach, all predictions and results must emerge only from the properties of the evolving system, as there is no external observer that can measure anything, and no classical measurement device either. The time evolution would also be fully deterministic, and the randomness of the measurement outcome could also be regarded as an emergent property.

So when Hugh Everett III came up with his many worlds or relative state interpretation, he really did not want to create an interpretation in the sense of the Copenhagen interpretation, namely as a layer of translation between math and measurement. Rather, he wanted to create a scientific theory of emergence, where all results are derived as inherent properties of the system itself. And he was willing to accept all the consequences it brought, because the approach was rigorously scientific and only the logical consequence of avoiding the artificial Heisenberg cut.

Unfortunately, not everything worked out as well as this approach had been promising. Of course, the most famous consequence is the existence of arbitrarily many worlds containing observers that have seen every possible experimental outcome. While this is philosophically hard for some to accept, it is an acceptable consequence only if the other results work out correctly. And these results ought to be the precise statements of the measurement postulate of the Copenhagen interpretation, because those are experimentally verified. However, while the many worlds theory gives a reasonably good explanation for the state collapse, it fails to give the right statistics. There has been some criticism regarding the collapse too, but more importantly it is generally agreed that the Born rule does not come out of the relative state theory unless extra postulates are added. Decoherence theory, which incorporates the environment to move coherence away from the experiment, or more recent attempts to use psychologically founded counting mechanisms for calculating the relative outcome probabilities, have not been convincing enough for the issues of the theory to be considered solved. And adding postulates of course spoils the initial idea of having an actual theory of emergence.

So where does this leave us? We have a practical approach that works most of the time, but hides some possibly important features and mechanisms from us. And we have a holistic approach that stands on a beautiful theoretical idea, but fails to deliver the right results and comes with some curious side effects. The question that I will explore in the following articles is what Everett's approach has to do with the relationship between simulation and reality, and whether something that he and others have potentially overlooked could lead to a new theory with better results. And I promise, I'll have a few surprises for you!
Monday, May 30, 2016

The medieval politics of infinitesimals

MIT historian and ex-Russian Slava Gerovitch reviews the book.

Thursday, May 26, 2016

Cranks denying local causality

I criticize mainstream professors a lot, but I am the one defending mainstream textbook science.

Philosopher Massimo Pigliucci claims to be an expert on pseudoscience, and is writing a book bragging about how much progress philosophers have made. Much of it has appeared in a series of blog posts. One example of progress is that he says that philosophers have discovered that causality plays no role in fundamental physics:

    Moreover, some critics (e.g., Chakravartty 2003) argue that ontic structural realism cannot account for causality, which notoriously plays little or no role in fundamental physics, and yet is crucial in every other science. For supporters like Ladyman causality is a concept pragmatically deployed by the "special" sciences (i.e., everything but fundamental physics), yet not ontologically fundamental.

I do not know how he could be so clueless. Causality plays a crucial role in every physics book I have.

A Quanta mag article explains:

    New Support for Alternative Quantum View

    Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe. Instead of the clear-cut positions and movements of Newtonian physics, we have a cloud of probabilities described by a mathematical structure known as a wave function. The wave function, meanwhile, evolves over time, its evolution governed by precise rules codified in something called the Schrödinger equation. The mathematics are clear enough; the actual whereabouts of particles, less so. ...

    For some theorists, the Bohmian interpretation holds an irresistible appeal. "All you have to do to make sense of quantum mechanics is to say to yourself: When we talk about particles, we really mean particles. Then all the problems go away," said Goldstein. "Things have positions. They are somewhere. If you take that idea seriously, you're led almost immediately to Bohm. It's a far simpler version of quantum mechanics than what you find in the textbooks." Howard Wiseman, a physicist at Griffith University in Brisbane, Australia, said that the Bohmian view "gives you a pretty straightforward account of how the world is…. You don't have to tie yourself into any sort of philosophical knots to say how things really are."

This is foolish. The double-slit experiment shows that electrons and photons are not particles. Not classical particles, anyway. Bohm lets you pretend that they are really particles, but you have to believe that they are attached to ghostly pilot waves that interact nonlocally with the particles and the rest of the universe.

Bohm also lets you believe in determinism, although it is a very odd sort of determinism because there is no local causality. Just what is appealing about that? Yes, you can say that the electrons have positions, but those positions are inextricably tied up with unobservable pilot waves, so what is satisfying about that?

Contrary to what the philosophers say, local causality is essential to physics and to our understanding of the world. If some experiment proves it wrong, then I would have to revise my opinion. But that has never been done, and there is no hope of doing it.
Belief in action-at-a-distance is just a mystical pipe dream. So Bohm is much more contrary to intuition than ordinary Copenhagen quantum mechanics. And Bohm is only known to work in simple cases, as far as I know, and no one has ever used it to do anything new.

Wednesday, May 25, 2016

Left denies progress towards genetic truths

NPR radio interviews a Pulitzer Prize-winning physician plugging his new book on genetics:

    As researchers work to understand the human genome, many questions remain, including, perhaps, the most fundamental: Just how much of the human experience is determined before we are already born, by our genes, and how much is dependent upon external environmental factors?

    Oncologist Siddhartha Mukherjee tells Fresh Air's Terry Gross the answer to that question is complicated. "Biology is not destiny," Mukherjee explains. "But some aspects of biology — and in fact some aspects of destiny — are commanded very strongly by genes."

    The degree to which biology governs our lives is the subject of Mukherjee's new book, The Gene. In it, he recounts the history of genetics and examines the roles genes play in such factors as identity, temperament, sexual orientation and disease risk.

Based on this, he has surely had his own genome sequenced, right? Nope.

    GROSS: ... I want to ask about your own genes. Have you decided whether to or not to get genetically tested yourself? And I should mention here that there is a history of schizophrenia in your family. You had two uncles and a cousin with schizophrenia. You know, what scientists are learning about schizophrenia is that there is a genetic component to it or genetic predisposition. So do you want to get tested for that or other illnesses?

    MUKHERJEE: I've chosen not to be tested. And I will probably choose not to be tested for a long time, until I start getting information back from genetic testing that's very deterministic. Again, remember that idea of penetrance that we talked about. Some genetic variations are very strongly predictive of certain forms of illness or certain forms of anatomical traits and so forth. I think that right now, for diseases like schizophrenia, we're nowhere close to that place. The most that we know is that there are multiple genes in familial schizophrenia, the kind that our family has. Essentially, we don't know how to map, as it were. There's no one-to-one correspondence between a genome and the chances of developing schizophrenia. And until we can create that map - and whether we can create that map ever is a question - but until I - we can create that map, I will certainly not be tested because it - that idea - I mean, that's, again, the center of the book. That confines you. It becomes predictive. You become - it's a chilling word that I use in the book - you become a previvor (ph). A previvor is someone who's survived an illness that they haven't even had yet. You live in the shadow of an illness that you haven't had yet. It's a very Orwellian idea. And I think we should resist it as much as possible.

    GROSS: Would you feel that way if you were a woman and there was a history of breast cancer in your family?

    MUKHERJEE: Very tough question - if I was a woman and I had a history of breast cancer in my family - if the history was striking enough - and, you know, here's a - it's a place where a genetic counselor helps. If the history was striking enough, I would probably sequence at least the genes that have been implicated in breast cancer, no doubt about it.
I post this to prove that even the experts in genetics have the dopiest ideas about it. He wants to inform the public about genetics, but he is willfully ignorant of the personal practical implications. I also criticized his New Yorker article on epigenetics.

Bad as he is, his reviewers are even worse. Atlantic mag reviews his book to argue that genes are overrated:

    The antidote to such Whig history is a Darwinian approach. Darwin's great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today's truth is tomorrow's null hypothesis — and next year's error. ...

    The point is not that this [a complex view of how genes work; see below] is the correct way to understand the genome. The point is that science is not a march toward truth. Rather, as the author John McPhee wrote in 1967, "science erases what was previously true." Every generation of scientists mulches under yesterday's facts to fertilize those of tomorrow. "There is grandeur in this view of life," insisted Darwin, despite its allowing no purpose, no goal, no chance of perfection. There is grandeur in a Darwinian view of science, too. The gene is not a Platonic ideal. It is a human idea, ever changing and always rooted in time and place. To echo Darwin himself, while this planet has gone cycling on according to the laws laid down by Copernicus, Kepler, and Newton, endless interpretations of heredity have been, and are being, evolved.

I do not recall Darwin ever saying that evolution does not make progress, or have a purpose. Whether he did or not, many modern evolutionists, such as the late Stephen Jay Gould, say things like that a lot. They not only deny progress and purpose in the history of life, they deny that science makes progress. They say that "today's truth is tomorrow's null hypothesis".

There are political undertones to this. Leftists and Marxists hate the idea of scientific truths, and they really despise truths about human nature. As you can see from my motto, I reject all of this. Science makes progress towards truth, and genuine truths are not erased or mulched. My positivism is in a minority among philosophers and science popularizers.

Speaking of academic leftists citing Darwin for foolish ideas, the current Atlantic mag has a philosopher article saying:

    The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ...

    This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ...

This is mostly nonsense, of course. Intelligence has been shown to be heritable, as would be expected from Darwinian evolution.
But I don't think that Darwin believed in such extreme genetic determinism, as he did not understand genes.

It is possible that people who believe that free will is an illusion have some mild form of schizophrenia. This is yet another example of philosophers thinking that they know better than everyone else. Philosophers and schizophrenics can hold beliefs that no normal person would.

Here is a summary of the Atlantic article:

    Libertarian free will [the "we could have chosen otherwise" form] is dead, or at least dying among intellectuals. That means that determinism reigns (Cave doesn't mention quantum mechanics), and that at any one time we can make only one choice. But if we really realized we don't have free will of that sort, we'd behave badly. Cave cites the study of Vohs and Schooler (not noting that that study wasn't repeatable), but also other studies showing that individuals who believe in free will are better workers than those who don't. I haven't read those studies, and thus don't know if they're flawed, but of course there may be unexamined variables that explain this correlation. Therefore, we need to maintain the illusion that we have libertarian free will, or at least some kind of free will. Otherwise society will crumble.

I hate to be anti-intellectual, but what am I to think when all the intellectuals are trying to convince me to give up my belief in free will? Or that they are such superior beings that they can operate without free will, but lesser beings like myself need to maintain that (supposedly false) belief?

Speaking of overrated intellectuals, I see that physicist Sean M. Carroll's new book is on the NY Times best-seller list.

Tuesday, May 24, 2016

Skeptics stick to soft targets

SciAm blogger John Horgan has infuriated the "skeptics" by saying they only go after soft targets, and not harder questions like bogus modern physics and the necessity of war. He has a point. I am always amazed when these over-educated academic skeptics endorse some totally goofy theory like many-worlds quantum mechanics.

Steve Pinker rebuts the war argument:

    John Horgan says that he "hates" the deep roots theory of war, and that it "drives him nuts," because "it encourages fatalism toward war." ...

    Gat shows how the evidence has been steadily forcing the "anthropologists of peace" to retreat from denying that pre-state peoples engaged in lethal violence, to denying that they engage in "war," to denying that they engage in it very often. Thus in a recent book Ferguson writes, "If there are people out there who believe that violence and war did not exist until after the advent of Western colonialism, or of the state, or agriculture, this volume proves them wrong." ...

    And speaking of false dichotomies, the question of whether we should blame "Muslim fanaticism" or the United States as "the greatest threat to peace" is hardly a sophisticated way for skeptical scientists to analyze war, as Horgan exhorts them to do. Certainly the reckless American invasions of Afghanistan and Iraq led to incompetent governments, failed states, or outright anarchy that allowed Sunni-vs-Shiite and other internecine violence to explode — but this is true only because these regions harbored fanatical hatreds which nothing short of a brutal dictatorship could repress.
    According to the Uppsala Conflict Data Project, out of the 11 ongoing wars in 2014, 8 (73%) involved radical Muslim forces as one of the combatants, another 2 involved Putin-backed militias against Ukraine, and the 11th was the tribal war in South Sudan. (Results for 2015 will be similar.) To blame all these wars, together with ISIS atrocities, on the United States, may be cathartic to those with certain political sensibilities, but it's hardly the way for scientists to understand the complex causes of war and peace in the world today.

I would not blame the USA for all those wars, but it has been the Clinton-Bush-Obama policy to destabilize Mideast governments, aid the radical Muslim forces of our choosing, and to provoke Putin. This has been a disaster in Libya, Syria, Egypt, and now also Kosovo, Iraq, and Afghanistan. Sometimes I think that Barack Obama and Hillary Clinton are seeking a World War III between Islam and Christendom. Or maybe just to flood the West with Moslem migrants and refugees. Pinker has his own dubious theories about war.

I do agree with Horgan that these supposed skeptics are not really very skeptical about genuine science issues. Another problem with them is that they are dominated by leftist politics. They will ignore any facts which conflict with their leftist worldview, and even purge anyone who says the wrong thing. The conference where Horgan spoke had disinvited Richard Dawkins because he retweeted a video that had some obscure cultural references that did not pass some leftist ideological purity test. They did not like Horgan either and denounced him on the stage. It is fair to assume that he will not be invited back.

Monday, May 23, 2016

IBM claims quantum computer cloud service

IBM announced a cloud service giving the public access to one of its quantum processors. There is more hype here. As Scott Aaronson likes to point out, quantum computers would not be exponentially faster for any common application. Here is a list of quantum algorithms, with their suspected speedups. According to this timeline, we have had 5-qubit quantum computers since the year 2000. I first expressed skepticism on this blog in 2005.

At least IBM admits the limitations. So there is no true quantum computer today, and no one has demonstrated any quantum speed-up. Don't plan on connecting this to any real-time service. IBM is running batch jobs. Your results will come to you in the mail. That's email, I hope, and not some punch-cards in an envelope.

Here is one way you can think about quantum computing, and my skepticism of it. Quantum mechanics teaches that you cannot be definite about an electron's position when it is not being observed. One way of interpreting this is that the electron can be in two or more places at once. If you believe that, then it seems plausible that the electron could be contributing to two or more different computations at the same time. Another view is that an electron is only in one place at a time, and that uncertainty in its position is just our lack of knowledge. With this view, it seems implausible that our uncertainty could be the backbone of a super-Turing computation. Some people with this latter view deny quantum mechanics and believe in hidden variables. But I am talking about followers of quantum mechanics, not them. See for example this recent Lubos Motl post, saying "Ignorance and uncertainty are de facto synonyms."
He accepts (Copenhagen) quantum mechanics, and the idea that an electron can be understood as having multiple histories as distinct paths, but he never says that an electron is in two places at the same time, or that a cat is alive and dead at the same time.

So which view is better? I don't think that either view is quite correct, but both views are common. The differences can explain why some people are big believers in quantum computing, and some are not. Suppose you really believe that a cat is alive and dead at the same time. That the live cat and the dead cat exist as distinct but entangled entities, not just possibilities in someone's head. Then it is not too much more of a stretch to believe that the live cat can interact with the dead cat to do a computation. If you do not take that view, then where is the computation taking place? I would say that an electron is not really a particle, but a wave-like entity that is often measured like a particle. So does that mean it can do two different computations at once? I doubt it.

Saturday, May 21, 2016

More on the European Quantum Manifesto scam

I trashed the European Quantum Manifesto, and a reader points me to this cryptologist rant against it:

    There is a significant risk that all of the benefits of quantum computing during the next 20 years will be outweighed by the security devastation caused by quantum computing during the same period. ...

    This brings me to what really bugs me about the Quantum Manifesto. Instead of highlighting the security threat of quantum technology and recommending funding for a scientifically justified response, the Manifesto makes the thoroughly deceptive claim that quantum technology improves security.

He then goes on to explain why quantum cryptography is never going to improve anyone's security.

    A company named ID Quantique has been selling quantum-cryptography hardware, specifically hardware for BB84-type protocols, since 2004. ID Quantique claims that quantum cryptography provides "absolute security, guaranteed by the fundamental laws of physics." However, Vadim Makarov and his collaborators have shown that the ID Quantique devices are vulnerable to control by attackers, that various subsequent countermeasures are still vulnerable, and that analogous vulnerabilities in another quantum-key-distribution system are completely exploitable at low cost. The most reasonable extrapolation from the literature is that all of ID Quantique's devices are exploitable. How can a product be broken if it provides "absolute security, guaranteed by the fundamental laws of physics"?

He explains how quantum cryptography misses the point of what cryptography is all about, and fails to address essential security issues that non-quantum techniques have already solved. He is right about all this, and I have made similar points on this blog for years. I also agree, and have said so here, that if quantum computing is successful in the next 20 years, then the social and economic impact will be overwhelmingly negative. The main application is to destroy the computer security that our society depends on.

Where I differ from him is that I say quantum computing and quantum cryptography are both scams, for different reasons. Quantum computing is technologically impossible, and almost entirely destructive if it were possible. Quantum cryptography is possible, but expensive, impractical, and insecure.
Actually, they are both possible in the sense that the laws of quantum mechanics allow experiments whose confirmation of the 1930 theory can be interpreted as favoring quantum computing or quantum cryptography. But the excitement is about the possibility of super-Turing computing and more secure key exchanges, and these have never been achieved.

Wednesday, May 18, 2016

Aristotle, Einstein, and the nature of force

Vienna physics professor Herbert Pietschmann writes:

    But Newton also removed Aristotle's division of the motions into "natural motions" and "enforced motions". For Newton, also Aristotle's "natural motions" became "enforced" by the gravitational force. In this way, he unified our understanding of dynamics in the most general way. ...

    In 1915, Albert Einstein had found the basic equation in his theory of general relativity. He published a complete version of his thoughts in 1916. According to this theory, the gravitational interaction was not caused by a force but by a curvature of spacetime. In this basic publication Einstein writes: "Carrying out the general theory of relativity must lead to a theory of gravitation: for one can generate a gravitational field by merely changing the coordinate system." ...

    To summarize, let us state that motion caused by gravitation is not caused by a force; in that sense it differs from all other motions. Einstein made this clear in his quoted paper from 1916. He writes: "According to the theory of General Relativity gravitation has an exceptional role with respect to all other forces, especially electromagnetism." This is what in a sense we might call "return of Aristotelian physics" since it clearly distinguishes between "natural motion" and "enforced motion", constituting the basic problem of modern physics. Acceleration is either caused by the geometry of spacetime (gravitation) or by an external force in Euclidian spacetime (all other forces). Mathematically, these two different views are represented either by the Theory of General Relativity (gravitation) or by Quantum Field Theory (all other forces).

This is partially correct. Aristotle's concept of force seems wrong by Newtonian standards, but is actually reasonable in the light of relativity, as I previously argued. But Einstein did not believe that gravitational acceleration was caused by the geometry of spacetime, as most physicists do today.

Einstein is also making a false distinction between gravity and electromagnetism. The preferred relativistic view of electromagnetism, as developed by Poincare, Minkowski, and maybe Weyl, is that the field is a curvature tensor and the force is a geometrical artifact. In this view, electromagnetism and gravity are formally quite similar.

    In his paper on General Relativity from 1916 he writes: "the law of causality makes a sensible statement on the empirical world only when cause and effect are observable." Since a gravitational "force" was not observable, Einstein had eliminated it from his theory of gravitation and replaced it by the curvature of spacetime. ...

    There are people who deduce that the law of causality is invalid because the theory of relativity made it obsolete.

Einstein himself seems to have gone back and forth on the issue. The idea is mistaken, whether it is Einstein's fault or not. Causality is the essence of relativity.

    In this connexion it is historically interesting that only ten years later Einstein converted from these ideas which had led him to his most fundamental contributions to physics.
    Werner Heisenberg, who had used the same philosophy for the derivation of his uncertainty relation, recalls a conversation with Einstein in 1926;

    Einstein: "You don't seriously believe that a physical theory can only contain observables."
    Heisenberg: "I thought you were the one who made this idea the foundation of your theory of relativity?"
    Einstein: "May be I have used this kind of philosophy, nevertheless it is nonsense."

Here is where Einstein rejects the positivist philosophy for which he is widely credited. The main reason Einstein is credited for special relativity over Lorentz is for the greater emphasis on observables. But as you can see, Einstein disavowed that view.

For the skeptical view of causality in physics, see this review:

    The role of causality in physics presents a problem. Although physics is widely understood to aim at describing the causes of observable phenomena and the interactions of systems in experimental set-ups, the picture of the world given by fundamental physical theories is largely acausal: e.g. complete data on timeslices of the universe related by temporally bidirectional dynamical laws. The idea that physics is acausal in nature, or worse, incompatible with the notion of causality, has attracted many adherents. Causal scepticism in physics is most associated with Russell's (1913) arguments that a principle of causality is incompatible with actual physical theories. For causal sceptics, insofar as causal reasoning is used in physics, it is at best extraneous and at worst distorts the interpretation of a theory's content.

No, causal reasoning is essential to physics. The arguments of Bertrand Russell and other philosophers rejecting causality are nonsense.

Thursday, May 12, 2016

Google seeks quantum supremacy

Ilyas Khan writes:

    I travelled over to the west coast and spent some time with the Artificial Intelligence team within Google at their headquarters just off Venice Beach in LA. Like all who visit that facility, I am constrained by an NDA in talking about what is going on. However in their bid to establish "Quantum Supremacy" the team, led by Hartmut Neven, talks not in terms of decades but in a timetable that is the technology equivalent of tomorrow. For the avoidance of doubt, the "tomorrow" that I refer to is the timeline for building and operating a universal quantum computer.

I interpret "the technology equivalent of tomorrow" as being within two years. Check back here at that time. No, Google is not going to succeed.

This is not like self-driving cars, where it is clear that the technology is coming, as prototypes have proved feasibility. For that, computers just have to mimic what humans do, and they have several advantages, such as better sensors, faster reactions, and real-time access to maps. Despite hundreds of millions of dollars in investment, there is still no convincing demonstration of quantum supremacy, or any proof that any method will scale.

Google is all about scale, so I am sure that its researchers have a story to tell their senior management. But it is covered by a non-disclosure agreement, so we do not know what it is. You can bet that if Google ever achieves a universal quantum computer, or even just quantum supremacy, it will brag to everyone and attempt to collect a Nobel prize. If you do not hear anything in a couple of years, then they are not delivering on their promises.
Monday, May 9, 2016

Darwin did not discredit Lamarck

Genomicist Razib Khan writes about a New Yorker mag mistake:

    But there's a major factual problem which I mentioned when it came out, and which some friends on Facebook have been griping about. I'll quote the section where the error is clearest:

        …Conceptually, a key element of classical Darwinian evolution is that genes do not retain an organism's experiences in a permanently heritable manner. Jean-Baptiste Lamarck, in the early nineteenth century, had supposed that when an antelope strained its neck to reach a tree its efforts were somehow passed down and its progeny evolved into giraffes. Darwin discredited that model….

    It is true that in Neo-Darwinian evolution, the modern synthesis, which crystallized in the second quarter of the 20th century, genes do not retain an organism's experiences in a permanently heritable manner. But this is not true for Charles Darwin's theories, which most people would term a "classical Darwinian" evolutionary theory.

Not just the New Yorker. For some reason, the Darwin idolizers frequently stress that he proved Lamarck wrong about heredity. This is misguided for a couple of reasons. First, they are usually eager to convince you of some leftist-atheist-evolutionist-naturalist-humanist agenda, but they could make essentially the same points if Lamarckianism were true. Second, Darwin was a Lamarckian, as a comment explains:

    Darwin's On the Origin of Species proposed natural selection as the main mechanism for development of species, but did not rule out a variant of Lamarckism as a supplementary mechanism.[11] Darwin called his Lamarckian hypothesis pangenesis, and explained it in the final chapter of his book The Variation of Animals and Plants under Domestication (1868), after describing numerous examples to demonstrate what he considered to be the inheritance of acquired characteristics. Pangenesis, which he emphasised was a hypothesis, was based on the idea that somatic cells would, in response to environmental stimulation (use and disuse), throw off 'gemmules' or 'pangenes' which travelled around the body (though not necessarily in the bloodstream). These pangenes were microscopic particles that supposedly contained information about the characteristics of their parent cell, and Darwin believed that they eventually accumulated in the germ cells where they could pass on to the next generation the newly acquired characteristics of the parents. Darwin's half-cousin, Francis Galton, carried out experiments on rabbits, with Darwin's cooperation, in which he transfused the blood of one variety of rabbit into another variety in the expectation that its offspring would show some characteristics of the first. They did not, and Galton declared that he had disproved Darwin's hypothesis of pangenesis, but Darwin objected, in a letter to the scientific journal Nature, that he had done nothing of the sort, since he had never mentioned blood in his writings. He pointed out that he regarded pangenesis as occurring in Protozoa and plants, which have no blood. (wiki-Lamarckism)

Here is more criticism of the New Yorker, and part 2. I have heard people say that the New Yorker employs rigorous fact checkers, but I don't think that they try to check that the science is right. It is a literary magazine, and they check literary matters.
Denying relativity credit to Poincare

I stumbled across this 1971 book review:

In a third article Stanley Goldberg gives a remarkably clear picture of Einstein's special relativity theory and the response of the British, French, and Germans to the theory. Starting with two simple postulates, videlicet [= namely] the constancy of the velocity of light and the impossibility of determining an absolute motion of any kind, Einstein was able to derive the Lorentz transformation with ease as well as many other relations of a kinematical nature. The "ether" was dismissed in a short sentence. The German physicists understood the theory, but not all agreed with it. The British stuck with the ether and didn't even try to understand special relativity. The French were not much interested in the theory either; even Poincaré failed to mention it in his writings on electrodynamics.

Poincare did not fail to mention it; he created the theory. Poincare is mainly responsible for the spacetime geometry and electromagnetic covariance of special relativity, along with elaborations by Minkowski.

I don't know how physicists could be so ignorant of one of the great advances of physics. I do not know anything like it in the history of science. Every discussion of relativity goes out of its way to attribute the theory solely to Einstein, and to give some history of how it happened. And they get the story wrong every time. I explain more in my book.

Friday, May 6, 2016
Stop asking whether quantum computing is possible

The current SciAm mag has a paywalled article describing three competing research programs trying to build the first quantum computer, and concludes:

The time has come to stop asking whether quantum computing is possible and to start focusing on what it will be able to do. The truth is that we do not know how quantum computing will change the world.

No, quantum computing is probably impossible, and would not change the world even if it were possible.

One supposed application is a "quantum internet", in which quantum computers are used as routers to transmit qubits from one user to another. The only known use for that is for so-called quantum cryptography, but that has no advantages over conventional cryptography. It would cost a million times as much, and be hopelessly insecure by today's standards. It cannot authenticate messages, and all implementations have been broken, as far as I know.

The article also mentions quantum clocks. I do not know what that is all about, but we already have extremely cheap clocks that are far more accurate than what is needed by anyone.

Meanwhile, IBM claims to have 5 qubits:

IBM said on Wednesday that it's giving everyone access to one of its quantum computing processors, which can be used to crunch large amounts of data. Anyone can apply through IBM Research's website to test the processor, however, IBM will determine how much access people will have to the processor depending on their technology background -- specifically how knowledgeable they are about quantum technology.

If IBM really had a revolutionary computer, it would be able to figure out something to do with it. No, it cannot "be used to crunch large amounts of data."
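For a sense of scale, here is a minimal state-vector sketch in Python/numpy — my own illustration, not IBM's actual programming interface. A 5-qubit register is just 2^5 = 32 complex amplitudes, which any laptop simulates exactly; the helper functions and the GHZ circuit below are assumptions for the demonstration.

```python
# Sketch: a 5-qubit register as a 32-amplitude state vector (illustrative only).
import numpy as np

n = 5
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                         # start in |00000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit q of the n-qubit state vector."""
    s = state.reshape([2] * n)
    s = np.moveaxis(s, q, 0)
    s = np.tensordot(gate, s, axes=([1], [0]))
    return np.moveaxis(s, 0, q).reshape(-1)

def apply_cnot(state, control, target):
    """Flip the target qubit in the control=1 half of the state vector."""
    s = state.reshape([2] * n).copy()
    s = np.moveaxis(s, [control, target], [0, 1])
    s[1] = s[1, ::-1].copy()           # swap target components where control=1
    return np.moveaxis(s, [0, 1], [control, target]).reshape(-1)

state = apply_1q(state, H, 0)
for q in range(1, n):
    state = apply_cnot(state, 0, q)    # GHZ state (|00000> + |11111>)/sqrt(2)

probs = np.abs(state) ** 2
print({format(i, "05b"): round(p, 3) for i, p in enumerate(probs) if p > 1e-9})
# -> {'00000': 0.5, '11111': 0.5}
```

The entire "processor" fits in 32 numbers, which is the point of the skepticism above: nothing at this size can crunch data that a classical machine cannot.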
Wednesday, May 4, 2016
Wired explains entanglement

Famous physicist Frank Wilczek explains entanglement in a Wired/Quanta mag article:

An aura of glamorous mystery attaches to the concept of quantum entanglement, and also to the (somehow) related claim that quantum theory requires “many worlds.” Yet in the end those are, or should be, scientific ideas, with down-to-earth meanings and concrete implications. Here I’d like to explain the concepts of entanglement and many worlds as simply and clearly as I know how. ...

So: Is the quantity of evil even or odd? Both possibilities are realized, with certainty, in different sorts of measurements. We are forced to reject the question. It makes no sense to speak of the quantity of evil in our system, independent of how it is measured. Indeed, it leads to contradictions.

The GHZ effect is, in the physicist Sidney Coleman’s words, “quantum mechanics in your face.” It demolishes a deeply embedded prejudice, rooted in everyday experience, that physical systems have definite properties, independent of whether those properties are measured. For if they did, then the balance between good and evil would be unaffected by measurement choices. Once internalized, the message of the GHZ effect is unforgettable and mind-expanding.

To get to this conclusion, you have to equate "definite properties" with measurement outcomes. An electron has definite properties, but it is not really a particle and does not have a definite position. If you measure the electron, using a method that puts it in a definite position, then it is in that position for the instant of the measurement. A nanosecond later, it is back to its wave-like state with indeterminate position.

For more on Wilczek, see this Edge interview.

Monday, May 2, 2016
New book on spooky action

I previously trashed George Musser's new book (without reading it), and now he was on Science Friday radio promoting it:

Could the space we live in—our everyday reality—just be a projection of some underlying quantum structure? Might black holes be like the Big Bang in reverse, where space reverts to spacelessness? Those are the sorts of far-out questions science writer George Musser ponders in his book Spooky Action at a Distance: The Phenomenon that Reimagines Space and Time—And What it Means for Black Holes, the Big Bang, and Theories of Everything. In this segment, Musser and quantum physicist Shohini Ghose talk about the weird quantum world, and the unpredictable nature of particles.

Here is an excerpt:

The world we experience possesses all the qualities of locality. We have a strong sense of place and of the relations among places. We feel the pain of separation from those we love and the impotence of being too far away from something we want to affect. And yet quantum mechanics and other branches of physics now suggest that, at a deeper level, there may be no such thing as place and no such thing as distance. Physics experiments can bind the fate of two particles together, so that they behave like a pair of magic coins: if you flip them, each will land on heads or tails—but always on the same side as its partner. They act in a coordinated way even though no force passes through the space between them. Those particles might zip off to opposite sides of the universe, and still they act in unison. These particles violate locality. They transcend space.
Evidently nature has struck a peculiar and delicate balance: under most circumstances it obeys locality, and it must obey locality if we are to exist, yet it drops hints of being nonlocal at its foundations. That tension is what I’ll explore in this book. For those who study it, nonlocality is the mother of all physics riddles, implicated in a broad cross section of the mysteries that physicists confront these days: not just the weirdness of quantum particles, but also the fate of black holes, the origin of the cosmos, and the essential unity of nature. Everything in the universe obeys locality, as far as we know. Musser's previous book was The Complete Idiot’s Guide to String Theory, and that does not require spooky action, so presumably he understands that the spookiness is just goofiness to sell books. He may understand that string theory is all a big scam also.
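As a concrete gloss on the "magic coins" in the excerpt above, here is a toy numpy sketch (my own illustration, not from Musser's book): sampling joint measurements of the Bell state (|00⟩ + |11⟩)/√2 in a shared rotated basis produces agreeing "coins" every time, whatever the angle — the correlation that gets described as transcending space.

```python
# Toy "magic coins": joint measurements of a Bell state (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
phi_plus = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def measure_both(state, theta):
    """Measure both qubits in the basis rotated by the same angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])            # real rotation of each basis
    amps = np.kron(R, R) @ state
    probs = amps ** 2
    outcome = rng.choice(4, p=probs / probs.sum())
    return outcome >> 1, outcome & 1           # (coin A, coin B)

for theta in (0.0, 0.4, 1.1):
    results = [measure_both(phi_plus, theta) for _ in range(2000)]
    agree = np.mean([a == b for a, b in results])
    print(f"theta={theta:.1f}: coins agree {agree:.1%} of the time")
# prints 100.0% agreement for every angle
```

The agreement follows from the invariance of this Bell state under identical real rotations of both qubits; whether one calls that "violating locality" is exactly the interpretive dispute in the post above.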
Thursday, October 29, 2009
House-Hunting in Wales: conclusions

We found nothing we remotely liked. No wonder they're not very price-sensitive.

House-Hunting in Wales: part 3

Breakfast view from The Lion Hotel, Builth Wells
The M4 Severn crossing
Welcome to England
Exasperated in Llandovery

Wednesday, October 28, 2009
House-Hunting in Wales: part 2

Near Brecon: no garden and another house right behind
The A40 running through Llandovery
Great view, shame about the damp cottage

Tuesday, October 27, 2009
House-Hunting in Wales: part 1

Having sold our current accommodation (subject to contract), our plan is to buy a house which sits on some high plateau in magnificent isolation, with ample grounds and mountain views and country walks outside our gate. So far, our search in mid-Wales has proved a mite disappointing.

Rhayader looking south

We arrived in Rhayader at 12.45 today (three and a half hours driving from Andover). After lunch in one of the attractive pubs, we did the two estate agents and after a brief tourist drive to see the reservoirs of the Elan valley (very scenic - don't buy downstream) we took a look at five properties. To be honest, none of them was particularly inspiring. They were either mini-developments in a nearby village (no 'splendid isolation') or rather undistinguished bungalows fringed by barbed wire in a rural sheep-farming landscape. Somehow it didn't quite capture our dream.

We didn't go for the cottage with this view

Mid-afternoon we abandoned the picturesque, tourist-friendly but tiny Rhayader and drove to Llandrindod Wells, a pleasant Victorian town around 12 miles away which somewhat resembles Georgian Bath. It is however surrounded by valley farms, another house-hunt disappointment. We therefore proceeded straight down to Brecon across the mountains, where we saw groups of soldiers being put through their paces. There was a heavy military presence above 2,000 feet.

Clare in this evening's Brecon Chinese restaurant

Brecon has a good feel about it, pleasant shops and restaurants and a cheerful occupied street life. Half-term probably has something to do with it. We ate at the local Chinese which was good, and we're currently ensconced in the George Hotel. Tomorrow we'll be checking whether there are properties for sale nestled in the Brecon Beacons.

The Moralistic Fallacy

I didn't catch much of the Channel 4 show "Race and Intelligence: Science's Last Taboo" last night. The pre-show publicity promised that presenter Rageh Omaar would, in a fair-minded way, demolish the claim that ethnicity was in any way involved in differential mental traits such as personality and intelligence. An example of the moralistic fallacy. I found the programme profiling Warren Buffet over on BBC-2 much more interesting.

However, it would have been impossible for Rageh Omaar to come to any other conclusion in polite society, so strong is the grip of Human Biodiversity Denial, if I may utilize a fashionable term of abuse. A point related to the moralistic fallacy is the framing of this issue in purely moral terms - "it's impolite and demeaning and racist so it's beyond the pale to even discuss the issue unless you're a member of the extreme right and even then you're soft-pedalling these days". What a close-down!

The issue of human biodiversity is a scientific one and an evolutionary one in particular and as usual in science one has to frame hypotheses and gather data. Strange how different the discussion is when one approaches it this way, but it will never get on TV.
Here is what I wrote when Watson was demonised, by the way.

I hope the Royal Mail will not lose the letter I sent yesterday deregistering from VAT, not that filling in the form every three months is such a huge chore. A good year puts me firmly in the category of those small businesses which need to pay VAT. However, this year and I anticipate next year are not good years. My trade, designing public telecommunications networks, depends on capital investment flowing through. And there's not much of that.

Sunday, October 25, 2009
Stourhead in October

We were last at Stourhead in May 2009. We agreed then that there was too much to explore in one day and vowed to return. What better than the first day after the clocks went back to redeem such an intention.

Photographing the Autumn

We started our leisurely day this morning with the sky blue and the air calm and warm. The 40 mile drive to Stourhead was accompanied by gathering cloud cover, however, and we thought we'd probably be the only ones there. Nothing could have been further from the truth: the place was crawling with the species National Trust humanity: stout shoes, corduroy trousers, Gore-Tex anoraks, beautifully-groomed but greying hair. The car park was full of meticulously clean four-by-fours.

Clare after a strenuous walk

We walked to Alfred's Tower, two miles through muddy woodland, mostly uphill. We reached the 150 foot tower at 11.50, ten minutes before it opened. The wait and additional height were too much for Clare, whose blood sugar level had plummeted to new depths. We did an immediate u-turn and made our weary way the two miles back to the main buildings and were revived at The Spread Eagle pub.

The view across the lake

We intend to spend a cosy afternoon reading The Sunday Times and catch the last episode of Emma this evening. Perhaps we were not so out of place at Stourhead as we sometimes imagine.

Saturday, October 24, 2009
UP in 3D

The new local cinema doesn't do 3D so we had to revert to our previous practice of visiting Salisbury (a half hour drive away) to see the 3D version of "Up".

1. The story

The film centres around a grumpy old man named Carl Fredricksen and an overeager boy-scout Wilderness Explorer named Russell who fly to South America in a floating house suspended from helium balloons (Wikipedia). It opens however with a short, showcasing Pixar's capabilities in 3D rendering, which was impressive and amusing. I also liked the start of the film proper, which reprised Mr Fredricksen's life from a small child, most of it shared with the love of his life, Ellie, who finally dies leaving him a widower. But I was always a sucker for sentimentality, of which this film is somewhat overfull. The main story retains interest although it's overlong and sags in the middle. The characters, who are not entirely stereotypes, achieve their just deserts in the end ... but that's a kid's film for you.

2. 3D

Polarized spectacles are distributed in a cellophane wrapper at the ticket-check point. These fit over ordinary glasses. The 3D effect is quite real and initially impressive. It's not, however, immersive - at least not on a regular-sized cinema screen. 3D added little to the story however, really coming into its own in action and landscape sequences. Seems that film makers have not yet learnt how to use 3D to illuminate subtle, more relationship-centric scenes.

3. Rendering

It's been a long time since I've seen an animation. The people were rendered as caricatures, but the backgrounds were quite impressive.
There were scenes where I could have believed that we were seeing real footage of the waterfall in South America, or picket fence suburban America. There must be actors today who will never die, because they will be digitally rendered in the years to come. No tantrums and a lot less bucks. Friday, October 23, 2009 A First Course in String Theory "A First Course in String Theory" by Barton Zwiebach has arrived courtesy of the Amazon courier. This is my OU 'closed season' reading - although flipping through the contents it's pretty daunting: a full year's course for senior undergraduates at MIT. After sniffing "I thought you were off String Theory" Clare has written a dedication in the cover. It says "Happy Christmas" which I read as faint irony. Wednesday, October 21, 2009 Now my quantum mechanics course is finished I've had more time to work on fiction. I've moved my scattered short stories to a new blog called "Stories by Nigel Seel". It's also the link to the right of the main blog window here, called "Stories". Reviewing the material as I moved it across I thought some of it wasn't bad. I particularly commend to you "True Romance" which would make a misogynist proud (if they had absolutely no sense of humour). Tomorrow I will upload there a more substantial piece, c. 5,000 words, which will be sent off to Interzone before Christmas, perhaps along with True Romance and one further long story provisionally titled Urban Warrior which I am still revising. Tuesday, October 20, 2009 Solicitor Days Down to Stockbridge this afternoon to meet our solicitors (Brockmans). I guess we're around four weeks from exchange of contracts. My review of "Superfreakonomics" was posted on the website this morning at 9 a.m., only a few hours after US publication. So it was the very first review. There's a minor achievement for you! The book is mired in pre-publication controversy over the pond as it has been tagged as "climate-denying". Still, at time of writing everyone else has given it five stars and I gave it three. I must be doing something right. Monday, October 19, 2009 Whither WiFi hotspots? I was at the Newbury Hilton this afternoon with a colleague to review a Business Process Transformation pitch. As I arrived first, I powered up my laptop to check out the Internet access. I guess I was hoping and expecting a free WiFi service: no such luck. The entry browser page was to the BT Openzone portal, and the current price there is £5.88 for 90 minutes (there are other buy-plans where more purchased time costs less per minute). Naturally I backed out of that and plugged in my Vodafone 3G USB modem instead. The deal here is that £15 buys you 1GB of data - that's 30 hours surfing, 650 emails and dozens of downloads according to the website. I guess that checking my email, sending a Skype message to Clare and looking at a few pages probably cost me much less than a pound. I think the economics of WiFi hot spots are starting to look very dubious. Perhaps I should say even more dubious than they have in the past. Sunday, October 18, 2009 Review: Superfreakonomics – Levitt & Dubner In Chapter 1 we read a prurient but entertaining account of Chicago prostitution. We learn the benefits of having a pimp, the relative cost of different sexual services and why the police go easy on the ladies (this last explanation is unconvincing). Then we move to the high-end ‘escort’ market and consider the case study of “Allie”. Economic concepts: commodity good, price discrimination, inelastic demand, principal-agent problem. 
Plus a “how-to” guide on being a successful courtesan.

Chapter 2 is organised around the concepts of data mining. We learn about the financial transaction profiles of Islamic terrorists, the disutility of hospitals and the relative performance of doctors in dealing with different kinds of illness and injuries. Economic concepts: data analysis.

Chapter 3 is about altruism. The core of this chapter deconstructs a 1964 murder in New York City which was apparently witnessed by many people, none of whom intervened or even reported it to the police. This leads to an appraisal of economics experiments which purportedly showed people to possess an intrinsic core of altruism (leading to Nobel prizes in economics for the researchers). Such an appealing conclusion is debunked as you might expect. The murder story is also debunked. Economic concepts: limitations of behavioural economics.

Chapter 4 is about perverse incentives and specifically how powerful interest groups succeed in bringing about outcomes which disadvantage society overall. In the sights are doctors and auto makers. It is shown repeatedly that the hero who correctly points out that the emperor has no clothes is subsequently uniformly reviled by said interest groups.

Chapter 5 is the part about global warming. Or is it cooling? Or is it something which just happens anyway? A long piece centred around Nathan Myhrvold’s company Intellectual Ventures shows that, assuming global warming is actually the problem fashionable opinion claims, there exist a number of technological solutions which for a modest amount of cash would deal with it. Alas, such ideas are anathema to Green lobbies.

In the epilogue, we learn that economic concepts of monetary value and exchange can also be taught to (and internalised by) capuchin monkeys. I was not entirely clear why we were being told this apart from the monkey prostitution link back to Chapter 1.

I am torn two ways about this book. In its favour it makes intelligent points about a number of topical issues, it correctly undermines various shibboleths of political correctness, and it’s compulsively readable – I was able to finish the 216 pages in a day. On the other hand, the sycophantic writing style is gratingly folksy-humorous. Subtle flattery throughout confirms the authors and reader as equal partners, intellectually superior to the idiots the book so delights in debunking. The book is somehow less than the sum of its parts.

So if you are looking for an upmarket Reader’s Digest type book which will confirm you are an important mover and shaker, that you are fashionably dismissive of political correctness to an acceptable degree, and that won’t force you to engage with any difficult concepts, I guess this book is for you. Otherwise get it from the library or read the Sunday Times serialisation.

Friday, October 16, 2009
Our house sale - an offer accepted

Yes, at ten to six this evening Clare got a phone call from the estate agent. An offer of £345,000 (our guide price is £350k). Clare accepted. So we hope to be out of here by Christmas. Did all that mathematical modelling help? I'd like to think so!

SM358 Exam - not so bad

For a long time I believed it was a joke. An Open University exam to be held at Southampton football ground? Really? It took me 50 minutes to drive down to the stadium and find my way to the first floor suite where the exam was to be held. To my surprise the room was enormous, with space for 16 rows of miniature writing desks leading back to the picture window.
All this was initially hidden by screens so that when you walked in you saw red carpeting with upholstered chairs laid out like a departure lounge and maybe fifty slumped people each looking at the floor, as if informed of a death in the family. I joined them.

We were wheeled to our places at 2.15 p.m. to unpack our pens and instruments and fill in the various slips. At 2.30 I picked up the SM358 question paper and began to review this year's questions on quantum mechanics. I thought it was a pretty fair paper, with little tricksiness. It's amazing how fast three hours can go when you have a very great deal to do. And as far as I could ascertain, no-one put their hand up and asked to go to the toilet, no doubt a source of vast concern before the three hour exam got underway.

Results due on December 18th. Update December 16th: I'm pleased to say I received a distinction.

Thursday, October 15, 2009
The secretary problem applied to selling your house

This problem goes under various names: the secretary problem, the sultan's dowry problem, the 'choose a fiancée' problem, the beauty contest problem.

In the beauty contest variant, you are the judge. You are presented with the contestants one-by-one. After evaluating each contestant, you must either declare that contestant to be the winner, at which point the contest ends; or disqualify the current contestant from any further consideration and move on to the next girl. The question is, which choosing strategy makes it most likely you will choose the best, most beautiful candidate?

Clearly, if you choose the first girl you see, you are rejecting all the rest without seeing them: hardly smart. On the other hand, if you just keep on rejecting until you come to the last one, you're stuck with her. How likely is it that she's going to be the very best of the lot? Clearly a better plan is to keep looking (and rejecting) for a while to get a sense of the general level of beauty, and then choose the next contestant you see who is more beautiful than all the ones you've previously rejected - hopefully before you run out of candidates!

If there are n girls in the competition, it turns out that the best strategy is to consider and reject the first 37% of n (exact figure is n/e) and then choose the next candidate who is the best so far. This leads you to choose the girl who is actually the most beautiful 37% of the time (exact figure 1/e). Not stupendous odds, but reasonable under the constraints.

The reason for mentioning this is that selling a house can look like a version of this problem too. The viewers come one by one and make an offer (a dowry) to you. If they're not interested, their value to you is zero: they're not beautiful at all. Or they may make an offer which doesn't impress: they're not all that beautiful. Each of these viewers is telling you something about the overall market - the space of possible offers for your house. So when should you accept an offer, realising that it's the best you're likely to get?

One of the issues with the secretary problem, in all its variants, is that you need to know the total number of applicants in advance. This translates to the total number of viewers you are prepared to tolerate. Noting we are currently seeing an average of 2 viewers per week, for the sake of argument I'm going to take three possible viewing totals: 16, 24 and 32, corresponding to the house being on the market for two months, three months and four months.
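Before plugging in the numbers, here is a quick Monte Carlo check of the 37% rule (a sketch of my own; the function and parameters are illustrative, not from any reference):

```python
# Monte Carlo check of the skip-n/e strategy for the secretary problem.
import math
import random

def best_chosen(n, k, trials=100_000):
    """Fraction of trials in which the skip-k strategy picks the best of n."""
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)        # random arrival order
        threshold = max(ranks[:k], default=-1)    # best of the rejected block
        chosen = next((r for r in ranks[k:] if r > threshold), ranks[-1])
        if chosen == n - 1:                       # rank n-1 is the best candidate
            wins += 1
    return wins / trials

n = 16
k = round(n / math.e)                             # ~37% of 16 = 6
print(k, best_chosen(n, k))                       # prints 6 and roughly 0.39
```

For small n the success probability sits a little above the asymptotic 1/e ≈ 0.37, consistent with the figures quoted above.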
If n = 16, you should see and reject bids from 37% of 16 = 6 viewers, and then accept a higher bid once you get it from a subsequent viewer. If n = 24, you should reject bids from the first 9. If n = 32 you should reject bids from the first 12.

The model therefore advises that we should see and 'reject' a few more - we have so far seen and 'rejected' seven viewers (six of whom did not make formal offers). Of course, this also applies to buying a house where the message is: don't make an offer too early.

Wednesday, October 14, 2009
House sale: four weeks + Adrian returns

Four weeks into house selling and we have received one offer from seven distinct viewings (which we turned down). We have had three repeat visits making ten viewing events in all. The market is still crawling around on the bottom without too much activity. We're hanging in there and thankful that we have no pressing deadline to sell.

Adrian arrived back this morning after a thirty hour flight from Christchurch, New Zealand via Singapore. In about a month's time he'll be off to Canada (Sun Peaks) for the winter season as ski & snowboarding instructor. He's looking remarkably fit and awake as I write this.

I've pretty much completed my revision for the SM358 QM exam this Friday. Horrible to relate, I'm still finding pretty obvious things which puzzled me - example: energy levels of the helium atom ignoring electron-electron repulsion. Easy once you check back in the book.

Evening Sky

View yesterday evening in what had been a tee-shirt and jeans day.

Monday, October 12, 2009
Quantum Theory and Telecom Networks

I originally speculated on whether there might be a "deep theory of telecoms". Roy Simpson wrote to me as follows.

Ok, well here are some comments on that 2008 post based on the original question. First there is no reference to the ideas of new physics directly applied to telecoms which was my original point, e.g. nothing on quantum routers (even quantum computers). More generally the problem with the post is that because the axiomatisation mentioned by A. Wheen never happened in the book we don't actually know what you count as "Telecoms Mathematics" i.e. what is in scope and what is out of scope. We have several axes to consider and the impression I have from that post is the following:

1. Operations vs. development axis

On this axis I guess that you are referring to the mathematics as used in current telecoms operations (which I agree is relatively graph- and statistics-based but not a huge conceptual leap anywhere), rather than what might be required in the development process of new types of structures, connections, equipment and technology.

2. Telecoms as existing system vs software development

Basically further depth on the point above. Two thirds of a telecoms system is software - an area in which reliability, real-time behaviour modelling, enhancement is underdeveloped and requires mathematical models (maybe) of all kinds.

3. Current vs. new axis

Again, are we referring to what is currently done to model and develop systems or what could/should be done to make it all cheaper/better/smarter. Remember that some have advocated e.g. learning systems to help the telecoms network do its network management better.

4. Maths vs. Physics axis

In true STL systems fashion are we referring to just a maths solution or also a physics based solution? Any physics (e.g. the network timings using relativistic satellites) brings in its own maths and maybe its own new maths challenges.

And let's not forget the NP-completeness corner.
So many published network algorithms seem to falter "in general" as the underlying problem is NP-complete. Again any solution here might bring in unknown new maths/logic.

Well, several of the problems you note above are common to any large-scale software system. To concern oneself with telecoms per se, we need to formalise what we mean by a telecoms system in some useful way. I guess my intuitions tell me that a telecoms system is a network structure with endpoint nodes and interior nodes such that messages can be routed node-to-node between arbitrary endpoints. I guess everybody can have their own definition of course.

As I've defined it, the analysis of telecoms systems is a subdivision of classical graph-theory - especially in the design of routing protocols as you might expect. Small networks never pose any problems, so requirements for new technologies could be expected when dealing with scaling problems. Telecom architects and designers are, as you might expect, highly-sensitised to scalability issues just because they are so important - the IETF for example normally has scalability as a major protocol/architecture design constraint, see any RFC.

As to whether quantum theory could be in any way relevant, it seems to me that today we see the relevance in three forms.

a. Classical devices (e.g. integrated circuits) which internally rely upon explicitly quantum phenomena to work at all. These are found in all telecom equipment.

b. Quantum network functionality - most likely involving entanglement - which explicitly might add novel functionality to improve telecom network operation. There might I suppose be applications in routing, considered as a distributed form of quantum computing, although it seems a long way off. Ditto for quantum teleportation. As you suggest, this will be in SF stories before it gets to research papers and then into 22nd century networks!

c. End-to-end issues, most notably quantum encryption and key distribution, where it's a constraint upon the network that the end-to-end requirement (typically not to collapse entanglement) should be met. Here we obviously have things happening right now.

Selling our house: the first offer

Today we received the first offer for our house, one for £325,000 which we rejected.

We bought our house six years ago in October 2003 for £300,000. Over recent years the Government has operated an annualised inflation target of 2.5%, around which the actual rate has fluctuated. Given a nominal 2.5% inflation rate, what is the equivalent today of £300,000 six years ago? Answer: £300,000 × 1.025⁶ ≈ £348,000.

Even if we achieve our target price of £350,000 we will have made essentially a zero capital gain - in fact a significant loss once you take transaction charges into account. Who says that property is a good investment?

Amazon Kindle? Not yet!

The Sunday Times (ingear) recommended waiting a while until the web access of the US version is added and more UK-centric services arrive. But even then I fear I am not tempted.

1. The cost: at £175 you can buy a lot of old-fashioned books.

2. The Amazon Kindle ebook format is proprietary, so I guess you can't easily transfer ebooks to any other device.

3. At the moment I can just hand a book over to someone else to read. Now I've got to hand over the Kindle too and Clare would probably drop it in the bath (her point, not mine).
The Kindle's advantages - a whole library in a slim package and instant loading of a new title over its wireless connection - aren't sufficient to offset its disadvantages at the point of use. What could make a difference? A really good text-to-speech function would be a start - we all like being read to. I heard they had to disable this function because of copyright mutterings from the 'audio book community'. Great.

Friday, October 09, 2009
Shadow strikes again

Found strewn in our garden this morning as, in my ceaseless attempts to improve the English climate, I went to collect coal for the fire.

A headless corpse
Scattered body parts

What are we to do with him?

Update 1: (2.45 p.m.). Clare has now buried the rabbit.

Update 2: we have two more viewers for our house tomorrow.

Thursday, October 08, 2009
Open University: SM358 The Quantum World

This third level OU physics course comes in three books.

Book 1: wave mechanics introduces Schrödinger’s equation and takes the reader through the standard models of particles in infinite and finite square wells, simple harmonic oscillators, and free particle wave packets. The book concludes with a first look at scattering and tunnelling, along with probability currents.

Book 2: quantum mechanics and its interpretation starts with Dirac notation and the vector space model of quantum states. The next few chapters introduce the angular momentum operators and spin, followed by many-particle systems and indistinguishability, including the Pauli exclusion principle. The final part of the book moves into the modern areas of quantum entanglement and the EPR ‘paradox’, and briefly introduces quantum cryptography, quantum teleportation and a very brief mention of quantum computing.

Book 3: quantum mechanics of matter opens with a thorough analysis of the hydrogen atom. We start with spherical harmonics, then look at the radial equation (for the radial part of the wave function in spherical coordinates). This allows us to account for the spectroscopic data for hydrogen in a first approximation. In chapter 3 we detour to study perturbation methods for solving more complex versions of the Schrödinger equation by approximation and then apply these to helium as well as developing a more sophisticated analysis of hydrogen involving the fine and hyperfine structure. We now have the tools to analyse more complex atoms with many electrons – we learn about electron shells and the Periodic Table. Next come diatomic molecules and then an overview of the quantum treatment of bulk solids. Now we begin to understand the real differences between insulators, semiconductors and full conductors. In the final chapter we look at the interaction between atoms and electromagnetic radiation, treating the former quantum mechanically and the latter classically. And that’s it.

Some summary thoughts. SM358 is a very thorough, somewhat conservative and rather practical first course. It deliberately doesn’t get involved in populist worries about ‘the meaning of quantum mechanics’: the focus is very much on learning concepts and techniques. This is wholly to be applauded. The concepts are of course very alien and the course material really needs to be read at least twice. The first time to ‘load the concepts’ - hard work because of their novelty. The second time to knit them together into a holistic totality. Revision for the exam is very important for this final consolidation and sufficient time needs to be budgeted.
Overall, the course is somewhat similar to the material covered in “An Introduction to Quantum Physics” by A. P. French and E. F. Taylor. I found the extra depth in this textbook sometimes helpful in illuminating concepts. What is barely hinted at is the elevated ladder of which this course is merely the first rung. The next step would be a graduate-level 'proper' Hilbert space development of non-relativistic quantum mechanics. This would be complemented by Quantum Field Theory, which as the name suggests quantizes the classical fields and unifies quantum mechanics with special relativity to give us the Standard Model. And then there is the search for grand unification, combining the four forces of nature into one coherent framework. This takes us to quantum theories of gravity, most notably String Theory. To climb this ladder would probably take an ambitious young physicist most of their twenties. Country Scenes The countryside around Penton Mewsey, bathed in afternoon Autumnal sunlight. Sheep opposite The White Hart, Penton Mewsey The Cricket Pavilion, Penton Mewsey Clare and Nigel in The White Hart A view towards Foxcotte The author in the pub Wednesday, October 07, 2009 Winchester University I'm typing this from the student coffee lounge at Winchester University - opposite the prison and next to the hospital - there's food for thought! As I write, Clare is attending her first OU AA100 tutorial in the main building somewhere, and I'm exploring remote Internet access. Meanwhile the rain is pouring down on the other side of the picture window, as it has been doing all day. We got soaked on our arrival after parking: we didn't know where any of the rooms were and wandered erratically for hundreds of yards between buildings in the downpour. Returning damp and dripping to the coffee lounge, I initially tried to log onto an unsecured WiFi network here, but it asked for a user-id and password: my skills and inclination don't run to the couple of hours I have here trying to crack it. Instead I pulled out my Vodafone dongle and was rapidly connected to the local 3G network at around 280 kbps. For network traffic like this, it's indistinguishable from the 5 Mbps I'm getting in my home office. Update: home now. Great relief I wasn't clamped for parking in the reserved staff area! Tuesday, October 06, 2009 Pandorum - 108 lost minutes So there's 108 minutes of my life I won't get back. Those minutes were spent in the vast interstices of the interstellar spacecraft Elysium, carrying thousands of deep-sleep colonists to a new planet. As is traditional in these ponderous, derivative, dystopian SF movies, the ship resembles nothing more than a spaceborn version of a vast council high-rise on a gang-ridden run-down estate. All is gloom, narrow garbage-strewn tunnels and intermittent power cuts. Around every corner and behind every dripping grating lurk crazed mutant zombies who need their fix of human flesh. There are lots of fights. After 50 minutes of a mindless quest to 'find and reboot the reactor' interspersed with loud-but-non-frightening mutant-action scenes I whispered to Clare "Let's get out of this stupidity." To my amazement she was dismissive, transfixed - "No, I want to see what happens!" Yeah, right. We all escaped the spacecraft at minute 107. We duly went off to the next-door Asda store to buy some cat-food. I had no such culinary expectations for the 1,213 lifepods deposited with no supplies on an utterly bare target planet, the last surviving humans in the universe. 
Clare's verdict: "I wouldn't recommend it but ... it was OK and I've seen worse."

Monday, October 05, 2009
No common sense whatsoever

An embarrassing read for me as it has always been claimed that I lack common sense.

Olbers' Paradox

I was walking in the woods with Clare yesterday afternoon (pictured) when apropos nothing at all I mentioned the marvel of the night sky. If you look up on a clear night you can see the stars of course, but most of the sky is black. From this a profound conclusion emerges.

"If the universe," I said, "was infinite in space and time, then in every direction you looked in the sky, your line of sight would intersect with the surface of a star. So the night sky would be uniformly bright, not dark at all. This is called Olbers' Paradox."

She thought for a moment. "Why exactly? The stars that are really far far away are too faint to see. The darkness of the night sky actually proves nothing."

Good point. There are lots of stars out there fainter than magnitude 6 which we don't see with the naked eye at all. Collapse of my argument, and I resolved to check when I got home.

According to the Wikipedia article, the explanation is that in a uniform infinite universe, as you increase the distance from your eye by a given amount, the luminance from each star does indeed go down, but the number of stars goes correspondingly up. It sounds convincing in a hand-wavy way, but can we make this more precise?

Suppose we take a particular line of sight - a one dimensional line from your eye to a spot in the infinite sky - and assume the stars are equally spaced along this line. So the first star is at a distance of one "unit" (which might be a thousand light years), the next strung along this eye-line is at a distance of two thousand light years, the next three thousand ... and so on. Suppose each star sends one unit of brightness to your eyes. Then the inverse-square-law total brightness from this infinite string of stars will be:

Total brightness = 1 + (1/4) + (1/9) + ... + (1/n²) + ...

OK. So what's the sum of this infinite series? I sat down with a pen and paper and tried to work it out: it's surprisingly hard. I did some approximations and guessed it was just over one and a half. Wikipedia tells us that this was a legendary problem in early-modern times - the Basel problem - solved by the famous Euler in 1735. The answer is π²/6 ≈ 1.645.

But this doesn't really solve the problem. If the star was pretty faint in the first place, then all of its further-away clones only make it about 65% brighter. You still wouldn't see it. What we're not capturing is the increase in the number of stars in a given solid angle as we project the eyeline farther and farther away.

For a given patch of sky, the area at a distance r from your eye is proportional to r² - think of the area of a sphere, 4πr². So at a distance r we have to consider not one star, but r² stars. The true brightness you would see is:

Total brightness = 1 + 4(1/4) + 9(1/9) + ... + n²(1/n²) + ... = 1 + 1 + 1 + ...

So the correct statement of Olbers' paradox in an infinitely-old, infinitely-sized universe is that the night sky is infinitely bright. I think we would have noticed.

Darkness tells us the universe had a definite beginning some while back, or that it's got a finite size, or both. Quite a big conclusion from darkness at night.

Saturday, October 03, 2009
The Vyne, north of Basingstoke

On an overcast, blustery Saturday we revisited The Vyne, an Elizabethan country house just north of Basingstoke.
Three woodland walks beckoned and, eschewing green and yellow, we took the brown route, at 2.3 miles the longest.

On our circular walk we met coming the other way a family comprising a large, 'pub landlord' kinda guy with a couple of equal-sized dogs, followed by his thin 'partner' followed by two small kids, the girl being called Paige. Actually we met them twice and they were friendlier the second time round. We also encountered a number of conker shells, with surprisingly sharp spikes. Our tranquillity was somewhat spoiled by helicopters hovering invisibly above the dense tree cover, which I speculated were hunting for the family already described.

The Lake
The Vyne
The Walled Garden - Clare is quite envious!

This "Bruce Willis sci-fi actioner vehicle" was not as bad as I had feared. We already have surrogates - sort of. Guys in recliners in Nevada flying Predator RPVs over Afghanistan. In this film pretty much everyone is at home in a recliner, wired-up and 'piloting' their humanoid surrogate at work and at play.

What's not to like? No more fear of mugging or accident (your surrogate may get it, but you're safe at home). Your incredibly-lifelike surrogate is more handsome/beautiful than you, stronger and younger. So folks, this is definitely going to happen, once they get the wireless broadband speeded up a bit, and figure out how to build them.

As an exercise in futurology this film was full of ideas. The military will want surrogates for close-quarters combat: we saw that. The sex industry will want them for enhanced appeal to clients and control of infection: we didn't see that!

But is there a danger we will replace the spontaneity and intimacy of human interaction with one-step-removed machine-mediated distance? Of course! How else could we get a plot? So the founder of surrogacy has had second thoughts and is trying to get them all closed down, while the surrogate-manufacturing global corporation wants to close him down for their own protection. Off we go.

Surrogacy, in this context, is such a big idea that there is potential for a number of films to explore the implications. This one was content to aim no higher than B-movie status, with Willis seeming to sleepwalk through his part. It was kinda fun and a little bit thought-provoking, but no more.

Thursday, October 01, 2009
Passing The Exam

Dr Gamal Khaled, religious instructor, sits back on his heels as he waits for his student Miss Sahar Al-Amir. He teaches in the only Western-syllabus university in the country, built with funding from the Americans, bloated with eagerness to share that great gift to the world, their American culture.

Gamal is pious but not stupid. Unlike the fundamentalist teachers of the Madrassas which dot the capital, he knows that it’s not enough to learn the sacred texts by heart and eke out doctrine-ridden lives in fly-blown poverty. That way lies the final erasure of the faith under the grinding wheels of Western modernity. You have to sup with the devil, study his arts and his sciences without succumbing to that empty life of atomised secular materialism presented as the unavoidable correlate. Hence his appointment to this grand office with its modern computing and communications infrastructure and its large prayer-carpet at the edge of which he kneels.

His course is not long or even particularly difficult. It is merely mandatory in the final year and Miss Al-Amir seems destined to fail. The students here have mostly come from traditional families.
For once, a degree of selection by merit rather than wealth or influence has been enforced. They are uniformly unprepared for the culture shock – exposed to a cacophony of new ideas, forced to think for themselves. It’s not surprising that so many abandon their past certainties, their faith. It’s not too amazing that they think him a hopeless anachronism, an outdated authority figure with an obsolete ideology. Most of the students pay lip service to his teaching, passing his course through hypocrisy. Some of the more intelligent and principled fight or ignore him, and they of course will fail. This is normal in his country.

Sahar is one of these. She’s taken with western ways, makes no secret of her disdain for the old superstitions. Maths and computer science centre her new life. Her first two essays have been disasters: alternately ignorant and satirical. She is here to discuss the final assignment which will most likely be no different and will cause her to fail her degree. Gamal admires her determination not to play a game she despises while despairing of her parochial lack of insight and mourning the likely consequences for her future (or lack of one).

Sahar Al-Amir knocks at the door and then pushes it open a fraction. Dr Khaled motions her into the room, beckoning her to kneel at the far side of the prayer-carpet. He can see at once that she is nervous as she settles herself down. Her black abaya is sloppy and loose while the material covering her hair is awry. This is not a good start. He speaks to her sharply – “Put your knees together!” - when she looks at him intensely and pulls off her badly-arranged headscarf. Blond ringlets tumble down her shoulders while her gaze never falters.

Gamal recognises at once he’s in a situation: this is not in any kind of script. He suppresses a reprimand and waits, suspicions confirmed as Sahar slowly pushes her robe off her shoulders to reveal her breasts, small and stand-out. She moves her hands down to her waist, pulling her robe apart and draping it on the carpet behind her. And now she strikes a pose, pulling her head back, pushing out her chest and spreading her knees farther apart. Her serious gaze never leaves Gamal’s face and her mouth opens slightly as she nervously flicks her tongue over her lips.

Dr Gamal Khaled has never been in this situation before but he’s heard plenty. Miss Al-Amir is indeed a vision of nakedly available loveliness, an erotic sculpture posed before him, and part of him is signalling a very male response. But Khaled is an intellectual and at times like this it’s the rational, calculating part of him which takes control. Sahar has clearly reached the end of whatever makeshift plan she had for this afternoon and to prolong the silence further will surely lead to deeper humiliation. He is a teacher: now he must teach.

“Well, Miss Al-Amir,” he says, “I take it that this display is not unconnected with your final assignment? I am sure it has nothing to do with any charm I may or may not possess. Now, let us leave to God the appreciation of what He has created and cover yourself so that we may begin the tutorial.”

These kind words break the dam, and Miss Al-Amir bursts into tears. Gamal continues to kneel patiently, relaxed but unmoving until the snuffles cease. The contract they then negotiate puts the recently concluded display behind them, agrees that Sahar will participate in further coaching and that she deigns to take seriously the concept of a spiritual dimension to life – as a working hypothesis.
Perhaps she will pass after all. After Sahar has departed, Gamal Khaled reflects. This was not a serious attempt to seduce him, the stuff of academic folklore everywhere. Sahar has no skills whatever in that department. No, it was an act of desperation, a crisis, and sometimes only a crisis will break down the walls of the mind and let us make progress. And now, he thinks wearily, he will have to make his report for the secret police. -- an excerpt from a story I'm writing.
You are currently browsing the tag archive for the ‘Hahn-Banach theorem’ tag. This is a technical post inspired by separate conversations with Jim Colliander and with Soonsik Kwon on the relationship between two techniques used to control non-radiating solutions to dispersive nonlinear equations, namely the “double Duhamel trick” and the “in/out decomposition”. See for instance these lecture notes of Killip and Visan for a survey of these two techniques and other related methods in the subject. (I should caution that this post is likely to be unintelligible to anyone not already working in this area.) For sake of discussion we shall focus on solutions to a nonlinear Schrödinger equation \displaystyle iu_t + \Delta u = F(u) and we will not concern ourselves with the specific regularity of the solution {u}, or the specific properties of the nonlinearity {F} here. We will also not address the issue of how to justify the formal computations being performed here. Solutions to this equation enjoy the forward Duhamel formula \displaystyle u(t) = e^{i(t-t_0)\Delta} u(t_0) - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' for times {t} to the future of {t_0} in the lifespan of the solution, as well as the backward Duhamel formula \displaystyle u(t) = e^{i(t-t_1)\Delta} u(t_1) + i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' for all times {t} to the past of {t_1} in the lifespan of the solution. The first formula asserts that the solution at a given time is determined by the initial state and by the immediate past, while the second formula is the time reversal of the first, asserting that the solution at a given time is determined by the final state and the immediate future. These basic causal formulae are the foundation of the local theory of these equations, and in particular play an instrumental role in establishing local well-posedness for these equations. In this local theory, the main philosophy is to treat the homogeneous (or linear) term {e^{i(t-t_0)\Delta} u(t_0)} or {e^{i(t-t_1)\Delta} u(t_1)} as the main term, and the inhomogeneous (or nonlinear, or forcing) integral term as an error term. The situation is reversed when one turns to the global theory, and looks at the asymptotic behaviour of a solution as one approaches a limiting time {T} (which can be infinite if one has global existence, or finite if one has finite time blowup). After a suitable rescaling, the linear portion of the solution often disappears from view, leaving one with an asymptotic blowup profile solution which is non-radiating in the sense that the linear components of the Duhamel formulae vanish, thus \displaystyle u(t) = - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (1) \displaystyle u(t) = i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (2) where {t_0, t_1} are the endpoint times of existence. (This type of situation comes up for instance in the Kenig-Merle approach to critical regularity problems, by reducing to a minimal blowup solution which is almost periodic modulo symmetries, and hence non-radiating.) These types of non-radiating solutions are propelled solely by their own nonlinear self-interactions from the immediate past or immediate future; they are generalisations of “nonlinear bound states” such as solitons. 
A key task is then to somehow combine the forward representation (1) and the backward representation (2) to obtain new information on {u(t)} itself, that cannot be obtained from either representation alone; it seems that the immediate past and immediate future can collectively exert more control on the present than they each do separately. This type of problem can be abstracted as follows. Let {\|u(t)\|_{Y_+}} be the infimal value of {\|F_+\|_N} over all forward representations of {u(t)} of the form

\displaystyle u(t) = \int_{t_0}^t e^{i(t-t')\Delta} F_+(t') \ dt' \ \ \ \ \ (3)

where {N} is some suitable spacetime norm (e.g. a Strichartz-type norm), and similarly let {\|u(t)\|_{Y_-}} be the infimal value of {\|F_-\|_N} over all backward representations of {u(t)} of the form

\displaystyle u(t) = \int_{t}^{t_1} e^{i(t-t')\Delta} F_-(t') \ dt'. \ \ \ \ \ (4)

Typically, one already has (or is willing to assume as a bootstrap hypothesis) control on {F(u)} in the norm {N}, which gives control of {u(t)} in the norms {Y_+, Y_-}. The task is then to use the control of both the {Y_+} and {Y_-} norm of {u(t)} to gain control of {u(t)} in a more conventional Hilbert space norm {X}, which is typically a Sobolev space such as {H^s} or {L^2}.

One can use some classical functional analysis to clarify this situation. By the closed graph theorem, the above task is (morally, at least) equivalent to establishing an a priori bound of the form

\displaystyle \| u \|_X \lesssim \|u\|_{Y_+} + \|u\|_{Y_-} \ \ \ \ \ (5)

for all reasonable {u} (e.g. test functions). The double Duhamel trick accomplishes this by establishing the stronger estimate

\displaystyle |\langle u, v \rangle_X| \lesssim \|u\|_{Y_+} \|v\|_{Y_-} \ \ \ \ \ (6)

for all reasonable {u, v}; note that setting {u=v} and applying the arithmetic-geometric inequality then gives (5). The point is that if {u} has a forward representation (3) and {v} has a backward representation (4), then the inner product {\langle u, v \rangle_X} can (formally, at least) be expanded as a double integral

\displaystyle \int_{t_0}^t \int_{t}^{t_1} \langle e^{i(t-t')\Delta} F_+(t'), e^{i(t-t'')\Delta} F_-(t'') \rangle_X\ dt'' dt'.

The dispersive nature of the linear Schrödinger equation often causes {\langle e^{i(t-t')\Delta} F_+(t'), e^{i(t-t'')\Delta} F_-(t'') \rangle_X} to decay, especially in high dimensions. In high enough dimension (typically one needs five or higher dimensions, unless one already has some spacetime control on the solution), the decay is stronger than {1/|t'-t''|^2}, so that the integrand becomes absolutely integrable and one recovers (6).

Unfortunately it appears that estimates of the form (6) fail in low dimensions (for the type of norms {N} that actually show up in applications); there is just too much interaction between past and future to hope for any reasonable control of this inner product. But one can try to obtain (5) by other means. By the Hahn-Banach theorem (and ignoring various issues related to reflexivity), (5) is equivalent to the assertion that every {u \in X} can be decomposed as {u = u_+ + u_-}, where {\|u_+\|_{Y_+^*} \lesssim \|u\|_X} and {\|u_-\|_{Y_-^*} \lesssim \|u\|_X}. Indeed once one has such a decomposition, one obtains (5) by computing the inner product of {u} with {u=u_++u_-} in {X} in two different ways.
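To spell out that last computation (a sketch, subject to the same reflexivity caveats): pairing {u} against its own decomposition, and using the duality of {Y_\pm} and {Y_\pm^*} with respect to the {X} inner product, gives

\displaystyle \|u\|_X^2 = \langle u, u_+ \rangle_X + \langle u, u_- \rangle_X \lesssim \|u\|_{Y_+} \|u_+\|_{Y_+^*} + \|u\|_{Y_-} \|u_-\|_{Y_-^*} \lesssim ( \|u\|_{Y_+} + \|u\|_{Y_-} ) \|u\|_X,

and dividing out one factor of {\|u\|_X} recovers (5).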
One can also (morally at least) write {\|u_+\|_{Y_+^*}} as {\| e^{i(\cdot-t)\Delta} u_+\|_{N^*([t_0,t])}} and similarly write {\|u_-\|_{Y_-^*}} as {\| e^{i(\cdot-t)\Delta} u_-\|_{N^*([t,t_1])}}. So one can dualise the task of proving (5) as that of obtaining a decomposition of an arbitrary initial state {u} into two components {u_+} and {u_-}, where the former disperses into the past and the latter disperses into the future under the linear evolution. We do not know how to achieve this type of task efficiently in general – and doing so would likely lead to a significant advance in the subject (perhaps one of the main areas in this topic where serious harmonic analysis is likely to play a major role). But in the model case of spherically symmetric data {u}, one can perform such a decomposition quite easily: one uses microlocal projections to set {u_+} to be the “inward” pointing component of {u}, which propagates towards the origin in the future and away from the origin in the past, and {u_-} to similarly be the “outward” component of {u}. As spherical symmetry significantly dilutes the amplitude of the solution (and hence the strength of the nonlinearity) away from the origin, this decomposition tends to work quite well for applications, and is one of the main reasons (though not the only one) why we have a global theory for low-dimensional nonlinear Schrödinger equations in the radial case, but not in general.

The in/out decomposition is a linear one, but the Hahn-Banach argument gives no reason why the decomposition needs to be linear. (Note that other well-known decompositions in analysis, such as the Fefferman-Stein decomposition of BMO, are necessarily nonlinear, a fact which is ultimately equivalent to the non-complemented nature of a certain subspace of a Banach space; see these lecture notes of mine and this old blog post for some discussion.) So one could imagine a sophisticated nonlinear decomposition as a general substitute for the in/out decomposition. See for instance this paper of Bourgain and Brezis for some of the subtleties of decomposition even in very classical function spaces such as {H^{1/2}(R)}.

Alternatively, there may well be a third way to obtain estimates of the form (5) that do not require either decomposition or the double Duhamel trick; such a method may well clarify the relative relationship between past, present, and future for critical nonlinear dispersive equations, which seems to be a key aspect of the theory that is still only partially understood. (In particular, it seems that one needs a fairly strong decoupling of the present from both the past and the future to get the sort of elliptic-like regularity results that allow us to make further progress with such equations.)

When studying a mathematical space X (e.g. a vector space, a topological space, a manifold, a group, an algebraic variety etc.), there are two fundamentally basic ways to try to understand the space:

1. By looking at subobjects in X, or more generally maps f: Y \to X from some other space Y into X.  For instance, a point in a space X can be viewed as a map from pt to X; a curve in a space X could be thought of as a map from [0,1] to X; a group G can be studied via its subgroups K, and so forth.

2. By looking at objects on X, or more precisely maps f: X \to Y from X into some other space Y.
For instance, one can study a topological space X via the real- or complex-valued continuous functions f \in C(X) on X; one can study a group G via its quotient groups \pi: G \to G/H; one can study an algebraic variety V by studying the polynomials on V (and in particular, the ideal of polynomials that vanish identically on V); and so forth. (There are also more sophisticated ways to study an object via its maps, e.g. by studying extensions, joinings, splittings, universal lifts, etc. The general study of objects via the maps between them is formalised abstractly in modern mathematics as category theory, and is also closely related to homological algebra.)

A remarkable phenomenon in many areas of mathematics is that of (contravariant) duality: that the maps into and out of one type of mathematical object X can be naturally associated to the maps out of and into a dual object X^* (note the reversal of arrows here!). In some cases, the dual object X^* looks quite different from the original object X. (For instance, in Stone duality, discussed in Notes 4, X would be a Boolean algebra (or some other partially ordered set) and X^* would be a compact totally disconnected Hausdorff space (or some other topological space).) In other cases, most notably with Hilbert spaces as discussed in Notes 5, the dual object X^* is essentially identical to X itself.

In these notes we discuss a third important case of duality, namely duality of normed vector spaces, which is of an intermediate nature to the previous two examples: the dual X^* of a normed vector space turns out to be another normed vector space, but generally one which is not equivalent to X itself (except in the important special case when X is a Hilbert space, as mentioned above). On the other hand, the double dual (X^*)^* turns out to be closely related to X, and in several (but not all) important cases, is essentially identical to X. One of the most important uses of dual spaces in functional analysis is that it allows one to define the transpose T^*: Y^* \to X^* of a continuous linear operator T: X \to Y.

A fundamental tool in understanding duality of normed vector spaces will be the Hahn-Banach theorem, which is an indispensable tool for exploring the dual of a vector space. (Indeed, without this theorem, it is not clear at all that the dual of a non-trivial normed vector space is non-trivial!) Thus, we shall study this theorem in detail in these notes concurrently with our discussion of duality.

In the previous post, I discussed how an induction-on-dimension approach could establish Hilbert's nullstellensatz, which we interpreted as a result describing all the obstructions to solving a system of polynomial equations and inequations over an algebraically closed field. Today, I want to point out that exactly the same approach also gives the Hahn-Banach theorem (at least in finite dimensions), which we interpret as a result describing all the obstructions to solving a system of linear inequalities over the reals (or in other words, a linear programming problem); this formulation of the Hahn-Banach theorem is sometimes known as Farkas' lemma. Then I would like to discuss some standard applications of the Hahn-Banach theorem, such as the separation theorem of Dieudonné, the minimax theorem of von Neumann, Menger's theorem, and Helly's theorem (which was mentioned recently in an earlier post).
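For reference, here is one standard finite-dimensional statement of Farkas' lemma (stated with notation of my choosing; the post itself develops it from scratch). Given a matrix {A \in {\bf R}^{m \times n}} and a vector {b \in {\bf R}^m}, exactly one of the following holds:

\displaystyle \exists x \in {\bf R}^n \hbox{ with } x \geq 0 \hbox{ and } Ax = b, \quad \hbox{or} \quad \exists y \in {\bf R}^m \hbox{ with } A^T y \geq 0 \hbox{ and } b \cdot y < 0.

The second alternative is the explicit “obstruction” to solvability: if both held at once, one would have {0 \leq (A^T y) \cdot x = y \cdot (Ax) = y \cdot b < 0}, a contradiction, and the nontrivial content of the lemma is that such a certificate {y} always exists when the system is unsolvable.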
Q: Which is a better approach to quantum mechanics: Copenhagen or Many Worlds?

Physicist: The Copenhagen interpretation requires that new laws be created that, in addition to being impossible, are completely unnecessary, physically unfeasible, and utterly unjustifiable. The basic laws of quantum mechanics, when applied at all scales, give you the Many Worlds interpretation. No fancy rules, no awkward questions. Even better, what seems to be wave function collapse in Copenhagen is actually described by the Many Worlds approach. So why choose the Copenhagen interpretation over Many Worlds? No damn reason at all.

Many Worlds vs. Copenhagen. On the one hand there are different versions of you, and on the other you’ve got vaguely defined mental powers.

If you’re already familiar with the basic ideas behind the measurement problem, and the divide between Copenhagen and MW, feel free to skip down to the word “catawampus”.

Bell’s theorem demonstrated that either information travels faster than light (Bohm) or particles exist in many states simultaneously (everybody else). There is a lot wrong with Bohm’s theory, not the least of which is that it doesn’t have a relativistic formulation (which may not sound too bad, but that’s really damning in physics circles).

The best example is the double slit experiment: shine coherent light on two slits and look at the pattern that’s projected onto a screen. You get a nice interference pattern of dark and light bands on the screen, which is fine since light is a wave (don’t stress about wave/particleness here). What’s messed up is that even when the light intensity is turned way down to just one photon at a time, the photon still tends to land where the bright bands were, and almost never lands where the dark bands were.

You have to repeat the single-photon double-slit experiment a lot to see the pattern, but there it is. In order to get these interference fringes from two slits the photon must interfere with itself. It’s going through both.

Alright, so everybody has come to accept that very small things have no problem being in several states simultaneously (if you don’t buy it, then just take it as read). But here’s the weirdness. It turns out that particles only exhibit the superposition properties when “you” can’t tell the difference between the various states. As soon as “you” can single one out, then suddenly the superposition-ness goes away. I’m putting quotes (“”) around “you”, because “you” could be a person, or a computer, or even another particle. Just a weakness of the language.

This is the measurement problem. Things can be in several states at the same time, but only if nothing observes* them. The moment they’re observed* they are found to be in only one state, and they continue on their merry way in only that one state. The most straightforward (brute force) example is covering one slit and noting that the interference bands disappear. By covering one slit you’re “observing*” that the photon is going through the other. Most examples are more complex and subtle.

(* this is another weakness of the language. “Observe” would seem to imply a hierarchy; “this observes that”. A better term might be “interact with”.)

Here’s where the split comes up. Copenhagenists will say that something, usually size, or complexity, or random collapse, or, worst of all, consciousness, causes “wave function collapse” (all but one state disappears).
Many Worlds adherents will say that wave function collapse never happens, and that the trick is realizing that both the particles involved and the observer* are in multiple states. After the observation* the multiple states of the particle are now entangled with the multiple states of the observer*. In the double-slit test you’d have the two states: “photon goes right and observer sees it go right” and “photon goes left and observer sees it go left”. Both states persist, and there’s no collapse.

Catawampus: You’ll often hear that there’s no experiment that can be done to prove which approach is the correct one. I’m of the opinion that the experiments have already been done, but that most people (myself included) don’t like the results. However, among people who have stopped to consider the options (and there aren’t many good reasons to do so), most of us have decided to accept the results and move on.

The big advantage behind the Copenhagen interpretation is that it makes people (like you!) important, and different from particles. <sarcasm>Sure, they may be in multiple states, but I’m definitely in exactly one state. Unlike particles, people can tell the difference.</sarcasm> It’s creepy to think that there are different versions of yourself “out there” doing “stuff”, but it’s awesome to assume that you’re special and that your mind (not brain) has some kind of power over reality.

There is a version of the Copenhagen interpretation of quantum mechanics where consciousness (usually human, sometimes God, sometimes Gaea, …) plays a key role. I feel pretty comfortable dismissing it out of hand. There’s a post here that talks about it a little.

Time and again, we’ve managed to show that larger and larger objects can be in multiple states, using the double slit experiment or variations of it. At last check, the double slit experiment was successfully performed on C60F48, which has fully 108 atoms, or 2,424 protons, neutrons, and electrons. The entire molecule (actually, thousands of them, one at a time) interfered with itself, demonstrating the ability to be in multiple states.

Which raises the question: what’s the damn problem? Everything that can be tested has demonstrated quantum superposition, so why not just extend that to “everything obeys the same quantum mechanical laws, including superposition”? Why not indeed?

One may be tempted to say “the physics at small scales is just different!”. Fair enough. However, there are no physical laws that work differently on different scales. For example, at very small scales water acts like honey, and to swim you need to use things like flagella. At the other end of the scale (our scale) water behaves… like water, and things like fins and flippers suddenly work really well, but flagella don’t. However, the same physical laws (specifically, the Navier-Stokes equation) govern everything.

More generally, all laws apply at all scales, it’s just a question of degree. Relativity works at all velocities, but you don’t notice the weird effects until you’re moving really fast. What we call “Newton’s laws” are just approximations that work at low speeds. If the Copenhagen “size argument” (that larger objects somehow obey different laws) holds up, it’ll be the first of its kind.

So how can the Many Worlds hypothesis hold up? Either we’re in multiple states or we’re not, and (most) people don’t feel like they’re in more than one state.
In very much the same sense that relativity includes Newtonian dynamics as a special case, the Many Worlds hypothesis actually contains Copenhagenism. Normally in quantum mechanics, when you’re trying to predict the behavior of a system, you just let all the components evolve in time according to the Schrödinger equation. If you follow a particular state or object, then you’ll find that it experiences wave function collapse all over the place. That is to say: If you require (artificially) that some particular thing must remain in one state (or small set of states) by disregarding its other states, then that object will seem to see wave function collapse.

I should repeat that, because it bears repeating: According to the Many Worlds approach, the individual states of an object witness wave function collapse all the time, but taken as a whole, there is no collapse.

Here’s a simplified example with very little physical basis, but that hopefully gets the point across. In what follows the \square are collision points. Either the particles pass each other, or bounce off each other and change direction. They can’t both go down the same path. The \boxminus are splitters. When a particle hits one of these it has a 50% chance of passing, and a 50% chance of reflecting.

To calculate the probabilities of how the two quantum particles will come out of the machine you have to sum over every possible path. You have to take into account, not only every “choice”, but every interaction.

This map shows how two particles move through some machine. Either they bounce off each other at the first \square and you have blue-up/red-down, or they pass each other and you have blue-down/red-up. These two states then go on to the splitters and so on. There is no collapse, and every possibility (every path, every interaction) is included (like the double slit).

But what if you were to track the blue particle? Put your foot down and insist that it remain in just one state and take just one path through the machine? Even more profound, what is the effect on the red particle (from the new one-path point of view of the blue particle)? Easy enough: pick one of the blue paths (worlds) above.

1) The particles start. 2) In this world they pass each other. They didn’t have to, they just happened to. 3) Since we’re insisting the blue particle stays on one path, it must pick a path: reflect. The red particle, however, is free to take multiple paths, and does. 4) In this world the blue particle bounces off the red particle at the second box. But this interaction means that the blue particle now knows where the red particle is. Collapse!

This story is just one of the stories encompassed in the Many Worlds picture. Here’s a slightly different one. What would happen if the blue particle went through the splitter instead of reflecting?

1&2) Same as last time. 3) Since we insisted that the blue particle must be in one state, it must either reflect or pass. In this case it passes. The red takes both available paths. 4) The blue particle can’t interact with the red, so this time the red particle is free to take even more paths simultaneously (as far as this version of the blue particle is concerned).

So the big point is that the Copenhagen wave function collapse is strictly an illusion created by restricting your attention to a particular state of one object. So why is our attention stuck on just one state? We’re not special, just victims of conservation laws.
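Here is a little numerical toy, my own addition rather than anything from the original post, that shows the “sum over every possible path” rule in the simplest case, the double slit: add the complex amplitudes of the two paths to each point on the screen, square, and the fringes pile up one detected photon at a time. The wavelength, slit separation, and screen distance are made-up illustrative values.

```python
import numpy as np

# Toy double slit: slits at +/- d/2, screen a distance L away.
# Each photon's landing distribution comes from summing the complex
# amplitudes of the two paths (one per slit) and squaring the total.
wavelength = 500e-9          # 500 nm light (illustrative value)
k = 2 * np.pi / wavelength   # wavenumber
d = 50e-6                    # slit separation (illustrative value)
L = 1.0                      # slit-to-screen distance in meters

x = np.linspace(-0.05, 0.05, 2000)   # positions along the screen
r1 = np.hypot(L, x - d / 2)          # path length through slit 1
r2 = np.hypot(L, x + d / 2)          # path length through slit 2
amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # sum over both paths
p = np.abs(amp) ** 2                 # probability ~ |amplitude|^2
p /= p.sum()                         # normalize to a distribution

# Detect photons one at a time; the fringes emerge only statistically.
rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=p)
counts, edges = np.histogram(hits, bins=60)
for c, e in zip(counts, edges[:-1]):
    print(f"{e:+.3f} m {'#' * (c // 10)}")
```

Covering one slit amounts to deleting one of the two amplitude terms from the sum; the fringes then disappear, which is the “observing*” business from earlier, in code form.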
Looking down on the double slit experiment from outside you can ask questions like “what is the probability that the photon will go through each slot?”. You have no “givens” to affect your probabilities so you say “50/50”, and you’re right. The photon goes through both, but since there’s only one photon (conserved number of photons), it does it in a particular (somewhat obvious) way: it combines the states “left/not right” and “not left/right”.

Now say you’re presented with two doors. You also can’t be in the state “right and left”. Now when you go through one door, because you’re interacting with yourself, you have givens that affect the probabilities. Ask yourself, “What is the probability that I went through the left door, given that I just went through the right door?” Zero, baby! The version of you that went through the left door will be able to make a very similar calculation.

Finally (without going into too much detail), the Copenhagen interpretation also violates a number of very straightforward physical laws. It violates conservation of information (supported by everything else, including logic) and time reversibility (again, everything else), and it requires information to flow backward through time (spacelike information exchange, or “spooky action at a distance”); these are only the most direct and grievous examples.

53 Responses to Q: Which is a better approach to quantum mechanics: Copenhagen or Many Worlds?

1. Alex says:

Physics has a pretty good explanation for why we experience the passage of time, actually. It’s called entropy. Time seems to move for us because we can remember the past but not the future. But the only difference between the past and the future is that entropy is smaller in the past and larger in the future. It’s easy to look into the past because the number of possible pasts is small – as entropy decreases, so does the number of states. We can make conclusions about the possible pasts that might have led to the “present”, but this is much harder to do for the future.

As for why we only observe one “present” at a time, this is really more of a consequence of word choice. If we take a human mind from beginning to end, as its time coordinate increases, we see that it accumulates more memories. At any given time coordinate, there is a most recent memory, and this is what we call the present. I like to think of it this way: We remember the past because the past is what we call events we can remember. We can’t remember the future because the future is what we call events we can’t remember.

2. Ernie says:

Thanks for your remarks. Actually I’m well familiar with the entropic argument, and I think it may have bearing on the direction of subjective time, but I don’t think it has anything to say at all about why time passes so far as conscious experience is concerned, or even what it means for it to “pass.” The coordinate time of the block-universe just sits there, and while we speak of a present “moving” along it, the concept of motion in time is incoherent. Subjective time and coordinate time may well be ontologically incomparable, just as are my mental experience of “color” and “color” as defined by differential reflectivities of physical surfaces.
A number of people working in the philosophy of time have posited the necessity for a “Second time,” perhaps similar to Newtonian time in not having extent, to account for the fact that the subjective present is experienced as undergoing change with respect to coordinate time. I don’t find the notion terribly attractive (Occam’s razor and all that) but I see no better alternative. I understand your argument about why at any given point in coordinate time I seem aware of a past and a future, but that’s just because of the way my brain-states vary along the time coordinate. Nothing, including my brain-states, “changes” in the block universe except my subjective consciousness (if in fact it is in the block universe), and the notion of “change” in the time coordinate is not in the sense of time passing, but in the sense of having different properties at different points. So what accounts for the fact that time “passes” for me?

3. bob hall says:

A newbie here, so be gentle. Since a photon does not experience “time”, how could it be stated that it “violates a number of very straightforward physical laws” such as flowing backward through “time”?
Of Two Time Indices

Quantum mechanics provides statistical predictions for the results of measurements performed on physical systems that have been prepared in specified ways … I hope that everyone agrees at least with that statement. The only question here is whether there is more than that to say about quantum mechanics.

Asher Peres

In this note, I shall take a rather strictly ascetic view of quantum physics. I’ll make the lifestyle choice that “quantum states” are encodings of probability assignments for the possible outcomes of as-yet unperformed experiments. Nothing more, but certainly nothing less. The image which floats hazily to mind is of the great unanalyzed diversity of the world teeming on, and when pieces of it come together, novelty happens: there takes place an act of creative generation which belongs to neither participant alone. Quantum theory is, following this lifestyle, a means of coping with and possibly even living well within a world having such a character. It studies the special case of novelty-generating interactions in which one participant is a scientific agent, a complex system capable of sustaining beliefs and entertaining them with varied degrees of fervency.

This leads me to another touchy question of lifestyle choice: personalist probability theory. The lifestyle starts with the idea of an agent which can believe things, and to these beliefs—which are arrayed in well-defined sets—we say the agent can attach a quantitative expression of its fervency in that direction. We impose the normative rule that the agent’s measures of credence be consistent with one another, and we make the dull matter of consistency more entertaining with stories about Dutch bookies or Ferengi bartenders. We say, and I think this is essentially a historical convention, that higher numbers should mean a stronger belief, even though we could just as well say that an agent writing a larger number means that the agent will be more surprised if that event turns out to happen. (Thanks to Claude Shannon, we know that the one convention is just the logarithm of the other.) Even the use of real numbers for degrees of fervency is, to my eye, a convention: if somebody wants to record credences using Conway’s surreal numbers, or with elements from some Lie group, all I can say is “peace be with you in your quest”. The normative standard of consistency will still yield constraints among credence assignments, the difference being that those credences won’t live in the same set as relative frequencies do. And, of course, there’s no guarantee that the exercise will lead to any useful novel structures.

The theory of subjective event weights built up in this way just is, in the same way that Euclidean geometry just is. It provides an imitation of Platonism in the way that all abstract constructions built up from axioms do. But we want to deal with the natural world our species evolved within! Ferengi-book coherence tells us that subjective event weights obey the axioms of “probability”. The natural question is, in those places where we scientists make use of “probability” talk, can we handle those tasks with SEWs? The claim of the Bayesian-statistics practitioner is that the answer is “yes”—and in those cases where it isn’t, the invocation of “probability” is what’s illegitimate, rather than the theory of SEWs. In many circumstances of interest, quantum theory can be re-expressed solely in terms of SEWs.
Though quantum theory is often (and validly) thought of as a generalization of ordinary probability theory to encompass a wider bestiary of mathematical structures, it can also be treated as a specialization of probability theory. In quantum physics, we take what we think we know about a system, roll it into a density operator $\rho$, and use that density operator to make statistical predictions about what the system might do in particular experiments. But presenting that information as a matrix operator is not always the most illuminating choice. We can actually rewrite any finite-dimensional density matrix as a probability distribution, using the idea of informationally complete measurements. These are generalized measurement procedures (positive operator valued measures, or POVMs) which have an appealing ability: given a probability distribution over the possible outcomes of an informationally complete POVM, we can compute all the statistics which we could have gotten using the density matrix. Such POVMs can be constructed in any finite-dimensional Hilbert space. The nicest variety are the symmetric informationally complete POVMs, known familiarly as SICs, which are known to exist for many values of Hilbert-space dimension and are suspected to exist for the others. With these tools in hand, quantum theory becomes probability plus extra conditions: the bare bones of “SEW theory” are dressed with sinews whose anatomy depends on our doing quantum physics instead of some other theory.

With all that as prologue, then: I have occasionally bumped into people who seemingly want to interpret all scientific discovery as Bayesian conditioning. I think it was Howard Barnum who said something about seeing science in “broadly Bayesian” terms, but judging from the “cocktail talk” and how people act in some corners of the Internet, not everyone would grant that “broadly”. New experiences always being weighed against our preconceived mesh of ideas? Yes. Holding to different ideas with varying degrees of tenacity? Yes. The clockwork ratcheting up and down of numerical fervencies defined over a “distinct number of consequences”? I doubt it. Even if such a story could be cooked up, involving a stupendously baroque and artificial sample space, what use would it have?

(A better term than “broadly Bayesian” might be “Darwinian”, or better yet just “evolutionary”. It’s been observed a few times that Bayesian updating is formally analogous to a formula in evolutionary theory called the discrete-time replicator equation. Prior probabilities map onto abundances of alleles in a gene pool, and the weight of new evidence maps onto biological fitness. The probability distribution after updating corresponds to the new gene pool composition after natural selection has operated. (Marc Harper describes it more fully in arXiv:0911.1763.) But the scenario modeled by the discrete-time replicator equation is just a tiny part of evolutionary phenomena. It’s even a tiny part of the mathematics developed to date for dealing with biological evolution. Other evolutionary processes could define other modes of inference whose mapping to Bayesian updating is contrived and awkward at best. For a relatively mundane example, I think Jeffrey conditioning is analogous to a discrete-time replicator equation with a mutation effect added.)
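To make the earlier point concrete, that a SIC probability vector carries the same information as the density matrix, here is a small sketch of my own (the qubit SIC built from a tetrahedron of Bloch vectors is the standard example, but the code, the variable names, and the sample state are mine):

```python
import numpy as np

# Pauli matrices, and the d=2 SIC: four Bloch vectors at tetrahedron vertices.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
proj = [(I2 + b[0] * sx + b[1] * sy + b[2] * sz) / 2 for b in bloch]
povm = [P / 2 for P in proj]   # E_i = Pi_i / d; the four E_i sum to the identity

# An arbitrary qubit state: partly mixed, numbers chosen for illustration.
rho = 0.7 * np.array([[1, 0], [0, 0]], dtype=complex) + 0.3 * I2 / 2

# The SIC "probability vector" for rho.
p = np.real([np.trace(rho @ E) for E in povm])
print("SIC probabilities:", np.round(p, 4), "sum =", round(p.sum(), 6))

# Reconstruction: rho = sum_i [ (d+1) p_i - 1/d ] Pi_i, with d = 2.
rho_back = sum((3 * pi - 0.5) * P for pi, P in zip(p, proj))
assert np.allclose(rho, rho_back)   # nothing was lost in translation
```

The Born-rule statistics of any other measurement can then be computed from those four numbers alone, which is the sense in which the density matrix is a probability assignment in a less illuminating costume.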
What this might mean for quantum theory is the following: were one enamored of a different physics-neutral mode of inference, the Born rule would be an empirical addition to that mathematics, phrased in its terms. (And, I’d guess, probably harder to translate into the vernacular of physics.) There is your realism for you: the extra addition to coherence due to the quantum character of the world must still be there. We’d just be writing down and trying to motivate a different equation for it.

The Schrödinger equation in QM and the Liouville equation in classical mechanics are, I think, fundamentally synchronic statements about probability assignments. If I’m willing to gamble that a pendulum has position within some range $\Delta q$ and momentum within some interval $\Delta p$, and if I accept the mechanics lessons I had as a young’un, then my hands are forced: I must price lottery tickets referring to the pendulum at other times in a particular way. Each probability assignment concerning a dynamical system carries two time indices: one for the time the agent makes it, and one for the time of the event or proposition written on the ticket. We could write the former with a subscript and the latter in parentheses; for example, $ \rho_\tau(q,p,t) $ would be a Liouville density for a one-particle system, a commitment made at time $\tau$ about the world’s affairs at time $t$. The Liouville equation connects $\rho_\tau(q,p,t_1)$ to $\rho_\tau(q,p,t_2)$. The subscripts are the same; it’s a synchronic statement. If I get new information at time $\tau_2$, then I can update the whole joint probability density for all times by conditioning on that new information. If I keep gaining new information, say at times $\tau_i$, then the Shannon entropy of my Liouville density for the present time will keep decreasing. By evolving my Liouville density at time $\tau_i$ forwards and backwards, I can refine my distributions for what the pendulum will be doing in the future and what it had been doing in the past. (A toy numerical version of this bookkeeping appears just below.)

Entropy decreasing over time? Yegads! We must be in contradiction with thermodynamics! . . . except that by that reasoning, the thermodynamic entropy of any classical system we can characterize exactly must be zero, because the Shannon index of its Liouville density is nil! Indeed, the thermodynamic entropy of any simulated system must be exactly zero, because the computer doing the simulation knows where everything is and where all the pieces are going at all times! Yeah, that’s pretty silly. What it does mean is that I can extract energy more and more efficiently, and that I can make better and better predictions of how somebody else’s energy-extraction experiment will fare. (And a typical such experiment might well involve timescales significantly longer than those of the oscillations and vibrations within my system itself, so what matters isn’t where it can be in phase space at one particular time, but rather the possible variety in its trajectories over an interval of time—what I think in Jaynesian language is called its “caliber”.)

The way statistical-physics students are taught to relate Shannon entropy with thermodynamic entropy is through the “fundamental assumption of statistical mechanics”: we’re told to assign equal a priori probabilities to all points in phase space which have the same energy. From this starting point, one makes computations until relationships which look like the phenomenological equations of thermodynamics come out.
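As promised above, here is a minimal numerical sketch of that two-index bookkeeping, a toy of my own with made-up numbers: a Gaussian Liouville density for a unit-mass, unit-frequency harmonic oscillator. The synchronic Liouville evolution (same subscript $\tau$, varying argument $t$) is a rotation of phase space and leaves the entropy of my density alone; it is the conditioning step at a new $\tau$ that lowers it.

```python
import numpy as np

# Gaussian Liouville density over (q, p): mean m, covariance S.
# For a unit-mass, unit-frequency oscillator the flow is a rotation.
def flow(m, S, t):
    R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    return R @ m, R @ S @ R.T

def entropy(S):
    # Differential entropy of a 2D Gaussian: (1/2) log((2 pi e)^2 det S)
    return 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(S))

m = np.array([1.0, 0.0])
S = np.diag([0.5, 0.5])

# Synchronic evolution: entropy is untouched (rotations preserve det S).
m1, S1 = flow(m, S, t=0.7)
print(entropy(S), "=", entropy(S1))

# New information at tau_2: observe q with Gaussian noise of variance 0.1.
# Conditioning the Gaussian (a Kalman-style update) shrinks the entropy.
H = np.array([[1.0, 0.0]])              # we observe q only
K = S1 @ H.T / (H @ S1 @ H.T + 0.1)     # gain
q_obs = 0.6                              # the reported value (illustrative)
m2 = m1 + (K * (q_obs - H @ m1)).ravel()
S2 = S1 - K @ H @ S1
print(entropy(S1), ">", entropy(S2))
```

Evolving the conditioned density forwards and backwards in $t$ then refines retrodictions and predictions alike, exactly as described above. Back, then, to the fundamental assumption and its troubles.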
But the starting point is one which should make a personalist Bayesian’s skin crawl! Who mandates a prior probability, and who died and made them king? It’s only reasonable at all because of an even-more fundamental assumption: that we obtained all our information from highly coarse-grained measurements which could only access aggregate properties like the total energy. If we could make finer measurements in the first place, then we’d have no warrant to assign equal a priori probabilities across the constant-energy surfaces, and we’d have to rethink the “Shannon to thermodynamics” connection in a different way.

It [the Second Law of Thermodynamics] is not a law that dictates how things go by themselves, but rather how they go in response to particular experimental investigations.

Campisi and Hänggi (2011)

In order to have any scientific weight, retrodictions have to yield predictions. Feynman has a bit in The Character of Physical Law where he explains how geologists “talk about the past by talking about the future”. If you dig in the ground where nobody has dug before—if you perform an as-yet-unperformed experiment—you’ll find fossil bones of the predicted kind. If a statement about the Earth’s geological past can’t be made to yield predictions about the consequences of new digging, we’re not talking about dinosaurs anymore—we’re talking about the invisible dragons which live in my garage. It’s fine to write a Bayesian probability $p(v)$ for the speed of the Chicxulub impactor, but if we can’t turn that into expectations for new investigations, why should anyone care?

So, what if we do commit ourselves to the idea that quantum uncertainty is uncertainty about future experiment outcomes? That measurements are not just disturbing, but generative? Then we must conclude that retrodictions are just a nostalgic kind of prediction, statements made thinking of the past which must concern the future to have any scientific content. If we disagree about things which happened in the past and stayed there—so what?! That’s like my housemate and I disagreeing today about the number of eggs laid yesterday by the invisible dragon living in our garage. Possible agreement in the future about experiments which can be done in the future—that’s the key.

Among other things, this affects, I believe, the criteria one uses to judge the compatibility of quantum probability assignments. I suspect that accepting agent experiences as the things beliefs are about changes significantly how important one feels various possible kinds of disagreement are. It appears possible in the quantum world that two physicists can be in sufficient discord that, as they perform experiments, their novel experiences bring them into agreement about the future but not about the past. More specifically, suppose Alice and her friend Bob are interested in a quantum system and each plan to receive word of an experiment performed on it. Prior to the experiment, Alice and Bob each make an assignment of probabilities to the possible experiment outcomes, in the form of a density matrix. When the measurement experiment intervenes on the system and coughs up a result, Alice and Bob update their “catalogues of knowledge” accordingly. It can transpire that the two physicists’ post-experiment density matrices agree, but the conditional density matrices which encode their beliefs about the past are incompatible. A classical analogue of this situation would be the following: suppose that Alice and Bob disagree about which side of a die is up.
Alice says it’s a 6, Bob says it’s a 1. The die is rolled in a new experiment and comes up 6. Alice and her friend can come to agree about which side is up after the roll, but the new result changes nothing about their earlier disagreement. I do not know if this is a serious inconvenience to doing science, or to the ascetic view that quantum states are catalogues of probabilities for possible future experimental outcomes. After all, if we hold fast to that position, then a post-measurement conditional density operator for the past spacetime region must be a probability catalogue for counterfactual experiments—interventions into nature which could have been done, but weren’t. Incompatible conditional density operators for spacetime regions in the past are, in this view, arguments over yesterday’s tomorrow, friction without heat. They’re what you get when “measurements” in your world are, not just disturbances, but acts of generation.

(Leifer and Spekkens write that “because nontrivial quantum measurements always entail a disturbance […] coming to agreement about the state of the region after the measurement does not resolve a disagreement about the state of the region before the measurement.” To which a justifiable response might be, “Yes, and isn’t it wonderful?” More importantly, perhaps, I don’t like the phrasing of “a disagreement about the state of the region before the measurement.” The choice and arrangement of words seem wrong. I don’t know if it’s a matter of accident or of design towards a goal I disagree with. Doesn’t “disagreement about the state of the region” sound too much like, say, “disagreement about the calorie content of the region”—isn’t it just the phrasing one would choose if one believed that “the state” were a property of “the region”? This feels to me like bad language for any psi-epistemist, radically QBic or otherwise. “The states ascribed by two agents disagree” would be a more forceful and less muddling statement than “Two agents disagree about the state,” I think.)

A new slogan: Confusion about the past is the price we pay for a world of genuine novelty.

The inability of Alice and her friend to come into agreement about retrodictions into the past beyond a measurement is, in microcosm, our inability to agree about what happened before the Big Bang. (For a discussion of retrodictions-as-predictions in a cosmological context, see Schiffrin and Wald (2012), section VI. I also think one could productively disagree with Schiffrin and Wald’s discussion of thermal equilibration, at the beginning of Section III. To my eye, invocations of “ergodicity” and “mixing” do not resolve the problem of assigning probability distributions over microstates, but rather defer it. (Which is, to be sure, nonnegligible progress from a physicist’s perspective.) They speak of sampling a system at a “random time” during its dynamical time-evolution, which naturally provokes the question: what do you mean by “random time”? You still need some notion of what probability means to give the whole structure of concepts any content. It’s like the old circular, or rather downward-spiralling, conversation between a Bayesian and a frequentist: “If the probability of the coin landing heads is $p$, does that mean there will be exactly $1000p$ heads in 1000 flips?” “No, the number of heads will likely be close to $1000p$, but not exactly.” “What do you mean, likely? That’s the idea we’re trying to define!” And so forth.
You could use large deviation theory to write a formula for how the probability of a deviation falls off with the entropy, but you still need to define probability eventually. That is, you’ve deferred the question—in a sophisticated, quantitative, maybe even useful way!—but not answered it.)

The time-reverse of historical science is, in a sense, the issue of “Boltzmann Brains”—you know, complex structures arising from quantum vacuum fluctuations in the distant future of the universe. Supposedly, there should be stupendously more of them in the long (long, long) run than there are beings like we think we are, and from this the cosmologists deduce all sorts of things. E.g., that there exist an infinite number of beings who have all the memories of my brain up to 18 April 2012, including everything I’ve seen from Hubble and WMAP, but then in their memories they wake up from a long dream and return to being a green-skinned dancing girl from Orion.

(Alas, that is the fashion these days, populating the loneliness with shadow selves, frozen in branches of a stupefying state vector, floating in bubbles of spacetime frothed up by eternal inflation, or recorded in the memories of poor delusional Boltzmann Brains. Or, if you want to be particularly trendy, all of the above. Every variation of Hitler winning the war and von Stauffenberg’s bomb going off as planned, never the histories to meet. Every book in the Library of Babel a biography, innumerably many times over. Infinite iterations of Zhuangzi dreaming he is a butterfly; infinite butterflies dreaming they are Zhuangzi. Is this what we got into science for?)

But is that actually a meaningful number to compute, with quantum theory as it stands? What’s the point of asking, “What are the potential consequences to me of my experimental intervention into this phenomenon?” if the phenomenon in question is, by definition, inaccessible, both to myself and to my posterity? I’ve read at least one cosmology person, Tom Banks, saying that Boltzmann-brainology is “silly.” His position was essentially that we can modify our physical theories in an infinite number of ways consistent with all available data and making the same predictions for all conceivable experiments, but with different numbers of Boltzmann Brains coming out. In addition, any detector sent out to gather data on Boltzmann Brains would disintegrate by quantum fluctuations itself long before it stood a chance of spotting any. . . but I think the issue is more fundamental than that. You’re asking a question which the theory is not prepared to answer! Sometimes, it’s obvious when that happens, like trying to calculate the self-energy of an ideal point electron and getting infinity, but here, people mostly don’t seem to be thinking of that possibility. If they do think the answer is absurd, they try to screw around with general relativity and invent a new cosmology that way.

Maybe physicists are generally accustomed to thinking about “limits to the validity of quantum mechanics”—if they believe any such exist at all—in an unproductive kind of way? Having prematurely excised the active agent, we naturally think “QM might fail for objects larger than the Planck mass” or something like that, rather than “QM is the wrong tool for answering questions divorced from agent experience”.

UPDATE (16 May 2012): I thank Howard Barnum for pointing out a misdirected hyperlink.
All you wanted to know about Hybrid Orbitals… … but were afraid to ask

How I learned to stop worrying and not caring that much about hybridization. The math behind orbital hybridization is fairly simple as I’ll try to show below, but first let me give my praise once again to the formidable Linus Pauling, whose creation of this model built a bridge between quantum mechanics and chemistry; I often say Pauling was the first Quantum Chemist (Gilbert N. Lewis’ fans, please settle down). Hybrid orbitals are a way to create a basis that better suits the geometry formed by the bonds around a given atom, and not the result of a process in which atomic orbitals transform themselves for better steric fitting; or, like I’ve said before, the C atom in CH4 is sp3 hybridized because CH4 is tetrahedral and not the other way around. Jack Simons put it better in his book:

Taken from “Quantum Mechanics in Chemistry” by Jack Simons

The atomic orbitals we all know and love are the set of solutions to the Schrödinger equation for the hydrogen atom, and more generally they are solutions to the hydrogen-like atoms for which the value of Z in the potential term of the Hamiltonian changes according to each element’s atomic number. Since the Hamiltonian, and any other quantum mechanical operator for that matter, is a Hermitian operator, any given linear combination of wave functions that are solutions to it will also be an acceptable solution. Therefore, since the 2s and 2p valence orbitals of carbon do not point towards the edges of a tetrahedron, they don’t offer a suitable basis for explaining the geometry of methane; even more so, these atomic orbitals are not degenerate, and there is no reason to assume the C-H bonds in methane aren’t all equal. However, we can come up with a linear combination of them that does, and that will at the same time be a solution to the Schrödinger equation of the hydrogen-like atom.

Ok, so we need four degenerate orbitals, which we’ll name ζi, and formulate them as linear combinations of the C atom valence orbitals:

ζ1 = a1(2s) + b1(2px) + c1(2py) + d1(2pz)
ζ2 = a2(2s) + b2(2px) + c2(2py) + d2(2pz)
ζ3 = a3(2s) + b3(2px) + c3(2py) + d3(2pz)
ζ4 = a4(2s) + b4(2px) + c4(2py) + d4(2pz)

To comply with equivalency let’s set a1 = a2 = a3 = a4 and normalize them:

a1² + a2² + a3² + a4² = 1  ∴  ai = 1/√4

Let’s take ζ1 to be directed along the z axis, so b1 = c1 = 0:

ζ1 = (1/√4)(2s) + d1(2pz)

Since ζ1 must be normalized, the sum of the squares of the coefficients is equal to 1:

1/4 + d1² = 1;  d1 = √3/2

Therefore the first hybrid orbital looks like:

ζ1 = (1/√4)(2s) + (√3/2)(2pz)

We now set the second hybrid orbital on the xz plane, therefore c2 = 0:

ζ2 = (1/√4)(2s) + b2(2px) + d2(2pz)

Since these hybrid orbitals must comply with all the conditions of atomic orbitals, they should also be orthonormal:

⟨ζ1|ζ2⟩ = δ12 = 0
1/4 + (√3/2)d2 = 0
d2 = –1/(2√3) = –1/√12

Our second hybrid orbital is almost complete; we are only missing the value of b2:

ζ2 = (1/√4)(2s) + b2(2px) – (1/√12)(2pz)

Again we make use of the normalization condition:

1/4 + b2² + 1/12 = 1;  b2 = √2/√3

Finally, our second hybrid orbital takes the following form:

ζ2 = (1/√4)(2s) + (√2/√3)(2px) – (1/√12)(2pz)

The procedure to obtain the remaining two hybrid orbitals is the same, but I’d like to stop here and analyze the relative direction ζ1 and ζ2 take from each other. To that end, we take the angular part of the hydrogen-like atomic orbitals involved in the linear combinations we just found.
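Before doing that analytically, here is a quick numerical sanity check, my own addition with NumPy as the tool of convenience. Each hybrid ζ = a(2s) + b(2px) + c(2py) + d(2pz) points along its p-vector (b, c, d), so the angle between ζ1 and ζ2 is already fixed by the orthonormality we just imposed:

```python
import numpy as np

# Coefficients of the two sp3 hybrids derived above, in the basis
# (2s, 2px, 2py, 2pz); zeta1 lies along z, zeta2 in the xz plane.
zeta1 = np.array([1 / 2, 0.0, 0.0, np.sqrt(3) / 2])
zeta2 = np.array([1 / 2, np.sqrt(2 / 3), 0.0, -1 / np.sqrt(12)])

# Both are normalized and mutually orthogonal, as basis functions must be.
print(zeta1 @ zeta1, zeta2 @ zeta2)   # 1.0  1.0
print(zeta1 @ zeta2)                  # 0.0

# Each hybrid points along its p-part (the last three coefficients).
p1, p2 = zeta1[1:], zeta2[1:]
cos_angle = p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))
print(np.degrees(np.arccos(cos_angle)))   # 109.47...
```

The same argument runs for all four hybrids: with ai = 1/√4 for each, orthogonality forces pi·pj = –1/4 while |pi|² = 3/4, so cosθ = (–1/4)/(3/4) = –1/3 for every pair, and the tetrahedral angle drops out before any calculus. Now, back to the angular parts.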
Let us remember the canonical form of atomic orbitals and explicitly show the spherical harmonic functions to which the 2s, 2px, and 2pz atomic orbitals correspond:

ψ2s = (1/4π)^(1/2) R(r)
ψ2px = (3/4π)^(1/2) sinθ cosφ R(r)
ψ2pz = (3/4π)^(1/2) cosθ R(r)

We substitute these in ζ2 and factor out R(r) and 1/√(4π):

ζ2 = (R(r)/√(4π)) [1/√4 + √2 sinθ cosφ – (√3/√12) cosθ]

We differentiate ζ2 with respect to θ (taking φ = 0, since ζ2 lies in the xz plane) and set the derivative to zero to find the direction in which ζ2 is largest; since ζ1 is projected entirely onto the z axis, and θ is measured from that axis, this yields the angle between the first two hybrid orbitals ζ1 and ζ2:

dζ2/dθ = (R(r)/√(4π)) [√2 cosθ + (√3/√12) sinθ] = 0
sinθ/cosθ = tanθ = –√8

θ = –70.53°, but since θ is measured from the z axis towards the xy plane this result is equivalent to the supplementary angle 180.0° – 70.53° = 109.47°, which is exactly the angle between the C-H bonds in methane we all know! And we didn’t need to invoke the unpairing of electrons in full orbitals, the promotion of any electron into empty orbitals, nor the ‘reorganization’ of said orbitals into new ones. Orbital hybridization is nothing but a mathematical tool to find a set of orbitals which comply with the experimental observation, and that is the important thing here!

To summarize, you can take any number of orbitals and build any linear combination you want, in order to comply with the observed geometry. Furthermore, no matter what hybridization scheme you follow, you still take the entire orbital; you cannot take half of it, because orbitals are basis functions. That is why you should never believe that any atom exhibits something like an sp^2.5 hybridization just because its bond angles lie between 109° and 120°. Take a vector v = xi + yj + zk: even if you specify it to be v = (1/2)i, that means x = 1/2, not that you took half of the unit vector i; and it doesn’t mean you took nothing of j and k, but rather that y = z = 0.

This was a very lengthy post, so please let me know if you read it all the way through by commenting, liking, or sharing. Thanks for reading.

Article in ‘Ciencia y Desarrollo’ (Science and Development)

Here is a link to an article I was invited to write by my good old friend Dr. Eddie López-Honorato, from CINVESTAV – Saltillo, Mexico, for the latest issue of the journal ‘Ciencia y Desarrollo’ (Science and Development), for which he was a guest editor. ‘Ciencia y Desarrollo’ is a popular science magazine edited by the National Council for Science and Technology (CONACyT), about which I’ve blogged before. This magazine is intended for people interested in science, with a general knowledge of it but not necessarily specialized in any field. With that in mind, I decided to write about the power of computational chemistry in predicting some phenomena while shedding light on certain aspects of chemistry that are not readily available through experiments. The article is titled ‘Chemistry without flasks: Simulating chemical reactions’. The link will take you to the magazine’s website, which is in Spanish, as is the article itself, and only to the first page; so below I have translated the piece for anyone who may be interested in reading it (hope I’m not infringing any copyright laws!). Don’t forget to also check out Dr. López-Honorato’s blog on nuclear energy research and the development of materials for nuclear waste containment! Encourage him to blog more often by liking and following his blog.

Chemistry without flasks?
Typically we think of a chemist as a scientist in a white coat, protected by safety glasses and latex gloves, busily working in a laboratory, surrounded by measurement equipment, glassware, and bottles of colored substances; pouring one substance into another and transforming them into new substances, while noting that a chemical reaction has occurred through color changes, the release of heat, perhaps of gas, and occasionally even an explosion. Chemistry, the study of the transformation of matter, thus involves active experimentation to bring about chemical reactions, followed by confirmation, albeit indirect, that the changes have taken place in the microscopic world, that is, in the world of molecules and atoms. The chemist plans these changes based on knowledge of the chemical properties of the starting substances, properties which, as for any substance, are due to molecular structure, i.e., the spatial arrangement of the atoms that form it. Given this archetypal image, it is at least amusing to think that there is a branch of chemistry named Theoretical Chemistry.

What is theoretical chemistry?

Theoretical chemistry is a kind of bridge between chemistry and physics; it uses the laws and equations that govern the subatomic world to calculate the molecular structure of a substance, and more specifically to calculate the distribution of the electrons that surround a molecule like a cloud and that interact with the electron cloud of another molecule to form a new substance. It is from the knowledge of this electron density cloud that we can understand and predict the chemical properties of any substance. We can then define theoretical chemistry as the set of physical theories that describe the distribution and properties of the electron cloud belonging to a molecule; this mathematical description is what we call the electronic structure, and it is the starting point for chemical descriptions and predictions.

What is it good for?

Through theoretical chemistry we can find answers to fundamental questions about the structure of matter. Consider a molecule of water, which has the chemical formula H2O. This formula implies that there are two hydrogen atoms attached to an oxygen atom, but what spatial structure does a water molecule have? The simplest geometry it could take would be a linear structure, in which the angle formed by the three atoms is 180°. However, the water molecule actually has an angle of about 104.5°, far from a linear structure. In Figure 2 we can see the result of a calculation of the electronic structure of H2O: the electron cloud sitting on the oxygen atom also takes up a place in space, and thus pushes the hydrogen atoms together instead of allowing them to adopt a more comfortable conformation.

Figure 2. The oxygen atom's remaining, non-bonding electrons (the red cloud around the oxygen atom) push the hydrogen atoms towards each other.

The industrial area currently most impacted by the application of theoretical chemistry is the pharmaceutical industry, since generating a new drug involves a significant investment of financial and human resources, and so predicting the properties of a molecule with pharmacological activity before synthesizing it is highly attractive. A field has therefore grown within theoretical chemistry known as Rational Drug Design. Drugs act on our organism when their active molecules interact directly with the various proteins distributed in the cells of our tissues.
If the structure of a protein is known, and a drug that acts on it is known as well, then we can design similar drugs with greater efficacy in the treatment of disease. It is not just a matter of fitting one molecule to another, but of calculating the interaction energy, the solvation energy, and the probability that this interaction can be observed experimentally (Figure 3). The calculated interaction energy between the drug and the protein tells us how strongly they attract each other: a weakly attracted drug will result in low efficacy, while a stronger attraction implies a more effective drug.

How do you calculate a molecule?

All the matter in the universe is made of atoms, which in turn are composed of a nucleus of protons and neutrons surrounded by a cloud of electrons. When two or more atoms combine to form a molecule, their electron clouds combine too, and how these combinations occur is best described by the equations of quantum mechanics, the branch of physics that describes the behavior of the subatomic world. However, due to their complexity, the equations of quantum mechanics can be solved exactly only in the simplest cases, such as the hydrogen atom, which consists of a single electron orbiting a proton. We must therefore resort to a range of methods and approximations to tackle cases of chemical, and even biological, interest. For years the available computers could only solve the approximate equations for small molecules, of no more than thirty or so atoms, which can be interesting but is not entirely useful. Today, modern supercomputing equipment (which may amount to tens or even hundreds of powerful computers connected together to work cooperatively) allows us to model molecules with hundreds of atoms, such as proteins or DNA fragments. While the software available to perform these calculations has been developed continuously over the last thirty years, it is the progress in the design of computer systems able to perform billions of operations per second that has been the cornerstone turning theoretical chemistry into a commonly used predictive tool.

Today the branch known as Molecular Dynamics, which studies the interactions between molecules over time, has benefited from the development of the latest game consoles, since their processors, known as graphics processing units (GPUs), are able to perform calculations in parallel. Many of the images seen in our video games are actually calculated, not animated; this means the console must work out how each item on the screen responds to each stimulus we introduce. Conversely, if the images were merely animated, the responses would always be the same and the game would become unrealistic. Each game event must be computed almost immediately to maintain fluidity and excitement, and so these GPUs have to be able to perform many mathematical operations simultaneously. Traditionally, molecular dynamics has been based on the equations of classical physics, which only follow the time evolution of molecules as if they were colliding solid objects, hundreds of them floating in water or some other solvent. With the advent of GPUs, the calculation of the electronic structure can be included in the dynamics, so we can peek into biological processes such as DNA replication or the passage of nutrients through a channel protein embedded in the membrane of a cell.
From the fundamental understanding of the distribution of electrons in a molecule, its structure and properties, to rational drug design and new materials based on molecular modeling, theoretical chemistry is a powerful tool in constant progress. The development of increasingly powerful computer systems allows us to examine in detail the electronic processes involved in a chemical reaction, while predicting the progress of molecular transformations. All this brings us ever closer to the dream of the modern alchemists: transforming matter to obtain substances with properties designed at will. In the nineteenth century the American philosopher Ralph Waldo Emerson wrote: "Chemistry was born from the dream of the alchemists to turn cheap metals into gold. By failing to do so, they have accomplished much more important things." And yes: today we delve into the innermost secrets of nature, not only to understand how it works but also to modify its workings on our behalf.

The Chuck Norris of chemistry

The existence of a list called "The Chuck Norris facts", in which the macho attributes of this eighties redneck action hero are exaggerated for the sake of humor, is widely known by now. The list includes such amusing facts as:
• "Chuck Norris doesn't eat honey, he chews bees"
• "When Chuck Norris does a pushup, he's pushing the Earth down"
• "Chuck Norris counted to infinity; twice!"
• "There is no evolution, only a list of creatures Chuck Norris allows to live"
This last one is funny also because Chuck Norris is a born-again Christian who doesn't believe in evolution. The list is very funny, although the original site has become plagued with not-so-good entries thanks to uninspired people with web access. A newer list, and definitely funnier for us people in the science business, is "The Carl Friedrich Gauss list of facts", which includes gems like:
• "Gauss can divide by zero" (funny although a bit obvious, right? Well, this is warm-up)
• "Gauss didn't discover the normal distribution, nature conformed to his will"
• "Gauss can write an irrational number as the ratio of two integers"
• "Gauss doesn't look for roots of equations, they come to him"
• "Gauss knows the topological difference between a doughnut and a coffee mug"
• "Parallel lines meet where Gauss tells them to"
All these facts imply one thing: impossibilities allowed to one paradigmatic character for humor's sake. What could be considered an impossibility in chemistry nowadays, and who could bear Norris's fame? Who could be deemed the Chuck Norris of chemistry? The impossibility of synthesizing noble gas compounds comes to mind as the historical impossibility most deeply imprinted in chemists' minds, since it was written in Pauling's textbook and supported by Lewis's theory; yet Bartlett achieved their synthesis in the 1960s! Chemistry is a science which generates its own subject matter, and as such, impossibilities become challenges. What are the current challenges in chemistry? What direction is our science taking, or, even worse, should it be taking? So here is my first attempt at emulating the list of facts in the chemistry field, and my chosen one is Roald Hoffmann!
• Roald Hoffmann can make a C atom hybridize d orbitals into its valence shell
• Roald Hoffmann drinks LiAlH4 aqueous cocktails
• Roald Hoffmann can stabilize a tertiary carbanion and a primary carbocation
• Roald Hoffmann can analytically solve the Schrödinger equation for H2 and beyond (of course)
• Roald Hoffmann denatures a protein by looking at it and refolds it at will
• Roald Hoffmann always gets a 100% yield
• Le Chatelier's principle first asks for Hoffmann's permission
• Roald Hoffmann once broke the Periodic Table with a roundhouse kick
• Roald Hoffmann can make a molecule stop vibrating at absolute zero; it's called fear!
• The Born-Oppenheimer approximation is a consequence of nuclei being too frightened to move in the presence of Roald Hoffmann. Electrons? They are just trying to escape
• Roald Hoffmann's blood is a stronger acid than SbF5

A pretty lame attempt, I admit. Who is your favorite chemist in history, and why? Try to come up with your own Chuck Norris of Chemistry list and we'll share it here on this site. As usual, thanks for reading (yeah! the whole three of you).

It's that time of the year again… The Nobel Prizes

Around early October the scientific community (or at least part of it) starts getting excited about what could be considered the most prestigious award a scientist can ever achieve: the Nobel Prize. The three categories that interest me the most are Chemistry, Physics and Literature. I'm not saying I don't care for the other three (well, maybe the one in economics is way out of my league to grasp), but these three are the ones that always arouse my curiosity. This year's laureates have really had me excited! For starters, in chronological order of announcement, Geim and Novoselov are quite a bit younger than the average recipient (52 and 36 years old, respectively). But so is the field for which they got it, since the first paper these two scientists from the University of Manchester published on the topic is only about six years old. The discovery of graphene, and most importantly the characterization and understanding of its properties, is one of the most promising areas in materials science, since graphene exhibits very interesting electronic as well as structural behavior. Nobel Prizes are always controversial, but we have to admit that although graphite has been around us for ages, these two England-based Russian scientists have kicked off a promising area of science that will no doubt contribute to technological developments we can only begin to imagine. The Nobel Prize in Chemistry, on the other hand, was awarded to Heck, Negishi and Suzuki for their work on Pd (palladium) catalyzed coupling reactions. What I liked most about this prize is that a few years ago I published, alongside Dr. David Morales-Morales from the National Autonomous University of Mexico, a paper in J. Molecular Cat. A in which we performed a systematic study of a phosphane-free Heck reaction for a series of Pd catalysts with the general formula [ArFNH]PdCl2 (ArFNH = fluorinated or polyfluorinated aniline). In this paper theoretical calculations were used to assess the relationship between the substitution pattern of the fluorinated anilines and the catalyst's efficiency, a sort of small quantum-QSAR. Another thing that got me (and a bunch of other chemists) excited was the fact that this year the Nobel Prize in Chemistry went to people working in old-fashioned synthetic chemistry, so to speak.
Recently a long list of researchers working in the field of biochemistry were awarded the prestigious prize, which comes as no surprise since the development of the Human Genome Project has had, and will continue to have, a huge impact on biotechnology. Be that as it may, good for Heck, Suzuki and Negishi and the Pd-catalyzed carbon-carbon-bond-forming reactions!

About my initial remark: for reasons I don't know (I won't subscribe to any of the existing urban-legend-level hypotheses) there is no Nobel Prize in Mathematics, although a number of mathematicians have been awarded the Nobel Prize in Economic Sciences. For mathematicians, the Fields Medal is the closest equivalent of a Nobel Prize; however, the Fields Medal is only awarded every four years. Four years ago, this captivating character named Grigori "Grisha" Perelman was awarded the Fields Medal for solving what the Clay Institute in Massachusetts deemed one of the problems of the millennium: the Poincaré Conjecture. What is so noteworthy is that Grisha (a diminutive of Grigori in Russian) rejected the medal, as well as the million dollars awarded by the Clay Institute for solving it. He also rejected a position at Princeton University. His lack of faith in institutions was also reflected in his work, since he did not publish his solution to the Poincaré conjecture in any peer-reviewed journal but instead uploaded it online and alerted some renowned mathematicians he had worked with in the past. Secluded in his St. Petersburg apartment, this remarkable forty-year-old, Rasputin-looking genius of a mathematician keeps rejecting not only fame, money and glory, but human contact altogether. It is said that at some point Sir Isaac Newton did much the same. I guess great minds do think alike.

Teaching QSAR and QSPR at UAEMex

Teaching has never been my cup of tea. Carl Friedrich Gauss supposedly said, "Good students do not need a teacher, and bad students, well, why do they want one?" I once read this quote somewhere, and although I don't know if he actually said it or not, there is some truth to it. It is known that Gauss didn't like teaching, yet he spent most of his life doing it. Anyway, teaching is important and it has to be done! Therefore, as part of my duties as a researcher at CCIQS, I will have to teach a class at the Faculty of Chemistry of the Autonomous University of the State of Mexico (UAEMex). Obviously they want you to teach a class on a subject you are an expert in; I could teach organic chemistry for sure, despite the fact that I haven't touched a flask in years. My colleague Dr. Fernando Cortés-Guzmán and I seem to be two of the very few theoretical chemists around, so it is up to us to teach all the classes within the range of theoretical chemistry, computational chemistry and their applications. This year someone, I still need to find out who, came up with the idea that an interesting application would be QSAR, which of course is a very relevant model for drug discovery. Thus, starting today, I will be the first teacher of this subject at UAEMex's Faculty of Chemistry. To be quite frank, I think I would have felt more at ease teaching calculus or differential equations, since those already have a syllabus. On the other hand, those subjects wouldn't put me in touch with students in their final years, who are the ones to be attracted as potential members of my incipient research group.
It has been interesting so far, building the syllabus from scratch: finding all the topics worth covering in a semester, as well as proper ways to illustrate and teach them. It will be a work in progress all the time, and I intend to expand it somehow beyond the classroom; my first thought was to record all the lessons for a podcast. I'm still not sure how to include this blog in the equation, or whether I should open a new one for the class, but I guess I'll figure it out along the way. I'm not an expert on QSAR or QSPR, but I know a good deal about it, mostly because of Dr. Dragos Horvath, whom I met in Romania years ago. Perhaps I could persuade him to leave Strasbourg for a couple of weeks and give a few lectures. Wish me luck, or maybe I should say: "wish my students luck"!

Basis sets

In this new post I will address some issues regarding the correct use of the terminology around basis sets in ab initio calculations. One of the keys to achieving good results in ab initio calculations is to select a basis set wisely; this, however, requires some prior insight into the specific model to be used, the system (molecule/properties) to be calculated, and the computational resources at hand. Many of the basis sets available today remain in our codes for historical reasons more than for their real practical use. We know the Schrödinger equation is not analytically solvable for any molecule of interesting size, so the Hartree-Fock (HF) approach approximates its solution in terms of MOs; but these MOs have to be constructed from smaller functions, ideally AOs, and even these are constructed as linear combinations of simpler, linearly independent, mutually orthogonal functions, which we call basis sets.

For true beginners: imagine the 3D vector space as you know it. The position vector corresponding to any point in this space can be decomposed into three different vectors:

R = ax + by + cz

In this case x, y and z are our basis vectors, which comply with the following rules:
A) They are linearly independent; none of them can be expressed in terms of the others.
B) They are orthogonal; their pairwise scalar product is zero.
C) Their pairwise vector product yields the remaining one, with its sign given by the rank-three epsilon tensor.

In a vector space with more than three dimensions we can always find a basis with the same properties described above, with which we are able to uniquely define any other vector belonging to this hypothetical space. In the case of quantum mechanics we are dealing with function spaces (since our entity of interest is the wavefunction of a quantum system) instead of vector spaces, so what we look for are basis functions that allow us to generate any other function belonging to that space.

Some of these basis sets are examples that survive for historical reasons; their value relies on the fact that they are a good first start for obtaining the properties of small systems.

minimal basis: a single STO is used for each occupied atomic orbital present in the molecule.
double zeta basis: each STO is replaced by two STOs which differ in their zeta values. This improves the description of each orbital at some computational cost.
split-valence basis: a single STO is used to describe the core orbitals (a minimal core basis set), while two or more are used to describe the valence orbitals.
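To make the function-space analogy concrete, here is a minimal sketch of the expansion idea (assuming numpy; the sine basis and the target function are arbitrary illustrative choices, not a chemical basis set). The coefficients are obtained exactly like the a, b, c above, by projecting onto each basis function:

```python
import numpy as np

# Orthonormal sine basis on [0, L]: phi_n(x) = sqrt(2/L) sin(n pi x / L)
L, N = 1.0, 2000
x = np.linspace(0, L, N)
basis = [np.sqrt(2/L) * np.sin(n*np.pi*x/L) for n in range(1, 9)]

f = x * (L - x)   # arbitrary target function vanishing at the endpoints

# c_n = <phi_n | f>: the inner product is now an integral, not a dot product
coeffs = [np.trapz(phi*f, x) for phi in basis]
f_rebuilt = sum(c*phi for c, phi in zip(coeffs, basis))

print(np.max(np.abs(f - f_rebuilt)))   # tiny: 8 basis functions already suffice
```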
A plane wave is a wave of constant frequency whose wavefronts are infinite parallel planes. When dealing with translationally symmetric systems (such as crystals), the total wavefunction can be decomposed as a combination of plane waves. This kind of basis set is suitable for Periodic Boundary Conditions (PBC) computations if a suitable code is available, since plane-wave expansions converge slowly. Programs such as CRYSTAL make use of plane-wave solutions to find the electronic properties of crystalline solids. As usual, I hope this post is of help. Please rate or comment on this post, just so we know we are working on the right path!

Wheel? I think knot!

Once again an awful title. This post follows my previous one on graphs and chemistry, and it addresses an old idea which I have shared in the past with many patient people willing to listen to my ramblings. It is a commonplace to state that the wheel was the invention that made mankind spring from its more hominid ancestors into the incipient species that would eventually become Homo sapiens; that it was the wheel, like no other prehistoric invention or discovery, that made mankind rise from its primitive stage. I've always believed that even if the wheel was fundamental in the development of mankind, man first had to build tools to make wheels out of something; otherwise they would have remained just a good theoretical conception. But even though building tools was in itself a pretty damn good start, I strongly believe that mankind's first groundbreaking invention was the knot, for even a wheel was a bit useless until it was tied to something. From my perspective, the invention of the wheel was an event bound to happen, since there are many round-shaped things in nature: from the sun and the moon to some fruits and our own eyes. Achieving the mental maturity to take a string (or whatever resembled one in those days) and tie it, whether around itself or to something else, was, in my opinion, the moment in which the opposable thumbs of mankind realized they could transform their surroundings. Furthermore, at that stage the mental maturity achieved made it possible for man to remember how to do it again in a consistent way. The book '2001: A Space Odyssey' by A. C. Clarke describes this process in the first chapter, when a group of hominids bumps into the famous monolith. Their leader (Moon-Watcher, if I remember correctly), under the spell of this strangely straight and flat thing, takes two pieces of grass and ties them together without knowing or understanding what he is doing. I was pleased to read that I was not alone in that thought. The concept of a knot keeps amazing me, given the variety of knots and the different purposes they serve according to their properties. These were known to ancient sailors, who elevated the task of knot-making to a practical art form. The mathematical background behind them has served to lay out one of today's most fundamental (and controversial) theories about the composition of matter: string theory. Next time you tie the knot of your necktie, think about how this tedious, obnoxious little habit is based on something groundbreaking that truly makes us stand out from the rest of the species in the animal kingdom.

Knots, fishing and the origin of the universe

Most awful post title ever, I know, but maybe I'm still hooked on Prof. Schaefer's conference from two weeks ago. I went fishing on Sunday, and although my luck was better this time (I caught four fish!)
I spent a great deal of time tying hooks and untangling my line from others' or even from itself. Whenever a knot became too complicated to undo, I just cut the line and tied on a new hook or floater. At some point I found myself wishing for a tool that could help me untie those nasty knots and make better ones; I would have settled at least for a recipe! That tool/algorithm exists, of course, and it's called topology; and within this branch there is a whole area devoted to knots (knot theory). Of course, in topology a knot has no ends; knots there consist of single loops. This is one of those areas of mathematics that found little use at the time of its development, but in time became the framework for complex physical theories such as quantum gravity and string theory; those theories account for the wacky title, of course.

Within topology we come across graph theory too, which is an everyday chemist's tool, although most of us are unaware of it. 2D representations of chemical structures are graphs: dots joined by edges. If you look at an old text, the 2D representation of norbornane looks like two fused squares with a methylene in the middle of the common edge; this representation is topologically correct but geometrically incorrect, and more complicated molecules fared no better in those texts. In chemistry, although molecular symmetry is described by group theory (which in turn connects molecular structure to its quantum properties), many computational chemistry efforts are built on topology and graph theory. For lack of a better example, think of SciFinder's molecule builder tool: in it you can draw a molecule (or a piece of one) disregarding everything you know about structural chemistry, hybridization, the VSEPR model, Bent's rules, and so on, and still SciFinder can find structures related to your query, because all it reads are labeled points (atoms) and edges (bonds); it understands the graph, not the symmetry arising from the geometry, let alone the molecule. Another example of graph theory applied to chemoinformatics is the software that takes an IUPAC name and yields the structure (the graph), or vice versa; what the algorithms do is interpret or generate graphs once a set of rules has been provided. Among graphs there is a particular kind called planar graphs; these can be drawn in such a way that no edges cross each other. There is an online game I came across a few years ago, and I'm still addicted to it; its name is Planarity and it can be found here (NSFW). Molecules are planar graphs, but their non-overlapping-edges representation is hardly of any help, since their chemical properties rely on their 3D structure.

Now, if I were to set my mind to evil, could we think of people as dots or connectors, and their relationships/story-lines as edges, and ultimately come up with an algorithm for untangling a lie? It would require a lot of data (the edges) if we were to untangle a lie made by others; but what if we want to weave a life of lies? We know which vertices are around us and, to some extent, the edges between the connectors close to us; therefore we could draw bogus edges (lies), provided we could come up with a planar graph in which no two bogus edges overlap. That would be a planar graph plotted on top of a not-necessarily-planar one. Definitely unethical, but nonetheless feasible, from my point of view. Maybe I should just stick to untying the knots in my fishing line next Sunday.
The Dirac Equation

"Universe can emerge from nothing taking a quantum leap from eternity."

Mathematics and physics both contain abstract elements. Physics depends on mathematics for its logical and consistent foundation. Physics is approximately true, and its truth is deduced from empirical evidence. We say physics is the study of matter and motion, but at a deep level all we are concerned with is the abstract world of human sense data. Physics has made matter less material, much as psychology has made mind less mental. These are the things to be discussed in the abstractness of physics. The Dirac equation is the kind of law that is crucial to understanding the subatomic world. The philosophy around it can get quite lengthy, but its most important aspect was the prediction of antimatter. Dirac was a hero, and heroes live forever. He was the Lucasian Professor of Mathematics at Cambridge, the position Newton once held. We know him almost as well as we know Newton.

The Dirac equation admits exact solutions for a freely moving electron and for the case of the Coulomb potential, but other exact solutions also exist. For instance, one can obtain the energy levels of an electron in a constant magnetic field; as in the non-relativistic Schrödinger theory, a similar structure of energy states appears, known as Landau levels.

Fundamental physics before quantum mechanics was all about second-order differential equations. The Schrödinger equation is first order in time and second order in space, which was clearly not going to work with special relativity. To make it compatible, Dirac constructed a first-order equation with four components, where each component evolves causally. Two of these components correspond to the spin components of the Pauli-Schrödinger theory. Two were mysterious; Dirac at first thought they might describe the proton. Later they became the prediction of antimatter (though that took a bit of finagling to get to work). Technically these extra components carry negative energy, behaving like particles of reversed charge. Through the formal and ugly machinations of QFT, they become the basic fields of electrons and positrons. I once wrote an article using the Dirac matrices as a substitute for the gravitational metric, so that they were the keepers of all geometric information; in this sense, one could view the Dirac equation as the barest example of gravity's influence on particles (and vice versa).

Brief excursion into quantum mechanics

Maxwell's equations showed that electromagnetic radiation is composed of waves, and these waves carry energy. Applied to a microwave oven, the radiation emitted by the hot walls must contain a whole number of wave peaks and troughs. The wavelength of a wave is the distance between two peaks or two troughs, so a wave of a specific wavelength has a specific number of wave peaks between the walls of the oven. But as there are infinitely many such possibilities, the energy of the radiation would come out infinite. Clearly this is not the case; this is known as the ultraviolet catastrophe. To overcome this problem, Max Planck came up with his theory of quanta, known as Planck's law. He hypothesized that the energy of any radiation comes in discrete lumps.
Such a lump of energy is proportional to frequency: energy can only come in the amount hf (h times the frequency) or an integer multiple of it; there can be no fractional part of the quantity hf. It can be explained with an ATM transaction. Suppose you go to withdraw some money from an ATM, say 1530 USD, and the machine only has denominations of 1000 and 500 dollars. You have two options: type in 1530, or type the nearest value, 1500. If you type 1530, you will be notified that the transaction cannot be made, because the machine can only give out money in integer multiples of its denominations. If the amount to be received is less than the smallest denomination, the machine cannot transfer any balance at all. Transactions can only be made in integer multiples of the denominations; no fraction of them is allowed. This is what Planck's law tells us about radiation and energy.

"What you seek is seeking you."

Relativistic quantum mechanics

Quantum field theory is the union of Einstein's special relativity and quantum mechanics. It forms the foundation of what scientists call the standard model, a theoretical framework that describes all known particles and interactions with the exception of gravity. There is no time like the present to learn it: the Large Hadron Collider (LHC) being constructed in Europe will test the final piece of the standard model (the Higgs mechanism) and look for physics beyond the standard model. In addition, quantum field theory forms the theoretical underpinning of string theory, currently the best candidate for unifying all known particles and forces into a single theoretical framework. Quantum field theory is also one of the most difficult subjects in science. Unfortunately, learning quantum field theory requires some background in physics and math; the bottom line is, I assume you have it. The background I am expecting includes quantum mechanics, some basic special relativity, some exposure to electromagnetics and Maxwell's equations, calculus, linear algebra, and differential equations. If you lack this background, do some studying in these subjects and then give this website a try. Now let's forge ahead and start learning quantum field theory.

The terms

i: the imaginary unit, i = √(−1)
ħ: Planck's constant divided by 2π [value: 1.055 × 10⁻³⁴ J s]
c: the velocity of light [value: 3 × 10⁸ m/s]
γ^μ: the Dirac matrices
∂_μ: the 4-gradient, ∂_μ = ((1/c) ∂/∂t, ∇)
m: the rest mass of the electron [value: 0.511 MeV/c²]
ψ: the wavefunction of the system, the probability amplitude for different configurations of the system at different times. Also known as the quantum state, this is the most complete description that can be given to a physical system.

The Dirac equation is one formulation of relativistic quantum mechanics, and perhaps the most powerful equation of modern physics. It has a wide range of applications and uses; its most important prediction was that of the antiparticle. The Dirac equation has several obvious advantages over its non-relativistic counterpart, the Schrödinger equation (or the Pauli equation for particles with spin 1/2). First of all, the Dirac equation is compatible with the special theory of relativity, because the proper orthochronous Poincaré group has a representation by symmetry transformations on the state space associated with the Dirac equation.
On the other hand, the Dirac equation shows some strange effects. It modifies relativistic kinematics in a quite unexpected way through the appearance of negative energies (or negative masses). The energy according to the Dirac equation is not bounded from below. This causes the usual free variational methods for computing energy eigenvalues to fail. Therefore one might think of replacing the Dirac equation by an equation whose energy is bounded from below.

The principle

The energy-momentum relation describes how the energy of a body changes as its velocity increases. There are two kinds of mass: proper mass and relativistic mass. Proper mass is the mass measured in the rest frame of the material body. As the velocity of the body increases, the kinetic energy is added as extra mass; the change of kinetic energy is the same as the change of mass, so that the relativistic energy relation reduces to the classical energy conservation law in the low-velocity limit. One could convert the classical relativistic energy-momentum relation

E² = c²p² + m²c⁴

into a quantum-mechanical equation in the usual way, that is, by replacing the momentum p with a differential operator −iħ∇ acting on suitable wave functions. This leads to the Hamiltonian

H = √(−c²ħ²Δ + m²c⁴).

The Schrödinger equation with this Hamiltonian is called the square-root Klein-Gordon equation because of its formal resemblance to a square root of the Klein-Gordon equation (Oskar Klein and Walter Gordon in fact had little to do with the square-root equation). Unfortunately, the meaning of the square-root Klein-Gordon equation is shadowed by the following points.

a) The Hamiltonian incorporates the square root of a differential operator. It is no problem to define this operator with the help of the Fourier transform and to investigate its properties, but the resulting operator H is non-local. This means that in order to compute its action on a wave function at some point x, one needs to know the values of the wave function at all other places.

b) A wave function describes a wave packet, which behaves almost like a wave. A wave packet is a superposition of multiple waves: the more localized the wave packet is, the wider the spread of its wavelengths, so it has a large number of component waves. A more localized wave has a larger range of frequencies, and each individual wave of a given frequency carries a momentum described by the de Broglie relation p = h/λ. More localization narrows the spread in position but widens the frequency spectrum: more and more frequencies contribute. The average over all the wavelengths corresponds to the average momentum of the particle, and we lose information about the momentum, since we no longer have a definite momentum but a collection of momenta; we can only calculate their average. So a decrease in the position spread increases the width of the momentum distribution (by distribution I mean the collection of momentum values). This is the same as saying that the more definite the position is, the more uncertain the momentum is. Only a pure sine wave has a definite momentum; every other wave is a combination of multiple sine waves. There is always a trade-off between position and momentum, and this fact corresponds exactly to a property of the Fourier transform.
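The trade-off described in (b) is easy to reproduce numerically: build a localized packet, Fourier-transform it, and compare the spreads. A minimal sketch (assuming numpy, with ħ = 1 and a Gaussian packet as illustrative choices):

```python
import numpy as np

N, dx, sigma = 4096, 0.01, 0.5            # grid and packet width (hbar = 1)
x = (np.arange(N) - N/2) * dx
psi = np.exp(-x**2 / (4 * sigma**2))      # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

prob_x = np.abs(psi)**2 * dx              # position probability distribution
dX = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x)**2)

phi = np.fft.fftshift(np.fft.fft(psi))    # momentum-space amplitude
k = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(N, dx))
prob_k = np.abs(phi)**2
prob_k /= prob_k.sum()                    # momentum probability distribution
dP = np.sqrt(np.sum(prob_k * k**2) - np.sum(prob_k * k)**2)

print(dX * dP)                            # ~0.5, i.e. hbar/2: the minimum
```

Shrinking sigma makes dX smaller and dP correspondingly larger; the Gaussian is the one shape that saturates the bound, and any other packet gives a strictly larger product.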
This uncertainty relation applies to the energy-time pair as well. The energy can never be exactly zero, since zero is a perfectly precise number. This is why particle-antiparticle pairs can be created in the vacuum: the energy of the vacuum is not exactly zero, so there is always a fluctuation in it, called a quantum fluctuation. Near the event horizon of a black hole, particle-antiparticle pairs constantly appear out of nothing; these particles can either enter the black hole or escape from the event horizon. Stephen Hawking famously remarked that "black holes ain't so black".

Frequency is what is actually observed in any quantum system; it corresponds to the electron jumping from one orbit to another, so the position of the electron is tied to the radiation frequency. This is the idea behind the development of Heisenberg's matrix mechanics, which ultimately led him to discover the uncertainty principle.

The spin of elementary particles is not described by the square-root Klein-Gordon equation: its solutions are scalar wave functions. Real electrons have spin and should be described by a matrix wave equation.

Paul Dirac

Probably the greatest English physicist since Sir Isaac Newton, Paul Adrien Maurice Dirac was born on 8 August 1902 in Bristol. His father Charles came to England from Geneva and married Florence Holten. After studying electrical engineering at Bristol University, Paul eventually secured a place at St John's College, Cambridge, later becoming Lucasian Professor of Mathematics and a Fellow of the Royal Society. He developed his own version of quantum theory and was awarded the Nobel Prize in Physics and the Order of Merit. He married Margit Balazs and had two daughters. For many years he was a research professor in Florida, USA; he died on 20 October 1984 at Tallahassee, where he is buried. Paul Dirac was a legend, a genius of the twentieth century, a superhero of theoretical physics.

This matrix is not the sci-fi movie Matrix; it is a mathematical object. A matrix is a collection of numbers arranged in rows and columns, so every element of a matrix can be indexed by two numbers, which are distinct from the numbers stored inside the matrix. Suppose you want to solve a simultaneous system of linear equations in three or more variables using a computer program. You need to track the coefficients appearing in the equations in order: from the first variable to the second, to the third, and so on. So it is better to specify a grid, with row one for the first equation, row two for the second equation, and so on; when you specify multiple rows, multiple columns are needed. This grid is the matrix of numbers. The concept of a tensor depends on these properties of matrices: if you take the outer product of two vectors you get a matrix, a second-rank tensor, and tensors have the additional property that their components transform in specific ways.

"This whole life may be a big dream and nothing but a dream, unless when you wake up, you are not deceived by it."

Pauli equation

In quantum mechanics, the Pauli equation or Schrödinger-Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles which takes into consideration the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles move at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927.
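The extra term discussed next is built from the 2×2 Pauli matrices. Here is a minimal sketch of that σ·B coupling (assuming numpy; the field direction along z and the unit coupling constant are arbitrary illustrative choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

B = np.array([0.0, 0.0, 1.0])             # field along z, magnitude 1
H_spin = sx*B[0] + sy*B[1] + sz*B[2]      # the sigma . B coupling (coupling = 1)

print(np.linalg.eigvalsh(H_spin))         # [-1, +1]: the spin-down / spin-up split
```

The two eigenvalues are the two energies a single spin can take in the field, which is exactly the splitting exploited in the Stern-Gerlach experiment mentioned below.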
The Pauli equation can be stated as the Schrödinger equation with an extra term added to its right-hand side:

iħ ∂ψ/∂t = [ (p − qA)²/2m + qφ − (qħ/2m) σ·B ] ψ

The term on the right is the Stern-Gerlach term, which describes the spin orientation of atoms with one valence electron; it is given by the dot product of the Pauli matrices σ with the magnetic field B. A valence electron is an electron in the outermost shell around the nucleus, the one that takes part in chemical bonding; here an atom with one valence electron is considered.

"Time and tide wait for none."

Pauli was a mega boss and a genius among geniuses; few scientists have as much of a reputation as him. We know him for his famous "exclusion principle", which makes it possible to arrange all the elements in a periodic table. Dirac first derived the relativistic quantum theory, which is the reconciliation of quantum mechanics with the special theory of relativity. The Schrödinger equation breaks down for particles moving at high velocity, and it also does not distinguish otherwise identical particles with different spin. Spin is a physical quantity which describes certain properties of subatomic particles. The story began when Pauli gave his exclusion principle: it states that no two electrons in a quantum system can have the same set of four quantum numbers. The spin of the electron is the fourth quantum number, which uniquely determines the state of the electron. Spin is very analogous to angular momentum, and takes discrete values. When a charge spins, it has a certain way of aligning itself with an external magnetic field; the spin of a particle like the electron makes it behave a certain way in the presence of a magnetic field. The Dirac equation describes particles with spin exactly, and the Dirac spinor is associated with the spin of the electron.

Dirac successfully made the Schrödinger equation consistent with relativity by altering its Hamiltonian operator. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation, which described wave functions with only one complex component. The original equation proposed by Dirac is

iħ ∂ψ/∂t = (c α·p + β mc²) ψ

The new elements in this equation are the 4 × 4 matrices αk and β, and the four-component wave function ψ. There are four components in ψ because its value at any given point in configuration space is a bispinor: it is construed as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron (see below for further discussion). The single symbolic equation thus unfolds into four coupled linear first-order partial differential equations for the four quantities that make up the wave function. These matrices and the form of the wave function have a deep mathematical bearing: the algebraic structure exhibited by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of the German mathematician Hermann Grassmann in his Lineale Ausdehnungslehre (Theory of Linear Extension). The equation described above is a modification of the Schrödinger equation, which a wave function satisfies. The wave function gives the probability distribution for the value of each observable, such as position or momentum; this probability gives the likelihood of finding a particle at a certain place and time.
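To make the 4×4 algebra concrete, here is a short sketch (assuming numpy) that builds the γ matrices in the standard Dirac representation from the Pauli matrices and checks the Clifford relation {γ^μ, γ^ν} = 2η^{μν} I that underlies everything said above:

```python
import numpy as np

I2 = np.eye(2)
Z = np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
gamma = [np.block([[I2, Z], [Z, -I2]])]
gamma += [np.block([[Z, s], [-s, Z]]) for s in sig]

eta = np.diag([1, -1, -1, -1])          # Minkowski metric, signature (+,-,-,-)
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```

This anticommutation rule is exactly what Clifford's algebra encodes, and it is the reason four components (and not fewer) are needed: no set of 2×2 or 3×3 matrices satisfies it.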
The spinor was first defined by Pauli as a two-component wave function, each component indicating a particular spin of the electron. Dirac introduced, out of the necessity to accommodate negative energy, another pair of components; this four-component wave function is called a Dirac spinor. The wave function is in general a complex quantity. Dirac spinors contain two pairs of components, each pair corresponding to a distinct spin quantum number, and they came naturally out of the solution of the Dirac equation. Spinors are elements of a four-component complex vector space; they are not like the usual vectors and tensors, which transform in a particular way, but follow their own transformation rule.

Prediction of the antiparticle

The Dirac equation explains spin-1/2 particles and provides the foundation of quantum electrodynamics. Moreover, it predicted the existence of the positron, which emerges from the negative-energy solutions. Treating particles in this way, each particle field turns out to have a counterpart: the proton has its antiproton, the neutrino its antineutrino, the atom as a whole has an anti-atom, and the universe, it seems, could have an anti-universe counterpart too. When a positron collides with an electron, annihilation occurs; if this annihilation occurs at low energy, gamma rays are produced. So if you someday meet your counterpart from a distant part of the universe, you should think twice about shaking hands with him, because both of you could be annihilated. The positron is the negative-energy solution of the Dirac equation.

The electron and positron fields ψ have corresponding mode expansions which solve the Dirac equation. The field here has a similar meaning in that at each point in space and time it has a value; it is better regarded as a relativistic field, which does not play the usual role of a wave function φ in determining a probability density. The σ entries in the 4×4 γ matrices are the 2×2 Pauli matrices. u and v are spinors, each of which comes with two distinct values, one for spin up and one for spin down; that is why they are named spinors.

The Klein-Gordon equation

The field appearing in the Klein-Gordon equation is φ, which represents spinless particles. The Lagrangian is the kinetic energy of the field minus the potential energy, the latter being the mass term built from the complex conjugate of the field (φ*) and φ itself. Applying the Euler-Lagrange equation for the classical path, we get the Klein-Gordon equation; it is the quantized version of the relativistic energy-momentum relation. Each of the four components ψ1, ψ2, ψ3, ψ4 of the four-component wave function (spinor) individually satisfies the Klein-Gordon equation: the Dirac equation implies the Klein-Gordon equation, so any solution of the free Dirac equation is, component-wise, a solution of the free Klein-Gordon equation. The key idea behind quantum field theory is that of the quantum harmonic oscillator, whose Hamiltonian is described by the annihilation operator a and the creation operator a†.
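These ladder operators are easy to realize as matrices on a truncated number basis. A minimal sketch (assuming numpy; the basis size N = 8 is arbitrary, and the commutator check fails only in the last diagonal entry, an artifact of cutting the basis off):

```python
import numpy as np

N = 8                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                   # creation operator

# [a, a*] = 1 holds except in the last entry (truncation artifact)
comm = a @ adag - adag @ a
print(np.round(np.diag(comm), 3))            # [1, 1, ..., 1, -(N-1)]

H = adag @ a + 0.5 * np.eye(N)               # H = a*a + 1/2  (hbar*omega = 1)
print(np.linalg.eigvalsh(H))                 # 0.5, 1.5, 2.5, ...: E_n = n + 1/2
```

The evenly spaced ladder of eigenvalues is exactly the "lumps of hf" picture from Planck's law, and in field theory each mode of a field is one such oscillator.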
Derivation of the Dirac equation

A simple route to the Dirac equation can now be illustrated. Relativity threw up some roadblocks when quantum mechanics was first developed, especially for the particles physicists most wanted to look at: electrons, i.e. fermions. For zero-spin particles, including relativity appears to be simple. The classical kinetic-energy Hamiltonian for a particle in free space,

H = p²/2m,

can be replaced by Einstein's relativistic expression

E = √(p²c² + m²c⁴),

where m is the rest mass and mc² the corresponding rest energy. Squaring the operator on both sides of the Schrödinger equation gets rid of the square root, but yields a second-order equation (the Klein-Gordon equation); Dirac instead sought an equation linear in the derivatives whose square reproduces the relativistic relation, which forces the coefficients to be matrices and leads to the standard form of the Dirac equation, consistent with relativity.

Hole theory

The negative-energy solutions to the equation are troublesome, for it was assumed that the particle has positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, we cannot simply disregard them, for once we include the interaction between the electron and the electromagnetic field, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by radiating energy in the form of photons. To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are filled. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle prohibits electrons from occupying the same state, any additional electron is forced to occupy a positive-energy eigenstate, and positive-energy electrons are restricted from decaying into negative-energy eigenstates.

If an electron is restricted from simultaneously occupying positive-energy and negative-energy eigenstates, then the phenomenon known as Zitterbewegung, which arises from the interference of positive-energy and negative-energy states, would have to be regarded as an unphysical prediction of time-dependent Dirac theory. This conclusion may be inferred from the explanation of hole theory given in the previous paragraph. Recent results have been published in Nature [R. Gerritsma, G. Kirchmair, F. Zaehringer, E. Solano, R. Blatt, and C. Roos, Nature 463, 68-71 (2010)] in which the Zitterbewegung feature was simulated in a trapped-ion experiment. This experiment bears on the hole interpretation if one infers that the laboratory experiment is not merely a check on the mathematical correctness of a Dirac-equation solution, but the measurement of a real effect whose detectability in electron physics is still beyond reach.

Dirac further argued that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate, called a hole, would behave like a positively charged particle. The hole possesses positive energy, since energy is needed to create a particle-hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually interpreted as the positron, experimentally discovered by Carl Anderson in 1932.
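The energy gap driving all of this can be read off the relativistic dispersion relation directly: each momentum admits two roots, E = ±√(p²c² + m²c⁴), separated by at least 2mc². A minimal sketch (assuming numpy; units with c = 1 and the electron mass in MeV are illustrative choices):

```python
import numpy as np

c = 1.0
m = 0.511                          # electron rest mass in MeV/c^2
p = np.linspace(-2, 2, 5)          # sample momenta in MeV/c

E_plus = np.sqrt((p*c)**2 + (m*c**2)**2)
E_minus = -E_plus                  # the troublesome negative branch

print(E_plus.min() - E_minus.max())   # 2*m*c^2 ~ 1.022 MeV gap
# Lifting an electron out of the Dirac sea costs at least this much,
# matching the pair-creation threshold mentioned in the next section.
```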
It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be cancelled by an infinite positive "bare" energy, and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly cancelled by an infinite positive "jellium" background, so that the total electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive-energy positron state, and an unoccupied negative-energy electron state into an occupied positive-energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it.

In certain applications of condensed matter physics, however, the underlying ideas of hole theory are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, though it is referred to as a "hole" rather than a positron. The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material.

Dirac's solution revisited

A good question, then, is why a particle at rest does not fall down to the negative levels, radiating the excess energy as electromagnetic radiation. Dirac suggested the following solution: in what we call the vacuum, all the negative-energy states are occupied by electrons. If a single electron is placed in this vacuum, it cannot fall into a negative-energy level because of the Pauli exclusion principle. We also redefine the energy so that it is zero for this vacuum; we can do this because we can only measure energy differences. Assume that we have this vacuum and add energy, for instance in the form of electromagnetic radiation. We can then lift an electron from the negative "sea" to a positive energy level; this requires at least 2mc². The electron leaves a hole in the negative sea, and this hole behaves as a real positive particle, with positive mass and with velocity and momentum in the same direction. Conversely, an electron and a hole can annihilate each other, radiating electromagnetic energy. One problem with Dirac's solution is that the theory stops being a one-particle theory; for instance, we should take into account the interactions between the electrons of the negative sea. Secondly, the theory becomes asymmetric with respect to electrons and positrons. The problem is solved by introducing quantized fields. We should also note that for the Klein-Gordon equation, which describes particles with spin zero (i.e. bosons), we do not have the Pauli exclusion principle, and we cannot use Dirac's solution.

Notes and additional comments

The atom is the basic building block of all normal matter, consisting of a nucleus (itself composed of positively charged protons and zero-charge neutrons) orbited by a cloud of negatively charged electrons, so that the positive charge is exactly balanced by the negative charge and the atom as a whole is electrically neutral. Atoms range from about 32 to about 225 picometers in size (a picometer is a trillionth of a meter); a typical human hair is about one million carbon atoms in width. The atom is a mini universe, a solar system in miniature, so to speak. The interaction between the atom and the outside world is fuzzy, and what goes on inside the atom is deeply mysterious.
The nucleus consists of protons and neutrons. The strong nuclear binding that holds these particles inside the nucleus shows up as a reduced mass of the bound protons and neutrons; the binding energy can be found from Einstein's mass-energy equivalence formula, and the strong force itself is described by quantum field theory. The nucleus is the heart of the atom; electrons revolve around the nucleus as planets revolve around the sun. The physical picture of an atom with few electrons is easy to visualize; as the number of electrons increases, the complexity of the atom also increases. It is hard to imagine that no two electrons collide, although their orbits, circular or elliptical, may intersect. Albert Einstein gave the first firm proof of the existence of atoms by analyzing their Brownian motion. Atoms are far too small to see with the bare eye: millions of atoms can fit together on the tip of a ballpoint pen. Democritus first gave the concept of the atom as an indivisible element of matter; the laws of chemistry were developed using the atomic theory put forward by Dalton, and modern quantum theory was later developed in order to understand what goes on inside the atom.

There are four quantum numbers that fully specify the state of an electron inside the atom: the principal quantum number, the azimuthal quantum number, the magnetic quantum number, and the spin quantum number. These four quantum numbers need some preliminary explanation:

1. The principal quantum number labels the orbits (shells) where an electron sits with a certain energy. It is represented by n = 1, 2, 3 and so on.
2. The secondary or azimuthal quantum number, also called the orbital angular momentum quantum number, labels the subshells an electron can occupy; for a given n it takes the values l = 0, 1, …, n−1. For example, for n = 1 there is only one subshell, 1s; for n = 2 there are two, 2s and 2p.
3. The magnetic quantum number labels the specific orbital (cloud) within a subshell. Its values correspond to the projections of the angular momentum onto a chosen coordinate axis, m = −l, …, +l.
4. The spin quantum number describes the intrinsic spin of the electron inside the atom. It is much like angular momentum, but it is intrinsic, like mass and charge. It is due to the spin quantum number that the electron gets aligned in a certain direction when an external magnetic field is applied. Spin is thus a vector quantity, with magnitude and direction.
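The counting implied by these four quantum numbers is easy to reproduce in a few lines of Python: for each n, l runs from 0 to n−1, m runs from −l to +l, and spin doubles everything, recovering the familiar 2n² states per shell:

```python
# Enumerate the allowed (n, l, m) combinations and count states per shell.
for n in range(1, 4):
    states = [(n, l, m) for l in range(n) for m in range(-l, l + 1)]
    print(f"n={n}: {len(states)} orbitals x 2 spins = {2 * len(states)} states")
# n=1: 1 x 2 = 2    (why helium closes the first shell)
# n=2: 4 x 2 = 8,  n=3: 9 x 2 = 18  -> the 2n^2 rule
```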
All of the matter particles (and their antimatter partners as well) have spin equal to that of the electron. In the language of the trade, physicists say that the matter particles all have "spin 1/2", where the value 1/2 is, roughly speaking, the quantum-mechanical measure of how quickly the particle "rotates". The photon, the weak gauge bosons, and the gluons all carry twice as much spin as the matter particles: they have spin 1.

Dirac equation in quantum field theory

In a quantum field theory such as quantum electrodynamics, the Dirac field is subjected to a process called second quantization, which can be applied to any field and makes it quantum. One takes the Hamiltonian, promotes the field to an operator, and writes the quantized Dirac field as a mode expansion over creation and annihilation operators,

\[
\psi(x) = \int \frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2E_p}} \sum_s \left( a_p^s\, u^s(p)\, e^{-ip\cdot x} + b_p^{s\dagger}\, v^s(p)\, e^{+ip\cdot x} \right).
\]

This process is similar to the quantization of the classical electromagnetic field, whose Hamiltonian becomes

\[
H = \sum_{k} \hbar\omega_k \left( a_k^\dagger a_k + \tfrac{1}{2} \right),
\]

where \(a^\dagger\) is the creation operator.

Electron and geometry

Einstein's law of general relativity in empty space (\(G_{\mu\nu} = 0\)) does not seem to have any reference to the constitution of an empty continuum. It is a law of material structure, showing what dimensions a specified collection of molecules must take up in order to adjust itself to equilibrium with the surrounding conditions of the world. In particular, electrons must make these adjustments, and it is suggested elsewhere that the symmetry of an electron and its equality with other electrons are not substantial facts, but consequences of the method of measurement. One cannot complain of an author for not doing everything, but at this point most readers will feel a desire for some discussion of the theory of measurement. The elementary meaning of measurement of lengths is derived from the superposition of a supposedly rigid body. A rigid body, as Dr Whitehead has pointed out, is primarily one which seems rigid, such as a steel bar, in contradistinction to a piece of putty. When I say that a body seems "rigid", I mean that it does not appear to alter its shape and size. This, so far as it can be relied upon, implies some constant relation to the human body: if the eye and the hand grew at the same rate as the "rigid" body, it would not look or feel as if it were changing. But if other objects in our immediate environment did not grow meanwhile, we should infer that we and our measure had grown. There would, however, be no meaning in the supposition that all bodies are bigger in certain places than they are in certain others, at least if we suppose the alteration to be in a fixed ratio. If we do not add this proviso, there is a good meaning in the supposition; in fact, we do actually believe that all bodies are bigger at the equator than at the North Pole, except such as are too small to be visible or palpable. When we say that the length of an object at the equator is one metre, we do not mean that its length is that which the standard metre would have if we moved it from Paris to the equator. But the expansion of bodies with temperature would have been difficult to discover if it had not been possible to bring bodies of different temperatures into the same neighborhood and measure them before their temperatures equalized; it would also have been difficult if all bodies had expanded equally when their temperature rose. These elementary considerations, along with many others, make rigidity an ideal which actual bodies approach without attaining. Mere superposition thus ceases to give a measure of length: it still gives the comparison between the two bodies concerned, but not of either with the standard unit of length.
To obtain the latter, we have to adjust the immediate results of the operation of measuring by means of a mass of physical theory. If the measures which we obtain are mutually consistent, that is all we can ask; but it is possible that a change in physical theory might have given other measures which would also have been consistent.

Pure Mathematics

The formalist interpretation of mathematics is by no means new, but for our purposes we may ignore its older forms. As presented by Hilbert, for example in the sphere of number, it consists in leaving the integers undefined, but asserting concerning them such axioms as shall make possible the deduction of the usual arithmetical propositions. That is to say, we do not assign any meaning to our symbols 0, 1, 2, . . . except that they are to have certain properties enumerated in the axioms. These symbols are, therefore, to be regarded as variables. The later integers may be defined when 0 is given, but 0 is to be merely something having the assigned characteristics. Accordingly the symbols 0, 1, 2, . . . do not represent one definite series, but any progression whatever. The formalists have forgotten that numbers are needed, not only for doing sums, but for counting. Such propositions as "There were 12 Apostles" or "London has 6,000,000 inhabitants" cannot be interpreted in their system. For the symbol "0" may be taken to mean any finite integer, without thereby making any of Hilbert's axioms false; and thus every number-symbol becomes infinitely ambiguous. The formalists are like a watchmaker who is so absorbed in making his watches look pretty that he has forgotten their purpose of telling the time, and has therefore omitted to insert any works.

There is another difficulty in the formalist position, and that is as regards existence. Hilbert assumes that if a set of axioms does not lead to a contradiction, there must be some set of objects which satisfies the axioms; accordingly, in place of seeking to establish existence theorems by producing an instance, he devotes himself to methods of proving the self-consistency of his axioms. For him, "existence", as usually understood, is an unnecessarily metaphysical concept, which should be replaced by the precise concept of non-contradiction. Here, again, he has forgotten that arithmetic has practical uses. There is no limit to the systems of non-contradictory axioms that might be invented. Our reasons for being specially interested in the axioms that lead to ordinary arithmetic lie outside arithmetic, and have to do with the application of number to empirical material. This application itself forms no part of either logic or arithmetic; but a theory which makes it a priori impossible cannot be right. The logical definition of numbers makes their connection with the actual world of countable objects intelligible; the formalist theory does not.

The intuitionist theory, represented first by Brouwer and later by Weyl, is a more serious matter. There is a philosophy associated with the theory which, for our purposes, we may ignore; it is only its bearing on logic and mathematics that concerns us here. Returning to logic: logic is vital to mathematics, for it bears on the truth of mathematical statements, a notion which pure mathematics otherwise employs apart from its logical constants. Logic has many uses in constructing physical theories; the general theory of relativity, for instance, is woven out of logical notions and mathematics.
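The formalist ambiguity discussed above can be made concrete. Here is a small illustrative sketch (the interpretations are my own, chosen for the example): the same "zero and successor" rules are satisfied both by the usual naturals and by the even numbers, so the bare symbol "3" does not pick out one definite number.

```python
# Two different "progressions" satisfying the same successor axioms.
# Interpretation A: the usual naturals 0, 1, 2, ...
zero_a, succ_a = 0, (lambda n: n + 1)

# Interpretation B: the even numbers 0, 2, 4, ... ("successor" adds 2)
zero_b, succ_b = 0, (lambda n: n + 2)

def three(zero, succ):
    # "3" is defined purely by the axioms: the successor applied three times to zero.
    return succ(succ(succ(zero)))

print(three(zero_a, succ_a))  # 3 in interpretation A
print(three(zero_b, succ_b))  # 6 in interpretation B: same symbol, different series
```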
From the time of Einstein, physics took on a new style through the adoption of the axiomatic approach of mathematics. The propositions of mathematics are analytic, whereas those of physics are synthetic. Analytic propositions are the propositions of pure mathematics: their truth rests on logical grounds, and they serve only to elucidate meanings already implicit in their subjects. Synthetic propositions, on the other hand, refer to empirical facts. For example, that the sum of the three angles of a triangle is two right angles is, relative to Euclid's axioms, an analytic proposition: its truth is established on logical grounds, namely Euclid's axioms and postulates. "All bachelors are unmarried" is likewise analytic, true by virtue of the meanings of the words; a synthetic proposition, by contrast, is one made true (if at all) by empirical facts, such as the statement that the measured angles of a particular triangle drawn on paper sum to two right angles.

Geometry may be classified into two kinds: metrical geometry and projective geometry. The former involves metrical ideas, whereas the latter involves projective transformations, a somewhat more difficult concept. To begin with metrical geometry: a condition of free mobility is essential to all measurement of space. The analytical expression of this condition is the existence of a space-constant, a constant measure of curvature, which is equivalent to the homogeneity of space. This is its first axiom. The second axiom is that space has a finite number of dimensions: a point in space can be represented by a finite number of quantities, which can be called the coordinates of the point. The third axiom introduces the concept of distance: there is a unique distance relation between any two points in space, and this relation can be expressed by an equation (a modern form of this equation is sketched at the end of this section). These are the axioms of metrical geometry, which uses the notion of a metric, of distance measurement, explicitly; and distance uses the notion of quantity.

In projective geometry we make no quantitative comparisons: only qualitative relations and identity are presupposed, although metrical geometry is here taken as prior to projective geometry. In projective geometry two points uniquely determine a line and, conversely, two lines uniquely determine a point: lines and points are equivalent from the qualitative point of view, and this duality is the basis of projective geometry. A full account of projective geometry deserves a chapter of its own, which for the moment is best left aside.

All geometries, it is striking to note, rest on a small number of notions and axioms, from which all the propositions can be deduced logically; we know this very well from the Euclidean system. All geometries alike deal with spatial relations and magnitudes. Points, the limits of the infinite division of space, have zero dimension, and the relation between two points gives the quantitative description of distance. In three dimensions, three points not in a straight line determine a plane; in four dimensions, four points not in a plane determine a unique three-dimensional figure; in five dimensions, five points, and so on. In metrical geometry this process continues as the number of dimensions increases. In projective geometry it does not: at some stage any new point chosen must already lie in the figure determined by the points mentioned before, so that at the (n+1)-th step the number of dimensions stops at n. This is a speciality of projective geometry.
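As promised above, here is a sketch of the distance relation of the third axiom in modern (Riemannian) notation; this is an illustration in today's conventions, not the historical formulation:

\[
ds^2 = \sum_{i,j} g_{ij}(x)\, dx^i\, dx^j,
\]

where the condition of free mobility (constant curvature) constrains the metric coefficients \(g_{ij}\) to be the same, up to a change of coordinates, around every point of the space.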
Local gauge invariance

The Lagrangian density for a free Dirac particle is

\[
\mathcal{L} = \bar\Psi\left( i\gamma^\mu \partial_\mu - m \right)\Psi .
\]

Using Lagrange's equations,

\[
\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \bar\Psi)} - \frac{\partial \mathcal{L}}{\partial \bar\Psi} = 0,
\]

we get the Dirac equation:

\[
\left( i\gamma^\mu \partial_\mu - m \right)\Psi = 0 .
\]

A local gauge transformation is a transformation in which we make the replacement

\[
\Psi \to e^{iq\Lambda(x)}\,\Psi , \qquad \bar\Psi \to e^{-iq\Lambda(x)}\,\bar\Psi .
\]

It is easily seen that the Lagrangian above is not invariant under such a transformation, because of the coordinate dependence of the gauge function \(\Lambda\). (Exercise: show this.) However, if we replace the derivative by the covariant derivative,

\[
\partial_\mu \to D_\mu = \partial_\mu - iqA_\mu ,
\]

the new Lagrangian,

\[
\mathcal{L} = \bar\Psi\left( i\gamma^\mu D_\mu - m \right)\Psi ,
\]

is invariant under a local gauge transformation, provided that the vector field \(A_\mu\) transforms according to

\[
A_\mu \to A_\mu + \partial_\mu \Lambda .
\]

Indeed, under the combined transformation,

\[
D_\mu \Psi \to \big(\partial_\mu - iq(A_\mu + \partial_\mu\Lambda)\big)\, e^{iq\Lambda}\Psi
= e^{iq\Lambda}\big( \partial_\mu \Psi + iq(\partial_\mu\Lambda)\Psi - iqA_\mu\Psi - iq(\partial_\mu\Lambda)\Psi \big)
= e^{iq\Lambda}\, D_\mu \Psi ,
\]

so the offending derivative of \(\Lambda\) disappears, the phases \(e^{\pm iq\Lambda}\) cancel between \(\bar\Psi\) and \(\Psi\), and the new Lagrangian is invariant. The Dirac equation derived from the modified Lagrangian is then

\[
\left( i\gamma^\mu D_\mu - m \right)\Psi = \left( i\gamma^\mu(\partial_\mu - iqA_\mu) - m \right)\Psi = 0 .
\]

The extra term in the Lagrangian, \(q\,\bar\Psi\gamma^\mu\Psi\, A_\mu\), is an interaction term between the electron and the vector field. We have seen earlier that \(\bar\Psi\gamma^\mu\Psi\) is the (four-dimensional) probability current, so with the identification \(q = -e\) for the electron we identify \(J^\mu = -e\,\bar\Psi\gamma^\mu\Psi\) with the electric current and \(A_\mu\) with the electromagnetic field. We can add a term to generate the differential equation for the electromagnetic field itself,

\[
-\tfrac{1}{4} F^{\mu\nu}F_{\mu\nu}, \qquad F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu .
\]

The equation for the electromagnetic field is then (in the Lorenz gauge)

\[
\partial_\mu \partial^\mu A^\nu = \Box A^\nu = J^\nu ,
\]

which is precisely the equation you get from Maxwell's equations.

Atomic physics revisited

The quantum revolution began when Max Planck developed his famous law of blackbody radiation by discretizing the energy of radiation into small packets, or bundles. Einstein then used this concept to show that light consists of small packets of energy called photons, and de Broglie established the duality between waves and particles of matter. From then on the development of quantum theory never stopped. Here is a summary of some of the working vocabulary of atomic physics.

The electric quadrupole moment measures the effective ellipsoidal shape of a charge distribution; a non-zero value indicates that the charge distribution is not spherically symmetric. Quantum measurements of nuclei involve discrete values, such as the nuclear spin I and its projection K along the z-axis.

The classical picture of the atom is unstable: a classical orbiting electron would radiate energy continuously and spiral into the nucleus. Bohr's atom started the journey of the old quantum theory. The total energy of an electron in the nth Bohr orbit is

\[
E_n = -\frac{1}{2}\,\frac{\alpha^2 m c^2}{n^2},
\]

where the fine structure constant is

\[
\alpha = \frac{e^2}{4\pi\epsilon_0 \hbar c} \approx \frac{1}{137},
\]

and the radius of the nth Bohr orbit is

\[
r_n = \frac{n^2 \hbar}{\alpha m c} = n^2 a_0 .
\]

The differential cross section is a parameter defining the rate of collisions between sub-atomic particles. Suppose you throw many electrons toward one another at random: very few of them will bounce off each other. But if you throw two basketballs at each other, the probability of a collision is far higher than for electrons, because the size of the objects determines the collision rate. The scattering cross section works the same way: the larger the differential cross section, the higher the collision rate.
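To put rough numbers on the basketball analogy, here is a minimal sketch. It compares the geometric cross-section of a basketball (an assumed radius of about 0.12 m, my own illustrative value) with the standard Thomson cross-section of a free electron:

```python
import math

# Geometric cross-section of a basketball vs. the Thomson cross-section of an electron.
r_ball = 0.12                     # basketball radius, meters (approximate)
sigma_ball = math.pi * r_ball**2  # geometric cross-section, ~4.5e-2 m^2

sigma_thomson = 6.65e-29          # Thomson cross-section of a free electron, m^2

print(f"basketball: {sigma_ball:.2e} m^2")
print(f"electron (Thomson): {sigma_thomson:.2e} m^2")
print(f"ratio: {sigma_ball / sigma_thomson:.1e}")  # the ball is ~1e27 times "bigger"
```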
The Kramers-Heisenberg formula gives the differential cross section for the scattering of a photon by an atomic electron. [equation: Kramers-Heisenberg cross section]

The total decay rate of an excited atomic state can be found using phase space: we sum over final (photon) states to get the total transition rate. Since both the momentum of the photon and the momentum of the electron appear in the calculation, we label the electron's momentum explicitly to avoid confusion. The sum over final photon states becomes an integral over a small volume \(d^3k\) of k-space centred on the wave vector \(\vec k'\). [equations: total decay rate of the atom]

The spin-orbit coupling term has the form

\[
H_{SO} = \frac{1}{2m^2c^2}\,\frac{1}{r}\frac{dV}{dr}\,\vec L\cdot\vec S .
\]

Now let us decompose the electromagnetic radiation field into its Fourier components. Plugging the Fourier decomposition into the formula for the Hamiltonian density, we can find the Hamiltonian of the field. Canonical coordinates and momenta can then be identified, one pair for each harmonic-oscillator mode at each frequency, and they satisfy the canonical commutation relation

\[
[Q_k, P_{k'}] = i\hbar\,\delta_{kk'} .
\]

Writing the Hamiltonian in terms of the corresponding raising and lowering (creation and annihilation) operators gives

\[
H = \sum_k \hbar\omega_k \left( a_k^\dagger a_k + \tfrac{1}{2} \right),
\]

as quoted earlier; this Hamiltonian quantizes the electromagnetic field. An alternative route to quantizing the field starts directly from the notion of light quanta.

The solution of the Dirac equation for the hydrogen atom can be worked out exactly. [equation: Dirac energy levels of hydrogen] Relativistic corrections are the shifts that arise because the electron's velocity is a non-negligible fraction of the speed of light. [equation: relativistic correction for hydrogen] The ionization process can be treated with the same machinery. [equation: ionization rate]

Dirac Lagrangian

The Dirac Lagrangian can be used to find the Dirac equation. The Lagrangian density of the Dirac field is

\[
\mathcal{L} = \bar\psi\left( i\gamma^\mu\partial_\mu - m \right)\psi ,
\]

as in the gauge-invariance discussion above. In classical theory the Lagrangian is the kinetic energy minus the potential energy, and the equation above can be motivated by analogy. The Dirac field transforms under the spinor representation of the Lorentz group. Quantizing the Dirac field involves applying the Euler-Lagrange equation to this density; a similar step applies to the Dirac adjoint field, and from these one constructs the Hamiltonian density. Not all the derivations are included here, for the sake of brevity. The Dirac field can be represented in terms of creation and annihilation operators, as in the mode expansion given earlier, where the b† operators create the antiparticles. The interaction Hamiltonian is given by the integral of the interaction Hamiltonian density over space.

The Bohr model of hydrogen was a successful prediction. The velocity of the electron in the nth orbit is

\[
v_n = \frac{\alpha c}{n} .
\]

Scattering of photons

The photon can be modeled as a quantized field, and this quantized photon field can be used to compute the cross section for photon scattering. The electric dipole approximation is used to simplify the atomic matrix element at low energy, where the wavelength is long compared to the atomic size. In the coupling of the quantized field to the atom, either the \(A^2\) term taken in first order, or the \(\vec A\cdot\vec p\) term taken in second order, contributes to photon scattering; the amplitudes of both are of order \(e^2\).
The matrix element of the \(A^2\) term, for a jump from a photon of wave vector \(\vec k\) and an atomic state \(i\) to a scattered photon of wave vector \(\vec k'\) and an atomic state \(n\), is particularly simple, since it contains no atomic coordinates or momenta. The second-order \(\vec A\cdot\vec p\) term can change the atomic states because of the operator \(\vec p\). The scattering cross section then contains three terms, which come from the Feynman diagrams that contribute to photon scattering at order \(e^2\). This result can be specialized to the case of elastic scattering with the help of some commutators.

Lord Rayleigh calculated the low-energy elastic scattering of light from atoms using classical electromagnetism. If the energy of the scattered photon is less than the energy needed to excite the atom, then the cross section is proportional to \(\omega^4\), so that blue light scatters much more strongly than red light in the colorless gases of our atmosphere. If the energy of the scattered photon is much bigger than the binding energy of the atom, \(\omega \gg 1\ \mathrm{eV}\), then the cross section tends toward that for scattering from a free electron, Thomson scattering.

Helium Atom

The Hamiltonian of the helium atom has the same terms as that of hydrogen, but with a large perturbation due to the repulsion between the two electrons. The total Hamiltonian is

\[
H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} - \frac{Ze^2}{4\pi\epsilon_0 r_1} - \frac{Ze^2}{4\pi\epsilon_0 r_2} + \frac{e^2}{4\pi\epsilon_0 |\vec r_1 - \vec r_2|} .
\]

The perturbation due to the repulsion of the two electrons is of the same order as the rest of the Hamiltonian, so first-order perturbation theory is unlikely to be very accurate. The helium ground state has two electrons in the 1s level; since the spatial state is symmetric, the spin part of the state must be antisymmetric, so s = 0 (as it always is for closed shells). For our zeroth-order energy eigenstates we use product states of hydrogen wavefunctions,

\[
\psi(\vec r_1, \vec r_2) = \phi_{1s}(\vec r_1)\,\phi_{1s}(\vec r_2),
\]

and ignore the perturbation. The zeroth-order energy for the two electrons in the 1s state is then

\[
E^{(0)} = -Z^2 \alpha^2 m c^2 = -4\,\alpha^2 m c^2 \approx -108.8\ \mathrm{eV} \quad (Z = 2).
\]

We can estimate the ground-state energy using first-order perturbation theory, but it will not be very accurate; we can improve the estimate using the variational principle. The main problem with the perturbative estimate is that it does not account for the change in the wavefunctions of the electrons due to screening. We can account for this, in a reasonable approximation, by reducing the charge of the nucleus in the wavefunction (but not in the Hamiltonian); with this variational parameter Z*, we get a better estimate of the energy. The energies of the various states can be displayed in a chart. [chart: helium energy levels]

Notice that the variational calculation still uses first-order perturbation theory; it just adds a variable parameter to the wavefunction, which we use to minimize the energy. This only works for the ground state and for other special states: there is only one permitted (1s)² state, and it is the ground state. For excited states, the spatial states are (usually) different, so they can be either symmetric or antisymmetric under interchange of the two electrons. It turns out that in the antisymmetric state the electrons are further apart, so the repulsion is smaller and the energy is lower. If the spatial state is antisymmetric, then the spin state is symmetric, s = 1. So the spin-triplet states are generally significantly lower in energy than the corresponding spin-singlet states.
This appears to be a strong spin-dependent interaction, but it is actually just the repulsion between the electrons having a big effect that depends on the symmetry of the spatial state, and hence on the symmetry of the spin state. The first excited state has the hydrogenic content (1s)(2s) and has s = 1; we computed the energy of this state. We will learn later that electromagnetic transitions which alter spin are strongly suppressed, causing the spin-triplet states (orthohelium) and the spin-singlet states (parahelium) to have nearly separate decay chains.

Dirac equation revisited

Our goal is to discover the analog of the Schrödinger equation for relativistic spin one-half particles. We should note that even in the Schrödinger theory, the interaction of the field with spin was rather ad hoc: there was no explanation of the gyromagnetic ratio of 2. One can incorporate spin into the non-relativistic equation by using the Schrödinger-Pauli Hamiltonian, which contains the dot product of the Pauli matrices with the momentum operator; schematically,

\[
H = \frac{1}{2m}\left[ \vec\sigma \cdot \left( \vec p - q\vec A \right) \right]^2 + qA^0 .
\]

A little computation shows that this gives the correct interaction with spin, including the factor of 2. This Hamiltonian acts on a two-component spinor. We can extend this concept to the relativistic energy equation: the idea is to replace \(\vec p\) by \(\vec\sigma\cdot\vec p\) in the relativistic energy relation. Instead of an equation which is second order in the time derivative, we can then construct a first-order equation, like the Schrödinger equation, by extending the system to four components. Rewriting and ordering the result as a matrix equation, we obtain an equation that can be written as a dot product between 4-vectors,

\[
\left( i\gamma^\mu \partial_\mu - m \right)\psi = 0,
\]

where the gamma matrices are built out of the Pauli matrices; in 2×2 block form (the standard Dirac representation),

\[
\gamma^0 = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \qquad
\gamma^i = \begin{pmatrix} 0 & \sigma_i \\ -\sigma_i & 0 \end{pmatrix}.
\]

These satisfy the anticommutation relation

\[
\{\gamma^\mu, \gamma^\nu\} = 2g^{\mu\nu} .
\]

In fact, any set of matrices that satisfies these anticommutation relations produces equivalent physics; we will, however, work in the explicit representation above. Defining the adjoint spinor \(\bar\psi = \psi^\dagger\gamma^0\), the quantity

\[
j^\mu = \bar\psi\gamma^\mu\psi
\]

satisfies the equation of a conserved 4-current, \(\partial_\mu j^\mu = 0\), and transforms like a 4-vector. For non-relativistic electrons, the first two components of the Dirac spinor are large while the last two are small. We can use this fact to formulate an approximate two-component equation derived from the Dirac equation in the non-relativistic limit. This "Schrödinger equation", derived from the Dirac equation, agrees well with the one we used to understand the fine structure of hydrogen. The first two terms are the kinetic and potential energy terms of the unperturbed hydrogen Hamiltonian. The third term is the relativistic correction to the kinetic energy. The fourth term is the correct spin-orbit interaction, including the Thomas precession effect that we did not take the time to understand when we did the non-relativistic fine structure. The fifth term is the so-called Darwin term, which we said would come from the Dirac equation; and now it has. For a free particle, each component of the Dirac spinor separately satisfies the Klein-Gordon equation.
\[
\left( \partial_\mu \partial^\mu + m^2 \right)\psi = 0 .
\]

This is consistent with the relativistic energy relation, \(E^2 = p^2c^2 + m^2c^4\). The four normalized solutions for a Dirac particle at rest can be written down directly. [equations: the four rest-frame spinors] The first and third have spin up, while the second and fourth have spin down. The first and second are positive-energy solutions, while the third and fourth are "negative-energy solutions", which we still need to understand. The next step is to find the solutions with definite momentum. The four plane-wave solutions to the Dirac equation have the form

\[
\psi^{(r)}_{\vec p}(x) = N\, u^{(r)}(\vec p)\, e^{i(\vec p\cdot\vec x - Et)/\hbar},
\]

where the four spinors \(u^{(r)}(\vec p)\) are given explicitly in the standard references. E is positive for solutions 1 and 2 and negative for solutions 3 and 4. The spinors are orthogonal, and the normalization constants have been set so that the states are properly normalized and the spinors follow the convention given above, with a normalization that grows with energy. The solutions are not in general eigenstates of any component of spin, but they are eigenstates of helicity, the component of spin along the direction of the momentum.

Note that for solutions 3 and 4, E is negative, and the exponential then has the phase velocity, the group velocity, and the probability flux all in the direction opposite to the momentum as we have defined it. This clearly doesn't make sense: solutions 3 and 4 need to be understood in a way for which our non-relativistic experience has not prepared us. Let us simply relabel these solutions by \(\vec p \to -\vec p\) and \(E \to -E\), so that all the energies are positive and the momenta point in the direction of the velocities; this means we flip the signs in both of the negative-energy solutions. We then have plane waves of the form

\[
e^{\pm i(\vec p\cdot\vec x - Et)/\hbar},
\]

with the plus sign for solutions 1 and 2 and the minus sign for solutions 3 and 4. A ± sign in the exponential is not very surprising from the point of view of possible solutions to a differential equation. The problem now is that for the relabeled solutions the momentum and energy operators must have a minus sign added to them, and the phase of the wave function at a fixed position behaves, as a function of time, in exactly the opposite way from the other solutions. It is as if the negative-energy solutions were moving backward in time. If we change the charge on the electron from −e to +e and change the sign of the exponent, the Dirac equation remains invariant. Thus, we can turn the negative-exponent solution (going backward in time) into a conventional positive-exponent solution if we change the charge to +e: we can reinterpret the negative-energy solutions as positrons. We will make this switch more carefully when we study the charge conjugation operator.

The Dirac equation should be invariant under Lorentz boosts and under rotations, both of which are just changes in the definition of an inertial coordinate system. Under Lorentz boosts, \(\partial/\partial x^\mu\) transforms like a 4-vector, while the γ matrices are constant. The Dirac equation is invariant under boosts along the \(x_i\) direction if we transform the Dirac spinor according to

\[
\psi' = S_{\text{boost}}\,\psi, \qquad S_{\text{boost}} = \cosh\frac{\chi}{2} + \gamma^0\gamma^i \sinh\frac{\chi}{2}, \qquad \tanh\chi = \beta
\]

(the precise signs depend on the conventions chosen). The Dirac equation is invariant under rotations about the k axis if we transform the Dirac spinor according to

\[
\psi' = S_{\text{rot}}\,\psi, \qquad S_{\text{rot}} = \cos\frac{\theta}{2} + \gamma^i\gamma^j \sin\frac{\theta}{2},
\]

with (i, j, k) a cyclic permutation. Another symmetry related to the choice of coordinate system is parity.
Under a parity inversion operation, the Dirac equation remains invariant if we transform the spinor according to

\[
\psi' = S_P\,\psi, \qquad S_P = \gamma^0,
\]

so that the third and fourth components of the spinor change sign while the first two do not. Since we could equally well have chosen \(-\gamma^0\), all we really know is that components 3 and 4 have the opposite parity of components 1 and 2.

From the 4×4 γ matrices we may derive 16 independent components of covariant objects. We define the product of all four gamma matrices,

\[
\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3,
\]

which anticommutes with all the gamma matrices: \(\{\gamma_5, \gamma^\mu\} = 0\). For rotations and boosts, γ₅ commutes with the transformation S, since S is built from pairs of gamma matrices; for a parity inversion, it anticommutes with \(S_P = \gamma^0\). The simplest set of covariants we can make from Dirac spinors and γ matrices is:

scalar: \(\bar\psi\psi\) (1 component)
pseudoscalar: \(\bar\psi\gamma_5\psi\) (1 component)
vector: \(\bar\psi\gamma^\mu\psi\) (4 components)
axial vector: \(\bar\psi\gamma^\mu\gamma_5\psi\) (4 components)
antisymmetric tensor: \(\bar\psi\sigma^{\mu\nu}\psi\) (6 components)

for 16 components in all. Products of more γ matrices turn out to repeat the same quantities, because the square of any γ matrix is ±1.

For many purposes it is useful to write the Dirac equation in the traditional form \(H\psi = E\psi\). To do this we must separate the space and time derivatives, making the equation look less covariant:

\[
H = \vec\alpha\cdot\vec p + \beta m, \qquad \vec\alpha = \gamma^0\vec\gamma, \quad \beta = \gamma^0 .
\]

Neutron balance equation

The mathematical formulation of neutron diffusion theory is based on the balance of neutrons in a differential volume element. Since neutrons do not simply disappear (β decay is ignored here), the following neutron balance must be valid in an arbitrary volume V:

rate of change of neutron density = production rate − absorption rate − leakage rate.

Substituting for the different terms in the balance equation, and dropping the integral over V (because the volume V is arbitrary), we obtain

\[
\frac{\partial n}{\partial t} = s - \Sigma_a \Phi - \nabla\cdot\vec J,
\]

where n is the density of neutrons, s is the rate at which neutrons are emitted from sources per cm³ (either from external sources, S, or from fission, \(\nu\,\Sigma_f\,\Phi\)), \(\vec J\) is the neutron current density vector, Φ is the scalar neutron flux, and \(\Sigma_a\) is the macroscopic absorption cross-section. In steady state, when n is not a function of time,

\[
\nabla\cdot\vec J + \Sigma_a \Phi = s .
\]

Diffusion equation

Fick's law is a diffusion law,

\[
\vec J = -D\nabla\Phi,
\]

which states that neutrons diffuse from regions of high concentration (high flux) to regions of low concentration. We return now to the neutron balance equation, which states that the rate of change of neutron density equals the production rate minus the absorption rate minus the leakage rate, and substitute Fick's law for the neutron current density vector. Since \(\nabla\cdot\nabla = \nabla^2 = \Delta\) (so that \(\mathrm{div}\,\vec J = -D\,\mathrm{div}(\nabla\Phi) = -D\Delta\Phi\)), we obtain the diffusion equation; in the steady state,

\[
-D\Delta\Phi + \Sigma_a\Phi = s .
\]

Pauli exclusion principle

Atomic nuclei are made of protons and neutrons, which attract each other through the nuclear force, while protons repel each other through the electromagnetic force due to their positive charge. These two forces compete, leading to the varying stability of nuclei. Only certain combinations of neutrons and protons form stable nuclei. Neutrons stabilize the nucleus, because they attract each other and the protons, which helps offset the electrical repulsion between protons. As a result, as the number of protons increases, an increasing ratio of neutrons to protons is needed to form a stable nucleus. If there are too many neutrons (neutrons also obey the Pauli exclusion principle) or too few neutrons for a given number of protons, the resulting nucleus is not stable, and it undergoes radioactive disintegration.
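A few well-known stable nuclides illustrate this trend numerically; the proton and neutron counts below are standard values from the chart of nuclides:

```python
# Neutron-to-proton ratio for some well-known stable nuclides, illustrating
# how the ratio needed for stability grows with proton number.
stable = {"He-4": (2, 2), "O-16": (8, 8), "Fe-56": (26, 30), "Pb-208": (82, 126)}

for name, (protons, neutrons) in stable.items():
    print(f"{name}: N/Z = {neutrons / protons:.2f}")
# He-4: 1.00, O-16: 1.00, Fe-56: 1.15, Pb-208: 1.54
```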
Unstable isotopes disintegrate through various radioactive decay pathways, most commonly alpha decay, beta decay, or electron capture. Many rarer types of decay, such as spontaneous fission or neutron emission, are also known. The Pauli exclusion principle also influences the critical energy of fissile and fissionable nuclei. For example, actinides with an odd neutron number are usually fissile (fissionable with slow neutrons), while actinides with an even neutron number are usually not fissile (but are fissionable with fast neutrons). Heavy nuclei with an even number of protons and an even number of neutrons are especially stable, owing to the occurrence of "paired spins" (a consequence of the Pauli exclusion principle); nuclei with an odd number of protons and an odd number of neutrons, on the other hand, are mostly unstable.

An application of Pauli's exclusion principle is also seen inside compact stars. Because the exclusion principle prevents two fermions from occupying the same quantum state at the same time, a degenerate gas of fermions resists compression: the resulting electron degeneracy pressure supports white dwarf stars against further collapse, and the analogous neutron degeneracy pressure helps support neutron stars.

Lamb shift derivation

The fluctuations of the electric and magnetic fields associated with the QED vacuum perturb the electric potential due to the atomic nucleus. This perturbation causes a fluctuation in the position of the electron, which explains the energy shift. The difference in potential energy is

\[
\Delta V = V(\vec r + \delta\vec r) - V(\vec r),
\]

and, expanding to second order, since the perturbation is isotropic,

\[
\langle \Delta V \rangle = \tfrac{1}{6}\,\langle (\delta r)^2 \rangle\, \langle \nabla^2 V \rangle .
\]

The classical equation of motion for the electron displacement \((\delta r)_k\) induced by a single mode of the field of wave vector \(\vec k\) and frequency ν is

\[
m\,\frac{d^2 (\delta r)_k}{dt^2} = -e\,E_k ,
\]

and this is valid only when the frequency ν is greater than the orbital frequency ν₀ in the Bohr orbit, \(\nu > \pi c/a_0\): the electron is unable to respond to the fluctuating field if the fluctuations are slower than the natural orbital frequency in the atom. For the field oscillating at ν, the induced displacement is proportional to \(E_k/(m\nu^2)\) plus its complex conjugate (c.c. denoting the complex conjugate), and the mean-square field of a single vacuum mode involves \(\hbar\nu/(2\epsilon_0 V)\), where V is some large normalization volume (the volume of the hypothetical "box" containing the hydrogen atom). Summing over all k, the mean-square displacement becomes an integral of the form \(\int d\nu/\nu\). This result diverges if the integral is taken without limits (at both large and small frequencies). As explained above, the method is expected to be valid only when \(\nu > \pi c/a_0\), or equivalently \(k > \pi/a_0\); it is also acceptable only for wavelengths longer than the Compton wavelength, or equivalently \(k < mc/\hbar\). One can therefore choose corresponding upper and lower limits for the integral, and these limits make the result convergent, producing the logarithmic factor \(\ln\!\big(1/\pi\alpha\big)\).

For the atomic orbitals and the Coulomb potential,

\[
\nabla^2 V = \nabla^2\!\left( \frac{-e^2}{4\pi\epsilon_0 r} \right) = \frac{e^2}{\epsilon_0}\,\delta^3(\vec r),
\]

so that

\[
\langle \nabla^2 V \rangle = \frac{e^2}{\epsilon_0}\,|\psi(0)|^2 .
\]

For p orbitals, the non-relativistic wave function vanishes at the origin, so there is no energy shift.
But for s orbitals there is a finite value at the origin,

\[
|\psi_{ns}(0)|^2 = \frac{1}{\pi n^3 a_0^3},
\]

where the Bohr radius is

\[
a_0 = \frac{4\pi\epsilon_0 \hbar^2}{m e^2} .
\]

Finally, combining all the individual factors, the difference in potential energy becomes an energy shift of order

\[
\Delta E \sim \alpha^5\, m c^2\, \frac{1}{n^3}\, \ln\frac{1}{\pi\alpha},
\]

where α is the fine structure constant. For the 2s level of hydrogen this is of order a thousand megahertz, in reasonable agreement with the measured Lamb shift of about 1057 MHz.
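Since the result above is controlled by the fine structure constant, here is a quick numerical sketch using standard (CODATA-style) values of the constants. It evaluates α and the characteristic energy scale \(\alpha^5 mc^2\); the 1/n³ factor, the order-one prefactor, and the logarithm are omitted, so this is an order-of-magnitude check only:

```python
import math

# Order-of-magnitude check on the Lamb shift scale, using standard constants.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 9.1093837015e-31     # electron mass, kg
h = 2 * math.pi * hbar   # Planck constant, J*s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6f} ~ 1/{1/alpha:.1f}")      # ~1/137

# Characteristic scale alpha^5 * m c^2, expressed as a frequency (divide by h).
scale_hz = alpha**5 * m * c**2 / h
print(f"alpha^5 mc^2 / h = {scale_hz/1e6:.0f} MHz")  # ~2.6e3 MHz; the 1/n^3 factor,
# prefactor, and logarithm bring the 2s shift near the observed ~1057 MHz.
```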
Hamiltonian mechanics

"The facts are relative but the law is absolute."

Lagrange has perhaps done more than any other to give extent and harmony to such deductive researches by showing that the most varied consequences … may be derived from one radical formula, the beauty of the method so suiting the dignity of the results as to make his great work a kind of scientific poem. (W. R. Hamilton)

Classical mechanics

Classical physics is based on Newton's laws of motion and his universal law of gravitation. The laws of motion apply wherever motion and acceleration are involved. It is remarkable that a few fundamental principles are able to describe our universe completely, or at least large parts of it. Newton's laws of motion are such fundamental facts, and they can be explained in a simple way with little mathematics; most physical laws are related to them in one way or another. Einstein was able to discover relativity because he had mastered the theories that came before him. The laws of motion are as follows:

a) If no external force is applied, a moving body will keep moving in a straight line, and a static body will remain static forever. This law holds everywhere, and everything obeys it, but we will see later that some modification is needed, as Einstein's theory of relativity suggested: straight lines must be generalized to geodesics. Let us break it down. Newton held that a material object is inertial when no force acts on it, so that its inertia opposes any change of its momentum or velocity, and he framed his first law accordingly. But gravity seems to rule the universe and is everywhere; the force of gravity affects every object. It is therefore better to represent gravity as a property of spacetime, and so the idea of the geodesic arose: a geodesic is nothing but a "straight line" in curved spacetime. For more details, see the discussion in the general relativity section.

b) The magnitude of the applied force is proportional to the rate of change of momentum of a body, and the change of momentum occurs in the direction in which the force is applied. This law is as fundamental as the other two; the concepts of force and mass are, in effect, defined through it.

c) Every action has an equal and opposite reaction. This is the case with everything. When we sit on a chair, the chair pushes back upward with a force equal and opposite to the force that we exert on the chair. When we fire a rifle, the bullet and the rifle experience forces that are equal and opposite to each other. Numerous other examples can be given to demonstrate the third law.
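As a minimal numerical sketch of the second law at work (assuming constant gravity, g = 9.81 m/s², and simple Euler time-stepping; the initial values are illustrative), the trajectory of a thrown ball follows entirely from its initial position and velocity:

```python
# Minimal sketch: Newton's second law plus initial conditions fix the whole trajectory.
g = 9.81             # gravitational acceleration, m/s^2
dt = 0.001           # time step, s

x, y = 0.0, 0.0      # initial position, m
vx, vy = 10.0, 10.0  # initial velocity, m/s

while y >= 0.0:
    # a = F/m: the only force here is gravity, acting downward.
    vy -= g * dt
    x += vx * dt
    y += vy * dt

print(f"The ball lands at x = {x:.2f} m")  # ~20.4 m for these initial conditions
```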
Based on these laws of motion, Newton arrived at the idea of his clockwork universe. If a ball is thrown into space, it will fall back to earth after some time; if we know the initial position and velocity of the ball, we can completely determine its trajectory. So Newton reasoned that if the positions of every particle and the forces acting on them were known, an intelligent being, given sufficient time, would be able to calculate the state of the universe at any later time. Past, present, and future would be completely determined. This is known as determinism: the evolution of our universe could be traced forward and backward in time with certainty, with no randomness anywhere. With the advent of quantum mechanics, determinism started to fall apart: physical phenomena at the atomic level became random, like the throwing of dice. A long discussion would be needed to understand quantum mechanics, which for the moment is best left aside. Some kind of determinism is still present in quantum theory, however: classical phenomena are nothing but the statistical average of a large class of quantum phenomena, a fact reflected in Bohr's correspondence principle.

Another revolution happened in classical physics when Kepler developed his laws of planetary motion. Kepler's laws can be derived from Newton's law of gravitation. Kepler was the first to give mathematical descriptions of planetary motion, which he expressed in three laws:

1) Every planet revolves around the sun in an elliptical orbit.

2) Each planet sweeps out equal areas in equal times; that is, in the plane in which a planet orbits the sun, the areas covered by the planet in equal times are equal.

3) The square of the period of the orbit (T²) is proportional to the cube of the distance (D³), where D is the semi-major axis of the ellipse.

Concepts in Hamiltonian mechanics

Hamiltonian mechanics is a reformulation of classical mechanics. William Hamilton first formulated it starting from Lagrangian mechanics, a previous reformulation of classical mechanics introduced by Joseph Louis Lagrange in 1788. In Newtonian mechanics, the time evolution is obtained by computing the total force exerted on each particle of the system and then, from Newton's second law, computing the time evolution of both position and velocity. In contrast, in Hamiltonian mechanics the time evolution is obtained by computing the Hamiltonian of the system in the generalized coordinates and inserting it into Hamilton's equations. The Hamiltonian and the Lagrangian, denoted H and L, both represent the energy content of a physical system, and H is related to L by a definite relation, the Legendre transform given below; both formulations recast classical mechanics with greater flexibility. The Hamiltonian is a function of the generalized coordinates, position q and momentum p, and of the time t,

\[
H = H(q, p, t),
\]

and for many systems it equals the total energy, the algebraic sum of the kinetic and potential energies. The generalized coordinates q and their conjugate momenta p represent the degrees of freedom of the system; the number of degrees of freedom is the number of independent parameters that uniquely characterize its configuration. Generalized coordinates fix the configuration of a system so that the dynamics of the system can be determined from them. For example, a pendulum swinging in one plane can be modeled with a single degree of freedom, namely the angle θ.
A ball rolling on the floor has two degrees of freedom: the translation coordinate x and the rotation angle θ. In this case the system has two kinds of energy: translational kinetic energy and rotational energy. The time evolution of a system is determined by two equations involving the Hamiltonian H, Hamilton's equations:

\[
\dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q} .
\]

The total differential of the Hamiltonian is given by

\[
dH = \frac{\partial H}{\partial q}\,dq + \frac{\partial H}{\partial p}\,dp + \frac{\partial H}{\partial t}\,dt ;
\]

that is, the total differential is the algebraic sum of the changes due to each coordinate separately, plus the change with time. In the relation above we used the Legendre transform,

\[
H(q, p, t) = p\,\dot q - L(q, \dot q, t),
\]

which gives the relation between the Hamiltonian and the Lagrangian. The Lagrangian L is a function of the position q, the velocity dq/dt, and the time t, and from the Legendre transform it is straightforward to pass between the Hamiltonian and the Lagrangian; both are built from the kinetic and potential energies.

"A wise guy is always right. Even when he is wrong, he is right."

This Hamiltonian approach is equivalent to the one used in Lagrangian mechanics. In fact, as shown above, the Hamiltonian is the Legendre transform of the Lagrangian when q and t are held fixed and p is defined as the dual variable, and thus both methods give the same equations for the same generalized momentum. The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes from the symplectic structure of Hamiltonian systems. While Hamiltonian mechanics can be used to describe simple systems such as a bouncing ball, a pendulum, or an oscillating spring, in which energy changes from kinetic to potential and back again over time, its strength shows in more complex dynamical systems, such as planetary orbits in celestial mechanics. The more degrees of freedom a system possesses, the more complicated its time evolution becomes and, in most cases, the motion becomes chaotic.

In quantum mechanics, the Hamiltonian is an operator corresponding to the sum of the kinetic and potential energies of all the particles in the system. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes obtained when one measures the total energy of the system. Because of its close relation to the time evolution of a system, it is of supreme importance in many formulations of quantum theory.

"Have you ever had the feeling that you are not sure whether you are awake or dreaming?"

The concept of phase space is closely tied to Hamiltonian mechanics. A point in phase space specifies the state of a dynamical system at a given moment. Phase space is the space whose axes are the generalized coordinates and momenta (q, p). A system of N particles, with three position coordinates q and three momentum coordinates p for each particle, therefore has a 6N-dimensional phase space, and each state of such a dynamical system is a unique point in it. A one-dimensional phase space is called a phase line, and a two-dimensional phase space is called a phase plane.
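As a small illustration of a phase plane, here is a minimal sketch (my own example: a unit-mass, unit-frequency harmonic oscillator stepped with the semi-implicit Euler method); the state traces a closed curve, here a circle, in the (q, p) plane:

```python
# Trace the phase-plane trajectory of a unit oscillator, H = p**2/2 + q**2/2.
dt = 0.01
q, p = 1.0, 0.0               # initial state: displaced, at rest

points = []
for _ in range(700):          # roughly one period (T = 2*pi)
    p -= q * dt               # dp/dt = -dH/dq = -q   (semi-implicit Euler step)
    q += p * dt               # dq/dt = +dH/dp = p
    points.append((q, p))

# The orbit stays near the circle q**2 + p**2 = 1: a closed phase-plane curve.
print(max(abs(qi*qi + pi*pi - 1.0) for qi, pi in points))  # small, ~1e-2
```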
The phase space of Hamiltonian mechanics is thus 6n-dimensional for a system of n particles. If there are 10 particles, the phase space is 6 × 10 = 60-dimensional; if 20, 120-dimensional; and so on.

Action is a physical quantity. It is expressed as an integral over time, taken along the path through which the system evolves during the interval of integration,

\[
S = \int_{t_1}^{t_2} L\, dt,
\]

where L is the Lagrangian of the system; it is numerically equivalent to an energy multiplied by a time. The action has many applications in both classical and modern physics, and it can be used to derive many laws of nature. Its most useful application is called the principle of least action, which states that for a physical process the action is stationary (often a minimum).

As an example, when a body moves between two events, the spacetime interval along its path is the time between the events as seen by the body itself. For a clock that moves without constraints between two events in spacetime, the time between the events that the clock shows is a maximum; if the clock were constrained to travel by some other route while still being present at the same two events, the elapsed time it recorded would be less. This is a kind of cosmic laziness: our universe, according to this principle, is very lazy, everything following the path of stationary action. The reason the time is a maximum here, not a minimum, is that the spacetime interval along the worldline of a material body is always timelike in relativity.

Before going into more detail, a slightly different formulation of the action can be stated. In this formulation the action is a functional, which takes a function as its argument and returns a scalar:

\[
S[q] = \int_{t_1}^{t_2} L\big(q(t), \dot q(t), t\big)\, dt .
\]

Here q(t) is a generalized coordinate of the system and is itself a function of time; generalized coordinates may also be called degrees of freedom. The Lagrangian is a function of q, dq/dt, and t. The principle of least action is the condition that the variation of the action vanishes. In the language of calculus, this is similar to finding the maximum or minimum values of a function: an ordinary one-variable function becomes stationary where its slope vanishes, at the extremal value of the independent variable. Here the condition is that the first-order change of the action is zero.

When the action is minimal along the path an object follows, the actions corresponding to the small sub-paths that make up the whole path are also minimal; otherwise the total action would not be a minimum. The same interpretation holds for the maximum or minimum of a functional: in the case of a functional, we seek the function that extremizes it. This is Hamilton's principle, whose mathematical expression is

\[
\delta S = 0 ;
\]

the first-order variation of the action is zero. This condition leads to an interesting equation known as the Euler-Lagrange equation, which has many applications in the calculus of variations. For the moment I just state the equation, which is necessarily a differential equation, since the action is a functional, a function of a function:

\[
\frac{d}{dx}\left( \frac{\partial f}{\partial Y'} \right) - \frac{\partial f}{\partial Y} = 0,
\]

where f is a function of Y′ and Y, and Y in turn is a function of the independent variable x. The Euler-Lagrange equation plays a very important role in deducing many theories of physics.
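To add one concrete step (a standard textbook example, included here for illustration): applying the Euler-Lagrange equation to the Lagrangian \(L = \tfrac{1}{2} m\dot q^2 - V(q)\) recovers Newton's second law,

\[
\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}
= \frac{d}{dt}\big(m\dot q\big) + V'(q) = 0
\quad\Longrightarrow\quad
m\ddot q = -V'(q).
\]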
The brachistochrone problem can also be solved using the Euler-Lagrange equation. This is a problem in the calculus of variations, which asks: what is the curve along which a bead, sliding on a frictionless wire under the influence of gravity, descends in the least time? The problem, in other words, is to find the curve of quickest descent. In outline, following the classic optical analogy: Snell's law gives

\[
\frac{v}{\sin\alpha} = \frac{u}{\sin\beta}, \qquad \text{so that along the curve} \quad \frac{v}{\sin\alpha} = \text{constant}.
\]

Galileo had shown that the velocity of a body falling from rest through a height y satisfies \(v = \sqrt{2gy}\). Combining the two,

\[
\frac{y}{\sin^2\alpha} = \text{constant}, \quad \text{or} \quad y = k^2 \sin^2\alpha,
\]

and since \(\sin^2\alpha = 1/(1 + y'^2)\), this becomes

\[
y\,(1 + y'^2) = 2h,
\]

a differential equation whose solution is a cycloid.

Classical Hamiltonian mechanics is a very powerful method, used in both theoretical and practical problems, and it has saved physicists a great deal of labour; with Newton's mechanics alone, many specific problems would be very hard to solve. The simple harmonic oscillator, for instance, can be analyzed through the principles of Hamiltonian mechanics. Its Hamiltonian is the sum of the kinetic and potential energies,

\[
H = \frac{p^2}{2m} + \frac{1}{2}\, m\omega^2 q^2 .
\]

In the quantum theory, the action of the corresponding operator on a function u describing the harmonic motion produces an energy eigenvalue,

\[
\hat H u = E u .
\]

Every triangle satisfies certain identities involving the lengths of its sides and the sines and cosines of the angles it possesses. The law involving sines is called the law of sines,

\[
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C},
\]

and the law involving cosines is called the law of cosines,

\[
c^2 = a^2 + b^2 - 2ab\cos C .
\]

Empirical science

It would be generally agreed that physics is an empirical science, as contrasted with logic and pure mathematics. I want now to define in what this difference consists. We may observe, in the first place, that many philosophers in the past have denied the distinction. Thorough-going rationalists have believed that the facts which we regard as only discoverable by observation could really be deduced from logical and metaphysical principles; thorough-going empiricists have believed that the premisses of pure mathematics are obtained by induction from experience. Both views seem to me false, and are, I think, rarely held at the present day; nevertheless, it will be as well to examine the reasons for thinking that there is an epistemological distinction between pure mathematics and physics, before trying to discover its exact nature. There is a traditional distinction between necessary and contingent propositions, and another between analytic and synthetic propositions. It was generally held before Kant that necessary propositions were the same as analytic propositions, and contingent propositions the same as synthetic propositions. But even before Kant the two distinctions were different, even if they effected the same division of propositions. It was held that every proposition is necessary, assertoric, or possible, and that these are ultimate notions, comprised under the head of "modality".

In mathematics, a function was originally the idealization of how a varying quantity depends on another varying quantity; for example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of function was formalized at the end of the 19th century in terms of Cantor's set theory, and this greatly enlarged the domains of application of the concept. A function is a process or a relation that assigns to each element x of a set X, the domain of the function, a single element y of another set Y (possibly the same set), the converse domain of the function.
If the function is called f, this relation is denoted y = f(x) (read "f of x"); the element x is the argument or input of the function, and y is the value of the function, the output, or the image of x by f. The symbol that is used for representing the input is the variable of the function (one often says that f is a function of the variable x).

A function is uniquely represented by its graph, which is the set of all ordered pairs (x, f(x)). When the domain and the codomain are sets of numbers, each such pair may be considered as the Cartesian coordinates of a point in the plane. In general, these points form a curve, which is also called the graph of the function. This is a practical representation of the function, commonly used everywhere; for example, graphs of functions are used in newspapers to represent the evolution of price indexes and stock market indexes. Functions are widely used in science and in most fields of mathematics; their role is so important that it has been said that they are "the central objects of investigation" in most fields of mathematics.

[Figures in the original: a function depicted metaphorically as a "machine" or "black box" that yields an output for each input; a red curve that is the graph of a function, since any vertical line has exactly one crossing point with it; a map associating each of four colored shapes with its color.]

The sole purpose of this digression into function theory is that Hamiltonian mechanics is built around a function, the Hamiltonian. Every function is a relation, though not every relation is a function. Every relation has a domain and a converse domain. The class of terms which have the relation to some term or other is called the domain of the relation; the class of terms to which some term or other has the relation is called the converse domain. The domain and the converse domain together are called the field of the relation. The term from which a relation proceeds is called the referent, and the term to which the relation proceeds is called the relatum. For example, "husband of" is a relation whose domain consists of married men (its referents) and whose converse domain consists of married women (its relata).

Relational arithmetic is vital to mathematical logic. Some terms and properties must now be explained:

A relation R is reflexive when every term x of its field bears the relation to itself, that is, when xRx always holds.

A relation is symmetrical when, for any terms x and y, if xRy then yRx.

A relation is transitive when, for any terms x, y, and z, if xRy and yRz then xRz.

In every case, xRy means that the relation R holds between x and y; the converse relation Ř then holds between y and x. With this definition we can form the relative product of two relations: the relative product of R and S holds between x and z when there is an intermediate term y such that xRy and ySz; and a transitive relation is one whose relative product with itself is contained in itself (when xRy and yRz, then xRz). For example, "grandfather" is the relative product of the two relations "son" and "father": z is the grandfather of x when there is some person y such that x is the son of y and z is the father of y.

A very useful relation in pure mathematics is the one-to-one relation. We all know what this relation is: to put it simply, a relation is one-to-one when each referent has exactly one relatum and distinct referents have distinct relata, so that if x ≠ x′ then f(x) ≠ f(x′). Using one-to-one relations we can define a progression.
A progression is generated by a one-to-one relation such that exactly one term belongs to the domain but not to the converse domain, and the whole domain is the posterity of this one term.

Much has been written concerning the laws of motion, the possibility of dispensing with Causality in Dynamics, the relativity of motion, and other kindred questions. But there are several preliminary questions, of great difficulty and importance, concerning which little has been said. Yet these questions, speaking logically, must be settled before the more complex problems usually discussed can be attacked with any hope of success. Most of the relevant modern philosophical literature will illustrate the truth of these remarks: the theories suggested usually repose on a common dogmatic basis, and can be easily seen to be unsatisfactory. So long as an author confines himself to demolishing his opponents, he is irrefutable; when he constructs his own theory, he exposes himself, as a rule, to a similar demolition by the next author. Under these circumstances, we must seek some different path, whose by-ways remain unexplored. "Back to Newton" is the watchword of reform in this matter. Newton's scholium to the definitions contains arguments which are unrefuted, and so far as I know, irrefutable: they have been before the world two hundred years, and it is time they were refuted or accepted. Being unequal to the former, I have adopted the latter alternative.

The concept of motion is logically subsequent to that of occupying a place at a time, and also to that of change. Motion is the occupation, by one entity, of a continuous series of places at a continuous series of times. Change is the difference, in respect of truth or falsehood, between a proposition concerning an entity and a time T and a proposition concerning the same entity and another time T′, provided that the two propositions differ only by the fact that T occurs in the one where T′ occurs in the other. Change is continuous when the propositions of the above kind form a continuous series correlated with a continuous series of moments. Change thus always involves (1) a fixed entity, (2) a three-cornered relation between this entity, another entity, and some, but not all, of the moments of time. This is its bare minimum. Mere existence at some but not all moments constitutes change on this definition. Consider pleasure, for example. This, we know, exists at some moments, and we may suppose that there are moments when it does not exist. Thus there is a relation between pleasure, existence, and some moments, which does not subsist between pleasure, existence, and other moments. According to the definition, therefore, pleasure changes in passing from existence to non-existence or vice versa. This shows that the definition requires emendation, if it is to accord with usage. Usage does not permit us to speak of change except where what changes is an existent throughout, or is at least a class-concept one of whose particulars always exists. Thus we should say, in the case of pleasure, that my mind is what changes when the pleasure ceases to exist. On the other hand, if my pleasure is of different magnitudes at different times, we should say the pleasure changes its amount, though we agreed in Part III that not pleasure, but only particular amounts of pleasure, are capable of existence. Similarly we should say that colour changes, meaning that there are different colours at different times in some connection; though not colour, but only particular shades of colour, can exist.
And generally, where both the class-concept and the particulars are simple, usage would allow us to say, if a series of particulars exists at a continuous series of times, that the class-concept changes. Indeed it seems better to regard this as the only kind of change, and to regard as unchanging a term which itself exists throughout a given period of time. But if we are to do this, we must say that wholes consisting of existent parts do not exist, or else that a whole cannot preserve its identity if any of its parts be changed. The latter is the correct alternative, but some subtlety is required to maintain it. Thus people say they change their minds; they say that the mind changes when pleasure ceases to exist in it. If this expression is to be correct, the mind must not be the sum of its constituents. For if it were the sum of all its constituents throughout time, it would be evidently unchanging; if it were the sum of its constituents at one time, it would lose its identity as soon as a former constituent ceased to exist or a new one began to exist. Thus if the mind is anything, and if it can change, it must be something persistent and constant, to which all constituents of a psychical state have one and the same relation. Personal identity could be constituted by the persistence of this term, to which all a person's states (and nothing else) would have a fixed relation. The change of mind would then consist merely in the fact that these states are not the same at all times.

Infinitesimal calculus

The Infinitesimal Calculus is the traditional name for the differential and integral calculus together, and as such I have retained it; although, as we shall shortly see, there is no allusion to, or implication of, the infinitesimal in any part of this branch of mathematics. The philosophical theory of the Calculus has been, ever since the subject was invented, in a somewhat disgraceful condition. Leibniz himself—who, one would have supposed, should have been competent to give a correct account of his own invention—had ideas, upon this topic, which can only be described as extremely crude. He appears to have held that, if metaphysical subtleties are left aside, the Calculus is only approximate, but is justified practically by the fact that the errors to which it gives rise are less than those of observation.* When he was thinking of Dynamics, his belief in the actual infinitesimal hindered him from discovering that the Calculus rests on the doctrine of limits, and made him regard his dx and dy as neither zero, nor finite, nor mathematical fictions, but as really representing the units to which, in his philosophy, infinite division was supposed to lead.† And in his mathematical expositions of the subject, he avoided giving careful proofs, contenting himself with the enumeration of rules.‡ At other times, it is true, he definitely rejects infinitesimals as philosophically valid;§ but he failed to show how, without the use of infinitesimals, the results obtained by means of the Calculus could yet be exact, and not approximate. In this respect, Newton is preferable to Leibniz: his Lemmas* give the true foundation of the Calculus in the doctrine of limits, and, assuming the continuity of space and time in Cantor's sense, they give valid proofs of its rules so far as spatio-temporal magnitudes are concerned.
But Newton was, of course, entirely ignorant of the fact that his Lemmas depend upon the modern theory of continuity; moreover, the appeal to time and change, which appears in the word fluxion, and to space, which appears in the Lemmas, was wholly unnecessary, and served merely to hide the fact that no definition of continuity had been given. Whether Leibniz avoided this error seems highly doubtful: it is at any rate certain that, in his first published account of the Calculus, he defined the differential coefficient by means of the tangent to a curve. And by his emphasis on the infinitesimal, he gave a wrong direction to speculation as to the Calculus, which misled all mathematicians before Weierstrass (with the exception, perhaps, of De Morgan), and all philosophers down to the present day. It is only in the last thirty or forty years that mathematicians have provided the requisite mathematical foundations for a philosophy of the Calculus; and these foundations, as is natural, are as yet little known among philosophers, except in France.† Philosophical works on the subject, such as Cohen's Princip der Infinitesimalmethode und seine Geschichte,‡ are vitiated, as regards the constructive theory, by an undue mysticism, inherited from Kant, and leading to such results as the identification of intensive magnitude with the extensive infinitesimal.§ I shall examine in the next chapter the conception of the infinitesimal, which is essential to all philosophical theories of the Calculus hitherto propounded. For the present, I am only concerned to give the constructive theory as it results from modern mathematics.

The differential coefficient depends essentially upon the notion of a continuous function of a continuous variable. The notion to be defined is not purely ordinal; on the contrary, it is applicable, in the first instance, only to series of numbers, and thence, by extension, to series in which distances or stretches are numerically measurable. But first of all we must define a continuous function. We have already seen what is meant by a function of a variable, and what is meant by a continuous variable. If the function is one-valued, and is only ordered by correlation with the variable, then, when the variable is continuous, there is no sense in asking whether the function is continuous; for such a series by correlation is always ordinally similar to its prototype. But when the values of the function have an order independent of the correlation, continuity becomes significant: f(x) has a limit as x approaches a from the left when, given any number ε however small, any two values of the function, for values of x sufficiently near to a but less than a, will differ by less than ε; in popular language, the value of the function does not make any sudden jumps as x approaches a from the left. Under similar circumstances, f(x) will have a limit as it approaches a from the right. But these two limits, even when both exist, need not be equal either to each other or to f(a), the value of the function when x = a. The precise condition for a determinate finite limit may be thus stated.*

Finite and Infinite

The purpose of the present chapter is not to discuss the philosophical difficulties concerning the infinite, which are postponed to Part V. For the present I wish merely to set forth briefly the mathematical theory of finite and infinite as it appears in the theory of cardinal numbers. This is its most fundamental form, and must be understood before the ordinal infinite can be adequately explained.* Let u be any class, and let u′ be a class formed by taking away one term x from u. Then it may or may not happen that u is similar to u′.
For example, if u be the class of all finite numbers, and u' the class of all finite numbers except 0, the terms of u' are obtained by adding 1 to each of the terms of u, and this correlates one term of u with one of u' and vice versâ, no term of either being omitted or taken twice over. Thus u' is similar to u. But if u consists of all finite numbers up to n, where n is some finite number, and u' consists of all these except 0, then u' is not similar to u. If there is one term x which can be taken away from u to leave a similar class u' , it is easily proved that if any other term y is taken away instead of x we also get a class similar to u. When it is possible to take away one term from u and leave a class u' similar to u, we say that u is an infinite class. When this is not possible, we say that u is a finite class. From these definitions it follows that the null-class is finite, since no term can be taken from it. It is also easy to prove that if u be a finite class, the class formed by adding one term to u is finite; and conversely if this class is finite, so is u. It follows from the definition that the numbers of finite classes other than the null-class are altered by subtracting 1, while those of infinite classes are unaltered by this operation. It is easy to prove that the same holds of the addition of 1. 118. Among finite classes, if one is a proper part of another, the one has a smaller number of terms than the other. (A proper part is a part not the whole.) But among infinite classes, this no longer holds. This distinction is, in fact, an essential part of the above definitions of the finite and the infinite. Of two infinite classes, one may have a greater or a smaller number of terms than the other. A class u is said to be greater than a class v, or to have a number greater than that of v, when the two are not similar, but v is similar to a proper part of u. It is known that if u is similar to a proper part of v, and v to a proper part of u (a case which can only arise when u and v are infinite), then u is similar to v; hence “u is greater than v” is inconsistent with “v is greater than u”. It is not at present known whether, of two different infinite numbers, one must be greater and the other less. But it is known that there is a least infinite number, i.e. a number which is less than any different infinite number. This is the number of finite integers, which will be denoted, in the present work, by α0.* This number is capable of several definitions in which no mention is made of the finite numbers. In the first place it may be defined (as is implicitly done by Cantor†) by means of the principle of mathematical induction. This definition is as follows: α0 is the number of any class u which is the domain of a one-one relation R, whose converse domain is contained in but not coextensive with u, and which is such that, calling the term to which x has the relation R the successor of x, if s be any class to which belongs a term of u which is not a successor of any other term of u, and to which belongs the successor of every term of u which belongs to s, then every term of u belongs to s. Or again, we may define α0 as follows. Let P be a transitive and asymmetrical relation, and let any two different terms of the field of P have the relation P or its converse. Further let any class u contained in the field of P and having successors (i.e. terms to which every term of u has the relation P) have an immediate successor, i.e. 
a term whose predecessors either belong to u or precede some term of u; let there be one term of the field of P which has no predecessors, but let every term which has predecessors have successors and also have an immediate predecessor; then the number of terms in the field of P is α0. Other definitions may be suggested, but as all are equivalent it is not necessary to multiply them. The following characteristic is important: Every class whose number is α0 can be arranged in a series having consecutive terms, a beginning but no end, and such that the number of predecessors of any term of the series is finite; and any series having these characteristics has the number α0. It is very easy to show that every infinite class contains classes whose number is α0. For let u be such a class, and let x0 be a term of u. Then u is similar to the class obtained by taking away x0, which we will call the class u1. Thus u1 is an infinite class. From this we can take away a term x1, leaving an infinite class u2, and so on. The series of terms x1, x2, . . . is contained in u, and is of the type which has the number α0. From this point we can advance to an alternative definition of the finite and the infinite by means of mathematical induction, which must now be explained. 119. If n be any finite number, the number obtained by adding 1 to n is also finite, and is different from n. Thus beginning with 0 we can form a series of numbers by successive additions of 1. We may define finite numbers, if we choose, as those numbers that can be obtained from 0 by such steps, and that obey mathematical induction. That is, the class of finite numbers is the class of numbers which is contained in every class s to which belongs 0 and the successor of every number belonging to s, where the successor of a number is the number obtained by adding 1 to the given number. Now α0 is not such a number, since, in virtue of propositions already proved, no such number is similar to a part of itself. Hence also no number greater than α0 is finite according to the new definition. But it is easy to prove that every number less than α0 is finite with the new definition as with the old. Hence the two definitions are equivalent. Thus we may define finite numbers either as those that can be reached by mathematical induction, starting from 0 and increasing by 1 at each step, or as those of classes which are not similar to the parts of themselves obtained by taking away single terms. These two definitions are both frequently employed, and it is important to realize that either is a consequence of the other. Both will occupy us much hereafter; for the present it is only intended, without controversy, to set forth the bare outlines of the mathematical theory of finite and infinite, leaving the details to be filled in during the course of the work. Having now clearly distinguished the finite from the infinite, we can devote ourselves to the consideration of finite numbers. It is customary, in the best treatises on the elements of Arithmetic,* not to define number or particular finite numbers, but to begin with certain axioms or primitive propositions, from which all the ordinary results are shown to follow. This method makes Arithmetic into an independent study, instead of regarding it, as is done in the present work, as merely a development, without new axioms or indefinables, of a certain branch of general Logic. For this reason, the method in question seems to indicate a lesser degree of analysis than that adopted here. 
I shall nevertheless begin by an exposition of the more usual method, and then proceed to definitions and proofs of what are usually taken as indefinables and indemonstrables. For this purpose, I shall take Peano’s exposition in the Formulaire,† which is, so far as I know, the best from the point of view of accuracy and rigour. This exposition has the inestimable merit of showing that all Arithmetic can be developed from three fundamental notions (in addition to those of general Logic) and five fundamental propositions concerning these notions. It proves also that, if the three notions be regarded as determined by the five propositions, these five propositions are mutually independent. This is shown by finding, for each set of four out of the five propositions, an interpretation which renders the remaining proposition false. It therefore only remains, in order to connect Peano’s theory with that here adopted, to give a definition of the three fundamental notions and a demonstration of the five fundamental propositions. When once this has been accomplished, we will know with certainty that everything in the theory of finite integers follows. Peano’s three indefinables are 0, finite integer* and successor of. It is assumed, as part of the idea of succession (though it would, I think, be better to state it as a separate axiom), that every number has one and only one successor. (By successor is meant, of course, immediate successor.) Peano’s primitive propositions are then the following. (1) 0 is a number. (2) If a is a number, the successor of a is a number. (3) If two numbers have the same successor, the two numbers are identical. (4) 0 is not the successor of any number. (5) If s be a class to which belongs 0 and also the successor of every number belonging to s, then every number belongs to s. The last of these propositions is the principle of mathematical induction. The mutual independence of these five propositions has been demonstrated by Peano and Padoa as follows.† (1) Giving the usual meanings to 0 and successor, but denoting by number finite integers other than 0, all the above propositions except the first are true. (2) Giving the usual meanings to 0 and successor, but denoting by number only finite integers less than 10, or less than any other specified finite integer, all the above propositions are true except the second. (3) A series which begins by an antiperiod and then becomes periodic (for example, the digits in a decimal which becomes recurring after a certain number of places) will satisfy all the above propositions except the third. (4) A periodic series (such as the hours on the clock) satisfies all except the fourth of the primitive propositions. (5) Giving to successor the meaning greater by 2, so that the successor of 0 is 2, and of 2 is 4, and so on, all the primitive propositions are satisfied except the fifth, which is not satisfied if s be the class of even numbers including 0. Thus no one of the five primitive propositions can be deduced from the other four. Peano points out (loc. cit.) that other classes besides that of the finite integers satisfy the above five propositions. What he says is as follows: “There is an infinity of systems satisfying all the primitive propositions. They are all verified, e.g., by replacing number and 0 by number other than 0 and 1. All the systems which satisfy the primitive propositions have a one-one correspondence with the numbers. 
Number is what is obtained from all these systems by abstraction; in other words, number is the system which has all the properties enunciated in the primitive propositions, and those only." This observation appears to me lacking in logical correctness. In the first place, the question arises: How are the various systems distinguished, which agree in satisfying the primitive propositions? How, for example, is the system beginning with 0 to be distinguished from the system beginning with 1?

Variation of the Hamiltonian

(Figure: Stationary values of a smooth real-valued function f of several variables. Illustrated is the case of a function f(x, y) of two variables. This is stationary where its graph (a 2-dimensional surface) is horizontal (∂f/∂x = ∂f/∂y = 0). This occurs (a) where f has a minimum, but also in other situations such as (b) at a saddle point and (c) at a maximum. In the case of Hamilton's principle, or a geodesic connecting two points a and b, the Lagrangian L takes the place of f, but the specification of a path requires infinitely many parameters, rather than just x and y. Again, L may not be a minimum, though a stationary point of some kind.)

Deriving the Euler–Lagrange equation

We would like to find a condition on the Lagrange function L so that its integral, the action S, becomes stationary (extremal). For that, we change the coordinate q(t) by a small variation εη(t), where η(t1) = η(t2) = 0 has to hold. The integral of the Lagrange function becomes

S(ε) = ∫_{t1}^{t2} L(q + εη, q̇ + εη̇, t) dt.

This should be extremal with respect to ε, so we differentiate with respect to ε and set the result equal to 0:

dS/dε |_{ε=0} = ∫_{t1}^{t2} [ (∂L/∂q) η + (∂L/∂q̇) η̇ ] dt = 0.

For the second summand, we use partial integration:

∫_{t1}^{t2} (∂L/∂q̇) η̇ dt = [ (∂L/∂q̇) η ]_{t1}^{t2} − ∫_{t1}^{t2} (d/dt)(∂L/∂q̇) η dt.

The boundary term is equal to 0, since η vanishes at the boundary points; therefore only the last term survives:

dS/dε |_{ε=0} = ∫_{t1}^{t2} [ ∂L/∂q − (d/dt)(∂L/∂q̇) ] η dt = 0.

Now we can factor out η(t): the integral vanishes for all variations η(t) if and only if the bracketed expression vanishes. This yields the Euler–Lagrange equation:

∂L/∂q − (d/dt)(∂L/∂q̇) = 0.

Hamiltonian mechanics in quantum theory

The concept of the Hamiltonian is vital in the quantum mechanical formalism, especially in Schrödinger's equation. In the Schrödinger equation, the total Hamiltonian is the sum of the kinetic energy and the potential energy; in short, the Hamiltonian acting on an energy eigenstate of a quantum system yields the energy of that state. Sometimes two or more distinct eigenstates correspond to the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. When this happens, the energy level is called degenerate. It turns out that degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian. To see this, suppose that |a> is an energy eigenket with energy E_a. Then U|a> is an energy eigenket with the same eigenvalue, since

H U|a> = U H|a> = E_a U|a>.

Since U is nontrivial, at least one pair of |a> and U|a> must represent distinct states. Therefore, H has at least one pair of degenerate energy eigenkets. In the case of the free particle, a unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
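The commutation argument can be checked numerically. Here is a minimal sketch (the matrices are my own example, not from the text): a 3×3 Hamiltonian invariant under cyclic relabelling of its basis states necessarily has a degenerate pair of levels.

```python
# A minimal numerical sketch (example mine): a Hamiltonian commuting with
# a nontrivial unitary operator has degenerate energy levels.
import numpy as np

# H: real symmetric circulant matrix; U: cyclic-shift permutation (unitary).
H = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
U = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

assert np.allclose(U @ H, H @ U)          # [U, H] = 0

vals, vecs = np.linalg.eigh(H)
print(np.round(vals, 6))                  # two of the three eigenvalues coincide

a  = vecs[:, 1]                           # |a>, an eigenket of the degenerate level
Ua = U @ a                                # U|a> has the same energy...
print(np.allclose(H @ Ua, vals[1] * Ua))  # True
print(np.allclose(abs(a @ Ua), 1.0))      # False: U|a> is a distinct state
```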
The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:

U = exp(iεG).

It is easy to show that if U commutes with H, then so does G: [G, H] = 0; therefore

(d/dt) <ψ(t)|G|ψ(t)> = (i/ħ) <ψ(t)|[H, G]|ψ(t)> = 0,

so the expectation value of G is conserved. In obtaining this result, we have used the Schrödinger equation, as well as its dual, <ψ(t)|H = −iħ (d/dt) <ψ(t)|.

Hamilton's equations in classical Hamiltonian mechanics have a direct analogue in quantum mechanics. Suppose we have a set of basis states {|n>} of a quantum system, and write the state as the linear combination |ψ> = Σ_n a_n |n>, where each a_n = <n|ψ> is a complex number and a_n* is its complex conjugate. The analogue of Hamilton's equations in quantum mechanics then reads

iħ (da_n/dt) = Σ_m H_nm a_m,

and in the same way we can show that

−iħ (da_n*/dt) = Σ_m a_m* H_mn.

The complete state is defined by the linear combination of all the quantities of the form a_n |n>.

Equations of the universe

The equations of the universe are basically those of general relativity, although quantum mechanics plays a vital part. The first successful cosmological model was created by Friedmann; for a universe of density ρ, curvature k, and scale factor a(t), the Friedmann equation reads

(ȧ/a)^2 = 8πGρ/3 − kc^2/a^2 + Λc^2/3.

This is our universe, made with quantum mechanics and the theory of relativity, including the laws of thermodynamics. Maxwell's equations are one of the bedrocks of modern science and physics:

∇·E = ρ/ε0,  ∇·B = 0,  ∇×E = −∂B/∂t,  ∇×B = μ0 J + μ0 ε0 ∂E/∂t.
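One famous consequence of Maxwell's equations can be checked in a couple of lines (the constants are mine): combining the two curl equations in vacuum yields waves travelling at c = 1/sqrt(μ0 ε0).

```python
# A one-line numerical aside (constants mine): Maxwell's equations predict
# electromagnetic waves travelling at c = 1 / sqrt(mu0 * eps0).
import math

mu0  = 4e-7 * math.pi   # vacuum permeability, H/m
eps0 = 8.854e-12        # vacuum permittivity, F/m
print(1 / math.sqrt(mu0 * eps0))   # ~2.998e8 m/s, the speed of light
```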
Tuesday, June 25, 2013

Is the wavefunction ontological or epistemological? Part 1: The EPR argument

Quantum mechanics is a fascinating subject, extremely rich in mathematical, physical, philosophical, and historical content. Studying quantum mechanics in school, with its classic problems such as solving the hydrogen atom, is only the very first step in a long journey. The quantum foundations area, with its diversified views, is an equally fascinating domain. At first sight, it looks like the majority of the current interpretations are "obviously" misguided except for your own, whatever that may be, and all other interpretations must be rooted in classical prejudices. However, this is not the case, and it takes some time and effort to fully appreciate and accept all points of view in interpreting quantum mechanics. Into all this mix, I am proposing yet another quantum mechanics interpretation, and I will attempt to show that quantum mechanics is actually intuitive and that it all follows from clear physical principles in a reconstruction program. Since the principle names the theory (e.g., the theory of relativity got its name from the relativity principle), I will call quantum mechanics the theory of elliptic composability, and I will show that all primitive concepts, like for example ontology and epistemology, have to be adjusted to their corresponding composability class. In particular, the quantum wavefunction is neither ontological nor epistemological, meaning it is not "parabolic-ontological" nor "parabolic-epistemological"; it will be shown to be "elliptic-ontological". I will start this journey following the arguments in historical fashion, beginning with the EPR argument. I have no clear idea how many parts this series will contain, probably around 10, but I will keep an open format.

At the dawn of quantum mechanics, Bohr struggled with its interpretation, and the ideas of complementarity and uncontrollable disturbances were a major part of the discussion. Today this is no longer the case, due to advances in the understanding of the mathematical structure of quantum mechanics. Even today most textbooks paint the wrong picture of the uncertainty principle due to sloppy mathematical formulation, and this probably deserves a post of its own for clarification. For the EPR argument it suffices to state that one cannot measure simultaneously with perfect accuracy both the position and the momentum of elementary particles. Einstein, Podolsky, and Rosen then argued along the following lines: what if I have a system which disintegrates into subsystem 1 and subsystem 2, and we measure position on subsystem 1 and momentum on subsystem 2? If the original system was initially at rest, conservation of momentum implies that measuring the momentum of subsystem 2 tells us with absolute precision the momentum of subsystem 1. But wait a minute: on subsystem 1 we measure the position with perfect accuracy as well, so it seems that we have succeeded in beating the uncertainty principle. Quantum mechanics does not allow that, which means quantum mechanics must be incomplete. The whole argument holds provided two critical assumptions hold as well: (1) a local measurement performed on one subsystem does not change the state of the remote subsystem, and (2) locality, i.e. no physical influence can propagate faster than the speed of light. Both assumptions are actually wrong, and later on John Bell refuted the EPR conclusion based on the second assertion (that of locality). Arguing along similar lines with Bell, one can show that the first assumption is invalid as well.
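A minimal numerical sketch (my own construction, anticipating the steering discussion just below): for two qubits in the singlet state, the choice of measurement basis on subsystem 1 changes the conditional states of subsystem 2, while subsystem 2's unconditioned reduced state stays I/2, so no signal can be sent.

```python
# A small sketch (code mine): remote conditional states depend on the local
# measurement basis, but the remote reduced state does not.
import numpy as np

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)      # (|01> - |10>)/sqrt(2)
rho = np.outer(singlet, singlet.conj()).reshape(2, 2, 2, 2)

rho2 = np.trace(rho, axis1=0, axis2=2)              # partial trace over qubit 1
print(np.round(rho2, 3))                            # I/2, whatever is done locally

z_basis = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
x_basis = (np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2))

for basis, name in [(z_basis, "Z"), (x_basis, "X")]:
    for outcome in basis:
        # project qubit 1 onto `outcome`, renormalise qubit 2's conditional state
        amp = np.tensordot(outcome.conj(), singlet.reshape(2, 2), axes=(0, 0))
        cond = amp / np.linalg.norm(amp)
        print(name, np.round(cond, 3))              # conditionals differ per basis
```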
The remote effect due to local measurement is called quantum steering, and while it cannot be used to send signals faster than the speed of light, it does change the remote state. Such effects have been observed in actual experiments. In the elliptic composability reconstruction of quantum mechanics it is easy to understand the root cause: in classical and quantum mechanics alike, observables play a dual role, that of observables and of generators. But while in classical mechanics (parabolic composability) the observables for a total system factorize neatly into a product of observables for each subsystem, in quantum mechanics (elliptic composability) observables and generators are mixed together, and the factorization is not possible in general (see Fig. 3 in http://arxiv.org/pdf/1303.3935v1.pdf). In other words, the system becomes "entangled". In the next post I will show Bell's refutation of the EPR argument based on locality.

Thursday, June 20, 2013

Quantum mechanics and unitarity (part 4 of 4)

Now we can put the whole thing together and attempt to solve the measurement problem. But is there a problem to begin with? Here is a description of the problem as written by Roderich Tumulka, http://www.math.rutgers.edu/~tumulka/teaching/fall11/325/script2.pdf (see page 53). Start with 3 assertions:

• In each run of the experiment, there is a unique outcome.
• The wave function is a complete description of a system's physical state.
• The evolution of the wave function of an isolated system is always given by the Schrödinger equation.

Then, in the standard formulation of quantum mechanics, at least one of them has to be refuted. From the quantum mechanics reconstruction work, the last two bullets are iron-clad and cannot be violated without collapsing the entire theory. This means that the GRW theory and the Bohmian interpretations are automatically excluded. The usual Copenhagen interpretation is not viable either, because it makes use of classical physics (we know that we cannot have a consistent theory of classical and quantum mechanics). Epistemic approaches in the spirit of Peres are not the whole story either because, while collapse is naturally understood as information update, this means that the Leibniz identity is violated as well. So what do we have left? Only the many-worlds interpretation (MWI), or its more modern form, Zurek's relative state interpretation, http://arxiv.org/abs/0707.2832. However, I will argue for another fully unitary solution, different from the MWI/relative state interpretation (and I agree with Zurek that the old-fashioned MWI gives up too soon on finding the solution), but in the same spirit as Zurek's approach. The basic idea is that measurement is not a primitive operation. The experimental outcome creates huge numbers of information copies. The key difference between Zurek's quantum Darwinism and the new explanation is who succeeds in creating the information copies: the full wavefunction (as in quantum Darwinism), or the one and only experimental outcome. In other words, the Grothendieck equivalence relationship is broken by the measurement amplification effects: only one equivalent representative of the Grothendieck group element succeeds in making information copies, and it statistically overwhelms all the other ones (for all practical purposes). The information in the "collapsed part of the wavefunction" is not erased, but becomes undetectable.
Of course there are still open problems of a delicate technical nature to be solved in this new paradigm, but they do seem to get their full answer in this framework. Solving them is a work in progress, and the solution is not yet ready for public disclosure. In subsequent posts I'll show how the wavefunction is neither epistemological nor ontological, and I will touch on Bell's theorem and the recent PBR result, among other things.

Tuesday, June 18, 2013

Quantum mechanics and unitarity (part 3 of 4)

In part 2 we have seen how to construct the Grothendieck group. Can we do this for the composability monoid in the case of classical or quantum mechanics? The construction works only if we have an equivalence relationship, and this naturally exists only for quantum mechanics. There is no Grothendieck group of the tensor product for classical mechanics, and there is no "ontological collapse" there, other than an epistemic update of information in an ignorance interpretation. In quantum mechanics the situation is different because of unitarity, and one can construct an equivalence relationship starting from a property called envariance: http://arxiv.org/abs/quant-ph/0405161. Skipping the boring technical details on how to prove the usual properties of an equivalence relationship, here is the basic idea: whatever I can change using unitarity for the system over here can be undone by another unitary evolution on the environment over there. Therefore the correct way to write a wavefunction in quantum mechanics is not |psi> but, in the Grothendieck way, as the Cartesian pair (|psi>, null), with the second element representing the "negative elements", or the environment degrees of freedom which will absorb the "collapsed information" during measurement. The measurement device should be represented as (null, |measurement apparatus and environment>), and the contact between the system and the measurement device should be represented as the tensor product of the two Grothendieck elements, resulting in:

(|system to be measured>, |measurement apparatus and environment>)

By the equivalence relationship this is the same as:

(|collapsed system A>, |measurement apparatus displaying A and environment>)

as well as all other potential experimental outcomes:

(|collapsed system B>, |measurement apparatus displaying B and environment>)
(|collapsed system C>, |measurement apparatus displaying C and environment>) …

But then, since only one outcome is recorded, we either need to resort to the MWI interpretation, or we need to find another explanation for this. The explanation is that the measuring apparatus is an unstable system which produces massive numbers of information copies (think of Wilson's cloud chamber in Mott's problem). Measurement is not a neat and primitive operation, and the one and only outcome creates an extremely large number of information copies which dwarfs the information about the other potential outcomes, which are now hidden in the environmental degrees of freedom. Sir Nevill Mott showed that in a cloud chamber two atoms cannot both be ionized unless they lie in a straight line with the radioactive nucleus; in other words, we only need to understand the very first ionization. Similarly, in the Schrödinger's cat scenario, we only need to understand the first decay, and we do not need to hide "the other cat" in the environment degrees of freedom. Please stay tuned for the conclusion in part 4.
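As a toy illustration of such Grothendieck pairs, here is a minimal sketch (code mine) of the construction detailed in the part 2 post below: integers built as pairs of naturals, where the second slot plays exactly the role of the "negative" bookkeeping element.

```python
# A toy sketch (code mine) of the Grothendieck construction: pairs of
# naturals standing for integers, with (a, b) ~ (p, q) iff a + q = b + p.

def equivalent(x, y):
    (a, b), (p, q) = x, y
    return a + q == b + p

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def inverse(x):
    return (x[1], x[0])               # the formal negative: swap the pair

seven, minus_three = (7, 0), (0, 3)   # 7 and -3 as Grothendieck pairs
print(equivalent((7, 0), (9, 2)))                        # True: both represent 7
print(equivalent(add(seven, minus_three), (4, 0)))       # True: 7 + (-3) = 4
print(equivalent(add(seven, inverse(seven)), (0, 0)))    # True: 7 - 7 = 0
```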
Saturday, June 15, 2013

Quantum mechanics and unitarity (part 2 of 4)

When talking about measurement, one talks about the collapse postulate. Let us take a look at what happens to the underlying Hilbert space. During collapse, the dimensionality of the Hilbert space is reduced to the dimensionality of the subspace onto which the wavefunction is projected. A key point is that the dimensionality of a Hilbert space is its sole characteristic. Measurement is initiated by first taking the tensor product of the Hilbert space of the system wavefunction with the Hilbert space of the measurement apparatus. This operation increases the dimensionality of the original Hilbert space; the collapse then decreases the dimensionality. As an abstract operation, the tensor product respects the properties of a commutative monoid. Short of the existence of an inverse element, this is almost a mathematical group (http://en.wikipedia.org/wiki/Group_(mathematics)). To model the collapse in a fully unitary way (and free of interpretations) we would like to construct the tensor product group from the tensor product commutative monoid. Is such a construction possible? Indeed it is, and it is called the Grothendieck group construction: http://en.wikipedia.org/wiki/Grothendieck_group

Let us explain this using a simple challenge: let's construct the group of integers Z starting from the abelian monoid of natural numbers N. We would need to introduce negative integers using only positive numbers! At first sight this seems impossible. How can such a thing even be possible? N by itself is not enough, but with the addition of an equivalence relationship it can be done. So consider the Cartesian product N×N, where we call the first element of a pair a positive number and the second element a negative number:

p = (p, 0)
−n = (0, n)

We would like to do something like this:

(p, 0) + (0, n) = (p, n) = p − n

Also, the inverse of q = (q, 0) is −q = (0, q). All this works in general, but the representation of an integer is no longer unique. For example:

7 = (7, 0) = (8, 1) = (9, 2) = …
−3 = (0, 3) = (1, 4) = (2, 5) = …

Therefore we need an equivalence relationship such that two pairs (a, b) and (p, q) are considered equivalent if a + q = b + p. Notice that in the equivalence relationship we used only the "+" operation of the original monoid N. The formal definition of the equivalence relationship is slightly more complex, due to the need to prove the transitivity property of an equivalence relationship: we call two pairs equivalent, (a, b) ~ (p, q), if there is a number t such that a + q + t = b + p + t. Now, since the Grothendieck construction is categorical (universal), it can be applied to the tensor product commutative monoid, and this will explain the collapse postulate in a purely unitary way. Please stay tuned for part 3.

Wednesday, June 12, 2013

Quantum mechanics and unitarity (part 1 of 4)

I will start a sequence of posts showing why quantum mechanics demands only unitary time evolution despite the collapse postulate, and how to solve the problem. For reference, this is based on http://arxiv.org/abs/1305.3594

The quantum mechanics reconstruction project presented in http://arxiv.org/abs/1303.3935 shows that in the algebraic approach, the Leibniz identity plays a central and early role. But what is the Leibniz identity? It is the product rule for derivations: D(fg) = D(f) g + f D(g). All of standard calculus follows from this rule. For example, using recursion one proves D(x^n) = n x^(n-1), and from this and the Taylor series, the derivation rules for all the usual functions follow.
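That recursion can be checked mechanically. A minimal sketch (my own encoding, with polynomials as coefficient lists), in which the only derivative facts supplied are D(1) = 0, D(x) = 1, and the Leibniz rule itself:

```python
# A minimal sketch (encoding mine): D(x^n) = n x^(n-1) derived purely from
# D(1) = 0, D(x) = 1 and the Leibniz rule D(fg) = D(f) g + f D(g).
# Polynomials are coefficient lists: [c0, c1, c2, ...] means c0 + c1 x + ...

def multiply(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def D_power(n):
    """Derivative of x^n, using only the Leibniz rule and recursion."""
    if n == 0:
        return [0]                                  # D(1) = 0
    if n == 1:
        return [1]                                  # D(x) = 1
    x_pow = [0] * (n - 1) + [1]                     # coefficients of x^(n-1)
    term1 = multiply([1], x_pow)                    # D(x) * x^(n-1)
    term2 = multiply([0, 1], D_power(n - 1))        # x * D(x^(n-1))
    return [a + b for a, b in zip(term1, term2)]

print(D_power(5))   # [0, 0, 0, 0, 5], i.e. 5 x^4
```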
In the algebraic formalism of quantum mechanics, the Leibniz identity corresponds in the state space to unitarity. Any breaking of unitarity means that the Leibniz identity is violated as well. This is the case, for example, in the epistemological interpretation of the wavefunction, where the collapse postulate is understood as simply an information update. However (and here is the big problem), breaking the Leibniz identity destroys the entire quantum mechanics formalism. In other words, any non-unitary time evolution is fatal for quantum mechanics. So how can we understand the collapse postulate? Is quantum mechanics inconsistent? Should quantum mechanics be augmented by classical physics to describe the system and the measurement apparatus? From http://arxiv.org/abs/1303.3935 we know that there cannot be any consistent classical-quantum description of nature. Also, the formalism which highlighted the problem shows the way out of the conundrum. Part 2 of the series will present preliminary mathematical structures which will be used to show how quantum mechanics can be fully unitary even during measurement.

Wednesday, June 5, 2013

"Lagrangian-Only Quantum Theory" by Ken Wharton

1. Consider all possible microstates.
2. Eliminate inconsistent states.
3. Assign an equal a priori probability.
4. Calculate probabilities as Bayesian updates.

One last word about this series of posts from the conference. Here is the link to the conference page, http://carnap.umd.edu/philphysics/conference.html, where one can find all the brief descriptions of the talks.

Monday, June 3, 2013

New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 4)

"Quantum information and quantum gravity" by Seth Lloyd

A thought-provoking talk at the conference was that of Seth Lloyd. He showed how to derive Einstein's general relativity equation from quantum limits on measuring space-time geometry and an additional black hole assumption. One way to think of measuring the geometry of space-time is to think of a comprehensive GPS system. Measuring time amounts to measuring the number of clock ticks, and this requires energy. Everybody is familiar with the position-momentum uncertainty principle, but the energy-time uncertainty principle is not so clear cut. This is because in quantum mechanics time is a parameter, not an operator, and care has to be exercised in interpreting the energy-time uncertainty principle. Margolus and Levitin obtained a bound on quantum evolution time in terms of the initial mean energy E of the system: E Δt >= π ħ / 2. From this, the total possible number of clock ticks in a bounded region of space-time (of radius r and time span t) cannot exceed 2 E t / (π ħ). In principle, quantum mechanics does not limit the accuracy of measuring time, and all you need to do is add enough energy. But in general relativity, adding energy in a bounded region will eventually lead to the creation of a black hole. So here is a general relativity assumption: we want the radius of the bounded region to be larger than the Schwarzschild radius Rs = 2GM/c^2. From this (in terms of the Planck time Tp and Planck length Lp) one obtains the maximum number of clock ticks achievable in a bounded region of space-time before creating a black hole: r t / (π Lp Tp). Now r·t is an area, and naive field theory would suggest r^3 t; naïve string theory would suggest, at first sight, r^2 t.
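For a rough numerical feel (the example numbers are mine), the two tick counts quoted above can be evaluated directly before continuing:

```python
# A back-of-envelope sketch (numbers mine) of the bounds quoted above:
# Margolus-Levitin ticks ~ 2 E t / (pi hbar), and the black-hole-limited
# cap ~ r t / (pi Lp Tp) for a region of radius r over a time span t.
import math

hbar = 1.055e-34                 # J s
Lp, Tp = 1.616e-35, 5.391e-44    # Planck length (m) and Planck time (s)

E, t, r = 1.0, 1.0, 1.0          # one joule, one second, one metre

ml_ticks  = 2 * E * t / (math.pi * hbar)
geo_ticks = r * t / (math.pi * Lp * Tp)
print(f"Margolus-Levitin bound: {ml_ticks:.2e} ticks")   # ~6e33
print(f"black-hole geometric cap: {geo_ticks:.2e} ticks")  # ~4e77
```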
From those kinds of area considerations, Seth was able to deduce the general relativity equations, inspired in part by Ted Jacobson's ideas (in fact Seth collaborated with Ted on this result). Now you may ask (as I certainly did): if you start with the Schwarzschild radius and you derive Einstein's equations, are you not vulnerable to charges of circularity? Perhaps, but the result is still interesting. (I have one more story to tell from the conference. Please stay tuned for part 5, the last one.)

Saturday, June 1, 2013

New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 3)

"What is the alternative to quantum theory" by John Preskill

As promised, here is the second story from the subsequent discussions following Preskill's talk. After the talk I was listening to a discussion between John Preskill and Chris Fuchs, and at some point John asked the question: "is there anything else besides classical and quantum mechanics?" Later in the day I approached John, told him that I knew the answer to his question, and started to present my ideas captured in http://arxiv.org/abs/1303.3935

The basic idea is simple. Suppose I have on my left side a physical system A subject to the laws of nature. Also suppose I have on my right side a physical system B subject to the same laws of nature. Then if I perform the tensor product composition of system A with system B, I get the larger system "A tensor B", subject to the same laws of physics. From this I can extract very hard constraints on the allowed form of the laws of nature. In fact, it can be shown that there are only 3 such consistent solutions. One is an "elliptic" solution which corresponds to quantum mechanics, one is a "parabolic" solution which corresponds to classical mechanics, and there is a third, "hyperbolic" solution which corresponds to something we do not fully understand at this time. Another way to look at the 3 solutions is by Planck's constant: positive, zero, or imaginary. Mathematically the whole thing can be naturally expressed in terms of category theory, and physically it corresponds to the invariance of the laws of nature under tensor composition. Now, when I explain this to different people, I usually get a polite nod followed by a polite excuse to end the discussion. However, I did not get this reaction from Preskill, who asked cogent clarification questions. He also told me to look up a recent preprint of Anton Kapustin. I did not remember the name, and the next day I asked John to type it for me in the arXiv search field, and lo and behold I found this preprint, http://arxiv.org/abs/1303.6917, titled "Is there Life beyond QM?" Now, the core inspiration for my result was a 70s paper by Grgin and Petersen, http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1103900192, and Kapustin had the same inspiration. Reading his preprint, it struck me that we had independently discovered the same thing, and I had only managed to upload my preprint 11 days before him. Then the mystery of John's reaction evaporated: Preskill is a colleague of Kapustin's at Caltech. So now I had good news and bad news. The good news was that I was right and my credibility got a boost. The bad news was that I had competition in an area where I thought I worked alone. When I uploaded my preprint, I left out a piece of it, related to the unitary realization of the collapse postulate.
So I rushed to package this result as a separate paper, which I uploaded a few days after the conference: http://arxiv.org/abs/1305.3594 The problem is that any violation of unitarity is fatal to QM, as shown by the QM reconstruction project. This includes the collapse during measurement, even though it can be interpreted as a Bayesian information update. There is an easy remedy to this, though, suggested by this composability/category theory formalism, and it is based on the Grothendieck group construction. (I'll explain how this works in detail in a subsequent post.) As a side benefit this solves the measurement problem and eliminates the MWI interpretation as well. Quantum mechanics reconstruction is an area coming of age, and Luca Bombelli maintains a page keeping tabs on all such projects: http://www.phy.olemiss.edu/~luca/Topics/qm/axioms.html I believe that such approaches will eventually lead to the elimination of all known QM interpretations, as QM will be just as easily and naturally derivable as special relativity. After all, how many conferences dedicated to the "correct" interpretation of special relativity and "ict-imaginary time" do you know of?
Quantum Mechanics

The theory of quantum mechanics describes the laws of physics at the smallest scale, yet these laws must be taken into account to explain many of the phenomena observed by astronomers. The foundations of quantum mechanics date from the start of the 20th century, when Max Planck proposed that the shape of the black-body radiation curve – seen in the spectrum of light produced by a star, for example – could be explained by assuming that the energy levels of the radiation are "quantised"; in other words, the energy of the radiation can only take specific discrete quantities.

Black-Body Radiation and Planck's Constant

Black-body radiation is the electromagnetic radiation produced by an idealised, perfectly opaque and non-reflective object – referred to as a "black body" because it absorbs all electromagnetic radiation incident upon it. A black body is also a perfect idealised emitter of electromagnetic radiation; for example, the spectrum of the light from a glowing piece of iron, heated by a blacksmith's furnace, approximates the idealised spectrum of a black body, as does the light produced by a star. A graph of the spectrum of an ideal black body, plotted as intensity against wavelength, has a characteristic shape and peak wavelength that depends only on the temperature of the body. Since the wavelength of light determines its colour, the colour of a star is related to its temperature – cooler stars are red, while hotter stars appear blue.

In 1900, Max Planck showed that the shape of the black-body radiation spectrum could be described mathematically by assuming that the energy of the radiation emitted can only take discrete "quantised" values, rather than any value within a continuous range of energies. Previous calculations of intensity, based on continuous, non-quantised energy levels, approached infinity at short wavelengths, diverging from empirical observation at around the wavelength of ultraviolet light; hence this problem was known at the time as the "ultraviolet catastrophe". Planck's calculations were based on the idea that electromagnetic radiation is emitted by hypothetical resonant oscillating bodies – later identified as the electrons in atoms on the surface of the black body – and that these electrons can oscillate only at specific, quantised, resonant frequency levels. Planck also showed that the frequency of the oscillation was related to the energy of the electromagnetic radiation produced, via the formula:

Energy = h × frequency

where h is known as Planck's constant, a fundamental constant of nature with a value of approximately 6.626 × 10^-34 m^2 kg / s.

The Photoelectric Effect

The photoelectric effect is a process by which electrons are emitted from materials such as metals when exposed to electromagnetic radiation of high enough energy. Heinrich Hertz discovered the photoelectric effect in 1887. (N.B. the photoelectric effect should not be confused with the photovoltaic effect, which is a related but slightly different process.) In 1900, Philipp Lenard discovered that certain gases also emit electrons, via the photoelectric effect, when illuminated by ultra-violet light. Lenard noted that the energies of the individual electrons emitted were dependent on the frequency of the light and not the intensity, as would have been expected based on James Clerk Maxwell's wave theory of light. In 1905, Albert Einstein suggested that this could be explained if the electrons only absorb light in discrete packets of energy or "quanta".
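As a quick numerical sketch (the example temperatures, and the use of Wien's displacement law for the peak wavelength, are my own additions), the relation Energy = h × frequency and the temperature-dependent peak of the black-body curve can be evaluated directly:

```python
# A numerical sketch (values mine) of the quantised-energy relation E = h f
# and the temperature-dependent peak of the black-body curve (Wien's law).
import math

h = 6.626e-34     # Planck's constant, J s  (m^2 kg / s)
c = 2.998e8       # speed of light, m/s
b = 2.898e-3      # Wien displacement constant, m K

def photon_energy(wavelength):
    """Energy of one quantum of light of the given wavelength (metres)."""
    return h * c / wavelength

for T in (3000, 5800, 10000):              # cool red star, the Sun, hot blue star
    peak = b / T                           # peak wavelength of the spectrum
    print(f"T = {T:5d} K  peak = {peak * 1e9:6.1f} nm  "
          f"E_peak = {photon_energy(peak):.2e} J")
```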
The term photon later came to be used to refer to a single "quantum" of light. Einstein proposed that the Planck relation (Energy = h × frequency) applied also to photons, relating the energy possessed by an individual photon – and hence its ability to eject an electron from the surface of a material via the photoelectric effect – to the frequency of the light. If the light used is below a certain threshold frequency, no electrons are emitted, as the energy of each individual photon is not enough to free an electron, irrespective of the intensity of the light, which is simply a measure of the number of photons. Einstein received the 1921 Nobel Prize for his explanation of the photoelectric effect, although his theory was not accepted by many when it was first proposed, since it seemed to contradict Maxwell's wave theory of light.

The Compton Effect

In 1923, Arthur Compton showed that photons can be scattered by free or weakly bound electrons in a way that could only be explained if a photon is considered to possess particle-like properties. This, combined with Einstein's description of the photoelectric effect, finally led physicists to abandon attempts to produce a description of quantum behaviour by simply imposing quantised limitations on classical theories, and completed the transition from these "old" quantum theories to the new physics of quantum mechanics.

Young's Double Slit Experiment and Wave-Particle Duality

If light is allowed to fall on a screen after passing through two identical slits, each of a narrower width than the wavelength of the light used, a diffraction pattern of alternating light and dark bands is observed. This pattern is caused by the light waves emanating from the two separate slits interfering with each other. Where a wave peak in the light from one slit falls on the screen at a point where a wave trough also falls upon the screen from the second slit, a dark band occurs: this is due to destructive interference, where the phases of the two waves cancel each other out. Where two peaks or two troughs coincide on the screen, so that the two waves are in phase with each other, the waves reinforce each other, and this constructive interference causes a bright band of light to appear at that position on the screen. This experiment was first performed by Thomas Young in 1801 and provided the first direct evidence that light is a wavelike phenomenon, seemingly ruling out Sir Isaac Newton's "corpuscular" theory that light is made up of separate particles. This experiment is, therefore, often known as Young's double slit experiment.

In 1909, four years after Einstein first proposed the existence of photons – individual, particle-like quantum packets of light energy – a version of this experiment was conducted in which the energy of the light was reduced to such a level that only single photons were passing through the slits at any one time. An individual, very faint point of light was observed on the photographic plate, used as the detector screen, for each photon that passed through the slits. As the position of each photon was recorded as it fell on the detector screen, the same pattern of light and dark bands observed in Young's original experiment with high intensity light gradually built up, as the positions of more and more individual photons were recorded.
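This photon-by-photon build-up is easy to mimic numerically. Here is a hedged simulation sketch (all parameters are mine), sampling screen positions from the cos² two-slit probability by rejection sampling:

```python
# A simulation sketch (parameters mine): the two-slit pattern built photon
# by photon, sampling screen positions from the cos^2 interference probability.
import math, random

wavelength, slit_sep, screen_dist = 500e-9, 50e-6, 1.0   # metres

def relative_intensity(x):
    """Two-slit interference intensity at screen position x (small angles)."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

hits = []
while len(hits) < 10000:                   # record 10,000 "photons"
    x = random.uniform(-0.025, 0.025)      # candidate screen position, metres
    if random.random() < relative_intensity(x):
        hits.append(x)                     # this photon lands here

# Crude text histogram: bright and dark bands emerge from individual photons.
for k in range(-12, 13):
    lo = k * 0.002                         # 2 mm bins
    count = sum(1 for x in hits if lo <= x < lo + 0.002)
    print(f"{lo * 1000:+6.1f} mm: {'#' * (count // 20)}")
```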
The gradual appearance of the pattern meant that each single photon of light was still behaving as a wave, and was still able to interfere with itself as it passed through both slits at once, even though each photon was observed as a single point of light on the detector screen. A thought experiment, proposed in the 1960s by Richard Feynman, was at the time not possible due to technical limitations but has now been performed: the position of each photon is measured just before it passes through the slits, so that you should be able to determine which of the two slits the single photon will pass through. If you do this, the interference pattern on the detector screen disappears. Each photon now travels through just one of the slits, as if it were a particle rather than a wave, and no longer interferes with itself. The pattern of light that builds up on the detector screen is now just two simple bands of light, corresponding to the positions of the two slits, where photons that are fired at the correct angle pass directly through one or other of the slits, as if they were behaving as particles travelling in straight lines. This means that light can behave either as a wave or a particle, depending on whether or not you have made an "observation" of it. This strange double identity of light is known as "wave-particle duality".

De Broglie Waves

In 1924 a PhD student called Louis De Broglie predicted that all matter should share this wave-particle duality behaviour. His reasoning was based on the quantised energy levels of electrons in Niels Bohr's model of the atom, which Bohr had introduced in 1913. This described the hydrogen atom as a nucleus consisting of one proton at the centre, with one electron orbiting around it. The electron's orbit could only take on specific energy levels at set distances from the nucleus. The electron could jump between these orbital levels – a "quantum leap" – by emission or absorption of a photon with an energy equivalent to the difference in energy between the quantised orbits. De Broglie's intuition told him that the reason atomic electrons could only take on specific quantised energies was that they, like light, had a wave-like nature with an associated wavelength. For each atomic electron orbit to be stable, the distance around the perimeter of the orbit would have to be equal to a complete number of wavelengths, so that a resonant electron standing wave is set up. Since the electron could not take on orbits between these values, the electron could not spiral in towards the nucleus, as would be predicted if the electron were considered to behave only as a particle without wavelike properties. De Broglie hypothesised that not only electrons but all particles of matter possess these wavelike properties, and that the wavelength of a particle is given by the equation:

Wavelength = h / momentum

and the energy is related to the frequency by:

Energy = h × frequency

as Einstein had shown for photons, where h is Planck's constant. Since momentum is given by the mass of the particle multiplied by its velocity, this means that the greater the mass of the particle, the smaller its De Broglie wavelength, for a given velocity. De Broglie's theory was first confirmed experimentally for electrons in 1927, using the atoms of a crystal lattice of nickel to act as a diffraction grating.
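A quick numerical sketch (the example masses and speeds are mine) shows why a crystal lattice is the right kind of "grating" for electrons, and why everyday objects show no measurable wavelength:

```python
# A short numerical sketch (examples mine) of the De Broglie relation
# wavelength = h / momentum.
h = 6.626e-34   # Planck's constant, J s

def de_broglie(mass_kg, velocity_ms):
    return h / (mass_kg * velocity_ms)

electron = de_broglie(9.109e-31, 1e6)    # an electron at 10^6 m/s
dust     = de_broglie(1e-9, 0.01)        # a 1-microgram grain at 1 cm/s
print(f"electron:   {electron:.2e} m")   # ~7e-10 m, the atomic-lattice scale
print(f"dust grain: {dust:.2e} m")       # ~7e-23 m, unobservably small
```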
The spacing of the atomic planes of the crystal is of the width required to cause diffraction of electrons, in the same way that the material had previously been shown to diffract x-ray photons of a similar wavelength. Young's double slit experiment has also been conducted using a source of electrons, instead of light, fired through the slits towards a detector. The result is that the electrons are observed to behave in exactly the same way as photons, producing either a wavelike interference pattern or two simple bands, depending upon whether or not you observe the position of each electron before it passes through the slits. Similar results are also observed for other, larger particles, such as protons and neutrons. These results show that entities traditionally recognised as particles also display the strange wave-particle duality that is observed for light. To date, the largest particles for which the double slit experiment has been carried out are molecules consisting of 810 atoms. However, De Broglie's theory applies to all matter. The larger the mass, and hence momentum, the smaller the De Broglie wavelength, so that, at anything except the smallest scale, the wavelength becomes vanishingly small and so is not observed.

The Quantum Wave Function and Schrödinger's Equation

De Broglie's theory required a mathematical equation, or "wavefunction", to describe how a matter wave changes with time. In 1926, the Austrian physicist Erwin Schrödinger published an equation which did just that. Schrödinger applied his equation to the orbits of electrons in the Bohr model of the hydrogen atom and found that the results exactly predicted the observed quantised energy levels. Schrödinger's quantum wavefunction does not allow you to determine the position of a quantum particle at a given time, as would be expected of a classical theory such as Newton's laws of motion. It should instead be thought of as providing the probability that a particle will be observed at any specific location at a specific time. The wavefunction assigns a probability to all possible positions of a particle. When an observation is made, the particle will be found in one of these positions; it is the chances of finding the particle at any particular position that can be calculated from the wavefunction. So, in Young's double slit experiment, for example, the Schrödinger wavefunction equation would assign a probability for each single quantum of light being observed at each specific position along the detector screen. The bright bands of light observed where the intensity of the light source is high, and hence many photons strike the screen at once, correspond to regions of high probability for the position of each individual photon, and the dark bands correspond to regions of low probability. When a single photon travels through the apparatus, the wavefunction treats the particle as if it occupies all possible positions at once, and (in the standard interpretation of quantum mechanics) the wavefunction can be considered to "collapse" to one specific location when the observation of the photon is made, as it hits the screen. So the equation gives the probability that the quantum wavefunction will collapse to any specific point on the detector screen. In the standard interpretation of quantum mechanics, the particle can be considered to exist in a superposition of all possible states until an observation is made.
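In code, this probabilistic reading of the wavefunction amounts to sampling outcomes from the squared magnitudes of the amplitudes; a toy sketch (the amplitudes are mine):

```python
# A toy sketch (amplitudes mine): the wavefunction assigns complex
# amplitudes; their squared magnitudes give the observation probabilities.
import numpy as np

amplitudes = np.array([0.6, -0.8j])     # a two-position toy "wavefunction"
probs = np.abs(amplitudes) ** 2         # Born rule: [0.36, 0.64]

# Each observation "collapses" the superposition to one definite position:
outcomes = np.random.choice(len(amplitudes), size=10, p=probs)
print(probs, outcomes)
```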
Schrödinger's Cat

The famous Schrödinger's cat thought experiment considers a cat sealed inside a box with a flask of poisonous gas, a low-level radioactive source and a detector. If a radioactive emission from the source is detected, an automated mechanism is triggered to break the flask, releasing the poisonous gas and killing the cat. After the box is sealed, it is no longer possible to determine whether the cat will still be alive when the box is re-opened, some time later. The entire system of the cat in the box could be described by a quantum wavefunction, which suggests that the cat exists in a superposition of states – simultaneously both alive and dead – until the box is opened and an observation is made.

Quantum Tunneling

Quantum tunneling is a surprising consequence of the existence of matter waves, which is not explicable using classical theories that describe matter purely in terms of particles. Imagine, for example, an electron fired towards a barrier consisting of an electric potential that repels it. Classical theories of the electron's motion say that, if the electron's momentum is not sufficient for it to pass over the potential barrier, the electron will rebound like a ball thrown at a wall.

Schrödinger's equation for the wavefunction of an electron describes the probability of finding the electron at all points in space at a specific time. Notably, the wavefunction does not stop at the edge of the potential barrier, but instead falls off exponentially as it penetrates into the barrier. The probability of finding the electron at some point inside the barrier therefore approaches, but never reaches, zero as the wavefunction penetrates deeper. If the barrier is of finite width, there will always be a non-zero probability of finding the electron at the opposite side of the barrier when you measure its position. This probability is smaller the wider and higher the barrier, but the wavefunction always allows some possibility that the electron will spontaneously appear at the other side. This ability of a particle to jump through a potential barrier is known as quantum tunneling. The greater the distance across the barrier, the higher the barrier and the larger the mass of the particle, the lower the chances that tunneling will occur. A tunneling particle, however, appears on the opposite side of the barrier without any loss of energy.

The quantum tunneling effect is used in the design of certain electronic components, such as tunnel diodes. It also imposes theoretical limits on the miniaturisation of electronics, as electrons will spontaneously tunnel across electrically insulating materials. Notably, quantum tunneling is used by the scanning tunneling microscope to measure the distance from the tip of the microscope's probe to the surface of the material being studied. This allows the microscope to "feel" the surface of a material at incredible resolutions, allowing individual atoms to be resolved. Quantum tunneling can also explain radioactivity, as the emitted particle randomly tunnels through the potential barrier holding it within the atomic nucleus.
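To make the exponential fall-off concrete, here is a small Python sketch using the standard thick-barrier approximation T ≈ e^(−2κL) for a rectangular barrier (the 5 eV electron and 10 eV barrier are illustrative assumptions):

```python
import math

hbar = 1.055e-34   # reduced Planck's constant (J s)
m_e = 9.109e-31    # electron mass (kg)
eV = 1.602e-19     # one electronvolt in joules

def transmission(E_eV, V0_eV, width_nm):
    """Approximate probability of tunneling through a rectangular barrier."""
    # kappa sets how quickly the wavefunction decays inside the barrier.
    kappa = math.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * width_nm * 1e-9)

# A 5 eV electron meeting a 10 eV barrier of increasing width:
for width in (0.1, 0.5, 1.0):
    print(f"barrier width {width} nm: T ~ {transmission(5, 10, width):.1e}")
```

Each extra fraction of a nanometre suppresses the tunneling probability by orders of magnitude; it is this extreme sensitivity to distance that the scanning tunneling microscope exploits.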
The Heisenberg Uncertainty Principle

In 1927, Werner Heisenberg devised his famous (or perhaps infamous) uncertainty principle, which states that, on a quantum scale, certain "complementary" or "conjugate" observable quantities cannot be simultaneously measured to an arbitrary degree of accuracy. For example, a particle's position and momentum are complementary observables. Momentum is given by the particle's mass multiplied by its velocity, so the particle's speed and direction of travel cannot be measured simultaneously with its position in such a way that the particle's future position and momentum can be precisely predicted from the observation.

If you try to measure a particle's position, it is possible to do this to an arbitrary level of precision. However, in doing so, you will cause the particle's momentum to shift by a random, unknown amount. If you subsequently try to measure the particle's momentum, you will then be unsure of its position again. The uncertainty in the measurement of the particle's position, multiplied by the uncertainty in the measurement of the particle's momentum, will always be greater than, or at the very least equal to, a quantity of the order of Planck's constant. In its modern form, this is expressed by the mathematical inequality:

Δx Δp ≥ h / 4π

where Δx and Δp are the uncertainties in the particle's position and momentum, respectively, and h is Planck's constant. So, the more accurately you measure the particle's position, the less sure you can be of its momentum, i.e. its speed and direction. Note that, since Planck's constant is such a small quantity (6.626 × 10⁻³⁴ m² kg / s), this principle generally only affects measurements on the smallest scales.

Heisenberg originally described the reason for this uncertainty as the unavoidable disturbance of the particle when taking a measurement. For example, you can measure a particle's position by illuminating it with light; because of the wavelike properties of light, the limit to how precisely you can resolve the position of an electron depends on the wavelength of the light used. The shorter the wavelength of the light, the higher the resolution achievable. However, the Compton effect (see above) must be taken into account, whereby the electron recoils due to the particle-like nature of the photon that strikes it. The shorter the wavelength of the photon, the higher its energy (see the photoelectric effect, above) and the greater the random recoil of the electron. So, the higher the energy of the light used, the more accurately you can resolve the position of the electron, but the less sure you can be of its velocity and direction of travel (i.e. its momentum) afterwards, due to its random recoil as it is struck by the photon.

It must be noted, however, that this uncertainty is not simply a consequence of experimental error, and cannot be circumvented by devising a more accurate way to measure the electron's position without disturbing its momentum. Any way in which it is possible to determine the electron's position will lead to an uncertainty in the electron's momentum greater than (or, at the very minimum, equal to) that given by the uncertainty principle.
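A quick Python sketch gives a sense of when this bound actually matters (the masses and position uncertainties are illustrative assumptions):

```python
import math

h = 6.626e-34   # Planck's constant (J s)

def min_velocity_uncertainty(mass_kg, delta_x_m):
    """Smallest velocity spread allowed by delta_x * delta_p >= h / (4 pi)."""
    delta_p = h / (4 * math.pi * delta_x_m)
    return delta_p / mass_kg

# An electron localised to the size of an atom (~1e-10 m):
print(f"electron: {min_velocity_uncertainty(9.109e-31, 1e-10):.1e} m/s")   # ~6e5 m/s

# A microgram dust grain localised to a micrometre:
print(f"dust grain: {min_velocity_uncertainty(1e-9, 1e-6):.1e} m/s")       # ~5e-20 m/s
```

For an electron confined to an atom, the minimum velocity uncertainty is hundreds of kilometres per second; for anything of everyday mass, it is immeasurably tiny, which is why the principle goes unnoticed at ordinary scales.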
Another way of thinking about this is to consider the dual wave-particle nature of matter. If a particle is behaving as a wave, it will have a well-defined wavelength, and hence a low degree of uncertainty in its momentum, since momentum is inversely proportional to wavelength, as given by De Broglie's theory of matter waves, above. However, for the matter wave to have a well-defined, easily measurable wavelength, it must be spread out in space over a distance of many wavelengths, and not localised at any particular point, as would be expected in the classical visualisation of a particle.

The more spread out the wave packet is in space, the more wave crests there are to compare against one another and, therefore, the more accurately the wavelength – and hence the momentum – can be determined. The position of the particle's wave packet, however, is then less well defined: the particle's position becomes more "uncertain" the more accurately its wavelength, and hence its momentum, can be measured. Conversely, if the wave packet is bunched tightly together in space, it behaves more like a traditional particle with a more precise position, but it then has a less well-defined wavelength, and therefore a greater "uncertainty" in its momentum.

It should also be noted that the mass of the particle features in the momentum term of the Heisenberg inequality. Since momentum is the product of mass and velocity, the greater the particle's mass, the greater the uncertainty in its momentum for a given uncertainty in its velocity. The position of a more massive particle can therefore be pinned down more tightly than that of a less massive particle with the same uncertainty in its velocity, while still satisfying the inequality of the uncertainty principle.

When Heisenberg's uncertainty principle was first devised, there was much debate about how it should be interpreted. Einstein, in particular, was of the opinion that, even though the position and momentum of a particle could not be known simultaneously, the particle still possessed an intrinsic value for both, hidden from us by fundamental limitations of experimentation. Einstein believed that the theory of quantum mechanics was incomplete, famously declaring that "God does not play dice". This sort of interpretation of the uncertainty principle became known as a "hidden variable" theory. Heisenberg, however, went much further, claiming that not only were the precise values of these quantities unknowable within the bounds of the uncertainty principle, but that precise values did not even exist. Hidden-variable theories involving "pilot waves" which guide a particle's motion (similar to a surfer riding a wave) have been proposed, and could potentially provide a visualisable interpretation of quantum mechanics (see the De Broglie-Bohm theory, below).

The Time-Energy Uncertainty Principle

Another important pair of complementary observable quantities are energy and time. The time-energy version of the uncertainty principle can be expressed mathematically as:

ΔE Δt ≥ h / 4π

where ΔE is the uncertainty in a particle's energy and Δt the uncertainty in the lifetime of the energy state. This time-energy uncertainty principle can be observed in spectroscopy, for example, where "excited" energy states of an atom have a finite lifetime. The more quickly an excited energy state decays back to the lower energy state, the larger the uncertainty in the energy of the higher state. The width of the observed spectral emission line corresponds to this uncertainty in the energy of the excited state: the shorter the lifetime of the energy state, the broader the emission line observed.

A more interesting consequence of the time-energy uncertainty principle, however, is that it allows energy to be "borrowed", effectively out of nowhere. The shorter the time the energy is borrowed for, the more energy a particle can borrow, provided it is paid back within the time limit allowed by the uncertainty principle.
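As a rough back-of-the-envelope sketch of this trade-off (treating ΔE Δt ≈ h / 4π as an order-of-magnitude estimate rather than an exact law), here is the longest a virtual electron-positron pair could survive, in Python:

```python
import math

h = 6.626e-34     # Planck's constant (J s)
c = 2.998e8       # speed of light (m/s)
m_e = 9.109e-31   # electron mass (kg)

# Energy that must be "borrowed" to create an electron-positron pair at rest:
borrowed_energy = 2 * m_e * c**2               # ~1.6e-13 J (about 1 MeV)

# Order-of-magnitude lifetime allowed by the time-energy uncertainty relation:
lifetime = h / (4 * math.pi * borrowed_energy)
print(f"borrowed energy ~ {borrowed_energy:.1e} J")
print(f"maximum lifetime ~ {lifetime:.1e} s")  # ~3e-22 s
```

The heavier the pair, the larger the energy debt and the sooner it must be repaid.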
One consequence of this is that the vacuum of space can no longer be considered to be empty. Instead, it can be thought of as teeming with "virtual" particles, which spontaneously pop into and out of existence by borrowing energy from the vacuum itself. For example, a virtual electron and its antimatter equivalent (see antimatter, below), known as a positron, can be spontaneously produced, as long as they annihilate by recombining within the time limit allowed by the time-energy uncertainty principle. The borrowed energy must be at least equal to the rest-mass energy of the two particles, so the higher the particles' mass, the shorter the time available, under the constraints of the uncertainty principle, for the annihilation to occur. Less massive virtual particles therefore exist for longer than more massive ones.

"Hawking radiation" (named after Stephen Hawking, who proposed its existence) around the event horizon of a black hole is conjectured to occur when one of these pairs of virtual particles falls into the black hole, while the other particle escapes the black hole's gravitational field. In order for the escaping particle to be elevated to the status of a "real", rather than a "virtual", particle, it must pay back the energy borrowed for its creation by carrying away some of the energy of the black hole, thereby reducing the black hole's mass. (See also black holes.)

In quantum field theory, the exchange of these virtual particles is proposed as a mechanism for the fundamental forces of electromagnetism, the strong and weak nuclear forces, and possibly gravity. (See fundamental forces for more details.) This theory applied to the electromagnetic interaction is known as quantum electrodynamics, and describes the electromagnetic force as a consequence of the exchange of virtual photons between electrically charged particles, such as the electron and the proton. For the strong nuclear force, the theory of quantum chromodynamics applies, which describes how the exchange of force-carrying particles called gluons binds together quarks, the constituent particles of the proton and the neutron, inside the nuclei of atoms. (See also particle physics.)

Quantum Spin

In classical physics, a circulating electric charge produces a magnetic field which acts like a bar magnet, with a north and a south pole. The electron is an electrically charged particle which possesses an intrinsic magnetic field, characterised by the electron's magnetic moment. However, early attempts to attribute the existence of the electron's magnetic field to a rotation of the electron, as if it were spinning on its axis, led to problems: to produce the value of the electron's magnetic moment observed in experiments, the classical equations of electromagnetism required the electron to be spinning faster than the speed of light.

In 1928, Paul Dirac derived a version of Schrödinger's wave equation that was compatible with Einstein's special theory of relativity – the first time that the theory of relativity had been successfully combined with quantum mechanics. This relativistic wave equation described the behaviour of particles, such as the electron, at high energies and velocities, and allowed Dirac to derive a value for the magnitude of the electron's magnetic moment that agreed well with experimentally observed values.
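For a sense of scale, Dirac's theory gives the electron a magnetic moment of essentially one "Bohr magneton", with a dimensionless g-factor of exactly 2; the virtual photons of quantum electrodynamics (see above) then nudge that value up by about a tenth of a percent. A minimal Python sketch using standard constants:

```python
e = 1.602e-19      # elementary charge (C)
hbar = 1.055e-34   # reduced Planck's constant (J s)
m_e = 9.109e-31    # electron mass (kg)

# The Bohr magneton: the natural unit for the electron's magnetic moment.
mu_B = e * hbar / (2 * m_e)
print(f"Bohr magneton: {mu_B:.3e} J/T")   # ~9.27e-24 J/T

# Dirac predicts g = 2 exactly; the measured value is ~2.0023193.
g_dirac, g_measured = 2.0, 2.0023193
print(f"QED correction: {100 * (g_measured - g_dirac) / g_dirac:.3f} %")   # ~0.116 %
```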
The Stern-Gerlach Experiment

In 1922, the physicists Otto Stern and Walther Gerlach had shown that a moving atom of silver could be deflected by placing two long magnets, of opposite polarity, above and below the path of the atom, parallel to its direction of motion. Silver atoms are electrically neutral and possess 47 orbiting electrons. The first 46 of these electrons are paired up so that their magnetic fields face in opposite directions and, hence, cancel out. It is the remaining unpaired electron that is responsible for the magnetic field of the silver atom, which causes the observed deflection as it travels between the two magnets. It would have been impossible to perform this experiment on a free electron because of its electric charge. However, the experiment was later performed using hydrogen atoms, which possess only one orbiting electron, with similar results. (Note that the proton also has a quantum spin and magnetic moment, but this is much smaller than that of the electron.)

If an electron is considered to act like a tiny bar magnet, due to its intrinsic spin angular momentum (see Quantum Spin, above), it should be deflected by the non-uniform magnetic field of the apparatus. The deflection is such that a beam of silver atoms is split into two separate beams, one travelling upwards towards the top magnet and one travelling downwards towards the bottom magnet. Before a particle enters the magnetic field, the standard interpretation of quantum mechanics considers its spin direction to be undefined, existing in a superposition of possible spin states, much as Schrödinger's cat is considered to be simultaneously both dead and alive until an observation forces the wavefunction to instantaneously collapse into one of the two possible states.

Dirac's relativistic wave equation (see Quantum Spin, above) predicts the values of the electron's spin and agrees well (but not exactly) with the results of the Stern-Gerlach experiment. The magnitude of the electron's spin is ½ (h / 2π), where h is Planck's constant. The quantity h / 2π appears frequently when calculating quantised angular momentum, and is usually written in the shorthand form ħ, pronounced "h-bar". The electron's spin can therefore be written as ½ħ and, because of this, the electron is often referred to as a "spin-half" particle. The two possible states are referred to as "spin up" and "spin down", depending on whether the electron's spin is aligned parallel to the magnetic field of the Stern-Gerlach apparatus, causing the electron to move upwards towards the top magnet, or aligned opposite to the magnetic field, causing the electron to move downwards.

However, the value of the electron's magnetic moment given by Dirac's equation did not agree exactly with values obtained experimentally. It was only when the virtual photons predicted by the theory of quantum electrodynamics (see the Time-Energy Uncertainty Principle, above) were taken into account that the precise value could be calculated.

Other particles also have their own spin magnetic moments. The proton is also a spin-half particle. However, because the magnitude of a particle's magnetic moment is inversely proportional to its mass, the magnetic moment of the proton is around a thousand times smaller than that of the electron. Even the neutron, although electrically neutral, has a magnetic moment, and is also a spin-half particle. Both the proton and the neutron are made up of fundamental particles known as quarks (see particle physics).
These are also spin-half particles, and are responsible for the spin of both the proton and the neutron. Particles such as the photon are also considered to possess quantum spin, although the photon carries no charge and, hence, has no magnetic moment. However, due to the conservation of angular momentum in interactions with spin-half particles, such as the electron, the photon can be shown to have a quantum spin number of 1. This means that a photon can have the possible spin values of -1 or +1, corresponding to the two possible orthogonal polarisation states of light.

The Pauli Exclusion Principle

Independently of Dirac, Wolfgang Pauli had also explained the splitting of the beam in the Stern-Gerlach experiment as due to the spin magnetic moment of the electron. However, unlike Dirac, who derived the values of the electron's spin from his relativistic wave equation, Pauli introduced the mathematical description of quantum spin "by hand" into his equations in order to describe the results of the Stern-Gerlach experiment in quantum mechanical terms.

Pauli used his theory to explain why particles with half-integer spin, known as fermions (after Enrico Fermi), cannot share the same position and quantum mechanical state. This is known as the Pauli exclusion principle, and is the reason why particles with half-integer spin can be considered to behave as particles of matter, which cannot share the same position in space, whereas particles with integer spin, such as the photon, can group together in large numbers and pass through each other. Particles with integer spin are known as bosons (after Satyendra Nath Bose) and include the force-carrying particles of quantum field theory: the photon, which mediates the electromagnetic field, the W and Z particles, which mediate the weak nuclear force, and the gluon, which mediates the strong nuclear force, are all spin-one particles.

Dirac's relativistic wave equation was based on a quantum mechanical version of Einstein's energy-momentum relation, derived from the special theory of relativity. This is an expanded form of the well-known equation E = mc², where E represents energy, m represents mass and c is the speed of light. E = mc² essentially states that the mass of any object is equivalent to energy. Since the speed of light squared is a very large number, 1 kilogram of mass is equivalent to approximately 9 × 10¹⁶ joules (90 petajoules) of energy.

The energy-momentum form of this equation includes a term denoting the energy equivalent to the mass of the particle at rest (m₀) and a second term denoting the kinetic energy of the particle due to its momentum (p). Together these give the total energy possessed by the particle:

E² = m₀²c⁴ + p²c²

Since the left-hand side of this equation is the square of the total energy, calculating the energy E requires taking the square root of the right-hand side, which leads to two possible answers, since any square root has both a positive and a negative solution (e.g. the square root of 4 is either 2 or -2). Dirac's quantum mechanical version of this equation therefore also had two solutions for the energy of a particle. The positive solutions can be considered to represent particles of ordinary matter, whereas the negative solutions can be thought of as representing particles of positive energy (and hence positive mass) but with the opposite charge. This oppositely charged matter came to be known as antimatter.
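A tiny numerical sketch of the two roots (electron rest mass; the momentum value is an illustrative assumption):

```python
import math

c = 2.998e8       # speed of light (m/s)
m0 = 9.109e-31    # electron rest mass (kg)
eV = 1.602e-19    # one electronvolt in joules
p = 1e-21         # an illustrative momentum (kg m/s)

# Rest-mass energy alone: E = m0 c^2 (~0.511 MeV for the electron).
rest_energy = m0 * c**2
print(f"electron rest energy: {rest_energy / (1e6 * eV):.3f} MeV")

# Full energy-momentum relation: E^2 = m0^2 c^4 + p^2 c^2.
E = math.sqrt(rest_energy**2 + (p * c) ** 2)
print(f"positive root: +{E:.2e} J")   # ordinary matter (the electron)
print(f"negative root: -{E:.2e} J")   # reinterpreted as the antiparticle (positron)
```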
The existence of the electron's antimatter counterpart (or antiparticle), named the positron, was confirmed experimentally in 1932 by Carl Anderson. Most fundamental particles have corresponding antiparticles; however, the chargeless bosons, such as the photon, the Z particle, the Higgs boson (see particle physics) and the graviton – the hypothetical particle proposed to mediate the force of gravitation – are considered to be their own antiparticles. When a particle of matter collides with a particle of antimatter, they "annihilate", leaving only high-energy photons, which radiate away the energy of the particles' mass and momentum. Other particles can also be produced, provided that the total energy of the collision is greater than the rest-mass energy of the resulting particles.

Quantum Entanglement

Suppose two identical particles are created through a random radioactive decay process, such that they are always created spinning in opposite directions, due to the conservation of angular momentum. Each particle's spin is quantised, and can be found in either a spin up or a spin down state, as measured along any particular axis. The two particles fly off in opposite directions, but it is not possible to predict which particle goes in which direction.

Suppose that you and your friend each intercept one of the two particles and measure its spin. There is a fifty-fifty probability of the particle you intercepted having either spin up or spin down when you measure it. The same is true for your friend's particle; however, you will always find that your particle has the opposite spin to your friend's, no matter how many times the process is repeated.

Fair enough; however, according to the standard interpretation of quantum mechanics, as with Schrödinger's cat (see above), the spin of each particle isn't defined until you measure it. Your particle exists in a superposition of states until you force it to take on just one of the two possible states by making an observation. The particles' states are said to be "entangled", since their spin values depend upon each other, yet neither is defined.

The problem with this is that, if the spin of your own particle suddenly takes on one distinct value when you measure it, your friend's particle must take on the opposite spin value at precisely the same moment, no matter how far apart you are when you make your measurement. This means that the information about the spin value of your particle must somehow be transmitted to your friend's particle at the exact moment that you measure your particle's spin, implying that the information has travelled between the two particles at an effectively infinite speed. This would seem to violate the fundamental principle of the special theory of relativity that nothing can travel faster than the speed of light.

If the spin of your friend's particle were not set at precisely the same moment as your own, it would be possible for your friend to measure the spin of their particle before any light-speed signal from your measurement could reach them. It would then be possible for you both to measure the same value for your particles' spins, violating the conservation of spin angular momentum, which requires the two particles to have opposite spin values from the moment they are created.
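As a toy illustration, here is a deliberately classical "sealed envelope" simulation in Python – each pair is simply created with opposite, fixed spins – of the kind Einstein's hidden-variable view (below) imagines. Measured along a single shared axis, it reproduces the perfect anticorrelation exactly, which is why such measurements alone cannot distinguish this picture from genuine entanglement; Bell's theorem, further below, shows where the two pictures come apart:

```python
import random

def create_pair():
    """Classical stand-in for an entangled pair: opposite spins fixed at creation."""
    yours = random.choice(["up", "down"])
    friends = "down" if yours == "up" else "up"
    return yours, friends

trials = 100_000
matches = 0
for _ in range(trials):
    yours, friends = create_pair()
    if yours == friends:
        matches += 1

print(f"same-spin results: {matches} out of {trials}")   # always 0: perfect anticorrelation
```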
Albert Einstein was among those who rejected this explanation of quantum entanglement, calling it "spooky action at a distance". In 1935, Einstein, Boris Podolsky and Nathan Rosen co-authored a scientific paper which first pointed out that the standard quantum mechanical view of entanglement required a faster-than-light signal between particles. This problem became known as the EPR paradox, after the paper's authors. Einstein preferred a "hidden variable" explanation, believing that, although quantum mechanics is a correct theory, it is incomplete: it treats as undefined-until-measured certain internal quantities which, he believed, always have a hidden but definite value.

In the hidden-variable interpretation, it is simply as if the two particles both had their spin states defined when they were created. This is analogous to someone writing "Spin Up" on one piece of paper and "Spin Down" on another, sealing the two pieces of paper in separate envelopes and posting one to you and one to your friend. Until one of you opens their envelope, neither of you can know who has which piece of paper, but as soon as you examine yours, you instantly know what your friend's says – not very "spooky", and requiring no action at a distance.

The standard quantum mechanical view – championed by Heisenberg and others – with its superpositions of states, is more like a third person flipping two coins into the air, one towards you and one towards your friend. Since each coin is spinning fast, it cannot be said to be heads or tails until it is caught. If the states of the coins were somehow entangled, then when you caught your coin, it would be as if your friend's coin instantaneously took on the opposite value, even though it had not yet been caught out of the air.

Strictly speaking, however, this instantaneous connection does not violate the theory of relativity. Only signals that carry useful information faster than light would violate the principle that cause must always come before effect (the principle of causality), which would lead to paradoxical situations and be effectively equivalent to travelling backwards in time. Since there is no way to use the connection between the particles to communicate any information faster than the speed of light, it does not cause a problem. You cannot tell whether your particle is still in a superposition of spin states or not (i.e. whether your friend has measured their particle's spin yet) without making a measurement yourself, which forces it to adopt one value and breaks the entanglement anyway. You cannot somehow monitor your particle for the moment at which your friend's measurement forces it out of its superposition of states, so there is no way for your friend to use this effect to send you a signal. Hence, causality is not violated, even though the means by which the two particles share their states instantaneously remains mysterious.

Bell's Theorem

In 1964, John Stewart Bell, a physicist from Northern Ireland, proposed an experiment that could distinguish whether or not a hidden-variable theory could reflect the true nature of reality. Consider the above scenario of the two entangled particles.
In order to measure your particle's spin, you can use a magnetic field, as described in the Stern-Gerlach experiment, above, and your friend can also use a magnetic field to measure the spin of their particle. If the magnetic field of your apparatus – and hence the axis along which you measure the spin of your particle – is exactly aligned with the magnetic field of your friend's apparatus then, for each particle pair created via the random radioactive decay process, you will measure the spin of your particle to be opposite to the spin of your friend's particle. If you measure your particle as spin up, your friend will be certain to measure their particle as spin down, and vice versa, since the two particles are in a combined entangled state, due to the conservation of angular momentum when they were created. If you repeat this process for each pair of entangled particles produced by the radioactive source, there will be a perfect anti-correlation between the spin measurements of your particles and those of your friend's particles. In other words, the spins of each pair of entangled particles will always be aligned opposite to each other, when measured along the same axis.

If you then turn your apparatus through 180 degrees, so that its magnetic field points in the opposite direction to that of your friend's apparatus, the measurement of your particle's spin will always be the same as your friend's measurement, since it is made along the opposite axis. When you measure your particle as spin up, your friend will also measure its entangled partner as spin up, and when you measure your particle as spin down, your friend will also measure theirs as spin down.

However, if you align the magnetic field of your apparatus at 90 degrees to your friend's apparatus, then no correlation at all will be observed. The standard interpretation of quantum mechanics says that this is because the components of a particle's spin along two perpendicular directions cannot both be known at the same time, due to the limits of the Heisenberg uncertainty principle. The spin of the particle along the original "up-down" axis is a conjugate observable quantity to the particle's spin along the perpendicular "left-right" axis: you cannot measure both quantities precisely at the same time, in the same way that you cannot precisely measure both a particle's position and its momentum at the same time. The measurement of your particle's spin in the "left-right" direction can be considered to destroy the information about its spin in the original "up-down" direction, forcing it back into a superposition of the two possible states. This leads to a fifty-fifty chance that the measurement of your particle's spin in the perpendicular direction will match the spin of your friend's particle, i.e. there is no correlation between your and your friend's spin measurements along perpendicular axes.

The definitions of up-down and left-right are completely arbitrary, of course, although for you to observe a perfect correlation, or anti-correlation, with your friend's spin measurements, your apparatus must be aligned in the opposite direction to theirs, or in the same direction, respectively.
If you align your apparatus so that its magnetic field points in the opposite direction to your friend's, but then tilt the axis by only one degree, most of the particles you observe will still give the same spin reading as your friend's. The correlation, however, will no longer be perfect: a small fraction of the pairs will now give opposite readings. If your friend then also tilts their measuring apparatus by one degree, in the opposite direction, the correlation between their particles' spins and your own will drop further still.

If a hidden-variable theory reflects the true nature of reality, then the number of mismatches you observe can, at most, double: your friend has tilted their apparatus by the same angle as yours, just in the opposite direction, so the combined angle between the two devices has doubled, and each tilt can contribute at most its own share of mismatches.

Quantum mechanics, however, predicts a different result. For small angles, the probability that your particle's spin will be measured opposite to your friend's grows in proportion to the square of the angle θ between the two apparatuses (for spin-half particles, the mismatch probability is sin²(θ/2), where θ is measured from perfect alignment). So, when only your apparatus was tilted by one degree, the mismatch rate was proportional to 1² = 1; once your friend has also tilted theirs by one degree in the opposite direction, the combined angle is two degrees, and the mismatch rate is proportional to 2² = 4. This is four times the original mismatch rate – double the maximum allowed by the hidden-variable analysis above.

Experimental tests of Bell's theorem have been carried out, although it is usually more practical to measure the polarisations of entangled photons (for which an analogous, but slightly more complicated, version of Bell's theorem applies), rather than particle spins. The results of these experiments are consistent with the predictions of quantum mechanics, ruling out a broad class of hidden-variable theories. The conclusion that must be drawn from these tests of Bell's theorem is that semi-classical versions of quantum mechanics, assuming local hidden variables, cannot replicate the results of experiment and are therefore not consistent with the true nature of reality.
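Here is that comparison as a small Python sketch (a toy calculation using the spin-half mismatch probability sin²(θ/2); the one-degree tilt is just the illustrative value used above):

```python
import math

def qm_mismatch(theta_deg):
    """Quantum prediction for the mismatch rate at relative angle theta (spin-half pairs)."""
    return math.sin(math.radians(theta_deg) / 2) ** 2

one_tilt = qm_mismatch(1.0)    # only your apparatus tilted, by 1 degree
two_tilts = qm_mismatch(2.0)   # both apparatuses tilted, 2 degrees apart

# Bell's inequality: with local hidden variables, the 2-degree mismatch rate
# can be at most the sum of two 1-degree mismatch rates.
bound = 2 * one_tilt

print(f"mismatch at 1 degree: {one_tilt:.2e}")
print(f"mismatch at 2 degrees: {two_tilts:.2e}")              # ~4x the 1-degree rate
print(f"local hidden-variable bound: {bound:.2e}")
print(f"quantum prediction exceeds the bound by: {two_tilts / bound:.2f}x")
```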
Possible Interpretations of Quantum Mechanics

So where does this leave the hope of finding a visualisable interpretation of the underlying mechanisms of quantum physics? Bell's theorem ruled out local hidden-variable theories of quantum mechanics, but it still leaves a number of possible interpretations, including the following:

The Copenhagen Interpretation

The standard interpretation of quantum mechanics is also known as the Copenhagen interpretation, a term coined by Heisenberg in the 1950s to refer to the theories of quantum mechanics developed at the Niels Bohr Institute in Copenhagen, Denmark, in the 1920s. This is a nondeterministic, probabilistic view of quantum mechanics, which accepts that the true nature of reality is inherently unknowable. This interpretation considers that we should not attempt to visualise the processes behind quantum mechanics as we would for a classical theory. Heisenberg himself preferred to think that it was sufficient for a theory to be describable mathematically for it to be considered visualisable.

The Many-Worlds Theory

The many-worlds theory postulates that, for every event with more than one possible outcome, the universe splits, so that all possible versions of reality are played out in parallel universes to our own. This concept has been widely used by science fiction writers and, as far as we know, there is nothing in the laws of physics that expressly prohibits this version of reality. There is, however, currently no way to test the theory and, unless there is some way to interact with these parallel universes, we can never hope to verify their existence.

Superdeterminism

Superdeterminism attempts to avoid the problems of hidden-variable theories exposed by Bell's theorem by suggesting that the outcomes of all events are predetermined – including the experimenters' choices of which measurements to make, which undermines an assumption built into Bell's argument. This would mean, however, that the notion of free will is merely an illusion.

The Computer Simulation Hypothesis

It has been suggested that what we perceive as reality is in fact akin to a computer simulation. This would remove the need to explain the mechanism behind quantum phenomena, since the result of every event would merely be a random assignment based on the mathematical calculation of the probability distribution of the wavefunction.

The De Broglie-Bohm Pilot-Wave Theory

In the 1920s, De Broglie suggested that the mysterious phenomenon of wave-particle duality could be explained if the motion of a particle were determined by a guiding "pilot wave". De Broglie eventually gave up on his attempts to fully formulate this theory; however, it was subsequently revived and improved upon by David Bohm in the 1950s, with encouragement from Albert Einstein, becoming an alternative to the Copenhagen interpretation. The theory is fully consistent with experimental observation – being explicitly non-local, it is not ruled out by Bell's theorem – and indeed John Stewart Bell (see Bell's theorem, above) was himself a proponent of the De Broglie-Bohm theory.

In Young's double slit experiment, for example, this theory says that the particle always passes through just one of the slits; the pilot wave, however, passes through both slits and is diffracted. The particle's position when it is observed on the screen is therefore determined by the diffracted pilot wave – as if the particle were "surfing" the pilot wave – although the particle does not have wavelike characteristics itself.

Although this theory does provide a possible way to visualise quantum mechanics in a more classical sense, pilot-wave theories do not provide any more information about a quantum mechanical system than theories based on the standard Copenhagen interpretation. Because of this, it is possible that we will never be able to experimentally confirm or rule out the existence of such pilot waves, so whether they have any real relevance to quantum theory is debatable. The same criticism, however, can be applied to any of the alternative theories listed above. Since the Copenhagen interpretation does not even attempt to describe a mechanism for quantum mechanical phenomena, De Broglie-Bohm theory might seem the least exotic of the possible explanations of the true nature of reality. It is even possible that testable differences between the predictions of pilot-wave theories and the Copenhagen interpretation might arise at smaller scales still.
It has also been speculated that the theory might point in the right direction for the development of a fully quantum-mechanical treatment of general relativity, which has, so far, eluded physicists working under the standard assumptions of quantum mechanics.

Interestingly, it has recently been shown by Yves Couder et al. that droplets steered by pilot waves on the surface of silicone oil exhibit behaviour analogous to quantum mechanical effects – such as quantised orbital levels, tunneling and annihilation – but at much larger, laboratory scales. YouTube videos showing some of these oil droplet experiments are available at https://youtu.be/nmC0ygr08tE.
Vol. 63, No. 7 (2014) Damped brachistochrone problem and the relation between constraint and theorem of motion Ding Guang-Tao 2014, 63 (7): 070201. doi: 10.7498/aps.63.070201 Abstract + The damped brachistochrone problem and that with non-zero initial velocity are studied. Based on the discussion of these problems, one may take theorems of motion as constraints for some systems, and whether the constraints are holonomic or nonholonomic is related to the fact that the differential theorems of motion are integrable or non-integrable. Simulation of optimal control of train movement based on car-following model Ye Jing-Jing, Li Ke-Ping, Jin Xin-Min 2014, 63 (7): 070202. doi: 10.7498/aps.63.070202 Abstract + Optimal control of train movement is an important way to reduce transport cost, enhance service level, and realize sustainable development. In this paper, based on traditional optimal velocity car-following model, an improved simulation model is presented, it is used to optimize the velocity control of train movement in urban railway system. The proposed model is established by introducing a new function of objective optimal velocity into the classical optimal velocity model (See Phys. Rev. E 51, 1035, Bando et al, 1995) to realize the optimal control of train movement in complicated conditions. Numerical simulation takes the Beijing City Metro Yi Zhuang line as an example. Here some reality measurement data is used. Results show that the proposed model can well describe the dynamic characteristics of train movement under the complex limited condition. Simulation results are close to reality measurement data. This demonstrates that the proposed model is valid. Further, by analyzing the space-time graph, the change of train velocity and travel time, the evolution characters of train flow under complex conditions are discussed. Symplectic FDTD algorithm for the simulations of double dispersive materials Wang Hui, Huang Zhi-Xiang, Wu Xian-Liang, Ren Xin-Gang, Wu Bo 2014, 63 (7): 070203. doi: 10.7498/aps.63.070203 Abstract + Combined with the Lossy Drude-Lorentz dispersive model, a symplectic finite-difference time-domain (SFDTD) algorithm is proposed to deal with the double dispersive model. Based on matrix splitting, symplectic integrator propagator and the auxiliary differential equation (ADE) technique, with the rigorous and artful formula derivation, the algorithm is constructed, and detailed formulations are provided. Excellent agreement is achieved between the SFDTD-calculated and exact theoretical results when transmittance coefficient in simulation of double dispersive film in one dimension is calculated. As to numerical results for a more realistic structure in three dimensions, the simulation of periodic arrays of silver split-ring resonators using the Drude dispersion model are also included. The transmittance, reflectance, and absorptance of the structure are presented to test the efficiency of the proposed method. Our method can be used as an efficiency simulation tool for checking the experimental data. Research on biomolecule-gate AlGaN/GaN high-electron-mobility transistor biosensors Li Jia-Dong, Cheng Jun-Jie, Miao Bin, Wei Xiao-Wei, Zhang Zhi-Qiang, Li Hai-Wen, Wu Dong-Min 2014, 63 (7): 070204. doi: 10.7498/aps.63.070204 Abstract + In order to enhance the performance of AlGaN/GaN high electron mobility transistor (HEMT) biosensor, millimeter grade AlGaN/GaN HEMT structure have been designed and successfully fabricated. 
Factors influencing the capability of the AlGaN/GaN HEMT biosensor are analyzed. UV/ozone is used to oxidize GaN surface and then 3-aminopropyl trimethoxysilane (APTES) self-assembled monolayer can be bound to the sensing region. This serves as a binding layer in the attachment of prostate specific antibody (anti-PSA) for prostate specific antigen detection. The millimeter grade biomolecule-gated GaN/AlGaN HEMT sensor shows a quick response when the target prostate specific antigen in a buffer solution is added to the antibody-immobilized sensing area. The detection capability of this biomolecule-gate sensor estimated to be below 0.1 pg/ml level using a 21.5 mm2 sensing area, which is the best result of GaN/AlGaN HEMT biosensor for PSA detection till now. The electrical result of the biomolecule-gated GaN/AlGaN HEMT biosensor suggests that this biosensor might be a useful tool for the prostate cancer screening. Stochastic resonance in an overdamped monostable system with multiplicative and additive α stable noise Jiao Shang-Bin, Ren Chao, Li Peng-Hua, Zhang Qing, Xie Guo 2014, 63 (7): 070501. doi: 10.7498/aps.63.070501 Abstract + In this paper we combine α stable noise with a monostable stochastic resonance (SR) system to investigate the overdamped monostable SR phenomenon with multiplicative and additive α stable noise, and explore the action laws of the stability index α (0 α ≤ 2) and skewness parameter β (-1 ≤ β ≤ 1) of the α stable noise, the monostable system parameter a, and the amplification factor D of the multiplicative α stable noise against the resonance output effect. Results show that for different distributions of α stable noise, the single or multiple low-and high-frequency weak signals detection can be realized by adjusting the parameter a or D within a certain range. For a or D, respectively, there is an optimal value which can make the system produce the best SR effect. Different α or β can regularly change the system resonance output effect. Moreover, when α or β is given different values, the evolution laws in the monostable SR system excited by low-and high-frequency weak signals are the same. The conclusions drawn for the study of single-and multi-frequency monostable SR with α stable noise are also the same. These results will be the foundation for realizing the adaptive parameter adjustment in the monostable SR system with α stable noise. Modeling and simulation analysis of fractional-order Boost converter in pseudo-continuous conduction mode Tan Cheng, Liang Zhi-Shan 2014, 63 (7): 070502. doi: 10.7498/aps.63.070502 Abstract + Based on the fact that the inductor and the capacitor are fractional in nature, the fractional order mathematical model of the Boost converter in pseudo-continuous conduction mode is established by using fractional order calculus theory. According to the state average modeling method, the fractional order state average model of Boost converter in pseudo-continuous conduction mode is built. In view of the mathematical model, the inductor current and the output voltage are analyzed and the transfer functions are derived. Then the differences between the integer order and the fractional order mathematical models are analyzed. 
On the basis of the improved Oustaloup fractional order calculus for filter approximation algorithm and the model of fractional order inductance and capacitance, the simulation results have been compared between the mathematical model and circuit model with Matlab/Simulink software; the origins of model error are analyzed and the correctness of the modeling in fractional order and the theoretical analysis is verified. Finally, the differences and the relations of Boost converter among the continuous conduction mode, the discontinuous conduction mode, and the pseudo-continuous conduction mode are indicated. Spectrum calculation of chaotic SPWM signals based on double fourier series Liu Yong-Di, Li Hong, Zhang Bo, Zheng Qiong-Lin, You Xiao-Jie 2014, 63 (7): 070503. doi: 10.7498/aps.63.070503 Abstract + Chaotic SPWM control has attracted much interests due to its effectiveness for EMI suppression in power converters. However, most researches focus on the simulation and experiment of power converter under chaotic SPWM control, which is lacking a quantitative method. Based on double Fourier series this paper provides a spectrum calculation method for multi-period SPWM or quasi-random SPWM signals firstly, and the related spectrum calculation and simulation for multi-period SPWM are given to verify the accuracy of the spectrum calculation method; then the calculation method is extended to the spectral analysis of chaotic SPWM signals. To observe the impact on the spectrum of chaotic SPWM signals generated by different mappings and in different variation ranges of carrier period, a spectrum comparison between the Tent and Chebyshev mappings is conducted, in which results indicate that the variation range of the carrier period and the selection of mappings have a great influence on spectrum distribution; in the long term, probability density distribution of chaotic mapping will certainly affect the spectrum, and in the short term the initial value of the mapping will also affect the spread spectrum distribution. In summary, the proposed spectrum calculation method in this paper provides a theoretical foundation for the spread spectrum principle of chaotic SPWM control and for the design reference in practical engineering application. Manipulation of the complete chaos synchronization in dual-channel encryption system based on polarization-division-multiplexing Zhong Dong-Zhou, Deng Tao, Zheng Guo-Liang 2014, 63 (7): 070504. doi: 10.7498/aps.63.070504 Abstract + For the dual-channel encryption system, based on polarization-division-multiplexing, we put forward a new control scheme for complete chaos synchronization by means of linear electro-optic (EO) effect. In the scheme, the chaotic synchronization quality of each linear polarization (LP) mode component varies periodically with the applied electric field. The variation regulation is as follows: Complete chaos synchronization ↔ acute oscillation. With the applied electric field fixed at a certain value, the robustness of the complete chaotic synchronization quality due to the bias current and the feedback strength is improved greatly by EO modulation. Each LP mode can obtain the complete chaos synchronization in a large range of the bias current and the feedback strength. And the encoding message modulated to each LP mode can be almost re-established. A new method of background elimination and baseline correction for the first harmonic Zhang Rui, Zhao Xue-Hong, Hu Ya-Jun, Guo Yuan, Wang Zhe, Zhao Ying, Li Zhi-Xiao, Wang Yan 2014, 63 (7): 070702. 
doi: 10.7498/aps.63.070702
Abstract + A new method of background elimination and baseline correction is proposed, since there are a background signal and a large baseline signal in the first harmonic (1f) of tunable diode laser absorption spectroscopy (TDLAS). The laser-associated intensity modulation signal, the electronic noise, and the optical interference fringes of the 1f background are analyzed. Harmonic detection in the non-absorption spectral region (HDINASR) is used to eliminate the background signal. Then the relationship curve between current and intensity is given at different operating temperatures, in order to design a correction method for the baseline remaining after the background is eliminated. The principle of background-signal searching and the LabView software flow chart are also given. A TDLAS experimental system is designed to detect hydrogen fluoride (HF) gas. According to the spectral-line selection principle, the absorption line at 1312.59 nm is selected, with the operating temperature set at 27.0 ℃ and the background temperature set at 30.2 ℃. After eliminating the background and correcting the baseline, the signal distortion is significantly reduced and the baseline is corrected. It is further verified that the method is valid at other operating temperatures of the laser (26.7–27.2 ℃), and the improvement in the detected HF gas concentration is quantitatively analyzed. This is convenient for the subsequent processing of the 1f signal.

Thermal-sensitive superconducting coplanar waveguide resonator used for weak light detection
Zhou Pin-Jia, Wang Yi-Wen, Wei Lian-Fu
2014, 63 (7): 070701. doi: 10.7498/aps.63.070701
Abstract + Over the last decades, superconducting single-photon technology has been extensively used in the fields of quantum secure communication and linear-optic quantum computing. In particular, devices based on the coplanar waveguide resonator have attracted substantial interest due to their evident advantages, including a relatively simple structure, sufficiently high detection efficiency, and photon-resolving capability. With profound investigation into optimizing the deposition methods and the material selection, as well as the development of the relevant theories, single-photon detection based on the coplanar waveguide resonator has achieved a breakthrough. In this review paper we begin with the basic principle of the coplanar waveguide detector, then interpret the relevant theory and some design details of the devices. Finally, based on some recent experimental results measured with the low-temperature devices in our lab, we give a brief perspective on the future development of superconducting coplanar waveguide single-photon detectors.

Single zeptosecond pulse generation from muonic atoms under two-color XUV fields
Li Zhi-Chao, Cui Sen, He Feng
2014, 63 (7): 073201. doi: 10.7498/aps.63.073201
Abstract + We use the Lewenstein model to study the high harmonics generated by a μp atom exposed to two-color XUV pulses. Calculated results show a super-continuum plateau in the high-harmonic spectrum, which is formed when the time delay is 0 and the XUV frequencies are 5 and 2.5. By synthesizing the continuous high-harmonic spectra, a pulse as short as 130 zeptoseconds is obtained. Such a single zeptosecond pulse may work as an ultrafast camera to capture ultrafast processes occurring inside nuclei.
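The pulse-synthesis step in the entry above rests on the standard Fourier relation between spectral bandwidth and pulse duration: a broader phase-locked continuum supports a shorter pulse. As a minimal, hedged illustration (a toy calculation in arbitrary units, not the authors' actual computation; the plateau edges below are placeholders), one can inverse-transform an idealized flat, phase-locked spectral plateau and measure the resulting pulse width:

```python
import numpy as np

# Toy Fourier synthesis: an idealized flat, phase-locked (one-sided) spectral
# plateau is inverse-transformed to a pulse; arbitrary units throughout.
n, dt = 2**14, 0.001                     # number of samples and time step
f = np.fft.fftfreq(n, d=dt)              # frequency axis in FFT ordering
plateau = ((f > 20.0) & (f < 60.0)).astype(float)   # analytic-signal spectrum

field = np.fft.ifft(plateau)             # zero spectral phase: shortest pulse
intensity = np.fft.fftshift(np.abs(field)**2)
t = (np.arange(n) - n // 2) * dt         # centered time axis

half = intensity >= 0.5 * intensity.max()
fwhm = t[half].max() - t[half].min()
print(f"plateau width 40 -> pulse FWHM ~ {fwhm:.4f} (shorter for wider plateaus)")
```

Doubling the plateau width halves the printed FWHM, which is the same bandwidth-duration trade-off that pushes the synthesized pulse in the paper into the zeptosecond regime.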
Theoretical and experimental study on the multi-color broadband coherent anti-Stokes Raman scattering processes
Yin Jun, Yu Feng, Hou Guo-Hui, Liang Run-Fu, Tian Yu-Liang, Lin Zi-Yang, Niu Han-Ben
2014, 63 (7): 073301. doi: 10.7498/aps.63.073301
Abstract + In order to exactly distinguish and quantitatively analyze the different or unknown components in a mixture, the global molecular CARS spectral information should be obtained simultaneously with broadband coherent anti-Stokes Raman scattering (CARS) spectroscopy based on a supercontinuum. In broadband CARS spectroscopy, two- and three-color CARS processes are generated owing to the different functions of the effective spectroscopic components in the supercontinuum. We first theoretically analyze the generation conditions of the CARS signals and the relationships between their intensities and the powers of the excitation lights for the two types of CARS process under broadband excitation. On this basis, the two types of CARS process are realized with a home-built broadband CARS spectroscopic system. Using functional fitting analysis of the measured CARS spectral signals of benzonitrile, the relationships between the CARS signals and the excitation lights are experimentally verified for the two kinds of CARS process. Further optimization of broadband time-resolved CARS spectroscopic and microscopic systems, for simultaneously obtaining the global CARS spectral signals of samples, can be achieved under the guidance of these theoretical and experimental results.

Ab initio calculation of the potential energy curves and spectroscopic properties of BP molecule
Wang Wen-Bao, Yu Kun, Zhang Xiao-Mei, Liu Yu-Fang
2014, 63 (7): 073302. doi: 10.7498/aps.63.073302
Abstract + A high-precision quantum-chemistry ab initio multi-reference configuration interaction method with aug-cc-pVQZ basis sets has been used to calculate four states of the BP molecule. The four Λ-S states, X3Π, 3Σ-, 5Π, and 5Σ-, correlate to the lowest dissociation limit B(2Pu) + P(4Su). Analysis of the electronic structures of the Λ-S states shows that they are essentially multi-configurational. We take the spin-orbit interaction into account, for the first time so far as we know, which splits the four Λ-S states into fifteen Ω states. The X3Π0+ state is confirmed to be the ground state. The SOC effect is essential for the BP molecule; it leads to avoided crossings between the Ω = 0+ and Ω = 1 states arising from X3Π and 3Σ-. Based on the PECs of the Λ-S and Ω states, accurate spectroscopic constants are obtained by solving the radial Schrödinger equation. The spectroscopic results may be conducive to further experimental and theoretical research on the BP molecule.
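The last step named in the entry above, extracting vibrational levels (and hence spectroscopic constants) by solving the radial Schrödinger equation on a potential energy curve, can be sketched with a simple finite-difference diagonalization. The sketch below uses a generic Morse potential with placeholder parameters rather than the authors' MRCI curve for BP:

```python
import numpy as np

# Finite-difference bound states of the radial Schrödinger equation on a
# Morse potential (atomic units). De, a, re, and mu are illustrative
# placeholders, not the MRCI values for BP.
mu = 10000.0                        # reduced mass (a.u.)
De, a, re = 0.1, 1.0, 3.0           # well depth, width parameter, equilibrium R

r = np.linspace(1.5, 10.0, 2000)
h = r[1] - r[0]
V = De * (1.0 - np.exp(-a * (r - re)))**2 - De    # zero at dissociation

# H = -(1/(2 mu)) d^2/dr^2 + V(r), central-difference Laplacian
main = 1.0 / (mu * h**2) + V
off = -1.0 / (2.0 * mu * h**2) * np.ones(len(r) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print("lowest vibrational levels (a.u.):", E[E < 0][:4])
print("harmonic estimate w_e =", a * np.sqrt(2.0 * De / mu))  # level spacing
```

The spacings of the printed levels approach the harmonic frequency near the bottom of the well and compress toward dissociation, which is the anharmonic structure that the fitted spectroscopic constants summarize.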
Optical bottle beam generated by a new type of light emitting diode lens
He Xi, Du Tuan-Jie, Wu Feng-Tie
2014, 63 (7): 074201. doi: 10.7498/aps.63.074201
Abstract + A new method for generating a single bottle beam directly from a light emitting diode (LED) with a secondary optical lens is proposed, for the first time so far as we know. Firstly, in terms of geometrical optics, we analyze the principle of generating a single bottle beam from the LED spot light with a secondary optical lens. Then we derive expressions for the length and the radius of the biggest dark region of the bottle beam. After that, a new type of secondary optical lens is calculated numerically and simulated with the numerical software Matlab, the three-dimensional modeling software Solidworks, and the optical simulation software Tracepro. Meanwhile, the minimum size of the bottle beam and the scattering force for trapping particles are calculated. The results show that the designed secondary optical lens can produce a single bottle beam, and that the length and the radius of the biggest dark region of the generated bottle beam are in accordance with the theoretical calculations. This offers a practical, low-cost method for generating a bottle beam with a light emitting diode.

New transverse Zeeman effect method for mercury detection based on common mercury lamp
Li Chuan-Xin, Si Fu-Qi, Zhou Hai-Jin, Liu Wen-Qing, Hu Ren-Zhi, Liu Feng-Lei
2014, 63 (7): 074202. doi: 10.7498/aps.63.074202
Abstract + Accurate background correction determines the minimum detection limit of trace mercury measurement in the atmosphere by the cold vapor atomic absorption method. This paper studies a new method of mercury detection that uses a common mercury lamp as the source and corrects the background according to the transverse Zeeman effect. The resonance spectral line (253.65 nm) of the mercury lamp generates σ-, σ+, and π linearly polarized light in the direction perpendicular to the magnetic field. We obtain the mercury absorbance of the σ-, σ+, and π light at different magnetic field intensities using an ultra-high-resolution spectrometer, and then determine the minimum field intensity required by the method. We discuss the possible interference caused by benzene, with narrow-band absorption, and by acetone, with broadband absorption, at a magnetic field intensity of 1.78 T. Taking the σ- and σ+ components as the background light and the π component as the absorption light, we quantify saturated mercury vapor cells of different lengths. With accurate background correction, the R value of the absorption fitting curve reaches 0.99. Results indicate that the method accomplishes accurate background correction and can be applied to trace mercury measurement in the atmosphere.

Theoretical analysis on cavity-enhanced laser cooling of Er3+-doped glasses
Jia You-Hua, Gao Yong, Zhong Biao, Yin Jian-Ping
2014, 63 (7): 074203. doi: 10.7498/aps.63.074203
Abstract + In recent years, Er3+-doped CdF2-CdCl2-NaF-BaF2-BaCl2-ZnF2 (CNBZN) glass has become one of the new materials in the field of laser cooling of solids. In this paper, using the theory of laser output and standing-wave resonance, intracavity- and extracavity-enhanced laser cooling of Er3+-doped CNBZN glass is analyzed theoretically. Calculated results show that the enhancement factor can reach tens to hundreds of times. Moreover, the two schemes are compared with each other: for low material absorption, especially when the sample length is less than 0.3 mm, the intracavity configuration has the advantage of high pumping power and high absorption; for high material absorption, especially when the sample length is longer than 3 mm, the extracavity configuration becomes the more efficient means for laser cooling. Finally, according to the operating wavelength and power requirements of the Er3+-doped material, cavity enhancement can be realized experimentally using a semiconductor diode laser.
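The order of magnitude of the enhancement factor in the entry above can be reproduced with a generic textbook estimate of resonant power build-up in a two-mirror cavity containing a weakly absorbing sample; this is a hedged sketch with placeholder mirror reflectivities, not the authors' model:

```python
import numpy as np

# On-resonance intensity build-up of a two-mirror cavity containing a weakly
# absorbing sample; generic textbook estimate with placeholder values.
def buildup(R1, R2, A):
    """Circulating/incident intensity ratio on resonance.

    R1, R2: mirror power reflectivities; A: single-pass power loss in the
    sample. Round-trip field amplitude factor: sqrt(R1*R2)*(1 - A).
    """
    g = np.sqrt(R1 * R2) * (1.0 - A)
    return (1.0 - R1) / (1.0 - g)**2

for A in (1e-4, 1e-3, 1e-2):        # thin vs thick (more absorbing) sample
    print(f"single-pass loss {A:.0e}: build-up ~ {buildup(0.99, 0.999, A):6.1f}x")
```

With these placeholder numbers the build-up runs from tens to a few hundred, dropping as the sample absorbs more, which mirrors the short-sample/long-sample trade-off between the intracavity and extracavity configurations discussed in the abstract.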
Statistical analysis of shot-to-shot variation of laser fluence spatial distribution
Han Wei, Zhou Li-Dan, Li Fu-Quan, Wang Fang, Feng Bin, Zheng Kui-Xin, Gong Ma-Li
2014, 63 (7): 074204. doi: 10.7498/aps.63.074204
Abstract + The shot-to-shot variation of the laser fluence spatial distribution on a large-aperture high-power laser facility is statistically analyzed. Statistical results show that the maximum fluence to which any location in the beam will be exposed after N shots can be described by a Gaussian function; the average fluence across the beam increases with the number of laser shots, while the standard deviation is relatively constant, independent of the number of shots. This is because the laser fluence spatial distribution possesses similarity over the whole beam and dissimilarity at local positions for different laser shots.

Study on the fabrication of gold electrode by laser assembling
Zhang Ran, Lü Chao, Xiao Xin-Ze, Luo Yang, He Yan, Xu Ying
2014, 63 (7): 074205. doi: 10.7498/aps.63.074205
Abstract + We propose the fabrication of gold micro-electrodes and grating electrodes through laser assembling of gold nanoparticles, and realize the electrical interconnection of a single carbon nanotube with gold nanolines, which greatly reduces the damage to the functional unit. This method can also solve the problem of inadequate mass transport of ions during fabrication. The microstructure remains unoxidized in the atmosphere, with excellent continuity, integrity, and electrical properties, which gives this technique wide application prospects.

Influence of coupling coefficient on sparseness of slope response matrix and iterative matrix
Cheng Sheng-Yi, Chen Shan-Qiu, Dong Li-Zhi, Liu Wen-Jin, Wang Shuai, Yang Ping, Ao Ming-Wu, Xu Bing
2014, 63 (7): 074206. doi: 10.7498/aps.63.074206
Abstract + Based on a 529-actuator adaptive optics (AO) system, the sparseness of the slope response matrix from the deformable mirror to the Hartmann wavefront sensor and the sparseness of the iterative matrix in wavefront reconstruction are analyzed. The influence of the actuator coupling coefficient on the slope-response-matrix sparseness, the iterative-matrix sparseness, and the AO system correction quality is also studied under the condition of constant actuator spacing. A larger coupling coefficient results in lower sparseness of the slope response matrix and of the iterative matrix, and too large or too small a coupling coefficient lowers the stability and correction quality of the AO system. Finally, the optimal range of the coupling coefficient is provided by balancing the correction quality, the sparseness of the slope response matrix, and the stability.

Crystals modulated by two parameters and their applications
Li Chang-Sheng
2014, 63 (7): 074207. doi: 10.7498/aps.63.074207
Abstract + Under two applied external fields, such as stresses and electric fields, the optical modulation properties of some crystals are theoretically analyzed using the method of the index ellipsoid. Simple mathematical formulas for calculating the field-induced principal refractive indexes of some crystals and the corresponding azimuthal angles of their principal axes can be deduced from the equation of the index ellipsoid if there exists only one nonzero cross term in the equation, e.g. x1x2. According to these simple formulas, we can find crystals exhibiting a dual transverse electro-optic effect, e.g. crystals of the 6 symmetry point group. Under two simultaneously applied external stresses, the elasto-optic birefringence of a crystal is proportional to the difference between the two external stresses, and the orientations of the birefringent axes are unchanged.
When a stress and an electric field are simultaneously and perpendicularly applied to some crystals, such as cubic crystals of the 4̄3m point group, the field-induced birefringence of the crystal is proportional to the weighted geometric mean of the applied stress and electric field, and the orientations of the birefringent axes depend only on the ratio of the applied electric field to the stress. The above electro-optic and elasto-optic modulation properties are useful for the design of novel optical modulators and sensors.

A block-based improved recursive moving-target-indication algorithm
Hou Wang, Yu Qi-Feng, Lei Zhi-Hui, Liu Xiao-Chun
2014, 63 (7): 074208. doi: 10.7498/aps.63.074208
Abstract + A new block-based recursive moving-target-indication algorithm in the velocity domain is proposed to address the rapid detection of dim, small targets for infrared search and tracking systems. Firstly, a two-dimensional least-mean-square filter is adopted to filter the infrared image sequence, which extracts the small targets and the residual errors of the image sequence. Then, the block-based recursive moving-target-indication algorithm accumulates the small target over the image block sequence to enhance the small-target signal in the velocity domain. Finally, the resulting image is obtained by using the classical recursive moving-target-indication algorithm and the target velocity for small-target detection. Compared with the classical method, the proposed method requires less running time and detects dim, small targets effectively, as demonstrated by several groups of experimental results.

Analysis of electron momentum relaxation time in fused silica using a tightly focused femtosecond laser pulse
Bian Hua-Dong, Dai Ye, Ye Jun-Yi, Song Juan, Yan Xiao-Na, Ma Guo-Hong
2014, 63 (7): 074209. doi: 10.7498/aps.63.074209
Abstract + The electron momentum relaxation time is studied systematically in order to understand its effect on the nonlinear ionization process excited in fused silica by tightly focused femtosecond laser pulses. According to the analysis of a (3+1)-dimensional extended general nonlinear Schrödinger equation, the electron momentum relaxation time has a strong effect on the peak intensity, the free-electron density, and the fluence distribution in the focal region of the incident pulse; a value of 1.27 fs is found to match the present experimental results within the theoretical model. Further research indicates that a change of the electron momentum relaxation time makes a significant difference to several nonlinear mechanisms, such as laser-induced avalanche ionization, inverse bremsstrahlung, and self-defocusing of the plasma. Results show that the electron momentum relaxation time plays an important role in the interaction of femtosecond laser pulses with materials.
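The role of the momentum relaxation time τ in the entry above can be made concrete with the standard Drude expression for the inverse-bremsstrahlung (free-carrier absorption) cross section, σ = (e²/(c n ε0 m)) · τ/(1 + ω²τ²), a common ingredient of such propagation models. The sketch below is a generic estimate, not the authors' full (3+1)-dimensional model; the 800 nm wavelength, refractive index, and use of the free-electron mass are assumptions:

```python
import numpy as np

# Drude inverse-bremsstrahlung cross section sigma(tau) at a fixed laser
# frequency. Assumed values: 800 nm wavelength, refractive index 1.45,
# free-electron mass; real models may use an effective mass instead.
e = 1.602e-19        # C
m = 9.109e-31        # kg
eps0 = 8.854e-12     # F/m
c = 2.998e8          # m/s
n = 1.45             # fused silica, assumed
wavelength = 800e-9  # m, assumed
omega = 2 * np.pi * c / wavelength

def sigma(tau):
    """Free-carrier absorption cross section (m^2) for relaxation time tau (s)."""
    return (e**2 / (c * n * eps0 * m)) * tau / (1.0 + (omega * tau)**2)

for tau_fs in (0.5, 1.27, 5.0):
    print(f"tau = {tau_fs:5.2f} fs -> sigma = {sigma(tau_fs * 1e-15):.3e} m^2")
# sigma peaks at tau = 1/omega (~0.42 fs here), so the fitted 1.27 fs value
# sits on the decreasing side of the curve.
```

This makes the abstract's point quantitative: shifting τ by a factor of a few changes the free-carrier heating rate, and with it the avalanche and plasma-defocusing terms, appreciably.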
Study of 1550 nm low loss single mode all-solid photonic bandgap fibers
Cheng Lan, Luo Xing, Wei Hui-Feng, Li Hai-Qing, Peng Jing-Gang, Dai Neng-Li, Li Jin-Yan
2014, 63 (7): 074210. doi: 10.7498/aps.63.074210
Abstract + All-solid photonic bandgap fibers have attracted great attention from researchers due to their particular bandgap and dispersion characteristics, as well as the merit of being easily spliced to traditional optical fiber. We have fabricated all-solid photonic bandgap fibers using plasma chemical vapor deposition (PCVD) and a modified stack-and-draw technique, and the loss and dispersion characteristics were simulated by the finite-difference frequency-domain (FDFD) method. The fiber obtained by this method has a low-loss region around 1550 nm and operates as a single-mode fiber; its effective mode field area and dispersion at 1550 nm are 191.81 μm^2 and 16.418 ps/(km·nm), respectively. Combined with the experimental results, the fiber parameters are further optimized by simulation.

Calibration of D-RGB camera networks by skeleton-based viewpoint invariance transformation
Han Yun, Chung Sheng-Luen, Yeh Jeng-Sheng, Chen Qi-Jun
2014, 63 (7): 074211. doi: 10.7498/aps.63.074211
Abstract + Combining depth information with color images, D-RGB cameras provide ready detection of humans and the associated 3D skeleton joint data, facilitating, if not revolutionizing, conventional image-centric research in, among others, computer vision, surveillance, and human activity analysis. The applicability of a D-RGB camera, however, is restricted by its limited depth frustum, with a range of 0.8 to 4 meters. Although a D-RGB camera network, constructed by deploying several D-RGB cameras at various locations, could extend the range of coverage, it requires precise localization of the camera network: the relative location and orientation of neighboring cameras. By introducing a skeleton-based viewpoint invariant transformation (SVIT), which derives the relative location and orientation of a detected human's upper torso with respect to a D-RGB camera, this paper presents a reliable automatic localization technique that requires no additional instrument or human intervention. By applying SVIT to a commonly observed skeleton in two neighboring D-RGB cameras, the relative position and orientation of the detected human's skeleton with respect to each camera can be obtained and then combined to yield the relative position and orientation of the two cameras, thus solving the localization problem. Experiments have been conducted in which two Kinects are situated with bearing differences of about 45 degrees and 90 degrees; the coverage can be extended by up to 70% with the installation of an additional Kinect. The same localization technique can be applied repeatedly to a larger number of D-RGB cameras, extending the applicability of D-RGB camera networks to human behavior analysis and context-aware services over a larger surveillance area.

Molecular dynamics simulation of the thermal conductivity of silicon functionalized graphene
Hui Zhi-Xin, He Peng-Fei, Dai Ying, Wu Ai-Hui
2014, 63 (7): 074401. doi: 10.7498/aps.63.074401
Abstract + Direct non-equilibrium molecular dynamics (NEMD) is used to simulate the thermal conductivities of monolayer and bilayer silicon-functionalized graphene along the length direction, with the Tersoff and Lennard-Jones potentials, based on the velocity Verlet time-stepping algorithm and the Fourier law. Simulation results indicate that the thermal conductivity of monolayer silicon-functionalized graphene decreases rapidly with increasing number of silicon atoms. This phenomenon can be primarily attributed to the changes of the graphene phonon modes, mean free path, and phonon velocity after silicon atoms are embedded in the graphene layer.
Meanwhile, the thermal conductivity of the monolayer graphene declines over the temperature range from 300 to 1000 K. As for the bilayer silicon-functionalized graphene, its thermal conductivity increases as a few silicon atoms are inserted into the layer, but decreases when the number of silicon atoms reaches a certain value.

Bifurcation and chaos of some strongly nonlinear relative rotation system with time-varying clearance
Liu Bin, Zhao Hong-Xu, Hou Dong-Xiao, Liu Hao-Ran
2014, 63 (7): 074501. doi: 10.7498/aps.63.074501
Abstract + The dynamic equation of a relative-rotation nonlinear dynamic system with time-varying clearance is investigated. Firstly, the transformation parameter is deduced by the MLP method; the bifurcation response equations of the 1/2 harmonic resonance are then derived by the method of multiple scales, and singularity analysis is employed to obtain the transition set of steady motion; furthermore, the bifurcation characteristics and the bifurcation of the system under the non-autonomous condition are analyzed. Finally, numerical simulation exhibits many different motions, such as periodic motion, period-doubling motion, and chaos. It is shown that changes of the clearance and damping parameters may influence the motion state of the system.

Fractional derivative dynamics of intermittent turbulence
Liu Shi-Da, Fu Zun-Tao, Liu Shi-Kuo
2014, 63 (7): 074701. doi: 10.7498/aps.63.074701
Abstract + Intermittency means that turbulent eddies do not fill space completely, so the dimension D of intermittent turbulence takes values between 2 and 3. Turbulent diffusion is a super-diffusion, and the probability density function is fat-tailed. In this paper, the viscosity term in the Navier-Stokes equation is written as a fractional power of the Laplacian operator. Dimensional analysis shows that the order α of the fractional derivative is closely related to the dimension D of the intermittent turbulence. For homogeneous isotropic Kolmogorov turbulence, the order of the fractional derivative is α = 2, i.e. the turbulence can be modeled by the integer-order Navier-Stokes equation; intermittent turbulence, however, must be modeled by the fractional-derivative Navier-Stokes equation. For Kolmogorov turbulence, the mean-square diffusion displacement is proportional to t^3 (Richardson diffusion), but for intermittent turbulence the diffusion is stronger than Richardson diffusion.
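The t^3 (Richardson) scaling quoted above follows from a standard dimensional argument: in the inertial range the pair diffusivity can depend only on the dissipation rate ε and the separation r, so K ∝ ε^{1/3} r^{4/3} (Richardson's 4/3 law), and hence, as a sketch:

```latex
% Richardson-Obukhov scaling from the 4/3 law K ~ eps^{1/3} r^{4/3}:
\frac{d\langle r^{2}\rangle}{dt}\;\sim\;\varepsilon^{1/3}\,\langle r^{2}\rangle^{2/3}
\quad\Longrightarrow\quad
\langle r^{2}\rangle\;\sim\;\varepsilon\,t^{3}.
```

Intermittency (D between 2 and 3) modifies this exponent, which is the stronger-than-Richardson super-diffusion the entry refers to.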
Drag reduction on hydrophobic transverse grooved surface by underwater gas formed naturally
Wang Bao, Wang Jia-Dao, Chen Da-Rong
2014, 63 (7): 074702. doi: 10.7498/aps.63.074702
Abstract + Low fluid friction is difficult to maintain on super-hydrophobic surfaces at large flow velocities, because the gas entrapped within the surface is substantially depleted. Once the gas is removed, the friction of the fluid increases markedly owing to the surface roughness itself. In this study, a hydrophobic transverse micro-grooved surface is designed to sustain air pockets in the valleys for a long time. Direct optical measurements are conducted to observe the entrapped gas while water flows past the surface in the direction perpendicular to the grating patterns. More importantly, this hydrophobic transverse micro-grooved surface is found to be capable of forming gas automatically: some of the gas is continually carried away from the surface, and new gas is continually generated to replace it. Thus, stable slippage at the surface is achieved, corresponding to the relatively stable gas on the designed surface.

A novel lattice Boltzmann method for dealing with arbitrarily complex fluid-solid boundaries
Shi Dong-Yan, Wang Zhi-Kai, Zhang A-Man
2014, 63 (7): 074703. doi: 10.7498/aps.63.074703
Abstract + A treatment of arbitrarily complex boundary conditions suitable for the lattice Boltzmann scheme is developed for the fluid-solid coupling field. The new method is based on a half-way bounce-back model. A virtual boundary layer is built at the fluid-solid interface, and all the properties used on the virtual boundary are inter-/extrapolated from the surrounding nodes in combination with the finite difference method. The improved method ensures that the particles bounce back at the same location as the macroscopic-velocity sampling point, and accounts for the offset effect on the accuracy of the calculated results when the actual physical boundaries and the grid lines do not coincide. Its scope is extended to any static or moving, straight or curved boundary. The performance of the method is studied under classic conditions, such as Poiseuille flow, flow around a circular cylinder, and Couette flow. Results prove that the theoretically calculated values agree well with the experimental data. Compared with the results published in the literature, this method has higher precision when the actual physical boundaries and grid lines do not coincide.

Investigation of electromagnetic hydrodynamics propulsion and vector control by surfaces based on a rotational navigation body
Liu Zong-Kai, Gu Jin-Liang, Zhou Ben-Mou, Ji Yan-Liang, Huang Ya-Dong, Xu Chi
2014, 63 (7): 074704. doi: 10.7498/aps.63.074704
Abstract + Realizing magnetohydrodynamic (MHD) propulsion by surfaces requires an electromagnetic body force generated in a conductive fluid (such as seawater or plasma) around the navigation body; the reaction to this body force can then be used for propulsion. Based on the governing equations of the electromagnetic field and fluid mechanics, the vector control effect is analyzed in terms of the field intensity and the force distribution characteristics on the rotational navigation body for two different force action areas. Results show that the navigation attitude can be adjusted by this control method without changing the angle of attack or the propulsion direction. An upward force moment can be achieved by control model A, while both the pitching moment and the yaw moment can be changed by control model B. Thus, as a new mode of propulsion, MHD propulsion by surfaces offers several advantages, such as high speed, high efficiency, easy operation, and high payload. Additionally, vector propulsion is shown in this paper to be one of the remarkable advantages of MHD propulsion by surfaces.

Study on the gain characteristics of terahertz surface plasma in optically pumped graphene multi-layer structures
Liu Ya-Qing, Zhang Yu-Ping, Zhang Hui-Yun, Lü Huan-Huan, Li Tong-Tong, Ren Guang-Jun
2014, 63 (7): 075201.
doi: 10.7498/aps.63.075201
Abstract + Based on the developed optically pumped graphene multilayer terahertz surface plasma structures, this paper calculates the real part of the propagation index and the amplification coefficient in optically pumped graphene multilayer structures, and discusses the influences of the momentum relaxation time, the temperature, the number of graphene layers, and the quasi-Fermi energy of the topmost graphene layer on the real part of the propagation index and the amplification coefficient. It is shown that when the real part of the dynamic conductivity becomes negative in the terahertz range of frequencies in the optically pumped graphene multilayer structures, the surface plasma of the graphene layers can achieve gain. Comparing the peeling-graphene structure with the graphene structure that has a highly conducting bottom graphene layer in the optically pumped scheme, the surface plasma of the peeling-graphene structure can obtain highly efficient amplification. Meanwhile, a structure with a proper number of graphene layers can obtain larger amplification than the simple graphene structure in an optically pumped scheme at low temperatures.

Three-dimensional modelling and numerical simulation on segregation during Fe-Pb alloy solidification in a multiphase system
Wang Zhe, Wang Fa-Zhan, Wang Xin, He Yin-Hua, Ma Shan, Wu Zhen
2014, 63 (7): 076101. doi: 10.7498/aps.63.076101
Abstract + A three-dimensional mathematical model of three-phase flow during horizontal solidification is studied using computational fluid dynamics based on the Eulerian-Eulerian and volume-of-fluid methods, in which the mass, momentum, species, and enthalpy conservation equations of the Fe-Pb alloy solidification process are solved simultaneously. The effects of the Pb area quadratic gradient (∇(∇SPb)) and the Pb concentration quadratic gradient (∇(∇CPb)) on segregation formation are investigated. Results show that the segregation mode is manifested as X-segregates in the upper part and V-segregates in the lower part during the flow-solidification of the liquid and gas phases. The X-segregates result from the phase-transformation driving force of the gas phase, and the "scattering" is due to the orientation of the phase transition. When t > tc, lower ∇(∇SPb) and ∇(∇CPb) curves cause a larger yielding rate of Pb, with a larger down angle of the X-segregates and smaller up angles of the X-segregates and V-segregates; all of these favor the formation of a well-dispersed microstructure. In addition, the gas-liquid two-phase flow interaction term has an effect on channel segregation: channels occur only in the region where the flow-phase-transition interaction terms (ul·∇cl and ug·∇cg) are negative. With a negative flow-phase-transition interaction term, an increase in flow velocity due to the flow perturbation and the flow-phase-transition interaction makes the term still more negative, so the channel continues to grow and tends to be stable. Calculated results show good agreement with experimental data.

Theoretical study on geometry and physical and chemical properties of oligochitosan
Li Xin, Zhang Liang, Yang Meng-Shi, Chu Xiu-Xiang, Xu Can, Chen Liang, Wang Yue-Yue
2014, 63 (7): 076102. doi: 10.7498/aps.63.076102
Abstract + Using density functional theory at the B3LYP/6-31+G(d) level, we compute the optimized geometries, vibrational frequencies, and electronic structures of the gg conformation of oligochitosans, and study the average binding energies and zero-point energy corrections with the ωB97XD method.
We also analyze the thermodynamic properties of oligochitosans. Results show that hydrogen bonding makes the oligochitosan helical; the average binding energy tends to decrease and the stability tends to improve with increasing degree of polymerization (DP); the water degradation of oligochitosan is an exothermic reaction, so it is feasible to lower the temperature to improve the degradation yield in experiments. In addition, the energy gap of oligochitosan quickly converges to 6.99 eV with increasing DP, and the value for the DP7 oligochitosan already agrees with the converged value. The HOMO and LUMO of oligochitosan show that the chemical activity is mainly distributed on the C2 amino group, the C6 hydroxyl group, and both ends of the oligochitosan chain. These results are instructive for modeling, and can provide a theoretical basis for the degradation process, the chemically active positions, and the size dependence of the physical and chemical properties of oligochitosan.

Atomistic simulation study on the local strain fields around an extended edge dislocation in copper
Shao Yu-Fei, Yang Xin, Li Jiu-Hui, Zhao Xing
2014, 63 (7): 076103. doi: 10.7498/aps.63.076103
Abstract + The local strain fields around an extended edge dislocation in copper are studied via the quasicontinuum multiscale simulation method combined with virial strain calculation techniques. Results show that in regions tens of nanometers away from the dislocation the atoms experience infinitesimal strain, and the virial strain results agree very well with the predictions of elastic theory. In regions near the dislocation, the virial strain fields can precisely outline the core areas of the Shockley partial dislocations, which have the shape of an ellipse with a major axis of 7b1 and a minor axis of 3b1, where b1 is the length of the Burgers vector of the partial dislocation.

Characterization of thermal conductivity for GNR based on nonequilibrium molecular dynamics simulation combined with quantum correction
Zheng Bo-Yu, Dong Hui-Long, Chen Fei-Fan
2014, 63 (7): 076501. doi: 10.7498/aps.63.076501
Abstract + A nonequilibrium molecular dynamics model combined with quantum correction is presented for characterizing the thermal conductivity of graphene nanoribbons (GNRs). The temperature dependence of the GNR thermal conductivity is revealed with this model. It is shown that, unlike the decreasing dependence found in classical nonequilibrium molecular dynamics simulations, an "anomaly" appears at low temperatures when the quantum correction is used. Besides, the conductivity of GNRs shows obvious edge and size effects: zigzag GNRs have higher thermal conductivity than armchair GNRs, and both the thermal conductivity over the whole temperature range and its slope at low temperatures increase with width. The Boltzmann-Peierls phonon transport equation is used to explain the temperature and size effects at low temperatures, indicating that the constructed model is suitable for accurate calculation of the thermal conductivity for different chiralities and widths over a wide temperature range. This research provides a possible theoretical and computational basis for heat transfer and dissipation applications of GNRs.
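The quantum correction named in the entry above is commonly implemented by equating the classical MD energy to the phonon energy of a quantum oscillator bath, which maps the MD temperature T_MD onto a quantum temperature T_q and rescales the conductivity by dT_MD/dT_q. The sketch below is a generic version of that recipe with a 3D Debye density of states and a placeholder Debye temperature; the paper's actual correction for GNRs may differ in detail (some variants also omit the zero-point term):

```python
import numpy as np

# Generic quantum correction for MD thermal conductivity (a common recipe,
# not necessarily the paper's exact one): equate the classical MD energy to
# the phonon energy of a Debye solid,
#   T_MD = 3 * theta_D * int_0^1 x^3 * (n(x, T_q) + 1/2) dx,
# then rescale kappa_q = kappa_MD * dT_MD/dT_q.
theta_D = 2000.0          # Debye temperature in K (placeholder, graphene-like)

def t_md(t_q, nx=4000):
    """Classical MD temperature equivalent to quantum temperature t_q (K)."""
    x = np.linspace(1e-6, 1.0, nx)                  # reduced frequency w/w_D
    occ = 1.0 / np.expm1(theta_D * x / t_q) + 0.5   # Bose factor + zero point
    dx = x[1] - x[0]
    return 3.0 * theta_D * np.sum(x**3 * occ) * dx

tq = np.linspace(100.0, 1000.0, 10)
tmd = np.array([t_md(t) for t in tq])
scale = np.gradient(tmd, tq)          # conductivity rescaling dT_MD/dT_q
for t, m, s in zip(tq, tmd, scale):
    print(f"T_q = {t:6.1f} K -> T_MD = {m:7.1f} K, dT_MD/dT_q = {s:.3f}")
```

Because dT_MD/dT_q is small well below the Debye temperature and approaches 1 above it, the correction reshapes the low-temperature conductivity curve, which is the origin of the low-temperature "anomaly" mentioned in the abstract.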
Influences of electrode separation on structural properties of μc-Si1-xGex:H thin films
Cao Yu, Zhang Jian-Jun, Yan Gan-Gui, Ni Jian, Li Tian-Wei, Huang Zhen-Hua, Zhao Ying
2014, 63 (7): 076801. doi: 10.7498/aps.63.076801
Abstract + Hydrogenated microcrystalline silicon germanium (μc-Si1-xGex:H) thin films have been prepared by radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD) using a mixture of SiH4 and GeH4 as the reactive gases. The effects of electrode separation on the structural properties of the μc-Si1-xGex:H thin films have been investigated. Results show that reducing the electrode separation increases the Ge content in the films. Moreover, the μc-Si1-xGex:H thin film deposited at the smaller electrode separation of 7 mm possesses not only a stronger (220) orientation and a larger grain size, but also a lower microstructural factor. The decomposition characteristics of the reactive gases are then analyzed according to the variation of the structural properties of the μc-Si1-xGex:H thin films. It is found that the increase of the Ge content is due to the decrease of the SiH4 decomposition rate in the plasma, while the better film quality obtained at the smaller electrode separation is attributed to the enhanced diffusibility of the Ge precursors caused by the increased proportion of GeH3 radicals.

First-principles study on p-type ZnO codoped with F and Na
Deng Sheng-Hua, Jiang Zhi-Lin
2014, 63 (7): 077101. doi: 10.7498/aps.63.077101
Abstract + First-principles calculations based on density functional theory have been performed to investigate the doping behaviors of Na and F dopants in ZnO. It turns out from the calculated band structures, densities of states, and effective masses that in the F mono-doping case the impurity states are localized and the formation energy is as high as 4.59 eV, while in the Na mono-doping case the impurity states are delocalized and the formation energy is as low as -3.01 eV; p-type ZnO cannot be obtained in either case. On the contrary, in the Na-F codoping case, especially when the ratio of F to Na is 1:2, the Fermi level shifts into the valence bands, the corresponding effective masses are small (0.7m0), and the formation energy is the lowest (-3.55 eV). This may indicate the formation of p-type ZnO with good conductivity.

Ferromagnetism of Zn0.97Cr0.03O synthesized by PLD
Xie Ling-Ling, Chen Shui-Yuan, Liu Feng-Jin, Zhang Jian-Min, Lin Ying-Bin, Huang Zhi-Gao
2014, 63 (7): 077102. doi: 10.7498/aps.63.077102
Abstract + Four Zn0.97Cr0.03O films were deposited on quartz wafers in various oxygen environments (0, 0.05, 0.15 and 0.2 Pa) using pulsed laser deposition (PLD). The films were characterized by XRD, PL, and XPS, as well as by magnetic and electrical measurements. Experimental results indicate that: (1) all the films are well crystallized and display a pure orientation; (2) all the films are ferromagnetic, and the film deposited at 0.15 Pa has the largest Ms; (3) VZn, Oi, Zni, VZn- and VO defects exist in the four films, and the percentage of the resonance peak area for VZn in the total area of all defects varies with oxygen pressure similarly to Ms, which means that the magnetization of the samples is closely related to the Zn vacancy VZn. A Cr3+ state exists in all four films, with the largest Cr3+ content at 0.15 Pa. To sum up, the experimental results indicate that the substitutional Cr in the +3 oxidation state together with the neutral Zn vacancy in the Zn0.97Cr0.03O films is the defect complex most favorable for maintaining a highly stable ferromagnetic order, which is consistent with first-principles calculations.
Theoretical analysis of carbon nanotube photomixer-generated terahertz power
Jia Wan-Li, Zhao Li, Hou Lei, Ji Wei-Li, Shi Wei, Qu Guang-Hui
2014, 63 (7): 077201. doi: 10.7498/aps.63.077201
Abstract + On the basis of an optical mixer circuit model, the terahertz power generated by a carbon nanotube (CNT) photomixer is analyzed. Simulations of the mixer conductance, the antenna impedance, and the optical and bias conditions show that improving the mixer conductance, the antenna impedance, and the bias voltage under illumination can increase the output power of the terahertz waves. The output power can reach the level of tens of microwatts in the small-signal limit.

One-dimensional photonic crystal (1D PC)-based back reflectors for amorphous silicon thin film solar cell
Chen Pei-Zhuan, Hou Guo-Fu, Suo Song, Ni Jian, Zhang Jian-Jun, Zhang Xiao-Dan, Zhao Ying
2014, 63 (7): 077301. doi: 10.7498/aps.63.077301
Abstract + New back reflectors based on a one-dimensional photonic crystal (1D PC) for amorphous silicon thin-film solar cells have been investigated, designed, and fabricated. These 1D PCs consist of alternating amorphous silicon (a-Si) and silicon oxide (SiOx) layers, whose deposition process is compatible with current silicon thin-film solar cell technology. Results indicate that the total reflectance of the 1D PCs increases with the number of periods; an average reflectance over 96% can be achieved in the range from 500 to 750 nm with 4 periods or more. Applying the 4-period 1D PC as the back reflector in an NIP amorphous silicon thin-film solar cell with the device configuration glass/1D PC/AZO/NIP a-Si:H/ITO, a conversion efficiency of 7.9% is obtained, which is comparable to that of the AZO/Ag-based solar cell (7.7%) and much better than that of the SS-based solar cell (6.9%, a relative enhancement of 14.5%).

Transition metals encapsulated inside single wall carbon nanotubes: DFT calculations
Liu Man, Yan Qiang, Zhou Li-Ping, Han Qin
2014, 63 (7): 077302. doi: 10.7498/aps.63.077302
Abstract + The transport properties of a single-wall carbon nanotube with transition metal atoms embedded in it are studied by the first-principles method based on density functional theory and the nonequilibrium Green's function. Different transition metal atoms filled into the carbon nanotube are investigated, and the respective charge and spin transport properties are studied. The conductance of the nanotube is found to be distinctive for different encapsulated metal elements, and quantized reductions of the conductance by the quantum unit (2e^2/h) can be seen. In particular, nanotubes with two encapsulated iron atoms display different I-V curves when the spins of the two iron atoms are in the parallel and antiparallel states. These results can be explained by spin-dependent scattering and charge transfer. The encapsulation may tailor the doping and add magnetic behavior to carbon nanotubes, which provides a new and promising approach to detecting nanoscale magnetic activity.

Analyses of wavelength dependence of the electro-optic overlap integral factor for LiNbO3 channel waveguides
Li Jin-Yang, Lu Dan-Feng, Qi Zhi-Mei
2014, 63 (7): 077801. doi: 10.7498/aps.63.077801
Abstract + The wavelength dependence of the electro-optic overlap integral factor (Γ) for a single-mode LiNbO3 (LN) channel waveguide was analyzed experimentally and theoretically.
By measuring the half-wave voltage (Vπ) of the LN waveguide at different wavelengths and then substituting the measured values into a formula that relates Vπ and Γ, the quantitative dependence of Γ on wavelength was obtained; it shows that Γ decreases rapidly with increasing wavelength. On the other hand, numerical simulations of the modulating electric field distribution, the modal field distribution, and Γ at different wavelengths were carried out; the calculated relationship between Γ and wavelength is in good agreement with the measured results. Further simulations indicate that as the wavelength increases, the center of the modal field profile gradually moves away from the waveguide surface toward the weak-electric-field side, thus leading to a smaller Γ at a longer wavelength. This relationship between Γ and wavelength is partially responsible for the nonlinear dependence of Vπ on wavelength obtained experimentally, and would be useful for the design and optimization of LN waveguide-based devices.

Synthesis and luminescent properties of blue emitting phosphor Ba2Ca(PO4)2:Eu2+
Wang Zhi-Jun, Liu Hai-Yan, Yang Yong, Jiang Hai-Feng, Duan Ping-Guang, Li Pan-Lai, Yang Zhi-Ping, Guo Qing-Lin
2014, 63 (7): 077802. doi: 10.7498/aps.63.077802
Abstract + A blue-emitting phosphor, Ba2Ca(PO4)2:Eu2+, is synthesized by a high-temperature solid-state method. The effects of the preparation conditions, such as the preparation temperature and time, the Ba/Ca ratio, and the Eu2+ concentration, on the phase and luminescent properties are investigated. Results show that Ba2Ca(PO4)2 and Ba2Ca(PO4)2:Eu2+ are obtained by selecting appropriate conditions, such as a temperature of 900/1200 ℃ and a time of 4 h. Ba2Ca(PO4)2:Eu2+ produces an asymmetric emission band centered at 454 nm under 343 nm UV excitation. For the 454 nm emission, the excitation spectrum extends from 200 to 450 nm with a peak at 343 nm, and has an obvious excitation band in the range of 350-410 nm. With increasing Eu2+ concentration, concentration quenching and a redshift occur. With a decreasing Ba/Ca ratio, there is an obvious enhancement in the green region, and the emission color gradually turns from blue to cyan. It is shown that the Eu2+ ion can occupy not only the Ba2+ site but also the Ca2+ site; therefore, different Eu2+ luminescence centers can exist in Ba2Ca(PO4)2 and affect its luminescence.

Luminescence property of Ce3+-Tb3+-Sm3+ co-doped borosilicate glass under various ultraviolet excitations
Chen Qiao-Qiao, Dai Neng-Li, Liu Zi-Jun, Chu Ying-Bo, Li Jin-Yan, Yang Lü-Yun
2014, 63 (7): 077803. doi: 10.7498/aps.63.077803
Abstract + Ce3+-Tb3+-Sm3+ co-doped white-light-emitting borosilicate glasses were fabricated by a high-temperature melting technique. In this paper, the excitation and emission spectra of the Ce3+, Tb3+, and Sm3+ singly doped and co-doped samples were measured, and the energy transfer mechanism among Ce3+, Tb3+, and Sm3+ was studied by analyzing the fluorescence lifetimes of the singly doped and co-doped samples. The color coordinates, color rendering index, and color temperature of the emission spectra can be adjusted by changing the excitation wavelength of the ultraviolet LED. Finally, we obtained white light suitable for everyday living, study, and work environments.
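Color coordinates such as those tuned in the entry above are obtained by weighting the measured emission spectrum with the CIE 1931 color-matching functions and normalizing the tristimulus values. A minimal, hedged sketch follows; the file cie1931.csv is a hypothetical table of the tabulated color-matching functions (columns: wavelength_nm, xbar, ybar, zbar), and the spectrum is assumed to be given as parallel arrays:

```python
import numpy as np

# CIE 1931 chromaticity (x, y) from an emission spectrum S(lambda).
# 'cie1931.csv' is a hypothetical table: wavelength_nm, xbar, ybar, zbar.
cmf = np.loadtxt("cie1931.csv", delimiter=",", skiprows=1)  # skip header row
wl_cmf, xbar, ybar, zbar = cmf.T

def chromaticity(wl, spectrum):
    """Return CIE 1931 (x, y) for a spectrum sampled at wavelengths wl (nm)."""
    # Resample the color-matching functions onto the spectrum's grid
    xb = np.interp(wl, wl_cmf, xbar)
    yb = np.interp(wl, wl_cmf, ybar)
    zb = np.interp(wl, wl_cmf, zbar)
    X = np.sum(spectrum * xb)   # tristimulus values (the common wavelength
    Y = np.sum(spectrum * yb)   # step cancels in the normalization below)
    Z = np.sum(spectrum * zb)
    return X / (X + Y + Z), Y / (X + Y + Z)

# Usage: x, y = chromaticity(wl_nm, measured_emission)
```

Changing the excitation wavelength reshapes the relative Ce3+/Tb3+/Sm3+ emission bands in the measured spectrum, and the normalization above converts that reshaping directly into the shift of (x, y) toward or away from the white point.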
Bluish-green high-brightness long persistent luminescence materials Ba4(Si3O8)2:Eu2+, Pr3+, and the afterglow mechanism
Wang Peng-Jiu, Xu Xu-Hui, Qiu Jian-Bei, Zhou Da-Cheng, Liu Xue-E, Cheng Shuai
2014, 63 (7): 077804. doi: 10.7498/aps.63.077804
Abstract + A bluish-green long-persistent luminescence material, Ba4(Si3O8)2:Eu2+, Pr3+, was synthesized by the traditional solid-state method in a reductive atmosphere. According to the photoluminescence and afterglow spectra, the emission center in both the photoluminescence and the afterglow processes is the Eu2+ cation. Co-doping with Pr3+ forms new defects that can capture charge carriers after excitation. On the basis of thermoluminescence and afterglow decay measurements, the afterglow intensity of the Pr3+ co-doped sample is sharply enhanced compared with the Eu2+-only doped one, because traps of lower depth are generated in the shallow-trap area (T1 region). At the same time, the Pr3+ co-doped sample has a longer afterglow decay than the Eu2+-only doped one, because the trap concentration decreases in the deep-trap area (T2 region). The afterglow of the Pr3+ co-doped sample has two different excitation paths: in path 1, electrons of the host are promoted directly to the traps under 268 nm excitation; in path 2, electrons of Eu2+ undergo transitions from the ground state to the 5d excited state under 330 nm excitation. The two paths thus produce different afterglow behaviors of the phosphor.

Structural analysis of iron-based amorphous alloy coating deposited by AC-HVAF spray
Ye Feng-Xia, Chen Yan, Yu Peng, Luo Qiang, Qu Shou-Jiang, Shen Jun
2014, 63 (7): 078101. doi: 10.7498/aps.63.078101
Abstract + A uniform and compact Fe-based amorphous alloy coating was prepared by the activated combustion high-velocity air fuel (AC-HVAF) spray method. By tuning the parameters of the AC-HVAF spray process, the influences of the spraying gun length, the spraying distance, and the powder feed rate on non-crystallization have been studied carefully. Results indicate that the spraying gun length is the key factor in forming a fully amorphous coating, while the spraying distance and the powder feed rate mainly determine the thickness and formation rate of the coating. The prepared coatings adhere tightly to the substrate, with low porosity and a high amorphous fraction, which effectively preserves the excellent mechanical properties of the Fe-based amorphous alloy; the coating can provide good protection for the substrate material.

Hu Jian, Qiu Xi-Jun
2014, 63 (7): 078201. doi: 10.7498/aps.63.078201
Abstract + By virtue of a functional scaling, the free energy of a cytoskeletal microtubule (MT) solution system in the gravitational field is proposed theoretically, and on this basis the influence of the gravitational field on the MT self-organization process is studied. A concentration gradient coupled with the orientational order characteristic of nematic pattern formation is the new feature emerging in the presence of gravity. Theoretical calculations show that gravity facilitates the isotropic-to-nematic phase transition, which is reflected in a significantly broader transition region, and that the phase coexistence region increases with increasing g or MT concentration. We also discuss the numerical results for the local MT concentration varying with the height of the vessel, together with some phase transition properties.
Double-threshold cooperative spectrum sensing for cognitive radio based on trust
Zhang Xue-Jun, Lu You, Tian Feng, Sun Zhi-Xin, Cheng Xie-Feng
2014, 63 (7): 078401. doi: 10.7498/aps.63.078401
Abstract + This paper presents a trust-based double-threshold cooperative spectrum sensing algorithm that satisfies both reliability and efficiency requirements. Cognitive nodes whose detection statistics satisfy the double-threshold condition have priority to participate in cooperative sensing; nodes that satisfy the trust-parameter requirement may also participate, but only when the number of the former is smaller than a preset value. The fusion center stores the sensing record of each cognitive node and sets the fusion weights according to the partial detection results. Theoretical analysis and simulation show that the bandwidth required for transmitting the sensing parameters decreases, and the detection performance improves because unreliable users are excluded. Additionally, the algorithm can be adapted to different wireless services by adjusting the parameter nt.
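As a hedged illustration of the double-threshold idea described above (generic energy detection, not the authors' exact statistics): each node reports a hard decision only when its test statistic falls outside the band between the two thresholds, otherwise it abstains, and trusted abstainers are recruited only if too few confident reports arrive. All names, threshold values, and the weighting rule below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def node_report(energy, lo, hi):
    """Double-threshold local decision: 1 (occupied), 0 (idle), None (abstain)."""
    if energy > hi:
        return 1
    if energy < lo:
        return 0
    return None                       # ambiguous region: withhold the report

def fuse(energies, trust, lo=1.2, hi=1.8, min_confident=5):
    """Weighted fusion; trusted abstainers are recruited only if needed."""
    reports = [(node_report(e, lo, hi), w) for e, w in zip(energies, trust)]
    confident = [(r, w) for r, w in reports if r is not None]
    if len(confident) < min_confident:
        # recruit trusted nodes' soft decisions (nearest-threshold rule)
        extra = [(int(e > (lo + hi) / 2), w)
                 for (e, w), (r, _) in zip(zip(energies, trust), reports)
                 if r is None and w > 0.8]
        confident += extra
    num = sum(r * w for r, w in confident)
    den = sum(w for _, w in confident) or 1.0
    return int(num / den > 0.5)       # weighted majority vote

# Toy run: 10 nodes, noise-only energies near 1.0, trust weights in (0.5, 1]
energies = rng.normal(1.0, 0.3, size=10)
trust = rng.uniform(0.5, 1.0, size=10)
print("fused decision:", fuse(energies, trust))
```

The bandwidth saving claimed in the abstract corresponds to the abstaining nodes: only confident (or recruited trusted) reports are transmitted to the fusion center.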
A novel frequency selective surface of hybrid-element type with sharply decreased stop-band
Wang Yan-Song, Gao Jin-Song, Xu Nian-Xi, Tang Yang, Chen Xin
2014, 63 (7): 078402. doi: 10.7498/aps.63.078402
Abstract + The frequency selective radome is one of the most important applications of the frequency selective surface (FSS). In order to obtain better stealth performance, a novel-element FSS based on a regular slot-element FSS is presented in this paper. The novel element consists of a slot element in the center and at least two slot strips placed on the periodic boundary. We call such an FSS a "hybrid-element type FSS" because it exhibits characteristics of both slot-type and patch-type FSSs. Simulation and optimization are carried out using the periodic moment method and a discrete particle swarm optimization method, based on the application requirements of a missile radome. Simulation results show that the hybrid-element type FSS has a much steeper transition between pass-band and stop-band, and much lower transmittance in the stop-band, than the corresponding slot-type FSS. The new FSS also has much lower insertion loss in the pass-band, a much thinner profile, and a much simpler structure and fabrication process than an ordinary two-layer FSS. An equivalent sample plate was fabricated by the printed-circuit method and tested by the free-space method; the good fit between the simulation and test results verifies the accuracy and feasibility of this novel FSS design. The hybrid-element type FSS is especially suitable for stealth radomes whose working frequencies on the two sides are very close, and it provides a simple and feasible approach for developing frequency selective radomes.

An imaging algorithm for missile-borne SAR with downward movement based on variable decoupling
Jiang Huai, Zhao Hui-Chang, Han Min, Zhang Shu-Ning
2014, 63 (7): 078403. doi: 10.7498/aps.63.078403
Abstract + Ordinary SAR imaging algorithms are inapplicable to missile-borne SAR because of the high-speed dive and high squint angle of the missile. Aiming at this problem, this paper first sets up the echo model and analyzes the two-dimensional spectrum. By using azimuth nonlinear chirp scaling (NLCS) based on variable decoupling, an imaging algorithm for missile-borne SAR is proposed. It can effectively compensate the scene with longitudinal and transverse Doppler shifts and improve the focusing quality, while also simplifying the geometric image correction operation. Simulation results confirm the effectiveness of the proposed algorithm.

Synthesis of nanoparticles in SiO2 by implantation of Cu and Zn ions and their thermal stability in oxygen atmosphere
Xu Rong, Jia Guang-Yi, Liu Chang-Long
2014, 63 (7): 078501. doi: 10.7498/aps.63.078501
Abstract + Cu nanoparticles (NPs) embedded in silica were synthesized by implantation of 45 keV Cu ions at a fluence of 1.0×10^17 cm^-2, followed by post-irradiation with 50 keV Zn ions at fluences of 0.5×10^17 and 1.0×10^17 cm^-2, respectively. The modifications induced by the Zn post-implantation in the structure and optical absorption properties of the Cu NPs, as well as their thermal stability in an oxygen ambient, have been investigated in detail. Results clearly show that Cu-Zn alloy NPs can be formed in the Cu pre-implanted silica after Zn ion irradiation at a fluence of 0.5×10^17 cm^-2, which gives a unique surface plasmon resonance (SPR) absorption peak at about 516 nm. Subsequent annealing in an oxygen atmosphere results in the decomposition of the Cu-Zn alloy NPs at 450 ℃, whereupon ZnO and Cu NPs appear in the substrate. A further increase of the annealing temperature to 550 ℃ transforms all the Zn and Cu into ZnO and CuO. Moreover, the results also demonstrate that the introduction of Zn into the SiO2 substrate can effectively suppress the oxidation of the Cu NPs, while the existence of Cu promotes the thermal diffusion of Zn towards the substrate surface, which enhances the oxidation of Zn. The underlying mechanism is discussed.

Response function of angle signal in two-dimensional grating imaging
Ju Zai-Qiang, Wang Yan, Bao Yuan, Li Pan-Yun, Zhu Zhong-Zhu, Zhang Kai, Huang Wan-Xia, Yuan Qing-Xi, Zhu Pei-Ping, Wu Zi-Yu
2014, 63 (7): 078701. doi: 10.7498/aps.63.078701
Abstract + In this paper, we derive the response function of the angle signal in a two-dimensional X-ray grating interferometry system under the condition of parallel coherent light, and depict the surface of the function with Matlab. Although there are four kinds of commonly used beam-splitter gratings and three kinds of analyzer gratings, with different combinations between them, we find that the surface of the response function of the angle signal can be of only three kinds: the peak type, the valley type, and the peak-valley symmetric type of shifting surface. As there is a numerical complementary relationship between the peak type and the valley type, the two can be treated as one, and finally only two kinds of shifting surface need to be considered. This conclusion simplifies the common understanding of the two-dimensional X-ray grating interferometry method, and lays the foundation for quantitatively extracting the two-dimensional signal in the future.

Equivalent source reconstruction in inhomogeneous electromagnetic media
Zhao Chen, Jiang Shi-Qin, Shi Ming-Wei, Zhu Jun-Jie
2014, 63 (7): 078702. doi: 10.7498/aps.63.078702
Abstract + In this paper, a method that uses magnetic extremum signals for equivalent source reconstruction is presented.
Through simulations with specific current dipoles given as the sources of the magnetic field signals, the feasibility of a multi-chamber heart model is investigated and the accuracy of equivalent source reconstruction in inhomogeneous media is analyzed. The influence of the volume conductor on the cardiac magnetic field, indicated by the magnitude of the magnetic extremum signals, is analyzed. The method is compared with four other methods, namely the magnetic-gradient extremum method, the Nelder-Mead algorithm, the trust-region reflective algorithm, and the particle swarm optimization algorithm, against criteria of source reconstruction accuracy and algorithm computation time. Results show that the method is practically useful for solving inverse cardiac magnetic field problems.

Mass segmentation in mammogram based on SPCNN and improved vector-CV
Han Zhen-Zhong, Chen Hou-Jin, Li Yan-Feng, Li Ju-Peng, Yao Chang, Cheng Lin
2014, 63 (7): 078703. doi: 10.7498/aps.63.078703
Abstract + Mass segmentation plays an important role in computer-aided diagnosis (CAD) systems, since the segmentation result seriously affects the classification of masses as benign or malignant. By combining the simplified pulse-coupled neural network (SPCNN) and an improved vector active contour without edges (vector-CV), a novel method for mass segmentation in mammograms is proposed in this paper. First, the parameters and termination conditions of the SPCNN are obtained through mathematical analysis, and the initial contour is segmented by the SPCNN. Then, the vector-CV model is modified accordingly to overcome the shortcomings of the traditional CV model. Finally, combined with the initial contour, the improved vector-CV is used to segment the mass contour. Experiments on the public Digital Database for Screening Mammography (DDSM) and on clinical images provided by the Center of Breast Disease of Peking University People's Hospital indicate that the proposed method is better than existing methods, especially when dealing with the dense breasts of Oriental females.

Multiscale permutation entropy analysis of electroencephalogram
Yao Wen-Po, Liu Tie-Bing, Dai Jia-Fei, Wang Jun
2014, 63 (7): 078704. doi: 10.7498/aps.63.078704
Abstract + We carried out a detailed analysis and comparison of normal and epileptic electroencephalograms (EEG) based on multiscale permutation entropy. The relationship between the multiscale permutation entropy of the EEG and age, and the effect of the scale factor on the multiscale permutation entropy, are also discussed. We find that, at the same age, the multiscale permutation entropy of the normal group's EEG is higher than that of the epileptic group by an average of 0.19, about 7.9%. In addition, for people aged 3 to 35 the multiscale permutation entropy is clearly at its maximum; when the scale factor is smaller than 15, the entropy decreases whether the age increases or decreases away from this range. The results indicate that multiscale permutation entropy can distinguish between normal and epileptic EEGs and reflects the general process of human brain development.
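For reference, the sketch below is a standard implementation of multiscale permutation entropy of the kind used in the entry above (coarse-graining followed by ordinal-pattern entropy); the embedding order, delay, and scale range are illustrative choices, not necessarily those of the paper:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized permutation entropy (0..1) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # ordinal pattern (rank order) of each embedded vector
    patterns = np.array([np.argsort(x[i:i + (order - 1) * delay + 1:delay])
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

def multiscale_pe(x, order=4, max_scale=15):
    """Coarse-grain the signal at scales 1..max_scale, then compute PE."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        m = len(x) // s
        coarse = x[:m * s].reshape(m, s).mean(axis=1)  # non-overlapping means
        out.append(permutation_entropy(coarse, order=order))
    return np.array(out)

# Usage: mpe = multiscale_pe(eeg_channel, order=4, max_scale=15)
```

The coarse-graining step is what makes the measure "multiscale": each scale s averages s consecutive samples before the ordinal patterns are counted, so the curve of entropy versus scale captures complexity at different time resolutions.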
Effects of NPB anode buffer layer on the performances of inverted bulk heterojunction polymer solar cells Gong Wei, Xu Zheng, Zhao Su-Ling, Liu Xiao-Dong, Yang Qian-Qian, Fan Xing 2014, 63 (7): 078801. doi: 10.7498/aps.63.078801 Abstract: Inverted-configuration bulk heterojunction polymer solar cells based on ITO/ZnO/P3HT:PCBM/NPB/Ag were fabricated, with poly(3-hexylthiophene) (P3HT) as the donor material and [6,6]-phenyl-C60-butyric acid methyl ester (PCBM) as the acceptor material. Thin N,N'-diphenyl-N,N'-bis(1-naphthyl)-1,1'-biphenyl-4,4'-diamine (NPB) anode buffer layers of different thicknesses were used to improve the performance of the devices, and the effects of the NPB anode buffer were investigated. The insertion of a 1 nm thick NPB layer improves the charge collection of the device, and both the short-circuit current and the open-circuit voltage are enhanced. When the thickness of the NPB reaches 25 nm, the series resistance is significantly increased, leading to reduced device performance. The effects of different NPB thicknesses on charge injection and collection were investigated by capacitance-voltage measurements. NPB of 1 nm thickness improves the charge collection of the device without improving charge injection, and the charge recombination mechanism becomes dominant if the NPB layer is too thick. An NPB thin layer of appropriate thickness can thus be used to enhance the performance of bulk heterojunction polymer solar cells. Study on cascading invulnerability of multi-coupling-links coupled networks based on time-delay coupled map lattices model Peng Xing-Zhao, Yao Hong, Du Jun, Ding Chao, Zhang Zhi-Hao 2014, 63 (7): 078901. doi: 10.7498/aps.63.078901 Abstract: The couplings among different networks facilitate their communication, while at the same time they bring the risk of spreading cascading failures widely across the coupled networks. Given that there is usually a time delay during the spread of failures and that a node might possess more than one coupling link, a cascading failure model for scale-free multi-coupling-link coupled networks is built in this paper, based on the time-delay coupled map lattices (CML) model, which may be more widely representative than previous models. Our research shows that in BA (Barabási-Albert) scale-free coupled networks there is a threshold hT ≈ 3: when the coupling strength is below this threshold, stronger coupling corresponds to lower invulnerability; above it, stronger coupling brings higher invulnerability. In addition, our studies show that the presence of time delay not only prolongs the failure spreading time, during which measures can be taken to suppress cascading failures, but also has a significant influence on the eventual cascading size; in particular, the intra-layer time-delay τ1 and the inter-layer time-delay τ2 jointly determine the eventual cascading size, with certain combinations of the two producing a larger cascade. We hope this research can serve as a reference for building highly invulnerable coupled networks or for increasing the invulnerability of existing ones. Virtual trajectory model for lane changing of a vehicle on curved road with variable curvature Ren Dian-Bo, Zhang Jing-Ming, Wang Cong 2014, 63 (7): 078902. doi: 10.7498/aps.63.078902 Abstract: In this paper, a virtual trajectory planning method for vehicle lane changing in an automated highway system is studied, and a trajectory model for lane changing on a road of variable curvature is established with odd-order polynomial constraints. Assuming that the starting lane and the target lane have the same instantaneous center, the lane-changing motion of a vehicle on a curved road can be decomposed into a linear centripetal motion and a circular motion around the instantaneous center of the curved road. If the centripetal displacement and the rotational angular displacement satisfy odd-order polynomial constraints, the boundary conditions for these two motions can be obtained from constraints such as the time, location, and desired state of the vehicle at the start and end of the lane change. By applying the boundary conditions, the polynomial coefficients are deduced, and the mathematical model of the virtual trajectory for lane changing can be designed based on the polynomial models of the centripetal displacement and the angular displacement. Compared with existing trajectory planning methods for lane changing on curved roads, the curvature change is taken into consideration and the trajectory model for lane changing is generalized. Simulation results verify the feasibility of the trajectory planning method proposed in this paper for lane changing on a curved road with variable curvature.
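As an illustration of the odd-order polynomial constraints just described, the sketch below solves for quintic polynomial coefficients from boundary conditions on displacement, velocity, and acceleration; the boundary values are hypothetical, not taken from the paper, and the same fit would be applied to both the centripetal displacement and the angular displacement.

```python
import numpy as np

def quintic_coefficients(tf, y0, v0, a0, yf, vf, af):
    """Coefficients c of y(t) = sum_i c[i] * t**i (i = 0..5) matching
    displacement, velocity and acceleration at t = 0 and t = tf."""
    A = np.array([
        [1, 0,  0,     0,        0,         0],         # y(0)
        [0, 1,  0,     0,        0,         0],         # y'(0)
        [0, 0,  2,     0,        0,         0],         # y''(0)
        [1, tf, tf**2, tf**3,    tf**4,     tf**5],     # y(tf)
        [0, 1,  2*tf,  3*tf**2,  4*tf**3,   5*tf**4],   # y'(tf)
        [0, 0,  2,     6*tf,     12*tf**2,  20*tf**3],  # y''(tf)
    ], dtype=float)
    return np.linalg.solve(A, np.array([y0, v0, a0, yf, vf, af], dtype=float))

# Hypothetical lane change: 3.5 m of centripetal displacement in 4 s,
# starting and ending at rest relative to the lane.
c = quintic_coefficients(4.0, 0.0, 0.0, 0.0, 3.5, 0.0, 0.0)
t = np.linspace(0.0, 4.0, 5)
print(np.round(sum(ci * t**i for i, ci in enumerate(c)), 3))
```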
Ensemble variational data assimilation method based on regional successive analysis scheme Wu Zhu-Hui, Han Yue-Qi, Zhong Zhong, Du Hua-Dong, Wang Yun-Feng 2014, 63 (7): 079201. doi: 10.7498/aps.63.079201 Abstract: The ensemble variational data assimilation method may be subject to significant uncertainties due to the size of the forecast ensemble. We found that this problem occurs because the analysis increment of this method is expressed as a linear combination of ensemble perturbation vectors, or as an expansion in the orthogonal basis vectors. Though this method avoids introducing an adjoint model when calculating the gradient of the objective function, the number of physical control variables is much larger than the sample size of the forecast ensemble, which causes the assimilation results to be sensitive to the number of ensemble members. For this reason, a regional successive analysis scheme for the ensemble variational method is proposed. In this scheme, the ratio between the number of physical control variables in each analysis region and the sample size is decreased, which is expected to mitigate the problem. The results of numerical experiments using a shallow water model show that the regional successive analysis scheme gives better assimilation results than the traditional method, and the analysis precision is improved appreciably. Relationship between the quasi-linear diffusion coefficients and the key parameters of spatial energetic electrons Zhang Zhen-Xia, Wang Chen-Yu, Li Qiang, Wu Shu-Gui 2014, 63 (7): 079401. doi: 10.7498/aps.63.079401 Abstract: It has been shown that ground-based electromagnetic waves can propagate into the ionosphere and interact with high-energy particles. By changing their pitch angle and momentum, particles are driven into the bounce loss cone and the drift loss cone; electron precipitation then takes place and particle bursts form. In recent decades, relationships among electromagnetic disturbances, particle bursts, and seismic activity have been observed in satellite data.
Here, using the wave-particle cyclotron resonant interaction combined with the observation range of LEO satellites (about 350–1000 km), the trend of the pitch-angle quasi-linear diffusion coefficients induced by field-aligned electromagnetic waves is studied as a function of the VLF electromagnetic wave frequency, the bandwidth, the electron energy (0.1–20 MeV), and the L shell (L = 1.1–3). We also show the relationship between the VLF electromagnetic wave frequency and the minimum energy of the precipitating electrons it induces, at a given pitch angle. The relationships among these quantities may be used to provide a theoretical explanation for satellite observations of energetic particle precipitation events, to guide the extraction of earthquake-related information from the detection of high-energy particles on satellites, and to lay the foundation for the data analysis of the China Seismo-Electromagnetic Satellite planned for launch around the end of 2016.
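For reference, the cyclotron resonance underlying such quasi-linear diffusion calculations is usually written (in one common convention, stated here as an assumption about the authors' setup) as

$$ \omega - k_{\parallel} v_{\parallel} = \frac{n\,\Omega_e}{\gamma}, \qquad n = 0, \pm 1, \pm 2, \ldots, $$

where $\omega$ and $k_\parallel$ are the wave frequency and parallel wavenumber, $v_\parallel$ is the electron's parallel velocity, $\Omega_e$ the electron gyrofrequency, and $\gamma$ the Lorentz factor. For a given wave frequency and L shell, this condition fixes the minimum energy of the electrons that can resonate, which is the frequency-minimum-energy relationship referred to above.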
Modeling Kelvin Wave Cascades in Superfluid Helium Guido Boffetta, Antonio Celani, Davide Dezzani, Jason Laurie and Sergey Nazarenko Dipartimento di Fisica Generale and INFN, Università degli Studi di Torino, v. Pietro Giuria 1, 10125, Torino, Italy and CNR-ISAC, Sezione di Torino, c. Fiume 4, 10133 Torino, Italy CNRS, Institut Pasteur, Rue du docteur Roux 25, 75015 Paris, France Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK July 3, 2019 We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA) and it is shown to possess the same scalings and the essential behaviour as the full Biot-Savart model, being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), makes the further drastic simplification of assuming locality of interactions in $k$-space via a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study and form a useful tool by providing simple analytical solutions in cases when extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra, and the closeness of the numerics for the higher-order DAM to the analytical predictions for the lower-order DAM. Keywords: Kelvin waves, wave turbulence. PACS: 67.85.De, 47.37.+q I Introduction It is well known that a classical vortex filament can support linear waves. These were predicted by Kelvin more than one century ago and experimentally observed about 50 years ago in superfluid helium. At very low temperature, where the friction induced by the normal fluid component can be neglected, Kelvin waves can be dissipated only at very high frequencies, by phonon emission [V01]. Therefore at lower frequencies, energy is transferred among different wavenumbers by nonlinear coupling. This is the mechanism at the basis of the Kelvin wave cascade which sustains superfluid turbulence [S95, V00]. In recent years, the single-vortex Kelvin wave cascade has attracted much theoretical [KS04, N06], numerical [KVSB01, VTM03, KS05] and experimental [WGHLV07] attention. Even within the classical one-dimensional vortex model, different degrees of simplification are possible. For small amplitudes, the vortex configuration can be described by a two-component vector field, made of the coordinates of the vortex line in the plane transverse to the direction of the unperturbed filament. These depend on the single coordinate that runs along the filament. As was shown in [S95], this system of equations admits a Hamiltonian formulation, dubbed the two-dimensional Biot-Savart formulation (2D-BS), see (2) below.
Another, more drastic, simplification is obtained by considering local interactions only. This leads to the local induction approximation (LIA), which was originally derived starting from the full 3D-BS [AH65]. The main limitation of LIA is that it generates an integrable system with infinitely many conserved quantities, as it is equivalent to the nonlinear Schrödinger equation [Hasimoto72]; therefore, resonant wave interactions are absent (at all orders) and one cannot reproduce the phenomenology of the full system. For this reason LIA, despite its simplicity, is of little help for the study of weak Kelvin wave turbulence. On the other hand, LIA contains solutions leading to self-crossings (numerical [Schwarz] and analytical [S95]) and, therefore, it can qualitatively describe vortex line reconnections in strong 3D-BS turbulence (“vortex tangle”). In this paper we consider simple models for a vortex filament that is able to sustain a turbulent energy cascade. The first model is obtained in the limit of small amplitudes by a Taylor expansion of the 2D-LIA. The truncation breaks the integrability of the Hamiltonian and therefore generates a dynamical system with two inviscid invariants (energy and wave action). For this class of systems, whose prototype is two-dimensional Navier-Stokes turbulence [KM80], we expect a dual cascade phenomenology in which one quantity flows to small scales generating a direct cascade while the other goes to larger scales producing an inverse cascade. The possibility of a dual cascade scenario for Kelvin wave turbulence has been recently suggested [lebedev, N06] but never observed, and in this paper we present the first numerical evidence for the inverse cascade. The second class of simplified models, Differential Approximation Models (DAMs), uses a closure in which the multi-dimensional $k$-space integral in the wave interaction term (the collision integral in the wave kinetic equation) is replaced by a nonlinear differential term that preserves the main properties and scalings of the Kelvin wave dynamics, such as the conservation of energy and wave action and the scaling of the characteristic evolution time with respect to the wave intensity and the wavenumber. DAMs have proved to be a very useful tool in the analysis of fluid dynamical and wave turbulence in the past [L68, H85, I85, ZP99, c04, L04, N06, L06], and here we study them in the context of Kelvin wave turbulence. DAMs are particularly useful when one would like to understand the temporal evolution of the spectrum, when physical forcing and dissipation need to be included, or when the Kelvin wave system is subject to more involved boundary conditions leading to the simultaneous presence of two cascades in the same range of scales, or to a thermalization (bottleneck) spectrum accumulating near a flux-reflecting boundary. In the second part of this paper, we present numerical studies of DAMs in the presence of some of these physical factors and we test some previously obtained analytical predictions. II BSE At a macroscopic level, the superfluid vortex filament is a classical object whose dynamics is often described by the Biot-Savart equation (BSE), which describes the self-interaction of vortex elements. The quantum nature of the phenomenon is encoded in the discreteness of the circulation [Donnelly91].
The BSE dynamics of the vortex filament admits a Hamiltonian formulation under a simple geometrical constraint: the position of the vortex is represented in the two-dimensional parametric form $(x(z), y(z))$, where $z$ is a given axis. From a geometrical point of view, this corresponds to small perturbations with respect to the straight-line configuration, i.e. the vortex cannot form folds, in order to preserve the single-valuedness of the $x$ and $y$ functions. In terms of the complex canonical coordinate $w(z) = x(z) + i y(z)$, the BSE can be written in the Hamiltonian form (1) with the Hamiltonian (2) [S95]. The geometrical constraint of a small-amplitude perturbation can be expressed in terms of a small parameter. An enormous simplification, both for theoretical and numerical purposes, is obtained by means of the so-called local induction approximation (LIA) [AH65]. This approximation is justified by the observation that (1) is divergent at vanishing separation between vortex elements, and it is obtained by introducing in the integral in (1) a cutoff at a scale representing the vortex filament radius. When applied to Hamiltonian (2), the LIA procedure gives (3) [S95], where $\ell$ is a length of the order of the curvature radius (or of the inter-vortex distance when the considered vortex filament is part of a vortex tangle). Here it was taken into account that, because the filament radius is much smaller than any other characteristic size in the system, the large logarithm will be about the same whatever characteristic scale we take in its definition. We remark that in the LIA approximation the Hamiltonian is proportional to the vortex length, which is therefore a conserved quantity. The equation of motion from (3) is (4); we set the overall prefactor to unity without loss of generality, i.e. we rescale time accordingly. As a consequence of the invariance under phase transformations, equation (4) also conserves the total wave action (5) (also called the kelvon number [KS04]). In addition to these two conserved quantities, the 2D-LIA model possesses an infinite set of invariants and is integrable, as it is the LIA of the BSE (which can be transformed into the nonlinear Schrödinger equation by the Hasimoto transformation, see Appendix B). Due to the integrability, in weak Kelvin wave turbulence (weakness of the waves implies that there are no vortex line reconnections; for strong waves, reconnections can occur and can qualitatively be described by the self-crossing solutions of LIA [S95]; at self-crossing events the LIA model fails, but it can be “reset” via an ad hoc reconnection procedure [Schwarz]) the energy and the wave action cannot cascade within the LIA model, but this can be fixed by a simple truncation, as we show in the next section. III Truncated LIA Integrability is broken if one considers a truncated expansion of the Hamiltonian (3) in powers of the wave amplitude. Taking into account the lower-order terms only, one obtains (6). Neglecting the constant term, the Hamiltonian can be written in Fourier space as (7), with the standard notation for the wave amplitudes and frequencies. In Wave Turbulence, a near-identity transformation allows one to eliminate “unnecessary” lower orders of nonlinearity in the system if the corresponding order of the wave interaction process is nil [ZLF92]. For example, if there are no three-wave resonances, then one can eliminate the cubic Hamiltonian. (The quadratic Hamiltonian, corresponding to the linear dynamics, of course stays.) This process can be repeated recursively, in a way similar to KAM theory, until the lowest order of non-trivial resonances is reached. If no such resonances appear at any order, one has an integrable system.
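For reference, a hedged reconstruction of the expressions referred to as (2)-(7), following the conventions of [S95] (the prefactors and normalizations here are assumptions), reads:

$$ H_{\mathrm{2D\text{-}BS}}[w] \;\propto\; \kappa \int\!\!\int \frac{1 + \mathrm{Re}\left[\bar w'(z_1)\, w'(z_2)\right]}{\sqrt{(z_1 - z_2)^2 + |w(z_1) - w(z_2)|^2}}\, \mathrm{d}z_1\, \mathrm{d}z_2, $$

$$ H_{\mathrm{LIA}} \;\propto\; \kappa \ln(\ell/a) \int \sqrt{1 + |w'(z)|^2}\;\mathrm{d}z, \qquad i\,\dot w = \frac{\delta H}{\delta \bar w}, \qquad N = \int |w|^2\, \mathrm{d}z, $$

and, after rescaling time to absorb the prefactor, the truncated expansion

$$ \sqrt{1 + |w'|^2} \;\approx\; 1 + \tfrac{1}{2}|w'|^2 - \tfrac{1}{8}|w'|^4 \quad\Longrightarrow\quad H \simeq \mathrm{const} + \int \left(\tfrac{1}{2}|w'|^2 - \tfrac{1}{8}|w'|^4\right) \mathrm{d}z, $$

whose quadratic part gives the dispersion relation $\omega_k \propto k^2$.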
In our case, there are no four-wave resonances (with $\omega_k \propto k^2$, the four-wave resonance conditions have no non-trivial solutions in one dimension). However, there are nontrivial solutions of the six-wave resonance conditions. Thus, one can use the near-identity transformation to convert system (7) into one whose lowest-order nonlinear interaction is of degree six (there are no five-wave resonances: the interaction coefficients in the quintic Hamiltonian are identically equal to zero after applying the canonical transformation). A trick providing a shortcut derivation of such a transformation is described in [ZLF92]. It relies on the fact that the time-evolution operator is a canonical transformation. Taking the Taylor expansion of the evolved variable around the initial time, we get the desired transformation, which is, by its derivation, canonical. The coefficients of each term can be calculated from an auxiliary Hamiltonian, which represents a generic Hamiltonian for the canonical variable; thus, specifying the auxiliary Hamiltonian in the canonical transformation sets the interaction coefficients of the transformed system. Here all the interaction coefficients (the terms denoted with tildes) present in the auxiliary Hamiltonian are arbitrary. A similar procedure was carried out in Appendix A of [ZLF92] to eliminate the cubic Hamiltonian in cases when the three-wave interaction is nil, and here we apply a similar approach to eliminate the quartic Hamiltonian. The transformation is represented as (9). The transformation is canonical for all values of the auxiliary evolution time, so for simplicity we set it to one. The coefficients of (9) can be calculated from formulae (10). Because the original Hamiltonian (6) has gauge symmetry, there are no cubic Hamiltonian terms; this greatly simplifies the canonical transformation (9), because the absence of any non-zero three-wave interaction coefficients automatically fixes the arbitrary cubic (and quintic) interaction coefficients within the auxiliary Hamiltonian to zero. Thus, transformation (9) reduces to (11). To eliminate the nonresonant four-wave interactions in Hamiltonian (7), we substitute transformation (11) into Hamiltonian (7). This yields a new representation of Hamiltonian (7) in the transformed variable, in which the nonresonant terms (more specifically the four-wave interaction terms) involve both the original and the auxiliary interaction coefficients. The arbitrariness of the auxiliary coefficients enables us to select them so as to eliminate the total four-wave interaction term. In our case this selection is (12). This choice is valid, as the denominator does not vanish due to the nonresonance of the four-wave interactions. Hamiltonian (7), expressed in the transformed variable, becomes (13). One term is the arbitrary six-wave interaction coefficient arising from the auxiliary Hamiltonian; it does not contribute to the six-wave resonant dynamics, as the factor in front of it vanishes on the resonant manifold that appears in the kinetic equation. The other term is the six-wave interaction coefficient resulting from the canonical transformation, which is defined later in equation (14). To deal with the arbitrary interaction coefficient, one can decompose the six-wave coefficient into its value taken on the six-wave resonant manifold plus a residue. We can then choose the arbitrary coefficient so that it directly cancels the residual value. This enables us to write the Hamiltonian as (III). The explicit form of the interaction coefficient is expressed by (14). Zakharov and Schulman discovered a parametrisation [ZS82] of the six-wave resonant condition. This parametrisation allows us to explicitly calculate the interaction coefficient on the resonant manifold.
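Explicitly, with $\omega_k \propto k^2$ the six-wave (3 → 3) resonant manifold mentioned above is the set of wavenumbers satisfying

$$ k_1 + k_2 + k_3 = k_4 + k_5 + k_6, \qquad k_1^2 + k_2^2 + k_3^2 = k_4^2 + k_5^2 + k_6^2, $$

which, unlike the four-wave conditions $k_1 + k_2 = k_3 + k_4$, $k_1^2 + k_2^2 = k_3^2 + k_4^2$, admits non-trivial solutions in one dimension.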
This is important because the wave kinetics take place on this manifold, which corresponds to the delta functions of wavenumbers and frequencies within the kinetic equation. When this parametrisation is used with equation (14), we find that the resonant six-wave interaction coefficient simplifies considerably. Note that this is identical, with opposite sign, to the next term in the LIA expansion. This sixth-order interaction coefficient is obtained from the coupling of two fourth-order vertices. It is not surprising that the resulting expression coincides, with the opposite sign, with the corresponding interaction coefficient in the expansion of (3). Indeed, (3) is an integrable model, which implies that if we retained the next order too, then the resulting six-wave process would be nil, and the leading order would be an eight-wave process in that case. In fact, in coordinate space the Hamiltonian takes the simple form (16). Thus, the existence of the six-wave process is a consequence of the truncation (6) of the Hamiltonian. Hamiltonian (7) (or equivalently (III) or (16)) constitutes the truncated-LIA model for Kelvin wave turbulence. It possesses the same scaling properties as the BSE system: it conserves the energy and the wave action, and gives rise to a dual-cascade six-wave system with an interaction coefficient of the same order of homogeneity as that of the BSE. A slight further modification should be made in the time re-scaling factor, namely dropping the large log factor from the original definition. Physical insight into Kelvin wave turbulence is obtained from the wave turbulence (WT) approach, which yields a kinetic equation describing the dynamics of the wave action density $n_k$. The dynamical equation for the wave amplitude can be derived from Hamiltonian (III) by the standard variational relation, and is given by (17). Multiplying equation (17) by the conjugate amplitude, subtracting the complex conjugate and averaging, we arrive at (18). Assuming a Gaussian wave field, one can take the sixth-order correlator at zeroth order, which simplifies via Gaussian statistics to a product of three pair correlators, (19). However, due to symmetry, this makes the right-hand side of the kinetic equation zero. To find a nontrivial answer we need to obtain the first-order correction to this correlator. To calculate it, one takes its time derivative, using the equation of motion (17) of the canonical variable, and inserts the zeroth-order approximation for the tenth-order correlation function (this is similar to equation (19), but a product of five pair correlators involving ten wavevectors). The correction can then be written as (20). The first term of (20) is a fast-oscillating function; its contribution to the integral (18) decreases with time and is negligible at large times, and as a result we ignore the contribution arising from this term. The second term is substituted back into equation (18), the standard pole relation is applied because of the integration around the pole, and the kinetic equation (21) is derived. A simple dimensional analysis of (21) gives the scaling of the nonlinear evolution time, which is of the same form as obtained from the full BSE [KS04]. In wave turbulence theory, one is concerned with non-equilibrium steady-state solutions of the kinetic equation (21). These solutions, which rely on a constant (non-zero) flux in some inertial range, are known as Kolmogorov-Zakharov (KZ) solutions. In addition, the kinetic equation (21) admits solutions that correspond to the thermodynamic equipartition of energy and wave action.
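Schematically, the resulting six-wave kinetic equation has the standard wave-turbulence form (written generically here; the interaction coefficient $W$ is the resonant coefficient computed above):

$$ \frac{\partial n_1}{\partial t} \;\propto\; \int |W|^2\, \delta(k_1 + k_2 + k_3 - k_4 - k_5 - k_6)\, \delta(\omega_1 + \omega_2 + \omega_3 - \omega_4 - \omega_5 - \omega_6)\; \mathcal{F}\; \mathrm{d}k_2 \cdots \mathrm{d}k_6, $$

$$ \mathcal{F} = n_1 n_2 n_3 n_4 n_5 n_6 \left( \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3} - \frac{1}{n_4} - \frac{1}{n_5} - \frac{1}{n_6} \right), $$

which manifestly vanishes on the Rayleigh-Jeans distributions discussed next.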
These equilibrium solutions stem from limiting cases of the more general Rayleigh-Jeans distribution $n_\omega = T/(\omega + \mu)$, where $T$ is the temperature of the system and $\mu$ is a chemical potential. To find the KZ solutions, one can apply a dimensional argument to both the energy and wave action fluxes. The energy flux at wavenumber $k$ is defined through the continuity equation for the energy spectrum. Requiring the existence of a range of scales in which the energy flux is $k$-independent leads to the spectrum $n_k \propto k^{-17/5}$, which, again, is of the same form as obtained from the full BSE [KS04]. A similar argument can be applied to the wave action (5) and its flux: a scale-independent flux of wave action requires a spectrum $n_k \propto k^{-3}$ [lebedev, N06]. A word of caution is due about both spectra (24) and (25), because the dimensional analysis does not actually guarantee that they are true solutions of the kinetic equation. To check whether these spectra are real solutions (and therefore physically relevant) one has to prove their locality, i.e. the convergence of the kinetic equation integral on these spectra. This has not been done yet, neither for the full BSE nor for the truncated-LIA model, and this is especially worrying since the spectrum (24) has already been accepted by a sizable part of the quantum turbulence community and has been used in further theoretical constructions. We note that work is in progress to check the locality of spectra (24) and (25) in both the BSE and truncated-LIA settings. However, as we will see later, at least for the truncated-LIA these spectra are observed numerically, so we will tentatively assume that they are true and relevant solutions. The two spectra (24) and (25) occur in different scale ranges and the two cascades develop in opposite directions, as in the case of two-dimensional turbulence [KM80]. Among the two conserved quantities, the largest contribution to the energy comes from smaller scales than those that contribute to the wave action (because the former contains the field derivatives). Therefore, according to the Fjørtoft argument [Fjortoft53], we expect a direct cascade of energy, with spectrum (24), flowing to large $k$ and an inverse cascade of wave action, with spectrum (25), flowing to small $k$. IV Numerical Results for the Truncated-LIA In the following we consider numerical simulations of the system (6) under conditions in which a stationary turbulent cascade develops. Energy and wave action are injected into the vortex filament by a white-in-time external forcing acting on a narrow band of wavenumbers around a given forcing wavenumber. In order to have a stationary cascade, we need additional terms which remove them at the largest and smallest scales. The equation of motion obtained from (6) is therefore modified into (26). In (26) the small-scale dissipative (hyperviscous) term physically represents the radiation of phonons, and the large-scale damping term can be interpreted as the friction induced by the normal fluid. Assuming the spectra (24) and (25), a simple dimensional analysis gives the IR and UV cutoffs induced by the dissipative terms: the direct cascade is terminated at a small-scale cutoff set by the hyperviscosity, while the inverse cascade is stopped at a large-scale cutoff set by the friction. Therefore, in an idealized realization of infinite resolution one would obtain a double cascade by pushing both cutoffs far from the forcing scale. In order to have an extended inertial range in finite-resolution numerical simulations, we restrict ourselves to resolving a single cascade at a time by switching off one of the two dissipation terms for the direct and inverse cascades respectively.
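A minimal sketch of a forced-dissipated pseudospectral integration of this kind is given below (using a simple split-step scheme rather than the Runge-Kutta scheme described next); the coefficients, forcing band, damping rates, and time step are illustrative assumptions, not the values used in the production runs.

```python
import numpy as np

# Truncated-LIA model, i w_t = -(1/2) w_zz + (1/4) d/dz(|w_z|^2 w_z),
# with white-in-time forcing near k_f, hyperviscosity at high k and
# friction at low k (all coefficient values are assumed, for illustration).
N = 1024
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers on a 2*pi box
kf, eps, nu, alpha, dt = 5, 1e-3, 1e-16, 1e-2, 1e-4
forced = np.abs(np.abs(k) - kf) <= 1             # narrow forcing band
damp = nu * k**8 + alpha * (np.abs(k) <= 2)      # phonon-like + friction-like sinks
dealias = np.abs(k) < N // 3                     # 2/3-rule dealiasing mask
decay = np.exp(-(0.5j * k**2 + damp) * dt)       # exact factor for the linear part

rng = np.random.default_rng(1)
wk = np.zeros(N, dtype=complex)                  # start from a straight filament
for step in range(50_000):
    wz = np.fft.ifft(1j * k * wk)                             # w_z in physical space
    nl = 0.25 * k * np.fft.fft(np.abs(wz)**2 * wz) * dealias  # quartic-term force
    noise = forced * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    wk = (wk + dt * nl + eps * np.sqrt(dt) * noise) * decay

n_k = np.abs(wk)**2   # wave action spectrum, to be compared with k**(-17/5)
```

With these placeholder rates the direct-cascade inertial range sits between the forcing band and the hyperviscous cutoff; moving the forcing to high wavenumbers and exchanging the roles of the two sinks reverses the setup for the inverse cascade.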
We have developed a numerical code which integrates the equation of motion (26) by means of a pseudospectral method for a periodic vortex filament, with a resolution of N points. The linear and dissipative terms are integrated explicitly while the nonlinear term is solved by a second-order Runge-Kutta time scheme. The vortex filament is initially a straight line (w = 0) and long-time integration is performed until a stationary regime (indicated by the values of the energy and the wave action) is reached. The ratio between the two terms in the series (6) remains small, confirming a posteriori the validity of the perturbative series (6) and the small-amplitude condition used in the derivation of equation (2). The first set of simulations is devoted to the study of the direct cascade. Energy fluctuations are injected at a small forcing wavenumber and the friction coefficient is set so that the large-scale damping acts only at the very largest scales. Energy is removed at small scales by a high-order hyperviscosity, which restricts the range of dissipation to wavenumbers in the close vicinity of the maximum resolved wavenumber. Figure 1: Wavenumber spectrum for a simulation of the direct cascade in stationary conditions. Forcing is restricted to a narrow range of wavenumbers and dissipation by phonon emission is modeled with a high-order hyperviscosity. The straight line represents the kinetic equation prediction $n_k \propto k^{-17/5}$. The inset shows the spectrum compensated with the theoretical prediction. In Figure 1 we plot the wave action spectrum for the direct cascade simulation, averaged over time in stationary conditions. A well-developed power-law spectrum very close to prediction (24) is observed over more than one decade (see inset). This spectrum confirms the existence of non-trivial dynamics with six-wave processes for the truncated Hamiltonian (6). The direct cascade of the full Biot-Savart Hamiltonian (2) was discussed by Kozik and Svistunov, who gave the dimensional prediction (24) [KS04] and later performed a numerical simulation of the nonlocal BSE to confirm the scaling [KS05]. Figure 2: Wavenumber spectrum for a simulation of the inverse cascade in stationary conditions. Forcing is restricted to a range of wavenumbers at small scales and dissipation by phonon emission is modelled with a high-order hyperviscosity. The straight line represents the kinetic equation prediction $n_k \propto k^{-3}$. The inset shows the spectrum compensated with the theoretical prediction. We now turn to the simulation of the inverse cascade regime. To obtain an inverse cascade, the forcing is concentrated at small scales. In order to avoid finite-size effects and accumulation at the largest scale [SY94], the friction coefficient is chosen in such a way that wave action is removed at a scale smaller than the box size. Figure 2 shows the spectrum for this inverse cascade in stationary conditions. In the compensated plot, a small deviation from the power-law scaling at small wavenumbers is observed, probably due to the presence of condensation (“inverse bottleneck”). Nevertheless, a clear scaling compatible with the dimensional analysis of the kinetic equation is observed over about a decade. V Differential Approximation Model Differential Approximation Models (DAMs) have proved to be a very useful tool in the analysis of fluid dynamical and wave turbulence [L68, H85, I85, ZP99, c04, L04, N06, L06]. These equations are constructed using a differential closure such that the main scalings of the original closure (the kinetic equation in our case) are preserved.
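To illustrate the general idea (this is a schematic second-order closure for the energy spectrum, not the specific Kelvin-wave DAM of [N06]), one may write

$$ \frac{\partial E_\omega}{\partial t} = \frac{\partial}{\partial \omega}\left[ S(\omega, E_\omega)\, \frac{\partial}{\partial \omega}\!\left(\frac{E_\omega}{\omega^{x}}\right)\right], $$

where $S \ge 0$ and the exponent $x$ are chosen to reproduce the scalings of the underlying kinetic equation. Energy is conserved by construction, since the right-hand side is a flux divergence; the steady states comprise a zero-flux thermodynamic solution $E_\omega \propto \omega^{x}$ and a constant-flux KZ-type solution fixed by the form of $S$, which is exactly the pair of solution families the Kelvin-wave DAMs are built to contain.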
In addition, there exists a family of simpler, reduced DAMs for which rigorous analysis can be performed on their solutions. These appear to be quite helpful when the full details of the dynamics are not needed. Moreover, due to the DAMs' simplicity, one can add physically relevant forcing and dissipative terms to the models. For the Kelvin wave spectra (24) and (25), including the thermodynamic equipartition solutions (23), the corresponding DAM is given by (27) [N06], where the prefactor involves the vortex line circulation, a dimensionless constant, and the Kelvin wave frequency. Notice that the DAM is written in terms of the frequency $\omega$ rather than the wavenumber $k$, as in the case of the kinetic equation (21). Equation (27) preserves the energy and the wave action. The forcing of Kelvin waves on quantized vortices arises from the sharp cusps produced by vortex reconnections [N06]. These reconnections can be represented in the DAM by the addition of a forcing term [N06]. The transfer of energy flux in Kelvin wave turbulence proceeds towards high wavenumbers until the Kelvin wave frequencies become large enough to excite phonons in the fluid and thus dissipate the energy of the Kelvin waves into the surrounding fluid. One can introduce sound dissipation derived from Lighthill's theory in classical hydrodynamical turbulence [L52]; in the context of quantum turbulence this has been done in [KS05-2]. This corresponds to the addition of a dissipative term to the DAM [N06]. (This expression is slightly corrected with respect to the one introduced dimensionally in [N06], to make it consistent with the more rigorous analysis of [KS05-2].) Finally, one can add an additional term that describes the effect of friction with the normal fluid component [L04, L06]. Thus, one may write a generalized DAM (33), where the nonlinear function is the interaction term, which in the case of the complete DAM is the RHS of equation (27). Other (reduced) DAMs include either solutions for the direct and inverse cascades with no (thermodynamic) equipartition solutions, as in (34), or just the direct energy cascade and the corresponding thermodynamic energy solution (and no inverse cascade solutions), as in (35). These have the advantage of containing only second-order derivatives, and as such one may find analytical solutions for the steady-state dynamics. For example, for model (34) we can ask: how does the vortex reconnection forcing build the energy flux in frequency space? For this, we keep the nonlinear transfer and the reconnection forcing terms, drop the dissipation term, and find a solution (36) for the energy flux and (37) for the wave action density, in which the integration constants are the asymptotic values of the energy and wave action fluxes respectively. For equation (35), without any forcing or dissipation terms, we find that the corresponding analytical steady-state solution is (38), which is a “warm cascade” solution, i.e. a direct energy cascade gradually transitioning into a thermalized “bottleneck” over a range of scales [c04, N06]. VI Numerical Results for DAMs We performed numerical simulations of the DAMs (27), (34) and (35), using a second-order finite-difference method. We fix the resolution, with the phonon radiation dissipation term acting at the highest resolved frequencies. The parameters and the estimated asymptotic values of the fluxes are listed in Table 1. The dimensionless factor in equation (27) has been fixed to unity.
Table 1: Parameters (forcing amplitude) and estimated fluxes (energy flux and wave action flux) of the simulations for the reduced and complete DAM models. Initially, we simulate both the complete DAM (27) and the reduced DAM (34), with forcing, without friction, and both with and without the dissipative term for phonon radiation. The results for the energy flux and the spectrum for the complete DAM are shown in Figure 3 (results for the reduced DAM are nearly identical and are therefore not shown). The top panel in Figure 3 shows the energy flux. We see a good agreement with the analytical prediction (36) over a large intermediate range of scales. In the case without phonon dissipation, the numerical result for the energy flux follows the analytical prediction perfectly at high frequencies. In the case with phonon dissipation, the agreement with the analytical prediction (36) is good over a long range up to very high frequencies, where the phonon dissipation suddenly kicks in. Such a sudden onset of dissipation is due to the abrupt growth of the phonon radiation term as a function of the frequency. The wave action spectra are shown in the bottom panel of Figure 3. We see a good agreement with the analytical solution (37), where the wave action flux is taken to be zero (a flux of wave action could only be generated by extra forcing at the high-frequency end, which is absent in our case). We observe only a slightly steeper spectrum compared with the analytical prediction for the direct energy cascade. This agreement is remarkable because the analytical solution is strictly valid only for the reduced and not for the complete DAM. This shows that the reduced DAM does rather well in predicting the behaviour of the more complete nonlinear model. Naturally, in the case with phonon dissipation we see a deviation from the analytical solution at very small scales (a rather sharp cut-off). Figure 3: Complete DAM: the energy flux (top) and the spectrum (bottom), compared with the analytical predictions (36) and (37) respectively. In each picture the results of the simulations with and without phonon dissipation are shown (the phonon dissipation can be seen as an abrupt cutoff of the flux and the spectrum). The spectrum is very close to the prediction of the reduced model (34) in the inertial range. In Figure 4 we show the effect of switching on the friction term for the complete DAM (again, results for the reduced DAM are very similar and are not shown). We see that the presence of the friction reduces the energy flux over a large region of frequencies. Although the flux is reduced, the spectrum shows only a slight deviation from the predicted slope (37). Finally we consider another DAM model, (35), which contains only the direct energy cascade and the thermodynamic bottleneck (or warm cascade). We force the system as usual; however, the simulation is composed of two phases: the first reaches a direct-cascade steady state; we then lower the viscosity (i.e. the dissipation at small scales) and turn on the reflecting energy-flux boundary condition at the smallest scale, after which the system evolves to a secondary steady state (38). In Figure 5 we see a clear transition from the KZ solution towards the thermalized spectrum. The bottom panel is a zoomed-in section of the crossover using a compensated spectrum, so that one can clearly make the distinction between the two power laws.
We see a good agreement with the predicted power-law behaviour of the KZ solution and of the thermalized solution. Figure 4: Complete DAM with the friction term switched on. The energy flux is reduced at high frequencies, but the spectrum slope remains as in the non-dissipative case over a long frequency range. Figure 5: Bottleneck effect for the second-order model [N06]. The spectrum is compared with the two different power-law predictions in the two ranges, and the local slope analysis in the inset points out the transition from the cascade behaviour to the thermalized one. The lower picture shows a zoom of the region where the bottleneck effect appears, in a compensated spectrum. VII Conclusions In summary, we have introduced and studied various reduced models for Kelvin wave turbulence. Firstly, we introduced a truncated-LIA model for Kelvin wave turbulence, which we have shown to exhibit the same scalings and dynamical features present in the conventional BSE. We have used this model for numerical simulations of the direct and the inverse cascades and found spectra which are in very good agreement with the predictions of WT theory. Secondly, we discussed differential approximation models, and introduced three such models to describe various settings of Kelvin wave turbulence, such as the direct energy cascade generated by vortex reconnections and dissipated via phonon radiation or/and mutual friction with the normal liquid, as well as the bottleneck effect when the energy flux is reflected from the smallest scale. We performed numerical simulations of these cases, which showed good agreement with the predicted analytical solutions. VIII Appendix A: Interaction Coefficients in the Biot-Savart Model In this Appendix we review and extend the work of Kozik and Svistunov (KS) on the Kelvin wave cascade [KS04]. They considered the full Biot-Savart Hamiltonian (2) and simplified the denominator by Taylor expansion. The criterion for Kelvin-wave turbulence is that the wave amplitude is small compared to the wavelength. KS find that the Biot-Savart Hamiltonian (2), expanded in powers of the small amplitude, is represented as a series of interaction terms of increasing order. One would like to deal with the wave-interaction Hamiltonian (VIII) in Fourier space by introducing the wave amplitude variable. Using the Fourier representation for the variables in equations (VIII), one introduces more integration variables, the wavenumbers. Moreover, invoking a cutoff because of the singularity present in the Biot-Savart Hamiltonian (2) at vanishing separation, KS derived the coefficients of the quartic and sextic terms as cosines in Fourier space [KS04]. Once the equations (VIII) are written in terms of wave amplitudes, one can introduce centre-of-mass and relative coordinates along the filament and decompose the position variables accordingly. The cosine functions arise from collecting the exponentials in the relative coordinate. Subsequently, the remaining exponentials in the centre-of-mass coordinate can be integrated out, yielding the corresponding delta function for the conservation of wavenumbers. The explicit formulae for the four-wave and six-wave interaction coefficients derived by Kozik and Svistunov can be written in terms of integrals of cosines of the relative coordinate.
We integrate these Fourier representations, namely equations (44), (45), (46), (47), (48) and (49), using integration by parts, and apply a standard cosine integral identity [GR80]. Neglecting higher-order terms, we calculate the frequency and the interaction coefficients of equations (VIII).
What Kind of Math Is Around the Act? Is It Applied To All Scientists? What sort of math is around the Act? Is it applicable to all scientists? How does it relate to the neuroscience of programming? There are mathematical rules that apply to both disciplines. There are many different sorts of mathematics. There are linear algebra and group theory. There are groups, which are the subject of group theory, and there are tensors, which are part of linear algebra. Quantum field theory and quantum information theory are theories that are not entirely within the realm of mathematics, but they can also be treated mathematically. Quantum field theory shows the relationship between things like electric fields and magnetic fields. Quantum information theory, as its name suggests, is more of a philosophical question about what goes on when you allow different parts of a physical system to communicate with each other. This can give rise to new phenomena. Quantum computation is the computation of entire quantum systems. This is beyond anything we can fully grasp. It gives rise to new ideas and new technologies. Quantum field theory is well known for its hard problems. Even after a specific situation has been set up, finding the answer is genuinely tricky. The problem of quantum computation is essentially like solving the Schrödinger equation. It might even be more difficult than the Schrödinger equation, since it depends on something like entangled qubits. Quantum computation has become a bit of a joke, but it truly is a problem that encompasses a whole universe. That is why quantum computers are so important. If we had quantum computers, then we could supposedly compute things faster than the speed of light, which is ridiculous in itself. Quantum physics is a field where the laws of quantum mechanics are especially interesting. The laws of quantum mechanics say that particles move through the world and that these particles can behave as particles and can also behave as waves. So quantum computation is the ability to do quantum computations in the real world. These computations can be employed to help develop new technologies. Quantum computing could even be used to create new weapons. For example, if we wanted to use quantum computation to help create new weapons, we might be able to combine quantum computation with GPS navigation systems and with infrared vision. We might be able to identify where the enemy is. Possibly we could calculate exactly where the enemy is based on their body heat. However, as far as what sort of math is around the Act is concerned, this is something that has to do with how quantum computers interact with one another. The world of quantum computers is still very much a mystery. It is one of the most fascinating fields in science.
What is the basic postulate on which QM rests? Is it that the position of a particle can only be described in the probabilistic sense given by the state function $\psi(r)$? We can even go ahead and abandon the particle formalism as well. So what is QM all about? A probabilistic description of the physical world, and nothing more?
Related: physics.stackexchange.com/questions/6738/… – Marek Aug 16 '11 at 17:41
There isn't one basic postulate of QM. There are several postulates that fit together into a surprising and beautiful theory. For some reason, people are upvoting one of these postulates and downvoting another, and haven't even mentioned others (e.g., the superposition principle). – Peter Shor Aug 17 '11 at 11:13
@Peter: that's only half-true. There are many other theories that share lots of properties of QM (e.g. any theory given by linear PDE will have superpositions). Similarly, classical logic and quantum logic are basically the same except for one axiom. Therefore if one is really after one thing that makes QM special, one is inevitably led to non-commutativity. After all, if there was none of it but everything else was kept untouched (formally, $\hbar \to 0$), you'd get back your plain old boring Poisson algebra on the phase space. – Marek Aug 17 '11 at 21:10
2 Answers
Accepted answer: Existence of non-compatible observables: measuring one of them (say, coordinate) leads to an unavoidable uncertainty in the result of a subsequent measurement of the other (say, momentum). This is the essence of the Heisenberg uncertainty principle in the kinematics of your system. There is a detailed discussion along these lines in the beginning of the Quantum Mechanics volume (volume III) of the Course of Theoretical Physics by Landau and Lifshitz. Any measurable (physical) system, be it a particle, an atom or anything else, is quantum only if you can identify a manifestation of the Heisenberg uncertainty principle (non-commutativity of observables).
"Existence of non-compatible observables", great to know this term. +1 – Rajesh D Aug 16 '11 at 17:29
This non-compatibility of physical observables is the empirical reason why non-commutative objects (operators or matrices) need to be assigned to them in the quantum formalism... – Slaviks Aug 16 '11 at 17:34
+1, this is precisely what sets QM apart from other theories. In mathematics the terms quantization and deformation (in the sense of deforming commutative algebras to non-commutative ones) are basically equivalent. – Marek Aug 16 '11 at 17:41
"deformation of algebras", great to know the term. +1 – Slaviks Aug 16 '11 at 17:46
Second answer: To me, the most basic postulate is that energy comes in discrete packages of $h \nu$. Based on this assumption, you get much of the rest of basic quantum mechanics. In fact, the Schrödinger equation is related to the Hamilton-Jacobi formulation of classical mechanics with the added assumption of quantized energy, and the Heisenberg picture follows directly from the Poisson bracket formulation of classical mechanics, assuming quantized energy. Wave-particle duality is also a very important assumption; this gives us a way to interpret exactly what these equations express, which is the probability of locating a particle per some (small) volume of position- or momentum-space.
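A compact, standard way to connect the two answers: the non-compatibility emphasized in the accepted answer is encoded in the canonical commutation relation,

$$ [\hat x, \hat p] = i\hbar \quad\Longrightarrow\quad \Delta x\,\Delta p \ \ge\ \frac{\hbar}{2}, $$

while the discreteness emphasized in the second answer enters through $E = h\nu = \hbar\omega$ for each mode.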
Discreteness is not really that important (although it had been a century ago when QM was born; it's also where the term quantum comes from). There are many systems for which all important observables have continuous spectrum. – Marek Aug 16 '11 at 17:44
That's true! I guess it's not the best way to think about deriving all of QM. (Non-commuting observables probably are.) In my mind, I like to put everything in a historical context. – specterhunter Aug 16 '11 at 18:02
The Einstein-Podolsky-Rosen Argument in Quantum Theory First published Mon May 10, 2004; substantive revision Tue Nov 5, 2013 1. Can Quantum Mechanical Description of Physical Reality Be Considered Complete? 1.1 Setting and prehistory By 1935 the conceptual understanding of the quantum theory was dominated by Niels Bohr's ideas concerning complementarity. Those ideas centered on observation and measurement in the quantum domain. According to Bohr's views at that time, observing a quantum object involves an uncontrollable physical interaction with a classical measuring device that affects both systems. The picture here is of a tiny object banging into a big apparatus. The effect this produces on the measuring instrument is what issues in the measurement “result” which, because it is uncontrollable, can only be predicted statistically. The effect experienced by the quantum object restricts those quantities that can be co-measured with precision. According to complementarity when we observe the position of an object, we affect its momentum uncontrollably. Thus we cannot determine precisely both position and momentum. A similar situation arises for the simultaneous determination of energy and time. Thus complementarity involves a doctrine of uncontrollable physical interaction that, according to Bohr, underwrites the Heisenberg uncertainty relations and is also the source of the statistical character of the quantum theory. (See the entries on the Copenhagen Interpretation and the Uncertainty Principle.) Initially Einstein was enthusiastic about the quantum theory. By 1935, however, his enthusiasm for the theory had given way to a sense of disappointment. His reservations were twofold. Firstly, he felt the theory had abdicated the historical task of natural science to provide knowledge of significant aspects of nature that are independent of observers or their observations. Instead the fundamental understanding of the wave function (alternatively, the “state function”, “state vector”, or “psi-function”) in quantum theory was that it only treated the outcomes of measurements (via probabilities given by the Born Rule). The theory was simply silent about what, if anything, was likely to be true in the absence of observation. That there could be laws, even probabilistic laws, for finding things if one looks, but no laws of any sort for how things are independently of whether one looks, marked quantum theory as irrealist. Secondly, the quantum theory was essentially statistical. The probabilities built into the state function were fundamental and, unlike the situation in classical statistical mechanics, they were not understood as arising from ignorance of fine details. In this sense the theory was indeterministic. Thus Einstein began to probe how strongly the quantum theory was tied to irrealism and indeterminism. He wondered whether it was possible, at least in principle, to ascribe certain properties to a quantum system in the absence of measurement. Can we suppose, for instance, that the decay of an atom occurs at a definite moment in time even though such a definite decay time is not implied by the quantum state function? That is, Einstein began to ask whether the quantum mechanical description of reality was complete. Since Bohr's complementarity provided strong support both for irrealism and indeterminism and since it played such a dominant role in shaping the prevailing attitude toward quantum theory, complementarity became Einstein's first target. 
In particular, Einstein had reservations about the uncontrollable physical effects invoked by Bohr in the context of measurement interactions, and about their role in fixing the interpretation of the wave function. EPR was intended to support those reservations in a particularly dramatic way. Max Jammer (1974, pp. 166–181) describes the EPR paper as originating with Einstein's reflections on a thought experiment he proposed during discussions at the 1930 Solvay conference. The experiment imagines a box that contains a clock set to time precisely the release (in the box) of a photon with determinate energy. If this were feasible, it would appear to challenge the unrestricted validity of the Heisenberg uncertainty relation that sets a lower bound on the simultaneous uncertainty of energy and time. (See the entry on the Uncertainty Principle and also Bohr 1949, who describes the discussions at the 1930 conference.) The uncertainty relations, understood not just as a prohibition on what is co-measurable, but on what is simultaneously real, were a central component in the irrealist interpretation of the wave function. Jammer (1974, p. 173) describes how Einstein's thinking about this experiment, and Bohr's objections to it, evolved into a different photon-in-a-box experiment, one that allows an observer to determine either the momentum or the position of the photon indirectly, while remaining outside, sitting on the box. Jammer associates this with the distant determination of either momentum or position that, we shall see, is at the heart of the EPR paper. Carsten Held (1998) cites a related correspondence with Paul Ehrenfest from 1932 in which Einstein described an arrangement for the indirect measurement of a particle of mass m using correlations with a photon established through Compton scattering. Einstein's reflections here foreshadow the argument of EPR, along with noting some of its difficulties. Thus without an experiment on m it is possible to predict freely, at will, either the momentum or the position of m with, in principle, arbitrary precision. This is the reason why I feel compelled to ascribe objective reality to both. I grant, however, that it is not logically necessary. (Held 1998, p. 90) Whatever their precursors, the ideas that found their way into EPR were worked out in a series of meetings with Einstein and his two assistants, Podolsky and Rosen. The actual text, however, was written by Podolsky and, apparently, Einstein did not see the final draft (certainly he did not correct it) before Podolsky submitted the paper to Physical Review in March of 1935. It was sent for publication the day after it arrived. Upon seeing the published version, Einstein complained that his central concerns were obscured by Podolsky's exposition. For reasons of language this [paper] was written by Podolsky after several discussions. Still, it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism [Gelehrsamkeit]. (Letter from Einstein to Erwin Schrödinger, June 19, 1935. In Fine 1996, p. 35.) Thus in discussing the argument of EPR we should consider both the argument in Podolsky's text and lines of argument that Einstein himself offers. We should also consider an argument presented in Bohr's reply to EPR, which is possibly the best known version, although it differs from the others in important ways. 
1.2 The argument in the text The EPR text is concerned, in the first instance, with the logical connections between two assertions. One asserts that quantum mechanics is incomplete. The other asserts that incompatible quantities (those whose operators do not commute, like a coordinate of position and linear momentum in that direction) cannot have simultaneous “reality” (i.e., simultaneously real values). The authors assert as a first premise, later to be justified, that one or another of these must hold. It follows that if quantum mechanics were complete (so that the first option failed) then the second option would hold; i.e., incompatible quantities cannot have real values simultaneously. However they also take as a second premise (also to be justified) that if quantum mechanics were complete, then incompatible quantities (in particular position and momentum) could indeed have simultaneous, real values. They conclude that quantum mechanics is incomplete. The conclusion certainly follows since otherwise (if the theory were complete) one would have a contradiction. Nevertheless the argument is highly abstract and formulaic and even at this point in its development one can readily appreciate Einstein's disappointment with it. EPR now proceed to establish the two premises, beginning with a discussion of the idea of a complete theory. Here they offer only a necessary condition; namely, that for a complete theory “every element of the physical reality must have a counterpart in the physical theory.” Although they do not specify just what an “element of physical reality” is they use that expression when referring to the values of physical quantities (positions, momenta, and so on) provided the following sufficient condition holds (p. 777): If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity. This sufficient condition for an "element of reality" is often referred to as the EPR Criterion of Reality. With these terms in place it is easy to show that if, say, the values of position and momentum for a quantum system were real simultaneously (i.e., were elements of reality) then the description provided by the wave function of the system would be incomplete, since no wave function contains counterparts for both elements. (Technically, no state function, even an improper one like a delta function, is a simultaneous eigenstate for both position and momentum.) Thus they establish the first premise: either quantum theory is incomplete or there can be no simultaneously real values for incompatible quantities. They now need to show that if quantum mechanics were complete, then incompatible quantities could have simultaneous real values, which is the second premise. This, however, is not easily established. Indeed what EPR proceed to do is odd. Instead of assuming completeness and on that basis deriving that incompatible quantities can have real values simultaneously, they simply set out to derive the latter assertion without any completeness assumption at all. This “derivation” turns out to be the heart of the paper and its most controversial part. It attempts to show that in certain circumstances a quantum system can have simultaneous values for incompatible quantities (once again, for position and momentum), where these values also pass the Reality Criterion's test for being “elements of reality”.
They proceed by sketching a thought experiment. In the experiment two quantum systems interact in such a way that two conservation laws hold following their interaction. One is the conservation of relative position. If we imagine the systems located along the x-axis, then if one of the systems (we can call it Albert's) were found at position q along the axis at a certain time, the other system (call it Niels') would be found then a fixed distance d away, say at q′ = q − d, where we may suppose that the distance d between q and q′ is substantial. The other conservation law is that the total linear momentum (along that same axis) is always zero. So when the momentum of Albert's system along the x-axis is determined to be p, the momentum of Niels' system would be found to be −p. The paper constructs an explicit wave function for the combined (Albert+Niels) system that satisfies both conservation principles. Although commentators later raised questions about the legitimacy of this wave function, it does appear to satisfy the two conservation principles at least for a moment (Jammer 1974, pp. 225–38; see also Halvorson 2000). In any case, one can model the same conceptual situation in other cases that are clearly well defined quantum mechanically (see Section 3.1). At this point of the argument (p. 779) EPR make two critical assumptions, although they do not call special attention to them. (For the significance of these assumptions in Einstein's thinking see Howard 1985 and also section 5 of the entry on Einstein.) The first assumption (separability) is that at the time when measurements will be performed on Albert's system there is some reality that pertains to Niels' system alone. In effect, they assume that Niels' system maintains its separate identity even though it is correlated with Albert's. They need this assumption to make sense of another. The second assumption is that of locality. This supposes that “no real change can take place” in Niels' system as a consequence of a measurement made on Albert's system. They gloss this by saying “at the time of measurement the two systems no longer interact.” Notice that this is not a general principle of no-disturbance, but rather a principle only governing disturbance or change in what is real with respect to Niels' system. On the basis of these two assumptions they conclude that Niels' system can have real values (“elements of reality”) for both position and momentum simultaneously. There is no detailed argument for this in the text. Instead they use these two assumptions to show how one could be led to assign both a position eigenstate and a momentum eigenstate to Niels' system, from which the simultaneous attribution of elements of reality is supposed to follow. Since this is the central and most controversial part of the paper, it pays to go slowly here in trying to reconstruct an argument on their behalf. One attempt might go as follows. Separability holds that some reality pertains to Niels' system. Suppose that we measure, say, the position of Albert's system. The reduction of the state function for the combined systems then yields a position eigenstate for Niels' system. That eigenstate applies to the reality there and that eigenstate enables us to predict a determinate position for Niels' system with probability one. Since that prediction only depends on a measurement made on Albert's system, locality implies that the prediction of the position of Niels' system does not involve any change in the reality of Niels' system.
If we interpret this as meaning that the prediction does not disturb Niels' system, all the pieces are in place to apply the Criterion of Reality. It certifies that the predicted position value, corresponding to the position eigenstate, is an element of the reality that pertains to Niels' system. One could argue similarly with respect to momentum. This line of argument, however, is deceptive and contains a serious confusion. It occurs right after we apply locality to conclude that the measurement made on Albert's system does not affect the reality pertaining to Niels' system. For, recall, we have not yet determined whether the position inferred for Niels' system is indeed an “element” of that reality. Hence it is still possible that the measurement of Albert's system, while not disturbing the reality pertaining to Niels' system, does disturb its position. To take the extreme case: suppose, for example, that the measurement of Albert's system somehow brings the position of Niels' system into being, or suddenly makes it well defined, and also allows us to predict it with certainty. It would then follow from locality that the position of Niels' system is not an element of the reality of that system, since it can be affected at a distance. But, reasoning exactly as above, the Criterion would still hold that the position of Niels' system is an element of the reality there, since it can be predicted with certainty without disturbing the reality of the system. What has gone wrong? It is that the Criterion provides a sufficient condition for elements of reality and locality provides a necessary condition. But, as above, there is no guarantee that these conditions will always match consistently. To ensure consistency we need to be sure that what the Criterion certifies as real is not something that can be influenced at a distance. One way to do this, which seems to be implicit in the EPR paper, would be to interpret locality in the EPR situation in such a way that measurements made on one system are understood not to disturb those quantities on the distant, unmeasured system whose values can be inferred from the reduced state of that system. Given the two conservation laws satisfied in the EPR situation, this extended way of understanding locality allows the Criterion to certify that position, as well as momentum, when inferred for Niels' system, are real there. As EPR point out, however, position and momentum cannot be measured simultaneously. So even if each can be shown to be real in distinct contexts of measurement, are both real at the same time? EPR answer “yes”, but they do not provide a clear rationale for that conclusion. Here's one suggestion. (Dickson 2004 analyzes some of the modal principles involved and suggests another route, which he criticizes. Hooker 1972 is a comprehensive discussion that identifies several generically different ways to make the case.) Suppose the logical force of locality is to decontextualize the reality of Niels' system from goings on at Albert's. Clearly when we infer from a certain measurement made on Albert's system that Niels' system has an element of reality, locality kicks in and guarantees that Niels' system would have had that same element of reality even in the absence of the measurement on Albert's system. So suppose, then, the circumstance where we do not make that measurement. Could that absence of a measurement on Albert's system affect what is real on Niels' system?
The suggestion is that we allow locality to kick in here as well, with the answer “no”. Put differently, we suggest that locality entitles us to conclude that Niels' system has a real position provided the conditional assertion “If a position measurement is performed on Albert's system, then Niels' system has a real position” holds. Similarly, Niels' system has a real momentum provided the conditional “If a momentum measurement is performed on Albert's system, then Niels' system has a real momentum” holds. (This is exactly how Einstein 1948 argues. See Born 1971, p. 172.) Of course these conclusions presuppose that there are no interfering factors operating locally on Niels' system, such as a competing measurement. As we have seen, given separability, locality, and the Criterion of Reality, both conditionals hold. Hence, in the absence of interference, locality implies that Niels' system has real values of both position and momentum simultaneously, even though no simultaneous measurement of position and momentum is allowed. (Reciprocally, so would Albert's system, provided we made no interfering measurements there.) In the penultimate paragraph of EPR (p. 780) they address the problem of getting real values for incompatible quantities simultaneously:

Indeed one would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted. … This makes the reality [on the second system] depend upon the process of measurement carried out on the first system, which does not in any way disturb the second system. No reasonable definition of reality could be expected to permit this.

The unreasonableness to which EPR allude in making “the reality [on the second system] depend upon the process of measurement carried out on the first system, which does not in any way disturb the second system” is just the unreasonableness that would be involved in renouncing locality understood as above. For it is locality that enables one to overcome the incompatibility of position and momentum measurements of Albert's system by requiring their joint consequences for Niels' system to be incorporated in a single, stable reality there. If we recall Einstein's acknowledgment to Ehrenfest that getting simultaneous position and momentum was “not logically necessary”, we can see how EPR respond by making it necessary once locality is assumed. Here, then, are the key features of EPR.

• EPR is about the interpretation of state vectors (“wave functions”) and employs the standard state vector reduction formalism (von Neumann's “projection postulate”).
• The Criterion of Reality is only used to check, after state vector reduction assigns an eigenstate to the unmeasured system, that the associated eigenvalue constitutes an element of reality.
• (Separability) EPR make the tacit assumption that, when they are spatially separated, some “reality” pertains to both components of the combined system.
• (Locality) EPR assume a principle of locality according to which, if two systems are far enough apart, the measurement (or absence of measurement) of one system does not directly affect the reality that pertains to the unmeasured system. (This non-disturbance is understood to include those quantities on the distant, unmeasured system whose values can be inferred from the reduced state of that system.)
• Locality is critical in guaranteeing that simultaneous position and momentum values can be assigned to the unmeasured system even though position and momentum cannot be measured simultaneously on the other system.
• Assuming separability and locality, the demonstration of simultaneous position and momentum values depends on the state vector descriptions in conjunction with the Criterion of Reality.
• In summary, the argument of EPR shows that if interacting systems satisfy separability and locality, then the description of systems provided by state vectors is not complete. This conclusion rests on a common interpretive principle, state vector reduction, and on the Criterion of Reality.

The EPR experiment with interacting systems accomplishes a form of indirect measurement. The direct measurement of Albert's system yields information about Niels' system; it tells us what we would find if we were to measure there directly. But it does this at-a-distance, without any further physical interaction taking place between the two systems. Thus the thought experiment at the heart of EPR undercuts the picture of measurement as necessarily involving a tiny object banging into a large measuring instrument. If we look back at Einstein's reservations about complementarity, we can appreciate that by focusing on a non-disturbing kind of measurement the EPR argument targets Bohr's program for explaining central conceptual features of the quantum theory. For that program relied on uncontrollable interaction with a measuring device as a necessary feature of any measurement in the quantum domain. Nevertheless the cumbersome machinery employed in the EPR paper makes it difficult to see what is central. It distracts from rather than focuses on the issues. That was Einstein's complaint about Podolsky's text in his June 19, 1935 letter to Schrödinger. Schrödinger responded on July 13 reporting reactions to EPR that vindicate Einstein's concerns. With reference to EPR he wrote:

I am now having fun and taking your note to its source to provoke the most diverse, clever people: London, Teller, Born, Pauli, Szilard, Weyl. The best response so far is from Pauli who at least admits that the use of the word “state” [“Zustand”] for the psi-function is quite disreputable. What I have so far seen by way of published reactions is less witty. … It is as if one person said, “It is bitter cold in Chicago”; and another answered, “That is a fallacy, it is very hot in Florida.” (Fine 1996, p. 74)

1.3 Einstein's versions of the argument

If the argument developed in EPR has its roots in the 1930 Solvay conference, Einstein's own approach to issues at the heart of EPR has a history that goes back to the 1927 Solvay conference. (Bacciagaluppi and Valentini 2009, pp. 198–202, would even trace it back to 1909 and the localization of light quanta.) At that 1927 conference Einstein made a short presentation during the general discussion session, where he focused on problems of interpretation associated with the collapse of the wave function. He imagines a situation where electrons pass through a small hole and are dispersed uniformly in the direction of a screen of photographic film shaped into a large hemisphere that surrounds the hole. On the supposition that quantum theory offers a complete account of individual processes then, in the case of localization, why does the whole wave front collapse to just one single flash point?
It is as though at the moment of collapse an instantaneous signal were sent out from the point of collapse to all other possible collapse positions telling them not to flash. Thus Einstein maintains (Bacciagaluppi and Valentini 2009, p. 488),

the interpretation, according to which |ψ|² expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.

One could see this as a tension between local action and the description afforded by the wave function, since the wave function alone does not specify a unique position on the screen for detecting the particle. Einstein continues,

In my opinion, one can remove this objection only in the following way, that one does not describe the process solely by the Schrödinger wave, but that at the same time one localizes the particle during propagation.

Einstein points to Louis de Broglie's pilot wave investigations as a possible direction to pursue if one is looking for an account of individual processes that avoids a “contradiction with the postulate of relativity.” He also raises the possibility not to regard the quantum theory as describing individuals and their processes at all and, instead, to regard it as describing only ensembles of individuals. Indeed Einstein suggests difficulties for any version, like de Broglie's and like quantum theory itself, that requires representations in multi-dimensional configuration space, difficulties that might move one further toward regarding quantum theory as not aspiring to a description of individual systems but as more amenable to an ensemble (or collective) point of view. Perhaps the most important feature of Einstein's reflections at Solvay 1927 is his insight that the clash between completeness and locality already arises in measurements of a single variable (there, position) and does not require measurements for an incompatible pair, as in EPR. Following the publication of EPR Einstein set about almost immediately to provide clear and focused versions of the argument. He began that process within a few weeks of EPR, in the June 19 letter to Schrödinger, and continued it in an article published the following year (Einstein 1936). He returned to this particular form of an incompleteness argument in two later publications (Einstein 1948 and Schilpp 1949). Although these expositions differ in details they all employ composite systems as a way of implementing indirect measurements-at-a-distance. None of Einstein's accounts contains the Criterion of Reality or the tortured EPR argument over when values of a quantity can be regarded as “elements of reality”. The Criterion and these “elements” simply drop out. Nor does Einstein engage in calculations, like those of Podolsky, to fix the total wave function for the composite system explicitly. Unlike EPR, none of Einstein's arguments makes use of simultaneous values for complementary quantities like position and momentum. He does not challenge the uncertainty relations. Indeed with respect to assigning eigenstates for a complementary pair he tells Schrödinger “ist mir wurst”—literally, it's sausage to me; i.e., he couldn't care less. (Fine 1996, p. 38). These writings probe an incompatibility between affirming locality and separability, on the one hand, and completeness in the description of individual systems by means of state functions, on the other.
His argument is that we can have at most one of these but never both. He frequently refers to this dilemma as a “paradox”. In the letter to Schrödinger of June 19, Einstein points to a simple argument for the dilemma which, like the argument from the 1927 Solvay Conference, involves only the measurement of a single variable. Consider an interaction between the Albert and Niels systems that conserves their relative positions. (We need not worry about momentum, or any other quantity.) Consider the evolved wave function for the total (Albert+Niels) system when the two systems are far apart. Now assume a principle of locality-separability (Einstein calls it a Trennungsprinzip—separation principle): Whether a determinate physical situation holds for Niels' system does not depend on what measurements (if any) are made locally on Albert's system. If we measure the position of Albert's system, the conservation of relative position implies that we can immediately infer the position of Niels'; i.e., we can infer that Niels' system has a determinate position. By locality-separability it follows that Niels' system must already have had a determinate position just before Albert began that measurement. At that time, however, Niels' system alone does not have a state function. There is only a state function for the combined system and that total state function does not single out the position we would find for Niels' system (i.e., it is not a product one of whose factors is an eigenstate for the position of Niels' system). Thus the description of Niels' system afforded by the quantum state function is incomplete. A complete description would affirm it (a definite yes) if a determinate physical situation were true of Niels' system. (Notice that this argument does not even depend on the reduction of the total state function for the combined system.) In this formulation of the argument it is clear that locality-separability conflicts with the eigenvalue-eigenstate link, which holds that a quantity of a system has an eigenvalue if and only if the state of the system is an eigenstate of that quantity with that eigenvalue (or a mixture of such eigenstates). The “only if” part of the link would need to be weakened in order to interpret quantum state functions as complete descriptions (see entry on Modal Interpretations). Although this simple argument concentrates on what Einstein saw as the essentials, stripping away most technical details and distractions, he frequently used another argument involving the measurement of more than one quantity. (It is actually buried in the EPR paper, p. 779, and a version also occurs in the June 19, 1935 letter to Schrödinger. Harrigan and Spekkens (2010) suggest reasons for preferring a many-measurements argument.) This second argument focuses clearly on the interpretation of quantum state functions in terms of “real states” of a system, and not on any issues about simultaneous values (real or not) for complementary quantities. It goes like this. Suppose, as in EPR, that the interaction between the two systems preserves both relative position and zero total momentum and that the systems are far apart. As before, we can measure either the position or momentum of Albert's system and, in either case, we can infer a position or momentum for Niels' system.
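The formal situation behind these inferences can be made concrete with a sketch (in the notation of this section; this is essentially the kind of wave function Podolsky constructs on p. 779 of EPR, with normalization ignored, since the states involved are improper). Conservation of relative position, with Niels' system at q − d whenever Albert's is at q, together with zero total momentum, is encoded in the entangled state

\[
\Psi(x_A, x_N) \;\propto\; \delta(x_A - x_N - d) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} e^{(i/\hbar)\,p\,(x_A - x_N - d)}\,dp .
\]

Read against the position basis, a measurement of Albert's position with result q reduces Niels' system to the position eigenstate δ(x_N − (q − d)). Read against the momentum basis, the integrand pairs the momentum eigenfunction e^{(i/ħ)px_A} of Albert's system with e^{−(i/ħ)p(x_N + d)}, an eigenfunction of Niels' momentum with eigenvalue −p; so a measurement of Albert's momentum with result p reduces Niels' system to a momentum eigenstate with eigenvalue −p.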
It follows from the reduction of the total state function that, depending on whether we measure the position or momentum of Albert's system, Niels' system will be left (respectively) either in a position eigenstate or in a momentum eigenstate. Suppose too that separability holds, so that Niels' system has some real physical state of affairs. If locality holds as well, then the measurement of Albert's system does not disturb the assumed “reality” for Niels' system. However, that reality appears to be represented by quite different state functions, depending on which measurement of Albert's system one chooses to carry out. If we understand a “complete description” to rule out that one and the same physical state can be described by state functions with distinct physical implications, then we can conclude that the quantum mechanical description is incomplete. Here again we confront a dilemma between separability-locality and completeness. Many years later Einstein put it this way (Schilpp 1949, p. 682):

[T]he paradox forces us to relinquish one of the following two assertions:
(1) the description by means of the psi-function is complete
(2) the real states of spatially separate objects are independent of each other

It appears that the central point of EPR was to argue that in interpreting the quantum state functions we are faced with these alternatives. As we have seen, in framing his own EPR-like arguments for the incompleteness of quantum theory, Einstein makes use of separability and locality, which are also tacitly assumed in the EPR paper. Using the language of “independent existence”, he presents these ideas clearly in an article that he sent to Max Born (Einstein 1948):

It is … characteristic of … physical objects that they are thought of as arranged in a space-time continuum. An essential aspect of this arrangement … is that they lay claim, at a certain time, to an existence independent of one another, provided these objects “are situated in different parts of space”. … The following idea characterizes the relative independence of objects (A and B) far apart in space: external influence on A has no direct influence on B. (Born, 1971, pp. 170–71)

In the course of his correspondence with Schrödinger, however, Einstein realized that assumptions about separability and locality were not necessary in order to get the incompleteness conclusion that he was after; i.e., to show that state functions may not provide a complete description of the real state of affairs with respect to a system. Separability supposes that there is a real state of affairs and locality supposes that one cannot influence it immediately by acting at a distance. What Einstein realized was that separability was already part of the ordinary conception of a macroscopic object. This suggested to him that if one looks at the local interaction of a macro-system with a micro-system one could avoid having to assume either separability or locality in order to conclude that the quantum description of the whole was incomplete with respect to its macroscopic part. This line of thought evolves and dominates Einstein's last published reflections on incompleteness, where he focuses on problems with the stability of macro-descriptions rather than problems with composite systems and locality:

the objective describability of individual macro-systems (description of the “real-state”) can not be renounced without the physical picture of the world, so to speak, decomposing into a fog. (Einstein 1953b, p. 40. See also Einstein 1953a.)
In the August 8, 1935 letter to Schrödinger Einstein says that he will illustrate the problem by means of a “crude macroscopic example”:

The system is a substance in chemically unstable equilibrium, perhaps a charge of gunpowder that, by means of intrinsic forces, can spontaneously combust, and where the average life span of the whole setup is a year. In principle this can quite easily be represented quantum-mechanically. In the beginning the psi-function characterizes a reasonably well-defined macroscopic state. But, according to your equation [i.e., the Schrödinger equation], after the course of a year this is no longer the case. Rather, the psi-function then describes a sort of blend of not-yet and already-exploded systems. Through no art of interpretation can this psi-function be turned into an adequate description of a real state of affairs; in reality there is no intermediary between exploded and not-exploded. (Fine 1996, p. 78)

The point is that after a year either the gunpowder will have exploded, or not. (This is the “real state” which in the EPR situation requires one to assume separability.) The state function, however, will have evolved into a complex superposition over these two alternatives. Provided we maintain the eigenvalue-eigenstate link, the quantum description by means of that state function will yield neither conclusion, and hence the quantum description is incomplete. For a contemporary response to this line of argument, one might look to the program of decoherence. (See Decoherence.) That program points to interactions with the environment which quickly reduce the likelihood of any interference between the “exploded” and the “not-exploded” branches of the evolved psi-function. Then, breaking the eigenvalue-eigenstate link, one might interpret the psi-function so that its (almost) non-interfering branches yield a perspective according to which the gunpowder is indeed either exploded or not. Such decoherence-based interpretations of the psi-function are certainly “artful”, and their adequacy is still under debate (see Schlosshauer 2007, especially Chapter 8). The reader may recognize the similarity between Einstein's exploding gunpowder example and Schrödinger's cat (Schrödinger 1935a, p. 812). In the case of the cat an unstable atom is hooked up to a lethal device that, after an hour, is as likely to poison (and kill) the cat as not, depending on whether the atom decays. After an hour the cat is either alive or dead, but the quantum state of the whole atom-poison-cat system at this time is a superposition involving the two possibilities and, just as in the case of the gunpowder, is not a complete description of the situation (life or death) of the cat. The similarity between the gunpowder and the cat is hardly accidental since Schrödinger first produced the cat example in his reply of September 19, 1935 to Einstein's August 8 gunpowder letter. There Schrödinger says that he has himself constructed “an example very similar to your exploding powder keg”, and proceeds to outline the cat (Fine 1996, pp. 82–83). Although the “cat paradox” is usually cited in connection with the problem of quantum measurement (Measurement in Quantum Theory) and treated as a paradox separate from EPR, its origin is here as an argument for incompleteness that avoids the twin assumptions of separability and locality.
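In state-function language, the structure of the gunpowder example (and of the cat) can be sketched schematically (the notation here is illustrative, not Einstein's or Schrödinger's):

\[
\psi(t) \;=\; \alpha(t)\,\psi_{\text{not exploded}} \;+\; \beta(t)\,\psi_{\text{exploded}},
\qquad |\alpha(t)|^2 + |\beta(t)|^2 = 1 .
\]

At the start |α|² ≈ 1, but after a year neither amplitude is negligible. Given the eigenvalue-eigenstate link, such a superposition licenses neither the description “exploded” nor “not exploded”, which is just Einstein's complaint that no art of interpretation turns this psi-function into a description of the real state of affairs.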
Schrödinger's development of “entanglement”, the term he introduced as a general description of the correlations that result when quantum systems interact, also began in this correspondence over EPR (Schrödinger 1935a, 1935b; see Quantum Entanglement and Information).

2. A popular form of the argument: Bohr's response

The literature surrounding EPR contains yet another version of the argument, a popular version that—unlike any of Einstein's—features the Criterion of Reality. Assume again an interaction between our two systems that preserves both relative position and zero total momentum and suppose that the systems are far apart. If we measure the position of Albert's system, we can infer that Niels' system has a corresponding position. We can also predict it with certainty, given the result of the position measurement of Albert's system. Hence, according to the Criterion of Reality, the position of Niels' system constitutes an element of reality. Similarly, if we measure the momentum of Albert's system, we can conclude that the momentum of Niels' system is an element of reality. The argument now concludes that since we can choose freely to measure either position or momentum, it “follows” that both must be elements of reality simultaneously. Of course no such conclusion follows from our freedom of choice. It is not sufficient to be able to choose at will which quantity to measure; for the conclusion to follow from the Criterion alone one would need to be able to measure both quantities at once. This is precisely the point that Einstein recognized in his 1932 letter to Ehrenfest and that EPR addresses by assuming locality and separability. What is striking about this version is that these principles, central to the original EPR argument and to the dilemma at the heart of Einstein's versions, are obscured here. Instead this version features the Criterion and those “elements of reality”. Perhaps the difficulties presented by Podolsky's text contribute to this reading. In any case, in the physics literature this version is commonly taken to represent EPR and usually attributed to Einstein. This reading certainly has a prominent source in terms of which one can understand its popularity among physicists; it is Niels Bohr himself. By the time of the EPR paper many of the early interpretive battles over the quantum theory had been settled, at least to the satisfaction of working physicists. Bohr had emerged as the “philosopher” of the new theory and the community of quantum theorists, busy with the development and extension of the theory, were content to follow Bohr's leadership when it came to explaining and defending its conceptual underpinnings (Beller 1999, Chapter 13). Thus in 1935 the burden fell to Bohr to explain what was wrong with the EPR “paradox”. The major article that he wrote in discharging this burden (Bohr 1935a) became the canon for how to respond to EPR. Unfortunately, Bohr's summary of EPR in that article, which is the version just above, also became the canon for what EPR contained by way of argument. Bohr's response to EPR begins, as do many of his treatments of the conceptual issues raised by the quantum theory, with a discussion of limitations on the simultaneous determination of position and momentum. As usual, these are drawn from an analysis of the possibilities of measurement if one uses an apparatus consisting of a diaphragm connected to a rigid frame.
Bohr emphasizes that the question is to what extent we can trace the interaction between the particle being measured and the measuring instrument. (See Beller 1999, Chapter 7 for a detailed analysis and discussion of the “two voices” contained in Bohr's account.) Following the summary of EPR, Bohr (1935a, p. 700) then focuses on the Criterion of Reality which, he says, “contains an ambiguity as regards the meaning of the expression ‘without in any way disturbing a system’.” Bohr agrees that the indirect measurement of Niels' system achieved when one makes a measurement of Albert's system does not involve any “mechanical disturbance” of Niels' system. (Thus Bohr takes for granted that one may raise the question of a disturbance between the two systems, and hence he takes separability, that there are distinct systems, for granted.) Still, Bohr claims that a measurement on Albert's system does involve “an influence on the very conditions which define the possible types of predictions regarding the future behavior of [Niels'] system.” What Bohr may have had in mind is that when, for example, we measure the position of Albert's system and get a result we can predict the position of Niels' system with certainty. However, measuring the position of Albert's system does not allow a similarly certain prediction for the momentum of Niels' system. The opposite would be true had we measured the momentum of Albert's system. Thus depending on which variable we measure on Albert's system, we will be entitled to different sorts of predictions about the results of further measurements on Niels' system. There are two important things to notice about this response. The first is this. In conceding that Einstein's indirect method for determining, say, the position of Niels' system does not mechanically disturb that system, Bohr departs from his original program of complementarity, which was to base the uncertainty relations and the statistical character of quantum theory on uncontrollable physical interactions, interactions that were supposed to arise inevitably between a measuring instrument and the system being measured. Instead Bohr now distinguishes between a genuine physical interaction (his “mechanical disturbance”) and some other sort of “influence” on the conditions for specifying (or “defining”) sorts of predictions for the future behavior of a system. In emphasizing that only the latter arise in the EPR situation, Bohr retreats from his earlier, physically grounded conception of complementarity. The second important thing to notice is how Bohr's response needs to be implemented in order to block the arguments of Einstein that pose a dilemma between principles of locality and completeness. In Einstein's arguments the locality principle makes explicit reference to the reality of the unmeasured system (no immediate influence on the reality there due to measurements made elsewhere). Hence Bohr's pointing to an influence on conditions for specifying predictions would not affect the argument at all unless one includes those conditions as part of the reality of Niels' system. That would be implausible on two counts. Firstly, it would make what is real about Niels' system encompass what is happening to Albert's system, which is someplace else. (Recall EPR's warning against just this move.) Secondly, there is an issue of intelligibility. 
Bohr maintains that the “conditions” (which define the possible types of predictions regarding the future behavior of Niels' system) “constitute an inherent element of the description of any phenomena to which the term ‘physical reality’ can be properly attached” (Bohr 1935a, p. 700). Thus Bohr makes the problematic suggestion that the very expression “Niels' system” refers to conditions for predicting the future behavior of Niels' system. The self-reference here of “Niels' system” generates a regress that stands in the way of determining the conditions in question. If it were possible to bypass the regress, then including such conditions as part of the “reality” of the unmeasured system would automatically preclude locality (while allowing for separability). Bohr would have it that both systems exist (separability) but, somehow, their existence is not independent of one another (nonlocality). If such a conception makes sense then, by tailoring the concept of physical reality so as to make it true by definition that the quantum theory is not local, Bohr's response might embrace separability and even concede the validity of the EPR argument, but still block the impact of EPR on the issue of completeness. Despite Bohr's seeming tolerance for a breakdown of locality in his response here to EPR, in other places Bohr rejects nonlocality in the strongest terms. For example in discussing an electron double slit experiment, which is Bohr's favorite model for illustrating the novel conceptual features of quantum theory, and writing only weeks before EPR, Bohr argues as follows:

If we only imagine the possibility that without disturbing the phenomena we determine through which hole the electron passes, we would truly find ourselves in irrational territory, for this would put us in a situation in which an electron, which might be said to pass through this hole, would be affected by the circumstance of whether this [other] hole was open or closed; but … it is completely incomprehensible that in its later course [the electron] should let itself be influenced by this hole down there being open or shut. (Bohr 1935b)

It is uncanny how closely Bohr's language mirrors that of EPR. But here Bohr defends locality and regards the very contemplation of nonlocality as “irrational” and “completely incomprehensible”. Since “the circumstance of whether this [other] hole was open or closed” does affect the possible types of predictions regarding the electron's future behavior, if we expand the concept of the electron's “reality”, as he appears to suggest for EPR, by including such information, we do “disturb” the electron around one hole by opening or closing the other hole. That is, if we give to “disturb” and to “reality” the very same sense that Bohr appears to give them when responding to EPR, then we are led to an “incomprehensible” nonlocality, and into the territory of the irrational. There is another way of trying to understand Bohr's position. According to one common reading (see Copenhagen Interpretation), after EPR Bohr embraced a relational (or contextual) account of property attribution. On this account to speak of the position, say, of a system presupposes that one already has put in place an appropriate interaction involving an apparatus for measuring position (or at least an appropriate frame of reference for the measurement; Dickson 2004). Thus “the position” of the system refers to a relation between the system and the measuring device (or measurement frame).
In the EPR context this would seem to imply that before one is set up to measure the position of Albert's system, talk of the position of Niels' system is out of place; whereas after one measures the position of Albert's system, talk of the position of Niels' system is appropriate and, indeed, we can then say truly that Niels' system “has” a position. Similar considerations govern momentum measurements. It follows, then, that local manipulations carried out on Albert's system, in a place we may assume to be far removed from Niels' system, can directly affect what is meaningful to say about, as well as factually true of, Niels' system. Similarly, in the double slit arrangement, it would follow that what can be said meaningfully and said truly about the position of the electron around the top hole would depend on the context of whether the bottom hole is open or shut. One might suggest that such relational actions-at-a-distance are harmless ones, perhaps merely “semantic”; like becoming the “best” at a task when your only competitor—who might be miles away—fails. Note, however, that in the case of ordinary relational predicates it is not inappropriate (or “meaningless”) to talk about the situation in the absence of complete information about the relata. So you might be the best at a task even if your competitor has not yet tried it, and you are definitely not an aunt (or uncle) until one of your siblings gives birth. But should we say that an electron is nowhere at all until we are set up to measure its position, or would it be inappropriate (meaningless?) even to ask? If quantum predicates are relational, they are different from many ordinary relations in that the conditions for the relata are taken as criterial for the application of the term. In this regard one might contrast the relativity of simultaneity with the proposed relativity of position. In relativistic physics specifying a world-line fixes a frame of reference for attributions of simultaneity to events regardless of whether any temporal measurements are being made or contemplated. But in the quantum case, on this proposal, specifying a frame of reference for position (say, the laboratory frame) does not entitle one to attribute position to a system, unless that frame is associated with actually preparing or completing a measurement of position for that system. To be sure, analyzing predicates in terms of occurrent measurement or observation is familiar from neopositivist approaches to the language of science; for example, in Percy Bridgman's operational analysis of physical terms, where the actual applications of test-response pairs constitute criteria for any meaningful use of a term (see theory and observation in science). Rudolf Carnap's later introduction of reduction sentences (see the entry on the Vienna Circle) has a similar character. Still, this positivist reading entails just the sort of nonlocality that Bohr seemed to abhor. In the light of all this it is difficult to know whether a coherent response can be attributed to Bohr reliably that would derail EPR. (In different ways, Dickson 2004 and Halvorson and Clifton 2004 make an attempt on Bohr's behalf. These are examined in Whitaker 2004 and Fine 2007.) Bohr may well have been aware of the difficulty in framing the appropriate concepts clearly when, a few years after EPR, he wrote,

The unaccustomed features of the situation with which we are confronted in quantum theory necessitate the greatest caution as regards all questions of terminology.
Speaking, as it is often done, of disturbing a phenomenon by observation, or even of creating physical attributes to objects by measuring processes is liable to be confusing, since all such sentences imply a departure from conventions of basic language which even though it can be practical for the sake of brevity, can never be unambiguous. (Bohr 1939, p. 320. Quoted in Section 3.2 of the entry on the Uncertainty Principle.)

3. Development of EPR

3.1 Spin and the Bohm version

For about fifteen years following its publication, the EPR paradox was discussed at the level of a thought experiment whenever the conceptual difficulties of quantum theory became an issue. In 1951 David Bohm, a protégé of Robert Oppenheimer and then an untenured Assistant Professor at Princeton University, published a textbook on the quantum theory in which he took a close look at EPR in order to develop a response in the spirit of Bohr. Bohm showed how one could mirror the conceptual situation in the EPR thought experiment by looking at the dissociation of a diatomic molecule whose total spin angular momentum is (and remains) zero; for instance, the dissociation of an excited hydrogen molecule into a pair of hydrogen atoms by means of a process that does not change an initially zero total angular momentum (Bohm 1951, Sections 22.15–22.18). In the Bohm experiment the atomic fragments separate after interaction, flying off in different directions freely. Subsequently, measurements are made of their spin components (which here take the place of position and momentum), whose measured values would be anti-correlated after dissociation. In the so-called singlet state of the atomic pair, the state after dissociation, if one atom's spin is found to be positive with respect to the orientation of an axis at right angles to its flight path, the other atom would be found to have a negative spin with respect to an axis with the same orientation. Like the operators for position and momentum, spin operators for different orientations do not commute. Moreover, in the experiment outlined by Bohm, the atomic fragments can move far apart from one another and so become appropriate objects for assumptions that restrict the effects of purely local actions. Thus Bohm's experiment mirrors the entangled correlations in EPR for spatially separated systems, allowing for similar arguments and conclusions involving locality, separability, and completeness. Indeed, a recently discovered note of Einstein's, which may have been prompted by Bohm's treatment, contains a very sketchy spin version of the EPR argument – once again pitting completeness against locality (“A coupling of distant things is excluded.” Sauer 2007, p. 882). Following Bohm (1951) a paper by Bohm and Aharonov (1957) went on to outline the machinery for a plausible experiment in which entangled spin correlations could be verified. It has become customary to refer to experimental arrangements involving determinations of spin components for spatially separated systems, and to a variety of similar set-ups (especially ones for measuring photon polarization), as “EPRB” experiments—“B” for Bohm. Because of technical difficulties in creating and monitoring the atomic fragments, however, there seem to have been no immediate attempts to perform a Bohm version of EPR.
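For reference, the singlet state of the two spin-1/2 fragments can be written in a standard textbook form (|↑⟩ and |↓⟩ denote spin eigenstates along a chosen axis, and the subscripts label the two atoms):

\[
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\left(\,|{\uparrow}\rangle_1 |{\downarrow}\rangle_2 \;-\; |{\downarrow}\rangle_1 |{\uparrow}\rangle_2\,\right).
\]

The state takes this same form whatever axis is chosen, which is why the perfect anti-correlation holds for any common orientation of the two measuring devices; and since spin operators for different orientations do not commute, the singlet supports the same pattern of argument that position and momentum support in EPR.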
3.2 Bell and beyond

That was to remain the situation for almost another fifteen years, until John Bell utilized the EPRB set-up to construct a stunning argument, at least as challenging as EPR, but to a different conclusion (Bell 1964). Bell shows that, under a given set of assumptions, certain of the correlations that can be measured in runs of an EPRB experiment satisfy a particular set of constraints, known as the Bell inequalities. In these EPRB experiments, however, quantum theory predicts that the measured correlations violate the Bell inequalities, and by an experimentally significant amount. Thus Bell shows (see the entry on Bell's Theorem) that quantum theory is inconsistent with the given assumptions. Prominent among these is an assumption of locality, similar to the locality assumptions tacitly assumed in EPR and (explicitly) in the one-measurement and many-measurement arguments of Einstein that depend on separability-locality. Thus Bell's theorem is often characterized as showing that quantum theory is nonlocal. However, since several other assumptions are needed in any derivation of the Bell inequalities (roughly, assumptions guaranteeing a classical representation of the quantum probabilities; see Fine 1982a and Malley 2004), one should be cautious about singling out locality as necessarily in conflict with the quantum theory. Bell's results were explored and deepened by various theoretical investigations and they have stimulated a number of increasingly sophisticated and delicate EPRB-type experiments designed to test whether the Bell inequalities hold where quantum theory predicts they should fail. With a few anomalous exceptions, the experiments confirm the quantum violations of the inequalities. (Baggott 2004 contains a readable account of the major refinements and experiments. Genovese 2005 is an exhaustive technical review.) The confirmation is quantitatively impressive, although the experiments continue to leave open at least two different ways (corresponding to the prism and synchronization models sketched in Fine 1982b) to reconcile the data with frameworks that embody locality and separability. One way (prisms) exploits the low rate of detection in most experiments; the other way (synchronization) exploits time delays associated with coincidence counts. (For the former see Larsson 1999 and Szabo and Fine 2002; for the latter see Larsson and Gill 2004 and the EPRB simulation constructed in de Raedt et al 2007.) The difficulty is to carry out an efficient experiment that controls for these sorts of errors and that excludes communication about detections between the two wings of the experiment as well as communication between emissions at the source and the choice of measurements in the wings. (Scheidl et al 2010 is an attempt to exclude these two types of communication but does not control the errors sufficiently, and Giustina et al 2013 is an attempt to control the errors but leaves open the possibility of communication.) While the exact significance of experimental tests of the Bell inequalities thus remains somewhat controversial, the techniques developed in the experiments, and related theoretical ideas for utilizing the entanglement associated with EPRB-type interactions, have become important in their own right.
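The size of the quantum violation at issue is easy to exhibit numerically. The following is a minimal illustrative sketch, not a model of any actual experiment: it assumes the standard quantum prediction E(a, b) = −cos(a − b) for singlet-state spin correlations and uses the CHSH (Clauser-Horne-Shimony-Holt) form of the Bell inequality, under which the local models at issue satisfy |S| ≤ 2.

```python
import math

def corr(a, b):
    # Quantum prediction for the correlation of spin measurements
    # along directions a and b (angles in radians) on the singlet state.
    return -math.cos(a - b)

def chsh(a1, a2, b1, b2):
    # CHSH combination S = E(a1,b1) - E(a1,b2) + E(a2,b1) + E(a2,b2);
    # local (separable) models satisfy |S| <= 2.
    return corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)

# Measurement angles that maximize the quantum violation.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2) > 2
```

The gap between 2 and 2√2 is the quantitatively impressive margin, noted above, by which experiment sides with the quantum predictions.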
These techniques and ideas, stemming from EPRB and the Bell theorem, have applications now being advanced in the relatively new field of quantum information theory — which includes quantum cryptography, teleportation and computing (see Quantum Entanglement and Information). To go back to the EPR dilemma between locality and completeness, it would appear from the Bell theorem that Einstein's strategy of maintaining locality, and thereby concluding that the quantum description is incomplete, may have fixed on the wrong horn. Even though the Bell theorem does not rule out locality conclusively, it should certainly make one wary of assuming it. On the other hand, since Einstein's exploding gunpowder argument (or Schrödinger's cat), along with his later arguments over macro-systems, support incompleteness without assuming locality, one should be wary of adopting the other horn of the dilemma, affirming that the quantum state descriptions are complete and “therefore” that the theory is nonlocal. It may well turn out that both horns need to be rejected: that the state functions do not provide a complete description and that the theory is also nonlocal (although possibly still separable; see Winsberg and Fine 2003). There is at least one well-known approach to the quantum theory that makes a choice of this sort, the de Broglie-Bohm approach (Bohmian Mechanics). Of course it may also be possible to break the EPR argument for the dilemma plausibly by questioning some of its other assumptions (e.g., separability, the reduction postulate, the eigenvalue-eigenstate link, or a common assumption of measurement independence). That might free up the remaining option, to regard the theory as both local and complete. Perhaps a well-developed version of the Everett Interpretation would come to occupy this branch of the interpretive tree.

Bibliography

• Bacciagaluppi, G. and A. Valentini, 2009, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge: Cambridge University Press.
• Baggott, J., 2004, Beyond Measure: Modern Physics, Philosophy and the Meaning of Quantum Theory, Oxford: Oxford University Press.
• Bell, J. S., 1964, “On the Einstein-Podolsky-Rosen Paradox”, Physics, 1: 195–200; reprinted in Bell 1987.
• –––, 1987, Speakable and Unspeakable in Quantum Mechanics, New York: Cambridge University Press.
• Beller, M., 1999, Quantum Dialogue: The Making of a Revolution, Chicago: University of Chicago Press.
• Bohm, D., 1951, Quantum Theory, New York: Prentice Hall.
• Bohm, D., and Y. Aharonov, 1957, “Discussion of Experimental Proof for the Paradox of Einstein, Rosen and Podolsky”, Physical Review, 108: 1070–1076.
• Bohr, N., 1935a, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”, Physical Review, 48: 696–702.
• –––, 1935b, “Space and Time in Nuclear Physics”, Ms. 14, March 21, Manuscript Collection, Archive for the History of Quantum Physics, American Philosophical Society, Philadelphia.
• –––, 1939, “The causality problem in atomic physics”, in Bohr 1996, pp. 303–322.
• –––, 1949, “Discussions with Einstein on epistemological problems in atomic physics”, in Schilpp 1949, pp. 199–241; reprinted in Bohr 1996, pp. 339–381.
• –––, 1996, Collected Works (Volume 7), Amsterdam: North Holland.
• Born, M. (ed.), 1971, The Born-Einstein Letters, New York: Walker.
• De Raedt, K., et al., 2007, “A Computer Program to Simulate Einstein–Podolsky–Rosen–Bohm Experiments with Photons”, Computer Physics Communications, 176: 642–651.
• Dickson, M., 2004, “Quantum Reference Frames in the Context of EPR”, Philosophy of Science, 71: 655–668.
• Einstein, A., 1936, “Physik und Realität”, Journal of the Franklin Institute, 221: 313–347; reprinted in translation in Einstein 1954.
• –––, 1948, “Quanten-Mechanik und Wirklichkeit”, Dialectica, 2: 320–324; translated in Born 1971, pp. 168–173.
• –––, 1953a, “Einleitende Bemerkungen über Grundbegriffe”, in A. George (ed.), Louis de Broglie: Physicien et penseur, Paris: Editions Albin Michel, pp. 5–15.
• –––, 1953b, “Elementare Überlegungen zur Interpretation der Grundlagen der Quanten-Mechanik”, in Scientific Papers Presented to Max Born, New York: Hafner, pp. 33–40.
• –––, 1954, Ideas and Opinions, New York: Crown.
• Einstein, A., B. Podolsky, and N. Rosen, 1935, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”, Physical Review, 47: 777–780 [available online].
• Fine, A., 1982a, “Hidden Variables, Joint Probability and the Bell Inequalities”, Physical Review Letters, 48: 291–295.
• –––, 1982b, “Some Local Models for Correlation Experiments”, Synthese, 50: 279–294.
• –––, 1996, The Shaky Game: Einstein, Realism and the Quantum Theory, 2nd edition, Chicago: University of Chicago Press.
• –––, 2007, “Bohr's Response to EPR: Criticism and Defense”, Iyyun, The Jerusalem Philosophical Quarterly, 56: 31–56.
• Genovese, M., 2005, “Research on hidden variable theories: A review of recent progress”, Physics Reports, 413: 319–396.
• Giustina, M., et al., 2013, “Bell violation using entangled photons without the fair-sampling assumption”, Nature, 497: 227–230.
• Halvorson, H., 2000, “The Einstein-Podolsky-Rosen State Maximally Violates Bell's Inequality”, Letters in Mathematical Physics, 53: 321–329.
• Halvorson, H. and R. Clifton, 2004, “Reconsidering Bohr's Reply to EPR”, in J. Butterfield and H. Halvorson (eds.), Quantum Entanglements: Selected Papers of Rob Clifton, Oxford: Oxford University Press, pp. 369–393.
• Harrigan, N. and R. W. Spekkens, 2010, “Einstein, Incompleteness, and the Epistemic View of Quantum States”, Foundations of Physics, 40: 125–157.
• Held, C., 1998, Die Bohr-Einstein-Debatte: Quantenmechanik und Physikalische Wirklichkeit, Paderborn: Schöningh.
• Hooker, C. A., 1972, “The nature of quantum mechanical reality: Einstein versus Bohr”, in R. G. Colodny (ed.), Paradigms and Paradoxes, Pittsburgh: University of Pittsburgh Press, pp. 67–302.
• Howard, D., 1985, “Einstein on Locality and Separability”, Studies in History and Philosophy of Science, 16: 171–201.
• Jammer, M., 1974, The Philosophy of Quantum Mechanics, New York: Wiley.
• Larsson, J.-A., 1999, “Modeling the Singlet State with Local Variables”, Physics Letters A, 256: 245–252.
• Larsson, J.-A. and R. D. Gill, 2004, “Bell's inequality and the coincidence-time loophole”, Europhysics Letters, 67: 707–713.
• Malley, J., 2004, “All Quantum Observables in a Hidden-Variable Model Must Commute Simultaneously”, Physical Review A, 69 (022118): 1–3.
• Sauer, T., 2007, “An Einstein manuscript on the EPR paradox for spin observables”, Studies in History and Philosophy of Modern Physics, 38: 879–887.
• Scheidl, T., et al., 2010, “Violation of local realism with freedom of choice”, Proceedings of the National Academy of Sciences, 107: 19708–19713.
• Schilpp, P. A. (ed.), 1949, Albert Einstein: Philosopher-Scientist, La Salle, IL: Open Court.
• Schlosshauer, M., 2007, Decoherence and the Quantum-to-Classical Transition, Heidelberg/Berlin: Springer.
• Schrödinger, E., 1935a, “Die gegenwärtige Situation in der Quantenmechanik”, Naturwissenschaften, 23: 807–812, 823–828, 844–849; English translation in Trimmer 1980.
• –––, 1935b, “Discussion of Probability Relations between Separated Systems”, Proceedings of the Cambridge Philosophical Society, 31: 555–562.
• Szabo, L. and A. Fine, 2002, “A Local Hidden Variable Theory for the GHZ Experiment”, Physics Letters A, 295: 229–240.
• Trimmer, J. D., 1980, “The Present Situation in Quantum Mechanics: A Translation of Schrödinger's ‘Cat Paradox’ Paper”, Proceedings of the American Philosophical Society, 124: 323–338.
• von Neumann, J., 1955, Mathematical Foundations of Quantum Mechanics, trans. Robert T. Beyer, Princeton: Princeton University Press.
• Whitaker, M. A. B., 2004, “The EPR Paper and Bohr's Response: A Re-Assessment”, Foundations of Physics, 34: 1305–1340.
• Winsberg, E. and A. Fine, 2003, “Quantum Life: Interaction, Entanglement and Separation”, Journal of Philosophy, C: 80–97.
Overdetermination and the Autonomy of the Mental

The problem of mental causation that arises from the overdetermination argument could possibly be solved by rejecting the premise of the completeness of physics. I begin by elaborating the problem, give a priori reasons to reject the completeness of physics, and examine the resulting problems associated with emergentism. I shall then move on to a posteriori reasons for rejecting the aforementioned premise from the fields of quantum electrodynamics (QED) and quantum chemistry (QC). I will conclude that although these reasons are justified, a rejection of the completeness of physics is but the first step towards a satisfying account of mental causation.

One of the problems that the non-reductive materialist encounters in maintaining that mental events are irreducible to physical events is that of mental causation. This problem arises out of the inconsistency of five premises: 1) mental properties are distinct from physical properties; 2) these mental properties can cause physical properties; 3) they furthermore supervene on their physical realisers, i.e. any mental property necessarily has a physical property, and whatever has that physical property has the mental property; 4) the completeness of physics, i.e. every physical property is sufficiently caused by a prior physical property; and 5) there can be no two sufficient causes for a property (Kallestrup, 2006: 459, 466; Papineau, 2001; Kim, 1996). The argument from these premises was pressed repeatedly by Kim and became known as Descartes' Revenge, since it showed how a line of reasoning similar to the one that brought down substance dualism can be applied to non-reductive materialism (Kim, 1992; 1996; 1997, et cetera): If 1), 2) and 3) are true, then we get a picture of mental causation wherein a mental property M1 can cause changes in its physical realiser P1 as well as in the realisers of other mental properties, say P2. However, if 4) is true, then P1 is already a sufficient cause for P2, which means that P2 is overdetermined, i.e. it has two sufficient causes, M1 and P1. This, however, is forbidden by 5). (In the usual diagram of this argument, dashed lines indicate supervenience and unbroken lines causation.) The only way out of this mess is to reject one of the premises. This essay shall attempt to reject 4), the completeness of physics.

Let me elaborate on the meaning of this premise. First, we need a conception of the physical, which for simplicity this essay will define as everything non-mental, under the assumption that there are only these two kinds (for a more elaborate definition, see Papineau, 2002). Examples of the physical are brain states and quantum states. What the completeness of physics suggests is a bottom-up explanation of the world: the standard model of particle physics, which defines all known matter and forces, together with quantum mechanics (QM), can in principle predict all behaviour of molecules in chemistry. The same is true for biochemistry and eventually neurology. What this means is that brain states are “nothing more” than, and reducible to, aggregates of quantum states (Morrison, 2006: 876; Kim, 1997: 279-286). The reason we start with QM is that it is more fundamental and closer to the “source” of higher level explanations (Weinberg, 1987: 437; also Papineau, 2002; Hendry, 2006: 154). Rejecting completeness would mean that explanatory gaps arise within the physical world. The laws of QM would be unable to predict chemistry, which could not predict neurobiology.
If a sufficiently large number of particles come together, new properties would emerge, and they would keep emerging whenever a system becomes sufficiently more complex. This means that there are things about chemistry that are unpredictable from QM, and so forth. This position, let us call it emergentism (see elaborations on this definition in Crane, 2006: 3ff), would help the non-reductive materialist in the following way: if the physical world were a "layered world" (Kallestrup, 2006: 467), then there is room for downward causation. Higher-level properties, although causally connected to lower levels, have their own fundamental emergent laws. These "unexplained explainers" (Horgan, 1993: 557-558) also have causal powers that they can exercise downward on their realisers (Crane, 2006: 4). If this image were abstracted to mental properties, it means that M1 can directly cause P2, since P1 is not necessarily a sufficient cause for P2. To give a mental example: "The macro-property of me, my decision […][an M1] affects the micro-property of my arm […][a P2]" (Crane, 2006: 15). This seems a wee bit wild! How can a physical property not be sufficiently caused by another physical property? Does that not require some magical power of M1 and M2 that renders the interaction of their realisers unexplainable? In fact, the very point of premise 4) was to forbid this kind of downward causation from higher-level properties to lower-level ones (Hendry, 2006: 154, 156; McLaughlin, 1992; Kim, 1997; Horgan, 1993), and indeed, not too many non-reductive materialists would be willing to come along without premise 4) (Horgan, 1993: 560; Crane, 2006: 4). Let me summarise where we are now: we started with Descartes' Revenge and noted that a response to it would require giving up one of the premises of non-reductive materialism. We went for the completeness of physics. This led us to emergentism, since if higher-level systems are not predictable from lower-level systems, there must be emergent properties and laws about them. Finally, we saw that this solution is not necessarily too attractive, since it allows for wild claims about downward causation. Therefore, prima facie, materialists are not emergentists, and whether our emergentists are still non-reductive materialists is not certain (see Crane, 2006: 4, 10, 22). Now that the scene is set, the next step in defending our rejection of 4) needs to be a posteriori. If we can answer the question whether there are emergent properties in the natural sciences with yes, rejecting the premise would seem reasonable and a solution to the problem of mental causation attainable. Before we start, though, let us straighten out two definitions that we will use to test the following examples: supervenience shall describe a situation where any higher-level property change is necessarily accompanied by a lower-level property change (Hendry, 2006: 153). Emergentism shall mean that these higher-level properties are furthermore not predictable from lower-level properties, i.e. even if complete knowledge of all lower-level properties existed, the next higher-level properties would emerge unpredictably (for simplicity, this essay will ignore the nuances between different forms of emergentism, for instance British emergentism). A classic example of a higher-level chemical property that is inexplicable in terms of its lower-level realisers, used by the British emergentists around Broad, is the transparency of water (Broad, 1925: 70ff; McLaughlin, 1992). A supervenience relation exists since any chemical change (e.g.
transparency) is accompanied by a physical change in the particles that make up water. Furthermore, the particles that make up water could not explain how transparency came about. Yet McLaughlin uses this as an example of why emergentism failed. He argues that since the discoveries of QM there is no more mystery about the transparency of fluids (McLaughlin, 1992). Indeed, QED can give bottom-up explanations and predictions of how photons interact with the electrons around water molecules to extremely high levels of accuracy (Feynman, 1979). Therefore, there are no emergent laws in QC (McLaughlin, 1992; Papineau, 2001: 19). I disagree with McLaughlin about the implications of this example. I propose that it was merely an unfortunate choice of examples that brought about the British emergentists' downfall so quickly, since there are many other instances of possible emergent properties in QC that cannot be dismissed as easily. Let me examine but a few: some physicists describe the formation of crystals as an emergent phenomenon, as it occurs only in sufficiently large systems once enough particles are cooled to certain temperatures and start exhibiting "rigidity, elasticity and regularity", none of which can be found in fewer particles (Morrison, 2006: 880). This is an example of symmetry breaking, where a small change to a system results in it taking on a whole new orderly symmetry. What this means is that a little cooling of a system, or adding a few particles, can result in matter "undergo[ing] mathematically sharp phase transitions to states where the microscopic symmetries and equations are in a sense violated" (Morrison, 2006: 881). For instance, quanta of sound waves (phonons) emerge and start behaving according to simple rules independent of the underlying quantum laws (Laughlin, 1999). Furthermore, we ought to note that in this emergent system the constituents have not disappeared; rather, the emergent phenomenon would disappear once the constituents are separated (Morrison, 2006: 883). Sound has no meaning for single particles, only for systems. How does this differ from Broad's case for emergentism? Let us find out by testing this example against our definitions: it satisfies the supervenience condition, since any change to a crystal necessitates a change to its constituents. The question whether it satisfies the emergence condition is trickier. Are the laws of QM able to predict at which point, and how, a crystal will come about if more particles are added below a certain temperature? Let me try to answer with another example: the Schrödinger equation is an example of a quantum law at Weinberg's "source" of explanation. It describes the charge and mass of all matter in the universe extremely accurately (except for radioactivity and gravitational curvature) (Morrison, 2006: 879; Feynman, 1979). However, it produces very inaccurate results for any more than about 10 particles, not because the equation itself fails, but because solving it for larger systems forces the physicist to use approximations that require experimental input (Laughlin and Pines, 2000). Therefore, the physicist is unable to mathematically derive the behaviour of larger systems from quantum laws (Morrison, 2006: 879). This suits our emergence condition; but more than that, the Schrödinger equation must solve for the individual particles within the framework of a larger system or molecule, since they are somewhat "constrained" by the overall structure (Hendry, 2006: 165).
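To get a feel for why brute-force derivation fails so fast, here is a rough, illustrative calculation (my own arithmetic, not taken from the cited papers; the grid resolution is an arbitrary assumption): the many-body wave function of \(N\) particles in three dimensions, discretised on \(g\) points per axis, needs \(g^{3N}\) complex amplitudes.

```python
# Illustrative cost of merely storing (let alone solving for) a many-body
# wave function: g grid points per axis, 3 axes per particle => g**(3*N)
# complex amplitudes. All numbers here are assumptions for illustration.
g = 100                    # assumed grid resolution per axis
bytes_per_amp = 16         # one complex128 number

for n_particles in (1, 2, 5, 10):
    amps = g ** (3 * n_particles)
    print(f"{n_particles:>2} particles: {amps:.0e} amplitudes, "
          f"~{amps * bytes_per_amp:.0e} bytes")

# 1 particle :  1e+06 amplitudes -- easy.
# 10 particles: 1e+60 amplitudes -- hopeless on any conceivable computer,
# hence the approximations (with experimental input) that Laughlin and
# Pines describe.
```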
The emergentist will justifiably claim that this constraining of the parts by the overall structure is an example of downward causation! (For further, more complex examples, see Bose-Einstein condensates in Healey, forthcoming: 5 and Laughlin and Pines, 2000; and superconductivity in Weinberg, 1986.) An advocate of this conclusion is Anderson, who states that at "each level of complexity entirely new properties appear [that] require research […] as fundamental […] as any other" (1972: 393). In a sense, the whole becomes "not only more but very different from the sum of its parts" (Anderson, 1972: 395). Now we seem to have found some candidates for empirical emergent properties in physics that fit both of our definitions. However, the physicalist has a compelling way of responding to these: physics may not seem complete prima facie, i.e. there is no full account of the interaction of QM and chemistry, yet we ought to look for potential future explanations (Papineau, 2001; Broad, 1925: 70). She may argue that full bottom-up predictions are not made for "practical reasons" (Hendry, 2006: 165), and that eventually this explanatory gap will be closed, just as in the example of Broad and transparency. This is a very difficult critique to respond to, but suffice it to say that unlike Broad, the modern physicist has very subtle mathematical ways of testing the validity of emergent properties in QM, which are unfortunately beyond the scope of this essay (see one-to-one mapping in Morrison, 2006: 885). This brings us back to where we started: Anderson launches a direct assault on Kim's "nothing more" position. We now find ourselves asking to what extent our understanding of higher-level properties comes in terms of their constituent lower-level properties (Morrison, 2006: 879). How does this tie in with our original question? In order to find a solution to Kim's overdetermination argument, I proposed to reject the premise of the completeness of physics, and I hope to have demonstrated with a posteriori evidence that such a proposal is not unreasonable. However, this can only be the very first step in a revival of emergentism, since the rejection of a premise does not yet bring about a satisfying account of mental causation (see Kallestrup, 2006: 468). The next thing for the philosopher to do would be to find such an account in light of this a posteriori evidence and to retest modern physics examples against it. However, that task shall be left for another post.

Anderson, P. W. 1972. More is Different. Science 177: 393-396.
Broad, C. D. 1925. The Mind and its Place in Nature. Routledge.
Crane, T. 2006. The Significance of Emergence. In Gillett, C. and B. Loewer, eds. Physicalism and Its Discontents.
Feynman, R. 1979. Quantum Electrodynamics. Series of 4 lectures given at the Douglas Robb Memorial Lectures, University of Auckland. Vega Science Trust.
Healey, R. forthcoming. Reduction and Emergence in Bose-Einstein Condensates. Foundations of Physics.
Hendry, R. F. 2006. Is there Downward Causation in Chemistry? In Philosophy of Chemistry: Synthesis of a New Discipline, 153-179. Springer.
Horgan, T. 1993. From Supervenience to Superdupervenience: Meeting the Demands of a Material World. Mind 102: 555-586.
Kallestrup, J. 2006. The Causal Exclusion Argument. Philosophical Studies 131(2): 459-485.
Kim, J. 1992. Downward Causation. In Beckerman, A., Flohr, H., and J. Kim, eds. Emergentism and Reduction.
Kim, J. 2006. Mental Causation. In Philosophy of Mind, 173-204. Westview.
Kim, J. 1997. Supervenience, Emergence and Realization in the Philosophy of Mind. In Carrier, M. and P. K. Machamer, eds. Mindscapes: Philosophy, Science, and the Mind, 271-293. Universitätsverlag Konstanz.
Kim, J. 1999. Making Sense of Emergence. Philosophical Studies 95: 3-36.
Laughlin, R. B. 1999. Nobel Lecture: Fractional Quantization. Reviews of Modern Physics 71: 863-874.
Laughlin, R. B. and D. Pines. 2000. The Theory of Everything. Proceedings of the National Academy of Sciences 97: 28-31.
Lewis, D. 1973. Counterfactuals. Blackwell Publishing.
McLaughlin, B. 1992. The Rise and Fall of British Emergentism. In Beckerman, A., Flohr, H., and J. Kim, eds. Emergentism and Reduction.
Morrison, M. 2006. Emergence, Reduction and Theoretical Principles: Rethinking Fundamentalism. Philosophy of Science 73: 876-887.
Papineau, D. 2001. The Rise of Physicalism. In Gillett, C. and B. Loewer, eds. Physicalism and Its Discontents.
Papineau, D. 2002. A Case for Materialism. In Thinking about Consciousness. Oxford University Press.
Weinberg, S. 1986. Superconductivity for Particular Theorists. Progress of Theoretical Physics Supplement 86: 43-53.
Monday, July 15, 2013

Bohmian mechanics, a ludicrous caricature of Nature

Some people can't get used to the fact that classical physics in the most general sense – a class of theories that identify Nature with the objectively well-defined values of certain (classical) degrees of freedom that are observable in principle and that evolve according to some (classical) equations of motion, usually differential equations in time, mostly deterministic ones – has been excluded as a possible fundamental description of Nature for almost a century. Classical physics has been falsified, and the falsification – a death for a theory – is an irreversible event. Nevertheless, those people would sleep with this zombie and do anything and everything else that is needed (but isn't sufficient) to resuscitate it. Of course, it's not possible to resuscitate it, but those people just won't stop trying. Bohmian mechanics, one of the main strategies to pretend that classical physics hasn't died and hasn't been superseded by fundamentally different quantum mechanics, was invented by Prince Louis de Broglie in 1927, who called it "the pilot-wave theory". In the late 1920s, the 1930s, and the 1940s, physicists were largely competent, so they didn't have any doubts that the pilot-wave theory was misguided by its very own guiding wave ;-). Exactly 25 years later, the approach was revived by David Bohm, who made the picture popular, largely because he was a fashionable, media-savvy commie (he's almost certainly the recipient of Wolfgang Pauli's famous criticism "not even wrong" that was ironically hijacked by aggressive Shmoitian crackpots in the recent decade). Prince Louis de Broglie liked the new life that apparently returned to the veins of his old sick theory, so he didn't even care too much that his theory was going to be attributed to someone else, and that the someone else was a Marxist rather than an aristocrat. The constraint that defines Bohmian mechanics is simple: it should be a classical theory that emulates quantum mechanics as well as it can. The champions of the Bohmian theory know that getting the same predictions as quantum mechanics is the maximum goal they may dream about – they can never beat quantum mechanics – and they sort of realize that even this tie is too much to ask for in general. Most of the Bohmian advocates seem to know that their theory can't be accurate, especially because of its fundamental conflict with relativity – but they don't seem to care. The fact that Bohmian mechanics agrees with their fully discredited preconception that Nature is fundamentally classical is more important to them than the (in)accuracy of the predictions extracted from their pet theory. It's straightforward to explain why it's possible to design a classical theory that parrots quantum mechanics when it comes to certain questions. Bohmian mechanics is at least vaguely defensible only in non-relativistic quantum mechanical models; in more general theories, it collapses completely. How does it rebuild non-relativistic quantum mechanics for one particle, for example?
Proper quantum mechanics of this system may be written down in Schrödinger's picture, which dictates the following time evolution of the wave function:\[ i\hbar\frac{\partial}{\partial t}\psi(q,t)=-\frac{\hbar^2}{2m}\nabla^2\psi(q,t) + V(q)\psi(q,t). \] The way this wave evolves in agreement with the equation above contains all the "mathematical beef" of quantum mechanics for the given system, and to get the right numbers, any classical caricature of quantum mechanics simply has to contain some objects that are pretty much equivalent to \(\psi(q,t)\). These objects are then assigned totally different, wrong interpretations in the caricatures, but they must be there and they must evolve according to the same Schrödinger equation. Bohmian mechanics buys \(\psi(q,t)\) and incorrectly interprets it as a classical wave – a field that has objective values and is in principle measurable. Of course, we know from quantum mechanics as well as experiments that the value of the wave function simply shouldn't be and isn't measurable in a single repetition of an experiment. So the Bohmian apologists must also invent convoluted mechanisms to make the wave unmeasurable – because it is unmeasurable according to the experiments – despite the fact that the wave function is fundamentally measurable in their theory.

Bohmian Rhapsody, via Dilaton. Is this the real life? Is this just fantasy? Caught by the guiding wave. No escape from reality. Open your eyes. Look up to the skies and see: I'm just [a] state vector, I need no images. Because I'm easy come, easy go. A little high, little low. Anyway the [pilot] wave blows, doesn't really matter to me, to me.

The pilot-wave theory adopts \(\psi(q,t)\) as an objective classical wave – which it gives a new name, the "guiding wave" or "pilot wave" – but in order to agree with the fact that particles may be observed at sharp locations despite the fuzziness of the wave functions associated with them, it must add some additional degrees of freedom: the actual classical position of the particle. The defining philosophy of Bohmian mechanics is that the actual, classical position of the particle is "guided" by a function of the classical field emulating the wave function, so that the probability distribution for the particle's positions remains what it should be according to quantum mechanics. For example, the laws that guide the actual classical particle must be such that they repel the particle from the interference minima in a double-slit experiment. The right end of the picture (the photographic plate) shows denser and less dense regions, the interference maxima and minima. Can you find appropriate rules for one non-relativistic spinless quantum particle that are able to do this in a way that imitates quantum mechanics? You bet. All the tools are available in conventional quantum mechanics for this system. Recall that in quantum mechanics, \(\rho=|\psi(q,t)|^2\) is the probability density for the particle to sit near location \(q\) at time \(t\). But quantum mechanics also allows you to define the probability current\[ \bold j = \frac{1}{m}\, {\rm Re}\left( \psi^*\,\bold{\hat{p}}\,\psi \right). \] Note that it is again sesquilinear (bilinear with one star) in the wave function. We act on the wave function with the momentum operator \(\bold{\hat{p}}=-i\hbar\nabla\), multiply the result by \(\psi^*\) just like when we calculated the probability density, take the real part, and divide by the mass \(m\).
You see that it differs from the formula for the probability density only by the extra operator \(\bold{\hat{p}}/m\), the operator of the velocity, inserted in the middle. (The real part could have been taken in the probability density as well, because that expression was real to start with.) At any rate, if you define the probability density and the probability current correctly, they obey the continuity equation\[ \frac{\partial \rho}{\partial t} + \bold \nabla \cdot \bold j = 0. \] The divergence of the probability current exactly matches the decrease of the probability density in the given region. It means that the probability current measures how the probability has to flow into or out of a given infinitesimal volume if you want the probability density to change just as it should according to Schrödinger's equation. Now it's easy to realize that if you define a classical "velocity field"\[ \bold v(q,t) = \frac{\bold j(q,t)}{\rho(q,t)}, \] it will be very useful for emulating quantum mechanics. It's not hard to prove that if you define Bohmian mechanics as the "classicalized" wave function together with a classical position \(\bold q(t)\) that evolves according to the "guiding equation"\[ \frac{{\rm d}\bold q(t)}{{\rm d}t} = \bold v(\bold q(t),t), \] the trajectories of the classical particles will be repelled from the interference minima, attracted to the interference maxima, and will obey a more specific rule: if you imagine that the particles in the initial state are distributed according to the probability distribution given by \(\rho(\bold q,t)\), it will be true for the final state, too. This trick may be generalized to the case of \(N\) non-relativistic particles. In this case, the wave function \(\psi\) becomes a classical wave that is a function of the \(3N\)-dimensional configuration space. This configuration space is larger than the ordinary space and is "multi-local", and because we have this "multi-local" old-fashioned classical field, the theory becomes explicitly non-local, and a violation of the Lorentz symmetry, at least in principle, is inevitable. I would like to emphasize that it's no surprise at all that it's possible to find an equation that evolves the probability distribution in the right way. Imagine that you start with a wave function \(\psi(\bold q)\) at some time \(t_0\). Throw a trillion dots – particles – into the space, distributed according to \(\rho = |\psi|^2\). Do the same thing for the final moment \(t_1\), when the wave function is different. You will have two configurations of trillions of particles. It's not shocking that you may "connect the dots" from the initial state to the final state in some way. A way that is simple enough, the one based on the probability current and described above, gives you one of the solutions. But it's not the only solution. In reality, the "initial dots" could be connected with the "final dots" in infinitely many ways (well, a "mere" trillion factorial of them if you only have a trillion dots). In the continuous language, you could e.g. make the particles move along spirals inside the cylinders that surround the interference maxima. Is one way to connect the dots better than the others? Of course it's not. All of them are equally good. Quantum mechanics commands you to learn something about the initial state – some wave function or density matrix that encodes the initial probability distribution – and it allows you to predict the probabilities for the final state.
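To see the guiding equation in action, here is a minimal numerical sketch (my own toy example, not from the original post): a free particle in 1D, with the "two slits" crudely replaced by a superposition of two Gaussian packets, \(\hbar=m=1\), and a first-order Euler integrator for the trajectories. All parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Toy Bohmian trajectories for a free 1D particle (hbar = m = 1).
hbar = m = 1.0
N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)        # wavenumber grid for the FFT

def packet(x0, sigma=1.0):
    g = np.exp(-(x - x0)**2 / (4 * sigma**2)).astype(complex)
    return g / np.sqrt(np.sum(np.abs(g)**2) * (L/N))

psi = (packet(-5.0) + packet(+5.0)) / np.sqrt(2.0)   # crude "two slits"

# Sample initial particle positions from rho = |psi|^2, as Bohmian
# mechanics must assume in order to reproduce the quantum statistics.
rho = np.abs(psi)**2
rng = np.random.default_rng(0)
q = rng.choice(x, size=2000, p=rho / rho.sum())

dt, steps = 0.05, 400
for _ in range(steps):
    # exact free evolution of psi in Fourier space
    psi = np.fft.ifft(np.exp(-1j * hbar * k**2 * dt / (2*m)) * np.fft.fft(psi))
    # velocity field v = j/rho = (hbar/m) Im(psi* psi') / |psi|^2
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
    v = (hbar/m) * np.imag(np.conj(psi) * dpsi) / (np.abs(psi)**2 + 1e-300)
    # guiding equation dq/dt = v(q, t), integrated with an Euler step
    q = q + np.interp(q, x, v) * dt
```

A histogram of the final `q` reproduces \(|\psi|^2\) with its interference fringes – which is the whole point of the construction: the trajectories are reverse-engineered from the quantum \(\rho\) and \(\bold j\), not an independent prediction.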
But quantum mechanics doesn't tell you which of the initial particles is connected with which final particle, i.e. how to connect the dots. It doesn't inform you about any preferred classical trajectory that connects them (and Feynman's approach orders you to sum over all trajectories). If you could actually "measure" this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete. However, it's totally obvious that there's no way to measure the trajectories or permutations inside. The particles just don't have well-defined, in-principle-measurable trajectories between the measurements, for the usual reasons based on Heisenberg's uncertainty principle. If you tried to measure the trajectory before the final measurement, you would change the experiment and destroy or damage the final interference pattern. So all the precise lines on the "caricature of the double-slit experiment" are pure fantasy. They're just crutches for the people who need some specific picture of the intermediate states to be drawn. But the specific picture we drew is in no way better than the infinitely many other pictures we could draw that would predict the same interference pattern, the same probability distributions for the final state. Everything we added because we wanted the physical system to have objective properties prior to the measurement – because we're bigots who can't accept the fact that classical physics has died – is unphysical. The added value is purely negative. Everything we added to get from proper quantum mechanics to Bohmian mechanics is rubbish. And many things we're forced to lose when we switch from quantum mechanics to Bohmian mechanics are essential. Because the wave function has a probabilistic interpretation in proper quantum mechanics (it is a ready-to-cook meal from which one may quickly prepare various probability distributions by a calculation), it doesn't matter that it spreads. The spreading of the wave function doesn't make the world more fuzzy. It only makes our knowledge about the world more uncertain. But once we learn the answer to a question – e.g. about the position of a particle – the world fully regains the sharp character it boasted at the beginning. If you only know that the probabilities of 1, 2, 3, 4, 5, 6 are each 1/6 for some dice in Las Vegas, it doesn't mean that the dice became structureless balls or that the digits written on their sides have become fuzzy or mixed or smeared. It just means that we have an equally sharp cubic die but we just don't know its orientation in space. The uncertainty coming from quantum wave functions is analogous – it only differs from the "classical uncertainty" by its inevitability. That's not the case in Bohmian mechanics. The wave function is interpreted as a classical field of a sort, and it is objectively spreading. So something objective is being diluted all over the Universe. That's terrible, because it objectively makes the Universe increasingly more fuzzy and bizarre. The useless parts of the guiding wave – the "classicalized" wave function – should be killed in some way because they have become useless. But Bohmian mechanics doesn't imply anything of the sort. If you want to clean up the garbage of the no-longer-needed branches of the wave function, you will have to add another independent contrived mechanism. Such a mechanism will be a new source of a violation of the Lorentz invariance.
(You also need a special mechanism that prepares the guiding wave in a certain initial state, and one more mechanism that distributes the "actual particle" within the appropriate distribution with the right odds, because these two things don't follow from Bohmian mechanics as we have defined it above, either. Most of these things are ignored by the Bohmists. Note that with the right probabilistic interpretation – quantum mechanics directly connects the knowledge about the past with the knowledge about the future, without any new crutches in between – we don't need to invent any new mechanisms.) I think that a sane, critically thinking person must be able to realize what he is doing if he is doing such things. He is drawing a ludicrous caricature of Nature – of a physical system that is actually governed by the laws of proper quantum mechanics – that reproduces some properties of the correct, quantum theory. The project of drawing the caricature is motivated by the desire to defend a philosophical dogma: that the world is fundamentally classical, even though it is clearly not. If he has at least some conscience, he must feel as if he were counterfeiting a $100 banknote. He must know that what he is producing isn't the "real thing"; it is just a forgery that can bring him greater personal benefits than the actual banknotes, but that's where the advantages stop. Every change from proper quantum mechanics to the pilot-wave theory is clearly wrong – the "added value" is unquestionably negative. Because the Bohmists don't like the probabilistic character of the wave function, they turn it into a classical wave – the guiding wave. But a classical wave that spreads objectively makes the world ever more fuzzy. So one has to introduce new tricks to have a chance that this increasing fuzziness doesn't spoil the world. All these tricks – tricks that can't really ever be defined in such a way as to imitate quantum mechanics completely accurately – have to be invented and added just to mask the fact that the wave function is simply not a classical field. It's fair to say that the quantum mechanical claim that the wave function is not an objectively real, in-principle-measurable wave or field is something we have proven by direct experiments. Attempts to pretend that the wave function is a classical wave are just attempts to mask the truth. I am confident that every Bohmist must ultimately realize this, and he must be dishonest if he claims that his efforts are more justifiable than the efforts of creationists who are trying to obscure the explicit evidence in favor of evolution: they are exactly equally unjustifiable. Moreover, it's sometimes said or thought that a perfect emulation of quantum mechanics can be done. Because the invalidated dogma that Nature is fundamentally classical is holy for these bigots, they think that it should be done, too. But the truth is that it can't be done for a general physical system and for a general choice of the observables we may measure in actual experiments described by general enough quantum theories. Try to add the spin to a particle.
If the logic of Bohmian mechanics – the wave function "is" a classical field and we should also add classical values of a maximal set of commuting observables – were universally valid, it's clear that aside from the spinor-valued wave function \((c_{\rm up},c_{\rm down})\), we should also assume that Nature "objectively knows" the classical bit of information that tells you whether the spin is "actually" up or down. However, even the Bohmists realize that if every electron "objectively knew" whether its spin is up or down with respect to the \(z\)-axis, then the laws of physics would break the rotational symmetry, because the \(z\)-axis would play a privileged role. Roughly speaking, ferromagnets would always be oriented vertically, to mention an example. If the \(z\)-component of the classical angular momentum is quantized, it's totally obvious that the other components can't be quantized: a nonzero vector can't have integer (or half-integer) coordinates in every (rotated) coordinate system. Because they sort of realize that the rotational symmetry holds exactly and that the hypothesis of a classical value existing with respect to one axis would break the symmetry maximally, they decide that the Bohmian rules must be "skipped" in the case of the spin – they just manually omit some degrees of freedom that should be there according to the general prescription of Bohmian mechanics, and they hope that spin measurements are ultimately reduced to position measurements, so that it doesn't hurt if some degrees of freedom are not doubled in the usual Bohmian way. The reason why the case of the spin is obvious even to them is the fact that the different components of the spin are non-commuting observables, none of which is more "natural" than the others. After all, they are exactly equally natural because they are related by the rotational symmetry. While the spin is an obvious problem, the pathological character of Bohmian mechanics is much more general. Every piece of (qubit-like) discrete information in quantum mechanics – information labeling a finite-dimensional Hilbert space – is incompatible with the Bohmian philosophy. Recall that Bohmian mechanics added "classical trajectories" \(\bold q(t)\), and these coordinates were functions of time that evolved according to some differential equations. But that was only possible because the spectrum of the coordinates was continuous. If you think about observables with a discrete spectrum, it just doesn't work, because they would have to "jump to a different, sharply separated discrete eigenvalue" at some points, and there can't be any deterministic laws that would govern such jumps. Quantum mechanics tells you that a quantum computer composed of a very large number of qubits may perfectly emulate any quantum system. But that's not the case in Bohmian mechanics. An arbitrarily large quantum computer is composed of qubits, e.g. many electron spins, and because the spin isn't accompanied by a classical bit, Bohmian mechanics is forced to say that an arbitrarily large quantum computer only contains the "classicalized" wave function but no additional classical information analogous to the classical trajectories. So for a quantum computer, the whole "redundant superstructure" (which is what Albert Einstein called these extra coordinates – he was a foe of the pilot-wave theory, despite being a disbeliever in quantum mechanics) has to be omitted. This is quite an inconsistency in the Bohmian treatment of different quantum systems.
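Incidentally, the spin obstruction described above is easy to verify with a few lines of linear algebra. A minimal sketch (my own, with \(\hbar=1\)), using the standard Pauli matrices:

```python
import numpy as np

# Spin-1/2 operators, hbar = 1. Every component -- and every rotated
# combination n.S -- has the same two eigenvalues, +1/2 and -1/2, so no
# axis is privileged, exactly as the rotational symmetry demands.
hbar = 1.0
sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar/2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar/2 * np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)             # an arbitrary axis
print(np.linalg.eigvalsh(n[0]*sx + n[1]*sy + n[2]*sz))   # [-0.5  0.5]

# But the components don't commute, [Sx, Sy] = i*hbar*Sz, so no classical
# record can assign sharp values to all of them at once:
print(np.allclose(sx @ sy - sy @ sx, 1j * hbar * sz))    # True
```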
The actual reason behind the inconsistency is clear, of course: some physical systems may be caricatured by the pilot-wave trick, others can't. But in Nature, there actually isn't any qualitative (in principle observable) difference between these two classes of situations. I said that Bohmian mechanics doesn't allow you to consistently treat the particles' spin, or any other discrete degrees of freedom, for that matter. But the inadequacy of Bohmian mechanics is much worse than that. It really doesn't allow you to correctly deal with most observables in general quantum systems, not even with observables with a continuous spectrum. I discussed similar problems in "Bohmists and the segregation of primitive and contextual observables" four years ago. The problem is that Bohmian mechanics forces you to choose some observables that "really exist" – that are encoded in the objective extra coordinates supplementing the "classicalized" wave function. However, quantum mechanics implies that other observables just can't have a well-defined value at the same moment – because they don't commute with the first ones, stupid. That also means that Bohmian mechanics can't have any answers to questions about the values of these observables. The Bohmian trajectories in the picture above pretend that a particle has an objective position and an objective velocity. But what about the orbital angular momentum \(\bold{\hat L} = \bold{\hat q}\times \bold{\hat p}\)? A basic result of quantum mechanics is that the spectrum of \(\hat L_z\) is discrete; the eigenvalues are integer multiples of \(\hbar\). Already this elementary fact of quantum mechanics – even non-relativistic quantum mechanics – is completely inaccessible to Bohmian mechanics. The cross product of the classical position and the classical momentum of the "added Bohmian trajectories" isn't quantized at all. It has really nothing to do with the angular momentum that can be measured. And be sure that the measurement of the angular momentum is often – e.g. for electrons in atoms – much more natural and "fundamental" than the measurement of the particles' positions or momenta. That's because its eigenstates are much closer to the energy eigenstates, and those are the most natural basis of a Hilbert space because they describe stationary – and therefore lasting – states. But such a direct measurement of the discrete orbital angular momentum can't be done in Bohmian mechanics. Instead, Bohmian mechanics tells you that you have to continue the evolution of the wave function according to the laws stolen from proper quantum mechanics up to the moment when you can actually convert the original measurement into a measurement of a location, and hope that Bohmian mechanics knows how to emulate measurements of positions. It isn't quite the case, either, but even if it were, Bohmian mechanics is bringing an amazing degree of inconsistency into the way different observables – different functions on the phase space – are treated. A sensible theory should treat all functions of the coordinates and momenta, i.e. all functions on the phase space, equally, following unified rules. Quantum mechanics obeys this criterion; Bohmian mechanics doesn't.
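For reference, the textbook one-line reason why \(\hat L_z\) is quantized – standard quantum mechanics, not anything specific to this post – is that in polar coordinates\[ \hat L_z = -i\hbar\frac{\partial}{\partial\varphi}, \qquad \hat L_z\, e^{im\varphi} = m\hbar\, e^{im\varphi}, \] and the single-valuedness of the wave function under \(\varphi\to\varphi+2\pi\) forces \(m\in\mathbb{Z}\). The classical \((\bold q\times \bold p)_z\) evaluated along a Bohmian trajectory, by contrast, varies continuously and has no reason to land on integer multiples of \(\hbar\).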
We could say that, just as solipsists say that their own mind is the only physical system that may be claimed to be self-aware, Bohmian mechanics remains silent – merely reproducing the (accurately emulated) quantum evolution – up to the moment when macroscopic positions are apparently being measured (those are the "conscious events" that are supposed to replace quantum mechanics with something else). But in the real world, there's nothing special about the minds of the solipsists (except that they belong to the set of crazy people), and there's also nothing special about the positions of macroscopic objects in comparison with many other observables we may define. In quantum mechanics, you may directly construct operators for the angular momenta and ask about their possible values, their eigenvalues, and the predicted probabilities that the measured value will be one or another. It doesn't matter whether the angular momenta belong to large or small, conscious or unconscious objects. Quantum mechanics allows you to deal with all observables equally. In Bohmian mechanics, those things matter. Effectively, any measurement has to be continued up to the moment when it imprints itself into the position of a macroscopic object, which Bohmian mechanics claims to reproduce correctly. A totally new minefield for Bohmian mechanics is relativity. The minimal consistent relativistic theories of quantum particles are quantum field theories (QFTs). They include the spin; I have already discussed the Bohmian problems with the spin. But there are infinitely many similar problems. For example, you may choose many different bases of the QFT Hilbert space. They may be eigenstates of the occupation number operators, eigenstates of field operator distributions \(\hat \phi(\bold{x})\), and so on. It is not clear at all which of these observables should be added as the "extra classical trajectories" of Bohmian mechanics. In fact, it is totally obvious that none of the choices will behave correctly in all the experiments that may test a quantum field theory. Also, you can't add many of them or all of them (e.g. both the positions of particles and the classical values of the fields), because it would then be undetermined which of these "added", mutually conflicting classical degrees of freedom defines the "actual reality" that decides about a measurement. Sometimes the value of the field at a given point may be measured, especially when the frequencies are low. So it would seem that you need to add a "preferred classical field configuration" to the Bohmian version of a QFT. However, especially at high frequencies, the quantum field manifests itself as a collection of particles, so you may want to add the trajectories of the particles instead. Moreover, even if you represent a QFT as a system describing many particles, your Bohmian theory won't be able to deal with the most basic and universal processes that must exist in a QFT or any other relativistic quantum theory, such as the creation of a particle-antiparticle pair and its annihilation. If the individual particles evolve according to the "guiding wave" equations we discussed at the beginning, it's simply infinitely unlikely (the probability refers to the selection of the initial positions from the distribution) that they will ever collide with one another. Two random lines in a 3D space simply don't intersect one another. But if they don't directly collide, it means that they can't annihilate!
To allow the particles to annihilate (and be pair-created) with the (experimentally proven) nonzero probability, you would need to introduce a totally non-local extra dynamics that sometimes allows the particles to jump to a completely different place; or you would have to allow the annihilation of particle pairs that don't coincide in space. Any such extra mechanism would force you to change the original laws of physics in a way that would almost certainly contradict some other experiments, because the unmodified quantum laws simply work, and it was a healthy strategy for you to emulate them "perfectly" at the very beginning. Such modifications would especially contradict some experimental tests of relativity, because these modifications are so horribly nonlocal. So you have no chance to construct an operational Bohmian caricature of a quantum field theory. Needless to say, the problems become even more extreme once you switch to quantum gravity, i.e. string theory, because many more observables have a discrete spectrum, there are many more ways to choose the bases, the nonzero commutators of various observables are more important than ever before, and Bohmian mechanics just can't prosper in such general quantum situations. On one hand, quantum gravity, i.e. string theory, is just another quantum theory. On the other hand, it is "more quantum" than all the previous quantum theories, simply because the quantum phenomena affect many more questions that could still have been thought of classically if you worked with simpler quantum mechanical theories (for example, the spacetime topology – especially the number of Einstein-Rosen bridges in the spacetime – can't even be assigned a linear operator in a quantum gravity theory, as Maldacena and Susskind argued). The non-local fields, collapses, non-local jumps needed for particle annihilations, and other things represent an inevitable source of non-locality that can, in principle, send superluminal signals and that consequently contradicts the Lorentz symmetry of the special theory of relativity. There's no way out here. If you attempt to emulate a quantum field theory in this Bohmian way, you introduce lots of ludicrous gears and wheels – much as in the case of the luminiferous aether, gears and wheels that don't exist according to pretty much direct observations – and they must be finely adjusted to reproduce what quantum mechanics predicts (sometimes) without any adjustments whatsoever. Every new Bohmian gear or wheel you introduce generically breaks the Lorentz symmetry, thereby making the (wrong) prediction of a Lorentz violation, and you will need to fine-tune infinitely many properties of these gears and wheels to restore the Lorentz invariance and other desirable properties of a physical theory (even a simple and fundamental thing such as the linearity of Schrödinger's equation is totally unexplained in Bohmian mechanics and requires infinitely many adjustments to hold – while it may be derived from logical consistency in quantum mechanics). It's infinitely unlikely that they take the right values "naturally", so the theory is at least infinitely contrived. More likely, there's no way to adjust the gears and wheels to obtain relativistically invariant predictions at all.
I would say that we pretty much directly observe experimentally that the observations obey the Lorentz symmetry; that the wave function isn't an observable wave; and lots of other, totally universal and fundamental facts about the symmetries and the interpretation of the basic objects we use in physics. Bohmian mechanics is really trying to deny all these basic principles – it is trying to deny facts that may be pretty much directly extracted from experiments. It is in conflict with the most universal empirical data about reality collected in the 20th and 21st centuries. It wants to rape Nature. A pilot-wave-like theory has to be extracted from a very large class of similar classical theories, but infinitely many adjustments have to be made – a very special subclass has to be chosen – for the Bohmian theory to reproduce at least some predictions of quantum mechanics (to produce predictions that are at least approximately local, relativistic, rotationally invariant, unitary, linear, etc.). But even if one succeeds and the Bohmian theory does reproduce the quantum predictions, we can't really say that it has made correct predictions, because it was sometimes infinitely fudged and adjusted to produce the predetermined goal. On the other hand, quantum mechanics in general, and specific quantum mechanical theories in particular, genuinely do predict certain facts, including some very general facts about Nature. If you search for theories within the rigid quantum mechanical framework, while obeying the general postulates, you may reach many correct predictions and conclusions pretty much without any additional assumptions. If you ask any of the hundreds of questions (Is the wave function in principle observable? Are observables with discrete spectra fundamentally less real than those with continuous spectra? Is there a way to send superluminal signals, at least in principle? And so on) on which proper quantum mechanics differs from Bohmian mechanics, the empirical evidence heavily favors quantum mechanics, and Bohmian mechanics can only survive if you adjust tons of parameters to unnatural values (from the viewpoint of Bohmian-like theories) and hope that it's enough (which it usually isn't). In 2013, even more so than in 1927, the pilot-wave theory is as indefensible as a flat Earth theory, geocentrism, the phlogiston, the luminiferous aether, or creationism. In all these cases, people are led to defend such a thing because some irrational dogmas are more important for them than any amount of evidence. That's what we usually refer to as bigotry. And that's the memo.

snail feedback (24):

reader Dilaton said... I don't know why this is, but I just can't prevent my mind from thinking "Bohemian Rhapsody" whenever I see the words "Bohmian mechanics"... :-D Going to read this later; from scrolling through I see that it obviously contains a lot of nice physics I like.

reader Luboš Motl said... LOL, I live in Bohemia so I would be distracted all the time if I shared the distractions with you. ;-)

reader Peter F. said... Freakishly good song, that one. Thanks Dilaton for associating and Lumo for linking! :-)

reader lucretius said... I agree with almost everything (modulo the epithets). I have always been allergic to Bohm not so much because he was, as you say, a "media-savvy commie", but because of all the "Eastern mystical gibberish" that his ideas are associated with and which made them so cool with the hippie crowd.
In fact, last year during a trip to Santa Cruz, California, we visited an (actually quite nice) "natural food" restaurant where the walls and the menu were covered with "quantum mystical" deep thoughts that sounded like stuff out of "Wholeness and the Implicate Order". Having lived for over 20 years in Japan, I am quite familiar with Buddhist thought, and it is indeed true that one can find some curious parallels with modern physics – but not really more so than one can find in "De Rerum Natura", authored by my namesake. I would say that the similarities are about 90% coincidence and 10% due to some basic structure of human thought and logic, and, of course, to the fact that both ancient Indian (and Greek) thought and high energy physics are concerned with the same basic issue: the origin and nature of everything. But Bohm's contribution to the confusion does not endear him to my heart or mind. Neither, of course, does his being a commie – but if you really view this as a serious charge, there will be few of his contemporaries among physicists left unscathed. I am not quite sure if this has any relation to his views of physics and metaphysics. I note, however, that the strongest supporter of Bohm's views that I have met, Jean Bricmont, is a raving leftist lunatic and an associate of Chomsky (although he has to be given some credit for co-authoring "Fashionable Nonsense"). I am glad that Bohm lived in Israel for only 2 years – otherwise I would have felt compelled to like him more. I have never quite understood why physicists, at least in the early post-war years, were so much more left-wing than mathematicians. I can name quite a few strongly anti-communist mathematicians (above all John von Neumann and Stanislaw Ulam – my favourite of that generation – and many others), but among physicists Teller is perhaps the only one who comes to my mind, and the situation does not seem to be very different today. Finally, it seems to me that, apart from the mysticism, the main thing that attracts people to ideas like Bohm's is the psychological difficulty many face in accepting probability as something that is part of physical reality rather than a human device invented to cope with ignorance. This is true even of mathematicians who work in probability. I have collaborated with "pure" probabilists and have got the impression that only a minority believe in randomness as a feature of the "real world", although everyone has heard that most quantum physicists claim otherwise. In fact, I find myself frequently hesitating about this issue, depending on my mood. For a mathematician, probability theory is just a branch of measure theory, and all its interesting results involve limit theorems – whose relation to the physical world appears dubious. It seems to me that many people still feel queasy about randomness in physical laws (the way Einstein felt), and this suggests that attempts to find non-probabilistic interpretations of quantum phenomena will continue to find supporters.

reader Mephisto said... I must admit that religious mysticism is something of a hobby for me. I have studied it quite a lot (Zen Buddhism – Hui Hai, Huang Po – Tibetan Buddhism, Taoism, Christian mysticism – Meister Eckhart – Ramana Maharshi, Jiddu Krishnamurti). Jiddu Krishnamurti was a personal friend of David Bohm and I believe he influenced his views a lot.
It is fair to say that Krishnamurti was never interested in physics – he was interested in human consciousness – so the holomovement is the sole creation of Bohm himself. Although I would describe myself as a mystic, I am against mixing mysticism with physics. I read the book by Fritjof Capra (The Tao of Physics) and disliked it. Modern variants involve various kinds of Akashic fields and stuff like that. It is impossible and unwise to mix religion with science. For mystics and pantheists like me, science is just a part of divine reality.

reader Justin Glick said... You seem to think that a particle has a precise position at all times, but we just don't know what it is. QM does not say this.

reader serene deputy said... I think Lubos just wanted to say that you are still describing the same system – say, one structureless point-like particle, not a field or a ghost (otherwise your starting Hamiltonian and Schrödinger equation would have changed) – but whose position becomes fundamentally undetermined to some extent.

reader NumCracker said... Dear Lubos, excuse this off-topic comment: would there be a way to experimentally test other interpretations (not formulations) of QM and QFT, such as the Many-Worlds one? Has this ever been done? Thanks

reader Dilaton said... Yeah, it would be fun if Lumo could adapt the whole song text to this TRF article... :-D

reader Stephen Paul King said... QM, IMHO, demands that Nature does not have a preferred observable.

reader Diana Z. said... Yay, finally a good explanation without too much technical stuff. I also have an OT request. Can you explain, in this same understandable style, why time goes slower for objects closer to the source of gravity? I looked, and I couldn't find a proper explanation. I hate it when something starts out promising and then you see several pages of formulas that will give anyone a headache – with an added "it all stems from relativity, go read it" at the end. Ugh! It almost makes me believe the authors themselves don't have a clue.

reader Luboš Motl said... Right, lucretius. I remember the story from Feynman's book that said, among other things, that some promoters of paranormal phenomena convinced a professor – David Bohm – that they had supernatural abilities...

reader Luboš Motl said... Dear NumCracker, there doesn't exist any specific enough formulation of MWI that would give predictions differing from proper QM (at least in principle) – except for versions that are immediately ruled out even by the simplest experiments. Just as I said about Bohmian mechanics, the best thing the MWI proponents may hope for – and it's just a hope, and an unjustified one – is that they reproduce the predictions of QM exactly. And they're extremely far from it. But everyone knows that QM with a proper interpretation gives predictions for pretty much everything, and pretty much every physicist knows that they're correct, so to "emulate them exactly" is the ultimate dream of any other "interpretation". An unachievable dream. So MWI isn't a real theory that could be used to actually do active physics. It's a philosophical declaration that some physicists sometimes endorse at the level of words, even though they don't exactly know what this theory is supposed to say.

reader Luboš Motl said... One doesn't need to "demand" it. Nature obliged well before quantum mechanics – and humans – were born. The chronology is exactly the opposite of what you suggest.
Nature created a world with many observables, none of which is "preferred", and people constructed theories that were selected by the demand that they agree with Nature. All modern, post-1925 fundamental theories of Nature are required to agree with her at the level of the atomic details, i.e. to respect the general rules of quantum mechanics, which also means that they have to agree with the fact that Nature doesn't have preferred observables.

reader lucretius said... This thread made me try to think of any Western physicist of whom I knew that he was definitely not left-wing, and then I remembered the following. The first time I learned about the violations of Bell's inequalities and their implications was in 1981, by reading an article by Bernard D'Espagnat in "Encounter". I just searched the web and found it [link]. I have not read it since it first appeared, that is, for over 30 years (it will be interesting to see how current it sounds today), but I always remembered its main point: namely, that the results of experiments showed that the concept of "independent reality" had to be abandoned. Trying to understand this induced me to learn some fairly technical physics, which I had not been interested in before. I am not sure how many people reading this are old enough to grasp the significance of such an article appearing in "Encounter". "Encounter" was then Europe's leading intellectual publication, whose raison d'être was anti-communism. In fact, it was founded by the Paris-based Congress for Cultural Freedom, a center-left organization dedicated to opposing communist influence in the West. Its founders were the poet Stephen Spender and the "father" of American neo-conservatism, Irving Kristol. Among its leading contributors were major cultural figures such as Raymond Aron, Ignazio Silone, Arthur Koestler and lots of others. There is a good account of all of it on Wikipedia. (I used to be a subscriber and still have all my old copies.) I don't know anything about D'Espagnat's politics, but the fact that he chose to publish that article in "Encounter" means that at least he was definitely not a communist or a "fellow traveller". In 1967 it had been discovered that the CIA had been secretly funding the magazine, which of course made it a taboo for anyone on the left. I suspect that publishing in "Encounter" must have ruined D'Espagnat's reputation among leftist physicists. This article was, if I remember correctly, the only article on physics ever to appear in "Encounter" – which shows how much importance was attached to this matter then. It was followed by a polemic between D'Espagnat and the well-known conservative philosopher Antony Flew. Flew was a clever man but, like many laymen, refused to accept that the idea of the "non-existence of independent reality" made any sense.

reader Florin Moldoveanu said... All known QM interpretations are faulty, and the only correct interpretation can come from the project to reconstruct QM from natural principles [link]. QM also goes beyond the usual C*-algebraic formulation into the non-commutative geometry formulation of the SM. QM and classical mechanics are distinct "fixed points" in a category theory formulation, and any attempt to derive one from the other is a fool's errand. Any non-unitary time evolution of QM (e.g. the collapse postulate) is incompatible with QM's framework, but MWI is not the answer.
The answer is much more subtle and mathematically sophisticated, but for all practical purposes the collapse postulate does the job (using the collapse postulate is like using \(ict\) in relativity and ignoring additional mathematical structures – see [links] for the beginning of the answer). The wavefunction is neither ontological nor epistemological in the usual sense.

reader Dimension10 said... Great post... I am quite confused about why Bohmian mechanics is often listed as an "interpretation" of QM, such as here: [link]. (That too, for that document, the authors are like Becker, Styer, etc.) As you say in the post, Bohmian mechanics can't describe all the QM phenomena exactly, so I don't see how it can be listed as an "interpretation" of quantum mechanics. In my opinion, it should be listed as... an "alternative" non-mainstream theory to QM, like how MOND is a non-mainstream alternative to NG. Also, finally, an off-topic question: how do you get inline MathJax on your post? The MathJax CDN doesn't seem to allow $...$ and ##...## but only $$...$$, which results in display math. Do you use [itex] ... [/itex] or something like that?

reader Luboš Motl said... Dear Dimension10, exactly, Bohmian mechanics is an alternative theory – or wishful thinking about the existence of an alternative theory, if we want to go beyond the known toy examples – so it's demagogic to sell it as an "interpretation" of QM. The sequences to write TeX via MathJax are \(E=mc^2\) and\[ E = mc^2 \] but they don't work in DISQUS. MathJax allows you to define other sequences that start the displayed and inline math modes, including $...$ and $$...$$. The latter actually does work here as well, but I didn't allow the single dollar because I sometimes use the character as a unit of money.

reader Dimension10 said... I just realised you have the "listen" feature enabled. It pronounces the equations very nicely : )

reader Lisa Korf said... Hi Luboš, I am curious to know how you would reconcile weak-measurement trajectories, as in [link] (illustration of measured average trajectories from the article here: [link]), with your suggestion that "If you could actually 'measure' this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete", since these, although averaged and still interpolated, are not arbitrary either, and an interference pattern can still be observed. Thanks. LK

reader Luboš Motl said... Dear Dr Korf, thanks for your question, which however shows that you are confused about the status of various claims and patterns here. The picture you included is pretty much the very same picture that I included in this very blog post, and it wasn't measured. The trajectory of a quantum particle can't be measured without affecting it. The copy of the picture in the Science magazine is a result of a "weak measurement", but a "weak measurement" isn't a measurement. What is critical is that a weak measurement doesn't measure a property of the measured object/system only. Instead, it determines some function of the properties of the system and of the numerous conventions and choices that were used to define the particular weak-measurement procedure. See e.g. [link]. So the weak measurement isn't unique in any way, and the trajectories in the picture were obtained with one particular prescription for a weak-measurement protocol. One could get pretty much any other permutation of the dots – any other picture like that with the same density of lines in each region – if we were drawing these pictures using other weak-measurement protocols.
There is nothing physical about the randomly drawn trajectories - any choice is just a convention, and all conventions are equally physical at the end.

reader lucretius said...

These sorts of things can be produced "ad infinitum", so debunking them one by one is no more productive a way to spend time than trying to work out precisely what is wrong with this:

reader Scott Lahti said...

I thank you for your kind words regarding my 20x expansion, from late 2012, of the Wikipedia entry for Encounter magazine, one of whose seeds and two of whose results appear below:
Coherent states

In physics, specifically in quantum mechanics, a coherent state is the specific quantum state of the quantum harmonic oscillator, often described as a state whose dynamics most closely resemble the oscillatory behavior of a classical harmonic oscillator. It was the first example of quantum dynamics, derived by Erwin Schrödinger in 1926 while searching for solutions of the Schrödinger equation that satisfy the correspondence principle.[1] The quantum harmonic oscillator, and hence the coherent states, arise in the quantum theory of a wide range of physical systems.[2] For instance, a coherent state describes the oscillating motion of a particle confined in a quadratic potential well (for an early reference, see e.g. Schiff's textbook[3]). These states, expressed as eigenvectors of the lowering operator and forming an overcomplete family, were introduced in the early papers of John R. Klauder.[4] In the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories, coherent states were introduced by the work of Roy J. Glauber in 1963. The coherent state describes a state of the system in which the ground-state wavepacket is displaced from the origin; it can be related to the classical solution of a particle oscillating with an amplitude equivalent to the displacement. However, the concept of coherent states has been considerably abstracted; it has become a major topic in mathematical physics and in applied mathematics, with applications ranging from quantization to signal processing and image processing (see Coherent states in mathematical physics). For this reason, the coherent states associated to the quantum harmonic oscillator are sometimes referred to as canonical coherent states (CCS), standard coherent states, Gaussian states, or oscillator states.

Coherent states in quantum optics

Figure 1: The electric field, measured by optical homodyne detection, as a function of phase for three coherent states emitted by a Nd:YAG laser. The amount of quantum noise in the electric field is completely independent of the phase. As the field strength, i.e. the oscillation amplitude α of the coherent state, is increased, the quantum noise or uncertainty stays constant at 1/2, and so becomes less and less significant. In the limit of large field the state becomes a good approximation of a noiseless stable classical wave. The average photon numbers of the three states from bottom to top are ⟨n⟩ = 4.2, 25.2, 924.5.[5]

Figure 2: The oscillating wave packet corresponding to the second coherent state depicted in Figure 1. At each phase of the light field, the distribution is a Gaussian of constant width.

Figure 3: Wigner function of the coherent state depicted in Figure 2. The distribution is centered on the state's amplitude α and is symmetric around this point. The ripples are due to experimental errors.

In quantum optics the coherent state refers to a state of the quantized electromagnetic field, etc.[2][6][7] that describes a maximal kind of coherence and a classical kind of behavior.
Erwin Schrödinger derived it as a "minimum uncertainty" Gaussian wavepacket in 1926, while searching for solutions of the Schrödinger equation that satisfy the correspondence principle.[1] It is a minimum uncertainty state, with the single free parameter chosen to make the relative dispersion (standard deviation in natural dimensionless units) equal for position and momentum, each being equally small at high energy. Further, in contrast to the energy eigenstates of the system, the time evolution of a coherent state is concentrated along the classical trajectories. The quantum linear harmonic oscillator, and hence coherent states, arise in the quantum theory of a wide range of physical systems. They occur in the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories. While minimum uncertainty Gaussian wave-packets had been well known, they did not attract full attention until Roy J. Glauber, in 1963, provided a complete quantum-theoretic description of coherence in the electromagnetic field.[8] In this respect, the concurrent contribution of E.C.G. Sudarshan should not be omitted[9] (there is, however, a note in Glauber's paper that reads: "Uses of these states as generating functions for the n-quantum states have, however, been made by J. Schwinger"[10]). Glauber was prompted to do this to provide a description of the Hanbury-Brown & Twiss experiment, which generated very wide baseline (hundreds or thousands of miles) interference patterns that could be used to determine stellar diameters. This opened the door to a much more comprehensive understanding of coherence. (For more, see Quantum mechanical description.)

In classical optics, light is thought of as electromagnetic waves radiating from a source. Often, coherent laser light is thought of as light that is emitted by many such sources that are in phase. Actually, the picture of one photon being in phase with another is not valid in quantum theory. Laser radiation is produced in a resonant cavity where the resonant frequency of the cavity is the same as the frequency associated with the atomic electron transitions providing energy flow into the field. As energy in the resonant mode builds up, the probability for stimulated emission, in that mode only, increases. That is a positive feedback loop in which the amplitude in the resonant mode increases exponentially until some non-linear effects limit it. As a counter-example, a light bulb radiates light into a continuum of modes, and there is nothing that selects any one mode over the other. The emission process is highly random in space and time (see thermal light). In a laser, however, light is emitted into a resonant mode, and that mode is highly coherent. Thus, laser light is idealized as a coherent state. (Classically we describe such a state by an electric field oscillating as a stable wave. See Fig. 1.)

The energy eigenstates of the linear harmonic oscillator (e.g., masses on springs, lattice vibrations in a solid, vibrational motions of nuclei in molecules, or oscillations in the electromagnetic field) are fixed-number quantum states. The Fock state (e.g. a single photon) is the most particle-like state; it has a fixed number of particles, and phase is indeterminate. A coherent state distributes its quantum-mechanical uncertainty equally between the canonically conjugate coordinates, position and momentum, and the relative uncertainties in phase [defined heuristically] and amplitude are roughly equal, and small at high amplitude.
Quantum mechanical definition

Mathematically, a coherent state \(|\alpha\rangle\) is defined to be the (unique) eigenstate of the annihilation operator \(\hat a\) associated to the eigenvalue \(\alpha\). Formally, this reads
\[ \hat a\,|\alpha\rangle = \alpha\,|\alpha\rangle. \]
Since \(\hat a\) is not hermitian, \(\alpha\) is, in general, a complex number. Writing
\[ \alpha = |\alpha|\,e^{i\theta}, \]
\(|\alpha|\) and \(\theta\) are called the amplitude and phase of the state \(|\alpha\rangle\). The state \(|\alpha\rangle\) is called a canonical coherent state in the literature, since there are many other types of coherent states, as can be seen in the companion article Coherent states in mathematical physics. Physically, this formula means that a coherent state remains unchanged by the annihilation of a field excitation or, say, a particle. An eigenstate of the annihilation operator has a Poissonian number distribution when expressed in a basis of energy eigenstates, as shown below. A Poisson distribution is a necessary and sufficient condition that all detections are statistically independent. Compare this to a single-particle state (the \(|1\rangle\) Fock state): once one particle is detected, there is zero probability of detecting another.

The derivation of this will make use of dimensionless operators, X and P, normally called field quadratures in quantum optics. (See Nondimensionalization.) These operators are related to the position and momentum operators of a mass m on a spring with constant k:
\[ X = \sqrt{\frac{m\omega}{2\hbar}}\,\hat x, \qquad P = \frac{1}{\sqrt{2m\hbar\omega}}\,\hat p, \qquad \omega = \sqrt{k/m}. \]

Figure 4: The probability of detecting n photons, the photon number distribution, of the coherent state in Figure 3. As is necessary for a Poissonian distribution, the mean photon number is equal to the variance of the photon number distribution. Bars refer to theory, dots to experimental values.

For an optical field, X and P are (up to dimensional factors) the real and imaginary components of the mode of the electric field. With these (dimensionless!) operators, the Hamiltonian of either system becomes
\[ \hat H = \hbar\omega\left(X^2 + P^2\right). \]
Erwin Schrödinger was searching for the most classical-like states when he first introduced minimum uncertainty Gaussian wave-packets. The quantum state of the harmonic oscillator that minimizes the uncertainty relation, with uncertainty equally distributed between X and P, satisfies the equation
\[ \left(X - \langle X\rangle\right)|\alpha\rangle = -i\left(P - \langle P\rangle\right)|\alpha\rangle, \]
or equivalently
\[ \left(X + iP\right)|\alpha\rangle = \left\langle X + iP \right\rangle |\alpha\rangle, \]
and hence
\[ \langle\alpha|\left(X - \langle X\rangle\right)^2 + \left(P - \langle P\rangle\right)^2|\alpha\rangle = \tfrac{1}{2}. \]
Thus, given \((\Delta X - \Delta P)^2 \geq 0\), Schrödinger found that the minimum uncertainty states for the linear harmonic oscillator are the eigenstates of \(X + iP\). Since \(\hat a = X + iP\), this is recognizable as a coherent state in the sense of the above definition.

Using the notation for multi-photon states, Glauber characterized the state of complete coherence to all orders in the electromagnetic field to be the eigenstate of the annihilation operator - formally, in a mathematical sense, the same state as found by Schrödinger. The name coherent state took hold after Glauber's work. If the uncertainty is minimized, but not necessarily equally balanced between X and P, the state is called a squeezed coherent state.

The coherent state's location in the complex plane (phase space) is centered at the position and momentum of a classical oscillator of the phase θ and amplitude |α| given by the eigenvalue α (or the same complex electric field value for an electromagnetic wave). As shown in Figure 5, the uncertainty, equally spread in all directions, is represented by a disk with diameter 1/2. As the phase varies, the coherent state circles around the origin and the disk neither distorts nor spreads. This is the most similar a quantum state can be to a single point in phase space.

Figure 5: Phase space plot of a coherent state. This shows that the uncertainty in a coherent state is equally distributed in all directions. The horizontal and vertical axes are the X and P quadratures of the field, respectively (see text). The red dots on the x-axis trace out the boundaries of the quantum noise in Figure 1. For more detail, see the corresponding figure of the phase space formulation.
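As a numerical illustration of the definition above (this sketch is mine, not part of the article; the truncation size and sample amplitudes are arbitrary choices), the following Python snippet builds a coherent state in a truncated Fock basis, checks that it is an eigenvector of the annihilation operator, and verifies that the quadrature uncertainties are ΔX = ΔP = 1/2 independently of α, as the "noise disk" picture suggests.

```python
import numpy as np
from math import lgamma

nmax = 60                                       # Fock-space truncation (assumption)
n = np.arange(nmax)
logfact = np.array([lgamma(k + 1) for k in n])  # log(n!)

def coh(alpha):
    """Fock-basis coefficients of |alpha>: e^{-|a|^2/2} a^n / sqrt(n!)."""
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(np.exp(logfact))

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
X = (a + a.conj().T) / 2                        # quadratures, so that a = X + iP
P = (a - a.conj().T) / (2j)

for alpha in (0.5, 2.0 + 1.0j, 4.0j):           # arbitrary sample amplitudes
    psi = coh(alpha)
    # defining property a|alpha> = alpha|alpha>, exact up to truncation error
    assert np.allclose(a @ psi, alpha * psi, atol=1e-8)
    dX = np.sqrt((psi.conj() @ X @ X @ psi).real - (psi.conj() @ X @ psi).real**2)
    dP = np.sqrt((psi.conj() @ P @ P @ psi).real - (psi.conj() @ P @ psi).real**2)
    print(round(dX, 6), round(dP, 6))           # both 0.5 for every alpha
```

The check is reliable as long as |α|² stays well below nmax, so the Poissonian tail cut off by the truncation is negligible.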
Since the uncertainty (and hence measurement noise) stays constant at 1/2 as the amplitude of the oscillation increases, the state behaves more and more like a sinusoidal wave, as shown in Figure 1. And, since the vacuum state is just the coherent state with α=0, all coherent states have the same uncertainty as the vacuum. Therefore one can interpret the quantum noise of a coherent state as being due to the vacuum fluctuations.

The notation \(|\alpha\rangle\) does not refer to a Fock state. For example, when α=1, one should not mistake \(|1\rangle\) for the single-photon Fock state, which is also denoted \(|1\rangle\) in its own notation. The expression \(|\alpha\rangle\) with α=1 represents a Poisson distribution of number states \(|n\rangle\) with a mean photon number of unity.

The formal solution of the eigenvalue equation is the vacuum state displaced to a location α in phase space, i.e., it is obtained by letting the unitary displacement operator D(α) operate on the vacuum,
\[ |\alpha\rangle = e^{\alpha \hat a^\dagger - \alpha^* \hat a}\,|0\rangle = D(\alpha)\,|0\rangle, \]
where \(\hat a = X + iP\) and \(\hat a^\dagger = X - iP\). This can be easily seen, as can virtually all results involving coherent states, using the representation of the coherent state in the basis of Fock states,
\[ |\alpha\rangle = e^{-\frac{|\alpha|^2}{2}} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\,|n\rangle, \]
where \(|n\rangle\) are energy (number) eigenvectors of the Hamiltonian
\[ \hat H = \hbar\omega\left(\hat a^\dagger \hat a + \tfrac{1}{2}\right). \]
For the corresponding Poissonian distribution, the probability of detecting n photons is
\[ P(n) = |\langle n|\alpha\rangle|^2 = e^{-\langle n\rangle}\,\frac{\langle n\rangle^n}{n!}. \]
Similarly, the average photon number in a coherent state is
\[ \langle n\rangle = \langle \hat a^\dagger \hat a\rangle = |\alpha|^2, \]
and the variance is
\[ (\Delta n)^2 = \mathrm{Var}\left(\hat a^\dagger \hat a\right) = |\alpha|^2. \]
That is, the standard deviation of the number detected goes like the square root of the number detected. So in the limit of large α, these detection statistics are equivalent to those of a classical stable wave.

These results apply to detection results at a single detector and thus relate to first order coherence (see degree of coherence). However, for measurements correlating detections at multiple detectors, higher-order coherence is involved (e.g., intensity correlations, second order coherence, at two detectors). Glauber's definition of quantum coherence involves nth-order correlation functions (n-th order coherence) for all n. The perfect coherent state has all n orders of correlation equal to 1 (coherent). It is perfectly coherent to all orders.

Roy J. Glauber's work was prompted by the results of Hanbury-Brown and Twiss, who produced long-range (hundreds or thousands of miles) first-order interference patterns through the use of intensity fluctuations (lack of second order coherence), with narrow band filters (partial first order coherence) at each detector. (One can imagine, over very short durations, a near-instantaneous interference pattern from the two detectors, due to the narrow band filters, that dances around randomly due to the shifting relative phase difference. With a coincidence counter, the dancing interference pattern would be stronger at times of increased intensity [common to both beams], and that pattern would be stronger than the background noise.) Almost all of optics had been concerned with first order coherence. The Hanbury-Brown and Twiss results prompted Glauber to look at higher order coherence, and he came up with a complete quantum-theoretic description of coherence to all orders in the electromagnetic field (and a quantum-theoretic description of signal-plus-noise). He coined the term coherent state and showed that coherent states are produced when a classical electric current interacts with the electromagnetic field.
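To make the displacement-operator construction concrete, here is a small sketch of mine (not from the article; nmax and α are arbitrary) that applies D(α) = exp(αâ† − α*â) to the vacuum in a truncated Fock space, compares the result with the explicit Fock expansion, and confirms the Poissonian statistics with mean = variance = |α|².

```python
import numpy as np
from math import lgamma
from scipy.linalg import expm

nmax = 60
alpha = 2.0 + 1.0j          # arbitrary complex amplitude, |alpha|^2 = 5

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
adag = a.conj().T

# Unitary displacement operator acting on the vacuum |0>
D = expm(alpha * adag - np.conjugate(alpha) * a)
vac = np.zeros(nmax, dtype=complex); vac[0] = 1.0
psi = D @ vac

# Direct Fock expansion  e^{-|alpha|^2/2} alpha^n / sqrt(n!)
n = np.arange(nmax)
logfact = np.array([lgamma(k + 1) for k in n])
psi_fock = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(np.exp(logfact))
print(np.allclose(psi, psi_fock, atol=1e-8))   # True, up to truncation error

# Photon statistics: Poissonian with mean = variance = |alpha|^2
p = np.abs(psi)**2
print(p @ n, p @ n**2 - (p @ n)**2)            # both ~5.0
```

Because D(α) is unitary, the displaced vacuum stays normalized, which is also a useful sanity check on the truncation.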
At \(\alpha \gg 1\), from Figure 5, simple geometry gives \(\Delta\theta\,|\alpha| = \tfrac12\). From this, it appears that there is a tradeoff between number uncertainty and phase uncertainty, \(\Delta\theta\,\Delta n = \tfrac12\), which is sometimes interpreted as a number-phase uncertainty relation; but this is not a formal strict uncertainty relation: there is no uniquely defined phase operator in quantum mechanics.[11][12][13][14][15][16][17][18]

The wavefunction of a coherent state

[Figure: time evolution of the probability distribution, with the quantum phase shown as color, of a coherent state with α=3.]

To find the wavefunction of the coherent state, the minimal uncertainty Schrödinger wave packet, it is easiest to start with the Heisenberg picture of the quantum harmonic oscillator for the coherent state \(|\alpha\rangle\). Note that
\[ \hat a(t) = \hat a(0)\,e^{-i\omega t}. \]
The coherent state is an eigenstate of the annihilation operator in the Heisenberg picture. It is easy to see that, in the Schrödinger picture, the same eigenvalue,
\[ \alpha(t) = \alpha\,e^{-i\omega t}, \]
occurs in the time-evolved state:
\[ \hat a\,|\alpha(t)\rangle = \alpha(t)\,|\alpha(t)\rangle. \]
In the coordinate representation resulting from operating by \(\langle x|\), this amounts to the differential equation
\[ \left( \sqrt{\frac{m\omega}{2\hbar}}\;x + \sqrt{\frac{\hbar}{2m\omega}}\;\frac{\partial}{\partial x} \right)\psi^{(\alpha)}(x,t) = \alpha(t)\,\psi^{(\alpha)}(x,t), \]
which is easily solved to yield
\[ \psi^{(\alpha)}(x,t) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{\,i\theta(t)} \exp\!\left( -\frac{m\omega}{2\hbar}\left( x - \sqrt{\frac{2\hbar}{m\omega}}\,\Re[\alpha(t)] \right)^{2} + i\sqrt{\frac{2m\omega}{\hbar}}\,\Im[\alpha(t)]\;x \right), \]
where \(\theta(t)\) is a yet undetermined phase, to be fixed by demanding that the wavefunction satisfies the Schrödinger equation. Writing the initial eigenvalue as \(\alpha(0) = |\alpha|\,e^{i\sigma}\), σ is the initial phase of the eigenvalue. The mean position and momentum of this "minimal Schrödinger wave packet" \(\psi^{(\alpha)}\) are thus oscillating just like a classical system:
\[ \langle \hat x(t)\rangle = \sqrt{\frac{2\hbar}{m\omega}}\;|\alpha|\cos(\sigma - \omega t), \qquad \langle \hat p(t)\rangle = \sqrt{2m\hbar\omega}\;|\alpha|\sin(\sigma - \omega t). \]
The probability density remains a Gaussian centered on this oscillating mean:
\[ |\psi^{(\alpha)}(x,t)|^2 = \sqrt{\frac{m\omega}{\pi\hbar}}\,\exp\!\left( -\frac{m\omega}{\hbar}\left( x - \langle \hat x(t)\rangle \right)^2 \right). \]

Mathematical features of the canonical coherent states

The canonical coherent states described so far have three properties that are mutually equivalent, since each of them completely specifies the state \(|\alpha\rangle\), namely:

1. They are eigenvectors of the annihilation operator: \(\hat a\,|\alpha\rangle = \alpha\,|\alpha\rangle\).
2. They are obtained from the vacuum by application of a unitary displacement operator: \(|\alpha\rangle = e^{\alpha\hat a^\dagger - \alpha^*\hat a}\,|0\rangle = D(\alpha)\,|0\rangle\).
3. They are states of (balanced) minimal uncertainty: \(\Delta X = \Delta P = \tfrac12\).

Each of these properties may lead to generalizations, in general different from each other (see the article Coherent states in mathematical physics for some of these). We emphasize that coherent states have mathematical features that are very different from those of a Fock state; for instance, two different coherent states are not orthogonal:
\[ \langle\beta|\alpha\rangle = e^{-\frac12\left(|\beta|^2 + |\alpha|^2\right) + \beta^*\alpha}, \qquad |\langle\beta|\alpha\rangle|^2 = e^{-|\alpha - \beta|^2} \]
(this is linked to the fact that they are eigenvectors of the non-self-adjoint annihilation operator \(\hat a\)). Thus, if the oscillator is in the quantum state \(|\alpha\rangle\), it is also with nonzero probability in the other quantum state \(|\beta\rangle\) (but the farther apart the states are situated in phase space, the lower the probability is). However, since they obey a closure relation, any state can be decomposed on the set of coherent states. They hence form an overcomplete basis, in which one can diagonally decompose any state. This is the premise for the Sudarshan-Glauber P representation. This closure relation can be expressed by the resolution of the identity operator I in the vector space of quantum states:
\[ \frac{1}{\pi}\int |\alpha\rangle\langle\alpha|\;d^2\alpha = I, \qquad d^2\alpha \equiv d\,\Re(\alpha)\;d\,\Im(\alpha). \]
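The closure relation can also be checked numerically. The sketch below is an illustration of mine, not from the article; the truncation nmax, the radial cutoff R, and the grid sizes are arbitrary choices, and the residual they leave is printed rather than asserted.

```python
import numpy as np
from math import lgamma

nmax = 15                      # Fock-space truncation (assumption)
R, NR, NPH = 6.0, 200, 64      # radial cutoff and polar-grid sizes (assumptions)

n = np.arange(nmax)
logfact = np.array([lgamma(k + 1) for k in n])

def coh(alpha):
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(np.exp(logfact))

# Midpoint-rule approximation of (1/pi) \int |a><a| d^2a, with d^2a = r dr dphi
I_approx = np.zeros((nmax, nmax), dtype=complex)
dr, dphi = R / NR, 2 * np.pi / NPH
for r in (np.arange(NR) + 0.5) * dr:
    for phi in np.arange(NPH) * dphi:
        c = coh(r * np.exp(1j * phi))
        I_approx += r * np.outer(c, c.conj())
I_approx *= dr * dphi / np.pi

# Deviation from the identity on the truncated space: small (~1e-4 here),
# limited by the radial cutoff R and by the grid resolution.
print(np.max(np.abs(I_approx - np.eye(nmax))))
```

The equally spaced angular grid integrates the phases \(e^{i(n-m)\phi}\) exactly as long as NPH exceeds the largest \(|n-m|\), which is why the off-diagonal elements vanish to high accuracy even on a modest grid.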
Another peculiarity is that \(\hat a^\dagger\) has no eigenket (while \(\hat a\) has no eigenbra). The following equality is the closest formal substitute, and turns out to be useful for technical computations:
\[ \hat a^{\dagger}\,|\alpha\rangle = \left( \frac{\partial}{\partial\alpha} + \frac{\alpha^*}{2} \right)|\alpha\rangle. \]
The state \(\hat a^\dagger|\alpha\rangle\) is known as an "Agarwal state" or photon-added coherent state; the n-fold photon-added state is denoted \(\hat a^{\dagger n}|\alpha\rangle\). Normalized Agarwal states of order n can be expressed as
\[ |\alpha, n\rangle = \frac{\hat a^{\dagger n}\,|\alpha\rangle}{\left\| \hat a^{\dagger n}\,|\alpha\rangle \right\|}. \]

The above resolution of the identity may be derived (restricting to one spatial dimension for simplicity) by taking matrix elements between eigenstates of position, \(\langle x|\cdots|y\rangle\), on both sides of the equation. On the right-hand side, this immediately gives δ(x−y). On the left-hand side, the same is obtained by inserting the coherent-state wavefunction \(\psi^{(\alpha)}(x) = \langle x|\alpha\rangle\) from the previous section (time is arbitrary), then integrating over \(\Im(\alpha)\) using the Fourier representation of the delta function, and then performing a Gaussian integral over \(\Re(\alpha)\). In particular, the Gaussian Schrödinger wavepacket state follows from the explicit value
\[ \langle x|\alpha\rangle = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\!\left( -\frac{m\omega}{2\hbar}\left( x - \sqrt{\frac{2\hbar}{m\omega}}\,\Re[\alpha] \right)^{2} + i\sqrt{\frac{2m\omega}{\hbar}}\,\Im[\alpha]\;x \right), \]
up to an overall phase convention.

The resolution of the identity may also be expressed in terms of particle position and momentum. For each coordinate dimension, denoting by \(|x,p\rangle\) the coherent state centered at position x and momentum p, the closure relation of coherent states reads
\[ \int \frac{dx\,dp}{2\pi\hbar}\;|x,p\rangle\langle x,p| = I. \]
This can be inserted in any quantum-mechanical expectation value, relating it to some quasi-classical phase-space integral and explaining, in particular, the origin of normalisation factors for classical partition functions, consistent with quantum mechanics.

In addition to being an exact eigenstate of annihilation operators, a coherent state is an approximate common eigenstate of particle position and momentum. Restricting to one dimension again,
\[ \hat x\,|x,p\rangle \approx x\,|x,p\rangle, \qquad \hat p\,|x,p\rangle \approx p\,|x,p\rangle. \]
The error in these approximations is measured by the uncertainties of position and momentum:
\[ \Delta x = \sqrt{\frac{\hbar}{2m\omega}}, \qquad \Delta p = \sqrt{\frac{m\hbar\omega}{2}}. \]

Thermal coherent state

A single mode thermal coherent state[19] is produced by displacing a thermal mixed state in phase space, in direct analogy to the displacement of the vacuum state in view of generating a coherent state. The density matrix of a coherent thermal state in operator representation reads
\[ \rho(\alpha,\beta) = \frac{1}{Z}\,D(\alpha)\,e^{-\beta\hbar\omega\,\hat a^\dagger\hat a}\,D^\dagger(\alpha), \]
where \(D(\alpha)\) is the displacement operator which generates the coherent state with complex amplitude α, and \(\beta = 1/(k_B T)\). The partition function is equal to
\[ Z = \mathrm{tr}\left\{ e^{-\beta\hbar\omega\,\hat a^\dagger\hat a} \right\} = \frac{1}{1 - e^{-\beta\hbar\omega}}. \]
Using the expansion of the unity operator in Fock states, \(I = \sum_n |n\rangle\langle n|\), the density operator definition can be expressed in the following form:
\[ \rho(\alpha,\beta) = \frac{1}{Z}\sum_n e^{-\beta\hbar\omega n}\,D(\alpha)\,|n\rangle\langle n|\,D^\dagger(\alpha), \]
where \(D(\alpha)|n\rangle\) stands for the displaced Fock state. We remark that if the temperature goes to zero we have
\[ \lim_{T\to 0}\rho(\alpha,\beta) = |\alpha\rangle\langle\alpha|, \]
which is the density matrix for a coherent state. The average number of photons in that state can be calculated as
\[ \langle n\rangle = \mathrm{tr}\{\rho\,\hat a^\dagger\hat a\} = |\alpha|^2 + \bar n, \qquad \bar n = \frac{1}{e^{\beta\hbar\omega} - 1}, \]
where \(\bar n\) is the average of the photon number calculated with respect to the thermal state alone; in the limit \(T \to 0\) we obtain \(\langle n\rangle \to |\alpha|^2\), which is consistent with the expression for the density matrix operator at zero temperature. Likewise, the photon number variance can be evaluated as
\[ \left(\Delta n\right)^2 = |\alpha|^2\left(1 + 2\bar n\right) + \bar n^2 + \bar n. \]
We deduce that the second moment cannot be uncoupled from the thermal and the quantum distribution moments, unlike the average value (first moment). In that sense, the photon statistics of the displaced thermal state are not described by the sum of the Poisson statistics and the Boltzmann statistics. The distribution of the initial thermal state in phase space broadens as a result of the coherent displacement.
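As a consistency check on the displaced-thermal-state moments quoted above, here is a short sketch of mine (not from the article; the amplitude, thermal occupation, and truncation are arbitrary) that builds ρ = D(α) ρ_th D†(α) in a truncated Fock basis and compares the computed mean and variance of the photon number with the closed-form expressions.

```python
import numpy as np
from scipy.linalg import expm

nmax = 80            # truncation (assumption)
alpha = 1.5          # displacement amplitude (assumption)
nbar = 0.7           # thermal occupation 1/(e^{beta hbar w} - 1) (assumption)

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
adag = a.conj().T
nop = adag @ a

# Thermal state: Boltzmann-weighted Fock mixture, then displace it
x = nbar / (1.0 + nbar)                     # = e^{-beta hbar w}
rho_th = np.diag((1 - x) * x ** np.arange(nmax))
D = expm(alpha * adag - np.conjugate(alpha) * a)
rho = D @ rho_th @ D.conj().T

mean_n = np.trace(rho @ nop).real
var_n = np.trace(rho @ nop @ nop).real - mean_n**2

print(mean_n, abs(alpha)**2 + nbar)                               # ~2.95 each
print(var_n, abs(alpha)**2 * (1 + 2*nbar) + nbar**2 + nbar)       # ~6.59 each
```

The variance check makes the point in the text concrete: the |α|²(1+2n̄) cross term is exactly the piece that prevents the statistics from being a simple sum of Poisson and Boltzmann contributions.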
Coherent states of Bose–Einstein condensates

• A Bose–Einstein condensate (BEC) is a collection of boson atoms that are all in the same quantum state. In a thermodynamic system, the ground state becomes macroscopically occupied below a critical temperature, roughly when the thermal de Broglie wavelength is longer than the interatomic spacing. Superfluidity in liquid Helium-4 is believed to be associated with Bose–Einstein condensation in an ideal gas. But 4He has strong interactions, and the liquid structure factor (a 2nd-order statistic) plays an important role. The use of a coherent state to represent the superfluid component of 4He provided a good estimate of the condensate / non-condensate fractions in superfluidity, consistent with results of slow neutron scattering.[20][21][22] Most of the special superfluid properties follow directly from the use of a coherent state to represent the superfluid component, which acts as a macroscopically occupied single-body state with well-defined amplitude and phase over the entire volume. (The superfluid component of 4He goes from zero at the transition temperature to 100% at absolute zero. But the condensate fraction is only about 6%[23] at absolute zero temperature, T=0K.)

• Early in the study of superfluidity, Penrose and Onsager proposed a metric ("order parameter") for superfluidity.[24] It was represented by a macroscopic factored component (a macroscopic eigenvalue) in the first-order reduced density matrix. Later, C. N. Yang[25] proposed a more generalized measure of macroscopic quantum coherence, called "Off-Diagonal Long-Range Order" (ODLRO),[26] that included fermion as well as boson systems. ODLRO exists whenever there is a macroscopically large factored component (eigenvalue) in a reduced density matrix of any order. Superfluidity corresponds to a large factored component in the first-order reduced density matrix. (And all higher order reduced density matrices behave similarly.) Superconductivity involves a large factored component in the 2nd-order ("Cooper electron-pair") reduced density matrix.

• The reduced density matrices used to describe macroscopic quantum coherence in superfluids are formally the same as the correlation functions used to describe orders of coherence in radiation. Both are examples of macroscopic quantum coherence. The macroscopically large coherent component, plus noise, in the electromagnetic field, as given by Glauber's description of signal-plus-noise, is formally the same as the macroscopically large superfluid component plus normal fluid component in the two-fluid model of superfluidity.

• Everyday electromagnetic radiation, such as radio and TV waves, is also an example of near coherent states (macroscopic quantum coherence). That should "give one pause" regarding the conventional demarcation between quantum and classical.

• The coherence in superfluidity should not be attributed to any subset of helium atoms; it is a kind of collective phenomenon in which all the atoms are involved (similar to Cooper-pairing in superconductivity, as indicated in the next section).

Coherent electron states in superconductivity

• Electrons are fermions, but when they pair up into Cooper pairs they act as bosons, and so can collectively form a coherent state at low temperatures. This pairing is not actually between electrons, but in the states available to the electrons moving in and out of those states.[27] Cooper pairing refers to the first model for superconductivity.[28]

• These coherent states are part of the explanation of effects such as the Quantum Hall effect in low-temperature superconducting semiconductors.
Generalizations

• According to Gilmore and Perelomov, who showed it independently, the construction of coherent states may be seen as a problem in group theory, and thus coherent states may be associated to groups different from the Heisenberg group, which leads to the canonical coherent states discussed above.[29][30][31][32] Moreover, these coherent states may be generalized to quantum groups. These topics, with references to original work, are discussed in detail in Coherent states in mathematical physics.

• In quantum field theory and string theory, a generalization of coherent states to the case of infinitely many degrees of freedom is used to define a vacuum state with a different vacuum expectation value from the original vacuum.

• In one-dimensional many-body quantum systems with fermionic degrees of freedom, low energy excited states can be approximated as coherent states of a bosonic field operator that creates particle-hole excitations. This approach is called bosonization.

• The Gaussian coherent states of nonrelativistic quantum mechanics can be generalized to relativistic coherent states of Klein-Gordon and Dirac particles.[33][34][35]

• Coherent states have also appeared in works on loop quantum gravity and in the construction of (semi)classical canonical quantum general relativity.[36][37]

References

1. E. Schrödinger, Der stetige Übergang von der Mikro- zur Makromechanik, Naturwissenschaften 14 (1926) 664-666.
2. J.R. Klauder and B. Skagerstam, Coherent States, World Scientific, Singapore, 1985.
3. L.I. Schiff, Quantum Mechanics, McGraw Hill, New York, 1955.
4. J.R. Klauder, The action option and a Feynman quantization of spinor fields in terms of ordinary c-numbers, Ann. Physics 11 (1960) 123-168.
5. G. Breitenbach, S. Schiller, and J. Mlynek, Measurement of the quantum states of squeezed light, Nature 387 (1997) 471-475.
6. W-M. Zhang, D.H. Feng, and R. Gilmore, Coherent states: Theory and some applications, Rev. Mod. Phys. 62 (1990) 867-927.
7. J-P. Gazeau, Coherent States in Quantum Physics, Wiley-VCH, Berlin, 2009.
8. R.J. Glauber, Coherent and incoherent states of radiation field, Phys. Rev. 131 (1963) 2766-2788.
9. E.C.G. Sudarshan, Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams, Phys. Rev. Lett. 10 (1963) 277-279.
10. J. Schwinger, Theory of quantized fields. III, Phys. Rev. 91 (1953) 728-740.
11. L. Susskind and J. Glogower, Quantum mechanical phase and time operator, Physics 1 (1963) 49.
12. P. Carruthers and M.N. Nieto, Phase and angle variables in quantum mechanics, Rev. Mod. Phys. 40 (1968) 411-440.
13. S.M. Barnett and D.T. Pegg, On the Hermitian optical phase operator, J. Mod. Opt. 36 (1989) 7-19.
14. P. Busch, M. Grabowski and P.J. Lahti, Who is afraid of POV measures? Unified approach to quantum phase observables, Ann. Phys. (N.Y.) 237 (1995) 1-11.
15. V.V. Dodonov, 'Nonclassical' states in quantum optics: a 'squeezed' review of the first 75 years, J. Opt. B: Quantum Semiclass. Opt. 4 (2002) R1-R33.
16. V.V. Dodonov and V.I. Man'ko (eds), Theory of Nonclassical States of Light, Taylor & Francis, London, New York, 2003.
17. A. Vourdas, Analytic representations in quantum mechanics, J. Phys. A: Math. Gen. 39 (2006) R65-R141.
19. J. Oz-Vogt, A. Mann and M. Revzen, Thermal coherent states and thermal squeezed states, J. Mod. Opt. 38 (1991) 2339-2347.
20. G.J. Hyland, G. Rowlands, and F.W. Cummings, A proposal for an experimental determination of the equilibrium condensate fraction in superfluid helium, Phys. Lett. 31A (1970) 465-466.
21. J. Mayers, The Bose–Einstein condensation, phase coherence, and two-fluid behavior in He-4, Phys. Rev. Lett. 92 (2004) 135302.
22. J. Mayers, The Bose–Einstein condensation and two-fluid behavior in He-4, Phys. Rev. B 74 (2006) 014516.
23. A.C. Olinto, Condensate fraction in superfluid He-4, Phys. Rev. B 35 (1986) 4771-4774.
24. O. Penrose and L. Onsager, Bose–Einstein condensation and liquid Helium, Phys. Rev. 104 (1956) 576-584.
25. C.N. Yang, Concept of Off-Diagonal Long-Range Order and the quantum phases of liquid He and superconductors, Rev. Mod. Phys. 34 (1962) 694-704.
26. C.N. Yang, Concept of off-diagonal long-range order and the quantum phases of liquid He and of superconductors, Rev. Mod. Phys. 34 (1962) 694.
27. John Bardeen, in: Cooperative Phenomena, eds. H. Haken and M. Wagner, Springer-Verlag, Berlin, Heidelberg, New York, 1973.
28. J. Bardeen, L.N. Cooper and J.R. Schrieffer, Phys. Rev. 108 (1957) 1175.
29. A.M. Perelomov, Coherent states for arbitrary Lie groups, Commun. Math. Phys. 26 (1972) 222-236; arXiv: math-ph/0203002.
30. A. Perelomov, Generalized coherent states and their applications, Springer, Berlin, 1986.
31. R. Gilmore, Geometry of symmetrized states, Ann. Phys. (NY) 74 (1972) 391-463.
32. R. Gilmore, On properties of coherent states, Rev. Mex. Fis. 23 (1974) 143-187.
33. G. Kaiser, Quantum Physics, Relativity, and Complex Spacetime: Towards a New Synthesis, North-Holland, Amsterdam, 1990.
34. S.T. Ali, J-P. Antoine, and J-P. Gazeau, Coherent States, Wavelets and Their Generalizations, Springer-Verlag, New York, Berlin, Heidelberg, 2000.
35. C. Anastopoulos, Generalized Coherent States for Spinning Relativistic Particles, J. Phys. A: Math. Gen. 37 (2004) 8619-8637.
36. A. Ashtekar, J. Lewandowski, D. Marolf, J. Mourão and T. Thiemann, Coherent state transforms for spaces of connections, J. Funct. Anal. 135 (1996) 519-551.
37. H. Sahlmann, T. Thiemann and O. Winkler, Coherent states for canonical quantum general relativity and the infinite tensor product extension, Nucl. Phys. B 606 (2001) 401-440.
I have seen similar posts, but I haven't seen what seems to be a clear and direct answer. Why do only a certain number of electrons occupy each shell? Why are the shells arranged at certain distances from the nucleus? Why don't electrons just collapse into the nucleus or fly away?

It seems there are lots of equations and theories that describe HOW electrons behave (Pauli exclusion principle), predictions about WHERE they may be located (Schrödinger equation, uncertainty principle), etc. But it is hard to find the WHY and/or causality behind these descriptive properties. What is it about the nucleus and the electrons that causes them to attract/repel in the form of these shells at regular intervals and numbers of electrons per shell? Thank you! Please be patient with me; I'm new to this forum and just an amateur fan of physics.

Well, the answer is "quantum mechanics" and conservation laws like conservation of angular momentum combined with quantized angular momentum. Your question is seriously too broad, though, so I can't imagine it being realistically answered without writing a whole book chapter's worth of information. – Brandon Enright Aug 2 '14 at 6:10

Physics does not answer WHY questions; the models physics has answer how, from the postulates and equations, the observations can be explained. This has been done successfully, to start with, using the Schrödinger equation and identifying its solutions with the shells very well. Why is it successful? Eventually, ask the gods. – anna v Aug 2 '14 at 7:50

Thank you all, very useful answers! In the future, I think I'll rephrase my questions to "what causes..." or "how does it work..." vs "why." I think physicists seem a bit scared or put off by "why" questions as it somehow leads to the philosophical =) – PurposeNation Aug 4 '14 at 20:52

6 Answers

Any answer based on analogies rather than mathematics is going to be misleading, so please bear this in mind when you read this. Most of us will have discovered that if you tie one end of a rope to a wall and wave the other you can get standing waves on it like this:

[figure: standing waves on a rope]

Depending on how fast you wave the end of the rope you can get half a wave (A), one wave (B), one and a half waves (C), and so on. But you can't have 3/5 of a wave or 4.4328425 waves. You can only have a half-integral number of waves. The number of waves is quantised. This is basically why electron energies in an atom are quantised. You've probably heard that electrons behave as waves as well as particles. Well, if you're trying to cram an electron into a confined space, you'll only be able to do so if the electron wavelength fits neatly into the space. This is a lot more complicated than just waving a rope, because an atom is a 3D object so you have 3D waves. However, take for example the first three $s$ wavefunctions, which are spherically symmetric, and look how they vary with distance - you get (these are for a hydrogen atom)$^1$:

[figure: the first three s wavefunctions]

Unlike the rope, the waves aren't all the same size and length, because the potential around a hydrogen atom varies with distance; however, you can see a general similarity with the first three modes of the rope. And that's basically it. Energy increases with decreasing wavelength, so the "half wave" $1s$ level has a lower energy than the "one wave" $2s$ level, and the $2s$ has a lower energy than the "one and a half wave" $3s$ level.

$^1$ The graphs are actually the electron probability distribution $P(r) = \psi\psi^*4\pi r^2$. I did try plotting the wavefunction, but it was less visually effective.
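As a supplement to this answer (my own sketch, not the answerer's), the hydrogen s-state picture in the footnote can be checked numerically: each radial probability density $P(r) = r^2 R_{n0}(r)^2$ integrates to 1, and $R_{n0}$ has $n-1$ radial nodes, matching the "half wave, one wave, one and a half waves" counting. The explicit radial functions below are the textbook hydrogen $1s$, $2s$, $3s$ forms in units of the Bohr radius.

```python
import numpy as np

# Hydrogen s-state radial functions R_{n0}(r), with r in Bohr radii (a0 = 1)
def R10(r): return 2.0 * np.exp(-r)
def R20(r): return (1.0 / np.sqrt(2.0)) * (1.0 - r / 2.0) * np.exp(-r / 2.0)
def R30(r): return (2.0 / (81.0 * np.sqrt(3.0))) * (27.0 - 18.0 * r + 2.0 * r**2) * np.exp(-r / 3.0)

r = np.linspace(1e-6, 60.0, 20000)
for n_q, R in [(1, R10), (2, R20), (3, R30)]:
    P = (R(r) * r) ** 2                                   # radial probability density
    norm = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(r)))   # trapezoid rule, ~1
    nodes = int(np.count_nonzero(np.diff(np.sign(R(r)))))       # sign changes = nodes
    print(n_q, round(norm, 4), nodes)
# 1 1.0 0
# 2 1.0 1
# 3 1.0 2
```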
This is a perfect answer at the level of the OP's question. – Ben Crowell Aug 2 '14 at 15:45

"Most of us will have discovered that if you tie one end of a rope to a wall and wave the other you can get standing waves on it" - huh. Is that a common thing for people to just do when they're bored? – user2357112 Aug 2 '14 at 16:30

@user2357112 Didn't you have a jump rope as a kid? And then one time you only have 1 friend available, so you tie one end to a fence, you wave your end around a couple of times... and funny stuff is observed. – harold Aug 2 '14 at 16:39

@harold Jump rope? My phone doesn't have that app. – zibadawa timmy Aug 2 '14 at 18:34

@harold Kids playing with Lego become engineers. Kids with a piece of rope and only 1 friend become physicists :-D But it's a great answer! – jmiserez Aug 2 '14 at 22:44

First of all, strictly speaking, electron shells (as well as atomic orbitals) do not exist in atoms with more than one electron. Such a physical model of an atom is simplified (and often oversimplified); it arises from a mathematical approximation, which physically corresponds to the situation in which electrons do not instantaneously interact with each other, but rather each and every electron interacts with the average, or mean, electric field created by all other electrons. This approximation is known as the mean field approximation, and the state (or, speaking classically, the motion) of each and every electron in this approximation is independent of the state (motion) of all other electrons in the system. Thus, the physical model which arises from this approximation is simplified, and, not surprisingly, it is often referred to as the independent electrons model. So the question of why nature works in this way does not make a lot of sense, since nature actually does not work this way - except for systems with only one electron, like, for instance, the hydrogen atom. In any case, the answer to the question of why something works in this or that way in physics is pretty simple: according to the laws of a particular physical theory, say, quantum mechanics. And I could not explain quantum mechanics to you here in just a few sentences. You need to read some books. But if your question is why nature works in this way according to quantum mechanics, i.e. why things in quantum mechanics are the way they are, then I would like to quote Paul Dirac:

[...] the main object of physical science is not the provision of pictures, but is the formulation of laws governing phenomena and the application of these laws to the discovery of new phenomena. If a picture exists, so much the better; but whether a picture exists or not is a matter of only secondary importance. In the case of atomic phenomena no picture can be expected to exist in the usual sense of the word 'picture', by which is meant a model functioning essentially on classical lines. One may, however, extend the meaning of the word 'picture' to include any way of looking at the fundamental laws which makes their self-consistency obvious. With this extension, one may gradually acquire a picture of atomic phenomena by becoming familiar with the laws of the quantum theory.

From "The Principles of Quantum Mechanics", §4.

A big part of it can be explained by combining the constraints of quantum mechanics with the geometry of angular momentum.
For the special case of the hydrogen atom, it turns out that when you solve the equations of motion for an electron near a proton, you can't give the electron any old energy. There's a set of energies that are allowed; all others are excluded. You can put these energies in order, starting from the most tightly bound, and give each one a number. This is often called the "principal quantum number," $n$, and it can be any positive integer. The binding energy of an electron in the $n$-th state is $13.6\,\mathrm{eV}/n^2$. You can also ask (again, using the mathematical tools of quantum mechanics) whether the electron can carry angular momentum. It turns out that it can, but again the amount of angular momentum it can carry comes in lumps, and again we can put the angular momentum states in order, starting with the least. Unlike with the principal quantum number, it makes sense to talk about an atom whose angular momentum is zero, so the "angular momentum quantum number" $\ell$ starts counting from zero. For a very sneaky reason, $\ell$ must be smaller than $n$. So an electron in its ground state, $n=1$, must have $\ell=0$; an electron in the first excited state $n=2$ may have $\ell=0$ or $\ell=1$; and so on. Now, once you have started to ask about angular momentum, you start to think about planets orbiting a star, and that suggests a question: what is the orientation of the orbit? Must all the electrons orbit in the same plane, like all the planets in the solar system are found roughly along the plane of the ecliptic? Or can electrons orbiting a nucleus occupy any random plane, the way that comets do? This is a question you can also address with quantum mechanics. It turns out (again) that only certain orientations are allowed, that the number of orientations allowed depends on $\ell$, and that you can put the orientations in order. For a state with $\ell=0$ there is only one orientation permitted. For a state with $\ell=1$ there are three orientations permitted; sometimes it makes sense to number them with the "angular momentum projection quantum number" $m \in \{-1,0,1\}$, and at other times it makes sense to identify them with the three axes $x,y,z$ of a coordinate system. For $\ell=2$, likewise, it sometimes makes sense to identify orientations $m \in \{-2,-1,0,1,2\}$, and at other times to identify the orientations with electrons along the axes and planes of the coordinate system. I think the chemists may even have a geometrical interpretation for the seven substates of $\ell=3$, but I'm not familiar with it. When you start to add multiple electrons to one nucleus, several things change - most notably the interaction energy, since the electrons interact with each other as well as with the nucleus. The basic picture, that each electron must carry integer angular momentum $\ell$ which may lie in any of $2\ell+1$ directions, remains unchanged. But there is one final quirk: each state with a given $n,\ell,m$ may hold no more than two electrons! We can fit this into our picture by assigning each electron a fourth quantum number $s$, called the "spin quantum number" for reasons that you should totally look up later, which can only take two values. Now we have a very simple rule: a "state" described by the four numbers $n,\ell,m,s$ can hold zero or one electrons at a time. After that preamble, have a look at a periodic table:

[figure: periodic table]

• Over on the left are two columns of highly reactive elements.
These have the outermost electron with $\ell=0$ (one value of $m$ allowed, two values of $s$).
• Over on the right are six columns of (mostly) nonmetals. These have the outermost electron with $\ell=1$ (three values of $m$ allowed, times two values of $s$).
• In the middle are ten columns of metals. These have outermost electrons with $\ell=2$ (five values of $m$ allowed, times two values of $s$).
• Appended to the bottom of the chart, because there's too much blank space on the page if they're inserted between columns two and three, are fourteen columns of lanthanides and actinides. These have outermost electrons with $\ell=3$ (seven values of $m$, times two values of $s$).
This simple model doesn't explain everything about the periodic table and electron shells. My description puts helium in the wrong spot (it's not a reactive metal because the most tightly bound electron shell is special), and the heavier metals leak over into the $\ell=1$ block. You have to do some serious modeling to understand why the $\ell=2$ electrons aren't allowed until the fourth row, rather than the third row. Protons and neutrons in the nucleus have the same sort of shell structure, but nuclear magic numbers don't always occur after the filling of an $\ell=1$ shell the way the noble gases do. But that is about the shape of things.

John Rennie gave a nice answer based on the de Broglie hypothesis; however, he didn't try the hard part: "Why do only a certain number of electrons occupy each shell?" So let me try! In quantum mechanics particles are described by wave functions. All the observable properties of a particle (like its position) are related to the square of the wave function, so its sign does not really matter. You can write a global wave function for a system of more particles. Let's consider two identical particles: the properties of the system should stay the same if they are swapped, which means that the global wave function must in principle either:
• stay the same, or
• only change sign.
If the wave function stays the same, there are no problems: the two identical particles can happily stay together and are called bosons. If the wave function changes sign then we have a problem: we cannot tell which system has the particles swapped, because they are identical, so we actually have two differently signed wave functions (which sum to null) for the same system. The solution is to not allow such a system: identical particles whose exchange leads to a change of sign of the wave function are not allowed to stay together; they are called fermions. Nature chose electrons as fermions, and so you cannot find two identical electrons in the same atom: each electron must have at least one property that distinguishes it from all the others. This is called the Pauli exclusion principle. Each energy level (determined by the closure of the wave function, as John Rennie explained) can hold a limited number of electrons, which depends on the complexity of the level. The simplest level is just a sphere and does not offer any way to distinguish two electrons, so it can just hold... two! That it holds two rather than one is a tiny complication which comes from spin: an intrinsic property that for electrons can be up or down, allowing two of them with opposite spin to stay together on the same level.
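A hedged aside of mine, not from any of the answerers: the counting rules stated above (ℓ < n, the 2ℓ+1 orientations, and two spin values per state) can be restated as a few lines of code, which reproduces the familiar shell capacities 2n².

```python
# Enumerate the allowed hydrogen-like quantum numbers (n, l, m, s)
# using the rules quoted in the answers above.
for n in range(1, 5):
    states = [(n, l, m, s)
              for l in range(n)               # l = 0 .. n-1
              for m in range(-l, l + 1)       # 2l + 1 orientations
              for s in (-0.5, 0.5)]           # two spin projections
    print(n, len(states))                     # capacities 2, 8, 18, 32 (= 2 n^2)
```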
From observation we know that only whole electrons are distributed in specific regions around hydrogen and helium nuclei, and that these regions are occupied with certain probabilities. For the description of these regions there are a lot of rules (Hund, ...) and principles (Pauli, ...) and quantum formulas, which allow more precise predictions to be made. But that cannot hide the fact that the nature of the electron shells is unknown. The way from an elementary particle - indivisible and accepted as a point - to a broad distribution in the atomic shells is unknown.

I can sniff a lot of familiarity with the "HOW" answers from what I interpret about you from your post, so I'll only focus on the objective point: "WHY". It turns out that it is possible to meaningfully describe nature by postulating that any object will tend to be in the minimum energy state possible under a given set of physical conditions. So, first we need an understanding of what these minimum energy configurations are - for which we treat the HOWs (the Schrödinger equation etc.). But once we know what they are, the question is: how do they arrange themselves within these structures? This gets a common-sense answer from the Aufbau principle, which is once again a reiteration of the same idea. But what is special about this idea becomes clear if you start considering alternatives. Suppose this wasn't the case, and we chose the stark opposite alternative - every object tended to occupy the most energetic state available (like some bouncing-ball collisions in some video games); we would then have a really tough time describing nature. For example, we wouldn't be able to explain why any system reaches an equilibrium at all, since e.g. it would be more favorable for an object, once set in motion, to keep moving towards an unbounded maximum energy. Now, infinity isn't a number by definition; it reflects an unbounded maximum, so an "inverted-scale" description, with infinity in place of 0, is horribly inappropriate. E.g. zero is unique on the number line, but the functions $x$ and $x^2$ both increase indefinitely as we increase $x$. So "unbounded from above" won't be a unique choice, and our description of nature won't be coherent. Anyway, it does so turn out from observational experience that things around us behave as if the underlying principle concerned a minimum, rather than a maximum. So our postulate seems validated by nature. Now, to specifically address the question of electronic arrangement in e.g. the Bohr atom, which makes a simple example, we introduce the quantization conditions (as you are probably aware, from your question). Imagine it this way: since the electron is attracted to the positively charged nucleus, it will tend to bump into it. However, it doesn't, because of its orbital angular momentum (let us neglect spin for the moment), which will cause it to rotate around the nucleus at some orbital radius, owing to a balance between the centripetal force and this attraction. However, while the electrostatic attraction is a continuous function of $r$, falling off as $1/r^2$, with the quantization condition the angular momentum can't vary continuously; it grows in discrete units. Thus, the balance condition now implies that not all $r$ are allowed. You reach a balance at some fixed values of $r$, which define for you the shell locations. (Of course the same condition also gives you the permissible energies.)
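To make the argument just given quantitative (my own sketch; the answer itself quotes no numbers): combining the force-balance condition $mv^2/r = ke^2/r^2$ with the quantization condition $mvr = n\hbar$ gives $r_n = n^2\hbar^2/(mke^2) = n^2 a_0$ and $E_n = -13.6\,\mathrm{eV}/n^2$, which the snippet below evaluates from the physical constants.

```python
# Bohr-model radii and energies from the balance + quantization conditions:
#   m v^2 / r = k e^2 / r^2   and   m v r = n hbar
hbar = 1.054571817e-34   # J s
me   = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
k    = 8.9875517923e9    # N m^2 / C^2 (Coulomb constant)

a0 = hbar**2 / (me * k * e**2)           # Bohr radius, ~5.29e-11 m
for n in (1, 2, 3):
    r_n = n**2 * a0                      # allowed radii
    E_n = -k * e**2 / (2 * r_n) / e      # total energy, converted to eV
    print(n, r_n, E_n)                   # E_n ~ -13.6, -3.40, -1.51 eV
```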
Now, here's the point (and here's how it relates to my -2 voted first paragraph): once you know the permitted energies, you need some filling-up principle, and it is most convenient to fill them up using our guiding principle of lowest energy first, rather than the other way round. If we filled them the other way round, we could never explain why there should be a hydrogen atom at all, since the very first electron would be sitting infinitely far away from the nucleus and would behave like an ionized, free electron.

Thank you. Voted this up, purely because you focused on the "why", thank you. Always appreciate non-traditional ways to look at a question. – PurposeNation Aug 4 '14 at 20:59

As soon as you add a second electron the energies of the allowed states change. Helium-like atoms and ions (those with 2 electrons present) are not hydrogen-like atoms with an extra electron present: the energies, RMS radii, etc. of the shells all change. Adding still more electrons changes it some more. – dmckee Sep 7 '14 at 23:42

@dmckee - Irrespective of how (much) they change, you still fill them up according to lowest energy first. If I remember correctly what I learnt during UG, that's precisely the reason why in some particular case $4s$ got filled before $3d$. If you disagree, show me a counterexample (i.e. an instance where the higher energy state got occupied before a lower energy AVAILABLE one). – New_new_newbie Sep 8 '14 at 4:04

@dmckee - I stress the "available" part - don't give me an example where an otherwise lower energy state got filled later because some conservation law or selection rule obstructed it. Obviously those are different cases. – New_new_newbie Sep 8 '14 at 4:07
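As a footnote of mine to the 4s-versus-3d point raised in the comments (the thread itself does not name the rule): the empirical Madelung, or n+ℓ, rule orders subshells by increasing n+ℓ, with ties broken by smaller n, and is one common bookkeeping summary of "lowest available energy first".

```python
# Madelung (n + l) rule: fill subshells in order of increasing n + l,
# breaking ties with smaller n - hence 4s (n+l=4) fills before 3d (n+l=5).
subshells = [(n, l) for n in range(1, 6) for l in range(n)]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
names = "spdfg"
print(" ".join(f"{n}{names[l]}" for n, l in order))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p ...
```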
Acceptance Test.

Advanced Technology. Yesterday's AT is tomorrow's joke. You might gaze upon my works and despair. IBM's PC/AT is vintage 1984.

A.T., AT German, Altes Testament. English, `Old Testament' (O.T.).

Anthropology Today. A journal published by Blackwell on behalf of the Royal Anthropological Institute of Great Britain and Ireland. The sister publication of at is JRAI.

Antiquité Tardive (French, `Late Antiquity'). Published by l'Association pour l'Antiquité Tardive, it ``aims at enriching the study of written texts from the fourth to the seventh centuries by setting these into a wider context using a multidisciplinary approach covering history, archaeology, epigraphy, law and philology.'' Did I just read the word ``enriching''? Indeed I did. I also just read that the one issue per year costs 62 euros. At those prices it better have a centerfold, and she had better not be an antique.

Astatine, at atomic number 85 the heaviest known halogen. Learn more at its entry in WebElements and its entry at Chemicool.

German, Atmosphäre. English, `atmosphere.'

ATtention. First code in a command-set protocol defined by Hayes for its modems; it became the industry standard.

(Domain code for) Austria. But for 1866 and 1945, this would be Germany (.de). The US government's Country Studies website has a page of links (``Austria Country Studies'') amounting to the online version of its Austria book. Ariadne, ``The European and Mediterranean link resource for Research, Science and Culture,'' has a page of national links. There's an official government site (also in English). Rec.Travel offers some links. Telephone numbers for international direct dialing to Austria begin with 43.

Academic Theme Associate. University staff responsible for advancing the designated academic theme of a house (university residence). Cf. ETA, FA.

Actual Time of Arrival (of a flight or of a transport vehicle). In contrast with ETA.

Advanced Technology Attachment. A standard for interfacing disk drives. Nothing more than the name used by ANSI group X3T10 for Integrated Drive Electronics (IDE).

Air Transport Association. A trade group representing commercial airlines.

All-American Twirling Academy. ``The ATA All-Stars are located in Gainesville and Lake City, Florida. Group and private lessons are offered for age 4 through high school at all skill levels.''

American Teachers Association. Founded at Nashville, Tennessee, in 1904, on the initiative of John Robert Edward Lee of the Tuskegee Institute, as the National Association of Colored Teachers. The name was changed in 1907 to the National Association of Teachers in Colored Schools, to better reflect the target membership. The name was changed to American Teachers Association in 1937. In 1966, the ATA merged with the NEA. With luck, this page of ATA history won't be history itself at the end of February.

American Tinnitus Association.

American Trans Air. A commercial airline. In my experience flying from South Bend, Indiana, to the coasts, ATA offers the best last-minute deals through their hubs in Detroit and Chicago (Midway). A lot of people wonder how it ended up with the not-very-mnemonic carrier code TZ. The answer is that by the time ATA got into the business (1973), all the more appropriate two-letter codes (AT, TA, TR) were taken. Getting into the business just before deregulation, ATA is sort of a 'tween company: it doesn't have the high costs of the old-line major passenger airlines, but it also doesn't have the low costs of a Southwest or JetBlue.
They also don't have the name recognition of the majors. Around 2002, I encountered a travel agent at AAA in New Jersey who had never heard of it. After we finished booking on ATA, he had the cojones to tell us cheerfully that we saved 1,800 or whatever dollars -- sure, no thanks to him. ATA was the tenth-largest US carrier in 2004, ranking by passenger miles. I think ATA needs to invest in more advertising. In late October 2004 they filed for bankruptcy. Also, they're now ``ATA Airlines.'' This is supposed not to be pleonastic because ATA is no longer an acronym, just a name -- sort of a decorative collection of letters, like Kodak, but pronounced ``ayteeay.'' It's as if they had a little switch attached to the language, which turns the significance of an established usage off when flipped and prevents their name from having an expansion that ends in ``Air Airlines.'' At least they didn't claim ATA now stands for the word father translated into TURKISH. One can sympathize with the company's name problems: air and trans are as vanilla as airline word names get (as also American, in the US), and the lack of a distinctive name is probably part of their visibility problem. Indeed, as part of their bankruptcy restructuring, they were originally expecting to sell most of their main hub facilities at Midway to AirTran Airways, a low-cost carrier founded in 1993. Eventually, Southwest won the bidding war, in an agreement to buy the lease rights to six gates at Midway. The agreement involves some cash, transfer of a hangar at Midway, and very significantly a code-share agreement, the first for both ATA and Southwest. ATA will make Indianapolis, previously a secondary hub, the new center of its operations.

American Translators Association. Cf. ALTA (L is for Literary).

American Trucking Associations [sic, plural], Inc. A national trade association. Their Management Systems Council (MSC) has a web page. The other large trucking-industry trade association is the TCA.

`Father,' in various Central Asian languages. Cf. atta. The father of modern Turkey was given the single name Mustafa at birth (1881, in Salonica). A mathematics teacher bestowed the name Kemal (`perfection') on him, and it was as ``Mustafa Kemal'' that he entered a military academy in 1895. After his graduation as a lieutenant in 1905 he was posted to Damascus, where he formed a secret society of anti-royalist (i.e., anti-Ottoman), reform-minded officers called Vatan (`Fatherland'). Other stuff happened that is not relevant to this entry. Let's just say that Mustafa Kemal was to Turkey everything Charles de Gaulle could have wanted to be for France. In 1934, he promulgated a law requiring all Turks to adopt surnames, and the Grand National Assembly gave him the surname of Atatürk, `father of Turks.'

Alma-Ata (now ``Almaty,'' grumble grumble) is the largest city in Kazakhstan. The name means `father of apples.'

`You,' in Hebrew (stress as usual on the final syllable).

Assembly of Turkish American Associations.

The nipa palm tree. It grows throughout the Scrabble forest.

AT Attachment Packet Interface. Similar to SCSI. (Cf. ATA supra.)

1. n. `target' 2. interj. `on target, dead on, that's right.' Also, there's a brand of orphan computers called Atari. At least there's an FAQ for the eight-bit machines, from the <comp.sys.atari.8bit> newsgroup. We also serve a little bit on the operating system.

Academy of Television Arts and Sciences. What's wrong with this picture? ATAS was founded in 1946 and is based in the Los Angeles area.
It presents the annual prime time Emmy awards, offers other events in its LA headquarters, and publishes Emmy magazine. The similarly named National Academy of Television Arts and Sciences (NATAS) is a distinct organization based in New York. Oddly enough, NATAS is a national organization, with chapters around the US (20, as of 2004). NATAS handles the Daytime, US News, and Documentary Emmys. Sports is subsumed in one or more of those categories. NATAS chapters handle Regional Emmy Awards. Enough! PLEASE! What do you think this is, some kind of general reference encyclopedic dictionary? We're just interested in acronyms (and initialisms and abbreviations and some necessary related explanatory entries). All I ever wanted to know was, did ``Emmy'' originally stand for M.E.? (Cf. emcee.) Ah! I found an answer. (No, I'm not going to tell you here. That wouldn't be efficient. You have to follow the link.) The NYC-based NATAS has a regional chapter based in NYC: NY-NATAS. ATAS, in addition to being a ``sister organization'' to NATAS, also serves as one of its regional chapters. This begins to sound like incest. Buy the rights, it could be a hit. There's also an IATAS, which awards International Emmys (iEmmys). IATAS is a division of NATAS. It may be possible to draw the organization chart in two dimensions, but it can't be a good idea.

All-Terrain Bicycle. A less common synonym of MTB.

ATB, atb All The Best. Chatese, texting abbreviation.

Anti-Theater Ballistic Missile. A few are still kept targeted at Broadway, although that is no longer considered a serious threat (vide ATW). People have been saying for over fifty years that Broadway is chatting with death's valet. People have probably been right, but musicals still animate the body. ATBM can also be synonymously expanded as Anti-Tactical Ballistic Missile. Again, as with ABM, confusion arises from the fact that hyphenation is not explicitly nested: ATBM is anti the TBM. These are not ballistic missiles directed against tactics, except insofar as those tactics take the form of the firing of tactical ballistic missiles. Evidently, the end of the cold war has had collateral linguistic benefits.

AT Bus. Variant name for the ISA bus.

Accelerated Thermal Cycling.

Address Translation Cache.

Air Traffic Control. Productive prefix (ATCA, ATCAA, ATCBI, ATCRBS, ATCS, ATCSCC, ATCT).

``All Things Considered.'' National Public Radio program that needs some new theme music.

Anatomical Therapeutic Chemical.

Arquitectura y Tecnología de Computadores (Spanish, `Computer Architecture and Technology'). [In Spanish, there has been a major struggle between ``Computador'' and ``Ordenador.'' (The latter follows French usage.) An important reason to avoid using ``computador'' is that a verb form naturally associated with it is ``computa.'' In Gulliver's Travels, Jonathan Swift has some fun with something similar, turning over in his thought various unsatisfactory alternative etymologies of ``Laputa,'' the name of the floating island. It is less important to know that in the Italian dub of ``Last Tango in Paris'' (``Ultimo Tango a Parigi'') Marlon Brando's character calls Maria Schneider's character ``putana,'' but we tell you anyway.] If you're confused, read through to the end of the Pav entry. (To save time, you can start at the beginning of the Pav entry. To save frustration, wait until I publish the entry.) According to the Computer Spanglish Diccionario, a useful resource served by Yolanda M. Rivas, ordenador is seldom used.

Audio TeleConferenc{ing|e}.

Australian Transputer Centre.

Automat{ ic | ed } Traction Control.
Automatic Train Control. Average Total Cost. Azienda Trasporti Consorziali. Air Traffic Control Association. Air Traffic Control Assigned Airspace. Air Traffic Control Beacon Interrogator. The Association of [the] Thai Computer Industry. Antarctic Treaty Consultative Meeting. Air Traffic Controller. Air Traffic Control Radar Beacon System. Advanced Train Control Systems. Air Traffic Control {Specialist | System}. Air Traffic Control Systems Command Center. Operational since 1994. Air Traffic Control Tower. Air Transport Division of the Transport Workers Union (TWU). It represents airline mechanics and ground crew. Advanced Technical Demonstration. Automatic Thermal Desorption (TD). Address Transition Detection Circuit. aTDC, ATDC, atdc After Top Dead Center. See TDC. Asynchronous Time-Division Multiplexing. ATtention Dial Tone. Hayes modem AT command. Advanced Technological Education. A joint program of the Divisions of Undergraduate Education and of Elementary, Secondary, and Informal Education of the NSF, ``promotes exemplary improvement in advanced technological education ....'' Automat{ ic | ed } Test Equipment. For external circuit testing. Cost in the megabuck range. See, for example, Teradyne's Semiconductor Test Division. (Bureau of) Alcohol, Tobacco and Firearms of the U.S. Department of the Treasury. The ``revenooers.'' More at BATF. Australian Track & Field Coaches Association. Albert The Great. Albertus Magnus. There's an unintentionally funny site hawking his out-of-copyright-by-now works, at <http://www.AlbertTheGreat.Com/>. (``Please make payment in advance to receive over 40 volumes of truth'' from ``First Floor Rear'' somewhere in Pennsylvania.) Albertus Magnus, a Dominican priest (OP), died in 1280; he was canonized and declared a doctor of the Roman Catholic Church some time later (1931). In 1941, Pope Pius XII declared him the patron of all those who devote themselves to the natural sciences. Alexander The Great. Automatic Test Generation. [Football icon] ATH, ath, Ath. ATHlete. A symbol or abbreviation used in lieu of a specific football position. Read the athlete entry below and you'll know at least as much as I do on the subject. Advanced THermal Analysis System. A lot more people would be atheists if they didn't think that God would disapprove. A town in Alabama (home of Athens State University, founded in 1822, and... it's a county seat!), Arkansas, California, Georgia (home of UGA, the oldest state-chartered university in the US, and another county seat!), Illinois, Indiana (the University of Indianapolis has an Athens campus, but it's in Greece), Kansas, Kentucky, Louisiana, Maine, Michigan, Mississippi, Missouri, New York, Ohio (county seat, and home of Ohio's first state university), Pennsylvania, Tennessee (it's a county seat!), Texas (seat of a different county!), Vermont, Virginia, West Virginia (in Mercer County, where Princeton, naturally, is the county seat; I've been there), and Wisconsin. That's twenty-two states, and not a few college towns. In fact, Tennessee has two Athenses, because Nashville is known locally as ``the Athens of the South.'' In an article about the South that was published in 1962 (``You-All and Non-You-All,'' described within the U and non-U entry), Jessica Mitford wondered puckishly ``whether Athenians ever think of their city as `the Nashville of Greece.' '' For a similar idea, based on Emory University's self-assumed status as a ``Harvard of the South,'' see this S.P.D. entry.
Adelaide, capital of the state of South Australia, is also known locally as the ``Athens of the South.'' [Football icon] I just noticed a specialized use of this word. It apparently designates a football player without a single specific position, but I don't feel competent to give a certain definition, so I'll just cite a couple of instances. The back page of Notre Dame's student newspaper (The Observer) had a graphic that included this text: ``23 players signed letters of intent: 12 offense, 9 defense, 2 athletes.'' (My italics; otherwise, I've sedated the fonts and capitalization for readability. This was from the issue of February 4, 2010, the day after National Signing Day 2010. National Signing Day is the earliest date when student athletes may sign national letters of intent. There will be more about it at the link, once I sort some of it out.) The previous evening, an article on the website of the Huntington, W.Va., Herald-Dispatch reported the letter-of-intent pickings of Marshall University (the local Division-I school). The article included this: ``Quarterback Ed Sullivan [he wants to be in the ``big shoe,'' no doubt] and athletes Jermaine Kelson, Antwon Chisholm, Jazz King and [Harold `Gator'] Hoskins ranked among Marshall recruits who opted for Huntington over BCS teams. The Thundering Herd also added considerable bulk along the line of scrimmage, signing five offensive and defensive players to bolster the front.'' A list at the foot of the Herald-Dispatch article included position codes and other information. Those described as ``athletes'' in the body of the article had the position code ``ATH.'' The student athletes (a general term) were listed in no particular order that I could discern. Anyway, here are the position codes, in order of their first occurrence in the list, along with the number of players with that designation and their average heights and weights:

Position   #   height    weight (in lb.)
QB         1   6'2"      195
K          1   5'10"     175
OL         3   6'4.7"    283
ATH        5   5'10.6"   180
DB         3   5'11.7"   177.7
LB         2   6'2"      207.5
DE         3   6'4.3"    245
DT         2   6'4"      275
TE         2   6'4.5"    210
WR         3   6'0"      181.7

It turns out that ATH, Ath, or ath is very widely used in this context. FWIW, there don't seem to be any specific codes for special-teams positions. The ATH players aren't always relatively small. Oh, and I found an authority (Bob -- a guard... in the Notre Dame library, working beneath Touchdown Jesus!) who explained that ``an athlete'' is someone who can play more than one position. There are position names for the special teams, but everyone on those teams has a position on the main offensive or defensive team -- sort of like a day job. This entry is under construction. What that means is that I've got my feet propped up on the desk and I'm looking out the window, trying to come up with a good pun on atheism when I should be doing real work instead. athletic shoe This entry is under construction. But hey, we've already got a head term. Well-started is half done, so I'd say the entry is about 45% complete. The hang-up is with bowling shoes: are bowling shoes not athletic shoes because they have slippery soles, or are they not athletic shoes because bowling is not a sport? And how can I finish the entry if I don't know? How will I know if I don't do the research, and how will I do the research without funding? Send money now! And shouldn't it be the foot rather than the shoe that is called athletic?
The shoe should be an ``athlete's shoe,'' but instead we have ``athlete's foot'' and ``athletic shoe.'' This isn't working right: the more I write, the more incomplete this entry gets. You know, when people say they have to run just to stay in one place, I look at their running shoes and think: if you want to get anywhere, maybe you should run the other way. If I erased this entry completely, I'd be done. Cf. sneaker. Just to incomplete this entry more completely, I'd like to add that the odd attribution of athleticism to a shoe reminds one of homebuilding. (Well, okay, it just reminds me, but since I am one, it reminds one.) Specifically, rich folks will say something like ``I built this house in 1997'' when all they mean is that they hired a general contractor in 1996. At least with similarly misattributed corporate research and claimed accomplishments, no one doubts that the actual work was performed by humans and machines with individual identities distinct from that of the corporation. Nevertheless, have a gander at the GE entry. (Starship's ``We Built This City (on Rock and Roll)'' gets a free pass because attempting to parse rock lyrics dissolves the brain. Marconi plays the mamba. Oh noooo!) American Truck Historical Society. ``Incorporated in 1971, the not-for-profit American Truck Historical Society was formed to preserve the history of trucks, the trucking industry, and its pioneers.'' aths, Aths, ATHS Australian Tuchas for Huchas Society. My best guess, anyway. Okay, here's another try: ATHlet{e|ic}S. An abbreviation particularly common in Australia, where -- in keeping with Fowler's worst suggestion and widespread UK and Oz practice -- abbreviations are frequently written without a closing period. (There is no Australian organization, so far as I have been able to determine in way too much time devoted to the search, whose initialism is ATHS.) Addiction Treatment Inventory. A questionnaire created by TRI for drug treatment centers to report statistical data describing their programs. Used by DENS. Advanced Thin Ionization Calorimeter. A balloon-borne cosmic-ray detector. Adoption Taxpayer Identification Number. Here's an explanation from the 2004 edition of IRS publication 17 (Your Federal Income Tax: For Individuals), p. 15: If you are in the process of adopting a child who is a U.S. citizen or resident and cannot get an SSN for the child [or an ITIN either] until the adoption is final, you can apply for an ATIN to use instead of an SSN. Use form W-7A. (An ATIN is only assigned if the child has already been placed in the return-filer's home and can be claimed as a dependent. An SSN must be applied for and used as soon as possible afterwards, and use of the ATIN discontinued.) Asian Technology Information Program. ``[A] non-profit organization dedicated to providing objective and high-quality information about technology developments in Asia.'' (Link above is to US server; http://www.atip.or.jp/ is in Tokyo.) Advanced Threat InfraRed CounterMeasures (IRCM). Alliance for Telecommunications Industry Solutions. Previously called ECSA. Association of Teachers of Japanese. That URL is more permanent than it looks, but if it ever dies, the link to ATJ from <japaneseteaching.org/> will probably be kept current. Isn't that the Nahuatl word for water? Could be. I'll have to check. Hmmm. So it is. And a lot of folks have come up with interesting speculations connecting Atlantis with the Nahuatl word atl and tlan, which isn't a word in Nahuatl but occurs in a bunch of names. 
Doubtless these connections are at least as significant as various other observed coincidences. Active Template Library. For Microsoft Windows; used in creating server-side components and ActiveX controls. Association of Teachers and Lecturers. IATA abbreviation for what used to be Atlanta Hartsfield International Airport. Now it's Hartsfield-Jackson Atlanta International Airport. They extended the subway system connecting the gates and main terminal straight through to Mississippi and... Hmmm, let me check this. Okay, they added ``Jackson'' some time after the death in June 2003 of Maynard Jackson, the first black mayor of the city of Atlanta (he was first elected in 1973). He was active in the major expansion of Hartsfield, which was completed ``on time and under budget'' during his second term. (The quotation marks are standard, apparently because it was a phrase he took pride in repeating.) The ``Hartsfield'' honored an earlier mayor, William Hartsfield. American Theological Library Association. Good places to go and read comforting things after you've received reading matter from the next ATLA. Association of Trial Lawyers of America. The trial lawyers have evidently recognized that ``trial lawyer'' is not a term with positive associations. The organization has been rebranded the ``American Association for Justice'' (AAJ). I believe it was one of the Oliver Wendell Holmeses who remarked that there is no more trying experience than undergoing a trial. I don't think it was a tautological pun. I do imagine it was the jurist Oliver Wendell Holmes, Jr., who remarked this. Holmes Senior, the doctor, practiced in the days before modern anesthetics. Atlantic Monthly What can I say? I won't pretend that it's the acronym expansion of ``AM'' or ``AMM.'' Even I have standards. It was founded in 1857, so it has seen its share of ups and downs. The first years of the 21st century have been downs. Visit. Edward Weeks was the editor from 1938 to 1966. Abbreviated Test Language for All Systems. Used for test specification and test programming. IEEE standard 716. It's about the fourth item on this long page. Argonne Tandem-Linac Accelerator System. Since LINAC stands for ``linear accelerator,'' one may regard ``ATLAS'' as an abbreviation of ``Argonne Tandem-Linear-Accelerator Accelerator System.'' That is an example of what we here at SBF call an AAP pleonasm (this stands for ``acronym-assisted pleonasm pleonasm''). One would naturally expect ``ATLAS System'' as an AAP pleonasm pleonasm for ATLAS. This occurs, of course, but the AAP-assisted ``ATLAS accelerator'' pleonasm is much more common. One can also find higher-order-redundant pleonastic redundancies of higher order, like ``ATLAS LINAC accelerator at Argonne.'' ATLAS has 62 resonators. A Toroidal LHC ApparatuS. Name of one of the six particle-detector experiments at the Large Hadron Collider (LHC). The ATLAS collaboration was formed in 1992 when the proposed EAGLE (Experiment for Accurate Gamma, Lepton and Energy Measurements) and ASCOT (Apparatus with SuperCOnducting Toroids) collaborations merged their efforts into building a single, general-purpose particle detector for the LHC. at least as good as No worse than. (Doesn't sound so good that way, does it?) Adobe Type Manager. Air Traffic Management. Amateur Telescope Maker. Association of Teachers of Mathematics. UK organization; nearly 4000 members concerned with mathematical education in primary schools, secondary schools, colleges, polytechnics and universities. Asynchronous Transfer Mode. 
A nontechnical introduction is available from the ATM Forum; the text within the gifs is hard to read. ATM passes information in 53-byte cells consisting of 48 bytes of payload and 5 bytes of header. It's defined for 155Mbit/second data rates and faster. See also SDH. A tod{a|o} madre. This is a common Mexican slang expression roughly equivalent to the interjection `awesome!' The initialism occurs in graffiti or wherever else one might write it, but in speech the unabbreviated words are used. At the most basic level of grammar, the form with toda would be correct, since madre is (grammatically as well as naturally) female. In practice, todo is common. The phrase can be translated as `at full mother,' on the pattern of expressions like a toda velocidad (`at full speed'). The phrase doesn't make any more literal sense in Spanish than the translation does in English. From time to time over the past few years I've asked various Mexicans what sense they could make of the phrase, and never gotten more than admittedly ignorant speculation. It's just an idiom. At The Moment. Automat{ed|ic} Teller Machine. So far, only bank tellers, not fortune tellers. Okay, I'll have to think about that. The first ATM was inaugurated in London on a Tuesday, June 27, 1967. It was apparently called an ``automated cash dispenser'' at the time. I read this in an article by James Hudnut-Beumler. He's a professor of American religious history at Vanderbilt University, and the article, published June 21, 2017, in The Conversation, is ``Why cash remains sacred in American churches.'' It never would have occurred to me to ask the question, but I was interested to read there that Marty Baker, pastor of the Stevens Creek Church in Augusta, Georgia, is widely credited as the first to install an ATM inside a church. He installed two of them in the church lobby in 2005. Not one to do things by halves, apparently. These ATM's are also known as ``giving kiosks.'' It's striking how equivocal the verb derivatives can be -- dispense cash or dispense with cash, kiosks that give cash or kiosks for giving, or forgiving, or cash for dispensation? Marty Baker saw that it was good, so he founded SecureGive, a for-profit company that makes and manages giving kiosks of many different persuasions. The term ``ATM,'' having been replaced in this context, has apparently been repurposed with the new expansion ``Automatic Tithing Machine,'' for a kind of giving kiosk that transfers funds directly from the giver's account into the church's. Some users place their ATM receipts in the plate (or pouch or slot or whatever) at the appropriate time in the service. Now let's discuss some ethical, um, issues. If you write or say ``ATM machine,'' then you are a bad person. In principle, it's okay just to think it, but bad thoughts lead to bad actions, so keep that in mind. If you want to be a very bad person and burn in hell forever, say ``Automatic ATM Machine'' (the teller is silent). Automatic Tithing Machine. Explained in previous entry. Azienda Trasporti Municipali. Transit in Milano, Italy. Asynchronous Transfer Mode Address Resolution Protocol. Asynchronous Transfer Mode (ATM)-Data eXchange Interface (DXI). A unit of pressure [abbrev. atm.] equal to 101,325 Pa. Vide bar. [Phone icon] Automated (Telephone) Trunk Measurement System. Aeronautical Telecommunications Network. Augmented Transition Network (parser). Abort To Orbit. Space shuttle landing abort plan; AOA, RTLS, and TAL are other options. Actual Time Over.
Actual as opposed to targeted or predicted time that an aircraft passes a coordination point. Australian Taxation Office. Automatic Train Operation. Association of Train Operating Companies. ``[A]n unincorporated association owned by its members. It was set up by the train operating companies formed during the privatisation of [UK] railways under the Railways Act 1993.'' Automatic Transfer Of Kana kanji. Kana is the Japanese syllabary, with about 95 characters -- hiragana and katakana (about 145 including diacriticals). Kanji are Chinese characters used in Japanese (a few thousand). atomic mass Physicists' term meaning mass of an atom, when the mass is given in amu (atomic mass units). Totally different from atomic weight, you understand, although quantitatively identical. atomic names Given names without accepted shorter form. (What is ``accepted'' is, of course, a matter of opinion.) Many atomic names, such as Drew, Joe, Ron, Sam, and Tom, are short or diminutive forms of other names. Since every name that is not itself atomic must by definition have an accepted form that is shorter, and given the usual mathematical facts about phonemes, every name must be or yield at least one atomic name. Some of these are probably only rarely given names themselves, since there does seem to continue to be a tendency to avoid giving legal names that are primarily used as nicknames based on other names. Aargh! Why does everything have to get so complicated when you think about it? I really only wanted to mention traditional atomic names like Kim, Lee, and Saul. For obvious reasons, atomic names tend to be monosyllabic. Aaron and Oscar are pretty solid exceptions, although I knew an automobile repairman who used ``Os'' for the latter. A semiconductor physicist of my acquaintance was upset when his granddaughter was given the non-atomic (molecular?) name ``Candace.'' He feared she would end up being called ``Candy,'' not be taken seriously as a student in school, drop out, and lead a miserably unambitious, unliberated existence. This is only a slightly extreme version of the theory that Nomenclature is Destiny. (Following that link you can find another kind of atomic name: Atom Egoyan.) atomic number The number of protons in a nucleus. Physicists abbreviate this by the capital letter Z. atomic weight Chemists' term, short for relative atomic weight. The atomic weight of a chemical substance is the weight in grams of one mole of the substance, divided by one twelfth of the weight in grams of one mole of carbon atoms. Because of the principle of equivalence (even just the weak principle of equivalence), this ratio is the same at any altitude, so it's practically a measure of mass. Physicists define a quantity, the atomic mass unit (amu), that is one twelfth the mass of a carbon atom. (Or, if you prefer, defined as one twelfth the mass of a mole of carbon atoms, divided by Avogadro's number, which is the number of carbon atoms in a mole of carbon atoms.) Since a ratio of masses equals the corresponding ratio of weights (principle of equivalence, remember?) the mass of an atom of some element (its atomic mass), given in amu, equals the atomic weight of the element. Physicists prefer to distinguish mass and force (weight), so in contexts typically described or analyzed in physical terms, one tends to see the atomic mass term. (These contexts are more likely to be in solid, surface, interface, gas, or plasma phase, and to depend on detailed dynamics of individual particles of matter. Typical instance: atomic mass spectroscopy.)
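If the amu bookkeeping in the last few entries seems circular, here's a minimal sketch in Python of why atomic mass in amu and atomic weight come out numerically identical. The constants and the molar mass of gold (an element we get to below, at Au) are round illustrative values of my own choosing, not anything quoted from an authority:

# A sketch of the amu bookkeeping described above. Constants are
# round illustrative values, not authoritative.
AVOGADRO = 6.022e23            # atoms per mole
CARBON_MOLE_G = 12.0           # grams per mole of carbon

# One amu: one twelfth the mass of a single carbon atom, in grams.
amu_g = (CARBON_MOLE_G / AVOGADRO) / 12.0

def atomic_weight(molar_mass_g):
    # Molar mass divided by one twelfth of carbon's molar mass (1 g/mol).
    return molar_mass_g / (CARBON_MOLE_G / 12.0)

def atomic_mass_amu(molar_mass_g):
    # Mass of one atom in grams, converted to amu.
    return (molar_mass_g / AVOGADRO) / amu_g

# Gold (Au, vide infra), molar mass about 196.97 g/mol:
print(atomic_weight(196.97))    # 196.97
print(atomic_mass_amu(196.97))  # 196.97 (same number, up to rounding)

Both print statements give 196.97, which is the whole point of the definitions: the Avogadro's numbers cancel.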
Chemists tend to deal primarily with weights, and in chemical contexts, one sees atomic weight. (Chemical contexts are predominantly liquid-phase, typically involving macroscopic numbers of particles. Any situation involving a molecular species or chemical reaction is likely to be analyzed in chemical terms.) It is, of course, impossible to define a sharp boundary between chemical and physical contexts or approaches. To some extent, the distinction is one of conceptual approach, even when the substantive situation is the same, and has more to do with pedagogical traditions in the different disciplines than with any great difference in effectiveness. Asynchronous Transfer mode (ATM)-Oriented Multimedia Information System. Atoms in the Family The title of a book by Laura Fermi (née Capon) about her husband Enrico, the famous physicist who died in 1954. The book was published that year by the University of Chicago Press. As Laura explained in the acknowledgments, it was Dr. Cyril Smith who gave her the idea for the book: `` `You should write your husband's biography,' he told me. `I cannot,' I answered. `My husband is the man I cook for and iron shirts for. How can I take him that seriously?' '' Fermi is one of my favorite physicists, and this is one of my favorite books. atom smasher Atoms are very small. I guess that's why they're so hard to smash. I may have something to say here later about cyclotrons and other accelerators, but for now I just wanted to have this entry here for a quote. Interviewed at a training session in Las Vegas, ahead of a non-title bout February 22, 2003, 36-year-old juvenile delinquent Mike Tyson was being philosophical about his bad-boy image: ``Every religion has a saying about throwing stones in glass houses. I can't throw a sand pebble. I can't spit, I can't throw an atom at nobody.'' (This and other reflective contemplations in the London Independent, February 10, 2003. More about this fascinating creature at the bite me entry, coming soon.) atonal music Music that has tones, alright, but no key -- or many. Sounds like it keeps slipping a cog. Generally associated with the name of Schönberg, but it was pioneered by Liszt as early as the 1830's. Schönberg (1874-1951) had to emigrate to the US to escape the Nazis, and the separation from even that small audience that could appreciate his work was a living death. Absolute Thermoelectric Power. Acceptance Test Plan. Adenosine TriPhosphate. A kind of biological fuel for internal transport in a biological cell. Energy is stored in ADP by adding a phosphate group, and extracted by removing it, elsewhere, from the product ATP. In a pinch, you can extract a bit more energy from ADP by removing another phosphate group and leaving AMP. Advanced TurboProp. Made by British Aerospace. As of this writing (8/1996), United Express flies these critters from O'Hare to South Bend seven (7) times a day. Total flight time is only 25 minutes. Most of them are only four or five years old, so you have pretty favorable odds of arriving. Airline Transport Pilot. Highest grade of pilot certificate. All Tests Pass. Alternate Transient Program. A version of Electromagnetic Transient Program (EMTP), a standard code for real-time simulation of power systems including single-phase and three-phase balanced and unbalanced circuit modeling, various equivalent-circuit models for T-lines, and time-dependent models for simulating circuit breakers, lightning arrestors, and faults. Considered user-inimical. Appletalk Transaction Protocol.
Application Transaction { Protocol | Program }. Association of Tennis Professionals. Authority To Proceed. Granted by Air Traffic Control. Automatic Train Protection. A system used on some British railway lines. The system determines a maximum safe speed for the train and applies the brakes if that speed is exceeded. There were plans to install it widely in the 1990's, but costs proved greater than expected. American Technological Preeminence Act. Gee, you don't think this wording will offend anyone? Nah -- I checked it out. All our constituents are fine with it. Association of Theatrical Press Agents and Managers. Members of this trade union are not strictly required to be theatrical themselves; they just serve as publicists and managers of theater productions -- which productions are themselves theatrical in some sense of the word. You wanted that spelled out. Automatic Test-Pattern Generation. Association of Teachers of Preventive Medicine. Founded in 1942, it's ``the national association supporting health promotion and disease prevention educators and researchers.... ATPM members also include members of the Association of Preventive Medicine Residents.'' AppleTalk Print Services. Americans for Tax Reform. A group that wants taxes reduced. It's not officially affiliated with the GOP. You know, this entry used to read ``Americans for Tax Reform. A group not officially affiliated with the GOP that wants taxes reduced.'' That was funnier, but the edited entry is better because we want to serve browsers who visit us with precise and unambiguous definitions. Attenuated Total Reflection. Authorization To Recruit. Automat{ed|ic} Target Recogni{tion|zer}. Remember in Robocop, that behemoth with machine guns that required some adjustment? Assistive Technology Resource Alliance. Adaptive TRansform Acoustic Coding. Atom-Transfer Radical Polymerization. Abstract Test Suite. (FAA) Air Traffic Services. Asian Test Symposium. Association of Theological Schools in the United States and Canada. They're in the accreditation business. That could get interesting. Auxiliary Territorial Service. A British something or other founded in 1938. Advanced Television Systems Committee. ``ATSC was formed by the Joint Committee on Inter-Society Coordination (JCIC) to establish voluntary technical standards for advanced television systems, including digital high definition television (HDTV). ATSC suggests positions to the Department of State for their use in international standards organizations. ATSC proposes standards to the Federal Communications Commission.'' Australian Telecommunication Standardisation Committee. Agency for Toxic Substances and Disease Registry. Atchison, Topeka, and Santa Fe RailRoad (RR). (Australia's) Aboriginal and Torres Strait Islander Commission. ATSIC was created in 1990 by the Labor government of Hawke. During parliamentary discussion of the ATSIC Act in 1989, MP John Howard said that establishing ATSIC would be ``sheer national idiocy'' and described ATSIC as a ``black Parliament.'' As PM in 2004, he's getting his opportunity to replace it. It's a fascinating story, so now you know what to look out for. (Australia's) Aboriginal and Torres Strait Islander Services. In April 2003, this new government agency was created by Philip Ruddock (then the indigenous affairs minister). This agency was to manage ATSIC's budget under policy direction from ATSIC's elected leaders. Laotian monetary unit. But what would you buy with it? [Phone icon] American Telephone and Telegraph [Company].
Gothic for `father.' The first sentence of the Lord's Prayer in Gothic is Atta unsar þu in himinam, weihnái namô þein;. Attila (ca. 406-453) was the last and most powerful king of the Hun empire. His fame was such that he remains famous (in Hungary and Turkey) and infamous (in the rest of the West) to this day. His name remains a popular boy's given name today in Hungary and (also as Atilla) in Turkey. The last of his many wives was named Ildikó, and that name is still used in Hungary today. The wife of a colleague from Hungary has that name, and she explained its origin to me with pride. (But maybe she just enjoys the expected shock value.) Ildikó was a Goth, and Attila died shortly after marrying her. Historians tend to trust the reports of Priscus, a historian who traveled with Maximin on an embassy from Theodosius II in 448. According to Priscus, he died on the night after a feast celebrating that last marriage. After he was buried with rich funeral objects, his funeral party was killed to keep his burial place secret. Let's review: a man of moderate dietary habits, in his mid-forties, apparently healthy and with everything in the world to live for, gets a nosebleed and chokes to death. Many are dead and no one alive will admit he attended the funeral. This doesn't sound suspicious? ``The Scourge of God'' didn't have any enemies? Other reports say one or another of his wives killed him, but the reports that have come down to us are not contemporary. If only Dan Rather would give us his gut sense of the matter, then we could be sure. The Hun empire included many Goths, and in the Gothic language, Attila can be understood as `little father.' Ata or Atta is also a common word for `father' in various Central Asian or at least Turkic languages (see ata), and in one or another of these Attila may mean `land-father.' There are other possibilities. You could look it up. Stalin, another fellow with some blood on his hands, was known by the epithet of ``little father.'' In Romanian, that was tatucul. Here I guess we see the diminutive ending -cul preserved from Latin. According to the W. Meyer-Lübke Romanisches etymologisches Wörterbuch, the Romanian word tata, meaning `father,' has cognates in many Romance languages, though not in Latin. The meaning in some of these other languages is familial but varies. In Old Romanian taica meant `older sibling, advisor to young maidens,' and some tata cognates have referred variously to a younger sibling, older sibling, maiden, etc. Come to think of it, I've heard ``tatas'' used in English. It had something to do with mamas, iirc. Let me look that up in a slang dictionary... oh! I guess I don't want to go there. There's a cognate of tata that also meant `father' in Lombardic. This was the language of a West Germanic tribe that settled in northern Italy and ended up speaking a version of Romance with little Germanic vocabulary left in it, so this is a weak reed to support a Germanic etymology. The Meyer-Lübke doesn't draw any connection to East Germanic (i.e., Gothic) or other pre-Romance languages. It seems very hung up on the idea that the initial vowel would not have been elided. In the instance of one Romance tata variant [(l)ata], it suggests a possible connection with the word ätti in Swiss German (i.e., one of the local varieties of German spoken in Switzerland). I have one thing to say to these crazy linguists: get your head out of your ass!
Before Stalin, and before he himself had much blood on his own hands, Tsar Nicholas II was known as the little father. His enemy Nestor Makhno (a bloody anarchist military commander) was given the nickname batko by his men; this meant `little father.' When John F. Kennedy ran for president in 1960, his younger brother Bobby Kennedy served as campaign manager. He was rather bossy with the campaign staff, who used to say ``Little Brother is Watching You.'' (I just figured I'd throw that in there for a little comic relief, so it's not all about dictatorial leaders or bloody assassinations.) Okay now, back to that earlier Scourge of God. The stress in the English pronunciation of Attila is on the second syllable, but in Gothic and in modern Serbo-Croatian it is on the first syllable. All the continental German forms of the name apparently have initial stress. Middle High German documents from around 1200 record Attila's name as Etzel. This represents two systematic sound shifts: (1) umlaut, specifically assimilation of a to i (yes, even though the vowels were originally separated by a consonant; that's how umlaut works), and (2) affrication of the voiceless stop /t/ into /ts/, part of the second Germanic sound shift (LV). Attila's name provides one bit of evidence that, in at least one High Germanic dialect, the LV2 process had not ended by about 450. Taken all together, the various bits of evidence suggest that LV2 began spreading from the southern extreme of the West Germanic region in the sixth century (probably from Lombardy, when the Lombards still spoke a Germanic language). Etzel became an important character in medieval German folklore. Edsel is a variant form of the name. The most famous person to bear it in modern times was Edsel Ford, son of the Henry Ford who founded the car company named after himself. When the company introduced a new line of cars in the late 1950's, they got the name Edsel. The line flopped infamously, and the name Edsel came to stand for commercial failure. Studies later showed that one of the many reasons it failed was a public perception of the Edsel name as odd. Naming the new line ``Attila'' or something else better known would probably not have helped much, however: the line was introduced at the start of a recession that killed off the Nash, Packard, Hudson, and DeSoto marques, and left one or two others mortally wounded. The Ford family was partly of Dutch or Flemish descent, but if there is a particular reason for the choice of name, it is not publicly known. There have been reports that the Ford family was opposed to using Edsel as the name of a car line, but their objections can't have been too strong. The company had been family-owned, only becoming a publicly traded corporation in 1956, but the Ford family has retained a controlling interest to this day (July 24, 2005, if you must know). The company had great trouble choosing a name, even going so far as to solicit some famously terrible suggestions from the famous poet Marianne Moore (``The Intelligent Whale,'' ``The Utopian Turtletop,'' ``The Pastelogram,'' ``The Mongoose Civique''). Plato was right about poets. At the meeting that chose the name, Ernest Breech stepped into the breach. Chairing the meeting in the absence of Henry Ford II, he urged the adoption of Edsel, name of the company's second president. Agence de Transfert de Technologie Financière.
``ATTF Luxembourg was created in 1999 by the State of the Grand-Duchy of Luxembourg (Ministry of Finance) - main shareholder, the Central Bank of Luxembourg (BCL), the Chamber of Commerce of the Grand-Duchy of Luxembourg, the Financial Sector Supervisory Commission (CSSF), the Institute for Training in Banking, Luxembourg (IFBL), the Luxembourg Bankers' Association (ABBL - replaced in 2002 by the Federation of the Professionals of the Financial Sector - PROFIL) and the University of Luxembourg....'' American Telephone and Telegraph Global Information Solutions. The former NCR, after it was bought out, and before it was spun off. at the weekend British for `over the weekend' or `on the weekend.' The two translations given here have slightly different but overlapping ranges of meaning. Without venturing to specify these precisely, it seems that in Canada the semantic ranges are not the same as in the US: googling with restrictions to .ca and .us TLD's indicates that the on form (not the on reading!) is relatively more popular in the former. Gorrr, these people are making the language incomprehensible! At this time portable electronic devices may now be used. Around the time also heralded by ``At this time you are now free to move about the cabin, but we ask that otherwise you remain seated with your seat-belt fastened for your safety.'' Not long after the ``last and final boarding call'' for your flight. attire, proper I feel certain that somewhere in this glossary there is a muddled, poorly-remembered reference to the material quoted below, but as I have only a muddled, poor recollection of where that entry is, I'll deposit the quotation here. It's taken from page 19 in my Pocket Books copy (chapter 3, at any event) of John P. Marquand's The Late George Apley. (Marquand's Apleys are fictional; the book is a satire so gentle that you have to read pagefuls just to get a laugh.)   Shortly before he [Thomas Apley, the writer's (George's) father] purchased in Beacon Street he had been drawn, like so many others, to build one of those fine bow-front houses around one of these shady squares in the South End. When he did so nearly everyone was under the impression that this district would be one of the most solid residential sections of Boston instead of becoming, as it is to-day, a region of rooming houses and worse. You may have seen those houses in the South End, fine mansions with dark walnut doors and beautiful woodwork. One morning, as Tim, the coachman, came up with the carriage, to carry your Aunt Amelia and me to Miss Hendrick's Primary School, my father, who had not gone down to his office at the usual early hour because he had a bad head cold, came out with us to the front steps. I could not have been more than seven at the time, but I remember the exclamation that he gave when he observed the brownstone steps of the house across the street.   ``Thunderation,'' Father said, ``there is a man in his shirt sleeves on those steps.'' The next day he sold his house for what he paid for it and we moved to Beacon Street. Father had sensed the approach of change; a man in his shirt sleeves had told him that the days of the South End were numbered. For more Marquand material, see the BF entry. For yet more material -- the whole nine yards, as it were -- try Sartor Resartus, by Thomas Carlyle. (No, no one really knows the origin of the expression ``the whole nine yards.'' I'm sure there's a Nobel prize in it for the fellow who cracks that nut.) American Telephone & Telegraph Information Systems. 
I've seen both this and ATTGIS used. ATTN, Attn. ATteNtioN. attributive noun A noun functioning as a modifier--usually as an adjective. An attributive noun may itself be a compound noun or noun phrase. In that case, the attributive noun is traditionally hyphenated. Thus, the noun phrase ``intermediate frequency,'' consisting of the adjective intermediate modifying the noun frequency, becomes the attributive noun ``intermediate-frequency'' and can modify the noun amplifier in the noun phrase ``intermediate-frequency amplifier.'' The hyphen allows a reader encountering the words intermediate and frequency in sequence to parse them immediately as a modifier. If a compound attributive noun is written without a hyphen, then a reader is likely to misinterpret it initially as a subject or predicate, and is forced to reread or rethink the text when the noun functioning as a noun is finally encountered. Of particular interest in the present reference is the fact that the better literature, back in the day, preserved the hyphen in abbreviations. Hence, an intermediate-frequency amplifier was abbreviated I.-F. amp., whereas the center frequency of the signals such a device was designed to amplify was simply I.F. Sigh. For old times' sake, we've indicated the various historical abbreviated forms for the electronics abbreviations DC, AC, and IF. In part, this preservation of hyphenation in abbreviated forms was intended to help the reader recognize the abbreviation. It was an innocent time. A similar motivation led to the disappearance of periods in British abbreviations, as discussed in the Mr entry. We now continue with the discussion of attributive-noun hyphenation in unabbreviated cases. The hyphenation rule is applied loosely. Some noun phrases, particularly proper nouns (e.g., Dow Jones) or disciplinary titles (e.g., Fluid Mechanics) are likely to be recognized as attributive in context and are not hyphenated. Sometimes the attributive noun phrase itself consists of an attributive compound noun modifying another noun (so in formal rather than functional terms, one may have an adjective followed by three nouns). In these cases there is no generally accepted rule; one hyphenates in whatever way seems likely to make the meaning clear most immediately. In the case of attributive noun phrases that include a quantifier, American usage follows an interesting rule: when the noun phrase is transformed into a modifier, the noun component of the original phrase is put into singular form. For example, the noun phrase ``two cars'' becomes the adjective ``two-car,'' as in ``two-car garage.'' British usage does not follow this rule (hence ``two cars garage'', with the stress on the first syllable of garage and the comma after the quote for good measure). I'm not sure what the traditional rule has been, but now the plural-singular transformation seems to apply sometimes in Britain. It might just be American media influence. Canadian usage appears to coincide with US. Another example: ``nine days' wonder'' (British) vs. ``nine-day wonder'' (N. American). Of course, there are exceptions. See if you can find the one in the car alarm entry! Another difference between British and North American dialects' use of plural (but not directly concerning attributive nouns) has to do with the grammatical number of collective nouns. In North American English, collective nouns are generally grammatically singular unless the noun form is plural (``Congress meets,'' ``the Miami Heat is out of the play-offs,'' but ``the Yankees win'').
In British English, collective nouns are usually grammatically plural even when the noun form is singular (``Manchester United win''). Attributive nouns get a mention in the Latin lesson at the A.M. entry. Atü, atü German, Atmosphärenüberdruck. English, `above atmospheric pressure.' Advanced-Technology Vehicle. Advanced TeleVision. FCC term encompassing everything from digital HDTV to enhancements of the current analog standard. Here's their latest document on the matter, as of early 1998. The IEEE Approved Indexing Keyword List instructs that HDTV be used in place of ATV. I like this idea better than the FCC's, because frankly, ``advanced television'' is an oxymoron. All-Terrain Vehicle. ATazanaVir. A protease inhibitor used in the treatment of AIDS. All-Terrain Vehicle Association. Sister organization: AMA. American Theatre Wing. Their logo displays a mask with two of them (wings, that is; the feathered sort, not the architectural). ATW is ``devoted to promoting excellence in the American theatre.'' I infer that this is done by staging expensive productions of musicals in New York City. ATW bestows Tony Awards. ``Wing'' sounds kind of martial. Or maybe wings are intended to suggest angels' wings and death. Vide ATBM. ``As the World Turns.'' A CBS daytime soap opera. German, Abgasuntersuchung. `Gas emission investigation.' Cf. ASU. French with the same meaning but not the same usage as à le. The French expression à le is used primarily to explain what au means. I suppose au can be regarded as a contraction of à le. A contraction of à la is à la. Americans United for separation of church and state. AU, a.u. Astronomical Unit. The average earth-sun distance. Obviously this is not a very precise definition: even the two most obvious averages -- time average and angular average -- are unequal, by Kepler's second law. No matter, the eccentricity of earth's orbit is small (~1%). In the most interesting units, 1 AU = 8.3 light minutes. In units that would be more meaningful to those planning to drive, it's about 149.6 million kilometers (that's 92 or 93 million miles, give or take a gas station). Even though we could do so, we do not give a more precise value at this entry. After all, el que quiere celeste, que le cueste (roughly: if you want sky-blue, let it cost you). Also, we get more hits this way. See the IAU entry. Auburn University (in Auburn, Alabama). Latin, Aulus. A praenomen, typically abbreviated when writing the full tria nomina. Chemical symbol for gold, from the Latin aurum. For a bit on gold in semiconductor electronics, see the Gold entry. For a bit on the geology of gold mines, see the pluton entry. For a movie connection see AU1. For more general information visit the gold entry in WebElements and the entry at Chemicool, where it was #2 on the Top Five List a long time ago when I checked. AUdio. Filename extension for a Sun Unix sound file format. Australia (ISO code used in TCP/IP addresses). Country code 61 for telephone. Currently, the country consists of six states, some territories with various degrees of self-government (the Northern Territory, the Australian Capital Territory, and Norfolk Island) and various federally administered external territories. Association des Universités Africaines / Association of African Universities. Association of University Architects. African Union Broadcasters. American University of Beirut. a.u.c., A.U.C., AUC Ab Urbe Condita. Latin: `from the founding of the city [of Rome]' (around 753 BCE). Roman date designation. Area Under Curve.
True, it's a count-on-your-fingers way to say `integrated,' but medical researchers apparently use this expression `professionally.' Maybe they're trying to drum up new business; the acronym certainly makes me sick. In the medical context, AUC is frequently the time integral of a solute concentration in blood or plasma. Autodefensas Unidas de Colombia. Spanish, `united self-defense [forces] of Colombia.' Nominally a union of at-least-originally independent militias fighting against the left-wing armies of Colombia (ELN and FARC), and the name acronym is construed plural in Spanish, but nevertheless it does appear to be under a single command. I seem to recall it was begun by Jesús A. Castaño, who was killed in 1980, and continues under the leadership of his sons. It is certainly in organizations of people that grammatical-number distinctions begin to blur. This is even more the case for the military and civilian ``wings,'' or what have you, of organizations regarded as terrorist. This is interesting: they seem to have a website. Association Universitaire Catholique d'Aide aux Missions. A publisher in Louvain. Association of Universities and Colleges of Canada. Despite the enormous difference between the vocabularies of English and French, this organization somehow managed to contrive a French name that would correspond to the same initialism (it's usually impossible): Association des universités et collèges du Canada. Academy of Upper Cervical Chiropractic Organizations. It appears that au courant French: `up to date.' Stupid: `with berries.' Sometimes I feel like I wrote a beautiful reference work and some jerk-off came along and scrawled graffiti all over it, and it turned out that I was the jerk-off. I also have an entry for au. Doctor of AUdiology. According to itself, this ADA is the ``Home of the Au.D.'' The Audi car company (fnd'd. 1909) got its name from the imperative singular of audio (Latin for `I hear') because the founder, a German named August Horch, had sold the rights to his name along with his first car company (fnd'd. 1899). The use of a Latin calque was the man's son's suggestion. Perhaps it's a slight approximation or exaggeration to call it a calque. Oh, alright, it's not a calque -- audi is the Latin translation of German horch. [The German verbs hören (`to hear') and horchen (`to listen') are cognate with the English words hear and hearken. Needless to say, all are cognate with das Ohr, `ear.' The semantic distance between horchen and hören is perhaps not so great as between listen and hear.] In 1932, Audi and Horch combined, along with Wanderer and DKW (Das kleine Wunder), into Auto-Union, adopting a logo in the form of four interlocking rings that is still the trademark of Audi. [Kleine Wunder can be literally translated `small wonder,' but the German expression only has the sense of `small miracle,' and does not suggest `no surprise [that]' like the English expression. Little wonder the company folded and was merged away.] More details on Audi company history here. I cribbed this from a posting on the Classics list, naturally. Here it is in the archives. Incidentally, Audi is itself not, um, unheard of as a surname. Robert Audi (b. 1941), for instance, is the author of many philosophical works, such as Action, Intention, and Reason (Cornell University Press, 1993), and general editor of The Cambridge Dictionary of Philosophy (CUP, 1/e 1995, 2/e 1999). a.u.e, a.u.E, AUE, aue Alt.Usage.English, a newsgroup. AUstralian Eastern Standard Time. German noun (fem.)
meaning `eye.' Spanish noun (masc.) meaning `culmination' or, in a figurative sense, `apogee.' I'd like to mention that symbol on the greenback, the eye above the pyramid, and I would, if I could see any excuse to do it. Auger process Two-stage photo-ionization process, in which the energy of a photon is initially absorbed by an electron in a deeply-bound state. This electron has not absorbed enough energy to escape (to be ionized). When the hole it leaves behind is filled, however, the energy is transferred to an electron in a higher-lying state, which does become ionized. [Pronounced ``Oh-zhay.''] Associated Universities, Inc. ``... a not-for-profit corporation based in Washington, DC. It was founded in 1946 by nine northeastern universities to manage major scientific facilities. AUI currently operates the National Radio Astronomy Observatory under a cooperative agreement with the National Science Foundation [NSF].'' Attachment Unit Interface. A type of connector. Standard (Hepburn) transliteration of Japanese version of the Indian holy syllable om. Part of the name of the Japanese poison-gas cult Aum Shinrikyo mentioned at the LPF entry. Shinrikyo means something like `supreme truth.' Authorization for Use of Military Force. Name of an act of the US Congress passed on September 14, 2001. African Union Mission in Sudan. Officially AMIS, q.v. Acceptable Use Policy. Association of University Programs in Health Administration. It describes itself as ``a not-for-profit association of university-based educational programs, faculty, practitioners, and provider organizations. Its members are dedicated to continuously improving the field of healthcare management and practice. It is the only non-profit entity of its kind that works to improve the delivery of health services throughout the world - and thus the health of citizens - by educating professional managers at the entry level.'' au pis French expression literally meaning something like `at worst' (see au and pis aller). The English expression ``at worst'' often has a meliorating connotation, as if to suggest that the worst possible may not be so bad. The flatter connotation of au pis is apparently better captured by `if worse comes to worst.' I suggest the mnemonic ``oh piss!'' (Better yet ``aw pee!'') Incidentally, pis also means `udder,' so ``veau au pis'' does not have to mean `calf at worst.' Unfortunately, ``pis pis'' just means `worse udder.' I was kinda hoping there could be an udder-worst-type pun. Association of University Radiologists. Affiliated societies on the web: APDR and A3CR2. Auriga. Official IAU abbreviation for the constellation. Association of Universities for Research in Astronomy. AppleTalk Update-based Routing Protocol. Autonomous Undersea System[s]. (US Navy acronym.) AUStralian Computer Emergency Response Team. ``Emergencies'' are security breaches. See CERT for other relevant organizations. ausgeruhter Kopf `Well-rested head' in German. The education director of a Texas academy emailed today to praise our WAC entry. It reminds me of the classic movie Fast Times at Ridgemont High, from 1982. It was a perfect movie. For example, its main page at IMDb says that the ``plot synopsis is empty.'' See what I mean? Perfect! Anyway, one of the characters is Brad Hamilton (played by Judge Reinhold), who likes to describe himself as ``a single, successful guy,'' at least until he loses his job and his girlfriend. It just goes to demonstrate the fragility of life. But I wasn't reminded of this immediately.
I just mentioned the email to mom, and read her the WACky entry. She didn't think it was so inspired. I must have read it too fast. Yeah, that's it. Then I mentioned that yesterday I had an email from a guy who wrote ``And Stammtisch Beau Fleuve means what? Table reserved by a beautiful river?'' That made her laugh, even though it's a fair interpretation. After she stopped laughing, she commented that what her grandmother would have said about the glossary was (is?) that it's the product of an ausgeruhter Kopf. Googling on this phrase and related ones (vom ausgeruhten Kopf, etc.) suggests that this is no longer, if it ever was, a common expression. Anyway, since you asked what I wrote (you did, didn't you?), here it is: ``Beau fleuve'' is believed to have been used in reference to the Niagara River, and to be the source, in corrupt form, of the name of the city of Buffalo. I started the glossary when I was an asst. prof at the University of Buffalo, and there was a bunch of friends I ate lunch with regularly. At the time (1995), the fellow in charge of Engineering Computing was stupidly reluctant to let me set up a web site for a small glossary of microelectronics terms (and some other words and abbreviations I used in class). To bypass him, I got a website from a different university webserver for the stated purpose of having a web presence for a university group (my lunch group). To get the relevant university official to grant my request, I tried to make it sound a bit more serious or at least established [than it actually was], so I gave our informal group a name. Asociación de Universidades confiadas a la Compañía de Jesús en América Latina. (Spanish: `Association of Universities entrusted to the Society of Jesus [SJ] in Latin America.') Corresponding US organization is AJCU. AUstralian Science and TEchnology Heritage Centre. Launched in December 1999, it was the immediate successor to ASAP. AUStralian TELecommunication Authority. See Janeite. Australia Day Previously known as Anniversary Day and Foundation Day, Australia Day commemorates the beginning of settlement in Australia, when Governor Arthur Phillip landed at Sydney Cove on January 26, 1788. Interestingly, this is a holiday that was once celebrated as a Monday holiday to make a three-day weekend, but which now is celebrated on the actual day. In the years before the 1988 bicentennial, it was celebrated on the first Monday following January 26, but in 1988 it was celebrated on the anniversary (a Tuesday that year) and has been ever since. For someone whose national holiday celebrates independence and freedom, the particulars of the event commemorated on Australia Day can induce queasiness. Governor Phillip came to found a penal colony. The ships he came with carried, in addition to 450 sailors and government personnel, over 750 prisoners (including 15 children). Australia celebrates its other national holiday in common with New Zealand: Anzac Day, described at the ANZAC entry. Australia has other public holidays, but they're not especially national: Good Friday and Easter Monday (I guess that's a three-day weekend plus a day to dry out), Christmas and Boxing Day, and New Year's Day. There are three officially observed days that are not public holidays: Commonwealth Day (second Monday in March), Mother's Day (second Sunday in May), and Father's Day (first Sunday in September). 
Various other holidays are widely celebrated unofficially or are official at the state level, but are not declared public holidays at the national level (so I understand). These include the Monarch's birthday and Labour Day. Labour Day in Australia is celebrated on different days in different states. The day generally commemorates the establishment of the eight-hour day, and this was won separately by various trade unions at different times in different states. The eight-hour day was an early focus of the union movement (see 888) in the nineteenth century. Austrian scientific suicides It seems like a category large enough, or at least disproportionate enough, to merit its own entry:
1. September 5, 1906: Ludwig Boltzmann
2. September 23, 1926: Paul Kammerer
3. September 25, 1933: Paul Ehrenfest
UK Association of University Teachers. According to a webpage viewed in April 2005, it was ``the trade union and professional association for over 48,700 UK higher education professionals'' (this included not just instructional personnel but also librarians and some others). In addition to a newsletter, they had a magazine cleverly named ``AUTlook.'' Alas, this bit of cleverness will have to be abandoned. In 2006, AUT merged with NATFHE to form a new union called the University and College Union (UCU). One of those ``little magazines.'' This one is published in Puerto Rico and is dedicated to bad poetry in Spanish (subtitle: Revista Internacional de Poesía). Perfect-bound, glossy cover. The cool thing about it is the way they assign dates to the issues. Vol. 1, Núm. 8 is dated ``Noviembre 2002 a febrero 2003'' (that is, November 2002 through February 2003). Isn't that great? Rhyme schemes? We don' need no steenkeen rhyme schemes! Checking authorization. This is a special terminology used by DSL dialers. For example, say you launch the dialer and it reports Dialer Error 629. Connection closed by remote computer. Technical support will conclude that you're successfully connecting but that there are other problems. Check the cabling. Power down and power up. Turn off all other appliances. Jog around the block. Hmm. Apparently your operating system is too old. You should spend a few hundred dollars on an OS upgrade and more memory. Look, why not just buy a new computer? Etc. Thank him politely and call back later. Talk to someone who understands the arcane terminology. ``Authorized''? Let's try another userid and password. Ah-hah -- works! The problem appears to be: your password was munged! By the way, the equivalent terminology from the ``Online Control Pad'' dialog box is Internet Connection Not Established Network connection is not available. Do you want to work offline? This typically means `password mistyped.' AUTOmobile. In Scandinavian countries, bil is common. Humphrey Carpenter has speculated that Autobiography is probably the most respectable form of lying. Maybe it's the only form. According to the back-cover copy of her An Accidental Autobiography, Barbara Grizzuti Harrison was asked to describe the book she was writing and responded, ``an autobiography in which I am not the main character.'' This doesn't strike me as particularly novel. A term from Greek roots meaning `self-headed.' It sounds like it ought to have something to do with soccer. I don't remember our ninth-grade gym teacher, Mr. Carey, using that term when he introduced us to the exotic sport of ``sock-a-bowel'' and pint-size Armando introduced us to the experience of being consistently and reliably out-dribbled, but somehow I'm not surprised.
Anyway, it turns out to be a term meaning `self-governed,' used to describe different Orthodox (i.e., Eastern rite) churches. AUTOmatic DIgital Network. Part of DMS. Transfer of infection from one part of the body to another part of the same body. The standard otoscope or whatever it's called has a disposable paper cover for the cone that fits in the outer ear. After looking in my infected ear (outer-ear infection; I guess that qualifies as a sports injury if you catch it in an Olympic pool), my doctor went around to check the uninfected ear. ``Shouldn't you change that?'' ``No, the infection won't transfer.'' Supposing for the sake of argument that he's wrong, I wonder: is infection transferred from one part of the body to another part of the same body by the good offices of a physician properly ``autoinfection,'' ``iatrogenic infection,'' or what? And is the physician a ``vector'' or the 'scope a ``vehicle''? (An auto? BTW, the word transfection refers to something else entirely.) The last time I had a check-up, I asked him (same doctor) why he was examining my ears. What was he actually looking for? He said he was looking for my brain; if it wasn't there he'd be able to see straight across. If I'd had a brain I would have pointed out that in that case, there was no need to check on both sides. The Divinyls had a hit with ``I Touch Myself.'' The middle line of the chorus is ``When I think about you I touch myself.'' Sort of like doing push-ups, I suppose. You know, the three main forms of plague -- bubonic, pneumonic, and septic, in increasing order of how soon an obituary may be needed -- all result from infection by the same bacterium (Yersinia pestis). They differ essentially in where they are or start out, and one kind can turn into another. Similarly, pulmonary tuberculosis (the usual TB), scrofula, and a host of other unpleasant diseases can all arise from the same bacterium, Mycobacterium tuberculosis. Some of these diseases, however, can be caused by other similar bacteria. Scrofula in children is usually caused by Mycobacterium scrofulaceum or Mycobacterium avium. Spontaneous ionization of a motor vehicle occurring in equilibrium, or the same process occurring with something other than an auto. The reaction H2O --> H+ + OH- is a common example of autoionization. automatic camp-on You stay on a line that rings busy, and when your called party hangs up, your call rings through. I could use this to call some people. A Spanish word meaning `able to care for oneself.' Effectively an antonym of the English word invalid. AUTOmatic VOice Network. This US military network was activated in December 1963, and became the principal long-haul, nonsecure voice communications network within the Defense Communications System. It eventually became a part of the Defense Switched Network (DSN), the replacement system activated in 1990 to provide long-distance telephone service to the military. You can get more information about this system from the ``touch tone dials'' page at telephonetribute.com and by following links from the AFCA home page. When I worked at military labs in the 1980's, my desk phone was always part of AUTOVON. I could call out of the network (and most of my calls off base were off network as well). When calling people at other government labs, I had a choice: I could call their regular number (seven-digit number, preceded by an area code if different from mine) or I could call them within AUTOVON, in which case I always dialed a seven-digit number.
The last four digits of the AUTOVON number were the same as the ordinary phone number, and the first three digits essentially identified the military site. There was a slight preference for calling within AUTOVON when possible, simply for budget reasons. Otherwise, for low- or non-ranking people like me, AUTOVON was not noticeably different from the regular civilian phone network. AUTOVON, derived from the Army's Switched Circuit Automatic Network, was in fact designed to provide the Department of Defense with an internal telephone capability functionally equivalent to toll and Wide Area Telephone Service (WATS) calls. However, it was also designed to provide precedence preemption for high-priority (much-higher-priority-than-me) users. This was implemented with a fourth column of keys, the fourth (1633-Hz) column shown at the DTMF entry. The column, labeled A/B/C/D from top row to bottom row there, had keys labeled FO/F/I/P, for Flash Override, Flash, Immediate, and Priority. (Also, the octothorpe key was labeled A.) Higher keys had higher precedence, and pressing one had the effect of pre-empting any lower-precedence call that was in the way. (The precedence below ``priority'' was ``routine.'') Phones with higher-precedence keys that were functional were available only to higher ranks in the military chain of command. With a few exceptions (POTUS, Sec'y of Defense, Joint Chiefs of Staff) those with access to them were only authorized to press those keys for specific levels of emergency. Here's some more detail. ATM User-to-User. Autonomous Under{sea|water} Vehicle. A self-propelled submarine robot, intended to function with minimal control input. AUV's are still mostly experimental. Cf. ROV. French phrase meaning à les (in French). This glossary entry is on the very cusp of futility: only a vanishingly small fraction of French-nonspeakers have the requisite level of ignorance to benefit from it, and those few wouldn't know to look here. Perfect! Of course we're not going to give the English. Apple UniX. The license plate number of the Rolls Royce Phantom III ('37) belonging to Auric Goldfinger, in the 1964 James Bond movie Goldfinger. Goldfinger was played by Gert Fröbe (credited as Gert Frobe). Goldfinger is the chief villain in this one, of course. Do I really have to explain this? Gold cation of valence 1 (Au1+) is aurous. Auric is valence 3 (Au3+)! Honestly, sometimes I think you people don't even care. Also in that movie, Honor Blackman plays the role of Pussy Galore. Somehow I think that when her parents were considering names, the future they imagined for her was nothing like being a Bond woman. (Particularly as she was born in 1927, and Ian Fleming didn't invent James Bond until after he retired with the rank of Commander from WWII service in British Naval Intelligence.) Air-to-Vapor (mass ratio). Mechanical engineers seem to prefer to call this a ``weight ratio.'' Cf. AF. Alleged Vegetarian. Appendix Virgiliana. Authorized Version (of the Bible in English). For a very long time that was the KJV. There's an old saying that a translation is a commentary. There's a Bible commentary called The Unauthorized Version, by Robin Lane Fox. Academy of Video Arts & Sciences. Australian Veterinary Association. It ``is the professional organisation representing veterinarians across Australia.'' 1. A verb meaning `be of use.' It means just that as an intransitive verb. The construction ``avail oneself of'' means for one `to take advantage of.' (Similarly with myself, yourself, etc.) 2.
A noun that is apparently short for `speaker availability.' (Availability, of course, is a noun constructed on the adjective available, from the verb avail. It's crazy, but I love this stuff.) Chris Suellentrop did a series of ``Dispatches from Campaign 2004'' for Slate. His September 8 dispatch included this: ``It's been more than five weeks since Kerry last took questions at a press conference, or an `avail,' as it's called.'' Avance Logic, Inc. Makes video and audio chips. Homepage has petulant blinking. I'm not sure in what year I wrote the preceding part of this entry. I checked back in late 2004: no more blink; no more Avance, either. Association of Veterinarians for Animal Rights. American Voter Coalition. Association of Visual Communicators. Look at me when I talk to you! Atomic Vapor Cell. Automatic Volume Control. Isn't it fun to speak progressively more softly, so people lean toward you, and listen real hard, and then suddenly to shout at the top of your lungs so their ears hurt? No? Killjoy. (UK) Association for Veterinary Clinical Pharmacology and Therapeutics. Audio-Visual Copyright Society, Ltd. ``Based in Australia, serving the world.'' AVDP, avdp. Alta Velocidad Española. `Spanish [.es] High Speed [train].' A 300 kph TGV derivative operated by RENFE. Cf. ave. AVErage. Try to use this only if you can avoid capitalizing the a, so it isn't mistaken for an abbreviation of some oddly named avenue. In fact, avoid it altogether and use avg. Spanish, `Bird.' See also AVE. [Football icon] Ave Maria Latin, `Hail Mary.' Name and first words of a common Roman-Catholic prayer. A desperation football pass. English for AViation. Cf. ESP, EAV. Avestan. Makes you wonder why they bother to define an abbreviation. American Volunteer Group. Better known as the Flying Tigers. This was a group of personnel (pilots and ground crew) released from active duty in the air forces of the US Army and Navy, serving as volunteers on the Chinese side in the Sino-Japanese war. The group was formed by Colonel Claire Chennault. Chennault had retired from the USAAC as a captain in the 1930's and was appointed to command the largely nonexistent Chinese air forces by Chiang Kai-Shek, leader of the Nationalist Chinese government. The AVG flew Curtiss P-40B fighters purchased by the Chinese government under a special arrangement with Curtiss-Wright. (The British had taken over a French order for P-40B's after the fall of France, and Curtiss had six assembly lines working on the order. Under an arrangement proposed by Curtiss Vice-President Burdette Wright (an old friend of Chennault), the British waived priority on 100 P-40B's rolling off one of those lines, allowing them to be sold to China. In return, Curtiss added a seventh line and delivered later-model P-40's to Britain that were more suitable for combat.) The P-40's used by the AVG were less maneuverable than Japanese Zeros, and they had crude gunsights, but the Tigers developed tactics that allowed them to achieve impressive kill ratios. After the Japanese surprise attack on Pearl Harbor that brought the US into WWII as an active combatant, the Flying Tigers' success was one of the few bright spots in a Pacific war that was starting out badly for the US. (In this connection also, recall James H. Doolittle.) Chennault's status was rather irregular and his command a bit informal. 
According to a history page at the self-described official site, he was originally invited to China in 1937 by Madame Chiang, on a three-month mission to make a confidential survey of the Chinese Air Force, and his official status until the US entered the war was always a subject of speculation. ``Chennault himself states [probably in his Way of a Fighter] that he was a civilian advisor to the Secretary of the Commission for Aeronautical Affairs, first Madame Chiang and later T.V. Soong. ... Even while he commanded the American Volunteer Group in combat, his official job was adviser to the Central Bank of China, and his passport listed his occupation as a farmer.'' In July 1942, the AVG was incorporated into the USAAF, and Chennault was promoted to brigadier general. Chennault had great publicity, close connections with FDR and the White House, and a good relationship with Gen. Chiang Kai-Shek. In October 1942, he wrote FDR that with just 105 more fighters, and 30 medium and 12 heavy bombers, he could win the war by gaining air superiority and destroying Japanese shipping and industrial production. It's not clear how much of this wooly optimism FDR bought into, but Chiang's ground forces (could they even be called an army?) weren't engaging the enemy, so this approach had its attractions. In late spring 1943, Chennault was given command of the US Army's newly formed Fourteenth Air Force, and priority on supplies airlifted from India. The 14th underperformed. Chennault was eased out of command after FDR died. When the war ended in 1945, ten AVG pilots formed an air cargo company called Flying Tiger Line, originally flying Conestoga freighters purchased as war surplus from the United States Navy. It achieved a number of firsts, and after acquiring its rival cargo airline Seaboard World Airlines on October 1, 1980, it surpassed Pan Am as the world's largest air cargo carrier. As it happens, my uncle Robert flew for them in the late 1970's or early 1980's. In 1989, the company was purchased by FedEx. AVeraGe. Plural avgs. Singular also abbreviated ave. (deprecated). AViation GASoline. Advanced Video Guidance Sensor. NASA designation of a device developed for DART that gathers navigation data by capturing reflections from laser beams directed at an object at close range (within 500 meters), using them to compute relative bearing, range, and attitude. (Though not all at the maximum range. Range and attitude -- relative orientation of target craft -- were expected to be available only within 200 meters. I'm not sure they know if that's so yet.) Ambulatory Visit GroupS. Academy of Veterinary Homeopathy. The content of what ought to be the homepage has me a bit disoriented, but anyway I'm glad that even ducks can have a dose of quackery. Advanced Very-High-Resolution Radiometer. Association for Veterinary Informatics. Audio Video Interleaved. Advancement Via Individual Determination. Antelope Valley Internet Dialers. ``The Internet User Group for the Antelope Valley.'' Judging from the map on their home page, it appears that Antelope Valley is located on earth, and probably not in Antarctica. Oh, here's something: meetings are held in Lancaster, CA. Also, there are no meetings until further notice. Audio-Visual Information Systems. Latin, `bird.' Well known, of course, from the expression rara avis, `rare [i.e., strange] bird.' The Latin word avis became ave in Spanish, so the Latin prayer Ave Maria would sound like `Mary bird' in Spanish, to anyone who didn't know that it doesn't mean that. 
Spanish noun meaning `advertisement' and verb meaning `I notify, alert.' Spanish, `visualize, envision.' I think this may be primarily a Latin American usage. If the English verb eviscerate had a close cognate in Spanish, it would be eviscerar, which in Latin America would sound close to avisorar, except for the initial vowel. Association of Visual Language Interpreters of Canada. This appears to be one of those unrequitedly bilingual organizations. (Here ``bilingual'' and ``one of those'' are both meant in the Canadian sense or context. Then again, maybe not.) The old AVLIC logo featured a Canadian maple leaf (well, maybe a stylized sugar maple leaf; I'm no naturalist) and the text ``AVLIC/AILVC.'' The new logo has a more naturalistic maple leaf dotting the letter i of a lower-case ``avlic.'' Also, the English name of the organization is spelled out along the bottom, either alone or above the French version. To be fair for a change, I should probably note that there's a good reason why AVLIC/AILVC seems not to be well-represented in French-speaking parts of Canada, and why there is no provincial AILVC chapter for Quebec. According to the AVLIC Mission Statement, AVLIC is ``a national professional association which represents interpreters whose working languages are English and American Sign Language (ASL).'' (That is, they interpret between ASL and English.) Atomic Vapor Laser Isotope Separation. ArterioVenous Malformation. Here's a support page. Audio Video and Multimedia. Automated Valuation Model. Used by expert systems to generate assessments -- in real estate, at least. Automatic Vehicle Monitoring. Normally refers to remote monitoring of road vehicle location. American Veterinary Medical Association. The main publications of the AVMA are the Journal of the American Veterinary Medical Association (JAVMA) and the American Journal of Veterinary Research (AJVR). Arkansas Veterinary Medical Association. Cf. the national AVMA. American Veterinary Medical Foundation. Audio Video and Multimedia Services. Australian Vaccination Network. A currency subunit used in Macao. The basic currency unit is the pataca, equal to 100 avos. Macao is a former Portuguese colony, and avo is a much-shortened form of Portuguese oitavo, `eighth.' I think this is cute because the original word has been not merely shortened, but shortened almost to its semantically least significant component -- essentially an inflection. It's like shortening eighth to th. Similar radical shortenings (radical eliminations, literally) in European languages include auto, bil, and uncle. More generally, Japanese has a lot of much-shortened loans from European languages, particularly English. For some examples, see the perm entry. Arginine VasoPressin. Plays a rôle, along with the renin-angiotensin system and natriuretic hormones, in water homeostasis. Why can't they make a beer that doesn't take you to the bathroom? Is the current scheme a safety feature? Assistant Vice President. Association Variose pour la Promotion de la Sidénologie. The same organization serves a more Englishy site where they explain AVPS as the ``Fundation for AIDS Research & Care.'' (``Thi site is first intended to professionals,'' dontcha know.) Aortic Valve Replacement. { Adult | Age } Verification Service. You say you're over eighteen, eh? Then you must have a what -- VISA, MasterCard, American Express? What's the number? Expiration date? Hmmm... Looks like you're good! 
Justreadtheagreementand SIGN HERE FOR YOUR ``FREE PASS'' TO OVER 200,000 HARD-CORE SITES! American Vacuum Society. Really, nature does not abhor a vacuum -- it's the pressure outside that pushes stuff in. The first time I wore my ``Nature abhors a vacuum tube'' tee shirt to work (in 1994 or thereabouts), a student objected! Anti-Virus Software. I should probably warn you that the editor of this glossary had a cold in March, and in April the compiler came down with probably the same rhinovirus. The two are in frequent email contact, and these emails affect what you read on your computer! You shouldn't be too worried, but if I were you I'd wipe the screen and the keyboard, just in case. Heck, wipe the file system -- you can never be too careful. Use some Listerine on the speakers, too, and any other oral cavities on your PC. Application Visualization System. Association of Vision Science Librarians. It's ``an international organization composed of professional librarians, or persons acting in that capacity, whose collections and services include the literature of vision.'' Amphibious Vehicle, Tracked. AudioVisual Terminal. Automated Voice Technology. AutomobilVerkehrs- und -Übungsstraße. (I.e., AutomobilVerkehrsstraße und AutomobilÜbungsstraße.) German `Automobile-Traffic Streets and Test Tracks.' Formerly Rennstrecke für Autorennen in Berlin (`Racetrack for Car Races in Berlin'), now a part of the Autobahn system. That's about how people drive on the Autobahn too. Arbitrarily-Varying WireTap Channel. (Domain code for) Aruba. The principal export is homeward-bound tourists. The official languages are Dutch and Papiamento. Papiamento written looks like Spanish with spelling slightly adjusted -- less different from Castilian (the Iberian language called ``Spanish'' in English) than Catalan is -- plus a number of Dutch words. Aruba is a Dutch possession. On April 29, 2003, Queen Beatrix of the Netherlands knighted Aruba native Sidney Ponson. At the time, he was a 43-54 career pitcher for the Baltimore Orioles, with a 4.74 ERA. He had never had a winning season. In the subsequent three months, he caught fire, racking up a 12-5 record with a 3.45 ERA. He turned down a $21 million 3-year deal and at the July 31 non-waiver trade deadline he was dealt to the San Francisco Giants for pitchers Kurt Ainsworth, Damian Moss and Ryan Hannaman. In San Francisco he was only 3-6, but had a 3.71 ERA. In the off-season, Baltimore lured him back for $22.5 million over three years. You know, the sports analysts talk about his not giving up the long ball so much in 2003, and mental toughness and rotator-cuff injuries and controlling his weight -- what a crock! Pitching is a science, like astrology and psychology. He just got psyched by the knighthood. After ten games in 2004, he's 3-7 with an ERA of 6.47. Addison-Wesley or Addison-Wesley Longman, or Addison-Wesley Publishing Group. Can you say ``assignment agreement''? Sure you can! A chain of root beer stands named after the founders -- Roy Allen and Frank Wright. It was the earliest restaurant franchise. ``Another World'' An NBC daytime soap opera. Another homepage, with links to NBC's. Occurs in email subject headers. Apparently stands for Antwort (German: `answer'). Application Whatnot. Okay, I confess, I made it up. A moment of weakness. ArtWork. Typesetters' abbreviation. American Whitewater Affiliation.
``[T]o conserve and restore America's whitewater resources and to enhance opportunities to enjoy them safely.'' See some relevant phonological thoughts at the AWWA entry. American Women's Association. An American expats' mutual support group. Similar organizations go by various similar names (American Women of ..., American Women's Club of ..., American Women's Organization of ..., etc.). The umbrella organization is FAWCO. See also AWA Singapore, which serves a page of AWA links in various countries. Animal Welfare Act, originally enacted in 1966. In amendments passed in 1970, the USDA is instructed to conduct an annual lab-animal census. They counted 1,213,814 in 1998. Such precision! What day was that? Uncertainties concerning what constitutes an animal under that law were resolved by Secretary of Agriculture Clifford Hardin, who exercised his administrative authority to exclude rats, mice, and birds. These together make up anywhere from eighty to ninety-eight percent of warmblooded lab animals, depending on which interested party's estimate you believe. The AAVS filed suit against the USDA in 1999, maintaining that the original intent of the legislation was to include them. It's a good thing no one is proposing counting fruit flies or flatworms. Here was the USDA's breakdown for 1998:
Oooh! Bunnywabbits: 287,523
Guinea pigs: 261,305
Other Animals: 142,963
Other farm animals: 53,671
``Other animals'' includes ferrets, woodchucks, armadillos, chinchillas, horses, spotted hyenas, and opossums. The categories are given above in the order in which the USDA presents them. If you don't like that order, then you could try suing the USDA. A few groups were, as you would expect, unhappy with the decision to exclude the most common lab animals. They took the usual multi-track approach -- direct petition, indirect pressure, lawsuit. On October 6, 2000, a lawsuit brought against the USDA by the ARDF was dismissed by US District Court Judge Ellen S. Huvelle. Airborne Warning And Control System. An electronically very souped-up Boeing 707. [Pronounced ``AY-wax.''] Alert, Well, And Keeping Energetic. The American Sleep Apnea Association (ASAA) organizes local support groups called A.W.A.K.E. groups in the fifty states and D.C., and in the seven Canadian provinces that have a land border with the lower 48 states. (Those seven turn out to be all the Canadian provinces that have a land border with any part of the territory of the US, because the Yukon Territory, oddly enough, is a territory and not a province.) Some of the groups have websites. This page leads to contact information for all groups in the A.W.A.K.E. Network. A nonglossy magazine published by the Jehovah's Witnesses, for its missionaries to hand to prospects. The Gideons leave a whole Bible on its back in your hotel room, but not even one missionary in that position. ``The week of March 14-20 2004 has been declared Severe Weather Awareness Week by the Governor of the State of Indiana and by the Commissioners of St. Joseph County.'' This isn't getting off to a very good start -- I didn't find out until the week was two days old. I guess I missed the first announcement on account of the wild festivities for Einstein's 125th birthday. ``As part of Awareness Week, the State Emergency Management Agency and the National Weather Service will be conducting two `Test Tornado Warnings' between 2:00PM-2:30PM and between 7:00PM-7:30PM, Wednesday, March 17, 2004.'' March 17th in St. Joseph County, home of the Fighting Irish.
If you think the Einstein shindig was big... ``Should actual severe weather be a threat on March 17, the testing will be held on March 18.'' It's reminiscent of the day of the Doolittle raid in Tokyo. You know, this whole awareness thing was so memorable that the next year when I ran across the forgotten old email announcing it, I created an entirely new entry for it (contrast). I may be stuck in a rut, but I have deleted the announcement. awareness months Various organizations lay claim to portions of the calendar for propaganda purposes. They usually take a day, a week, or a month. Most such designations seem, individually, to be useful or at worst anodyne. To politicians, it looks like a cheap way to satisfy constituents and look public-spirited into the bargain. Thus, it's easy to get lawmakers to vote, and chief executives to proclaim, that these designations are official lah-dee-dah. Therefore we'll pretty much ignore that. Many of these observations, celebrations, PR events or what-have-you's have names that include ``Awareness Month,'' and many don't. Months claimed in connection with health issues are frequently named ``<Foobar> Awareness Month'' or ``<Foobar> Safety Month.'' Many related to group pride or solidarity of one sort or another get names like ``Heritage Month'' or ``History Month.'' Just to shake things up, some group is bound to rename its ``<Foobarian> Pride Month'' ``<Foobarian> History Awareness Month.'' And on the other side, the shills for research on one or another disease will discover that the victims live in shame, requiring ``Oblong Somitis Incognita Awareness Month'' to be rechristened ``OSI Pride Month.'' In short, I don't think the distinction between awareness months and pride months, say, is a sharp one, so I'm going to use this entry as a central repository for designated months, however designated. The entries for awareness days (eventually) and awareness weeks will function similarly. There aren't a lot of awareness trimesters or awareness fortnights, although Prevent Blindness America does sponsor a 61-day ``month'' (see PBA). I can google up at most tens of thousands of awareness weekends, versus millions of weeks and months. Most designated months coincide with calendar months. This is a sensible approach, since ``October is Breast Cancer Awareness Month'' is a little more memorable than, for example, ``The 31 days following the fifth day after the fourth Thursday in September are Breast Cancer Awareness Month.'' In order to discourage the sensible practice, I'll go out of my way to provide more extensive publicity -- a whole entry, say -- when I become aware of month-long awareness months that don't coincide with calendar months. The only one I have an entry for just now is Hispanic Heritage Month. (``National,'' as in ``National Holiday,'' is the frequently elided first word in the official names -- as they occur in the presidential proclamations -- of many of the heritage and history months.) I'm going to have to automate this. It's too much. In connection with the business of aligning awareness months with calendar months, let me note this: When Comte created the Positivist Calendar, even though he made 28-day months and intercalated five or six year-end days that had no weekday correspondences (so that the rest of the year, days of the week corresponded to date mod 7), he did align the years. (Year 1 coincided with year 1789 of the Gregorian calendar, naturally.) 
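Since the entry above jokes about floating-date rules (``The 31 days following the fifth day after the fourth Thursday in September''), here is a minimal sketch of how such a rule is actually computed. It uses only the Python standard library; the function name and the year chosen are my own illustrative choices.

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) of a month."""
    d = date(year, month, 1)
    # Days until the first occurrence of the target weekday.
    d += timedelta(days=(weekday - d.weekday()) % 7)
    return d + timedelta(weeks=n - 1)

# The fourth Thursday in September 2004 (Thursday = 3):
anchor = nth_weekday(2004, 9, 3, 4)
# "The 31 days following the fifth day after" that Thursday:
start = anchor + timedelta(days=5)
end = start + timedelta(days=30)
print(start, "through", end)
```

Which is exactly why the sensible practice is to align awareness months with calendar months.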
awareness weeks Awareness weeks are the young of awareness months, so go to that entry for information about the species generally. Here's a list of awareness weeks that (a) I am aware of or (b) I was aware of: Automated Work Administration System. Afrikaner Weerstands Beweging. Afrikaans: `Afrikaner Resistance Movement.' A neonazi party in South Africa, led by Eugene Terreblanche, sentenced to six years in prison for the attempted murder of a black man, who was paralyzed in the beating. The party flag is essentially the same as the flag of the National Socialist (Nazi) Party of Germany (black device on white disc on red field), except that the four-armed black swastika is replaced by a three-legged black triskelion. Supposedly, this emblem represents three sevens. Auto White Balance. All-Wheel Drive. Hey, just try driving without one. AWD on a vehicle with four wheels sounds like it ought to be equivalent to 4WD, but it's not. 4WD includes ``low-range'' (high torque) gearing for deep mud or snow or steep grades. A 4WD must be stopped or slowed to a crawl to shift in or out of low range (done by toggling a switch or lever). AWD is power to all wheels, but without the special gearing. AWD, .awd At Work Document. Microsoft-defined file type and filename extension for a compressed bitmap format used for faxes. Specifically, an OLE compound object file that stores bilevel (B&W) facsimile data. The compression algorithm used in AWD is not published, but is based on CCITT Group 4. Active Wavelength Demodulation System. Advanced Warfighting Experiments. Asian Weightlifting Federation. American Wire Gauge. A set of numbers designating (US) standard wire thicknesses. Arbitrary Waveform Generation. Array Waveguide Grating. American Wire Gauge. Additive White Gaussian Noise. Not very realistic sometimes, but a mathematically tractable and convenient model for the systematic analysis of linear systems. Are We Going To Have To Go Through All { That | This } Again? Are We Going To Have To Go Through { That | This } Again? Alert With Info. The Strawberry Statement collects the scattered thoughts of James Kunen, a 60's student radical at Columbia University. (Bibliographic details at the AAHM entry.) It's written in diary style, so I can tell you that on a Tuesday, July 16, 1968, the author visited the programming director at WABC radio in New York City. The two had a mutually unsatisfactory meeting, but agreed that there was some news content on the mostly-music-format WABC-AM, in the form of two newscasts per hour. Kunen felt these were insufficiently detailed, and characterized them for the book: ``Canada is still sinking and the Russians have bombed Detroit, now back to the Show.'' Animal Welfare Information Center. I'm out of work. Can my dog get food stamps from Animal WIC? No, AWIC is part of the National Agricultural Library. Advanced Weather Interactive Processing System. Association for Women in Science. Copyeditor's abbreviation for awkward. [This glossary entry is just begging for a juicy example, isn't it?] A pattern-matching utility in Unix. Named after the last initials of its creators Al Aho, Peter Weinberger, and Brian Kernighan. Kind of a batch version of sed. Depending on your release, this may differ from nawk (New awk). Michael Neumann's extensive list of sample short programs in different programming languages includes a couple of awk programs. Animal Welfare League. A simple tool -- something like an ice-pick -- for making holes in leather.
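To illustrate the AWGN entry above: a minimal sketch of adding white Gaussian noise to a signal at a chosen signal-to-noise ratio. NumPy is assumed to be available, and the function name is mine, not a standard API.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise at the requested signal-to-noise ratio (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal ** 2)            # average signal power
    p_noise = p_signal / 10 ** (snr_db / 10)   # noise power implied by the SNR
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)    # a 5 Hz tone
noisy = add_awgn(clean, snr_db=10)   # same tone, 10 dB above the noise floor
```

The mathematical convenience the entry mentions is visible here: the noise is fully specified by a single variance, independent of frequency.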
An ice-pick usually has a long handle like that of a screwdriver. An ice-pick applies impact force; it is held in the fist, about as a dagger is held. An awl applies steady pressure to a precise point; its handle has a blunter end that can be cupped in the palm. All the awls I've seen, anyway. Nowadays, shoe repair and manual shoe manufacture have gone the way of cobblestones. I suspect that most English-speakers' first encounter with the word awl, or even with the concept, occurs in Shakespeare's tragedy ``Julius Caesar,'' in the punny opening scene. Sadly, the standard (Schlegel) German translation is missing this bit. It wouldn't have been hard to recreate the pun: English awl and all can be translated to Ahle and alle. (The respective initial vowels here are short and long in quantity, but these are close enough for a good pun -- especially with a good actor's pronunciation.) Air and Waste Management Association. American Wholesale Marketers Association. Ancient World Mapping Center. American Women's Organization of Greece. Absent WithOut Leave. This is a US military acronym, but even outside the military, I think it is one of the best known of military acronyms. The writer of an AP news item distributed September 8, 2004, seemed to think it necessary to define it (incorrectly, of course, as ``Away Without Leave''). It's also occasionally expanded as ``absent without official leave,'' but in the military usage it is implicit that leave must be granted offically, or rather by a commanding officer. The way the Oxford Dictionary of the US Military handles this is to expand it as ``absent without (official) leave.'' They claim the acronym came into use in the 1920's, but I think it was already in use during WWI. Various American soldiers AWOL from their units during one or another World War are complaisantly mentioned by Gertrude Stein in some of her books. Ancient World OnLine. Ancient World On TeleVision. The Association of Writers and Writing Programs. It's hardly surprising that there'd be some association. Average Wholesale Price. Arab World for Research & Development. It's ``an independent research center (registered with the Ministry of Economy)... works in social political and economic research and development... highest standards in research methods including surveys, opinion polls, focus groups, in-depth interviews, and case studies.'' It conducts projects throughout the Arab world, but it seems to be based in Morocco. American Welding Society. Automatic Warning System. Now installed on most British railway lines; first used in 1948. By each signal there is one permanent magnet and one electromagnet that is energized when the signal is green. When the train passes the signal, a bell sounds in the driver's cab if it's green, and a horn otherwise. When the horn sounds, the driver must push a button within a few seconds or else the brakes will be applied. Since the 1950's there has also been a mechanical visual display which changes to a sunburst pattern when the button is pushed, and to plain black when the bell rings. Such a system is called ``fail-safe'' because its failure modes are designed to be safe. For example, in a power failure, the electromagnet goes off and the system signals to stop; if the brakeman is incapacitated, the brake goes on automatically. A common way for fail-safe systems to fail to perform safely as designed is by being turned off. 
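A toy sketch of the AWS acknowledgment loop just described. The class name and the timing constant are illustrative inventions, not the actual railway specification.

```python
class AWSToy:
    """Toy model of the fail-safe acknowledgment loop described above."""
    ACK_WINDOW = 2.7  # seconds; illustrative, not the real spec

    def __init__(self):
        self.brakes_applied = False

    def pass_signal(self, green, acknowledged_after=None):
        if green:
            return "bell"  # all clear, no action required
        # Horn: the driver must acknowledge within the window...
        if acknowledged_after is not None and acknowledged_after <= self.ACK_WINDOW:
            return "horn acknowledged; sunburst display"
        self.brakes_applied = True  # ...or the brakes go on automatically.
        return "horn unacknowledged; brakes applied"

aws = AWSToy()
print(aws.pass_signal(green=False, acknowledged_after=1.0))
print(aws.pass_signal(green=False))   # incapacitated driver
print(aws.brakes_applied)             # True
```

Note that the safe outcome is the default: doing nothing brings the train to a stop, which is the design sense of ``fail-safe.''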
In the Jethro Tull song `Locomotive Breath,' Ian Anderson sings something like
old Charlie stole the handle
and the train it won't stop going
no it couldn't slow down
For more railway-related songs, visit this chronological listing with comments or this alphabetic list. The word fail-safe came into popular use with the novel Fail-safe, by Eugene Burdick & Harvey Wheeler, (NY: McGraw-Hill, 1962). This story of accidental nuclear war was published during the Cuban missile crisis and was made into a movie of the same name (Dr. Strangelove without the yuks). Aviation Week and Space Technology magazine. Abstract Window Toolkit. Provides the Java GUI. Contained in the java.awt package. (A package is a collection of importable classes. Don't you just love the uneven level of detail you get in this glossary?) Common abbreviation for Shakespeare's play All's Well That Ends Well. American Water Works Association. ``[A]n international nonprofit scientific and educational society dedicated to the improvement of drinking water quality and supply. Founded in 1881, AWWA is the largest organization of water supply professionals in the world. Its more than 50,000 members represent the full spectrum of the drinking water community: treatment plant operators and managers, scientists, environmentalists, manufacturers, academicians, regulators, and others who hold genuine interest in water supply and public health. Membership includes more than 3,700 utilities that supply water to roughly 170 million people in North America,'' including Mexico, where the word for water (agua) sounds more like awwa than it looks, because the g in Spanish is velar. (The Spanish word is derived from the Latin aqua; for a similar pun on this, see OCWA.) The consonantal w is a glide, and if one purses the lips slightly when pronouncing it, one produces a bilabial sound that is represented by a beta in the IPA, and which is the usual sound of b in Spanish. It is therefore not surprising that in ordinary speech, the velar g and bilabial b of Spanish sound similar. This has led to some orthographic changes. For example, in Cervantes's original text, the word for `grandmother,' now spelled abuela, was spelled aguela. For some discussion of the Modern Greek g (gamma), see the galaxy entry. Haestad Methods sponsors a number of electronic discussion groups related to water works. See their forums page for information about WaterTalk, SewerTalk, StormTalk, and GISTalk. They also sponsor a Spanish-language version of WaterTalk, called AquaForo. American Water Works Association (AWWA) Research Foundation.
Aww, mama, can this really be the end?
To be stuck inside of Mobile,
With the Memphis blues again.
Refrain of ``Stuck Inside Of Mobile With The Memphis Blues Again.'' First released by Bob Dylan on ``Blonde on Blonde'' (1966). A Webpage Wasted On Tom Lehrer. This GeoCities site has been deactivated due to inactivity. Are you the site owner? Click here to reactivate your site. There was also A [now defunct] Webpage (Wasted) On Tom Lehrer. Maybe it was related content. The names allude to his 1959 album, ``An Evening Wasted with Tom Lehrer.'' Asociación World Wide Web Argentina. (A translation? Hmmm. Let's see if we can guess something here... maybe, em, could be sort of rough, but, uhh, well, something like ``Argentine WWW Association''?) Architecture eXtended. (Antediluvian PC/AT term.) Axe, hatchet. Advanced X-ray Astronomical Facility. Airborne eXpendable Current Profiler. Another one of those secret North Germanic acronyms, like KLM.
Its expansion is probably an off-color inside joke, but ... ``The AXE system is Ericsson's core switching platform for all narrowband and wideband public network switching applications well into the [twenty-first] century.'' axial lead Refers to a cylindrical two-lead electrical package with one lead coming out of the center of each end. Cf. radial lead. An obvious or generally accepted proposition. The word reached English via French axiome < Latin axioma < Greek axíôma, `that which is worthy or fit.' Probably the best-known statement of an axiom is the first sentence of chapter I in Jane Austen's Pride and Prejudice: ``It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.'' Axioms explicitly so-called occur most often in mathematics. Most high-school students used to make the acquaintance of axioms, even if they did not come into a friendly relationship with them (i.e., even if they didn't exactly become familiar) in standard one-year courses in formal geometry. That was before high-school geometry courses were abased by mathematics-hating ``teachers'' and other saboteurs of children's education, who adopted wretched books full of time-wasting pictures and geometry-related stories with a very optional afterthought chapter or two about proofs at the end. Euclid's geometry text taught rigor of thought to over twenty centuries'-worth of schoolboys. Euclid made a distinction between axioms and postulates, explained at the postulate entry. Anomalous X-ray Pulsar. Academic Year. Here're the AY calendars for UB in 1995-1996 and 1996-1997. Alpha Youth Athletic Association. Funded by the Borough of Alpha, New Jersey. All You Can Eat. Common abbreviation for Shakespeare's play As You Like It. Ask Your Local Orthodox Rabbi. (Also: ordained rabbi.) It's a lot faster than wading through the enormous Judaism FAQ. Same as CYLOR. You have my permission to pronounce this like the word its very creation suggests. A simple two-dimensional locally-anisotropic lattice-gas model (for CuO-plane superconductivity) with nearest- and Next-Nearest-Neighbor Interactions, originally proposed by D. de Fontaine, L. T. Wille and S. C. Moss in Phys. Rev. B, vol. 36, pp. 5709ff (1987). I'm not sure if the author list includes the name of the graduate student whose job was to carry the acronym expansion tools. America's Youth on Parade. ``There's no twirling spectacular quite like AYOP. It brings together the best baton twirlers, teams and corps in the world for a series of National and World Open Championship contests - all under one umbrella. It can be appropriately called the `World Series of Baton Twirling' ... sanctioned by the NBTA INTERNATIONAL.'' And where are AYOP events held??? That's right -- they're ``held [every year in July] in the spacious, air conditioned Notre Dame University Athletic and Convocation Center (JACC)''!!!! Hip-hip hooray! Hip-hip-hooray! Hooray! Hooray! Go! Fight! Win! Hip-hip hoo--what? Oh, it's not cheerleading? Better go to the majorette entry (once it exists) and learn more. Adequate Yearly Progress. Under the terms of the NCLB Act, federal (US) funding depends on demonstrated AYP. Measures of AYP, in order to be considered valid for NCLB purposes, must have a 95% student participation rate. (There are easy ways around this requirement, I think. When similar state-level legislation was implemented in Texas, large numbers of the poorest-performing students were recategorized as learning-disabled or encouraged to drop out and enroll in GED programs, and some exam papers were doctored.) Arizona. USPS abbreviation.
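To illustrate the lattice-gas entry above (the de Fontaine-Wille-Moss model): such models assign an Ising-style energy to a 2-D array of occupation variables with nearest- and next-nearest-neighbor couplings. The couplings j1, j2 and the sign convention below are generic illustrative choices, not the parameters of the cited paper.

```python
import numpy as np

def nn_nnn_energy(spins, j1=1.0, j2=0.5):
    """Energy of a 2-D +/-1 array with nearest- and next-nearest-neighbor
    couplings, periodic boundaries; each bond is counted once."""
    nn = np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1)
    nnn = (np.roll(np.roll(spins, 1, axis=0), 1, axis=1) +
           np.roll(np.roll(spins, 1, axis=0), -1, axis=1))
    return -np.sum(spins * (j1 * nn + j2 * nnn))

rng = np.random.default_rng(0)
config = rng.choice([-1, 1], size=(16, 16))
print(nn_nnn_energy(config))
```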
The Villanova University Law School provides some links to state government web sites for Arizona. USACityLink.com has a page for Arizona. Arizona is a community property state. The US is the world's second-largest copper producer after Chile. Each produces about two million tons a year. You might ask: if they both produce about that much, and if production varies by maybe 10% year-to-year (how did you know that?), then how come Chile is consistently first and the US consistently second? Go ahead, ask, I can answer. The reason is, production is driven by the market. In a year with high demand, prices go up and production everywhere increases, so while the overall numbers vary a lot, the ratio of production between major producers varies less rapidly. Part of how this works is that the cost of extraction varies for different sources. At any given time some sources are not worth using. When prices increase, it becomes profitable to use those higher-cost resources. Major producing countries like the US and Chile have a number of such mines, so production by both varies with world demand. Some statistics show this kicking-in of higher-cost resources. In the US, Arizona has the richest and most economically efficient copper mines, and in a typical year between a half and two thirds of US production comes from Arizona. When demand is low and increases rapidly, most of the extra production comes from Arizona, which has ready excess capacity. On the other hand, when demand increases steadily, Arizona's share declines, as higher-cost producers enter the market. Instead of saying Arizona here, I probably should be saying Phelps-Dodge. Of course, a lot of other factors affect production, such as resource depletion, lack of investment capital (a major factor for Zambia), political issues (gee, why can't Zambia just borrow abroad on the strength of its rich resources, and why did the bottom fall out of Zairian production in the early nineties?), personnel and transport (proximity to market) considerations, etc. (Domain code for) Azerbaijan. American Zinc Association. More links for Zinc at Zn. Association of Zoos and Aquariums. Founded in 1924 as the American Association of Zoological Parks and Aquariums (and abbreviated AAZPA), later known as the American Zoo and Aquarium Association. I think the current name (I write in 2009) was adopted around 1997. The abbreviation AAZA and the name ``American Association of Zoos and Aquariums'' (those are prophylactic quotation marks) have also been used. With all these different tags, I would have liked, just once, for them to have used ``aquaria'' in the name. Heck, I'll do it myself. Association zaïroise de défense des droits de l'homme. `Zaire Association for the Defense of Human Rights.' Founded in 1991. Changed its name to ASADHO when Mobutu's government fell and Laurent Kabila changed the country's name to Democratic Republic of the Congo. Spanish: `hostess, stewardess.' General term for an attendant at a public gathering or on a plane or train, etc. ``Attendant'' here is meant in the usual sense of someone who attends to the needs of the public, rather than someone who simply attends an event (attendee). That might be a public attendant. Everything would be so much easier if ``servant'' didn't have such poor connotations. Anyway, the male form of the word is azafato. Azafata and azafato are the only terms I've ever heard used in Spanish that would be translated as `flight attendant.'
The fact that the attendance takes place on a plane is apparently not regarded as meriting explicit recognition. Spanish noun (masculine) meaning `luck, fortune' or `good fortune,' just as the English noun luck means luck or `good luck,' depending on whether you're speaking generally or wishing it to someone. ``Juegos de azar'' are `games of chance.' It's slightly unusual to have a noun ending in -ar that isn't the noun use of a verb infinitive, but you get used to it before the time when you can remember getting used to it. Another slight oddity: the woman's name Pilar. [Other non-infinitive nouns ending in -ar that I can think of are male: pulgar (`thumb'), collar (`necklace'). Mar is trickier; see its entry.] The word asar, which in Latin American pronunciations is a homophone of azar, is a verb meaning `cook over an open flame.' Asado, meaning precisely `grilled beef steak,' is the national dish of Argentina. Latin had four classes of verbs, whose active infinitives (if they weren't deponent verbs they had active infinitives) ended in -are, -ire, or -ere. (That's right: mere spelling didn't quite tell you the conjugation of -ere verbs.) The -are class was the largest, I'm pretty sure. Romance languages typically collapsed these four regular conjugations into three, and the conjugation that collected the -are verbs (-ar in Spanish) was usually still the largest group. Modern Greek has a class of verbs with infinitives ending in -aro. It dates back to Byzantine times, when it was constructed on the basis of -are verbs borrowed from Italian (or perhaps more precisely Venetian). The ending is highly productive, and seems to provide the most common conjugation for loan verbs. For example, stoparo and sakaro (`to stop, to shock') are standard in Modern (demotic) Greek today. (German has a similar class of verbs, with infinitives ending in -ieren, mostly borrowed from French.) Greek-speakers living in foreign countries often use this conjugation to create hybrids used in local versions of Greek (a North American example: muvaro, `to move'). The pattern is not uniform, however. Greeks in Germany use preparizo for `to prepare,' from the German präparieren. The German verb is borrowed, in turn, from the French préparer. This verb is also an -are verb (viz., it's derived from the Latin preparare). I believe that Latin -are verbs generally ended up as -er verbs in Modern French. azide, azido- An azide is an organic chemical with an N3 functional group. That is, a chemical which can be represented by the formula R-N3, where N is nitrogen and R represents a molecule bonded to the functional group through a carbon chain. Particular azides have names including the prefix azido-. Note carefully the difference between an azide and an amine. An azide has three nitrogens bonded to one organic group; an amine has three organic groups bonded to one nitrogen (R3N). The AriZona Language Association, Inc. ``[T]he not-for-profit professional association for language teachers in Arizona, dedicated to promoting the effective teaching of all languages. AZLA is the Arizona affiliate of ACTFL (the American Council of Teachers of Foreign Languages) and SWCOLT (the Southwest Conference on Language Teaching).'' AZOmethane. (CH3)2N2. AriZona Planning Association. A chapter of the APA. A-Z soup Just give it a second. You can figure this one out. AZidoThymidine. Systematic name, minus the numbers: dihydro methyl pyridinyl carbonyl azido dideoxythymidine.
It has a lot of alternate trivial names, such as retrovir and zidovudine (abbreviated ZDV). It's an important AIDS drug, in the class of NRTI's. Like all of the drugs first found effective against AIDS, it somehow blocks the action of reverse transcriptase, which a retrovirus like HIV uses to insert its RNA-encoded genetic instructions into the host cell's DNA. A time-release form of AZT. A characteristic copper ore: Cu3(CO3)2(OH)2. [Structural diagram of the two carbonate groups not reproduced here.] The mineral takes its name from its color. For more about the occurrence of this hydroxy-carbonate, see the Fahlerz entry. For a similar mineral, see malachite. AriZona Veterinary Medical Association. See also AVMA. Indian pronunciation of English assume. Bohr Radius. The radius of the orbit of an electron in Bohr's model of the hydrogen atom, it is also the scale parameter in the eigenstates of the Schrödinger equation for the hydrogen atom. It's about 0.52917721 Å, or about two nanoïnches in, uh, customary units. The Bohr radius is itself used as a unit of length (as, for example, in the definition of a dimensionless screening radius rs). As a length unit, the Bohr radius is also called a bohr (q.v.). The formula for the Bohr radius is a0 = ħ/(m0 c α), where ħ is the reduced Planck's constant (h/2π), α the fine-structure constant, c the speed of light in vacuum, and m0 the free electron mass. If you want to compute the properties of an isolated hydrogen atom, you start with the complete Hamiltonian for the nucleus and electron, and separate out the Hamiltonian for the center-of-mass motion. This leaves a Hamiltonian for the electron-nucleus separation. (In classical physics, the Hamiltonian is a function of independent momentum and coordinate variables, and ``canonical'' equations of motion equivalent to Newton's equations are obtained as first-order partial differential equations involving the Hamiltonian. In quantum mechanics, the Hamiltonian is an operator function of momentum and coordinate operators, and it is formally identical to the classical Hamiltonian so long as intrinsic spin is ignored. The Schrödinger equation is a first-order (in time) partial differential equation involving the quantum Hamiltonian.) Anyway -- the Hamiltonian, or any equations derived from it, looks similar for the electron-nucleus separation as for an electron orbiting an infinite-mass nucleus, but with a ``reduced mass'' (its value, half the harmonic mean of the electron and nuclear masses, is about 0.05% smaller than the free electron mass). Using the reduced mass can give you a slight improvement in accuracy for an even slighter amount of computational work, if all you're dealing with is an atom with one electron, or a Rydberg atom with only one highly excited electron. (A Rydberg atom is an atom with one or few electrons in large-n states, and the other electrons not in highly excited states.) The Bohr radius, however, is defined using the free electron mass, and not the reduced mass. Diode imperfection factor (A). The zero subscript indicates that the correction is applied to a particularly elementary model: a single-exponential (Ebers-Moll) model. A paper dimension standard used only in those corners of the world (mostly just a few remote stations in Antarctica, a bunch of Pacific islands, some parts of North America, and the continents of Australia, Europe, South America, Africa, and Asia) that stubbornly cling to centuries-old metric units. A0 sheets have a total area of 1 square meter, and a ratio of length to width that is the square root of 2.
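A quick numerical check of the Bohr-radius formula given in the Bohr Radius entry above, using rounded CODATA-style constants (the values, copied here for convenience, are standard):

```python
hbar = 1.054571817e-34    # reduced Planck constant, J s
m0 = 9.1093837015e-31     # free electron mass, kg
c = 2.99792458e8          # speed of light in vacuum, m/s
alpha = 7.2973525693e-3   # fine-structure constant, dimensionless

a0 = hbar / (m0 * c * alpha)
print(a0)   # ~5.29177e-11 m, i.e. ~0.529 angstrom, as quoted above
```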
Each successive standard size (A1, A2, ...) is defined by halving the length of the longer side of the sheet, thus preserving the ratio of height to width. The earliest known suggestion of this scheme was by Georg Lichtenberg, in a letter to Johann Beckmann dated October 25, 1786. [The old quarto, octavo, 16mo, etc. are also defined by successive halvings, but have two width and length ratios (whose geometric mean, of course, is also the square root of 2). Cf. B0.] [Table of A-series sheet names with areas and dimensions in cm and inches; see the sketch below.] It is superfluous to note that Herman Melville was rather a literary naturalist. But in chapter 32 (``Cetology'') of Moby Dick, he makes a surprisingly direct connection: ``According to magnitude I divide the whales into three primary BOOKS (subdivisible into CHAPTERS), and these shall comprehend them all, both small and large. I. THE FOLIO WHALE; II. the OCTAVO WHALE; III. the DUODECIMO WHALE. As the type of the FOLIO I present the SPERM WHALE; of the OCTAVO, the GRAMPUS; of the DUODECIMO, the PORPOISE.'' After enumerating the Folio whales, he writes (the ``books'' here are still metaphorical; we continue in chapter 32 of Moby Dick):       Thus ends BOOK I. (Folio), and now begins BOOK II. (Octavo). OCTAVOES.*--These embrace the whales of middling magnitude, among which present may be numbered:--I., the GRAMPUS; II., the BLACK FISH; III., the NARWHALE; IV., the THRASHER; V., the KILLER. *Why this book of whales is not denominated the Quarto is very plain. Because, while the whales of this order, though smaller than those of the former order, nevertheless retain a proportionate likeness to them in figure, yet the bookbinder's Quarto volume in its diminished form does not preserve the shape of the Folio volume, but the Octavo volume does. A paper size. See A0. Tops. In the best category. Alpha1-Antitrypsin Deficiency. ``[A] genetic condition that can cause severe early onset emphysema, liver disease in both children and adults, or more rarely, a skin condition called panniculitis. It is estimated [that] there are 80,000 to 100,000 men, women and children with A1AD in the United States, yet only a fraction of them have been identified,'' according to... Alpha1 National Association. ``[A] non-profit, membership organization, dedicated to improving the lives of individuals and their families affected by alpha1-antitrypsin deficiency.'' This was the ``number'' on the vanity plate issued by the state of California for a car belonging to Lawrence Welk. If you're much younger than me, you probably don't get it. Lawrence Welk had an orchestra and a television show (called ``The Lawrence Welk Show''), and his trademark way to set the beat to begin a piece was to say ``uh-one and-uh two and-uh.'' A paper size. See A0. You mean the UK school-leaving exams? See A-levels. Part of a system that might very well end up being a one-off for 2002. Alexander to Actium, by Peter M. Green. Atlantic Reporter, Second Series. Legal publication. Advanced Antennas for Future Combat Systems. CECOM research program. American Association for Laboratory Accreditation.
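Picking up the A-series definition above: a minimal sketch that regenerates the dimensions the table placeholder refers to. Exact halving is assumed here; the official ISO 216 values are rounded to whole millimeters, so the output differs very slightly.

```python
# A0 in meters: area 1 square meter, length/width = sqrt(2).
width, length = 2 ** -0.25, 2 ** 0.25
for n in range(6):
    print(f"A{n}: {width * 100:.1f} x {length * 100:.1f} cm")
    length, width = width, length / 2   # halve the longer side
```

Running it gives A0 as 84.1 x 118.9 cm and A4 as 21.0 x 29.7 cm, as expected.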
``[A] non-profit, professional membership society committed to the success of laboratories through the administration of a broad-spectrum, nationwide laboratory accreditation system and a full range of training on laboratory practices taught by experts in their field.'' ``A2LA accredits testing laboratories in the following fields: acoustics and vibration, biological, chemical, construction materials, electrical, environmental, geotechnical, mechanical, calibration, nondestructive and thermal. Accreditation is available to private, independent, in-house and government labs.'' Based in Frederick, MD. A paper size. See A0. American Association of Academic Chief Residents in Radiology. The AUR link on the A3CR2 page is less prominent or direct than the A3CR2 link on the AUR page. I guess we understand the pecking order here. The social science of small-group interactions would probably explain why the APDR doesn't get a link at A3CR2: this town ain't big enough for two alphas. ``Ay THREE cee arr two.'' It has kind of a ring to it, but they should drop the ``two'' so it scans with ``cee THREE pee oh.'' A paper size. See A0. A paper size. See A0. A paper size. See A0. A $60 value, and you also get... Oh sure, you could go to the mall today and get it for $17.98, but what do they know about value? And you don't get the convenience of ordering from the comfort of your own living room couch what you can see clearly right there on your TV screen, and having it delivered to your front door in ``just days.'' © Alfred M. Kriman 1995-2017
Monday, April 24, 2006

Von Neumann inclusions, quantum groups, and quantum model for beliefs

1. Some background about number theoretic Clifford algebras
3. Jones inclusions and cognitive and symbolic representations

One can make two conclusions.

Matti Pitkänen

Sunday, April 23, 2006

Does TGD reduce to an inclusion sequence of number theoretic von Neumann algebras?

The idea that the notion of space-time somehow emerges from quantum theory is rather attractive. In the TGD framework this would basically mean that the identification of space-time as a surface of the 8-D imbedding space H = M4 × CP2 emerges from some deeper mathematical structure. It seems that the series of inclusions for infinite-dimensional Clifford algebras associated with the classical number fields F = R, C, H, O, defining von Neumann algebras known as hyper-finite factors of type II1, could be this deeper mathematical structure.

1. Quaternions, octonions, and TGD

The dimensions of quaternions and octonions are 4 and 8, the same as the dimensions of the space-time surface and the imbedding space in TGD. It is difficult to avoid the feeling that TGD physics could somehow reduce to structures assignable to the classical number fields. This vision is already now rather detailed. For instance, a proposal for a general solution of the classical field equations is one outcome of this vision.

TGD suggests also what I call HO-H duality. Space-time can be regarded either as a surface in H or as a hyper-quaternionic sub-manifold of the space HO of hyper-octonions, obtained by multiplying the imaginary parts of octonions with a commuting additional imaginary unit. The 2-dimensional partonic surfaces X2 are of central importance in TGD, and it seems that the inclusion sequence C in H in O (complex numbers, quaternions, octonions) somehow corresponds to the inclusion sequence X2 in X4 in H. This inspires the idea that the whole of TGD emerges from a generalized number theory, and I have already proposed arguments for how this might happen.

2. Number theoretic Clifford algebras

Hyper-finite factors of type II1 defined by infinite-dimensional Clifford algebras are one thread in the multi-stranded weave of number-theoretic ideas involving p-adic number fields and their fusion with the reals along common rationals to form a generalized number system, classical number fields, the hierarchy of infinite primes and integers, and von Neumann algebras and quantum groups. The new ideas allow one to fuse the von Neumann strand with the classical number field strand.

1. The mere assumption that physical states are represented by spinor fields in the infinite-dimensional "world of classical worlds" implies the notion of an infinite-dimensional Clifford algebra, identifiable as the algebra generated by the gamma matrices of an infinite-dimensional separable Hilbert space. This algebra provides a standard representation for hyperfinite factors of type II1.

2. Von Neumann algebras known as hyperfinite factors of type II1 are rather miraculous objects. The almost defining property is that the trace of the unit operator is unity instead of infinity. This justifies the attribute hyperfinite and gives excellent hopes that the resulting quantum theory is free of infinities. These algebras are strange fractal-like creatures in the sense that they can be imbedded unitarily within themselves endlessly, and one obtains infinite hierarchies of Jones inclusions.
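(An illustrative aside, not part of the original post: the II1 trace property quoted above, the unit operator having trace one, can be seen in finite-dimensional approximants to the infinite tensor product of 2x2 algebras. The helper name normalized_trace below is assumed for illustration; the sketch only demonstrates that the normalized trace of the identity stays 1 at every level of the tensor hierarchy.)

```python
import numpy as np

def normalized_trace(A: np.ndarray) -> float:
    """Normalized trace tr_n(A) = Tr(A) / dim(A); it assigns 1 to the
    identity regardless of dimension, the finite-dimensional shadow of
    the II_1 trace condition."""
    return np.trace(A).real / A.shape[0]

# Climb the tensor hierarchy M_2 -> M_4 -> M_8 -> ... by taking Kronecker
# products with another 2x2 identity at each step.
A = np.eye(2)
for level in range(1, 6):
    print(level, A.shape[0], normalized_trace(A))  # always 1.0
    A = np.kron(A, np.eye(2))
```

In the genuine hyperfinite factor the dimension is infinite, but the normalized trace survives the limit, which is what makes "trace of the unit operator equals 1" meaningful there.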
This endless self-imbedding means what might be called the Brahman=Atman property: a subsystem can represent in its state the state of the entire universe, and this indeed leads to the idea that symbolic and cognitive representations are realized as Jones inclusions and that the Universe is busily mimicking itself in this manner.

3. The classical number fields F = R, C, H, O define four Clifford algebras using the infinite tensor power of the 2x2 Clifford algebra M2(F) associated with 2-spinors. The tensor powers associated with R and C are straightforward to define. The non-commutativity of H with C requires the Connes tensor product, which by definition guarantees that left and right multiplications of the tensor product M2(H)×M2(H) by complex numbers are equivalent. For F=O the matrix algebra is not associative anymore, but this implies only interpretational problems and means a slight generalization of von Neumann algebras, which as far as I know are usually assumed to be associative. Denote by Cl(F) the infinite-dimensional Clifford algebras obtained in this manner. Perhaps I should not have said "only interpretational", since the solution of these problems dictates the classical and quantum dynamics.

3. TGD does not quite emerge from Jones inclusions for number theoretic Clifford algebras

The physics as a generalized number theory vision suggests that TGD physics is contained in the Jones inclusion sequence Cl(C) in Cl(H) in Cl(O) induced by C in H in O. This sequence alone could explain the partonic, space-time, and imbedding space dimensions as the dimensions of classical number fields. The dream is that also the imbedding space H = M4 × CP2 would emerge as a unique choice allowed by mathematical existence.

1. CP2 indeed emerges naturally: it labels the possible H-planes of O, and this observation stimulated the emergence idea a few years ago.

2. Also Minkowski space M4 is wanted. In particular, future lightcones are needed, since the super-canonical algebra defining the second super-conformal invariance of TGD is associated with the canonical algebra of δM4 × CP2. The generalized conformal and symplectic structures of the 4-D(!) lightcone boundary are a crucial element here. The ordinary Super Kac-Moody algebra assignable to lightlike 3-D causal determinants is associated with the inclusion of the partonic 2-surface X2 into X4, corresponding to C in H. The imbedding space cannot be dynamical anymore, since no 16-D number field exists.

3. The representation of space-times as surfaces of H should emerge, as well as the space of configuration space spinor fields (not only spinors) defined in the space of 3-surfaces (or equivalently of 4-surfaces, which are generalizations of Bohr orbits).

4. These surfaces should also have an interpretation as hyper-quaternionic sub-manifolds of the hyper-octonionic 8-space HO (this would dictate the classical dynamics). This was the picture before the missing string of ideas emerged.

4. Number-theoretic localization of infinite-dimensional number theoretic Clifford algebras as the missing piece of the puzzle

The missing piece of the big argument is described below.

1. The sequences of inclusions C in H in F allow one to interpret infinite-D spinors in Cl(O) as a module having quaternionic spinors Cl(H) as coefficients multiplying quantum spinors with finite quantum dimension not larger than 16: this conforms with the fact that HO spinors indeed are complex 8+8 spinors (quarks, leptons). Configuration space spinors can be seen as quantized imbedding space spinors. Infinite-dimensional Cl(H) spinors in turn can be seen as 4-D quantum spinors having Cl(C) spinors as coefficients.
Quantum groups emerge naturally and relate to the inclusions, as does also the Kac-Moody algebra.

2. The key idea is to extend the infinite-dimensional Clifford algebras to local algebras by allowing power series in hyper-F numbers with coefficients in Cl(F). Using algebraic terminology, this means a direct integral of the factors. The resulting objects are generalizations of conformal fields (or quantum fields) defined in the space of hyper-complex numbers (string orbits), hyper-quaternions (space-time surfaces), or hyper-octonions (HO). Their argument is a hyper-F number instead of z. A very natural number theoretic generalization of the gamma matrix fields (generators of the local Clifford algebra!) of the superstring model is thus in question.

3. Associativity at the space-time level becomes the fundamental physical law. This requires that the physical Clifford algebra is associative. For Cl(O) this means that a quaternionic plane in O, parametrized by a point of CP2, is selected at each hyper-quaternionic point. For the local version of Cl(O) this means that the powers of hyper-octonions in the power series are restricted to be hyper-quaternions assignable to some hyper-quaternionic sub-manifold of HO (classical dynamics!). But since the ordinary inclusion assigns a CP2 point to a given point of M4 represented by a hyper-quaternion, one can regard the space-time surface also as a surface of H! This means HO-H duality. The parton level emerges from the requirement of commutativity, implying that partonic 2-surfaces correspond to commutative sub-manifolds of HO and thus also of H.

4. Also the super-canonical invariance comes out naturally. The point is that lightlike hyper-quaternions do not possess an inverse, so that the Laurent series for local Cl(F) elements does not exist at the boundaries of the lightcones of M4, which are thus causal determinants (note the analogy with the pole of an analytic function). The super-canonical algebra emerges at these boundaries, and the intersections of the space-time surfaces with the boundaries define a natural gauge fixing for general coordinate invariance. Configuration space spinor fields are obtained by allowing quantum superpositions of these 3-surfaces (equivalently of the corresponding 4-surfaces).

Here is the entire quantum TGD, believe it or not! I cannot tell whom I admire more: von Neumann or Chopin!

5. An explicit general formula for the S-matrix emerges also

This picture leads also to an explicit master formula for the S-matrix.

1. The resulting S-matrix is consistent with the generalized duality symmetry implying that an S-matrix element can always be expressed using a single diagram having a single vertex from which lines identified as space-time surfaces emanate. There is an analogy with the effective action formalism in the sense that one proceeds in a direction reverse to that of the ordinary perturbative construction of the S-matrix: from the vertex to the points defining the tips of the boundaries of the lightcones assignable to the incoming and outgoing particles appearing in the n-point function along the "lines". It remains to be shown that the generalized duality indeed holds true: for now its basic implication is used to write the master formula for the S-matrix.

2. A configuration space integral over the 3-surfaces appearing as the vertex is involved and corresponds to the bosonic degrees of freedom in superstring models.
It is free of divergences since the exponent of the Kähler function is a nonlocal functional of the 3-surface, since the ill-defined metric determinant is cancelled by the ill-defined Gauss determinant, and since the Ricci tensor for the configuration space vanishes, implying the vanishing of further divergences coming from the metric determinant. The hyper-finiteness of type II1 factors (the infinite-dimensional unit matrix has unit trace) is expected to imply the cancellation of the infinities in the fermionic sector.

3. Diagrams obtained by gluing space-time sheets together along their ends at the vertex, rather than stringy diagrams, turn out indeed to be the Feynman diagrams in the TGD framework, as previously concluded on the basis of physical and algebraic arguments. These singular four-manifolds are not real solutions of the field equations but only a construct emerging naturally in the definition of the S-matrix based on general coordinate invariance, which implies that configuration space spinor fields have the same value for all Diff4-related 3-surfaces along the space-time surface. The S-matrix is automatically non-trivial.

Matti Pitkänen

Wednesday, April 12, 2006

Shamanic travels and p-adic physics as physics of cognition and intentionality

Below is an email sent this morning. I realized that I could add it to my blog as such (apart from correcting some typos, adding some clarifications, etc.) rather than wasting time rewriting it and losing some of the spontaneity of the response.

Dear X and Y,

I read the chapter of the forthcoming book by you and Z. This kind of book answers a real social demand. I am myself frustrated at not having seen any clear analysis, written in language understandable by a layman, demonstrating the problems of the materialistic view and showing that a spiritual world view is by no means in conflict with the basic tenets of science. The text also happened to resonate with what I have been working on just now. I add some subtitles to the comments below to make the common thread clear.

1. World view induced depression

The observation that scientists are people suffering from world-view-induced depression is to the point. As I told Luis, when I was younger my attempt to believe in the world view that I was taught made me literally sick. I have followed discussions in physics blogs and have found that the tone is pathologically negative: crackpot, idiot, imbecile, moron, ...: these words are thrown again and again at the opponent. The language used is the language of power and violence. I see this also as a side effect of this world-view-induced depression and an attempt to overcome it by aggression. A kind of monoculture of consciousness, a sticking to a theory/worldview without the ability to detach from it, is in question.

2. Perennial philosophy and the new number concept

I liked very much the presentation of the basic ideas of perennial philosophy. I think that the basic challenge for theories of consciousness is to understand mathematically the division of reality into the sensory world, which we can study by doing experiments, and the spiritual world, which we can approach by various spiritual practices. Cognition and intentionality (I use "cognitive" somewhat loosely but I do not know any better word!) should have physical and space-time correlates if the notion of physics is properly extended. Even more: we should be able to show that the physics of the spiritual world is visible in the physics of the material world. Just like directly invisible quarks are visible via the physics of hadrons.
Here I find strong resonance, since TGD in its recent form involves a generalization of the number concept involving the fusion of real numbers with the various p-adic number fields, one for each prime p = 2, 3, 5, 7, ... This fusion is along the common rational numbers (very roughly): genuinely p-adic numbers are infinite as real numbers and are analogous to transcendental real numbers, representing a different manner of completing the rational numbers to a continuum. The point is that one can also extend the notion of space-time, the 8-dimensional space containing space-times as 4-surfaces, and speak about p-adic space-time sheets as correlates for intentionality and, there are strong indications, also for cognition.

First point: what is remarkable is that the non-rational points of these space-time sheets are literally at infinity and only the rational points belong to the physical universe. The interpretation is that our thoughts and intentions are literally a cosmic or even super-cosmic phenomenon: the cognitive body somehow looks at the material universe from outside. This fits very nicely with the idea of cosmic consciousness as a state of consciousness in which sensory input is minimal and the cosmic cognitive and intentional component dominates.

Second point: that cognitive space-time sheets have a discrete rational projection to the real imbedding space and intersect real space-time sheets at a discrete set of rational points conforms with the fact that all physical representations of thoughts are necessarily discrete and based on rational numbers. Consider only numerical computation, which is bound to satisfy this constraint, although a cognizing mathematician can perform exact calculations.

p-Adically infinitesimal means infinite in the real sense: what is very short p-adically is very long in the real sense. Therefore the continuity and smoothness of local p-adic physics at infinity means that real space-time sheets having a discrete set of rational intersection points with p-adic space-time sheets obey p-adic fractality, meaning a very special kind of long range correlations. Local randomness with long range spatial and temporal correlations can be seen as a direct physical correlate for the existence of cognition and intentionality. Intentional behavior is indeed characterized by temporal long range correlations. Hence we can measure the immediate implications of something which as such is not measurable! The spiritual and non-local intuitive mind would reflect itself in the properties and behavior of the material world. Without it, the material world would indeed be just a random soup of particles, as materialists try hard to believe. Even better: these predictions are very specific. In particular, they lead to successful elementary particle mass calculations and to a quantitative understanding of basic spatial and temporal scales in nuclear, atomic and molecular physics, biology, and cosmology. This is something completely new.

A possible model for the realization of intentional action is as a quantum jump transforming a p-adic space-time sheet into a real one. This is possible if the real space-time sheet has vanishing conserved charges such as energy, momentum, electromagnetic charge, ... In the TGD framework this is possible since conserved inertial energy can be both negative and positive. In principle it would seem that we could really create our physical universe by this kind of intentional action, so that the Eastern view about reality as a purely mental construct would be correct.
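(An illustrative aside, not part of the original email: the statement above that "p-adically infinitesimal means infinite in the real sense" is just the p-adic norm |x|_p = p^(-v_p(x)) at work. A minimal sketch, with assumed helper names:)

```python
from fractions import Fraction

def p_adic_valuation(n: int, p: int) -> int:
    """Largest k such that p**k divides the nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-v_p(x)); by convention |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    v = p_adic_valuation(x.numerator, p) - p_adic_valuation(x.denominator, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

# 2-adically, 1024 = 2**10 is tiny, while 1/1024 is huge:
print(p_adic_norm(Fraction(1024), 2))     # 1/1024
print(p_adic_norm(Fraction(1, 1024), 2))  # 1024
```

So 2^10 = 1024 has 2-adic size 1/1024: numbers that are large in the real sense can be p-adically tiny, and vice versa, which is exactly the inversion the text appeals to.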
I have even proposed an S-matrix describing this intention-to-action transition and in principle predicting the probabilities for different intentional actions. A question, which just occurred to me, is how reversible the transition from intention to action is: it might be that transitions from action to intention (from matter to thought) are very rare, since the initial system must have vanishing net quantum numbers, in particular energy, and this is extremely difficult to arrange. This could mean that our geometric future is mostly p-adic and the past rather stably real: dreams would be the stuff that reality is made of! If so, the flow of experienced time would correspond to the front of the p-adic-to-real phase transition propagating towards the geometric future, as I have proposed. Of course, an infinite number of wave fronts of this kind would be there, and the direction of the geometric future could also be non-standard.

3. Brahman=Atman and infinite primes

This idea is the second element of Perennial Philosophy. Infinite primes, integers, and rationals represent a further extension of the number concept besides the fusion of the p-adic and real number fields. What is fascinating from the point of view of physics is that the construction of infinite primes is structurally equivalent to a repeated second quantization of a super-symmetric arithmetic quantum field theory. Furthermore, just as 0-dimensional points represent numbers, 4-D space-time surfaces represent infinite primes, integers, ...

This generalization leads also to a generalization of finite numbers: one can construct an infinite number of ratios of infinite rationals which are equal to 1 as real numbers but p-adically finite for any prime p. Hence the number 1, and obviously all other numbers and also space-time points, have an infinitely rich number-theoretical anatomy not detectable by any physical measurement. A single point of space-time can represent in its structure the quantum state of the entire material Universe! Brahman=Atman in the most literal and maximal sense that one can imagine!

4. Limits of quantum theory

My view concerning the capacity of standard quantum theory to solve the riddle of consciousness should already be clear. I think that wave mechanics is far too simplistic to allow one to understand consciousness. Quantum measurement theory, where the quantum jump (a good candidate for the moment of consciousness as an elementary act of creation/re-creation) is taken as a fact, is a set of mere phenomenological rules, and in conflict with the Schrödinger equation.

My own proposal is basically simple: quantum states are actually entire time evolutions of the Schrödinger equation, and quantum jumps occur between these (or their generalizations) and thus outside the realm of space-time and a given quantum state. A quantum jump means a re-creation of the entire time evolution of the cosmology, meaning in particular that both the geometric past and future are re-created, but in accordance with the field equations. The experienced time, identified as a sequence of quantum jumps, is something different from geometric time, and the two coincide only in certain states of consciousness. The Western mode of consciousness is this kind of mode, but also in this case long term memories are actually communications with the geometric past: classically, as in the case of declarative memories, and by quantum entanglement making possible the sharing of mental images, as in the case of sensory memories.

Second point.
The Planck constant hbar is the symbol of quantum mechanics and is usually taken to be an absolute constant which can be put to hbar = 1 by a suitable choice of units. Quantum classical correspondence in TGD however predicts that space-time sheets, which can be arbitrarily large, define quantum coherence regions. This is in conflict with standard quantum mechanics, which predicts that macroscopic quantum coherence regions should not exist. The resolution of the problem is that the Planck constant is actually dynamical and quantized: the larger the value of hbar, the larger the Compton length, so that for instance an electron can be zoomed up to an arbitrarily large size, and these zoomed-up electrons can overlap and form Cooper pairs and a superconductor.

The implications are rather dramatic: there is an entire hierarchy of values of the Planck constant, and these correspond to dark matter phases which are macroscopically and even astrophysically quantum coherent. TGD can "predict" the value spectrum of the Planck constant, and this has led to a surprisingly precise model for living matter, including the band and resonance structure of EEG. This gives justification also to the notion of the magnetic body (actually an onion-like hierarchy of them) having astrophysical size in the case of the brain. These magnetic bodies carry dark matter and act as intentional agents having biological bodies as sensory receptors and motor instruments. For instance, the time delays of consciousness found by Libet can be understood in this framework.

5. Microtubuli and what shamans do during their travels?

I believe that microtubuli are involved with the realization of long term memories and neural communications: for instance, it is very difficult to understand how high frequency sounds (higher than kHz) could be communicated by nerve pulse patterns, since the characteristic time scale is about a millisecond. Microtubular conformational and em field patterns are ideal for this purpose. I however think that microtubuli represent only one important level in the hierarchy and that the magnetic bodies carrying the dark matter are the star players in the real sector. At the top of the hierarchy would be p-adic space-time sheets, p-adic/spiritual bodies representing us as eternal cosmic beings in the real sense.

The travels of shamans could result from the ability of the shaman's p-adic/spiritual body to partially detach from the biological body and direct attention to other parts of the infinite universe. The direction of attention could mean that the shaman, as a master of intentional action, transforms part of his infinite p-adic body at some distant corner of the universe into a real zero energy space-time sheet which can then sensorily perceive the environment. Remote mental interactions would quite generally be based on this mechanism.

Best Regards,
10.6: Semi-Empirical Methods: Extended Hückel

An electronic structure calculation from first principles (ab initio) presents a number of challenges. Many integrals must be evaluated, followed by a self-consistent process for assessing the electron-electron interaction, and then electron correlation effects must be taken into account. Semi-empirical methods do not proceed analytically in addressing these issues, but rather use experimental data to facilitate the process. Several such methods are available. These methods are illustrated here by the approaches built on the work of Hückel.

One of the first semi-empirical methods to be developed was Hückel Molecular Orbital Theory (HMO). HMO was developed to describe molecules containing conjugated double bonds. HMO considered only electrons in pi orbitals and ignored all other electrons in a molecule. It was successful because it could address a number of issues associated with a large group of molecules at a time when calculations were done on mechanical calculators.

The Extended Hückel Molecular Orbital Method (EH) grew out of the need to consider all valence electrons in a molecular orbital calculation. By considering all valence electrons, chemists could determine molecular structure, compute energy barriers for rotation about bonds, and even determine energies and structures of transition states for reactions. The computed energies could be used to choose between proposed transition states to clarify reaction mechanisms.

In the EH method, only the n valence electrons are considered. The total valence electron wavefunction is described as a product of the one-electron wavefunctions.

\[\Psi _{valence} = \psi _1(1) \psi _2(2) \psi _3(3) \dots \psi _j(n) \label {10.34}\]

where n is the number of electrons and j identifies the molecular orbital. Each molecular orbital is written as a linear combination of atomic orbitals (LCAO).

\[\psi _j = \sum \limits ^N_{r = 1} c_{jr} \varphi _r \quad \quad j = 1, 2, \dots N \label {10.35}\]

where the \(\varphi _r\) are the valence atomic orbitals, chosen to include the 2s, 2px, 2py, and 2pz orbitals of the carbons and heteroatoms in the molecule and the 1s orbitals of the hydrogen atoms. These orbitals form the basis set. Since this basis set contains only the atomic-like orbitals for the valence shell of the atoms in a molecule, it is called a minimal basis set.

Each \(\psi _j\), with j = 1…N, represents a molecular orbital, i.e. a wavefunction for one electron moving in the electrostatic field of the nuclei and the other electrons. Two electrons with different spins are placed in each molecular orbital so that the number of occupied molecular orbitals N is half the number of electrons, n, i.e. N = n/2. The number of molecular orbitals that one obtains by this procedure is equal to the number of atomic orbitals. Consequently, the indices j and r both run from 1 to N.

The cjr are the weighting coefficients for the atomic orbitals in the molecular orbital. These coefficients are not necessarily equal, or in other words, the orbital on each atom is not used to the same extent to form each molecular orbital. Different values for the coefficients give rise to different net charges at different positions in a molecule. This charge distribution is very important when discussing spectroscopy and chemical reactivity.
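(An illustrative aside, not part of the original text: the bookkeeping just described, a minimal valence basis plus the rule N = n/2, is easy to mechanize. The dictionaries below simply restate the basis-set choice made above; the helper name minimal_basis is assumed for illustration.)

```python
# Minimal-basis bookkeeping sketch: enumerate valence AOs and count electrons.
VALENCE_AOS = {"H": ["1s"],
               "C": ["2s", "2px", "2py", "2pz"],
               "N": ["2s", "2px", "2py", "2pz"],
               "O": ["2s", "2px", "2py", "2pz"],
               "F": ["2s", "2px", "2py", "2pz"]}
VALENCE_ELECTRONS = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7}

def minimal_basis(atoms):
    """Return the list of basis-function labels and the valence electron count."""
    basis = [f"{a}{i} {ao}" for i, a in enumerate(atoms)
             for ao in VALENCE_AOS[a]]
    n_electrons = sum(VALENCE_ELECTRONS[a] for a in atoms)
    return basis, n_electrons

basis, n = minimal_basis(["H", "F"])
print(basis)        # ['H0 1s', 'F1 2s', 'F1 2px', 'F1 2py', 'F1 2pz']
print(n, n // 2)    # 8 valence electrons -> N = 4 doubly occupied MOs
```

For HF this gives 5 basis functions and 4 doubly occupied molecular orbitals, numbers that reappear in the matrix work below.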
The energy of the jth molecular orbital is given by a one-electron Schrödinger equation using an effective one-electron Hamiltonian, heff, which expresses the interaction of an electron with the rest of the molecule.

\[h_{eff} \psi _j = \epsilon _j \psi _j \label {10.36}\]

\(\epsilon _j\) is the energy eigenvalue of the jth molecular orbital, corresponding to the eigenfunction \(\psi _j\). The beauty of this method, as we will see later, is that the exact form of heff is not needed. The total energy of the molecule is the sum of the single electron energies.

\[E_{\pi} = \sum \limits _{j} n_j \epsilon _j \label {10.37}\]

where nj is the number of electrons in orbital j.

The expectation value expression for the energy of each molecular orbital is used to find \(\epsilon _j\) and then \(E_{\pi}\)

\[\epsilon _j = \dfrac {\int \psi ^*_j h_{eff} \psi _j d\tau}{\int \psi ^*_j \psi _j d\tau} = \dfrac {\left \langle \psi _j | h_{eff} | \psi _j \right \rangle}{\left \langle \psi _j | \psi _j \right \rangle} \label {10.38}\]

The notation \(\left \langle | \right \rangle \), which is called a bra-ket, just simplifies writing the expression for the integral. Note that the complex conjugate now is identified by the left-side position in the bra \( \langle | \) and not by an explicit *.

After substituting Equation \(\ref{10.35}\) into \(\ref{10.38}\), we obtain for each molecular orbital

\[ \epsilon _j = \dfrac {\left \langle \sum \limits ^N_{r = 1} c_{jr}\varphi _r | h_{eff} | \sum \limits ^N_{s = 1} c_{js} \varphi _s\right \rangle}{\left \langle \sum \limits ^N_{r = 1} c_{jr}\varphi _r | \sum \limits ^N_{s = 1} c_{js}\varphi _s \right \rangle} \label {10.39}\]

which can be rewritten as

\[\epsilon = \dfrac {\sum \limits ^N_{r=1} \sum \limits ^N_{s=1} c^*_r c_s \left \langle \varphi _r |h_{eff}| \varphi _s \right \rangle}{\sum \limits ^N_{r=1} \sum \limits ^N_{s=1} c^*_r c_s \left \langle \varphi _r | \varphi _s \right \rangle} \label {10.40}\]

where the index j for the molecular orbital has been dropped because this equation applies to any of the molecular orbitals.

Exercise \(\PageIndex{1}\)

Consider a molecular orbital made up of three atomic orbitals, e.g. the three carbon 2pz orbitals of the allyl radical, where the internuclear axes lie in the xy-plane. Write the LCAO for this MO. Derive the full expression, starting with Equation \(\ref{10.38}\) and writing each term explicitly, for the energy expectation value for this LCAO in terms of heff. Compare your result with Equation \(\ref{10.40}\) to verify that Equation \(\ref{10.40}\) is the general representation of your result.

Exercise \(\PageIndex{2}\)

Write a paragraph describing how the Variational Method could be used to find values for the coefficients cjr in the linear combination of atomic orbitals.

To simplify the notation we use the following definitions. The integrals in the denominator of Equation \(\ref{10.40}\) represent the overlap between two atomic orbitals used in the linear combination. The overlap integral is written as \(S_{rs}\). The integrals in the numerator of Equation \(\ref{10.40}\) are called either resonance integrals or coulomb integrals, depending on the atomic orbitals on either side of the operator heff, as described below.

• \(S_{rs} = \left \langle \varphi _r |\varphi _s \right \rangle\) is the overlap integral. \(S_{rr} = 1\) because we use normalized atomic orbitals. For atomic orbitals r and s on different atoms, \(S_{rs}\) has some value between 1 and 0: the further apart the two atoms, the smaller the value of \(S_{rs}\).
• \(H_{rr} = \left \langle \varphi _r |h_{eff}| \varphi _r \right \rangle\) is the Coulomb Integral. It is the kinetic and potential energy of an electron in, or described by, an atomic orbital, \(\varphi _r\), experiencing the electrostatic interactions with all the other electrons and all the positive nuclei.

• \(H_{rs} = \left \langle \varphi _r |h_{eff} |\varphi _s\right \rangle\) is the Resonance Integral or Bond Integral. This integral gives the energy of an electron in the region of space where the functions \(\varphi _r\) and \(\varphi _s\) overlap. This energy sometimes is referred to as the energy of the overlap charge. If r and s are on adjacent bonded atoms, this integral has a finite value. If the atoms are not adjacent, the value is smaller, and assumed to be zero in the Hückel model.

In terms of this notation, Equation \(\ref{10.40}\) can be written as

\[\epsilon = \dfrac {\sum ^N_{r=1} \sum ^N_{s=1} c ^*_r c_s H_{rs}}{\sum ^N_{r=1} \sum ^N_{s=1} c ^*_r c_s S_{rs}} \label {10.41}\]

We now must find the coefficients, the c's. One must have a criterion for finding the coefficients. The criterion used is the Variational Principle. Since the energy depends linearly on the coefficients in Equation \(\ref{10.41}\), the method we use to find the best set of coefficients is called the Linear Variational Method.

The task is to minimize the energy with respect to all the coefficients by solving the N simultaneous equations produced by differentiating Equation \(\ref{10.41}\) with respect to each coefficient.

\[\dfrac {\partial \epsilon}{\partial c_t} = 0 \label {10.42} \]

for \(t = 1, 2, 3, \dots N\)

Actually we also should differentiate Equation \(\ref{10.41}\) with respect to the \(c^*_t\), but this second set of N equations is just the complex conjugate of the first and produces no new information or constants.

To carry out this task, rewrite Equation \(\ref{10.41}\) to obtain Equation \(\ref{10.43}\) and then take the derivative of Equation \(\ref{10.43}\) with respect to each of the coefficients.

\[\epsilon \sum \limits _r \sum \limits _s c^*_r c_s S_{rs} = \sum \limits _r \sum \limits _s c^*_r c_s H_{rs} \label {10.43}\]

Actually we do not want to do this differentiation N times, so consider the general case where the coefficient is \(c_t\). Here t represents any number between 1 and N. This differentiation is relatively easy, and the result, which is shown by Equation \(\ref{10.44}\), is relatively simple because some terms in Equation \(\ref{10.43}\) do not involve \(c_t\) and others depend linearly on \(c_t\). The derivative of the terms that do not involve \(c_t\) is zero (e.g. \(\dfrac {\partial c^*_3 c_4 H_{34}}{\partial c_2} = 0\)). The derivative of terms that contain \(c_t\) is just the constant factor that multiplies \(c_t\) (e.g. \(\dfrac {\partial c^*_3 c_2 H_{32}}{\partial c_2} = c^*_3 H_{32}\)). Consequently, only terms in Equation \(\ref{10.43}\) that contain \(c_t\) contribute to the result, and whenever a term contains \(c_t\), that term appears in Equation \(\ref{10.44}\) without the \(c_t\) because we are differentiating with respect to \(c_t\).
The result after differentiating is

\[\epsilon \sum \limits _r c^*_r S_{rt} = \sum \limits _r c^*_r H_{rt} \label {10.44}\]

If we take the complex conjugate of both sides, we obtain

\[\epsilon ^* \sum \limits _r c_r S^*_{rt} = \sum \limits _r c_r H^*_{rt} \label {10.45}\]

Since \(\epsilon = \epsilon ^*\), \(S^*_{rt} = S_{tr}\), and \(H^*_{rt} = H_{tr}\), Equation \(\ref{10.45}\) can be reversed and written as

\[\sum \limits _r c_r H_{tr} = \epsilon \sum \limits _r c_r S_{tr} \label {10.46}\]

or upon rearranging as

\[\sum \limits _r c_r (H_{tr} - S_{tr}\epsilon ) = 0 \label {10.47}\]

There are N simultaneous equations that look like this general one; N is the number of coefficients in the LCAO. Each equation is obtained by differentiating Equation \(\ref{10.43}\) with respect to one of the coefficients.

Exercise \(\PageIndex{3}\)

Explain why the energy \(\epsilon = \epsilon^*\), show that \(S^*_{rt} = S_{tr}\) (write out the integral expressions and take the complex conjugate of \(S_{rt}\)), and show that \(H^*_{rt} = H_{tr}\) (write out the integral expressions, take the complex conjugate of \(H_{rt}\), and use the Hermitian property of quantum mechanical operators).

Exercise \(\PageIndex{4}\)

Rewrite your solution to Exercise \(\PageIndex{3}\) for the 3-carbon pi system found in the allyl radical in the form of Equation \(\ref{10.43}\) and then derive the set of three simultaneous equations for the coefficients. Compare your result with Equation \(\ref{10.47}\) to verify that Equation \(\ref{10.47}\) is a general representation of your result.

This method is called the linear variational method because the variable parameters affect the energy linearly, unlike the shielding parameter in the wavefunction that was discussed in Chapter 9. The shielding parameter appears in the exponential part of the wavefunction, and its effect on the energy is nonlinear. A nonlinear variational calculation is more laborious than a linear variational calculation.

Equations \(\ref{10.46}\) and \(\ref{10.47}\) represent a set of homogeneous linear equations. As we discussed for the case of normal mode analysis in Chapter 6, a number of methods can be used for solving these equations to obtain values for the energies, the \(\epsilon\)'s, and the coefficients, the \(c_r\)'s. Matrix methods are the most convenient and powerful.

First we write more explicitly the set of simultaneous equations that is represented by Equation \(\ref{10.47}\). The first equation has t = 1, the second t = 2, etc. N represents the index of the last atomic orbital in the linear combination.

\[c_1H_{11} + c_2H_{12} + \dots + c_NH_{1N} = c_1S_{11}\epsilon + c_2S_{12}\epsilon + \dots + c_NS_{1N}\epsilon\]
\[c_1H_{21} + c_2H_{22} + \dots + c_NH_{2N} = c_1S_{21}\epsilon + c_2S_{22}\epsilon + \dots + c_NS_{2N}\epsilon\]
\[\vdots\]
\[c_1H_{N1} + c_2H_{N2} + \dots + c_NH_{NN} = c_1S_{N1}\epsilon + c_2S_{N2}\epsilon + \dots + c_NS_{NN}\epsilon \label {10.48}\]

This set of equations can be represented in matrix notation.

\[HC' = SC' \epsilon \label {10.49}\]

Here we have square matrices H and S multiplying a column vector C' and a scalar \(\epsilon\). Rearranging produces

\[HC' - SC' \epsilon = 0\]

\[ (H - S\epsilon )C' = 0 \label {10.50}\]

Exercise \(\PageIndex{5}\)

For the three atomic orbitals you used in Exercises 10.18 and 10.6, write the Hamiltonian matrix H, the overlap matrix S, and the vector C'. Show by matrix multiplication according to Equation \(\ref{10.49}\) that you produce the same equations that you obtained in Exercise 10.21.
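(An illustrative aside, not part of the original text: Equations \(\ref{10.41}\) and \(\ref{10.47}\) can be checked numerically. The sketch below evaluates the energy as a Rayleigh quotient and verifies that, at a stationary coefficient vector, the secular residual of Equation \(\ref{10.47}\) vanishes. The 2x2 numbers are invented purely for illustration.)

```python
import numpy as np

def orbital_energy(c, H, S):
    """Energy of Eq. (10.41): eps = (c* H c) / (c* S c)."""
    c = np.asarray(c, dtype=complex)
    return np.real(np.vdot(c, H @ c) / np.vdot(c, S @ c))

def secular_residual(c, eps, H, S):
    """Vector of left-hand sides of Eq. (10.47), sum_r c_r (H_tr - S_tr eps);
    all components vanish when c makes the energy stationary."""
    c = np.asarray(c, dtype=complex)
    return (H - eps * S) @ c

# Toy two-orbital problem (numbers invented for illustration only).
H = np.array([[-13.6, -8.0], [-8.0, -13.6]])
S = np.array([[1.0, 0.4], [0.4, 1.0]])
c = np.array([1.0, 1.0]) / np.sqrt(2.0)    # symmetric combination
eps = orbital_energy(c, H, S)
print(eps)                                 # about -15.43
print(secular_residual(c, eps, H, S))      # ~ [0, 0]: c is stationary
```

An arbitrary, non-stationary c gives a nonzero residual, which is exactly why one solves Equation \(\ref{10.50}\) rather than guessing coefficients.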
The problem is to solve these simultaneous equations, or the matrix equation, and find the orbital energies, which are the \(\epsilon\)'s, and the atomic orbital coefficients, the \(c\)'s, that define the molecular orbitals.

Exercise \(\PageIndex{6}\)

Identify two methods for solving simultaneous equations and list the steps in each.

In the EH method we use an effective one-electron Hamiltonian, and then proceed to determine the energy of a molecular orbital where \(H_{rs} = \left \langle \varphi _r |h_{eff} |\varphi _s\right \rangle\) and \(S_{rs} = \left \langle \varphi _r |\varphi _s\right \rangle\). Minimization of the energy with respect to each of the coefficients again yields a set of simultaneous equations just like Equation \(\ref{10.47}\).

\[\sum \limits _r c_r (H_{tr} - S_{tr}\epsilon) = 0 \label {10.52} \]

As before, these equations can be written in matrix form, Equation \(\ref{10.49}\). Equation \(\ref{10.49}\) accounts for one molecular orbital. It has energy \(\epsilon \), and it is defined by the elements in the C' column vector, which are the coefficients that multiply the atomic orbital basis functions in the linear combination of atomic orbitals. We can write one matrix equation for all the molecular orbitals.

\[HC = SCE \label {10.53}\]

where H is a square matrix containing the Hrs, the one-electron energy integrals, and C is the matrix of coefficients for the atomic orbitals. Each column in C is the C' that defines one molecular orbital in terms of the basis functions. In extended Hückel theory, the overlap is not neglected, and S is the matrix of overlap integrals. E is the diagonal matrix of orbital energies. All of these are square matrices with a size that equals the number of atomic orbitals used in the LCAO for the molecule under consideration.

Equation \(\ref{10.53}\) represents an eigenvalue problem. For any extended Hückel calculation, we need to set up these matrices and then find the eigenvalues and eigenvectors. The eigenvalues are the orbital energies, and the eigenvectors are the atomic orbital coefficients that define the molecular orbitals in terms of the basis functions.

Exercise \(\PageIndex{7}\)

What is the size of the H matrix for HF? Write out the matrix elements in the H matrix using symbols for the wavefunctions appropriate to the HF molecule. Consider this matrix and determine if it is symmetric by examining pairs of off-diagonal elements. In a symmetric matrix, pairs of elements located by reflection across the diagonal are equal, i.e. Hrc = Hcr where r and c represent the row and column, respectively. Why are such pairs of elements equal? Write out the S matrix in terms of symbols, showing the diagonal and the upper right portion of the matrix. This matrix also is symmetric, so if you compute the diagonal and the upper half of it, you know the values for the elements in the lower half. Why are pairs of S matrix elements across the diagonal equal?

The elements of the H matrix are assigned using experimental data. This approach makes the extended Hückel method a semi-empirical molecular orbital method. The basic structure of the method is based on the principles of physics and mathematics, while the values of certain integrals are assigned by using educated guesses and experimental data. The Hrr are chosen as valence state ionization potentials with a minus sign to indicate binding. The values used by R. Hoffmann when he developed the extended Hückel technique were those of H.A. Skinner and H.O. Pritchard (Trans. Faraday Soc. 49 (1953), 1254).
These values for C and H are listed in Table 10.1. The values for the heteroatoms (N, O, and F) are taken from Pople and Beveridge (Approximate Molecular Orbital Theory, McGraw-Hill Book Company, New York, 1970).

Table \(\PageIndex{1}\): Ionization potentials of various atomic orbitals.

Atomic orbital | Ionization potential (eV)
H 1s | 13.6
C 2s | 21.4
C 2p | 11.4
N 2s | 26.0
N 2p | 13.4
O 2s | 32.3
O 2p | 14.8
F 2s | 40.0
F 2p | 18.1

The Hrs values are computed from the ionization potentials according to

\[H_{rs} = \dfrac {1}{2} K (H_{rr} + H_{ss})S_{rs} \label {10.54}\]

The rationale for this expression is that the energy should be proportional to the energy of the atomic orbitals, and should be greater when the overlap of the atomic orbitals is greater. The contribution of these effects to the energy is scaled by the parameter K. Hoffmann assigned the value of K after a study of the effect of this parameter on the energies of the occupied orbitals of ethane. The conclusion was that a good value for K is K = 1.75.

Exercise \(\PageIndex{8}\)

Fill in numerical values for the diagonal elements of the extended Hückel Hamiltonian matrix for HF using the ionization potentials given in Table 10.1.

The overlap matrix also must be determined. The matrix elements are computed using the definition \(S_{rs} = \left \langle \varphi _r |\varphi _s\right \rangle\) where \(\varphi _r\) and \(\varphi _s\) are the atomic orbitals. Slater-type orbitals (STOs) are used for the atomic orbitals rather than hydrogenic orbitals because integrals involving STOs can be computed more quickly on computers. Slater-type orbitals have the form

\[\phi _{1s} (r) = 2\zeta ^{3/2} \exp (- \zeta r)\]

\[\phi _{2s} (r) = \phi _{2p} (r) = \left (\dfrac {4\zeta ^5}{3} \right )^{1/2} r \exp (- \zeta r) \label {10.55}\]

where zeta, \(\zeta\), is a parameter describing the screened nuclear charge. In the extended Hückel calculations done by Hoffmann, the Slater orbital parameter \(\zeta\) was 1.0 for the H 1s orbital and 1.625 for the C 2s and C 2p orbitals.

Exercise \(\PageIndex{9}\)

Describe the difference between Slater-type orbitals and hydrogenic orbitals.

Overlap integrals involve two orbitals on two different atoms or centers. Such integrals are called two-center integrals. In such integrals there are two variables to consider, corresponding to the distances from each of the atomic centers, rA and rB. Such integrals can be represented as

\[S_{A_{2s}B_{2s}} = \left (\dfrac {4\zeta ^5}{3}\right ) \int r_A \exp (- \zeta r_A)\, r_B \exp (- \zeta r_B)\, d\tau \label {10.56}\]

but elliptical coordinates must be used for the actual integration. Fortunately the software that does extended Hückel calculations contains the programming code to do overlap integrals. The interested reader will find sufficient detail on the evaluation of overlap integrals and the creation of the programmable mathematical form for any pair of Slater orbitals in Appendix B4 (pp. 199-200) of the book Approximate Molecular Orbital Theory by Pople and Beveridge. The values of the overlap integrals for HF are given in Table 10.2.

Exercise \(\PageIndex{10}\)

Using the information in Table 10.2, identify which axis (x, y, or z) has been defined as the internuclear axis. Fill in the missing values in Table \(\PageIndex{2}\). This requires no calculation, only insight.
Table \(\PageIndex{2}\): Overlap Integrals for HF

      | F 2s | F 2px | F 2py | F 2pz | H 1s
F 2s  |      |       |       |       | 0.47428
F 2px |      |       |       |       | 0
F 2py |      |       |       |       | 0.38434
F 2pz |      |       |       |       | 0
H 1s  |      |       |       |       |

Exercise \(\PageIndex{11}\)

Using the information in Tables 10.1 and 10.2, write the full Hückel H matrix and the S matrix that appear in Equation \(\ref{10.53}\) for HF.

Our goal is to find the coefficients in the linear combinations of atomic orbitals and the energies of the molecular orbitals. For these results, we need to transform Equation \(\ref{10.53}\)

\[HC = SCE \label {10.53}\]

into a form that allows us to use matrix diagonalization techniques. We are hampered here by the fact that the overlap matrix is not diagonal because the orbitals are not orthogonal. Mathematical methods do exist that can be used to transform a set of functions into an orthogonal set. Essentially these methods apply a transformation of the coordinates from the local coordinate system describing the molecule into one where the atomic orbitals in the LCAO are all orthogonal. Such a transformation can be accomplished through matrix algebra, and computer algorithms for this procedure are part of all molecular orbital programs. The following paragraph describes how this transformation can be accomplished.

If the matrix \(M\) has an inverse \(M^{-1}\), then

\[MM^{-1} = 1 \label {10.57}\]

and we can place this product in a matrix equation without changing the equation. When this is done for Equation \(\ref{10.53}\), we obtain

\[HMM^{-1}C = SMM^{-1} CE \label {10.58}\]

Next multiply on the left by \(M^{-1}\) and determine \(M\) so that the product \(M^{-1}SM\) is the identity matrix, i.e. a matrix that has 1's on the diagonal and 0's off the diagonal, as is the case for an orthogonal basis set.

\[ M^{-1}HMM^{-1}C = M^{-1}SMM^{-1}CE \label {10.59}\]

which then can be written as

\[H''C'' = C''E \label {10.60}\]

\[\text{where } H'' = M^{-1}HM \text{ and } C'' = M^{-1}C \label {10.61}\]

The identity matrix is not included because multiplying by the identity matrix is just like multiplying by the number 1. It doesn't change anything. The \(H''\) matrix can be diagonalized by multiplying on the left by the inverse of \(C''\) to find the energies of the molecular orbitals in the resulting diagonal matrix \(E\).

\[E = C''^{-1}H''C'' \label {10.62}\]

The matrix \(C''\) obtained in the diagonalization step is finally back-transformed to the original coordinate system with the \(M\) matrix, \(C = MC''\), since \(C'' = M^{-1}C\).

Fortunately this process is automated in some computer software. For example, in Mathcad, the command genvals(H,S) returns a list of the eigenvalues for Equation \(\ref{10.53}\). These eigenvalues are the diagonal elements of \(E\). The command genvecs(H,S) returns a matrix of the normalized eigenvectors corresponding to the eigenvalues. The ith eigenvalue in the list goes with the ith column in the eigenvector matrix. This problem, where \(S\) is not the identity matrix, is called a general eigenvalue problem, and gen in the Mathcad commands refers to general.

Exercise \(\PageIndex{12}\)

Using your solution to Exercise 10.28, find the orbital energies and wavefunctions for HF given by an extended Hückel calculation. Construct an orbital energy level diagram, including both the atomic and molecular orbitals, and indicate the atomic orbital composition of each energy level. Draw lines from the atomic orbital levels to the molecular orbital levels to show which atomic orbitals contribute to which molecular orbitals.
What insight does your calculation provide regarding the ionic or covalent nature of the chemical bond in HF?
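(An illustrative aside, not part of the original exercise set: the whole HF calculation can be sketched in a few lines. The H_rr values below are the Table 10.1 ionization potentials with a minus sign, the overlaps are the Table 10.2 entries with 1's on the diagonal and 0's for the mutually orthogonal fluorine orbitals, and scipy.linalg.eigh plays the role of Mathcad's genvals/genvecs for the general eigenvalue problem of Equation \(\ref{10.53}\). Treat it as a sketch, not a checked solution to Exercise 12.)

```python
import numpy as np
from scipy.linalg import eigh

labels = ["F2s", "F2px", "F2py", "F2pz", "H1s"]
Hrr = np.array([-40.0, -18.1, -18.1, -18.1, -13.6])  # eV, from Table 10.1

# Overlap matrix from Table 10.2: unit diagonal, orthogonal F orbitals,
# and the two nonzero F-H overlaps.
S = np.eye(5)
S[0, 4] = S[4, 0] = 0.47428   # <F2s |H1s>
S[2, 4] = S[4, 2] = 0.38434   # <F2py|H1s>, y being the internuclear axis

# Off-diagonal H elements from Eq. (10.54) with K = 1.75.
K = 1.75
H = np.diag(Hrr)
for r in range(5):
    for s in range(5):
        if r != s:
            H[r, s] = 0.5 * K * (Hrr[r] + Hrr[s]) * S[r, s]

# Solve the general eigenvalue problem HC = SCE of Eq. (10.53).
energies, C = eigh(H, S)      # eigenvalues ascending, eigenvectors in columns
print("  ".join(labels))
for e, col in zip(energies, C.T):
    print(f"{e:9.3f} eV  " + " ".join(f"{c:+.3f}" for c in col))
```

With the four valence electron pairs placed in the four lowest eigenvalues, the resulting coefficient pattern (large fluorine weights, modest H 1s mixing) is what the closing question about the ionic versus covalent character of the bond is probing.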
Wednesday, September 30, 2009

Beaten with hockey sticks: Yamal tree fraud by Briffa et al.

I will open a discussion thread about this development, too. Steve McIntyre has broken another hockey stick:

Yamal: a divergence problem (click) ... a copy at Climate Audit (click)

Because Climate Audit is overloaded, here's the Google cache.

The finding is very easy to describe. Briffa et al. (Science, published September 2009; see also Briffa et al., Philosophical Transactions 2008) offered another version of a "hockey stick graph", a would-be reconstruction of the temperatures in the last 2000 years that claimed to show a "sudden" warming in the later part of the 20th century, much like the discredited paper by Michael Mann et al.

Papers by Mann, Bradley, and Hughes in 1998 and 1999, included as a symbol of global warming in the previous IPCC report in 2001, indicated constant temperatures before 1900 and a dramatic warming afterwards. However, the papers have been proven wrong. If you haven't heard about the lethal bug of the Mann methodology yet, the problem of the MBH98 and MBH99 papers was that the algorithm preferred proxies - trees (or their equivalents) - that showed a warming trend in the 20th century, assuming that this condition guaranteed that the trees were sensitive to temperature.

Tuesday, September 29, 2009

Political racketeering

Special welcome to the Swedish EU presidency. Two interesting examples of blackmailing in politics emerged today.

Iran vs West (click): A hardcore Iranian lawmaker said that Iran could quit the nuclear non-proliferation treaty if the pressure from the West continues.

Eurocrats vs Czechia (click): Mirek Topolánek, the leader of the Czech center-right ODS party, said that he was effectively told by Jose Barroso that all EU countries but Czechia will have a commissioner if President Klaus doesn't become another puppet of the EU bureaucracy and doesn't sign the Treaty of Lisbon. ;-)

Monday, September 28, 2009

Four degrees Celsius in 50 years?

Last week, Yugratna Srivastava, a 13-year-old Indian girl, was hired by the United Nations to present a poem to the world's leaders and to humanity. In the tradition of Nazi and Soviet methods of propaganda, a kid was asked to explain that our world is gonna fry unless everyone buys all the ideology and policies that her propagandistic employers wanted her to disseminate. There apparently exist adults whose skulls are comparably unhinged. The girl wasn't strong enough to convince the world about the looming catastrophe - and they need a much stronger "momentum" for the Copenhagen negotiations that should efficiently cripple the world's economy.

2009 physics Nobel prize: speculations

Update: The 2009 physics Nobel prize went to Charles Kuen Kao (1/2), Willard Boyle (1/4), and George Smith (1/4): see a newer blog article.

Next week, Scandinavia will tell us about their choice of Nobel prizes for 2009. The physics Nobel prize will be announced on Tuesday, October 6th, at 11:45 a.m., Swedish time. Who is going to win the physics award that has preserved its exceptional status because the prize has never been flagrantly misdirected, unlike the peace Nobel prize, so far?
First, let us summarize the winners since October 2004 when this blog was born:

Now, it may be fun to recall some predictions made in the previous years:

Very soon, I will review some older scenarios which may still be possible in 2009. Meanwhile, Thomson Scientific offered their own new predictions based on their algorithm analyzing the network of citations. They managed to accurately guess the 2007 winners, Fert and Grünberg, although they did so already in 2006, and F+G were not their top choice.

Sunday, September 27, 2009

First Czecho-Slovak Superstar

See also: Dominika Stará vs Martin Chodúr
See also: Dominika Stará: Je suis Malade

After a couple of Czech (CZ) Pop Idols and Slovak (SK) Pop Idols and one year with the Czech X-Factor, the Czech and Slovak contests were wisely unified. This guy had only been rehearsing the song for one hour - during the reduction from 118 to 90 contestants. In my opinion, Martin Chodúr's rendition of "Supreme" was more convincing and testosterone-loaded than the original version by Robbie Williams.

The moderators are Mr Leoš Mareš (CZ) and Ms Adéla Banášová (SK) and they're doing a superb job. I used to dislike Mareš because he seemed excessively pompous concerning his extraordinarily high income etc. - but these negative emotions of mine are gone by now. There are two Czech and two Slovak judges - with all four sex/nation combinations: Mr Palo Habera (SK, younger), Mr Ondřej Hejma (CZ, older), Ms Dara Rollins (SK, blonde), Ms Marta Jandová (CZ, brunette).

Friday, September 25, 2009

Pope visits the Czech infidels

The leaders of the Czech Republic and the Vatican in their characteristic hats. Note the similarity between the two.

Tomorrow, the Holy Father arrives in Czechia, which is probably the most atheist country in the world. The Reference Frame wishes him a lot of good luck and a nice, relaxing stay. On Monday, we celebrate a national holiday, St Wenceslaus Day (from the Christmas carol "Good King Wenceslas"), honoring our patron and one of the first dukes (and de facto kings), who was murdered by his brother in the town of Boleslav that the Holy Father will visit. For 95% of the Czechs, it's just another work-free day, as we will explain.

D-braneworlds strike back

Today, Mirjam Cvetič, James Halverson, and Robert Richter wrote the first hep-th paper (one that might normally be a hep-ph paper, I think):

Mass hierarchies from MSSM orientifold compactifications

Recall that the main detailed classes of phenomenological scenarios within string theory are:

• weakly coupled heterotic strings on Calabi-Yau three-folds
• its strongly coupled version, Hořava-Witten heterotic M-theory on Calabi-Yau three-folds
• M-theory on singular G2 holonomy manifolds
• F-theory on Calabi-Yau four-folds and its type IIB descriptions
• type IIA braneworlds with D6-branes and orientifolds (and lots of quiver diagrams)

Their subsets are related by various dualities, and they have various advantages and disadvantages.

Thursday, September 24, 2009

Google Chrome Frame for Internet Explorer

Microsoft Internet Explorer users are recommended to install Google Chrome Frame: download, info. It is a plug-in for MSIE 6/7/8 that replaces the Microsoft JavaScript engine by a much faster Chrome JavaScript engine. The Chrome engine also adds support for HTML5, canvas, and other features.
The plug-in is only activated for websites whose webmasters have inserted the following meta tag into their pages:

<meta content='chrome=1' http-equiv='X-UA-Compatible'/>

But The Reference Frame is among them. As far as my measurements go, it used to take 10 seconds from pressing the "TRF" button to seeing the top of the right sidebar in Internet Explorer. This rather long time makes TRF an excellent benchmark. ;-) With Google Chrome Frame, the time was reduced to 6 seconds. That's an improvement. But my Google Chrome 4.0 shows the sidebar in 3 seconds, much like the newest official Mozilla Firefox, namely 3.5.3. Chrome is much faster in some respects: for example, its startup is literally immediate.

Poland, Estonia win: indulgences for free

Breaking news: Reuters is finally learning how to write balanced and attractive articles. The article called "U.N. climate meeting was propaganda: Czech president" is currently the most popular article on the Reuters website, ahead of the sex of Mackenzie Phillips (see the list in the right lower corner of any Reuters article): they switched the places (screenshot). I guess that Drudge Report did help a bit. ;-) See also Klaus's U.N. speech about the ways (not) to solve the crises. The Guardian's most popular article is dedicated to the same U.N. climate meeting and is called "Obama the Impotent".

EurActiv, the Times, and others inform us that Poland and Estonia have won: the Court of First Instance ruled that the European Commission didn't have the right to cut the carbon quotas for these two countries, because the countries themselves should set the numbers and the commission may only review them. :-)

Tuesday, September 22, 2009

Israel: optimizing a strike on Iran

David Petrla and some Pentagon sources cited in the media have convinced me that Israel is completing its plans to attack Iranian nuclear and military facilities. According to Dmitry Medvedev, who is not a spokesman for Israel, Peres is telling people that Israel has no such plans, but Netanyahu clearly thinks differently. ;-)

A typical Israeli soldier

Israel knows that Obamaland and many other Western or otherwise powerful countries suck as allies, that the mostly self-sufficient Iran doesn't really care about sanctions (especially not the homeopathic ones), and that the verbal attacks from Iran, combined with its accelerating nuclear efforts, represent a genuine existential threat to its very existence. Iran's freedom to play with dangerous materials ends where the freedom - and life - of others begins. And I agree that they have already crossed the border.

Pictures from the anti-Obama rally in D.C.

This is not a full-fledged article. But Ross Hedvíček of Florida posted pretty cool pictures of the anti-Obama rally in Washington D.C. that took place a week ago or so. Click the picture above to get to the article ("Comrade Obama has only been caressed in Czechia") to see many more photographs like that. About a million witty people of all races, ages, and sexes attended the rally (more pix!) but only the protester above has won the Rally TRF Hottie award. Congratulations.

Climate in the U.N.

By the way, there was a climate meeting somewhere in New York City today. Its purpose was for Prof Václav Klaus to teach his students, the other politicians, something about society, economics, politics, and their interactions with science, taking the global warming hoax as the main example.
But most of them are bad students, so they were far too distracted by pornographic thoughts and learned almost nothing. For instance, a little Nicolas has proposed one more intercourse with his friends in November. The media are pretty much full of their pornographic thoughts.

The Guardian, a British socialist daily, decided that Obama can give a bad, awfully ho-hum speech, too. Yes, that's the speech. The ordering of the words is pretty much irrelevant so you don't have to watch the video with the hogwash.

Reuters managed to publish some sensible information about the meeting in the article called "U.N. climate meeting was propaganda"... The president said: "It was sad and it was frustrating. It's a propagandistic exercise where 13-year-old girls from some far-away country perform a pre-rehearsed poem. It's simply not dignified." Oh, OK, I meant the Czech president. ;-)

On Thursday, at 8/7 Central, ABC is gonna broadcast FlashForward by Robert Sawyer. The series will begin at the LHC in CERN. The point of the series is to discuss fate and destiny. Everything will be about a strange event. For "1/alpha" seconds - where "alpha" is the fine-structure constant, i.e. for roughly 137 seconds - every human being will be able to perceive the following 6 months of their lives. ;-)

Monday, September 21, 2009

Kenya: rainmakers key to consensus on climate change

AFP reports that Kenya's Nganyi rainmakers are being enlisted to mitigate the effects of climate change: Kenya rainmakers called to the rescue (click).

Alexander Okonda's great-grandfather was also a rainmaker. In the 1910s, he was arrested by the British because they determined that he had been responsible for poor rainfall. Now, the great-grandson is getting the credit he deserves. As the methods of climatology have been strikingly transformed, he is appreciated as a top scientist.

Alexander Okonda blows through a reed into a pot embedded in a tree hollow and containing a secret mixture of sacred water and herbs. "This contains so much information. It is something I feel from my head right down to my toes," says Alexander, after completing his ritual. The young man is a member of the Nganyi community, a clan of traditional rainmakers that for centuries has made its living disseminating precious forecasts to local farmers.

Nothingness spreading in de Sitter space

Maulik Parikh (now Pune, India) posted the first hep-th preprint today, and I think it is the most interesting one: Enhanced Instability of de Sitter Space in Einstein-Gauss-Bonnet Gravity (click). He argues that the Gauss-Bonnet term - the topological Euler density (in 4D) - may look inconsequential perturbatively, yet it decides about the life and death of de Sitter backgrounds.

Recall that the Lagrangian of the Einstein-Gauss-Bonnet system is

L = (1/16πG) [ R + α (R_{μνρσ}R^{μνρσ} - 4 R_{μν}R^{μν} + R²) ].

Besides the Einstein-Hilbert term, you can see the topological Gauss-Bonnet term multiplied by the area α. Because the pair-creation of black holes involves some topology change, the last term matters and increases the nucleation rate, so that

Γ = Γ_orig exp(4πα/G).

The second factor - the enhancement - becomes huge if the Gauss-Bonnet area α is much bigger than the Planck area G. That's expected to be the case even in perturbative string theory, where α is comparable to the squared string scale, or at least Maulik says so.
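To get a feeling for how violent this enhancement is, here is a tiny back-of-the-envelope script - my own illustration, not anything from the paper - that prints the size of exp(4πα/G) for a few sample values of the ratio α/G:

```python
import math

# A rough numerical illustration (mine, not from Parikh's paper) of the
# Gauss-Bonnet enhancement factor exp(4*pi*alpha/G). The factor is so huge
# that we print its base-10 logarithm instead of the factor itself.
for alpha_over_G in (1.0, 10.0, 100.0):
    ln_factor = 4.0 * math.pi * alpha_over_G        # natural log of the enhancement
    log10_factor = ln_factor / math.log(10.0)
    print(f"alpha/G = {alpha_over_G:5.0f}  ->  enhancement ~ 10^{log10_factor:.0f}")
```

Even a modest hierarchy between α and G (the output is roughly 10^5, 10^55, and 10^546 for the three ratios above) turns the naively negligible topological term into an astronomically large multiplicative factor.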
When the enhancement is large, you should care about the original decay rate,

Γ_orig = exp(-πL²/3G),

where L is the curvature radius of the de Sitter space. Without the α-enhancement, this rate would be negligible for any de Sitter space that is visibly bigger than the Planck scale. However, with the α-enhancement, the decay rate becomes significant. For an inflating Universe, the Hubble radius 1/H has to be greater than sqrt(12α), otherwise the instanton creates lots of black holes which are probably unhealthy for the inflationary mechanism. In the example above, this means that the radius must exceed the string scale (with a particular numerical prefactor). This doesn't sound like too dramatic a constraint but because the inflation scale is often close to the string scale, it could be a nontrivial constraint.

Of course, it would be even more interesting to discover that there is a new, unexpectedly huge contribution to the Gauss-Bonnet term that makes α close to the squared neutrino Compton wavelength. If this were the case, one could derive a constraint on the cosmological constant. ;-) Such a huge α is probably impossible but it would be fun if there were one. There could exist similar enhancements and instabilities of this kind - and maybe their higher-dimensional counterparts - that could eliminate many kinds of compactifications with too small radii, too complicated topologies, and so on. Quantum cosmologists should try to study these possibly neglected mechanisms intensely.

By the way, this is related to one point that I dislike about the current approach of the anthropic people. For most features of the Universe, they can't find any strong and accurate enough anthropic constraint. But if they can "explain" something using this anthropic reasoning, they're satisfied. This is fundamentally unscientific thinking because one should always try to find "all" conceivable constraints - and the "other solutions" (such as the black hole creation) could actually be more important, more stringent, more predictive, and more true than the ones that the anthropic people "guess" by chance.

ISS with NS5-branes

By the way, the second hep-th paper is also interesting and it is also about vacuum selection. Kutasov, Lunin, McOrist, and Royston study the landscape of vacua obtained from D4-branes (and other D-branes) stretched between NS5-branes. They end up with an Intriligator-Seiberg-Shih-like SUSY breaking setup and argue that the early cosmology pushes the Universe towards a particular SUSY-breaking local minimum.

Sunday, September 20, 2009

The Age of Stupid

The filmmakers from the Horrifying Anthropogenic Global Warming Activist Socialist Hysteria (HAGWASH for short) are trying to create a new hit, The Age of Stupid. The world is gonna burn and mankind dies as early as 2055: see the realistic countdown before the final solution, the extinction of life in 2055. An old guy, Pete Postlethwaite, who is the last person alive ;-), looks at his media collections from 2008 or so and decides that everyone was stupid because he didn't save the world. Check that all famous buildings are gonna be destroyed by a few tenths of a degree of warming.

But the people who are ready to consider this piece of dirty unscientific shrill propaganda to be a serious documentary - which is how it's being marketed in many places of the world - are not just stupid. They deserve a far stronger term.
The wiser ones may consider reading the NIPCC (Nongovernmental International Panel on Climate Change) report, which is a truly comprehensible, nonsense-free, and comprehensive 880-page-long summary of the state-of-the-art research in climate science. Click the cover to initiate the purchase.

Hat tip: Alexander Ač

Saturday, September 19, 2009

The Da Vinci Code

I have finally watched The Da Vinci Code, based on the 2003 bestselling book by Dan Brown. And it was pretty impressive. Spoilers follow. If you don't know, in this novel, some mysterious murders turn out to be results of a big battle between two social or religious groups. One of them is supposed to protect the descendants of Jesus Christ and his wife, Mary Magdalene, who could prove that Jesus was a human being. The other one wants to protect the big dirty secret of the Christian Churches, namely Jesus's humanity.

Klaus: Is there a common European idea?

I am thankful for the invitation to these inspiring "Passau Dialogues". And I happily add that it is an honor to be given the opportunity to lead a discussion with such an important personality of contemporary Europe as - beyond any doubts - cardinal Schönborn surely is. We will certainly discuss neither the details of the church orthodoxy - in which I wouldn't be an appropriate partner - nor the ever returning questions about the relationships between the state and the church. Also, I will avoid temptations to offer alternative hypotheses about the origin of the financial and economic crisis or similar topics of my discipline, the economic science.

Friday, September 18, 2009

China's top climatologist: 2 °C probably no problem

The Guardian informs about the opinions of the top climatologist of a group of 1.35 billion people that calls itself by a funny name, People's Republic of China. Mr Xiao Ziniu says that it has not been determined whether the warming by 2 °C - which is often talked about as the "cutoff" that must not be exceeded before 2050 (it won't happen, anyway!) - is dangerous. China has experienced warmer periods than today and each change of the temperature brings some advantages and some disadvantages.

TBBT & Sheldon Cooper: Xmas scene runs for Emmy

After having won the corresponding TCA award in August 2009, Jim Parsons (Dr Sheldon Cooper of The Big Bang Theory) has also been nominated in the "best actor in a comedy series" category of the Emmy awards. He's excellent, flawless, and - let me admit - in many ways better than the original. ;-) This Christmas or Saturnalian scene (from 2x11, The Bath Item Gift Hypothesis) remains my most favorite one. It's just touching. As an Emmy n00b, Parsons probably won't follow quite a straightforward path to his Emmy. And maybe he will. Kind of wisely, however, the scene above has been chosen as his bath item gift to the Emmy voters and as the trademark example of his unusual skills as an actor.

ESA: Planck sends first images

If you remember, ESA launched Planck in May 2009. Four months later, we have the first images that should eventually (after six months) supersede the well-known WMAP images. BBC and others report. Click to zoom in. The temperature variations measured by Planck in nine frequency ranges are depicted inside the strip, by the usual WMAP-like mottled colors. Planck rotates roughly once a minute.
Czech, Polish missile defense system shelved

CERN wants a linear collider

The LHC is not yet operating - it will begin in mid November, with reduced-energy collisions added a few weeks later - but the CERN director, Rolf-Dieter Heuer, already wants to build a new linear collider at CERN. In his modest office with socialist-style furniture, he also explains the difficult cleaning procedures and even more difficult preemptive policies. Heuer is optimistic about their control over the LHC, which seems much smoother than LEP (the previous Lot of Extra Problems collider) even though LEP was simpler. In a few years, the LHC will have years of experience of running at 14 TeV, he says, plus important discoveries, he hopes. Also, the European-American symmetry has been spontaneously broken and people suddenly come to CERN. ;-)

Heuer thinks that science needs global, continental, as well as national projects to preserve the expertise of the people. CERN has the capacity to host the International Linear Collider (ILC) or the 3-TeV, 48-km Compact Linear Collider (CLIC; and click the word, haha): see the picture. But competition is always welcome, Heuer says - as long as the symmetry is broken and others have no chance. ;-)

Wednesday, September 16, 2009

Kyoto II: Obama vs Eurocrats

An entertaining split between Europe and America has emerged concerning the question of how the carbon emissions reductions should be achieved in individual nations.

Obama and Barroso in Prague, April 2009. Things may have been different then.

As The Telegraph, The Guardian, and everyone else reports, Europe and America differ in their opinions on how the internal rules to reduce the CO2 production should be set. The European politicians think that Kyoto I has been such an amazing success ;-) that it should be repeated and its successes should be amplified. Among other things, it means that all nations should adopt the same internal mechanisms to punish the CO2 emissions. The U.S. economy should be controlled by the Eurocrats in Brussels in the same way as any other decent EU country and Barack Obama should remain what he is appreciated for, namely a puppet of the global political correctness headquarters that should stay in Brussels.

On the other hand, Barack Obama himself dared to disagree. Kyoto I hasn't been a sufficiently huge disaster so the U.S. president wants to engineer an even better scheme. As the first post-Hoover protectionist president of a country that rejected Kyoto I and is going to reject Kyoto II as long as it is isomorphic (and gives a free pass to the poorer emerging markets), he thinks that every country should be allowed to decide about its own methods to achieve the targets and the carbon flows in America should remain uncontrollable by the EU and the U.N. That's quite a heresy for the EU, comrade Obama! ;-)

Even Steven Chu has warned that deep CO2 reductions cannot be achieved politically in the U.S. Why doesn't he follow the example of the tall and strong Napoleon in France who defeated 74% of the French citizens and imposed a carbon tax upon them? ;-) Sarkozy also wants to start a world trade war with a new CO2 border tax. The Swedish EU presidency also urges the U.S. Senate to behave; if they won't, the U.S. Senators will be spanked just like any bad EU kids. ;-)

It's not hard to understand Europe's newly gained self-confidence with respect to America.
The Made-In-America downturn has allowed Europe to surpass North America as the wealthiest region of the world. And the future fate of the U.S. dollar (now at 1.475 dollars per euro, or 17 crowns per dollar) - whose reserve status is being questioned by all members of BRIC as well as others (everyone can see that the U.S. may suffer from the same kind of an irresponsible socialist government as everyone else) - may turn out to have something to do with this picture.

The declared purpose of the December 2009 negotiations in Copenhagen, which will hopefully fail completely, is to save the Earth if not the multiverse. The UAH AMSU data show the average annual and global brightness temperature of the Earth to be close to minus 15.5 °C. Ban Ki-Moon and similar stellar scientists have calculated that if the temperature exceeds f***ing frying minus 13.5 °C, which is 2 °C higher, all of us are going to evaporate or transform into plasma and the Universe may decay into a different state, too. And I don't have to explain to you the staggering statistical implications for the whole multiverse. ;-)

During the year, the brightness temperature oscillates approximately between -17 °C in January and -14 °C in July - because the temperature variations of the landmass, which is mostly on the Northern Hemisphere, are more pronounced than the variations of the oceanic temperatures. The recent, 30-year trends indicate that the temperature is increasing roughly by 1 °C per century, so the catastrophic level, when the temperature will oscillate between -15 °C and -12 °C, could occur around the year 2200 or so - whether or not we continue to use fossil fuels. If you have ever experienced how much brutally hotter -12 °C is relative to -14 °C, you must agree with all these guys that we're all doomed already next year - because we can already predict that the year 2200 will come - unless Obama and his compatriots join the EU as obedient members. :-)

Myths about the minimal length

Many people interested in physics keep on believing all kinds of evidently incorrect mystifications related to the notion of a "minimal length" and its logical relationships with the Lorentz invariance. Let's look at them.

Myth: The breakdown of the usual geometric intuition near the Planck scale - sometimes nicknamed the "minimum length" - implies that the length, area, and other geometric observables have to possess a discrete spectrum.

Reality: This implication is incorrect. String theory is a clear counterexample: distances shorter than the Planck scale (and, perturbatively, even the string scale) cannot be probed because there exist no probes that could distinguish them. For example, the perturbative stringy uncertainty relation Δx ≳ ħ/Δp + α′Δp/ħ implies a minimal resolvable distance of order the string length even though the spectra of the geometric observables remain continuous. Consequently, the scattering amplitudes become very soft near the Planck scale and the divergences disappear.

Blog2Print: print blogs as books

Click to zoom in.

Tuesday, September 15, 2009

Smartkit: On the edge game

Click the screenshot for the game. Jump on each white square once before you end up on the red square.

Monday, September 14, 2009

Murray Gell-Mann: 80th birthday and interview

On Tuesday, Murray Gell-Mann celebrates his 80th birthday. Big congratulations! This article will summarize some old achievements of the great physicist but also discuss some of his recent opinions about string theory.

Murray Gell-Mann was born on September 15th, 1929 on the Lower East Side of New York to a family of Western Ukrainian Jewish immigrants.
When he was fifteen, he joined Yale. ;-) See some pictures from his early life. In the 1950s, when he was in his 20s, he studied cosmic rays and discovered/invented strangeness in order to make sense of isospin, other quantum numbers, and their relationships (e.g. using the key Gell-Mann-Nishijima formula). I wrote his biography one year ago, in Oskar Klein and Murray Gell-Mann: birthdays. So I won't write everything again. Let me just say that Murray Gell-Mann was the most important one among the first pioneers who realized that there were quarks inside hadrons, which is what earned him the 1969 physics Nobel prize. Note that all these things, including the award, had been completed years before the discovery of QCD.

Clifford Johnson: LASER

A pretty good, non-technical explanation of how lasers work. Well, the reason why the photons end up going in the same direction is slightly underexplained but the very idea of a particle-physics choreography is neat. Via Asymptotia.

Global warming affects beer, eggs, corn, pork

Rafa has pointed out that Nude Socialist as well as lots of other media have reported that global warming makes beer suck: some Czech researchers think that the concentration of (bitter) alpha acids in hops was recently dropping by a whopping 0.06 percent per year (...) which they attribute to global warming (...). That's a true catastrophe (...) which finally proves that we are all doomed. Click the sentence below to read more.

Saturday, September 12, 2009

Schrödinger's virus and decoherence

The physics arXiv blog, Nature, Ethiopia, Softpedia, and many people on Facebook were thrilled by a new preprint about the preparation of Schrödinger's virus, a small version of Schrödinger's cat. The preprint is called Towards quantum superposition of living organisms (click) and it was written by Oriol Romero-Isart, Mathieu L. Juan, Romain Quidant, and J. Ignacio Cirac.

They wrote down some basic stuff about the theory and a pretty clear recipe for how to cool down the virus and how to manipulate it (imagine a discussion of the usual "atomic physics" devices with microcavities, lasers, ground states, and excited states of a virus, and a purely technical selection of the most appropriate virus species).

It is easy to understand the excitement of many people. The picture is pretty and the idea is captivating. People often think that living objects should be different from the "dull" objects studied by physics. People often think that living objects - and viruses may or may not be included in this category - shouldn't ever be described by superpositions of well-known "privileged" wave functions. Except that they can be and it is sometimes necessary. Quantum mechanics can be baffling but it's true.

Friday, September 11, 2009

CO2 makes Earth greenest in decades

In June 2009, Anthony Watts reposted an article by Lawrence Solomon that pointed out that the Earth is greener than it has been in decades if not centuries. See also NASA's animations of this Earth (the map of its bio-product), for example the low-resolution one.

Note that the CO2 concentration grows by 1.8 ppm a year, which is about 0.5% a year. It adds up to approximately 10% per 20 years. In other words, the relative increase of the GPP is more than one half of the relative increase of the CO2 concentration.
The plants also need solar radiation and other things that haven't increased (or at least not that much), which is why the previous sentence says "one half" and not "the same as". Because the CO2 concentration in 2100 (around 560 ppm) may be expected to be roughly 50% higher than today (around 385 ppm), it is therefore reasonable to expect that the GPP will be more than 25% higher than it is today. Even by a simple proportionality law, assuming no improvements in the quality, transportation, and efficiency of food production for a whole century, the GPP in 2100 should be able to feed 1.25 × 6.8 = 8.5 billion people, besides other animals. Of course, in reality, there will be lots of other improvements, so I find it obvious that the Earth will be able to support at least 20 billion people in 2100 if needed. On the other hand, I think that the population will be much smaller than 20 billion, and perhaps closer to those 8.5 billion mentioned previously.

Back to the present: oxygen

Now, in September 2009, Anthony Watts mentions a related piece of work that some Danish researchers just published in Nature:

Copenhagen press release
Paper in Nature

The authors have studied chromium (not chrome!) isotopes in iron-rich stones to determine some details about the oxygenation of the oceans and the atmosphere that occurred 2+ billion years ago. In two different contexts, they are forced to conclude that an increased concentration of oxygen in the oceans and the atmosphere led to cooling.

The authors say a couple of things about the ice ages that are manifestly incorrect. They say that the oxygen concentration could have been the key driver behind the temperature swings during the glaciation cycles: a higher amount of oxygen allowed the organisms to consume more CO2 and other greenhouse gases, which reduced the temperature via a weaker greenhouse effect. That's clearly incompatible with the fact that the temperature was changing roughly 800 years before the concentration of the greenhouse gases did. The temperature variations couldn't have been an effect caused by the greenhouse gases, not even if you try to add oxygen to the sequence of all the correlated phenomena. However, it's plausible that the oxygen levels influenced the temperature more directly (which consequently influenced the concentrations of trace gases, via outgassing).

A simple additional comment I can make is that higher concentrations of oxygen may be increasing the albedo (reflectivity) of the oceans and the landmass by adding life forms which may be optically brighter than the dead soil and oceans and/or the life forms that don't need oxygen (or because of another inequality in the energy balance of photosynthesis and/or breathing). Even if that is the case, it remains largely unknown whether the oxygen variations in the glaciation periods were sufficient to drive the temperatures (I guess that they're not) and even if they were sufficient, it would remain to be seen what their cause was.

Thursday, September 10, 2009

Abiogenic birth of oil

At least a large portion of petroleum is believed to originate from biological processes.
However, an article in Nature, Kolesnikov, Kutcherov, Goncharov: Methane-derived hydrocarbons produced under upper-mantle conditions, uses spectroscopic methods applied to laser-heated diamond anvil cells to argue that at temperatures around 750-1250 °C and pressures around 20,000 atmospheres, methane transforms into ethane or propane or butane, combined with graphite and hydrogen. Under the same conditions, ethane decomposes into methane: the transition is reversible. It should also mean that it is easier to find oil, as the Swedish Royal Institute of Technology puts it.

New oil reserves

Such a statement is not too shocking: two days ago, 1-2 billion new barrels of light oil were announced by BG in Brazil, increasing the world's proven reserves by 0.1-0.2%. One week ago, BP found 4-6 billion new barrels in the Gulf of Mexico, previously thought to be "finished".

Review of the membrane minirevolution and other hep-th papers

Today, there are twelve new papers primarily labeled as hep-th papers. The first one, and the one that may attract the highest number of readers, is a review of the membrane minirevolution by Klebanov and Torri. However, I will mention the remaining eleven preprints, too.

Membrane uprising: a review

The membrane minirevolution was discussed on this blog as a minirevolution long before most people noticed that there was a minirevolution going on. Important papers by Bagger + Lambert and by Gustavsson (BLG) introduced a new, unusual Chern-Simons-like theory with 16 supercharges in 2+1 dimensions. It was argued that it had to describe two coincident M2-branes. It used to be thought that the CFTs dual to M-theory on "AdS4 x S7/G" had no Lagrangian description, until BLG found one.

Upgraded: Hubble Space Telescope

Carina Nebula in the visible (top) and infrared (bottom) perspective. That's where stars are being born.

The Hubble Space Telescope is alive, well, and upgraded. Click the picture above to see 7 pretty new pictures (via BBC) or see Google News or Blog Search. The book advertised on the left side is just one among many other books with pretty colorful photographs that the Hubble Space Telescope has produced during those years. Let me recall that the gadget should eventually be replaced by the James Webb Telescope.

Wednesday, September 09, 2009

ASU: Origins of the Universe

On April 6th, 2009, six Nobel prize winners discussed the origins of the Universe in Arizona. If you have 64 extra minutes, and/or if you liked a similar ASU discussion of whether our Universe is unique, here I bring you a new one.

Baruch Blumberg got a medicine Nobel prize for a virus and he is an astrobiologist. Sheldon Glashow, David Gross, and Frank Wilczek are particle physicists who need no introduction. Wally Gilbert is a biochemist, Chemistry Nobel prize winner in 1980, founder of Biogen etc., capitalist, chairman of the Harvard Society of Fellows, and a photographic artist.

Frank Wilczek and Sheldon Glashow have a small fight about supersymmetry around 26:00. Wilczek explained that "axions" were named after a detergent whose name Wilczek liked so much that he waited for an opportunity to name a particle after it. Glashow reveals that WIMP stands for "Women in Maths and Physics at Harvard" which may be an actual secret organization.
:-)

9:09:09 09/09/09

This is not a real posting. Instead, it is just a placeholder posted on 09/09/09 at 09:09:09. Sorry for that! The comment thread can be used for any discussions. ;-)

By the way, the numbers could lead you to ask whether 0.9999... is equal to 1.0000... Well, you may define your numbers in any way you want. But if you want these particular, possibly infinite sequences of decimal digits to represent a number system (namely the set of real numbers) that satisfies (x/3)*3 = x, then you're forced to accept that 0.9999... must be identified with 1.0000..., simply because 1/3 = 0.3333... and 0.3333...*3 = 0.9999... ;-)

Tuesday, September 08, 2009

Hideki Yukawa: an anniversary

Today, several mathematicians and physicists would celebrate their birthday or deathday. (Some cosmologists are still confused why people don't celebrate their deathdays too often: such an asymmetry shamefully breaks the politically correct equivalence between the different arrows of time! Well, it indeed does: the breaking comes from the so-called "logical arrow of time".) Marin Mersenne was born in 1588, Joseph Liouville died in 1882, Hermann von Helmholtz died in 1894. But let us look at this guy.

Hideki Yukawa was born in Tokyo on January 23rd, 1907 and died in Kyoto on September 8th, 1981. Just like the death is the time reversal of the birth, Kyo-To is the time reversal of To-Kyo, so it makes sense in this case. When he was 26, he was hired as an assistant professor in Osaka, which was a great choice because two years later, in 1935, he published his theory of mesons. The pion was observed in 1947 and Yukawa received his Nobel prize in 1949: that was the first Japanese Nobel prize. He also predicted K-capture, i.e. the absorption of a low-lying, "n=1" electron by the nucleus of a complicated atom.

Sunday, September 06, 2009

Schellnhuber: West has exceeded quotas

In his previous life, Hans Joachim Schellnhuber used to be a fairly good theoretical physicist. For example, he would solve the Schrödinger equation with an almost periodic potential in 1983. He has spent a year or so as a postdoc at KITP in Santa Barbara (1981-82). But the times have changed. For a couple of years, he has been the director of the Potsdam Institute for Climate Impact Research and the German government's main climate protection adviser.

What he has just told Spiegel, in Industrialized nations are facing CO2 insolvency (click), is just breathtaking and it helps me to understand how crazy political movements such as the Nazis or communists could have so easily taken over a nation that is as sensible as Germany. A few rotten steps in the hierarchy are enough for a loon to get to the very top. He is proposing the creation of a CO2 budget for every person on the planet, regardless of whether they live in Berlin or Beijing. Let us allow him to speak:

Saturday, September 05, 2009

Mojib Latif warns IPCC of cooling

Nude Socialist informs that Mojib Latif, a member of the IPCC, has warned his fellow IPCC members that we could see 10-20 years of cooling that will make people question the global warming orthodoxy. Highly trustworthy sources of mine describe Latif as one of the "better ocean modelers".
He used to say that the models were perfect but when someone told him that perfect models meant that no extra funding for modelers was necessary, he "developed a deeper appreciation for the model shortcomings." ;-) So he appreciates that the ocean cycles and other factors may drive the climate in a different direction than the greenhouse effect for a decade or two. "Short-term" predictions are unreliable, he admits.

But it took me quite some time to understand the atmosphere of expectations among those people. At the beginning, I thought that Latif was just another quasi-religious guy who says that people should be afraid of global warming regardless of the observations and their consistency with the models. Later, I realized that I was probably right but I also realized that Latif was a sort of hero at the same moment. It is actually a heresy among the IPCC members to even think about the possibility that the following 10-20 years won't see any discernible global warming - despite the fact that this is precisely what has happened in the previous 10 years (and even 15 years, if you insist on statistical significance).

Friday, September 04, 2009

Magnetic monopoles seen in CM physics

Science Magazine has published a paper by 14 British and German authors, Dirac strings and magnetic monopoles in spin ice Dy2Ti2O7 (click), who claim to have seen, via diffuse neutron scattering, emergent magnetic monopoles in a spin ice on the highly frustrated pyrochlore lattice. These magnetic monopoles appear at the ends of "observable Dirac strings". This is way too bizarre a terminology, to say the least, because a basic defining property of the Dirac strings, as realized by Paul Dirac, is that they must be unobservable! ;-) OK, fine, they mean some magnetic flux tubes that actually don't respect the Dirac flux quantization rule.

See also Nature (popular), Physics World, PhysOrg, Science Daily (click).

Let me say a few words about the Dirac strings. If you imagine a magnetic monopole of charge Q, i.e. an isolated North (or South) pole of a magnet (which normally comes in the dipole form only - with both poles), the magnetic field around it is radial and it goes like Q/R². Remember the letter Q. The vector function (X,Y,Z)/R³ in three dimensions has the feature that its divergence equals zero. Well, not quite: it is a multiple of a delta-function, namely 4π δ³(x,y,z).

Is our Universe unique, and how can we find out?

If you have a spare 45 minutes, here's a fun panel discussion from April 3rd, 2009, taken during the Origins Symposium at Arizona State University. If you click the O.S. link, you may find other panels with Brian Greene, Lawrence Krauss, Steve Pinker, and many others.

Thursday, September 03, 2009

Japanese voters may have committed economic harakiri

If you haven't noticed, Taro Aso of the center-right LDP (Liberal Democratic Party of Japan), the most recent prime minister of Japan, was politically killed by the recent polls. He was a kind of character - and an openly pro-market, pro-separation-of-classes guy. That was too much of a good thing for the low-profile, emotionally conservative electorate in the world's #2 economy, which is only the #13 source of the TRF visitors. They ended 50 years of governments led by the LDP. Kimi ga Yo, or "May Your Reign Last Forever", turned out to be too optimistic anthem lyrics for the LDP of Japan.
What is it going to mean for Japan? The winner is the left-wing DPJ (Democratic Party of Japan). Yukio Hatoyama is nicknamed "The ET" or "The Alien" because he looks like one. Moreover, his wife had a trip aboard a UFO space shuttle to Venus, a beautiful planet governed by the little green party (which allowed a 400 °C CO2 greenhouse effect: the little green comrades picked Venus as the destination because they apparently don't have good mental asylums in Japan). Yukio Hatoyama comes from the "Japanese Kennedy family" and is going to become the next prime minister of Japan in two weeks.

Their program includes a schedule to screw the Japanese relationships with the United States and a sophisticated strategy to harass the Japanese corporations. The ET has been attacking the existing Japanese market economy - which he calls the "unrestrained market fundamentalism and financial capitalism that are void of morals" - for quite some time. He also wants to "put the interests of people before those of corporate Japan", a formulation that will be familiar to those who remember the communist coups d'état in the former socialist Europe, apparently not noticing that the whole Japanese post-war miracle was about the freedom of the corporations to stay ahead of the average citizen and to drag him or her to the future, which has always been in his or her best interest. Will the Japanese workers be motivated enough by their corporations to work hard enough to afford Beethoven's fifth breakfast? And will Honda's Rube Goldberg machine satisfy the CO2 limits described in the next paragraph?

Moreover, he wants to reduce Japan's CO2 production by 25% by 2020, which approximately translates into a minus 2% annual GDP growth rate for every year in the following decade. It shouldn't shock you that the Japanese companies are concerned, to put it very mildly. We will see whether the ET is able to transform Japan ($34,000 GDP per capita) into another Vietnam ($2,100 GDP per capita) or North Korea ($1,700 GDP per capita), much like many of his soulmates have repeatedly done in other countries. The Vietnamese people are doing fine in the Czech Republic, we kind of like them, and we're obviously ready to absorb thousands of Japanese emigrants, too. ;-)

Wednesday, September 02, 2009

Trillions a year to be wasted on the CO2 madness

The most experienced readers of The Reference Frame remember a Kyoto counter that used to be embedded in the sidebar. It was created by Steve Milloy of JunkScience.com and it was counting the dollars wasted on the Kyoto protocol, assuming that the annual cost of the carbon regulation was USD 150 billion. He was - and I was - criticized from all sides of the alarmist movement. Let us omit the most vitriolic stuff and look in the comment section of Deltoid. Eli Rabett (whose real identity is known to us) wrote in 2005:

The world economy is about 20 trillion per year. So even at the junk science's rather exaggerated Kyoto cost of 150 billion per year that is 0.75 percent. Well within the noise using an upper limit for the cost and a lower limit (if any) for the benefit. That, my friends is a good deal. We should grab it.

Well, the figures of these people have always been strange, even when it comes to numbers that every person with a basic interest in the world's economy should know. The world's GDP was USD 55.5 trillion in 2005, not USD 20 trillion. More importantly, the costs - USD 150 billion a year - were surely not "exaggerated".
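To make the percentages explicit, here is a trivial back-of-the-envelope check - my own arithmetic, using only the figures quoted above:

```python
# Kyoto cost as a share of the world's GDP, using the two figures quoted above.
kyoto_cost = 150e9        # USD per year, the cost assumed by the counter
gdp_quoted = 20e12        # USD, the world GDP figure used in the 2005 comment
gdp_actual = 55.5e12      # USD, the actual world GDP in 2005

print(f"share of a USD 20 trillion GDP:   {100 * kyoto_cost / gdp_quoted:.2f} %")   # 0.75 %
print(f"share of a USD 55.5 trillion GDP: {100 * kyoto_cost / gdp_actual:.2f} %")   # 0.27 %
```

The quoted 0.75 percent only comes out of the obsolete USD 20 trillion figure; with the actual 2005 GDP, the same assumed cost would be about 0.27 percent - and, as argued above, the true costs were higher than USD 150 billion anyway.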
Washington Post hails Obama as a climate skeptic

Marc Morano has pointed out an interesting article in the Washington Post, Obama Needs to Give a Climate Speech - ASAP, in which Marc Morano and Barack Obama are credited with the gradual fall of the climate hysteria or, if you want to use the original wording, with the "growing defection of experts from the scientific consensus view". ;-) You might think: What a strange pair of bedfellows. But is it really so strange?

Of course, the author, Andrew Freedman, thinks that Barack Obama is obliged to give a fiery alarmist speech to please the movement of the little green men like Freedman himself. Well, I am not 100% sure whether Freedman is the U.S. Überpresident who can control the U.S. President. ;-)

After their private conversations, President Klaus was pleasantly surprised by Obama's charm and energy. Climate realist Klaus noted that Obama has complained about his aides' and his environment's having no sense of economic reality when it comes to policies focusing on CO2. It sounded like music from heaven to Klaus's ears, he said.

I think that Freedman is right. Barack Obama has given a smaller space to climate change in his speeches than George W. Bush did at the same stage of his presidency because Barack Obama is actually a climate crypto-realist. He is just surrounded by hordes of wrong, fearmongering people - and he has become a symbol of all their wrong plans. But at the very depth of his soul, he doesn't think that it's a good idea to regulate carbon. Am I wrong?

Tuesday, September 01, 2009

An unexpected constitutional crisis in Czechia

I would bet that the situation will be clarified pretty soon but the news from the Constitutional Court of the Czech Republic, whose headquarters are located in the city of Brno, Moravia, sounds pretty shocking. All the big and not-so-big parties have begun the campaign for the early elections on October 9th-10th, 2009. Except that the Constitutional Court has just decided that the early elections and all the laws that allow them - and that shorten the mandate of the current Parliament - are unconstitutional, despite the fact that the bill about the early elections had been adopted as a constitutional bill.

What happened? Mr Miloš Melčák was elected as a deputy for the social democratic party in 2006 except that, much like a dozen similar deputies in recent years, he has "betrayed" the bulk of his party by allowing the center-right government to exist. Obviously, he was kicked out of the social democratic party. The "traitors" are being punished in a straightforward way: the parties won't include them on their lists so they will lose their jobs and feeding troughs right after the following elections.

Of course, Mr Miloš Melčák decided that any new elections that would remove him from the Parliament are bad, so they must be unconstitutional. He sent a complaint to the Constitutional Court. In a stunning development, the court ruled today that Mr Melčák is right. Congratulations. :-)

We are learning that according to the basic charter of human rights, Mr Melčák and others who are at risk enjoy the right to an "uninterrupted execution of a public appointment". They can't be removed by anyone, the court claims! ;-) The communist party used a similarly "uninterrupted" definition of democracy for four decades.
The court believes that the early elections would be an example of an "unacceptable change of the critical attributes of a democratic rule of law" - wow - and it's such important stuff for the court that the court - except for two "dissenters" - thinks that the early elections can't take place before the court publishes its final verdict about the complaint! ;-) So the elections have been postponed indefinitely.

Now, this is obviously strong stuff. On one hand, it's good that the constitutional court is trying to verify things, including the decisions that no one in the Parliament dares to doubt. On the other hand, it's kind of crazy that it considers the early elections a "brutal violation of the basic attributes of democracy" and that it claims to have the right to judge which constitutional bill is more important than the other ones. Even if there were an inconsistency between the basic charter of human rights and freedoms on one side and the bill that declares the early elections on the other, both of them are constitutional bills and the constitutional court would have to operate within this possibly perceived inconsistency.

I think that it's clear that the Parliament has the "moral" right to dissolve itself, via the expected steps involving the President, and the early elections are the obvious democratic solution (or an attempt at a solution) of the otherwise "unsolvable" situation. The interpretation of the "uninterrupted execution of a public appointment" is bizarre, speculative, reminiscent of the undemocratic regimes, and secondary. But the court is making this strangely interpreted right more important than the right of the citizens - and the bulk of their representatives - to democratically choose a new Parliament, which is clearly more important according to a basic common-sense understanding of democracy.

It's not clear how they will solve it. The court may try to delay the elections indefinitely - or not. Clearly, the lawmakers should search for a very speedy way to reshuffle the laws so that the complaint becomes moot. I am no lawyer but I guess it must be possible to revoke all the laws that were claimed to lead to inconsistencies, cancel or update some paragraphs in the charter that lead to similar inconsistencies, and accept a new bill about the early elections that will be consistent but effectively equivalent to the current one. Also, I think that the constitution is imperfectly designed if it doesn't allow early elections as a standard procedure. At any rate, the early elections have been considered legitimate for quite some time - and even without a canonical wording in the constitutional "core", we've had some early elections in the past - so the sudden realization that they're unconstitutional is strange.

World War II began 70 years ago

It's been 70 years since Poland was invaded by Germany, which ignited the most brutal global conflict that the world has seen as of 2009. One day earlier, on August 31st, Germany staged an attack by would-be Polish troops against a radio station in Gleiwitz, in order to create a "justification" for the attack on Poland.

Poland, with its underdeveloped and relatively weak army, had no real chance to win. It was surrounded by bastards on the West and on the East. The Ribbentrop-Molotov Pact (which Putin considers immoral) guaranteed that the Soviet Union would not protect Poland. In fact, the Soviet Union occupied the Baltic states and picked a piece of Poland, too.
25, 1819, Heathfield Hall, near Birmingham, Warwick, Eng. Scottish engineer and inventor. Though largely ... Watt (wŏt), James. 1736-1819. British engineer and inventor who made fundamental improvements in the steam engine, resulting in the modern high-pressure steam engine (patented ... /wot"oweur', -ow'euhr/, n. a unit of energy equal to the energy of one watt operating for one hour, equivalent to 3600 joules. Abbr.: Wh Also, watthour. [1885-90] * * * watt-hour meter ▪ instrument       device that measures and records over time the electric power flowing through a circuit. Although there are several different types of watt-hour ... /wot"sek"euhnd/, n. a unit of energy equal to the energy of one watt acting for one second; the equivalent of one joule. Also, wattsecond. * * * /wot"ij/, n. 1. power, as measured in watts. 2. the amount of power required to operate an electrical appliance or device. [1900-05; WATT + -AGE] * * * /wo toh"/; Fr. /vann toh"/, n. Jean Antoine /zhahonn ahonn twannn"/, 1684-1721, French painter. * * * Watteau back a loose, full back of a woman's gown, formed by wide box pleats hanging from a high shoulder yoke and extending to the hem in an unbroken line. [1895-1900; after a type of gown ... Watteau, (Jean-) Antoine born Oct. 10, 1684, Valenciennes, France died July 18, 1721, Nogent-sur-Marne French painter. Son of a roof tiler in Valenciennes, he was apprenticed to a local artist. At 18 ... Watteau, Antoine ▪ French painter Introduction born Oct. 10, 1684, Valenciennes, Fr. died July 18, 1721, Nogent-sur-Marne       French painter who typified the lyrically charming and ... Watteau,Jean Antoine Wat·teau (wŏ-tōʹ, vä-), Jean Antoine. 1684-1721. French painter noted for his exuberant scenes of gallantry, such as The Embarkation for Cythera (1717). * * * /wot"euhr/, n. Informal. a light bulb, radio station, etc., of specified wattage (usually used in combination): This lamp takes a 60-watter. [WATT + -ER1] * * * /wot"euhr seuhn, waw"teuhr-/, n. Henry ("Marse Henry"), 1840-1921, U.S. journalist and political leader. * * * /wot"l/, n., v., wattled, wattling, adj. n. 1. Often, wattles. a number of rods or stakes interwoven with twigs or tree branches for making fences, walls, etc. 2. wattles, a ... wattle and daub 1. Also, wattle and dab. a building technique employing wattles plastered with clay and mud. 2. a form of wall construction consisting of upright posts or stakes interwoven with ... ▪ bird also called  Puffback Flycatcher,    any of a number of small, stubby African songbirds of the subfamily Platysteirinae, family Muscicapidae (q.v.); some ... wattleand daub wattle and daub n. A building material consisting of interwoven rods and laths or twigs plastered with mud or clay, used especially in the construction of simple dwellings or as ... /wot"l berrd'/, n. 1. any of several Australian honey eaters of the genus Anthochaera, most of which have fleshy wattles at the sides of the neck. 2. any of three endemic New ... wattled [wät′'ld] adj. 1. built with wattles 2. having wattles, as a bird * * * See wattle. * * * wattless component /wot"lis/ Elect. See reactive component. [WATT + -LESS] * * * /wot"mee'teuhr/, n. Elect. a calibrated instrument for measuring electric power in watts. [1885-90; WATT + -METER] * * * ▪ France       town, Nord département, Nord-Pas-de-Calais région, northern France, on the Belgian-French border. A northeastern suburb of Roubaix, it has textile, ... /wots/, n. 1. André /ahn"dray/, born 1946, U.S. concert pianist, born in Germany. 2. 
George Frederick, 1817-1904, English painter and sculptor. 3. Isaac, 1674-1748, English ... Watts, André born June 20, 1946, Nürnberg, Ger. German-born U.S. pianist. Son of an African American soldier and a Hungarian mother, he made his debut at age nine at a Philadelphia ... Watts, George Frederick ▪ British painter born Feb. 23, 1817, London died July 1, 1904, Compton, Surrey, Eng.       English painter and sculptor of grandiose allegorical themes. Watts believed ... Watts, Isaac born July 17, 1674, Southampton, Hampshire, Eng. died Nov. 25, 1748, Stoke Newington, London English Nonconformist minister, regarded as the father of English hymnody. Watts ... Watts,George Frederick Watts, George Frederick. 1817-1904. British painter noted for his historical works, portraits, and allegories, including Hope (1885). * * * Watts, Isaac. 1674-1748. English poet, theologian, and hymn writer whose sacred poems include The Psalms of David Imitated (1719). * * * /wots"dun"tn/, n. (Walter) Theodore (Walter Theodore Watts), 1832-1914, English poet, novelist, and critic. * * * Watts-Dunton, Theodore ▪ British critic in full  Walter Theodore Watts-Dunton , original name  Walter Theodore Watts  born Oct. 12, 1832, St. Ives, Huntingdonshire, Eng. died June 6, 1914, ... /wah tooh"see/, n., pl. Watusis, (esp. collectively) Watusi. Tutsi. Also, Watutsi /wah tooht"see/. * * * Wa·tut·si (wä-to͞otʹsē) also Wa·tu·si (wä-to͞oʹsē) n. pl. Watutsi or Wa·tut·sis also Watusi or Wa·tu·sis Variants of Tutsi.   [Kinyarwanda : wa-, pl. human ... ▪ Papua New Guinea       town on the island of New Guinea, eastern Papua New Guinea, southwestern Pacific Ocean. The town is situated at the junction of Edie Creek and ... ▪ New South Wales, Australia       town, north coastal New South Wales, Australia, 12 miles (19 km) above the mouth of the Hastings River, just west of Port Macquarie. ... Waucoban Series ▪ geology       lowermost Cambrian rocks (the Cambrian Period lasted from 542 million to 488 million years ago); the name is derived from exposures found at Waucoba ... /waw/, n. 1. Alec (Alexander Raban), 1898-1981, English novelist, traveler, and lecturer (son of Arthur, brother of Evelyn). 2. Arthur, 1866-1943, English literary critic, ... Waugh, Alec ▪ English writer byname of  Alexander Raban Waugh   born July 8, 1898, Hampstead, London died Sept. 3, 1981, Tampa, Fla., U.S.       English popular novelist and ... Waugh, Auberon Alexander ▪ 2002       British writer and satirist (b. Nov. 17, 1939, Dulverton, Somerset, Eng.—d. Jan. 16, 2001, Combe Florey, near Taunton, Somerset), simultaneously delighted ... Waugh, Evelyn ▪ English author in full  Evelyn Arthur St. John Waugh  born October 28, 1903, London, England died April 10, 1966, Combe Florey, near Taunton, Somerset  English writer ... Waugh, Evelyn (Arthur Saint John) Waugh (wô), Evelyn (Arthur Saint John). 1903-1966. British writer whose satirical novels, such as Decline and Fall (1928) and Vile Bodies (1930), lampoon high society. His ... Waugh, Evelyn (Arthur St. John) born Oct. 28, 1903, London, Eng. died April 10, 1966, Combe Florey, near Taunton, Somerset English novelist. After an Oxford education, he devoted himself to solitary, ... Waugh, Hillary Baldwin ▪ 2009       American writer born June 22, 1920, New Haven, Conn. died Dec. 8, 2008, Torrington, Conn. was a prolific writer of crime novels who was especially noted ... Waugh, Mark Edward and Stephen Rodger ▪ 1998       In the second of the three cricket Tests in South Africa in March 1997, S.R. and M.E. 
Waugh, the twins from the western suburbs of Sydney, Australia, became ... /waw kee"geuhn/, n. a city in NE Illinois, on Lake Michigan, N of Chicago. 67,653. * * * ▪ Illinois, United States       city, seat (1841) of Lake county, northeastern ... /waw"ki shaw'/, n. a city in SE Wisconsin, W of Milwaukee. 50,319. * * * ▪ Wisconsin, United States       city, seat (1846) of Waukesha county, southeastern Wisconsin, ... waul [wôl] vi., n. 〚see CATERWAUL〛 wail, squall, or howl * * * /waw"saw/, n. a city in central Wisconsin. 32,426. * * * ▪ Wisconsin, United States       city, seat (1850) of Marathon county, north-central Wisconsin, U.S. It lies ... /waw'weuh toh"seuh/, n. a city in SE Wisconsin, near Milwaukee. 51,308. * * * ▪ Wisconsin, United States       city, western suburb of Milwaukee, Milwaukee county, ... —waveless, adj. —wavelessly, adv. —wavingly, adv. —wavelike, adj. /wayv/, n., v., waved, waving. n. 1. a disturbance on the surface of a liquid body, as the sea or a ... /wayv/, n. a member of the Waves. Also, WAVE. [1942; see WAVES] * * * I In oceanography, a ridge or swell on the surface of a body of water, normally having a forward motion ... wave aerobics ➡ sport and fitness * * * wave band Radio and Television. band2 (def. 9). [1920-25] * * * wave base wave base n. the depth in a body of water at which the action of surface waves stops stirring the sediments * * * wave cyclone Meteorol. a cyclone that forms on a front and, in maturing, produces an increasingly sharp, wavelike deformation of the front. * * * wave drag. See aerodynamic wave drag. * * * wave equation 1. Math., Physics. any differential equation that describes the propagation of waves or other disturbances in a medium. 2. Physics. any of the fundamental equations of quantum ... wave front Physics. a surface, real or imaginary, that is the locus of all adjacent points at which the phase of oscillation is the same. [1865-70] * * * Imaginary surface that represents ... wave function Physics. 1. a solution of a wave equation. 2. (in quantum mechanics) a mathematical function, found by solving a quantum-mechanical wave equation, that is used to predict the ... wave mechanics Physics. a form of quantum mechanics formulated in terms of a wave equation, as the Schrödinger equation. Cf. matrix mechanics. [1925-30] * * *       quantum mechanics, ... wave motion ▪ physics       propagation of disturbances—that is, deviations from a state of rest or equilibrium—from place to place in a regular and organized way. Most familiar ... wave number the number of waves in one centimeter of light in a given wavelength; the reciprocal of the wavelength. [1900-05] * * * ▪ physics       a unit of frequency in atomic, ... wave of the future a trend or development that may influence or become a significant part of the future: Computerization is the wave of the future. [phrase popularized as the title of an essay ... wave power ▪ energy       electrical energy generated by harnessing the up-and-down motion of ocean waves. Wave power is typically produced by floating turbine platforms. However, ... wave scroll. See Vitruvian scroll. * * * wave theory 1. Also called undulatory theory. Physics. the theory that light is transmitted as a wave, similar to oscillations in magnetic and electric fields. Cf. corpuscular theory. 2. ... wave train Physics. a series of successive waves spaced at regular intervals. [1895-1900] * * * wave trap Radio. a resonant-circuit filter between the antenna and the receiver for the suppression of unwanted frequencies. 
Cf. resonance (def. 5). [1920-25] * * * wave velocity ▪ physics  distance traversed by a periodic, or cyclic, motion per unit time (in any direction). Wave velocity in common usage refers to speed, although, properly, velocity ... wave-cut platform or abrasion platform Gently sloping rock ledge that extends from the high-tide level at a steep cliff base to below the low-tide level. It develops as a result of wave ... /wayv"fawrm'/, n. Physics. the shape of a wave, a graph obtained by plotting the instantaneous values of a periodic quantity against the time. Also, waveform. [1840-50] * * * /wayv"awf', -of'/, n. 1. (on an aircraft carrier) the last-minute signaling to an aircraft making its final landing approach that it is not to land on that particular pass but is ... wave-particle duality Principle that subatomic particles possess some wavelike characteristics, and that electromagnetic waves, such as light, possess some particlelike characteristics. In 1905, by ... wave-par·ti·cle duality (wāvʹpärʹtĭ-kəl) n. The exhibition of both wavelike and particlelike properties by a single entity, as of both diffraction and linear propagation ... wave·band (wāvʹbănd') n. A range of frequencies, especially radio frequencies, such as those assigned to communication transmissions. * * * /wayvd/, adj. having a form, outline, or appearance resembling waves; undulating. [1540-50; WAVE + -ED3] * * * wave equation n. 1. A partial differential equation used to represent wave motion. 2. The fundamental equation of wave mechanics. * * * waveform [wāv′fôrm΄] n. a graphic representation showing the shape of a wave that is often periodic and usually plotted with the amplitude of the wave on one axis and time ... wave front n. The continuous line or surface including all the points in space reached by a wave or vibration at the same instant as it travels through a medium. * * * wave function n. A mathematical function used in quantum mechanics to describe the propagation of the wave associated with any particle or group of particles. * * * /wayv"guyd'/, Electronics, Optics. n. a conduit, as a metal tube, coaxial cable, or strand of glass fibers, used as a conductor or directional transmitter for various kinds of ... /wayv"lengkth', -length', -lenth'/, n. 1. Physics. the distance, measured in the direction of propagation of a wave, between two successive points in the wave that are ... /wayv"lit/, n. a small wave; ripple. [1800-10; WAVE + -LET] * * * /way"veuhl/, n. Archibald Percival, 1st Earl, 1883-1950, British field marshal and author: viceroy of India 1943-47. * * * Wavell (of Eritrea and of Winchester), Archibald Percival Wavell, 1st Earl born May 5, 1883, Colchester, Essex, Eng. died May 24, 1950, London British army officer. Recognized as an excellent trainer of troops, he became British commander in chief for ... Wavell, Archibald Percival Wavell, 1st Earl, Viscount Wavell Of Cyrenaica And Of Winchester, Viscount Keren Of Eritrea And Of Winchester ▪ British field marshal born May 5, 1883, Colchester, Essex, England died May 24, 1950, London       British field marshal whose victories against the Italians in North ... Wavell,Archibald Percival Wa·vell (wāʹvəl), Archibald Percival. First Earl Wavell. 1883-1950. British field marshal who routed Italian forces in North Africa (1940-1941) before being defeated by the ... /way"veuh luyt'/, n. Mineral. a hydrous aluminum fluorophosphate occurring as white to yellowish-green or brown aggregates of radiating fibers. [named in 1805 after W. Wavell (d. ... 
wave mechanics n. (used with a sing. or pl. verb) A theory that ascribes characteristics of waves to subatomic particles and attempts to interpret physical phenomena on this ... /wayv"mee'teuhr/, n. a device for measuring the wavelength or frequency of a radio wave. [1900-05; WAVE + -METER] * * * ▪ measurement device       device for ... ▪ district, England, United Kingdom       district, administrative and historic county of Suffolk, England. It is bounded on the east by the North Sea and on the ... Waveney, River ▪ river, England, United Kingdom       stream in England whose whole course of 50 miles (80 km) marks the boundary between the East Anglian counties of Norfolk and ... wave number n. The number of waves per unit distance in a series of waves of a given wavelength; the reciprocal of the wavelength. * * * waver1 —waverer, n. —waveringly, adv. /way"veuhr/, v.i. 1. to sway to and fro; flutter: Foliage wavers in the breeze. 2. to flicker or quiver, as light: A distant beam ... See waver. * * * See waverer. * * * Wa·ver·ley (wāʹvər-lē) A city of southeast Australia, an industrial suburb of Melbourne. Population: 122,471. * * * the main railway station in Edinburgh, Scotland, at ... Waverley novels the name given to the novels of Sir Walter Scott because he said at the beginning of them that they were written ‘by the author of Waverley’ and did not give his name. The ... wavery [wā′vər ē] adj. wavering [his wavery voice] * * * /wayvz/, n. (used with a sing. or pl. v.) the Women's Reserve of the U.S. Naval Reserve, the distinct force of women enlistees in the U.S. Navy, organized during World War ... wave tank n. An apparatus consisting of a small water-filled tank and an oscillator that creates waves, used to demonstrate wave motion and wave properties such as interference ... wave train n. Physics A succession of similar wave pulses. * * * wave trap n. An electronic filtering device designed to exclude unwanted signals or interference from a receiver. * * * /way"vee/, n., pl. waveys. a wild North American goose of the genus Chen, as the snow goose (white wavey) or blue goose (blue wavey). [1735-45; earlier weywey < Cree ... See wavy. * * * See wavily. * * * wavy1 —wavily, adv. —waviness, n. /way"vee/, adj., wavier, waviest. 1. curving alternately in opposite directions; undulating: a wavy course; wavy hair. 2. abounding in or ... /vahv, vawv/, n. vav. /wow/, n. the 27th letter of the Arabic alphabet. [1825-35; < Ar] * * * ▪ The Sudan also spelled  Wau        town, southwestern Sudan. It lies ... wawl [wôl] vi., n. alt. Brit. sp. of WAUL * * * wax1 —waxable, adj. —waxlike, adj. /waks/, n. 1. Also called beeswax. a solid, yellowish, nonglycerine substance allied to fats and oils, secreted by bees, plastic when warm ... wax bean 1. a variety of string bean bearing yellowish, waxy pods. 2. the pod of this plant, used for food. [1905-10, Amer.] * * * wax flower. See Madagascar jasmine. [1835-45] * * * wax gourd 1. a tropical Asian vine, Benincasa hispida, of the gourd family, having a brown, hairy stem, large, solitary, yellow flowers, and white, melonlike fruit. 2. the fruit itself. ... wax insect any of several scale insects that secrete a commercially valuable waxy substance, esp. a Chinese scale insect, Ericerus pe-la. [1805-15] * * * wax jack a device for melting sealing wax, having a waxed wick fed through a plate from a reel. Also called taper jack. * * * wax light a candle made of wax. [1690-1700] * * * wax moth. See bee moth. 
[1760-70] * * * wax museum a museum containing wax effigies of famous persons, esp. historical figures, usually in scenes associated with their lives. [1950-55] * * * wax myrtle an aromatic shrub, Myrica cerifera, of the southeastern U.S., bearing small berries coated with wax that is sometimes used in making candles. Cf. bayberry. [1800-10] * * * wax palm 1. a tall, pinnate-leaved palm, Ceroxylon alpinum (or C. andicola), of the Andes, whose stem and leaves yield a resinous wax. 2. any of several other palms that are the source of ... wax paper a whitish, translucent wrapping paper made moistureproof by a paraffin coating. Also, waxed paper. [1835-45] * * * wax plant any climbing or trailing plant belonging to the genus Hoya, of the milkweed family, native to tropical Asia and Australia, having fleshy or leathery leaves and umbels of pink, ... wax sculpture Figures modeled or molded in beeswax, either as finished pieces or for use as forms for casting metal (see lost-wax casting) or creating preliminary models. At ordinary ... wax tablet a tablet made of bone, wood, etc., and covered with wax, used by the ancients for writing with a stylus. Also, waxed tablet. [1800-10] * * * /wawk'seuh hach"ee/, n. a city in NE central Texas. 14,624. * * * wax bean n. A variety of string bean having yellow pods. Also called butter bean. * * * /waks"ber'ee, -beuh ree/, n., pl. waxberries. 1. the wax myrtle or the bayberry. 2. the snowberry. [1825-35; WAX1 + BERRY] * * * /waks"bil'/, n. any of several small Old World finches, esp. of the genus Estrilda, that have white, pink, or red bills of waxy appearance and are often kept as cage ... waxed paper. See wax paper. * * * waxed tablet. See wax tablet. * * * waxed paper (wăkst) n. Wax paper. * * * Выполнено за: 0.096 c;
dadd5730870d55ff
Physics 521: Quantum Mechanics I (Rice University)

Course Outline
- Introduction: course overview, history of quantum mechanics
- Mathematical foundations of quantum mechanics: quantum states and Hilbert spaces, observables and operators, commutation relations and Heisenberg's uncertainty principle, pure and mixed states, the density operator
- Quantum dynamics: time evolution and the Schrödinger equation, Schrödinger and Heisenberg pictures, quantization of the harmonic oscillator, propagators and Feynman path integrals, potential and gauge transformations
- Theory of angular momentum: rotations and the angular momentum operator, spin and the SU(2) group, orbital angular momentum, solution of the hydrogen atom (Schrödinger equation for a central potential), addition of angular momenta and Clebsch-Gordan coefficients, tensor operators and the Wigner-Eckart theorem
- Symmetry in quantum mechanics: conservation laws and degeneracies, parity (space inversion), time-reversal symmetry

Typical Organization
- Lectures: T Th 1:00-2:15 PM
- Grading: homework (30%), midterm exam (30%), final exam (40%)

Other Texts
- E. Merzbacher, Quantum Mechanics, Wiley, 1997.
- A. Messiah, Quantum Mechanics, Dover, 1999.
Applied Mathematics for Chemistry Majors

Rachel Neville (1), Amber T. Krummel (2), Nancy E. Levinger (2), Patrick D. Shipman (3)

(1) University of Arizona, Department of Mathematics, Tucson, Arizona, United States
(2) Colorado State University, Department of Chemistry, Fort Collins, Colorado, United States
(3) Colorado State University, Department of Mathematics, Fort Collins, Colorado, United States

11/15/17 to 11/23/17

The mathematics that chemistry students need is significant. In physical chemistry, students need to be comfortable with ordinary and partial differential equations and linear operators. These topics are not traditionally taught in the calculus sequence that chemistry students are required to take at Colorado State University, and so mathematics can present a significant barrier to success in physical chemistry courses. Through a collaboration of the mathematics and chemistry departments, Colorado State University has developed and implemented a two-semester sequence of courses, Applied Mathematics for Chemists (MfC), aimed specifically at providing exposure to the mathematics necessary for chemistry students to succeed in physical chemistry. The prerequisite for the sequence is a first semester of Calculus for Physical Scientists, that is, a working knowledge of derivatives, integrals, and their relation through the Fundamental Theorem of Calculus. MfC begins with a look at the Fundamental Theorem of Calculus that emphasizes a scientific realization it provides, namely an understanding of physical phenomena in terms of an initial condition and a rate of change. This introduces the first topic of MfC, first- and then second-order differential equations. Working with differential equations at the start of the course allows questions from chemistry to motivate the mathematics throughout the sequence. Solving the differential equations naturally introduces students to another fundamental mathematical concept for physical chemistry, and another theme of the course, namely linear operators. The flow of the course allows topics traditional to the second and third semesters of calculus, such as Taylor series and complex numbers, to be motivated by solving chemical problems, and it leads to some topics, such as Fourier series, which are not part of the standard calculus sequence. Feedback from students who have taken MfC and then physical chemistry has been positive.

The depth and breadth of mathematical skills that chemists need is significant. Like most American college and university chemistry curricula leading to the BA or BS degree, Colorado State University (CSU) has required students to complete three semesters of calculus. This more than fulfills the requirements for the ACS-approved chemistry degree (ACS, 2015). However, these calculus courses omit mathematical topics, such as differential equations and linear operators, that are imperative for understanding physical chemistry. Similarly, traditional calculus courses like those at CSU cover content, such as a broad range of integration techniques, that is not of immediate use in physical chemistry. From the instructors' perspective, the chemistry major would ideally require students to take significantly more mathematics, including linear algebra and differential equations, prior to taking physical chemistry.
However, requiring these math courses would add credits to a chemistry major that already requires many classes, making the curriculum less flexible and potentially decreasing the number of students majoring in chemistry. To provide chemistry students with the appropriate mathematical background, and to refresh topics that students may have forgotten since their last math course, some CSU chemistry instructors have offered a "just-in-time math review" as an addendum to the Physical Chemistry 1 course. Because it is optional, not all students enrolled in the math review, reducing its potential impact. To address this mismatch and to provide a math curriculum more aligned with the needs of chemistry courses, we have developed a two-semester math sequence, Applied Math for Chemists I and II (MATH 271 and 272), at CSU.

Motivation and Background

Among students at CSU and elsewhere, physical chemistry has the reputation of being a very challenging course. Derrick and Derrick studied the success of students at Valdosta State University and suggest that the "formidable perception" of physical chemistry is due to its mathematical and conceptual difficulty rather than the chemistry itself (Derrick & Derrick, 2002). Early attempts to identify students who would struggle in a physical chemistry course resulted in a diagnostic quiz that tests students' background in mathematical concepts deemed necessary for physical chemistry (Porile, 1976). Prior success in math courses significantly impacts a student's success in physical chemistry. For instance, Hahn and Polik showed that student success in physical chemistry correlates significantly both with the amount of mathematics that a student has taken and with the grades earned in those mathematics courses (Hahn & Polik, 2004). Instructors at CSU have observed the same trend. In another study, surveying instructors of physical chemistry courses across several hundred universities, 61% of instructors indicated that students struggle because they lack the necessary mathematical background, and a third of instructors reported that students do not make connections between physical chemistry concepts and the mathematics on which those concepts are based (Fox & Roehring, 2015). This suggests that not only the mathematical concepts but also their connections to chemistry are important to student success. In fact, after lengthy conversations with colleagues, one professor concluded, "College students in the sciences often grasp the operations of mathematics but miss the connection between mathematical operations and the physical systems they describe" (DeSieno, 1975). Given these observations, it seems that we could provide a better math background to help our students succeed in physical chemistry.

In 2000, the Mathematical Association of America (MAA) organized a series of Curricular Foundations Workshops to seek input on the mathematics curriculum from chemists, biologists, physicists, and engineers whose students rely on a strong foundation in mathematics (Craig, 2001). Various working groups developed recommendations regarding the mathematical skills necessary for students in specific fields. A working group composed of chemistry and mathematics faculty from different institutions gave a thorough recommendation of the content and conceptual principles that students should be taught, along with a recommendation for the division of responsibility (see the table in the appendix of Craig, 2001).
Several topics were given high priority for the mathematics competence of students in the chemical sciences, namely multivariate calculus, creating and interpreting graphs, spatial representations, and linear algebra. Nearly all relations that students will encounter in chemistry contexts are multivariate; therefore, students should be comfortable handling multivariate problems and thinking of variables as more than merely a spatial extent or time. Because chemical problems span large variations in physical scale, students should be able to decide whether solutions are reasonable using estimation techniques and order-of-magnitude calculations. There should also be an emphasis on visualizing structures in three dimensions.

The course sequence at Colorado State University was initiated by a request from faculty members in the Department of Chemistry who were seeking ways to improve student performance in the two-semester, upper-division undergraduate course in physical chemistry. These faculty members believed that deficiencies in mathematical preparedness presented a significant barrier to student success, both in terms of the mathematical topics covered in the prerequisite courses (a standard three-semester calculus sequence covering topics through multivariate calculus and targeting students in the physical sciences and engineering) and in terms of students' ability to apply the mathematical topics covered in those courses in their chemistry courses. Faculty members from the Departments of Chemistry and Mathematics collaborated to design the sequence of two 4-credit, semester-long courses, called Applied Mathematics for Chemists (MfC). The sequence was taught as an experimental course in the academic years 2014-2015 and 2015-2016 (with temporary course numbers, standard at CSU) and was accepted into the curriculum of the Mathematics Department, and as a prerequisite for the physical chemistry sequence, in 2016 (course numbers MATH 271 and MATH 272).

Course Content

MfC has a prerequisite of Calculus for Physical Scientists 1 (derivatives and integrals) and serves as the mathematics prerequisite for the physical chemistry course. While some mathematical background is required for other chemistry courses, physical chemistry has the highest mathematical demands. The goal of the MfC courses is to provide students with a working proficiency in the mathematics so that they can focus on learning and understanding the chemistry.

Two texts are used for MfC, namely Erich Steiner's The Chemistry Maths Book (Steiner, 2007) and Donald McQuarrie's Mathematics for Physical Chemistry (McQuarrie, 2008). Both books focus specifically on mathematical topics relevant to chemists. These texts take a practical, straightforward approach, with less emphasis on theory and proofs of theorems and more emphasis on developing students' mathematical tools for practical problems. The texts cover similar material, but the Steiner book is more complete mathematically, whereas the McQuarrie book has more detail on connections with physical chemistry. Students appreciated the full solutions freely available on the publisher's website for The Chemistry Maths Book, as they offered quick feedback and an opportunity for individual practice. Mathematics for Physical Chemistry is written by the same author as the text used in the physical chemistry course at CSU and expands on the math review sections that are included in the chemistry text (McQuarrie, 2008).
Clear recommendations for mathematics courses for chemistry majors, specific to the chemistry context, were given in the MAA Curricular Foundations Workshops (Craig, 2001). The workshops set the expectation that math courses should develop 14 conceptual principles, nearly all of which are addressed in MfC. The exceptions are an extensive discussion of numerical methods, the representation of information as analog or digital, and statistics and curve fitting. Statistics and regression are covered in a statistics course that chemistry students are also required to take. Each principle is marked in two categories: (1) whether it should be developed by mathematicians, and (2) whether teaching the mathematical concept in the specific context of chemistry is particularly effective.

The material covered in this course is substantial, though necessary for the future success of chemistry students. The course is topically divided into five parts. Parts 1 (differential equations, series, and complex variables) and 2 (linear algebra) are covered in the first semester. The second semester covers Parts 3 (inner product spaces and Fourier series), 4 (multivariable calculus), and 5 (partial differential equations).

The highlight of the prerequisite course (one semester of The Calculus) is the Fundamental Theorem of Calculus (FTC), typically written as

\int_a^b f'(s)\, ds = f(b) - f(a).

Students see two interpretations of this relation. With s equal to a spatial variable x, the FTC gives the area underneath the graph of f'(x) in the domain a ≤ x ≤ b. With s equal to time t, the FTC gives the total change in f over the time interval a ≤ t ≤ b. But, honestly, why calculate the total change f(b) - f(a) by some complicated integral? MfC opens with a slight but tremendously revealing rewriting of the FTC:

f(t) = f(0) + \int_0^t f'(s)\, ds.

Any differentiable function f(t) can be written in terms of an initial condition f(0) and a rate of change f'(t). This mathematical insight also opens up a whole new way of thinking scientifically, and it leads into the first part of MfC, namely ordinary differential equations. We cover basic first- and second-order linear homogeneous and inhomogeneous differential equations and solution methods such as separation of variables, integrating factors, and the method of undetermined coefficients. Applications in chemical kinetics, the harmonic oscillator, and a first look at Schrödinger's equation for a particle in a box motivate each class of equations. Complex numbers and series are taught as necessary theory for working with more complex systems. The grand finale of the unit on ordinary differential equations is the method of using power series to solve differential equations. Chemistry students are typically not exposed to these mathematical topics because they comprise topics in an ordinary differential equations course, which is not required for chemistry majors.

Part 2 covers linear algebra. Students are introduced to vectors and are encouraged to think of vectors as coordinates in physical space as well as holders of variables that are not necessarily distances. There is an emphasis on what insights determinants and eigenvalues give when modeling a physical system. Symmetries and group axioms are taught primarily through linear transformations, with some discussion of finding group representations. Compelling examples come from the symmetries of planar molecules (the Hückel molecular orbital method) and distributions of electrons in p-orbitals. Several students reported this application as the most compelling example from the entire course.
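The Hückel application lends itself to a very compact eigenvalue computation. As a rough sketch of the flavor of such an exercise (our illustration, not course material; butadiene and the numerical choices are ours), the four-carbon π system gives a 4×4 Hückel matrix whose eigenvalues x determine the orbital energies E = α + xβ:

import numpy as np

# Hueckel matrix for the four-carbon pi system of butadiene, in units of the
# resonance integral beta with the Coulomb integral alpha set to zero:
# entry (i, j) is 1 when carbons i and j are bonded and 0 otherwise.
H = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Orbital energies are E = alpha + x * beta for each eigenvalue x of H;
# since beta < 0, the largest x corresponds to the lowest-energy orbital.
x = np.linalg.eigvalsh(H)
print(np.round(x, 3))  # [-1.618 -0.618  0.618  1.618]

The same few lines of linear algebra scale directly to larger conjugated systems, which is presumably part of what makes this application land so well with students.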
The second semester and Part 3 of MfC begin with the notion of a vector space and a basis. As inner product spaces are introduced, parallels are drawn between finite-dimensional vector spaces and infinite-dimensional inner product spaces. This gives students a concrete footing in a topic that they otherwise find very theoretical. Orthogonal polynomials (including special sets of polynomials) are introduced. Rather than emphasizing the (often fairly involved) derivation of these polynomials, students are challenged to understand them as a basis for modeling specific physical systems. This notion is initiated here and developed further in the end-of-the-year project. Finally, students learn Fourier series and work with Fourier transforms and their interpretation in a mini-Matlab project. This section is generally the most challenging for students.

Part 4 returns to material that is usually covered in a standard course on multivariate calculus (the third semester of a traditional calculus sequence). By this point in the course, students have become comfortable working with expressions in multiple variables. Visualization in three dimensions is taught, as well as partial derivatives and multiple integrals, with an emphasis on the physical interpretation of these quantities. However, the level of coverage is not as extensive as in a typical third-semester calculus course. For example, a topic from a typical course in multivariable calculus that is not covered in MfC is Stokes' theorem.

The concluding part, the shorter Part 5, is a basic introduction to partial differential equations. Students are introduced to separation of variables, and the method is applied to solve the heat equation and the classical wave equation (a worked heat-equation example is sketched below). Boundary conditions and initial conditions are discussed, again with an emphasis on modeling a physical system. This is a topic that students would not otherwise encounter until a course in partial differential equations, taken after a course in ordinary differential equations, a sequence that very few chemistry students complete. We considered taking more time in Part 4 and omitting Part 5, but an advantage of covering Part 5 is that many concepts from the course come together when solving partial differential equations. Indeed, this topic allows students to combine their knowledge of ordinary differential equation boundary value problems, partial derivatives, and Fourier series. Another advantage is that students are likely to see the wave equation near the start of a physical chemistry course, and we want them to feel mathematically prepared from the beginning of that course. Near the end of MfC, students are assigned a group project applying separation of variables; this project is discussed further below.

To allow this material to be covered in a year-long course, some sacrifices relative to the traditional sequence clearly need to be made. These include some integration techniques and theorems on the convergence of sequences and series, as well as Stokes' theorem. Although the topics covered in MfC range from differential equations to linear algebra to understanding multivariable relationships, the fact that they are tied together by the theme of linear operators helps to unite the course and allows for the reinforcement of previously learned topics throughout. The focus of the course is on developing students' mathematical dexterity and reasoning skills, with motivation coming from chemistry.
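To make concrete how these threads combine, here is the kind of worked example Part 5 builds toward (a generic textbook case of our choosing, not necessarily the course's own): separating variables in the heat equation on a rod of length L, with both ends held at zero and initial temperature profile f(x), reduces the PDE to ODE boundary-value problems whose solutions assemble into a Fourier sine series.

% Heat equation u_t = k u_xx on 0 < x < L with u(0,t) = u(L,t) = 0 and
% initial condition u(x,0) = f(x).  Separation u = X(x)T(t) selects sine
% modes; the Fourier coefficients b_n fit the initial data.
\begin{align*}
  u(x,t) &= \sum_{n=1}^{\infty} b_n \sin\!\left(\frac{n\pi x}{L}\right)
            \exp\!\left(-k\,\frac{n^2\pi^2}{L^2}\,t\right), \\
  b_n    &= \frac{2}{L}\int_0^L f(x)\,\sin\!\left(\frac{n\pi x}{L}\right) dx .
\end{align*}

Each sine mode solves an ODE boundary-value problem, the decaying exponential comes from a first-order ODE in time, and the coefficients b_n are a Fourier series computation, so this single example draws on Parts 1, 3, and 5 at once.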
One challenge is that some of the most compelling examples require a good deal of chemistry to understand. For example, students were assigned a project on nuclear magnetic resonance (NMR), a compelling application of Fourier transforms (a toy version of such an exercise is sketched below). However, the theory of molecular structure and NMR is taught in organic chemistry. The students who had taken organic chemistry (i.e., had seen NMR in a classroom setting) thought the application was neat, though oversimplified. The students who had not had an organic chemistry course could perform the transform but were at a loss when it came to connecting the output signal to the molecular structure, even with an (oversimplified) explanation in the project description.

At the end of the second semester, students were given a final group project in which they are guided through the analytic solution of the Schrödinger equation for the hydrogen atom. This project pulls together concepts from operators, the correct handling of multiple variables, partial derivatives, techniques for solving ordinary and partial differential equations, visualizing in three dimensions, and the postulates of quantum mechanics in an example that is very compelling for chemistry students.

Impact in Physical Chemistry Course

The difference between students who complete the calculus sequence and those who complete the MfC sequence to fulfill their mathematics requirements for chemistry is dramatic, and it shows in students' daily engagement in the physical chemistry course. For example, in Physical Chemistry 1, students who have taken the MfC course sequence have already been exposed to the concept of a differential equation, so they do not have to grasp what a differential equation is before striving to understand the interpretation of the solutions they generate for the Schrödinger equation. Instead, students who have taken the MfC sequence are confident in their practical knowledge of finding solutions to ordinary differential equations. Thus, they have the capacity, and are free, to begin thinking about the interpretation of solutions to the Schrödinger equation rather than being stuck on the mathematical mechanics of solving differential equations. Likewise, students having completed MfC approach the Maxwell relations in thermodynamics without trepidation, having already manipulated partial derivatives and differential equations. These are only two of many examples that speak to the divide that MfC bridges by nesting the mathematics required of a chemistry practitioner in chemical applications.

The feedback from students who have taken MfC and then physical chemistry has been positive. These students have encouraged their colleagues to take MfC rather than the traditional calculus sequence, noting that students who took the traditional sequence struggle more in physical chemistry. Even students who struggled in MfC have remarked how familiar they found the math in physical chemistry, which improved their outlook on the traditionally dreaded course. Finally, MfC as offered at CSU does not require additional credit hours of math for our chemistry majors. Instead, we have tailored the mathematics, and the application of the mathematics, to align with the needs of a chemistry practitioner.
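Returning to the NMR project mentioned above: a toy version of that Fourier-transform exercise might look like the following (our sketch, in Python/numpy rather than the Matlab used in the course; the signal, its frequencies, and its decay rate are invented for illustration).

import numpy as np

# Toy free-induction-decay-like signal: two decaying cosines standing in for
# an NMR trace.  The 50 Hz and 120 Hz components and the decay constant are
# arbitrary choices for the example.
rate = 1000.0                          # samples per second
t = np.arange(0.0, 1.0, 1.0 / rate)    # one second of signal
signal = (np.cos(2 * np.pi * 50 * t)
          + 0.5 * np.cos(2 * np.pi * 120 * t)) * np.exp(-3 * t)

# The discrete Fourier transform turns the time-domain decay into a spectrum;
# peaks appear near 50 Hz and 120 Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / rate)
print(freqs[np.argmax(spectrum)])      # ~50.0 Hz, the dominant component

Even this chemistry-free version shows the pedagogical point: the transform itself is mechanical, while interpreting the peaks is where the chemical knowledge enters.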
To accommodate transfer students and students changing majors, we still allow chemistry majors to take the traditional three semesters of Calculus for Physical Scientists, but we strongly urge our majors to take MfC.

The authors would like to thank Francis Motta for his contribution to developing materials for this course.

ACS. (2015). Guidelines and Evaluation Procedures for Bachelor's Degree Programs. Washington, DC: American Chemical Society Committee on Professional Training.
Bressoud, D. (2002, Aug./Sept.). The Curriculum Foundations Workshop on Chemistry. FOCUS, 22(6). Washington, DC: Mathematical Association of America.
Course Catalog. (2015-2016). Colorado State University.
Craig, N. (2000). CRAFTY Curriculum Foundations Project: Chemistry.
Craig, N. (2001). Chemistry Report: MAA-CUPM Curriculum Foundations Workshop in Biology and Chemistry. Journal of Chemical Education, 78, 582-586.
Derrick, M., & Derrick, F. (2002). Predictors of Success in Physical Chemistry. Journal of Chemical Education, 79(8), 1013-1016.
DeSieno, R. (1975). How Do You Know Where to Begin? Journal of Chemical Education, 52(12), 783.
Fox, L., & Roehring, G. (2015). Nationwide Survey of the Undergraduate Physical Chemistry Course. Journal of Chemical Education, 92, 1456-1465.
Hahn, K., & Polik, W. (2004). Factors Influencing Success in Physical Chemistry. Journal of Chemical Education, 81(4), 567-572.
McQuarrie, D. (2008). Mathematics for Physical Chemistry: Opening Doors. University Science Books.
Porile, N. (1976). Diagnostic quiz to identify failing students in physical chemistry. Journal of Chemical Education, 53(2), 109.
Prussel, D. (2009). Enhancing Interdisciplinary, Mathematics, and Physical Science in an Undergraduate Life Science Program through Physical Chemistry. CBE-Life Sciences Education, 8(1), 15-28.
Steiner, E. (2007). The Chemistry Maths Book. Oxford University Press.

Rich Messeder: I'd like to start the week off by saying how much I appreciate the time and effort that went into preparing these papers. These threads are a most valuable resource to me, made more useful by the comprehensive nature of the comments. I am primarily research- and engineering-oriented, but I value the intrinsic worth of each student. I hope that the threads are available after the conference, because I intend to mine them for ideas.

Rich -- My understanding is that if you go to the main ConfChem site, the "Useful Links" will be posted on the left as they are on this page, and the "temporal article list" will have all of the articles and threads. -- rick nelson

Hi all, we actually have three nonredundant backups of the ConfChem discussion archive. First, there are the actual papers, which, as Rick states, can be found through the temporal article list, but also through the sortable article list (just pick the ConfChem you want to see); note that you can tag the papers if that helps your research (but not the comments, although those can be tagged with an annotation service). Second is the actual ConfChem list archive at UALR. Just choose the month, and there are several sort options (subject line, date, ...). Third, after the ConfChem is over, the authors have the option to submit these to the Journal of Chemical Education as a series of bundled communications.
Attached to each communication as "Supporting Information" is the actual ConfChem paper with the discussions, so if the CCCE website goes down and the UALR list goes down, you still have the discussions archived in the Supporting Information of the JCE communications. I should add that we remove personal identifiers, like the names and images of the people making comments, in the JCE Supporting Information, but what they say is preserved.

Rich Messeder: Info captured for the future.

Dear CSU Team, if I understand correctly, what you have done at CSU is to give chemistry majors a choice between the traditional CSU sequence of three semesters of 4-credit Calculus for Physical Scientists and the new sequence of one semester of Calculus for Physical Scientists followed by two semesters of Applied Math for Chemists (taught by the math department). My suspicion is that there were obstacles that needed to be overcome to achieve these offerings of "Calculus Customized for Chemistry." For instructors who would like to establish the same type of sequence on their campuses, is there advice you could give on what bottlenecks to anticipate and how to address them? -- rick nelson

Yes, you understand correctly. The main obstacle may be that for a math department to make a sequence of courses that is ideal for every major would make for a lot of different course sequences, and it gets unwieldy! Plus, it takes special grad students like Rachel to teach the course -- they need to be willing to learn some p-chem, perhaps even to learn some maths that they never learned (self-adjoint operators, for example) -- so this course can't be taught in the normal factory method of teaching calculus. So chemistry needs to have enough students to put into the course, and maths faculty need to realize that this is a fun course to teach. Maths faculty are used to the traditional sequence, and it can be hard to fathom some variation on it -- differential equations are supposed to be a topic after Calc III, and we do them right at the beginning of the MfC sequence!

First of all, I congratulate Dr. Nelson and his co-organiser (whose name momentarily escapes me, apologies) for their commendable organisation and moderation of this conference on the internet. This occasion is the first in which I have participated in such a meeting of minds, and I am pleased to read the various points of view and the dedication to improving the teaching and learning of chemistry.

The fact that students who are admitted to tertiary institutions are poorly prepared in mathematics is clearly not confined to the USA, but it is likely worse there, consistent with the standing of the USA in comparison with other developed countries. One topic, to which I here respond, is the provision of courses by departments of mathematics for students of other departments. Some years ago, a responsible senior professor of mathematics informed me that the policy of his department was to respond to any such request from another department, but that within standard courses of mathematics there was no attempt to include examples or problems from any applied area, because although that content might be of interest to a fraction of students in the common course, it would be boring and a distraction for the others. He emphasised that mathematicians were prepared by their academic experience to teach mathematics, not chemistry nor physics nor ....
One approach that is more common in European universities than in Canadian and US institutions is to have a course such as 'mathematics for chemistry' taught either by an instructor of mathematics devoted to a particular department, as was the practice at the Danish Technical University in the past, or by a chemistry instructor who is suitably prepared to undertake such tasks. At the University of York, UK, two distinguished instructors, both active in fields of quantum chemistry, taught such courses, and even published a textbook, Mathematics for Chemistry, by G. Doggett and B. T. Sutcliffe. On examining that book, I ventured to express the opinion that I considered it to be at a rather low level, but Dr. Doggett replied that it was actually deemed to be beyond many students in British universities; for that reason he was asked to write two other short books at an even lower level, "Maths for Chemists (Tutorial Chemistry Texts)", published by the Royal Society of Chemistry.

The problem addressed in this conference has two parts: the mathematical preparation of students entering general chemistry, which has been most discussed, and the mathematical requirements for succeeding courses in chemistry. The fundamental solution to the former is to improve the teaching and learning of arithmetic and mathematics in school, according to all the aspects described here, including estimation; in lieu of that solution, for present students, remedial courses at the tertiary level must be arranged, whether organised by chemists or mathematicians. For higher courses in chemistry, if what departments of mathematics offer is deemed unsatisfactory or insufficient, chemistry instructors can offer their own courses, based, for example, on such a textbook as I specified above, or on other comparable books. My interactive electronic textbook Mathematics for Chemistry is an alternative approach in which the objective is to teach, with advanced mathematical software (Maple), the concepts and principles of all pertinent mathematical topics and aspects, from arithmetic to group theory and graph theory, and then to encourage the students to apply their knowledge of that software to solve chemical problems. I despair of that book being of significant utility if students lack the basic arithmetical and symbolic skills that participants in this conference have decried.

When I was an undergraduate in a Canadian university, I was required to complete five year-long courses in mathematics (equivalent to ten semester courses) as a requisite of an honours degree in chemistry. The average requirement of mathematics for chemistry in Canadian universities is now three semester courses, although the nominal mathematical level of the content of chemistry courses has risen significantly in the interim. I have noticed that standard textbooks of general physics, which might in many cases be a corequisite with general chemistry, have mathematical content at an increasingly high level. For instance, Schroedinger's partial differential equation is introduced before its solutions and their properties. How can students of chemistry cope with such content when we have read that even multiplication of small numbers challenges the capability of many such students?

Much of the discussion within this conference addresses a complaint that students entering general chemistry are unprepared mathematically. It is ironic that the instructors who complain so vociferously are themselves unprepared mathematically to teach even general chemistry.
I refer to the content of all textbooks of general chemistry that I have seen that includes orbitals, electronic configurations of atoms and analogous material based ultimately on quantum mechanics, which is now recognised to be not a chemical theory, not even a physical theory, but a collection of mathematical methods, or algorithms, that one might apply to systems on an atomic scale -- which is far from the laboratory experience of general chemistry. The authors of such textbooks, and the instructors who duly prescribe and teach the textbook in the fashion of a parrot, so act not because they understand the underlying mathematics and its consequences but because they do not so understand. How many of you are aware that there is not just one set of orbitals for the hydrogen atom but four sets, not just one set of quantum numbers associated with those orbitals but one set of quantum numbers for each set of orbitals? [The descriptions of these orbitals are freely available from 1709.04759, 1709.04338, 1612.05098, 1603.00839]

You are "the credulous masses [or their successive generations] -- that sad benighted chemistry professoriate -- dazzled with beguiling simplifications" by Pauling, "a master salesman and showman" [A. Valiunas, The man who thought of everything, The New Atlantis, No. 45, 60 - 98, 2015; J. S. Rigden, Review of Linus Pauling -- a man and his science, by A. Serafini, Physics Today, 43 (5), 81 - 82, 1990]. Some years ago, I wondered whether a really intelligent student who encountered this incomprehensible and indigestible rubbish about orbitals in general chemistry might decide that, because he could not understand what he was taught but would be forced merely to memorize the material quasi-religiously, the fault was his, so that he transferred to some other subject such as computer science that he could genuinely understand, leaving the mediocre students of general chemistry to progress onward in chemistry and eventually to become the next generation of professors to perpetuate the charade.

How many of you who have taught, in any shape or form, orbitals, which are indisputably solutions to the Schroedinger equation for the hydrogen atom, have actually read Schroedinger's papers? They are available in authorised English translation; within them you can learn about a second set of orbitals and quantum numbers, beyond the first set for spherical polar coordinates with quantum numbers k, l, m. Pauling never admitted the existence of this second solution, which would accordingly undermine his proffered ideas. Your libraries might contain this book at QC172.....

Next time you feel like complaining about the quality of mathematical preparation of students entering chemistry, please reflect that your students have absolutely the same right to complain about the quality of mathematical preparation and understanding of their instructors -- only the ignorance of the students precludes such complaint, just as the mathematical ignorance of instructors of chemistry -- "that sad benighted chemistry professoriate" -- perpetuates the current paradigm of teaching chemistry.

Rich Messeder: Hailing from a sub-culture in the US noted for its often too-frank speech, I appreciate John's frank discourse, however unsettling I found its direct criticisms. His comments directed toward "that sad benighted chemistry professoriate" apply equally to other fields, I opine. I see two root problems in academia (and perhaps beyond) that seemingly contradict one another.
The first is that there seems to be a great deal of insecurity among academics, in my experience, and this relates to an unwillingness to be different, lest one not be "accepted". The second also relates to insecurity: arrogance. I think that this varies by institution, but I have seen it in all academic quarters over the past 4 decades. It is a bad enough example among academic peers, but it is especially destructive when faculty "talk down to students". We need outspoken leaders, yes. I am the captain of my ship, and my students know that as well as did those serving under me years ago in the military. Students look for confidence and leadership, but shy from arrogance.

"dazzled with beguiling simplifications" -- I have lately coined the term "simplication" to mean just this. Why? Because I have seen too many instances of complicated material "simplified" in discussion to the point of irrelevance. For example, when working on a major physics research project recently, we were tasked with implementing an algorithm that was kicked around for years in popular terms that over-simplified its real complexity and impact on research. It was not until several frustrating years of increasing pressure from approaching deadlines that the issue was put on the table for open discussion. I recall that at one meeting the PI in charge of the research was stunned that he did not truly understand the algorithm that he had been advocating all along. No one did, because they had all (PhDs) been kicking around a "simplicated" version of the algorithm. The beginning of the algorithm was replaced with an equivalent, much simpler, statistical model. There are places for both appropriate simplification ("as simple as possible, but no simpler"), and "the harrowing complexity of honest science."

I might be accused of being frank, as Frank is my middle name! Having passed some years as a visiting professor or equivalent in significant departments of mathematics and physics, I have some intimate knowledge of those two fields, accumulated long after my undergraduate degree, of which the programme was formally described as combined honours in physics and chemistry, with a healthy component of mathematics (equivalent to ten semester courses, as I mentioned) plus a course in mathematical physics as part of the physics sequence. On the basis of that direct experience, I find it implausible that those two subjects suffer from the same systemic rot to the same extent as chemistry arising from orbitals and related rubbish -- even though physicists might be prone to attribute experimental observations directly to mythical 'hybridisation', clearly an infection from chemistry. I have no doubt of the value of quantum-chemical calculations -- they were invaluable in our identification of two new boron hydrides, B2H4 and B3H3, for instance -- although for molecules containing elements other than boron perhaps 'molecular mechanics' would have produced similar results. One must, however, distinguish between orbitals, which pertain only to the hydrogen (or one-electron) atom, and members of a basis set that might be applied in quantum-chemical calculations. Even the latter are superfluous, because density-functional theory without an orbital basis set is a practical alternative. What is an orbital? It is incontestably an algebraic function.
An ignorance by chemistry instructors of the mathematical basis of such concepts is just as reprehensible as students admitted to general chemistry being incapable of undertaking basic arithmetical and mathematical operations for the solution of chemical problems. The sources of the quotations cited in my preceding comment can provide an ample basis for the recognition of the deficiencies that must be rectified.

Yes, one of the challenges of teaching new things is eliminating students' conflicting misconceptions, some of which have been installed by well-meaning teachers seeking to help students feel that science "makes sense". On the subject of atomic and molecular orbitals, I think these are some (among many) pedagogically useful articles (and two of the authors are also named "Frank"!):

"4s is Always Above 3d! or, How to Tell the Orbitals from the Wavefunctions," Frank Pilar, J. Chem. Educ., vol. 55 #1, Jan. 1978, pp. 2-6.

"Tomographic Imaging of Molecular Orbitals," D. M. Villeneuve and coworkers, Nature, vol. 432, 16 Dec. 2004, pp. 867-871 (includes a tomographic reconstruction of the HOMO of N2). This image also appeared in C&E News, vol. 82 #51, 20 Dec. 2004, p. 10.

"The Covalent Bond Examined Using the Virial Theorem," Frank Rioux, Chem. Educator, vol. 8, 2003, pp. 10-12.

George Box pointed out many years ago that all models are wrong, some are useful. Anybody who talks to an organic chemist knows the truth of this. The standard sequence of teaching general chemistry proceeds through any number of simple models for chemical bonding and reaction, following the historical development of the science. The atomic/molecular orbital model neatly summarizes these and also explains much about their limits of applicability. Thus it is both useful and teachable at a beginner's level. Indeed, as far as structure and reactivity on the atomic level is concerned, there is very little mathematics in the first semester of GChem and a whole lot of visualization and memorization, for which atomic orbitals are sufficient. If the correct description of electron density is 95% pz and 5% whatever, does this make a difference for teaching a general chemistry student?

I have found that the "atoms first" sequence has the advantage of making it easier to justify earlier models to the students, such as valence, oxidation number, octet rule, etc., each of which, as well as atomic & molecular orbitals, has something useful to say about bonding and reactions as well as about the "real" nature of molecules, as shown in the image referenced by Doreen (thanks:) and others that are increasingly appearing, showing bonds, defined as regions of high electron density between atoms, similar to what a good GChem student would write on an exam using orbitals. Orbitals are, of course, algebraic functions, but they are not unconstrained by physical limits, and they do convey useful and generally correct information about the shapes and reactivity of atoms and molecules, which is what we are trying to teach.

The "atoms first" sequence has the natural advantage that the students in the laboratory for general chemistry work directly with single atoms and molecules, and the students can directly see and measure the orbitals -- is that not the case? -- so that there is a direct connection between the lecture material and the laboratory material, which is a primary pedagogical objective. Which orbitals do you use for your explanations?
There are four sets of orbitals for the hydrogen (or one-electron) atom, and each set has its individual shapes and set of quantum numbers -- but of course you understand that if you have been teaching about orbitals. Unfortunately, all those orbitals apply strictly to the hydrogen atom; the corresponding algebraic functions of the helium atom -- yes, they are known -- have quite distinct and complicated algebraic forms. You would not commit the logical fallacy of extrapolation from a point, from H to any other atom, would you, Dr. Halpern? The orbitals of H are presented both algebraically and pictorially in these four items freely available from 1709.04759, 1709.04338, 1612.05098, 1603.00899. Instructors who teach about orbitals might wish also to read "The nature of the chemical bond, 1990 -- there are no such things as orbitals", Journal of Chemical Education, 67, 280 - 289 (1990), republished, by request of the editors, with additional material in Conceptual Trends in Quantum Chemistry, Kluwer, 1994.

First, IMHO, Dr. Ogilvie starts from a bad place. It is not necessary that students use algebraic functions to describe hydrogen-like orbitals for hydrogen or other atoms. There are wonderful visualization tools that provide these images, including 3-D JMol versions, etc. For discussion of atomic and molecular orbitals at the GChem level, these are all that is needed and used. Atomic and molecular structure in GChem are a visual, not a mathematical, exercise. A good site for these images is [link].

Moving on, what kind of experiments could one do in a GChem lab starting with atoms first? Now obviously what follows can be refined and improved, but I believe it is a place to start. YMMV. My general approach would be to have the students make a measurement and then interpret the results using concepts taught in class. Availability of online apps and databases makes this much simpler than in the past. I have a few suggestions, and others will hopefully chime in (pun intended). Of course dry labs are also possible, but really not what Dr. Ogilvie or I would want, at least in part.

For example, a mass spectrometer with unit resolution measuring the isotopes in a simple sulfur compound like SO2 could demonstrate isotopes, and results and interpretation could be done using the NIST WebBook. A simple limiting reagent experiment could be used to explain the mole concept and its relationship to atomic number and atomic weight. The emphasis would be on the mole concept and not the stoichiometry. Both of these are taught at the beginning of an atoms-first course as well as in the historical sequence.

Moving on to atomic structure, students could measure the hydrogen Balmer line spectrum and relate that to the Bohr formula. You could then measure the sodium atom spectrum and see how it does not exactly fit the Bohr formula, and use that to motivate the discussion of how the hydrogen orbitals are not quite the ones for sodium. One could compare the hydrogen orbitals to the hydrogen-like ones for more complex atoms. See for example [link]. The web site I mentioned above also discusses this, as well as showing the more complex shapes of the hydrogen-like orbitals. There are worksheets at VIPEr that can be used in conjunction. The NIST atomic spectra database would help the students assign lines to transitions between hydrogen-like orbitals. Since these spectra are relatively sparse and in the visible region, small spectrometers such as those sold by Ocean Optics would be fine.
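As an aside on the Balmer measurement just described, here is a minimal sketch of the arithmetic students would do, assuming only the textbook Rydberg formula; the constant and the loop range are illustrative values, not measured data:

```python
# Visible Balmer lines of hydrogen from the Rydberg formula:
#   1/lambda = R_H * (1/2^2 - 1/n^2),  for n = 3, 4, 5, 6.
# Students can compare these predictions with lines measured
# on a small spectrometer.

R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

for n in range(3, 7):
    inv_wavelength = R_H * (1 / 2**2 - 1 / n**2)   # in 1/m
    wavelength_nm = 1e9 / inv_wavelength           # metres -> nanometres
    print(f"n = {n} -> n = 2 : {wavelength_nm:.1f} nm")

# Expected output, roughly: 656.5, 486.3, 434.2, 410.3 nm
```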
If you are interested in He I, then the NIST atomic spectra database, which generates Grotrian diagrams, would be key to assigning the orbitals involved in the one-electron transitions. Molecular bonding could be demonstrated by selecting a small molecule. The students would then describe bonding in the molecule using (the dreaded hybrid) orbitals. Then a simple ab initio program would be used to calculate electron density maps and IR spectra. The calculated spectra would be compared to the measured ones (either directly, or using the NIST WebBook or similar). The electron density maps would be compared to the initial prediction. The calculation would be all black box, Gaussian abuse as it were, but the students would start the calculations and measure the spectra, as well as starting with the prediction of the shape and orbitals involved. FWIW, folks might enjoy playing with this app on their phones and comparing the orbitals to Dr. Ogilvie's. And so, good night to all :)

Dr. Halpern has raised some stimulating points, on only a few of which I comment here. Despite the fact that the knowledge of orbitals existing in four distinct sets, each with its individual shapes and set of quantum numbers, has been available since at least 1976, I have no doubt that Dr. Halpern, like almost all other instructors of chemistry who teach orbitals, is blissfully ignorant of this fact. "Where ignorance is bliss, 'tis folly to be wise." [Thomas Gray, 1742] If Dr. Halpern were so aware, he might have difficulty justifying the selected particular set of orbitals, in spherical polar coordinates, for the purpose of his teaching, but blissful ignorance precludes such a tiresome chore. Both the formulae and the pictures (plots) of orbitals in all four sets are freely available at [link].

Dr. Halpern suggests the use of a "simple ab initio program to calculate electron density maps and IR spectra". The problem is that a truly "ab initio program" will calculate no such results. If the atomic nuclei and electrons are treated in an equitable manner, there is NO molecular structure (apart from trivial cases such as diatomic molecules); cf. M. Cafiero, L. Adamowicz, Molecular structure in non-Born-Oppenheimer quantum mechanics, Chemical Physics Letters, 387 (1), 136 - 141, 2004. Dr. Halpern seems confused between truly "ab initio" quantum-chemical calculations and semi-empirical calculations, in which a structure is included in the input with some chosen 'canned' basis set.

I applaud Dr. Halpern for having arranged the use of a mass spectrometer, even with resolution only unit mass (dalton), for the direct use -- "hands on" -- of each of his students in his laboratory for general chemistry. Perhaps other expensive instruments might preclude the necessity for students to learn to titrate a base with an acid, but in any case beginning with "atoms first" might require several years of courses to reach the level of practical chemistry in other than the most superficial manner.

When the senior author, P. Corkum, of this paper by Itatani, Villeneuve and others presented a lecture on this topic, I challenged him to define an orbital, but he demurred. The authors of that paper understood what they claimed to measure neither during the experiments nor afterward. Anybody who takes seriously the claim of these authors to have recorded an image of a molecular orbital (not molecular orbitals, plural) has only the most superficial understanding of the experiment and its interpretation, which is replete with errors.
Cf. Foundations of Chemistry, 13, 87 - 91 (2011); DOI 10.1007/s10698-011-9113-1.

Cary Kilner: A (hopefully) balanced response from the co-moderator (I respectfully forgive the slight): John's vociferous post (and not meant as a derogatory description) provokes the following response. He makes many good points in his diatribe, but we need to get back to the masses we are trying to educate. Of course, we seek the excellence he wishes for the upper-level courses for majors. But the fact remains that we must focus on our service-course clientele, who will be our future health-care professionals and who are the most challenged students in the physical-sciences and the least prepared mathematically. While he decries instructor misunderstandings in teaching MO theory, I decry instructors who take mathematical competence for granted and still ply the sink-or-swim perspective. If students' primary and secondary education is not sufficient for their study of chemistry, we simply must take up the baton ourselves—hence this ConfChem.

His point regarding mathematics teachers teaching "formal math" and neglecting applications seems to be a fault of mathematics instructors more interested in their own egos than in any collective effort to prepare students for a meaningful career that requires the use of mathematics in any capacity. After all, mathematics instruction represents even more of a service course than our own gen-chem, serving a greater number of students and more diverse majors. We might address this issue through pleas to the NCTM, who seem to be at the forefront of mathematics education. And I agree with his argument, which I shift slightly here, that the student might rightly complain of the chemistry instructor being unable to understand and address the neophyte students' troubles with formal mathematics and with its translation into chem-math -- which, of course, is why we presented this ConfChem on mathematics in the teaching of chemistry—for the edification of interested and concerned chemistry instructors.

I really don't care if these students know an orbital from an orbit. I want them to understand how to make a serial dilution, how to calculate the volume of gas at a given temperature and pressure from a given reaction, how to determine if a given reactant is limiting or in-excess, how to perform a successful titration, how to use Beer's law and do UV-vis spectroscopy, how to conduct a meaningful calorimetry experiment. Some instructors might feel that these calculations are too abstract for the life-science majors. But I believe that you simply cannot teach chemistry meaningfully without showing how the science developed from an engineering perspective, i.e. in the service of solving practical problems. I want them to know that a chloride salt is NOT a "pale green gas," and that carbon has several allotropic forms, and the difference in behavior between concentrated sulphuric acid and concentrated nitric acid. In other words, I want them to know some descriptive chemistry with its associated chem-math measurements and calculations.

An understanding of a need for chemical calculations ("chem-math") has to arise from a need to understand interesting chemical and physical phenomena, either presented via provocative demonstrations or carefully-developed wet-chemistry activities or formal experiments. For instance, in seventh-grade I was reading about the shock sensitivity of potassium chlorate.
Of course my local mentor and pharmacist sold me some of this salt, since back then pharmacists WERE, in fact, "chemists." I also obtained chromates and dichromates and potassium permanganate and iodine crystals. My father allowed me to take sodium hydroxide pellets from the 55-gallon barrels in his shop, where I avoided inhaling the aggravating dust and observed the pellets immediately take on water from the humid air. My grandfather helped me obtain the concentrated acids I needed for my home basement laboratory. Back to my KClO3 story: since I understood stoichiometry as a way to DO chemistry, I was able to balance the equation for its reaction with table sugar, which I knew to be a disaccharide, and to calculate how much sugar to mix with my one gram of KClO3. Unfortunately this was too large a mixture. As I ground it on the cement floor of the basement using a lead plate I had melted down, it detonated with a huge BANG, instantly filling the basement with a fog. My parents cheerfully called down the back stairs, "Everything O-K down there?" To which I responded in a cold sweat, "Yeah, it's all good."

The main point in my paper for this ConfChem is that to address difficulties in mathematics, you must first have the student PRESENT – not on his/her smart-phone, not downloading a power-point, not sitting in the back scribbling inchoate notes, not practicing Educational Darwinism and merely passing with a D- to get the credit, but actually engaged with the material. Otherwise, how else can they learn? And why else are they there? The long-lost lecture-demonstration pedagogy, with a formally-hired and designated demonstrator/demonstration coordinator, was the way in the past that we have been able to engage a large lecture hall of students—not as entertainment but to show how concepts are related to phenomena, with concomitant measurements. This speaks to the value of the flipped classroom and of POGIL as a way to engage students. Nevertheless, however uniquely interested and energetic instructors have tried to implement these initiatives into 100+ classrooms, it is really the small classroom that enables these practices to work well, where the instructor can get in the face of EACH student to ensure he or she is engaged, and to ferret out issues preventing engagement. I'm speaking, of course, from 23 years of high-school teaching with the luxury of 15-25 student classes. And small liberal-arts colleges have this luxury as well. It's up to chemistry educators to continue to research ways to effectively engage students so they are actively THERE in the large lecture halls of our large public universities.

Finally getting back to mathematics, in my doctoral research I examined thirty-five pamphlets, booklets, paperbacks, small books and textbook chapters and appendices, to see how chem-math was being addressed by other concerned instructors. Of all these I found "Maths for Chemistry: A chemist's toolkit of calculations," by Paul Monk and Lindsey Munro (Oxford U. Press, 2nd edition) to be outstanding; written very clearly, and the best of the bunch. John and some other participants in this conference have cited various British publications, so I wonder if he is familiar with this fine book. It may not have quite the depth he requests, but it is very thorough.
Besides the dimensional-analysis, algebra (and graphing) review typical of most chem-math primers, it provides three chapters on powers and logs, two on statistics, one on trig, six on differentiation, four on integration, and one each on matrices (including group-theory), vectors, and complex-numbers. So it seems to me this chem-math text would serve most physical-chemistry instructors well.

(my apology again to Dr. Kilner) The objective of my attention to mathematics for chemistry has been mathematics for chemists, i.e. students proceeding to an academic degree with chemistry as major subject. I had been unaware of, and am somewhat astonished at, the severity of the mathematical incapability of students of general chemistry, for most of whom the ultimate interests lie elsewhere than in chemistry. The latter problem evidently requires concerted attention, such as remedial courses for present students and reform of school curriculum for future students, and this conference has been addressed mainly to this concern.

"His point regarding mathematics teachers teaching 'formal math' and neglecting applications seems to be a fault of mathematics instructors more interested in their own egos than in any collective effort to prepare students for ..." How other than "interest in their own egos" can one explain the propensity of instructors of general chemistry, following the authors of their selected textbooks, to teach orbitals and electron configurations to students of biology, nursing ... within common courses of general chemistry? I persist in maintaining that, if those instructors, and the authors, understood the mathematics, they would not teach that material, because it is nonsense and irrelevant for chemistry. Furthermore, is the fact that chemistry teachers teach "formal chem", such as electron configurations, and neglect applications in nursing or biology not the same fault of which mathematics teachers are accused? Both mathematics and chemistry are academic disciplines in their own rights, and chemistry is a science with an associated chemical industry.

Since I discovered in 1971 the existence of practical 'computer algebra' (IBM Formac), I have devoted efforts first to do my own extensive mathematical calculations for chemical or physical applications with software, and then, as that software developed into its present advanced form, to teach mathematics with that software (progressively Mumath, Reduce, Derive, Maple ... through more than three decades). For me, mathematics consists not merely of reading a book and scribbling separate calculations with pen on nearby paper, but of reading a large computer screen that describes, with sufficient profundity, the concepts, principles and practice, whereupon the reader implements the appropriate operations with the same software on the same screen. That scheme underlies my interactive electronic textbook Mathematics for Chemistry, now in its fifth edition, and I respectfully suggest that an analogous design of teaching arithmetic to algebra, with interactive testing built into the content of the lessons, would be an effective pedagogical approach, provided that the students were sufficiently prepared to cope with that software. Is 'computer-aided instruction' really so novel in year 2017? The ratio of students to instructor becomes then not 15 or 25 to 1 but 1 to 1. This approach would seem to be applicable for remedial purposes of the students of general chemistry.
For further chemistry my electronic textbook might be brought to bear. Dr. Kilner mentioned a book by Monk and Munro that includes various topics; my own electronic textbook includes all those topics and more, with rotatable plots in three dimensions and other pedagogical devices beyond the printed page. I have made no effort to become acquainted with various printed textbooks of mathematics for chemistry; I mentioned that by Sutcliffe and Doggett merely in relation to the discussion of the varied level of mathematics. I consider the entire concept of the traditional printed static textbook to be obsolescent, although when I read for pleasure I greatly prefer a book in my hands to staring at a computer screen, especially a small one.

Cary raises key points. The most important thing we could do is to agree on the way forward. Allow some suggestions, starting with the easiest one.

First, the issue with chemistry majors might be best met by a Mathematics for Chemistry course as the terminal math course for chemistry majors, with dropping of Diff Eq or maybe even Calc III. This is the path that physics and engineering have taken. It, IMHO, should be some combination of differential equations, linear algebra, statistical analysis and computation. It could be team taught, with maybe the analytical chemists taking the lead on statistics. I would strongly recommend that it be centered around a symbolic computation system such as Mathematica, Maple, or, shudder, MathCAD, the latter because of its ubiquity in engineering, the other two depending on the local license situation. If we can reach some agreement on this, it is something to be brought to the Committee on Professional Training.

Second, the more difficult question is the mathematical preparation for GChem. Much of the discussion has been about identifying those students who need help. We might start with a list of tools that have been suggested and perhaps then survey ourselves about which the majority feel are the most useful. An open discussion, as we have seen, can be scattered. If we can come to agreement on what students need to know and generally how to identify the students who need help, that alone will be useful in discussing remediation with our colleagues (maybe not so much), our chairs, deans and so forth, because it is no longer simply a personal or local opinion but something broader. Perhaps then the moderators could draft a short paper for J Chem Ed.

How to remediate is a much more difficult problem. As Cary says, you go to class with the students you have, not the students you want to have. A point that recently came up on Twitter is that the first thing a new Assistant Professor needs to know is that they were not the typical student in their GChem class. As we have heard, there is no magic bullet, although, again, I agree with Cary that small classes or recitation sections are key. In closing, thanks to all for their constructive work.

Excellent points were raised during these discussions. I agree on "As Cary says, you go to class with the students you have, not the students you want to have". If I may add, based on my experiences, most of the students are aware of their limitations and are willing to do the extra work to get on the same page with math. A bit of guidance on math is much appreciated by them (especially with commuter/returning students). Perhaps a free online "math for general chemistry" course with short videos on math related to general chemistry topics (maybe on ACS or elsewhere).
Students from diverse math backgrounds can watch these short videos and bring their math up to speed for Gen Chem classes. Every instructor can direct students to the same place. Thank you all for sharing your great pedagogies/ideas.

scerri: Orbitals may not have 'real physical significance' and may indeed be unobservable. Yet they are very useful in rationalizing many aspects of spectroscopy and in chemical education. Eric Scerri, UCLA Department of Chemistry

Both atomic and molecular orbitals have been observed. See the recent work of Wilson Ho at UC Irvine. An earlier perspective is found in Dinse and Pratt, "Orbital Rotation", JACS 104, 2036 (1982).

Rich Messeder: Many ideas in this conference address how to move forward, and I think that they should all be given serious consideration in the immediate future. My perspective is one of finding a core approach that is useful for STEM students, and having that core modified for specific fields (chem, physics, etc.). I reiterate the recommendation for using computers to relieve faculty of the press of supporting many students in low-level reviews. For example, ALL entering STEM students could be enrolled in a computer class that "teaches" those concepts that we want memorized. Students would be //required// to meet minimum performance criteria; for example, be able to enter the answer to multiplication tables through 12s, randomly presented, in some reasonable time (constrained, calculators not permitted), with, say, 95% accuracy. Students who meet the requirement on the first pass have effectively completed the course. To ensure that "the cramming effect" is not active, students who did not pass everything on the first pass would be required to log in and retake the drill|exam periodically (weekly? monthly?) to provide the repetition that we have discussed here. These remarks are just a starting place.

I first saw computer-aided teaching at the UI/Urbana campus in the 1970s. Students were very excited to get time on the systems, which were advanced for the day, but primitive by today's standards. Nonetheless, I have been surprised by how little computers have been used for teaching over the decades. Amateur radio operators (I'm one) have used computers for decades to help them commit to memory material necessary to pass the different FCC license level exams. There are many resources available, and perhaps one of the tasks of academia is to sift through those resources and find the best of them to recommend to students. This would be an ongoing process, and it would be nice to have a sort of "clearinghouse" for all of it. This suggestion has the risk that the task will become overwhelming. Faculty from different institutions must be ready to "tolerate" recommendations from various sources. At any rate, for the fundamental material of which we would like to see our students demonstrate mastery, this approach may be useful. The benefit of this approach is that it is easily absorbed by secondary schools, relieving them of the same time-burden, and could shorten the time that it takes to raise the math capabilities of our students. This approach also lends itself to "teaching" basic math tools, such as spreadsheets and basic MATLAB, Maple, etc., programming.

Cary Kilner: Kudos to you and your team for showing how the problem of understanding mathematics in upper-level coursework can be addressed! My original degree was chemical engineering, so I did study much of this mathematics myself (not that I remember it).
Here are some questions for you. I have seen little discussion of this issue in teaching P-Chem, although the problem must exist for many programs and instructors. Why do you suppose it has received so little attention, despite the importance of the conceptual understanding of this material for chem-majors in this very important course? (I know the chem-majors are certainly a very small subset of the gen-chem students we have been discussing in this ConfChem.) Do you think it likely that we have seen a decline in mathematics facility and understanding with even the stronger mathematics students found in the major's track in the past few decades? Is this the result of changes in the way upper-level mathematics is being taught? And do you feel this could also be a reflection of recent changes in earlier mathematics education? Thank you!

An obstacle to giving attention to maths for p-chem may be simply that the usual sequence works OK for other STEM disciplines, but a slightly different set of material (more understanding of linear operators, self-adjoint operators, orthogonal functions, for example) is needed for p-chem, and there are usually not enough chemistry majors for math departments to worry about them. Regarding your question on decline in understanding of math majors, I can only give my impressions. One hears growing frustrations with decline in students' ability in proof, particularly what we call "analysis"--proving the theorems of calculus rigorously. However, I don't think that that is true for the stronger mathematics students--they seem as strong as ever to me.

Rich Messeder: I looked online at the course texts you referenced, examining the table of contents, and browsing pages where it was permitted. It seemed to me that the scope is such that it would cover 90%+ of applied math for several STEM majors. In a chem-specific context, examples use the vocabulary and semantics of chemistry. But it seems to me that most often college-level intro math courses spend a great deal of time on theory, sacrificing application, so that students walk away from the class able to write proofs, but less able to actually use the math. Much of the math course development that I see represented in these papers seems to be oriented toward practical application, which seems appropriate to me (even for theoretical physics). Did you consult with other STEM departments at your institution? Do you think that your course can be used by other departments with little modification? Physicists and engineers, for example, might want more numerical methods. I noted especially the comment about 3D visualization, which I think is an important element often overlooked at the undergrad level. What aspects of 3D visualization are covered? And +1 for Dr Nelson's question: I am surprised at the fast track to course acceptance. To what do you attribute this success?

It is fairly well understood that prior knowledge is the most predictive variable for success in any course. I would love to see you develop your anecdotal observations into a research study and attempt to discover where students' level of proficiency is: are they proficient in algebra, pre-cal, cal? We might all be surprised to see where the problems stem from, and it might not be limited to just mathematics skills. Discovering levels of proficiency of entering students would be a good starting point.
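To make Rich Messeder's drill proposal above concrete, here is a minimal sketch; the question count, pass threshold, and fact range are illustrative placeholders, and a real system would add the time limits, logging, and scheduled re-testing he describes:

```python
# A bare-bones multiplication drill: facts through the 12s,
# randomly presented, pass at 95% accuracy.

import random

def drill(n_questions=20, pass_rate=0.95):
    correct = 0
    for _ in range(n_questions):
        a, b = random.randint(2, 12), random.randint(2, 12)
        answer = input(f"{a} x {b} = ")
        if answer.strip() == str(a * b):
            correct += 1
    score = correct / n_questions
    print(f"Score: {score:.0%}")
    return score >= pass_rate  # True: requirement met on this pass

if __name__ == "__main__":
    drill()
```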
One of my other questions is how do you get students to participate in the drill, practice, rest, revisit idea without using marks as a reward? I also believe that getting students into the habit of doing practice before entering college or university will help students in being successful beyond secondary or high school.

Rich Messeder: No. And, you might have guessed, I won't be surprised to see research that supports this answer. I have tried to refrain from writing pages of replies, in order that I not seem too pushy. My experiences in the private sector, especially as an engineering supervisor, and my further experiences teaching at HS and university, suggest strongly to me that the same principles and goals that are appropriate for success in the private sector apply to academia. I have taken management courses over the years that address the psychology of managers:employees::faculty:students, and paid close attention over the years to what works and what does not, to what adds or detracts from me as a leader and as a teacher, in an effort to continually improve myself.

Regarding the suggestion that I might turn my experiences into a research project: There are reasons why that is not likely to occur, though a related project could apply. Why? Time is of the essence. We are wasting our students' time, and doing both them and their employers an injustice, as well as hugely impacting scientific and engineering progress. It is time for well-thought-out action. Some of the research here that I find so useful and relevant is a decade or more old. I admire those institutions that have stepped up to the plate and changed for the better; the majority, it seems to me, have not, and it shows in the quality of students entering STEM courses in university.

For example: In class, I regularly emphasize collaboration on studying and problem-solving, followed by individual writing...and relate specific factual anecdotes from my experiences that reflect dramatic improvements in performance. I roll these out occasionally during the course...Why occasionally? Because it seems to me that students are not up to speed on either of these points (collab & literacy), as evidenced in part by comments from the private sector regarding grads entering the workforce, and, just as we have been saying about repetition in math, repetition in all things results in internalization. I grade my STEM students on literacy, and tell them so the first day of class.

An example of comments from the private sector: I have sat in on many industry "panels" at the undergraduate level. These panels are ostensibly to share industry perspectives, but I often think of them as recruiting opportunities (which is OK, too). Almost invariably, at the end of a panel discussion, some student will ask what the panelists opine that students might take from their undergrad experiences other than strictly academic work (grades, papers, etc.), and almost invariably the panelists reply with "collaboration skills and literacy".

Part of my research: I have personal knowledge of a series of events (circa mid-1980s) at a huge private facility where the consequences of certain kinds of equipment failures posed significant risks to inhabitants of local communities. One day, the computers that monitored all that equipment made a mistake, and declared that something was wrong (everything BUT the computers was working just fine, it later proved). Nonetheless, this spurious fault condition triggered an attempt to activate several safety systems.
One very important system did not activate. Post-game analysis showed that it had a design flaw, and that design flaw had been identified by an engineer earlier. It seemed that the engineer was reviewing systems (for reasons I never knew), and decided that his calculations indicated a design flaw. He wrote it up and passed it to his immediate lead engineer. Well, that certainly should have gotten some attention, eh? But the document was so poorly worded that the lead engineer didn't get the point, and then, not realizing what the problem was, didn't follow up with the author. (Two significant problems: literacy and leadership.) When all this surfaced, the VP of engineering of this very large engineering staff had all the engineers take a literacy exam. All those who failed were tasked with taking remedial classes of the VP's choosing --- on their own time --- with the understanding that those who failed the first exam would be tested ~6 months later. Those who failed a 2nd time would be fired. Scientific research? Not exactly. Message received? Absolutely.

Yet, when I mentioned this to a university physics faculty member, he said that literacy was not his concern...that's what the English department is for. But, I opine, standards /there/ are as poor as they are in mathematics, and anyway writing technical papers is very different from writing a paper criticising a novel. (Prior to the event mentioned above, I had already informed my engineers that literacy would be part of their annual review.) I opine that most US students entering college or university do not meet the reading, writing, and math skills of their forerunners of a few decades ago. They struggle to read challenging material, they struggle to write with any degree of literacy appropriate to their level of education, and they struggle to manage conceptual material in STEM classes because of the issues addressed in this conference.

Side note: Sorry if I have already mentioned this: Compare the user manuals for the HP-11C and 15C (on the web) with that for the TI-89. This comment is NOT about the devices themselves, but about user manual content then and now. Sorry, I don't have a reference for older TI calculators. I find the research here, and referenced here, quite valuable, because it gives me something substantial to add to my anecdotes as I continue to work toward improving the quality of US academic life.

A common criticism of courses of mathematics is that the theory is emphasized at the expense of the practice. The same criticism might be made of general chemistry, in that orbitals and other baggage eventually traceable to fraudulent quantum-mechanical bases are emphasized at the expense of the real basis of chemistry as a practical science; in the latter case, the instructors of general chemistry, merely teaching ill-chosen textbooks, teach that material not because they understand it but because they fail to understand it. I find it nearly impossible to believe that instructors of mathematics at any level in general exhibit the analogous ignorance, or that the textbooks of mathematics contain a similar proportion of rubbish, because mathematics is much more readily intrinsically testable. The problem of poor mathematical preparation for general chemistry seems to be ultimately attributable to the failure in learning arithmetic -- multiplication tables et cetera, long before algebra and geometry, let alone calculus, are confronted.
As for strategies applied, within the environment of a college or university, to entering students to redress their accumulated arithmetical or mathematical deficiencies, I admire the recognition of the need, and the practice of remedying those deficiencies, as discussed in this conference. At the level above general chemistry, and under the assumption that the deficiencies noted above have been resolved for the students who advance therefrom, one can then apply various courses of mathematics within chemistry departments, to avoid the excessive emphasis on 'theory' -- theorems, corollaries, lemmas -- in courses taught by mathematicians who have little interest in, or knowledge of, chemical or other applications. The use of advanced mathematical software in the latter circumstances can be greatly beneficial, but is of no use if the students lack the fundamental skills of arithmetic.

I do agree that this course (with some modification) could be opened up to other departments as an applied math sequence. In fact, CSU does have a Calculus for Biological Scientists sequence as well, which uses in part the text by Erich Steiner. However, that course sequence is two semesters in total, the second of which is not a required course for the major. My biggest concern might be for students wishing to go further in math. Since topics come from a variety of traditional math courses, it is a little bit unclear what other math classes they might take if they chose to continue with math. For example, students have some differential equations, but not a whole course's worth. Numerical methods, for example, may be more important to other fields, but adding this in would cut other topics in an already full course.

In my opinion, part of the joy of teaching this course was having the chance to engage with students mathematically on topics that they already cared about/felt were valuable to think about. Because all of my students were chemists, I think that keeping chemical examples central helped with student buy-in. Instead of a set of rules to be memorized, math was shown to be useful for thinking about physical systems that interested them. I know not every school is large enough to be able to support these types of "flavored" math classes. I wonder if some of that feeling of relevance would have disappeared if the applications were varied. To answer the question on 3D visualization, we spent time learning how to sketch surfaces in 3D, setting up and evaluating volume integrals, and doing some work with plotting in MATLAB.

I guess it was kind of fast. We ran the course two years as an experimental course, and in the second year the process was put through to make it a regular course. I think that we can officially run a course three years as an experimental course. It was important that the Maths department chair and undergrad director were supportive, and that faculty from chemistry were involved with the course design and wanted the course to continue running. Positive feedback from students was important too. Since we introduced the course, faculty from physics, computer science, and chemical engineering have all expressed interest in switching to this sequence. The possible obstacle to having physics and computer science join is that we want to keep the focus on p-chem applications. Also, physics and chemical engineering majors will need to take a differential equations course as well, and that makes for a lot of overlap with the first course in the Maths for Chemists sequence.
I have suggested for chemical engineering that they do the first semester of Maths for Chemists, and then switch back to the normal Calc. III course. I think that even for maths majors a sequence of i) Calculus I (up to the fundamental theorem of calculus), ii) a differential-equations-based course, much like the first semester of Maths for Chemists, and iii) Calculus III would be a good sequence. The Calc II coordinator here is interested in taking some of the ideas from the MfC sequence and giving more of a focus on differential equations in Calc. II.

Your paper states one of your goals as: "students should be able to decide if solutions are reasonable with estimation techniques and order of magnitude calculations." That topic was addressed previously in this conference (in Paper #1) for a course in physical chemistry. How is this done in your program? Are some numeric calculations on graded assignments expected to be done without a calculator? -- rick nelson

I appreciate the careful thought and methodology of teaching estimation in the first paper. In practice, in our program, estimation was a recurring theme throughout the semester, not a topic taught on its own. It was often the response to "Does this answer make sense?" or "Is this what we expected?". I would run through estimations on the board or verbally after computations (often with a calculator) were completed. The accuracy of estimation would depend on the problem; sometimes it would be pretty rough--even just an order of magnitude argument. Hopefully, these checks became part of the routine of problem-solving. I did not require students to do computations without a calculator. I can see how this forces students to sharpen arithmetic skills. However, I hope that using estimation to check answers was communicating that even when using a calculator you need to do a "gut check" at the end.

I was the managing author and 2nd listed author of the book The Unified Learning Model (Shell et al.) referred to elsewhere in this conference. Willingham's book (Why Don't Students Like School?) is an excellent, readable summary of where educational psychologists are today in terms of their views on learning. If that book has a shortcoming, it is that reading it will suggest to you that students actually DO like school. Of course, that would not be the title of a best seller. Two of my colleagues have joined me in writing a newer book rooted in information theory. This book is available at: [link]

The new book is edited periodically. A new round of edits will be posted before the end of the year. The edits are maintained such that readers of earlier editions are directed to the new changes. The book has four sections. First is the general theory. Next are the applications. Third are elements of the basic underpinning science (such as EEG or studies of snails). Last are the enumerated edits. The book was first posted in August 2015. The advantages of a Web-based book include the opportunity to edit based on new information and the ability to link directly to multimedia. For example, check out: [link]

Interesting--concentrating on other things makes the details hard to see!

Dr. Brooks – It is rare to find a single source on the "science of learning for educators" that is both comprehensive and up-to-date. Your "Minds, Models, and Mentors," at the link you provided, I think is Number One.
For instructors in "science-major chemistry" with its focus on well-structured problem solving, my personal sequence of "recommended reading," from short and simple to more comprehensive, would include:

1. Four pages on fundamentals of how the brain solves problems in the section "The Human Brain – Learning 101," on pages 8-11 at [link].

2. Eight pages on problems in the physical sciences and math on pages 4-2 to 4-10 of The Report of the Task Group on Learning Processes in the Final Report of the National Mathematics Advisory Panel (NMAP) at [link]. The section on "automaticity" is especially important in helping students solve scientific calculations.

3. The book Make It Stick by Brown, Roediger, and McDaniel (2014), describing specific study strategies such as retrieval and interleaved practice, summary sheets, and elaboration.

4. Your "Minds, Models, and Mentors" as a comprehensive summary of the brain's structure and its impact on learning. -- rick nelson

Cary Kilner: Thank you for your contribution to the ConfChem. You may recall that we met at Princeton in 1984 at the Woodrow Wilson Dreyfus Master Teachers Institute, where our charge was periodicity and descriptive chemistry. That one month was an outstanding experience for me; it kick-started my career as a chemistry teacher (I had been a professional musician after college), and reinvigorated my love of DOING chemistry, and not just talking about it. It encouraged me to continue to develop demonstrations, which I eventually used to help teach chem-math to apprehensive students. As I recall you were editor of an ACS publication, and came for a week to participate as a leader. I don't believe it was Chem-Matters, but maybe so. Please refresh my memory. I ordered a class set and used it for 20 years. My students looked forward to it coming every few months and loved reading it. I had them read back issues as well as the four that came each year. Were you interested in cognitive science at that time? I will certainly check out your references.
Is Factoring Really In BQP? Really?

January 23, 2011

Is the factoring problem in the complexity class BQP?

Peter Shor is the author of the paper on quantum computing—the paper that helped create an entire new research agenda. It was not the first paper on quantum computing, it certainly is not the last paper on quantum computing, but it probably is the most exciting one. Peter won the Nevanlinna Prize in 1998, among many other well-deserved awards for this seminal work. The paper is known as the one that shows factoring is in {\mathsf{BQP}}.

Today I have an embarrassingly simple question: Is factoring in {\mathsf{BQP}}?

This is one of those discussions that perhaps will show how wrong I can be. I try to get things right, but we are all human. Take a look at this Sunday New York Times' chess column to see examples of chess blindness: one example is Magnus Carlsen, the top rated player in the world, making a "colossal blunder" that basically lost a Knight in one move. In my case the blunder may even be in a position where I made the right move 17 moves ago. Then I was involved with Umesh Vazirani and Ethan Bernstein over measurement and precision issues in their famous seminal paper on quantum Turing machine complexity, which formally defined {\mathsf{BQP}}. Perhaps my question today is just silly, but I am going to press forward anyway. Even if I am wrong, perhaps this will yield a nice intuitive answer to the other way of putting my question: Is there a simple way of reducing search problems to decision problems in quantum computations, especially those that involve sampling?

Ken Regan and I have discussed this issue quite a bit, and the comments to follow are joint. If this turns out to be just wrong or not even wrong, then I am probably the one to blame the most—if it's a good question then thanks go to Ken.

If you know the quantum complexity class {\mathsf{BQP}}, then you can skip ahead. But we thought it important to be sure we are all together. Recall {\mathsf{BQP}} is the counterpart of {\mathsf{BPP}}—does that help?

The complexity class {\mathsf{BPP}} is the set of languages that are accepted by a probabilistic machine that runs in polynomial time. Further, when the machine accepts it does so with probability at least {2/3}, and when it rejects it also does so with probability at least {2/3}. The consequence of forcing accept/reject to be bounded away from {1/2} is that this makes the class of languages "nice." Note if a language {L} is in {\mathsf{BPP}}, then it is possible to determine if a string {x} is in {L} to high probability: just run the machine repeatedly with different random bits and take the majority answer.

The class {\mathsf{BQP}} is the same, except that now the probabilistic machine is replaced by a polynomial time quantum one. The machine also must have the {2/3} property: accept/reject is always with probability at least {2/3}. This again is the reason for the "bounded" in the name. Let's turn to factoring, one of our favorite problems.

What Shor Proved

Shor showed that there is a polynomial time quantum computation that can find a factor of a number. It succeeds with a reasonable probability, and can be repeated, if needed, to find all the factors of the number. This is a brilliant result, a wonderful result, a game changing result.

What Shor Did Not Prove

Peter's paper does not directly prove that factoring is in {\mathsf{BQP}}. Yet everyone knows that factoring is now in {\mathsf{BQP}}. Right?
The claim is everywhere: it's on wiki pages, quantum sites, in lecture notes, and elsewhere. But we do not understand the proof—actually one of us, guess who, thinks there is a problem with the "proofs." Let's look at the issue more carefully.

The factoring problem is about decomposing an integer {N>1} into its prime factors. Thus given {N} as the input, a factoring algorithm must output a list of primes

\displaystyle p_1, \dots, p_m

so that {N = p_1 \times \dots \times p_m}. Note that if we insist that the list is ordered, then the answer is unique, since the integers have unique factorization. Technically this is a function: given an input it returns a unique output. Thus if {F} denotes the factoring function, some examples are:

\displaystyle \begin{array}{rcl} F(17) &=& \{ 17 \} \\ F(57) &=& \{ 3, 19 \} \\ F(12345) &=& \{ 3, 5, 823 \} \end{array}

Since factoring is a function, we cannot say that it is in a complexity class—that makes no sense. It is a type error, like trying to take the square root of a character string: languages give only one bit, while the factoring function returns many bits. This has nothing to do with the power of the complexity class; it is a straight consequence of the definitions. Thus it is meaningless to say that {F} is in {\mathsf{BQP}} or even in {\mathsf{EXPTIME}}. Shor's paper itself speaks of {\mathsf{BQP}} only once, and only as a function class, but of course it is a language class. Thus we need to speak of factoring as a decision problem.

How To Encode Factoring As A Language

There are, as you probably know, many ways to encode a function into a language. This is, therefore, what people mean when they say "factoring is in {\mathsf{BQP}}." Sounds easy—we just have to create a language that encodes {F} and then show that the language is in {\mathsf{BQP}}. Some sites define the following language {L_{<}} as their encoding of factoring. The language is:

\displaystyle \{ (x,k) \mid x \text{ has a non-trivial factor smaller than } k \}.

Is this a reasonable language to encode factoring? It must satisfy, we claim, two properties:

1. Given an {x} there must be a polynomial time algorithm that computes {F(x)} if allowed access to {L_{<}} as an oracle.
2. Given an {x} and {k} there must be a polynomial time algorithm that determines whether or not {(x,k)} is in {L_{<}} if allowed access to {F(x)}.

What these conditions really say is "the language captures factoring, but no more." The statement (1) is easily seen to be true: use binary search to find, in polynomial time, a factor of {x}. Divide by this factor, and then continue until all the factors are found. Finally sort them and output the value of {F(x)}. Perfect—so far so good. (A code sketch of this reduction appears just below.) The statement (2) is also easy to check.

The "Proof"

Here is a proof that {L_{<}} is in {\mathsf{BQP}}. It is from a site on quantum computing—we know you can probably find it, but we thought we would not name it directly.

The decision problem for factoring is in {\mathsf{BQP}}: given {n} and {k}, is there a non-trivial factor of {n} smaller than {k}? This is easily seen to be equivalent to having a polynomial-time quantum algorithm to find a non-trivial factor of any composite number {n}, with probability of outputting such a factor of at least {2/3}. Shor's efficient quantum algorithm for factoring can be used to find factors in this way, and so it follows that the decision problem for factoring is in {\mathsf{BQP}}.

Color us confused, since we, especially one of us, do not follow this argument.
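Here, first, is the code sketch of statement (1) promised above. It is purely our own illustration: the oracle for {L_{<}} is simulated by trial division so that the snippet actually runs; in the real reduction it would be whatever decider for {L_{<}} is claimed.

def oracle_L(x, k):
    """Simulated oracle for L_<: does x have a non-trivial factor smaller than k?
    Trial division stands in for the oracle here, purely for illustration."""
    return any(x % d == 0 for d in range(2, min(k, x)))

def smallest_factor(x):
    """Binary search for the smallest non-trivial factor of x,
    using O(log x) oracle queries. Returns None if x is prime."""
    if not oracle_L(x, x):      # no non-trivial factor below x at all
        return None
    lo, hi = 3, x               # find the least k with oracle_L(x, k) true
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle_L(x, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo - 1               # the oracle flips from false to true at k = p + 1

def F(x):
    """Recover the sorted factorization, as in statement (1): peel off the
    smallest factor (which is necessarily prime) and repeat."""
    factors = []
    while x > 1:
        p = smallest_factor(x)
        if p is None:           # what remains is prime
            factors.append(x)
            break
        factors.append(p)
        x //= p
    return sorted(factors)

print(F(12345))  # [3, 5, 823], matching the table above

Each call to smallest_factor makes polynomially many oracle queries, and each peeled-off factor shrinks {x}, so the whole procedure runs in polynomial time relative to the oracle.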
Let's consider the argument carefully with {\mathsf{BQP}} replaced by polynomial time. In this case the argument is rock solid. Suppose that there is a new super factoring algorithm that, given {n}, finds all its factors in polynomial time. Then here is the proof that factoring is in {\mathsf{P}}:

1. Let {(n,k)} be the input.
2. Run the polynomial time algorithm and factor {n = p_1 \times \dots \times p_m}.
3. Check if any {p_i < k}.
4. If yes, then output accept; otherwise, output reject.

This proves that factoring is in {\mathsf{P}}, provided there is a polynomial time factoring algorithm. Rock solid. No problem, if the algorithms are classical.

If the factoring algorithm is Shor's, then we do not see why this will be in {\mathsf{BQP}}. There are several reasons that we are confused, but the main one is simple:

A quantum {\mathsf{BQP}} algorithm makes one measurement and gets one bit.

The above algorithm uses Shor as a subroutine, and his algorithm makes a measurement of many qubits. This seems to be a problem. It says to us that the proof is not correct, that it does not prove that factoring as a language is in {\mathsf{BQP}}.

What Is Going On?

Now we hasten to mention that the aspect of Shor's algorithm needing to measure many qubits is the basis of Leonid Levin's critique. This has been viewed as an issue of whether the quantum states created in Shor's algorithm are really physically feasible; see Scott Aaronson here. We have a more facile query: what if you can only measure one qubit and must then start over?

Here is an abstract description of the problem we see that does not mention quantum. Suppose in the functional (i.e., search-problem) case we have a "black box" {B} that when we press a button at time {i} gives us a random function value {y_i}. This is a "multi-valued function," really a sampling procedure. We also have an efficient deterministic procedure {W} such that, with high probability over a sufficient number {m} of samples, {W(y_1,y_2,\dots,y_m)} gives us the correct answer. Shor's algorithm can be viewed this way, and so far that's fine.

Now, however, suppose that in place of {B} we have a black box {B_1} that gives only one bit, according to a language representation of the multi-valued function. Thus instead of getting {y_i}, we get only one bit pertaining to {y_i}. The problem is that if we press the button again at time {i+1}, we may get a bit that pertains to some other value {y_{i+1}} that is not the same as {y_i}. If we could guarantee that repeated taps on {B_1} would stay focused on {y_i}, then we could apply search-to-decision and recover the function value {y_i}. Our problem in a nutshell is that we do not see how Shor's algorithm carries over to this setting.

We are confused. The whole idea of quantum computing, the whole reason it is more powerful, the whole source of excitement, is due to new rules. Old simple tricks from classic computing do not work anymore. That you cannot erase—computations must be reversible—is one good example.

We think there are three possible situations:

1. We are completely wrong. The proof is correct and we are just confused. Quite possible—would not be the first time.
2. We are right, but. The above "proof" is the one written down in many places, but all quantum experts are aware of how to fix it. The real proof is so trivial that no one has ever bothered to write it down. Again quite possible. But we would then suggest that it might be nice to write down a correct proof—no?
3. We are right. There really is a missing argument here. Possible?
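As an aside, the point at issue can at least be simulated classically. The following Python snippet (our own toy illustration, not anyone's proof) compares "measure all qubits, then postprocess classically" with "postprocess coherently into an ancilla, then measure one qubit" on a random 3-qubit state:

import numpy as np

rng = np.random.default_rng(0)

# A random normalized 3-qubit state (8 complex amplitudes).
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)

# An arbitrary classical postprocessing function f: {0,...,7} -> {0,1}.
def f(x):
    return bin(x).count("1") % 2   # say, the parity of the measured bits

# Route 1: measure all three qubits, then compute f on the classical outcome.
p_one_route1 = sum(abs(psi[x]) ** 2 for x in range(8) if f(x) == 1)

# Route 2: apply f coherently, |x>|0> -> |x>|f(x)>, using one ancilla qubit
# (a unitary permutation of basis states), then measure ONLY the ancilla.
joint = np.zeros(16, dtype=complex)    # ancilla is the least significant bit
for x in range(8):
    joint[2 * x + f(x)] = psi[x]
p_one_route2 = sum(abs(joint[2 * x + 1]) ** 2 for x in range(8))

print(p_one_route1, p_one_route2)      # equal, up to floating-point error

The two printed numbers agree exactly, so no probability mass is lost by deferring everything to a single final measurement; whether this bookkeeping can always be carried out within the resource bounds of {\mathsf{BQP}} is, of course, precisely the question.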
What references seem to be telling us is that the answer lies deep within closure properties of {\mathsf{BQP}} rather than on the surface. But even here, the details cannot be found in the Wikipedia article on {\mathsf{BQP}}—rather, the article on {\mathsf{PostBQP}} (a different class, one that equals {\mathsf{PP}}) proves such properties for {\mathsf{PostBQP}} and suggests that the proofs for {\mathsf{BQP}} are similar. Can we make this clearer?

Open Problems

Is factoring really in {\mathsf{BQP}}? We believe that it is, and we believe that there is a proper proof of this fact. But we do not think the "proof" that is out there is correct. It would be great if we have simply overlooked a well-known proof. If you know one, please tell us—somehow we think that if this is the case we will be told.

52 Comments leave one →

1. Philip permalink January 23, 2011 7:45 pm
I don't really understand quantum computing all that well…but isn't it the case that, given x and k, you could run Shor's algorithm repeatedly until you have found all of the factors…and then accept if one of those factors is less than k, and reject otherwise? Doesn't this solve the decision problem?

• Philip permalink January 23, 2011 7:52 pm
Sorry, just re-read the article more carefully and noted that you addressed what I said. I'm still confused, though…BPP is contained in BQP. Surely it's possible to do the reduction if BPP is a subset of BQP…right?

• rjlipton permalink* January 23, 2011 9:54 pm
This is just fine, but does not show factoring is in BQP. It shows that factoring can be done by a larger class of quantum computations.

2. January 23, 2011 8:38 pm
LOL … Dick and Ken's question reminds me of the reluctant penguin phenomenon … who's going to be the first to dive into these waters? Plunging in fearlessly, the orthodox explanations of why Shor's algorithm establishes that factoring is in BQP have always satisfied me, via the following reasoning. Let us suppose we are given as input (x,k), and after a lot of classical pre-processing, quantum register initialization, quantum processing, quantum error correction, measurement of as many ancilla qubits as we like, etc.—all precisely as described by Shor's original article and elaborated by subsequent QIT researchers—when all the (polynomially many) quantum steps are completed, we end up with classical registers that store the quantities (x,k,F(x)). As the final step, the connection to BQP is established by classically computing a single (k-dependent) classical decision bit, which is quantum-encoded in one final output qubit … and we are done. Is this right? Or am I about to be devoured by a leopard seal? 🙂

• January 23, 2011 8:52 pm
Again being very facile, our beef is that BQP as perceived by John Q. Public says at the end you measure 1 qubit and with 2/3 probability it gives the right decision answer—but in the process you describe, you've already measured many qubits. And per Levin's critique, you've measured many at a time—though I realize there are steps around the further objection that this requires insane precision. The positive query, which other events kept me from asking selected people beforehand, is: has anyone formalized and studied what we might call "Bottleneck BQP"? This is where you measure 1 qubit and then everything's zapped and you have to re-run, though based on results so far (i.e. "adaptively") you can change the re-start configuration in some limited manner.
• January 23, 2011 9:14 pm
My (engineering-level) understanding is that every concrete design for a quantum computer that has ever been seriously proposed entails ancilla qubit measurements at intermediate stages of the calculation, as a necessary ingredient of quantum error correction. It is not necessary to show the results of these ancilla measurements to the outside world … but it *is* necessary to make them. As one of Dick and Ken's greatest Gödel's Lost Letter columns once asked, "Are they allowed to do that?" … meaning in the present case … "At intermediate stages of a calculation, are quantum computers allowed to perform measurements on ancilla qubits, and perform classical computations on these measured values, and adapt subsequent quantum computations to the result of these measurements?" … and the consensus definition of quantum computing (AFAIK) is that the answer is "yes".

• January 23, 2011 9:58 pm
Indeed I agree with intermediate measurements being realistic, but then do we need to clarify the definition of "BQP" to state that? The nub is in the Anonymous 8:42pm part of the comments—to reference the proverb, can one always "cut twice and measure once"?

• Lurker permalink January 24, 2011 5:58 am
As far as I understand, all the intermediate measurements (and operations conditioned on their results) could be replaced by (quantum-)conditional gates with only polynomial overhead – i.e. all the classical post-processing could be done coherently on a quantum computer. Then the only measurement would be the one deciding if (one of) the factor(s) found is smaller than k. What am I missing?

• January 25, 2011 11:22 am
Lurker asks: What am I missing? Ken and Dick's questions have caused me to realize: you are missing the case in which the intermediate classical processing is accomplished by a Turing machine that is running any of the algorithms in P for which there is no proof (even in principle) that the algorithm runs in P. How exactly can one emulate an unprovably-in-P classical computation via polynomially-many reversible logic gates? Hey … don't ask me! 🙂 The proof machinery associated to this kind of analysis might well be illuminating in itself … as well as being essential to a rigorous definition of BQP.

3. Anonymous permalink January 23, 2011 8:42 pm
I think this is case (2). I will try to write more later, but a good place to look is at the so-called Principle of Deferred Measurement. You can defer all measurements of a hybrid classical/quantum algorithm until the very end, when you only need to measure one qubit.

• January 23, 2011 8:56 pm
Ah—the "only need to measure one" is the part we're missing…!?

• rjlipton permalink* January 23, 2011 9:52 pm
This would be nice to see. But the given proofs do not use any deferred measurements? No?

• January 24, 2011 11:40 am
Anonymous' post is one of several that have raised profound issues, which in aggregate have led me to agree with Dick and Ken. A good starting reference is Michael Nielsen and Isaac Chuang's admirably concise and clear discussion of what they call the Principle of Deferred Measurement and the Principle of Implicit Measurement (Section 4.4 of Quantum Computation and Quantum Information).
When we translate the algorithmic postulates of Nielsen and Chuang's text into the geometric postulates of category theory—as engineers find convenient for practical computations—then we find that the Nielsen & Chuang measurement principles are natural in both pictures (algebraic and geometric) … and yet their respective algebraic and geometric representations "sit in our brains in very different ways" (to borrow Bill Thurston's wonderful phrase). Thus it now seems (IMHO) that a discussion of these measurement-related issues in the context of a rigorous definition of BQP would be very interesting and valuable, precisely to the extent that it helps us "feel these ideas in our whole brain" (again to borrow a phrase from Bill Thurston).

4. January 23, 2011 9:52 pm
It's been many years since I've thought about this, but unless I'm misremembering something, or misunderstanding the problem (certainly possible!), this is not difficult to fix. The key point is that if you want to use Shor's algorithm as a subroutine in a classical computation, then you can actually convert all the parts in the classical computation to quantum, eliminating the measurement steps along the way. Formally, the procedure you apply to do this is: (1) convert the classical parts of the computation to a reversible classical computation using gates such as the Toffoli and Fredkin gates; (2) eliminate all the measurement steps, replace bits by qubits, and replace classical Toffoli etc. gates by their quantum equivalents. At the end, you need to measure a single qubit: the qubit which corresponds to the classical bit you would have inspected. All the other qubits can safely be ignored.

That last paragraph is a summary. It's instructive to work through an example which illustrates the general principle. Suppose we have a unitary operation U which acts on (say) three qubits. At the end we measure those three qubits, and then postprocess those measurement results by doing a classical computation C, which we'll assume is done using just Toffoli gates (say), and give just a single bit as output. In fact, this procedure is equivalent to not doing the measurement of the three qubits, and replacing the classical Toffoli gates by their quantum equivalents, followed by measuring the appropriate qubit. This isn't difficult to prove, with a bit of playing around, although I won't try it in a comment. I hope I've understood the question correctly, and addressed it.

• January 23, 2011 10:09 pm
I am, by the way, assuming that you agree with the following: you can use Shor's (quantum) algorithm as a subroutine in a classical algorithm that will, with high probability, tell you if a number n has a factor smaller than k. The total complexity is polynomial in the size of n. That's the starting assumption behind my comment: if we're not agreed on this, then I have misunderstood your objection.

• January 24, 2011 7:34 am
Here's an example that illustrates what's going on. It's not intended as any kind of formal proof, but rather as an example to explain the intuition behind what's going on.

(1) Suppose we perform a quantum computation that results in a state \sum_x \psi_x |x\rangle.
(2) We measure the state, getting result x with probability |\psi_x|^2.
(3) We now do a (reversible) classical computation, getting result C(x) with probability |\psi_x|^2, where C(.) is some classical permutation.
(4) We consider the first bit only, C_1(x). The result is 0 with probability \sum^0_x |\psi_x|^2, where the ^0 indicates that we only sum over those x for which C_1(x) = 0.
Similar considerations hold for the first bit being 1. Now consider the following alternate series of steps:

(1′) Suppose we perform a quantum computation that results in a state \sum_x \psi_x |x\rangle.
(2′) [This step intentionally left blank, i.e., no measurement is performed.]
(3′) We take the reversible classical computation from step (3), above, and replace all the gates by their quantum equivalents, e.g., classical Toffoli by quantum Toffoli, etc. We do the resulting quantum computation, with the final state being \sum_x \psi_x |C(x)\rangle.
(4′) We measure the first qubit only, and leave the other qubits alone. The result is 0 with probability \sum^0_x |\psi_x|^2, by the standard rules for quantum measurement.

Note that the probability for a 0 outcome in step (4′) is exactly the same as in step (4).

5. aram permalink January 24, 2011 12:33 am
As Michael and others have pointed out, measuring only one bit does not limit the power of BQP, since BQP contains BPP. For example, BQP can be amplified, just like BPP, using similar techniques. Measuring a single bit at the end is really just a convention, due to the fact that languages demand one-bit answers (accept/reject). In this sense, BQP is just like BPP. However, "Bottleneck BQP" probably is weaker. One way to formalize this is to consider BPP^{BQNC1}, which still contains factoring, but is weaker than BQP relative to an oracle.

By the way, the reason no one talks about solving factoring in this way is that no one other than complexity theorists cares about it. If a quantum computer is built for factoring, then it'd be insane to perform the expensive continued fraction expansion on it, when that part of Shor's algorithm works perfectly fine on a classical computer. Ditto for performing the complete factorization and outputting a single bit to answer whether there exists a factor of N less than some threshold. So most discussions of quantum factoring will present it as a hybrid classical-quantum algorithm.

• rjlipton permalink* January 24, 2011 7:48 am
Of course in practice you are right. But the statement that factoring as a language L is in BQP is a mathematical statement. As such it deserves a proof—no?

6. January 24, 2011 12:53 am
It seems like this objection extends beyond factoring to the decision form of many functions, and so it is definitely important to clear up any misunderstanding about this. Michael's approach works in almost all circumstances (and possibly all, though I'm just worried that there is a possibility of destructive interference for some problems if you input the output of one computation into the other without performing measurements along the way). Either way, here is a slightly different way of achieving the same thing for any function being converted to a decision problem: Whenever you should measure a qubit in the Z basis, instead CNOT it with an ancilla prepared in the |0\rangle state, making a pseudo-copy of the state of the qubit. Note that measuring in the Z basis commutes with the control of the CNOT, and so whether you measure before or after, you obtain the same result. This means that this operation is equivalent to simply making a copy of the classical result of the measurement. Next you simply perform your next bit of computation on this pseudo-copy, as if it were the output bit, while not performing any operations on the measured qubit.
As Michael mentions, reversible classical computation can be achieved through Toffoli gates, but you can also perform more quantum operations if necessary (depending on how you have formulated the decision problem). Clearly, the measurements all now come at the end of the computation, and only one of these corresponds to the output of the decision problem. As such, you can simply ignore the other outputs, and never bother to measure them, meaning you only use one measurement. Hope this answers the question.

7. Lance Fortnow permalink January 24, 2011 6:27 am
Bennett, Bernstein, Brassard and Vazirani (Section 4) show that BQP with BQP subroutines still sits in BQP. Combined with Shor, that shows that language versions of Factoring also sit in BQP.

• rjlipton permalink* January 24, 2011 7:44 am
Let me say this simply. One: the "proofs" out there are not complete. Two: where is the pointer to a paper that proves this important result?

• Anonymous permalink January 25, 2011 10:25 am
I am a huge fan of your blog, but this kind of question does not seem well-suited for the format. The proof uses well-known background material, but that does not make it incomplete.

• January 25, 2011 10:59 am
Anonymous, the physicist Lowell Brown described such proof technology (sardonically) as "Well-known to those who know it well."

• rjlipton permalink* January 24, 2011 7:46 am
I still do not see this. Okay, do lots of BQP subroutines, but Shor gives a different value each time. How do you put them together?

• January 24, 2011 11:47 am
Actually I'm quite confused by this kind of entanglement: Suppose |\psi\rangle = (|00\rangle + |11\rangle)/\sqrt{2}. When you have measured the first qubit and the result is 0, the measurement result for the second one will automatically come to be 0, even before you actually measure it. I don't know what the physical foundation of it is — why Nature does things in this way (or does it?). But this simple example shows the intrinsic nature of quantum computing. Thanks to the Principle of Deferred Measurement, just one qubit (the decision qubit) needs to be measured. You can turn the whole classical decision framework into an equivalent quantum one and just measure the final decision qubit. After you have done all the computation except measuring that qubit, its state will be a proper superposition of all possible results. If the classical method ensures that a greater share of the results are correct, then by the way a measurement works, the quantum method also ensures that. In a sentence, superposition accumulates, while entanglement differs. That's where the power of quantum computing lies.

8. Alex permalink January 24, 2011 8:33 am
I think this is a really good question for the CS Theory Q&A site—how about you post it there? For instance, Peter Shor himself uses the site quite regularly, so if anyone knows the answer, chances are someone there will.

• January 24, 2011 10:18 am
Alex has made a really good suggestion IMHO … certainly the comments (so far) on this topic have raised just as many questions as they have answered. For example, Michael Nielsen's procedure for classical-to-quantum conversion of intermediate steps makes perfect sense when the intermediate computation is carried out by circuits … but does this argument go through when the intermediate classical computation is a Turing computation?
So I have come to agree with Ken Regan and Dick Lipton that a thorough exposition of these issues would be welcomed by many folks, for at least three reasons: (1) questions like "Is factoring in BQP?" deserve a rigorous answer, (2) the proof machinery associated to that answer is of independent interest, and (3) the communal response to this class of question illuminates the Reinhard Selten-esque proposition "Quantum computing is for proving theorems, not for computing."

9. Huck Bennett permalink January 24, 2011 3:09 pm
It seems that your opposition comes from the definition of BQP: are we restricted to measuring exactly one qubit? If not, if we're allowed to measure other qubits during computation, then Lance's comment and link seem like a clear confirmation of the canonical "proof" which you're questioning. I particularly don't understand the issue of "Shor's giving a different value each time". As usual we can just use trial division to verify the outputs of Shor's algorithm and reduce the problem size. Polynomially many repetitions will result in an exponentially small chance of not finding any factors.

• rjlipton permalink* January 24, 2011 6:07 pm
Shor's algorithm, the quantum part, does not give the same answer. It gives different information each time.

10. January 24, 2011 9:00 pm
Dick: Do you, or do you not, agree that Michael, Aram, Joe, and Lance have answered your question? If you agree, then I think it would be a useful public service to add a note to the original post, to clear up any possible confusion: "Yes, factoring is really in BQP. Really! And in other breaking news, the Karp-Lipton Theorem really holds, c^2 really equals a^2+b^2 for a right triangle…"

Provided you agree that your question has been answered, your complaint is really about pedagogy: you think that Shor's original paper (as well as later expositions) should have directly shown that some decision problem related to factoring has polynomial-size quantum circuits, rather than giving poly-size circuits for the factoring *search* problem and then appealing to well-known closure properties of BQP (as shown, for example, in the BBBV paper) to get that the decision version is also in BQP.

For whatever it's worth, I strongly disagree with you about pedagogy. By similar logic, one could assert that *nothing* in theoretical computer science has "really" been proved, so long as the authors have merely described their reductions in English, and not given explicit state transition diagrams for the appropriate Turing machines! I think a fairer standard is that something is "proved" when the gaps are small enough that any expert (or enterprising student), working in good faith, can fill them without difficulty. And that standard is more than met by Shor's algorithm, as evidenced (for example) by the thread above.

• rjlipton permalink* January 24, 2011 10:17 pm
I am not for being pedantic, nor for excessive formalism. I have written papers arguing against that for years. The reason for the post was to make a couple of points: (I) that factoring is in BQP is a very important theorem; (II) as such it should have a proof that is on the web and available to all. The answer that no one made was: "here is a url to the proof in X's text/paper/wiki etc." The answer that I got is: yes, the simple argument that is sometimes made is not the real one, but here is the correct one. I think that there is a big difference between being pedantic and simply wanting to see a sketch of what they have said in the comments.
I do agree they have answered the issue, but would suggest that someone write down a short proof so all can see. Scott, you are the expert of experts in quantum computing, and I meant no disrespect to you or anyone else. I just wanted to see how this fundamental result is proved. So yes, it is true.

• January 24, 2011 11:00 pm
I think perhaps the issue now stems from how big a step deferred measurement is. When I read the proof in your post, I immediately considered it a full proof because deferring measurements seems to me a small step (falling into one of the gaps Scott mentions). Obviously, however, if this has caused the two of you some trouble in understanding the proofs, it is not as obvious a step as it might seem to those writing up the proof. So, with that said, here is a short proof (excuse the latex, I'm not sure if you support it in comments or not):

One way to decide the problem is simply to run Shor's algorithm $\log(N)$ times (where $N$ is the input number), and then check whether any of the factors found is less than $k$. Clearly $N$ has at most $\log(N)$ factors, and so the error probability is bounded. Each run can be written as a circuit composed of a quantum part $Q$, a measurement in the computational basis $M$, and a classical part $C$. So, the input state evolves as $C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger$. As this is done $m$ times in parallel, we have $(C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m}$. To this we then need to apply another round of classical computation $S$ to check whether one of these circuits outputted a factor $<k$. So the full operation is $S (C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m} S^\dagger$. Lastly, we explicitly measure the output bit for this decision problem (again in the computational basis), which we denote $M_D$. Note, both $C$ and $S$ need to be performed reversibly for this particular proof. So the entire process can be written as $M_D(S (C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m} S^\dagger)$.

The step here which has led to the confusion comes next: we note not only that $M_D$ and $M^{\otimes m}$ are simultaneously diagonalizable, but that this is also true for $K M^{\otimes m} K^\dagger$, for any reversible classical computation $K$. Taking $K = S C^{\otimes m}$ allows us to rewrite the operation as $M^{\otimes m}(S^\dagger (C^\dagger)^{\otimes m} M_D( S(C Q |input\rangle \langle input| Q^\dagger C^\dagger)^{\otimes m} S^\dagger) C^{\otimes m} S)$ by commuting the measurements. So, we still have the same number of measurements, but now $M_D$ is measured first. Since we only care about the result of $M_D$ (it is the output of the decision problem, after all), we can simply ignore the subsequent computation, and choose not to perform it, since it will not alter the value returned by $M_D$. This yields $M_D( S(C Q |input\rangle \langle input| Q^\dagger C^\dagger)^{\otimes m} S^\dagger)$. Note that this now has only a measurement of one qubit, performed at the end of the computation and giving the output of the decision problem, and so the result is proved.

• January 25, 2011 12:27 am
Dick: No offense taken!
On the one hand, I knew that you personally were seeking knowledge in complete good faith; but on the other hand, I also knew from experience that the Quantum Confuseniks would seize on your post as proof that even well-known computer scientists doubt the mathematical correctness of Shor's algorithm. I disagree with your repeated contention that the proof of factoring in BQP that appears in textbooks is "not the real proof." It IS the real proof—it just assumes a bit of background knowledge that needs to be filled in if you don't have it. Which, fortunately, is exactly what people did here—and in my experience, also what sites like MathOverflow and CS Theory StackExchange are perfectly designed for.

• Elements permalink January 24, 2011 11:34 pm
Well, I actually support a "pedantic" attitude here. It's not just "pedagogy", it's science. A mathematical result should have a written proof. Nothing more or less.

11. January 25, 2011 7:47 am
Dick and Ken's questions, and the comments upon their questions, certainly show us that quantum dynamics can "sit in our brains" in many different ways (in Bill Thurston's wonderful phrase). And this is a wonderful thing … because as Thurston again has put it:

"There is a real joy in doing mathematics, in learning ways of thinking that explain and organize and simplify. One can feel this joy discovering new mathematics, rediscovering old mathematics, learning a way of thinking from a person or text, or finding a new way to explain or to view an old mathematical structure … What we are producing is human understanding. We have many different ways to understand and many different processes that contribute to our understanding. We will be more satisfied, more productive and happier if we recognize and focus on this."

The preceding quotes are from Thurston's foreword to Daina Taimina's Crocheting Adventures with Hyperbolic Planes (2009), and from Thurston's AMS essay On Proof and Progress in Mathematics (1994) … both essays are well worth reading (for students especially).

Perhaps no element of math and physics sits in our brains in more delightfully diverse ways than what Nielsen and Chuang's Quantum Computation and Quantum Information calls the "Principles of Deferred and Implicit Measurement" (Section 4.4), or more formally, their "Theorem 8.2: Unitary Freedom in the Operator-Sum Representation". Nielsen and Chuang have clearly set forth the algebraic and informatic aspects of these principles … and yet it is necessary to appreciate that these same ideas appear in the literature in innumerable other guises: Mensky's textbook Continuous Quantum Measurements and Path Integrals (1993) embraces a path-integral point of view; Carlton Caves' on-line notes Completely Positive Maps, Positive Maps, and the Lindblad Form (revised 2008) translate these ideas into the language of stochastic calculus; Ashtekar and Schilling's Geometrical Formulation of Quantum Mechanics (1999) makes at least a start toward translating these ideas into geometric language; and van Holten's recent review Aspects of BRST Quantization (2005) continues this program of linking algebra and geometry in the context of quantum dynamics. Literally hundreds more such references could be cited, and the diversity of their points of view is both incredible and wonderful.
In consequence of these diverse ways that quantum dynamics can "sit in our brains"—none of which is mathematically mature at present, and whose connexions and technological implications especially are mysterious to everyone—the collective process of understanding quantum dynamics can seem slow & immensely confusing … to student and expert alike. So it seems to me that, in raising these issues so plainly and explicitly, and in such a friendly, student-accessible style, Dick and Ken have done us all a service that is very much in the spirit of Feynman:

"It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don't know why this is – it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn't look at all like the way you said it before. … I don't know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing."

Hence, my thanks and appreciation are extended, for the wonderful physical and mathematical questions that Ken and Dick have raised, and for their sustained hard work in hosting this outstanding weblog.

12. January 25, 2011 2:19 pm
Dear all, this interesting post is related to several things I am interested in, so let me mention three.

The first is about scientific blogging and other Internet scientific resources and discussions. I always hoped that those can serve as a useful way to get answers to basic questions and to explain basic and simple proofs/issues. (This was one of the motivations in my own blog and Internet activities.) And since we have no shortage of space, we can give the arguments in full detail (or add details iteratively), and there is no special reason to be cryptic (even if for experts just a hint or a shorter argument will suffice).

Second, when we talk about "P" we mean both "what classical computers (Turing machines) can do in polynomial time," and also the more restricted definition about decision problems. I thought these two meanings lead to no confusion and it is safe to regard them as more or less the same. (But I am a novice computational complexity theorist; do I miss something?) The most important thing about Shor's theorem is that it shows that factoring can be done by a quantum computer in polynomial time. The question of whether BQP captures a decision version of factoring is also important and interesting (but not as spectacular), so I think it is a good service to ask it. In the quantum case the gap between "what a quantum computer can do in polynomial time" and "BQP" is wider and more subtle than in the classical case. Let's call what quantum computers can do in polynomial time "QUANTUM-P". It is very interesting whether "QUANTUM-P" and "BQP" express more or less similar computational power.

Third, "QUANTUM-P" includes "BPP^QSAMPLE" (what a classical probabilistic computer can do with the additional ability to sample the state of a quantum computer with n qubits). [OK, maybe I ignore a non-uniformity issue, but take the "word" definition as the committing one.]
There were interesting discussions recently, in connection with a paper by Aaronson and Arkhipov, about whether BQP captures more or less the same power as BPP^QSAMPLE. (Certainly a quantum computer can in polynomial time perform tasks requiring probabilistic classical computation with access to sampling states that a quantum computer can reach in polynomial time. Maybe QUANTUM-P is even stronger than that??) I think Scott was leaning more than me toward believing that QUANTUM-P (as expressed by BPP^QSAMPLE) is substantially stronger than BQP. The relative power of BQP and BPP^QSAMPLE is discussed in my post and also on Scott's blog.

• January 25, 2011 4:42 pm
Gil, it's easy to see that BQP = BPP^QSAMPLE. So as far as the definition of BQP is concerned, this is a complete non-issue. Now, if you want a class different than BQP, of course you can consider QSAMPLE itself … but then you're talking about sampling problems, not decision problems! Or you can consider a class like BPP^NP^QSAMPLE, which my and Alex's work indeed shows might be stronger than BPP^NP^BQP.

13. January 25, 2011 5:26 pm
Right, Scott, I was confused. I meant to compare BQP with QSAMPLE. Or more precisely with what classical computers can do—not just decision problems—equipped with QSAMPLE subroutines. Not with BPP^QSAMPLE. (I got it right in my quoted post.) In any case, one point I wanted to mention is that when you prove "quantum computers can do X," it may reflect something computationally stronger than (or something which does not follow from) "quantum computers can solve decision problems in BQP." So "quantum computers can factor," which is a theorem nobody can doubt, does not automatically answer what Dick asked about. As seems clear from the comments, the required implication is known to knowers and it uses some familiar steps (which we should all be familiar with). Another comment is that for factoring we only need, on top of BQP, log-depth quantum computation (this is a remarkable, quite recent result, but I will not attempt to spell the names of the discoverers from memory), so in a sense factoring does not require the full power of quantum computers. I wonder if this log-depth result can also be translated to the decision-problem setting. (But again this may be obvious and routine to experts.)

• January 25, 2011 5:28 pm
I meant "on top of BPP" or "on top of classical computers".

• aram harrow permalink January 25, 2011 5:40 pm
Hi Gil, in response to your question about log depth being enough for factoring (I'm not sure how to quote here), the result is due to Cleve and Watrous. Technically, only the quantum part is log depth, and so classical poly-time postprocessing is still necessary (see my comment above). So for the decision problem, Thm 3 of that paper states that it's contained in ZPP^{BQNC}.

• steve uurtamo permalink January 26, 2011 7:56 am
i promise that this is not meant out of pedantry, but the distinction between what you mean by "quantum computers" and any particular complexity class (for instance, BQP) is exactly why complexity classes have careful definitions and mean particular things. since we do not currently have (useful, real, imaginary) "quantum computers" to play with, the phrase must mean some particular model of what they could or might be. and capturing distinctions in those different ideas of models is why we have different classes. fair enough? so asking about membership in a class for a problem really does matter, since it is really the only careful way to ask the question.
• January 28, 2011 8:47 am
Let's define פ (PE) as everything a classical computer can do in polynomial time, ר (RESH) as anything a classical computer with an unlimited supply of random bits can do in polynomial time, and ק (KUF) as anything a quantum computer can do in polynomial time. So פ (PE) contains P, and its computational power seems to be expressed by P. So we can talk about decision problems, sampling, optimization, and whatever else we imagine. ר (RESH) contains RP and BPP. ק (KUF) contains BQP and also QSAMPLE, and it also contains ר^QSAMPLE (RESH^QSAMPLE). (QSAMPLE is sampling a distribution expressed by a quantum computer after running a polynomial number of steps.) My intuition was always that the computational power of ק is essentially expressed by BQP. There are some indications that QSAMPLE might be genuinely stronger than BQP. (But those are far from being definite.) Maybe ק (KUF) is even stronger than ר^QSAMPLE (RESH^QSAMPLE), so there are further powerful things we can do with quantum computers. Can we formalize פ, ר, ק (PE, RESH, and KUF)? I don't see why not. I think we did not have much motivation to do it, since it looked like decision problems capture computational power very well (and it still looks so for P and BPP, and may well be true also for BQP).

• January 26, 2011 8:40 am
The result I mentioned is by R. Cleve and J. Watrous. (I see no problem translating it to the decision version.)

14. January 25, 2011 5:38 pm
Factoring in log-depth (BPP^BQNC) was done by Cleve and Watrous in 2000 — it's not that "recent" anymore! And yes, the decision problem of (say) whether there's a prime factor ending in 7 is also in BPP^BQNC — no additional difficulty whatsoever.

15. January 27, 2011 10:59 am
Scott Aaronson suggests: Background knowledge that needs to be filled in … is exactly what sites like MathOverflow and CS Theory StackExchange are perfectly designed for.

What Scott says is plain common sense and so, guided by the many fine comments on this topic, I have plunged into the mathematical waters by posting a generalization of the question (an imperfect generalization, perhaps) on MathOverflow as "Does BQP^P = BQP? … and what proof machinery is available?" Comments, outright answers, and alternative versions of this question all are very welcome.

16. Cristopher Moore permalink January 28, 2011 9:30 am
Of course I agree that we need careful definitions of complexity classes! But I am still confused about how this question got rolling. As Lance (and the Zookeeper) pointed out, we've known since 1997 that BQP^BQP = BQP, and this includes BPP^BQP. So there's no problem using Shor's algorithm as part of a classical randomized algorithm, and we can solve the function problem Factoring with high probability in (quantum) polynomial time. We can then sort the factors and see if there is a small one, or one ending in 7, or whatever, thus solving decision versions of Factoring as well.

The original question seems to have stemmed from whether we can reduce the quantum part of the entire algorithm to measuring a single qubit. Again, I think it's nice that we can. But this seems more to me like showing that "measuring-one-qubit BQP" = BQP. Perhaps my confusion stems from the fact that no model of quantum computation currently in use in the community is limited to measuring a single qubit… and I don't see this limitation in e.g. the definition of BQP in the Complexity Zoo. What am I missing?
– Cris

• February 3, 2011 9:42 am
Cris, over on TCS StackExchange, Luca Trevisan has given an ingenious construction that answers the two-part question "Do runtimes for P require EXP resources to upper-bound? … are concrete examples known?" with "yes" and "yes for all practical purposes" (FAPP). The point is not that anyone thinks BQP^P ≠ BQP, but rather that the obstructions to P-uniform reduction of algorithms→circuits are more formidable than is generally appreciated … this is the good point behind Ken and Dick's question (as I understand it).

17. June 12, 2014 9:58 pm
What does this discussion make of Leonid Levin's critique? Is it well-founded or not?

1. Tweets that mention Is Factoring Really In BQP? Really? « Gödel's Lost Letter and P=NP
2. Factoring Is In BQP « Gödel's Lost Letter and P=NP
3. Perpetual Motion of The 21st Century? « Gödel's Lost Letter and P=NP
This Quantum World/Implications and applications/Atomic hydrogen

Atomic hydrogen

While de Broglie's theory of 1923 featured circular electron waves, Schrödinger's "wave mechanics" of 1926 features standing waves in three dimensions. Finding them means finding the solutions of the time-independent Schrödinger equation with

V(r) = -\frac{e^2}{r}

the potential energy of a classical electron at a distance r from the proton. (Only when we come to the relativistic theory will we be able to shed the last vestige of classical thinking.) In using this equation, we ignore (i) the influence of the electron on the proton, whose mass is some 1836 times larger than that of the electron, and (ii) the electron's spin. Since relativistic and spin effects on the measurable properties of atomic hydrogen are rather small, this non-relativistic approximation nevertheless gives excellent results.

For bound states the total energy E is negative, and the Schrödinger equation has a discrete set of solutions. As it turns out, the "allowed" values of E are precisely the values that Bohr obtained in 1913:

E_n = -\frac{m_e e^4}{2\hbar^2}\,\frac{1}{n^2}, \qquad n = 1, 2, 3, \dots

However, for each n there are now n^2 linearly independent solutions. (If \psi_1, \dots, \psi_k are independent solutions, then none of them can be written as a linear combination of the others.) Solutions with different n correspond to different energies. What physical differences correspond to linearly independent solutions with the same n?

Using polar coordinates, one finds that all solutions for a particular value E_n are linear combinations of solutions that have the form

\psi(r, \theta, \phi) = e^{im\phi}\, f(r, \theta).

m turns out to be another quantized variable, for the requirement \psi(r, \theta, \phi + 2\pi) = \psi(r, \theta, \phi) implies that e^{2\pi i m} = 1, with m = 0, \pm 1, \pm 2, \dots In addition, |m| has an upper bound, as we shall see in a moment.

Just as the factorization of \psi(r, \theta, \phi, t) into e^{-i(E/\hbar)t}\,\psi(r, \theta, \phi) made it possible to obtain a t-independent Schrödinger equation, so the factorization of \psi(r, \theta, \phi) into e^{im\phi}\, f(r, \theta) makes it possible to obtain a \phi-independent Schrödinger equation. This contains another real parameter, over and above E, whose "allowed" values are given by l(l+1)\hbar^2, with l an integer satisfying 0 \le l \le n-1. The range of possible values for m is bounded by the inequality |m| \le l. The possible values of the principal quantum number n, the angular momentum quantum number l, and the so-called magnetic quantum number m thus are:

n = 1, 2, 3, \dots; \qquad l = 0, 1, \dots, n-1; \qquad m = -l, \dots, l.

Each possible set of quantum numbers (n, l, m) defines a unique wave function \psi_{nlm}(r, \theta, \phi), and together these make up a complete set of bound-state solutions (E < 0) of the Schrödinger equation with V(r) = -e^2/r.

The following images give an idea of the position probability distributions of the first three l = 0 states (not to scale). Below them are the probability densities plotted against r. Observe that these states have n-1 nodes, all of which are spherical, that is, surfaces of constant r. (The nodes of a wave in three dimensions are two-dimensional surfaces. The nodes of a "probability wave" are the surfaces at which the sign of \psi changes and, consequently, the probability density |\psi|^2 vanishes.)

Take another look at these images: The letters s, p, d, f stand for l = 0, 1, 2, 3, respectively. (Before the quantum-mechanical origin of atomic spectral lines was understood, a distinction was made between "sharp," "principal," "diffuse," and "fundamental" lines. These terms were subsequently found to correspond to the first four values that l can take. From l = 3 onward the labels follow the alphabet: f, g, h…) Observe that these states display both spherical and conical nodes, the latter being surfaces of constant \theta. (The "conical" node with \theta = 90° is a horizontal plane.) These states, too, have a total of n-1 nodes, l of which are conical.
Because the "waviness" in \phi is contained in a phase factor e^{im\phi}, it does not show up in representations of |\psi|^2. To make it visible, the phase can be encoded as color.

In chemistry it is customary to consider real superpositions of opposite m, like \frac{1}{\sqrt{2}}(\psi_{n,l,-m} \pm \psi_{n,l,+m}), as in the following images, which are also valid solutions. The total number of nodes is again n-1, the total number of non-spherical nodes is again l, but now there are |m| plane nodes containing the z axis and l-|m| conical nodes.

What is so special about the z axis? Absolutely nothing, for the wave functions \psi'_{nlm}, which are defined with respect to a different axis, make up another complete set of bound-state solutions. This means that every wave function \psi'_{nlm} can be written as a linear combination of the functions \psi_{nlm}, and vice versa.
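As a quick illustration (a few lines of Python added here; the Rydberg value 13.6057 eV is put in by hand rather than derived), one can enumerate the allowed quantum numbers and confirm both the Bohr energies and the n^2-fold degeneracy:

# Enumerate hydrogen bound-state quantum numbers (n, l, m) and Bohr energies.
RYDBERG_EV = 13.6057  # approximate Rydberg energy in electron volts

for n in range(1, 4):
    states = [(n, l, m) for l in range(n) for m in range(-l, l + 1)]
    print(f"n={n}:  E_n = {-RYDBERG_EV / n**2:8.4f} eV,  "
          f"{len(states)} degenerate states (= n^2)")

For n = 1, 2, 3 it lists 1, 4, and 9 degenerate states, in line with the count given above.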
Dragons and Unicorns

When I was an undergraduate I was often told by lecturers that I should find quantum mechanics very difficult, because it is unlike the classical physics I had learned about up to that point. The difference – or so I was informed – was that classical systems were predictable, but quantum systems were not. For that reason the microscopic world could only be described in terms of probabilities. I was a bit confused by this, because I already knew that many classical systems were predictable in principle, but not really in practice. I blogged about this some time ago, in fact. It was only when I had studied theory for a long time – almost three years – that I realised what was the correct way to be confused about it. In short, quantum probability is a very strange kind of probability that displays many peculiarities and subtleties that one doesn't see in the kind of systems we normally think of as "random", such as coin-tossing or roulette wheels.

To illustrate how curious the quantum universe is we have to look no further than the very basic level of quantum theory, as formulated by the founder of wave mechanics, Erwin Schrödinger. Schrödinger was born in 1887 into an affluent Austrian family made rich by a successful oilcloth business run by his father. He was educated at home by a private tutor before going to the University of Vienna where he obtained his doctorate in 1910. During the First World War he served in the artillery, but was posted to an isolated fort where he found lots of time to read about physics. After the end of hostilities he travelled around Europe and started a series of inspired papers on the subject now known as wave mechanics; his first work on this topic appeared in 1926. He succeeded Planck as Professor of Theoretical Physics in Berlin, but left for Oxford when Hitler took control of Germany in 1933. He left Oxford in 1936 to return to Austria but fled when the Nazis seized the country, and he ended up in Dublin, at the Institute for Advanced Studies which was created especially for him by the Irish Taoiseach, Eamon de Valera. He remained there happily for 17 years before returning to his native land at the University of Vienna. Sadly, he became ill shortly after arriving there and died in 1961.

Schrödinger was a friendly and informal man who got on extremely well with colleagues and students alike. He was also a bit scruffy, even to the extent that he sometimes had trouble getting into major scientific conferences, such as the Solvay conferences which are exclusively arranged for winners of the Nobel Prize. Physicists have never been noted for their sartorial elegance, but Schrödinger must have been an extreme case.

The theory of wave mechanics arose from work published in 1924 by de Broglie, who had suggested that every particle has a wave somehow associated with it, and the overall behaviour of a system resulted from some combination of its particle-like and wave-like properties. What Schrödinger did was to write down an equation, involving a Hamiltonian describing particle motion of the form I have discussed before, but written in such a way as to resemble the equation used to describe wave phenomena throughout physics. The resulting mathematical form for a single particle is

i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi,

in which the term \Psi is called the wave-function of the particle.
As usual, the Hamiltonian H consists of two parts: one describes the kinetic energy (the first term on the right hand side) and the second its potential energy, represented by V. This equation – the Schrödinger equation – is one of the most important in all physics.

At the time Schrödinger was developing his theory of wave mechanics it had a rival, called matrix mechanics, developed by Werner Heisenberg and others. Paul Dirac later proved that wave mechanics and matrix mechanics were mathematically equivalent; these days physicists generally use whichever of these two approaches is most convenient for particular problems.

Schrödinger's equation is important historically because it brought together lots of bits and pieces of ideas connected with quantum theory into a single coherent descriptive framework. For example, in 1911 Niels Bohr had begun looking at a simple theory for the hydrogen atom which involved a nucleus consisting of a positively charged proton with a negatively charged electron moving around it in a circular orbit. According to standard electromagnetic theory this picture has a flaw in it: the electron is accelerating and consequently should radiate energy. The orbit of the electron should therefore decay rather quickly. Bohr hypothesized that special states of this system were actually stable; these states were ones in which the orbital angular momentum of the electron was an integer multiple of Planck's constant. This simple idea endows the hydrogen atom with a discrete set of energy levels which, as Bohr showed in 1913, were consistent with the appearance of sharp lines in the spectrum of light emitted by hydrogen gas when it is excited by, for example, an electrical discharge. The calculated positions of these lines were in good agreement with measurements made by Rydberg, so the Bohr theory was in good shape. But where did the quantised angular momentum come from?

The Schrödinger equation describes some form of wave; its solutions \Psi(\vec{x},t) are generally oscillating functions of position and time. If we want it to describe a stable state then we need the observable properties not to vary with time, so we look for solutions whose time dependence is a pure phase, \Psi = \psi\exp(-iEt/\hbar); substituting this form in turns the equation into its time-independent version, \hat{H}\psi = E\psi. The hydrogen atom is a bit like a solar system with only one planet going around a star, so we have circular symmetry which simplifies things a lot. The solutions we get are waves, and the mathematical task is to find waves that fit along a circular orbit just like standing waves on a circular string. Immediately we see why the solution must be quantized. To exist on a circle the wave can't just have any wavelength; it has to fit into the circumference of the circle in such a way that it winds up at the same value after a round trip. In Schrödinger's theory the quantisation of orbits is not just an ad hoc assumption, it emerges naturally from the wave-like nature of the solutions to his equation (the one-line calculation is spelled out below).

The Schrödinger equation can be applied successfully to systems which are much more complicated than the hydrogen atom, such as complex atoms with many electrons orbiting the nucleus and interacting with each other. In this context, this description is the basis of most work in theoretical chemistry. But it also poses very deep conceptual challenges, chiefly about how the notion of a "particle" relates to the "wave" that somehow accompanies it.
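To spell out the "fitting around the circle" calculation promised above: a whole number n of de Broglie wavelengths must wrap the circumference of the orbit, and combining this with de Broglie's relation gives

n\lambda = 2\pi r \quad \mbox{and} \quad \lambda = \frac{h}{mv} \quad \Longrightarrow \quad L = mvr = \frac{nh}{2\pi} = n\hbar, \qquad n = 1, 2, 3, \ldots

which is exactly Bohr's condition that the orbital angular momentum be an integer multiple of \hbar (strictly speaking, of Planck's constant divided by 2\pi).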
To illustrate the riddle, consider a very simple experiment where particles of some type (say electrons, but it doesn't really matter; similar experiments can be done with photons or other particles) emerge from the source on the left, pass through the slits in the middle and are detected in the screen at the right.

In a purely "particle" description we would think of the electrons as little billiard balls being fired from the source. Each one then travels along a well-defined path, somehow interacts with the screen and ends up in some position on the detector. On the other hand, in a "wave" description we would imagine a wave front emerging from the source, being diffracted by the screen and ending up as some kind of interference pattern at the detector. This is what we see with light, for example, in the phenomenon known as Young's fringes.

In quantum theory we have to think of the system as being in some sense both a wave and a particle. This is forced on us by the fact that we actually observe a pattern of "fringes" at the detector, indicating wave-like interference, but we also can detect the arrival of individual electrons as little dots. Somehow the propensity of electrons to arrive in positions on the screen is controlled by an element of waviness, but they manage to retain some aspect of their particleness. Moreover, one can turn the source intensity down to a level where there is only ever one electron in the experiment at any time. One sees the dots arrive one by one on the detector, but adding them up over a long time still yields a pattern of fringes. Curiouser and curiouser, said Alice.

Eventually the community of physicists settled on a party line that most still stick to: that the wave-function controls the probability of finding an electron at some position when a measurement is made. In fact the mathematical description of wave phenomena favoured by physicists involves complex numbers, so at each point in space and time \Psi is a complex number of the form \Psi = a + ib, where i = \sqrt{-1}; the corresponding probability is given by |\Psi|^2 = a^2 + b^2. This protocol, however, forbids one to say anything about the state of the particle before it is measured. It is delocalized, not being definitely located anywhere, but only possessing a probability to be in any particular place within the apparatus. One can't even say which of the two slits it passes through. Somehow, it manages to pass through both slits. Or at least some of its wave-function does.

I'm not going to go into the various philosophical arguments about the interpretation of quantum probabilities here, but I will pass on an analogy that helped me come to grips with the idea that an electron can behave in some respects like a wave and in others like a particle. At first thought this seems a troubling paradox, but it only appears so if you insist that our theoretical ideas are literal representations of what happens in reality. I think it's much more sensible to treat the mathematics as a kind of map or sketch that is useful for us to find our way around nature, rather than confusing it with nature itself. Neither particles nor waves really exist in the quantum world – they're just abstractions we use to try to describe as much as we can of what is going on. The fact that it doesn't work perfectly shouldn't surprise us, as there are undoubtedly more things in Heaven and Earth than are dreamt of in our philosophy.

Imagine a mediaeval traveller, the first from your town to go to Africa.
Imagine a mediaeval traveller, the first from your town to go to Africa. On his journeys he sees a rhinoceros, a bizarre creature that is unlike anything he’s ever seen before. Later on, when he gets back, he tries to describe the animal to those at home who haven’t seen it. He thinks very hard. Well, he says, it’s got a long horn on its head, like a unicorn, and it’s got thick leathery skin, like a dragon.

Neither dragons nor unicorns exist in nature, but they’re abstractions that are quite useful in conveying something about what a rhinoceros is like.

It’s the same with electrons. Except they don’t have horns and leathery skin. Obviously.

19 Responses to “Dragons and Unicorns”

1. i have to admit when i got a book to study quantum mechanics that the mathematics were basically impossible for me to follow. the wave function, gamma, looks only vaguely familiar. perhaps it is because of my present state of ignorance that i still perceive wave particle duality to be a paradox. it is unreasonable for me to comprehend of such a thing as a physical phenomenon that transforms between particle and wave, especially without actually seeing a working example. sure i can accept the double slit experiment, but what is actually going on? i don’t particularly know much about waves either, i got a degree in business technology, and dabbled a bit in computer science, but from what i understand about waves, is that they need a medium. every wave i can think of is energy moving through a medium. i don’t know, the more i think about it, the more it seems that space itself is a medium. from what i know about general relativity, space bends, but how can nothing bend? i of course have zero experimental evidence for these random thoughts, and no matter how hard i rack my brain, i can’t conceive of an experiment that would test any of these wacky ideas. and even if i could, i wouldn’t have the resources to implement. i mean, is it possible to build a particle decelerator? i guess the question does lie in whether or not language can utterly describe and explain reality. if it can, then i would very humbly suggest that these theories are more than just maps. they are more like a blueprint of how the whole universe works, or at least, i’m hoping that’s what they can be, because i don’t have much faith for humanity’s (or my) future prospects, otherwise.

2. Anton Garrett Says:
I’m just back from Lords. Can you bribe the particle to go through a particular slit?

3. It helps if you think of the wave function as the “real” thing and the particle as just the way it interacts with the world. Then asking where the particle is when it isn’t observed is like asking where the wind is when it isn’t blowing. The very question is clearly nonsensical. Well, you still have strange correlations to deal with, and you should be careful how seriously you take your ontological commitments. In the end our monkey brains evolved in a classical world. It isn’t really a big surprise that we are poorly equipped to deal with things on the quantum level. It’s kinda cool that things are so different.

• telescoper Says:
Actually, most of the confusions arising from quantum mechanics arise from assertions about what is “real”, whatever that means. Physicists are very likely to say foolish things when they start talking ontology.

4. > Physicists are very likely to say foolish things when they start talking ontology.

But, in my hubble opinion, not as foolish as philosophers when they go on about how physics works and what we do.

• Anton Garrett Says:
Well said Cusp!
Popper’s doctrine of falsifiability is a perversion of the truth. Who ever believes that scientists find fulfilment by seeing their theories proved *wrong*? Rather, theories have to be testable – meaning that the probabilities we assign to them are capable of being changed by experimental data. In a test of Newtonian vs relativistic mechanics, the data have driven the probability of the latter to 0.9999999999… and of the former to 0.00000000001, and we use the words True and False as a shorthand. But, someday, fresh anomalies might emerge and a new theory be found which supersedes Einstein in the same way. Popper could not put it like that, because he rejected inductive logic, and inductive logic IS probability theory provided that it is done correctly. Popper never understood that; he accepted probability but denied induction, and hid his misunderstanding beneath a mass of disingenuous rhetoric (expertly dissected by the philosopher David Stove). Hence his unhappy notion of falsifiability.

Thomas Kuhn also rejected induction and concluded that ‘paradigms’ (such as Newtonian and Einsteinian) come and go as arbitrarily as fashions in clothing. The better fit of relativistic mechanics to the data ultimately meant nothing to him (although, again, he never admitted it so starkly). He was a good historian of science but a lousy philosopher of it. Today this strand of thought has merged, via PK Feyerabend, with the wider postmodernist movement which denies that there is any such thing as truth. My response to that claim (which is itself touted as true!) is to point out that people clearly live their lives as if certain things are true.

5. > in my hubble opinion

Am currently sitting here fitting globular cluster profiles in ACS data – so a Freudian slip there on my part.

6. Garret Cotter Says:
Peter: I learned only recently that Young’s Slits has been demonstrated with fullerenes, which raised the question in a discussion with a friend: if one were to do a double-slit experiment with double-decker buses, how far away would the screen have to be to show fringes? That would make a good first-year undergraduate estimation question.

Anton: I’m confused by your last argument. You seem to be responding to those of us who say “The map is not the land” by saying “Well some people say it is”? Do I misunderstand you? (And I remind you I’m a committed Bayesian!)
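As it happens, the estimation question above yields to a back-of-envelope calculation. A minimal sketch, with invented bus parameters (the mass, speed and slit separation are my assumptions):

```python
h = 6.626e-34      # Planck's constant (J s)
m = 12_000.0       # assumed mass of a double-decker bus (kg)
v = 10.0           # assumed speed (m/s), about 36 km/h
d = 3.0            # assumed slit separation (m), roughly one bus width

lam = h / (m * v)  # de Broglie wavelength of the bus

# Fringe spacing on a screen at distance L is roughly lam * L / d.
# Demanding a (very generous) resolvable spacing of 1 mm:
dx = 1e-3
L = dx * d / lam
print(f"de Broglie wavelength: {lam:.2e} m")    # ~5.5e-39 m
print(f"required screen distance: {L:.2e} m")   # ~5e+35 m
```

The required screen distance comes out at around 10^35 metres, vastly larger than the observable universe, which is presumably why buses insist on behaving classically.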
• Anton Garrett Says:
Hi Garret, you are using an analogy so I’m not sure exactly what part of my argument you are questioning – do say more – but if you mean my passing shot at postmodernism, then I mean that if you get to know somebody well you will find that they are passionately committed to certain deep ideals by which they live their life. This is so even if they are not intellectual enough to speak out those ideals in words. They live their lives on the basis that those ideals are true. I assert that postmodernists are no exception and that their claims that truth does not exist are at odds with the way they live their lives. Their life axioms might well be rather different from mine, but they *do* have them.

Re your response to Peter, I think that the classical limit is still a live issue. More than one parameter is set to zero in the classical limit of quantum mechanics, and how the (conceptual) limiting process is done can be important, eg h –> 0 and a –> 0 (‘a’ is the size of the system) but does a/h remain constant or itself –> 0 or infinity? Then there is the issue of decoherence and how isolated from its environment the system is. Complex issues!

• telescoper Says:
I did some work a few years ago on trying to use Schrödinger’s equation to describe classical (compressible) fluids. It’s not as mad as it seems, because Madelung showed that by transforming variables you can write the Schrödinger equation as an Euler equation coupled to a Poisson equation for the potential. The fluid density turns out to be basically |\Psi|^2, so if you do perturbation theory on \Psi you will always get a positive density, which isn’t the case if you do perturbative calculations with rho directly. It’s fun, but has yet to catch on and change the world. I mention it here because how you take the classical limit in that case is indeed rather subtle.

• Garret Cotter Says:
Hi Anton, All I wanted to say, really (and it’ll probably get lost in the heat and noise of the Hawking business), is that it’s possible to have a worldview without axioms, as we have touched on here before. Unless, perhaps, “We can never know the truth for sure” is counted as an axiom, but I think in formal logic it isn’t! I completely agree with your point that many people have implicit axioms that they don’t admit. And as a convinced subjective Bayesian I could never say to a theist such as yourself that they were wrong, simply that I find their worldview, personally, utterly improbable. I just “don’t get it”; which I think, to cross-reference some of the follow-ups to Peter’s blog entry on the Hawking debacle, is what Feynman’s attitude was. And I certainly won’t attempt to make ethical decisions based on models of basic physics; but I have to admit that I have to work them out myself, and _there_ one has to start thinking of axioms. But ethical axioms are not, I feel, welded into some “truth” of nature. Does that make me some sort of post-modernist relativist, though? Well, if you choose to live by doubt, you have to spend a lot of time worrying about these things. I certainly do. And I definitely don’t think there should be a free-for-all on ethics. But do you think there are any ethical axioms constant in time and space?

Peter: are you talking about taking the probability current and making it compressible? Is there an elementary write-up anywhere? Looks quite neat.

• Anton Garrett Says:
Hi Garret; you say that it is possible to have a worldview without axioms, but you agree with me that many people have implicit axioms that they don’t admit. I don’t think it is possible to have a worldview without axioms. Secular people have axioms as much as theists, and they take those axioms from the prevailing culture. For that very reason their axioms are as hard to discern as glass immersed in water (having similar refractive index). To make the point: Can you give examples of public figures whom you believe lived their lives with explicit axioms; with implicit axioms; and with no axioms? The first category is easy, but how do you distinguish between the 2nd and 3rd categories?

• telescoper Says:
You can find an overview of this approach in a conference talk I gave some years ago:

• Garret Cotter Says:
Peter – thanks!

Anton – from the point of view of ethics, I think we’re in agreement. But I originally intended to focus strictly on the physical world. To take your question, I would say that the “Feynman” view is in your third category; do you see hidden axioms in it, to make it actually in the second?

7.
Anton Garrett Says:
Peter: Where the Navier-Stokes equations predict negative density they obviously fail, but surely that is a warning that you should correct them by imposing rho = 0 by hand within the predicted negative-density region? To overcome the problem by changing the equations to ones which always have a smoothly varying rho seems to me to lose the capability to model important phenomena such as cavitation.

• Yes indeed, but this was a specific fix for weakly non-linear perturbative calculations where it’s difficult to enforce positivity. It’s definitely the case that you can’t model strongly non-linear phenomena such as shocks using this approach. In fact it only works in the case I was interested in because the expanding background makes the growing mode of instability quite slow, so you can get a lot with first-order perturbations.

8. […] few days ago I posted what was intended to be a fun little item about the wave-particle duality in quantum mechanics. Basically, what I was trying to say is that […]
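For readers curious about the transformation mentioned in the comments above, the standard Madelung substitution runs roughly as follows (a generic textbook sketch, not the specific cosmological calculation being discussed):

```latex
% Write the wave function in polar form:
\Psi = \sqrt{\rho}\, e^{i\theta/\hbar}
% Substituting into  i\hbar\,\partial_t \Psi = -\tfrac{\hbar^2}{2m}\nabla^2\Psi + V\Psi
% and separating real and imaginary parts gives a continuity equation,
\partial_t \rho + \nabla \cdot (\rho\,\vec{v}) = 0 , \qquad \vec{v} = \nabla\theta / m ,
% together with an Euler-like equation carrying an extra "quantum pressure" term:
\partial_t \vec{v} + (\vec{v}\cdot\nabla)\vec{v}
  = -\frac{\nabla V}{m}
    + \frac{\hbar^2}{2m^2}\,\nabla\!\left(\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}\right)
```

Because the density is defined as \rho = |\Psi|^2, it is non-negative by construction, which is precisely the advantage for perturbation theory pointed out in the thread.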
Does consciousness matter?

A good starting point for our brief discussion of consciousness is quantum cosmology, the theory that tries to unify cosmology and quantum mechanics. If quantum mechanics is universally correct, then one may try to apply it to the universe in order to find its wave function. This would allow us to find out which events are probable and which are not. However, it often leads to paradoxes. For example, the essence of the Wheeler-DeWitt equation (DeWitt, 1967), which is the Schrödinger equation for the wave function of the universe, is that this wave function does not depend on time, since the total Hamiltonian of the universe, including the Hamiltonian of the gravitational field, vanishes identically. This result was obtained in 1967 by Bryce DeWitt. Therefore, if one wished to describe the evolution of the universe with the help of its wave function, one would be in trouble: the universe as a whole does not change in time.

The resolution of this paradox suggested by Bryce DeWitt is rather instructive (DeWitt, 1967). The notion of evolution is not applicable to the universe as a whole since there is no external observer with respect to the universe, and there is no external clock that does not belong to the universe. However, we do not actually ask why the universe as a whole is evolving. We are just trying to understand our own experimental data. Thus, a more precisely formulated question is why we see the universe evolving in time in a given way. In order to answer this question one should first divide the universe into two main pieces: i) an observer with his clock and other measuring devices, and ii) the rest of the universe. Then it can be shown that the wave function of the rest of the universe does depend on the state of the clock of the observer, i.e. on his ‘time’. This time dependence is in some sense ‘objective’: the results obtained by different (macroscopic) observers living in the same quantum state of the universe and using sufficiently good (macroscopic) measuring apparatus agree with each other. Thus we see that without introducing an observer, we have a dead universe, which does not evolve in time. This example demonstrates the unusually important role played by the concept of an observer in quantum cosmology. John Wheeler underscored the complexity of the situation, replacing the word observer by the word participant, and introducing such terms as a ‘self-observing universe’.

Most of the time, when discussing quantum cosmology, one can remain entirely within the bounds set by purely physical categories, regarding an observer simply as an automaton, and not dealing with questions of whether he/she/it has consciousness or feels anything during the process of observation. This limitation is harmless for many practical purposes. But we cannot rule out the possibility that carefully avoiding the concept of consciousness in quantum cosmology may lead to an artificial narrowing of our outlook.

Let us remember an example from the history of science that may be rather instructive in this respect. Prior to the invention of the general theory of relativity, space, time, and matter seemed to be three fundamentally different entities. Space was thought to be a kind of three-dimensional coordinate grid which, when supplemented by clocks, could be used to describe the motion of matter. Space-time possessed no intrinsic degrees of freedom; it played a secondary role as a tool for the description of the truly substantial material world.
The general theory of relativity brought with it a decisive change in this point of view. Space-time and matter were found to be interdependent, and there was no longer any question which of the two is more fundamental. Space-time was also found to have its own inherent degrees of freedom, associated with perturbations of the metric – gravitational waves. Thus, space can exist and change with time in the absence of electrons, protons, photons, etc.; in other words, in the absence of anything that had previously (i.e., prior to general relativity) been called matter. Of course, one can simply extend the notion of matter, because, after all, gravitons (the quanta of the gravitational field) are real particles living in our universe. On the other hand, the introduction of gravitons provides us, at best, with a tool for an approximate (perturbative) description of the fluctuating geometry of space-time. This is completely opposite to the previous idea that space-time is only a tool for the description of matter.

A more recent trend, finally, has been toward a unified geometric theory of all fundamental interactions, including gravitation. Prior to the end of the 1970s, such a program seemed unrealizable; rigorous theorems were proven on the impossibility of unifying spatial symmetries with the internal symmetries of elementary particle theory. Fortunately, these theorems were sidestepped after the discovery of supersymmetry and supergravity. In these theories, matter fields and space-time became unified within the general concept of superspace.

Now let us turn to consciousness. The standard assumption is that consciousness, just like space-time before the invention of general relativity, plays a secondary, subservient role, being just a function of matter and a tool for the description of the truly existing material world. But let us remember that our knowledge of the world begins not with matter but with perceptions. I know for sure that my pain exists, my ‘green’ exists, and my ‘sweet’ exists. I do not need any proof of their existence, because these events are a part of me; everything else is a theory. Later we find out that our perceptions obey some laws, which can be most conveniently formulated if we assume that there is some underlying reality beyond our perceptions. This model of a material world obeying laws of physics is so successful that soon we forget about our starting point and say that matter is the only reality, and perceptions are nothing but a useful tool for the description of matter. This assumption is almost as natural (and maybe as false) as our previous assumption that space is only a mathematical tool for the description of matter. We are substituting the reality of our feelings by the successfully working theory of an independently existing material world. And the theory is so successful that we almost never think about its possible limitations.

Guided by the analogy with the gradual change of the concept of space-time, we would like to take a certain risk and formulate several questions to which we do not yet have the answers (Linde, 1990a; Page, 2002). (Note that gravitational waves are usually so small and interact with matter so weakly that none have been found as yet. However, their existence is absolutely crucial for the consistency of our theory, as well as for our understanding of certain astronomical data.)
Could it be that consciousness is an equally important part of the consistent picture of our world, despite the fact that so far one could safely ignore it in the description of well-studied physical processes? Will it not turn out, with the further development of science, that the study of the universe and the study of consciousness are inseparably linked, and that ultimate progress in the one will be impossible without progress in the other?

Instead of discussing these issues here any further, we will return to more solid ground and concentrate on the consequences of eternal inflation and the multiverse theory that do not depend on the details of their interpretation. As an example, we will discuss here two questions that for a long time were considered too complicated and metaphysical. We will see that the concept of the multiverse allows us to propose possible answers to these questions.

Inflation, Quantum Cosmology and the Anthropic Principle

The anthropic principle can help us to understand many properties of our world. However, for a long time this principle seemed too metaphysical, and many scientists were ashamed to use it in their research. I describe here a justification of the weak anthropic principle in the context of inflationary cosmology and suggest a possible way to justify the strong anthropic principle using the concept of the multiverse.

Andrei Linde
Authors and titles for Mar 2014

[1] arXiv:1403.0012 [pdf, ps, other]
Title: A Stochastic Geometry Analysis of Inter-cell Interference Coordination and Intra-cell Diversity
Subjects: Information Theory (cs.IT)

[2] arXiv:1403.0020 [pdf, ps, other]
Title: Topos Semantics for Higher-Order Modal Logic
Journal-ref: Logique & Analyse vol 57, no 228 (2014), 591--636
Subjects: Logic (math.LO); Category Theory (math.CT)

[3] arXiv:1403.0021 [pdf, ps, other]
Title: Frobenius manifolds and Frobenius algebra-valued integrable systems
Comments: We have removed section 4 of version 1 of this paper. This material will be moved to a new paper entitled "Integrability of the Frobenius algebra-valued KP hierarchy", which is an improved version of the paper arXiv:1401.2216v1. For the current paper, we have added two new sections to discuss "$\mathcal{A}$-valued TQFT" and "$\mathcal{A}$-valued dispersive integrable systems"
Subjects: Mathematical Physics (math-ph); Differential Geometry (math.DG); Exactly Solvable and Integrable Systems (nlin.SI)

[4] arXiv:1403.0022 [pdf, other]
Title: Noise prevents infinite stretching of the passive field in a stochastic vector advection equation
Comments: 23 pages, 4 figures

[5] arXiv:1403.0023 [pdf, ps, other]
Title: Superspecial rank of supersingular abelian varieties and Jacobians
Comments: V2: New coauthor, major rewrite

[6] arXiv:1403.0026 [pdf, ps, other]
Title: Commensurations and Metric Properties of Houghton's Groups
Journal-ref: Pacific J. Math. 285 (2016) 289-301
Subjects: Group Theory (math.GR)

[7] arXiv:1403.0027 [pdf, ps, other]
Title: The Frobenius-Virasoro algebra and Euler equations
Authors: Dafeng Zuo
Comments: Comments are welcome
Journal-ref: Journal of Geometry and Physics 86 (2014) 203--210

[8] arXiv:1403.0028 [pdf, ps, other]
Title: Connections of Zero Curvature and Applications to Nonlinear Partial Differential Equations
Authors: Paul Bracken
Comments: 22
Journal-ref: Discrete and Continuous Dynamical Systems, Series S, 7, 6, 1165-1179 (2014)
Subjects: Differential Geometry (math.DG)

[9] arXiv:1403.0039 [pdf, ps, other]
Title: Canonical bases in tensor products revisited
Comments: 7 pages, v2, improved exposition, one reference added, to appear in Amer. J. Math
Journal-ref: Amer. J. Math. 138 (2016), 1731-1738
Subjects: Representation Theory (math.RT)

[10] arXiv:1403.0041 [pdf, ps, other]
Title: Individual dynamics induces symmetry in network controllability
Comments: 5 pages, 3 figures

[11] arXiv:1403.0042 [pdf, ps, other]
Title: Infinitely many solutions to a fractional nonlinear Schrödinger equation
Comments: arXiv admin note: text overlap with arXiv:1307.2301 by other authors
Subjects: Analysis of PDEs (math.AP)

[12] arXiv:1403.0045 [pdf, ps, other]
Title: Polyhedra, Complexes, Nets and Symmetry
Authors: Egon Schulte
Comments: Acta Crystallographica Section A (to appear)
Subjects: Metric Geometry (math.MG); Combinatorics (math.CO)

[13] arXiv:1403.0046 [pdf, other]
Title: Well-posedness and Robust Preconditioners for the Discretized Fluid-Structure Interaction Systems
Authors: Jinchao Xu, Kai Yang
Comments: 1. Added two preconditioners into the analysis and implementation 2. Rerun all the numerical tests 3. Changed title, abstract and corrected lots of typos and inconsistencies 4.
Added references
Journal-ref: Computer Methods in Applied Mechanics and Engineering 292 (2015): 69-91
Subjects: Numerical Analysis (math.NA)

[14] arXiv:1403.0053 [pdf, ps, other]
Title: Bootstrapping and Askey-Wilson polynomials
Comments: 17 pages, no figures

[15] arXiv:1403.0054 [pdf, other]
Title: Multi-Objective Resource Allocation for Secure Communication in Cognitive Radio Networks with Wireless Information and Power Transfer
Comments: Accepted with minor revisions for publication as a regular paper in the IEEE Transactions on Vehicular Technology
Subjects: Information Theory (cs.IT)

[16] arXiv:1403.0060 [pdf, ps, other]
Title: Regression analysis in quantum language
Authors: Shiro Ishikawa
Comments: arXiv admin note: text overlap with arXiv:1402.0606, arXiv:1401.2709, arXiv:1312.6757
Subjects: Statistics Theory (math.ST)

[17] arXiv:1403.0063 [pdf, ps, other]
Title: Restricted Kac modules of Hamiltonian Lie superalgebras of odd type
Authors: Jixia Yuan, Wende Liu
Comments: 13 pages
Journal-ref: Monatsh. Math. 178 (2015) 473-488
Subjects: Representation Theory (math.RT)

[18] arXiv:1403.0070 [pdf, ps, other]
Title: Equidistribution of saddle periodic points for Henon-type automorphisms of C^k
Comments: 49 pages

[19] arXiv:1403.0075 [pdf, ps, other]
Title: Singularity of the varieties of representations of lattices in solvable Lie groups
Authors: Hisashi Kasuya
Comments: 11 pages. To appear in J. Topol. Anal
Subjects: Group Theory (math.GR); Algebraic Geometry (math.AG); Complex Variables (math.CV); Geometric Topology (math.GT)

[20] arXiv:1403.0076 [pdf, ps, other]
Title: Unavoidable collections of balls for processes with isotropic unimodal Green function
Authors: Wolfhard Hansen

[21] arXiv:1403.0078 [pdf, ps, other]
Title: A note on bi-linear multipliers
Subjects: Classical Analysis and ODEs (math.CA)

[22] arXiv:1403.0079 [pdf, ps, other]
Title: An extension of Herglotz's theorem to the quaternions
Comments: to appear in Journal of Mathematical Analysis and Applications 2014
Subjects: Functional Analysis (math.FA)

[23] arXiv:1403.0088 [pdf, ps, other]
Title: Union-intersecting set systems
Comments: 9 pages
Subjects: Combinatorics (math.CO)

[24] arXiv:1403.0089 [pdf, ps, other]
Title: Factorization Property of Generalized s-self-decomposable measures and class $L^f$ distributions
Journal-ref: Theory Probab. Appl. 55, No 4 (2011), pp. 692-698; and Teor. Verojatn. Primenen. 55, no 4 (2010), pp. 812-819
Subjects: Probability (math.PR)

[25] arXiv:1403.0094 [pdf, ps, other]
Title: Asymptotics of eigenstates of elliptic problems with mixed boundary data on domains tending to infinity
Comments: Asymptotic Analysis, 2013
Subjects: Analysis of PDEs (math.AP)
Welcome to the project homepage of the Multiconfigurational Time-Dependent Hartree for Bosons (MCTDHB) package.

Quantum Many-Body Physics with Ultra-Cold Bosons

Synopsis: The MCTDHB Package is parallel software for computing the many-body dynamics of ultracold bosons; it is particularly efficient and can handle many millions of time-dependent configurations using tens of time-adaptive orbitals. It is an implementation of the MCTDHB algorithm to solve the many-body Schrödinger equation for bosons. The MCTDHB Package project was initiated by Alexej I. Streltsov, who also wrote the first few versions of the code starting in 2005.
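For orientation, the equation the package addresses is the time-dependent many-body Schrödinger equation for N interacting bosons, which in a standard schematic form (my summary, not taken from the project page) reads:

```latex
i\hbar\,\partial_t\,\Psi(x_1,\ldots,x_N;t) = \hat{H}\,\Psi , \qquad
\hat{H} = \sum_{i=1}^{N}\left[-\frac{\hbar^2}{2m}\,\partial_{x_i}^{2} + V(x_i)\right]
        + \sum_{i<j} W(x_i - x_j) ,
```

with V an external trapping potential and W the two-body interaction; the MCTDHB ansatz expands \Psi over permanents built from a small number of time-adaptive orbitals.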
The Equations project

Students in scientific illustration have attempted, in collaboration with physicists, to bring physics equations to life, and not just any of them: the Schrödinger equation of quantum physics, the Navier-Stokes equations of fluid mechanics, the equation of general relativity, and the propagation equation for electromagnetic waves. Each group of students used an original graphic form: the pop-up book, the poetic animation, the comic strip, and the animated gif. Discover and use these productions to understand and present these fundamental equations.

This work is the result of a collaboration between the DSAA of Scientific Illustration Design of the Estienne school, Julien Bobroff (Univ Paris-Sud) and Roland Lehoucq (CEA-Saclay).

Copyright: this entire project is made available under the Creative Commons BY-NC-ND license.
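For reference, standard textbook forms of the four equations the project illustrates (conventions vary; these particular forms are my choice, not the project's):

```latex
% Schrödinger equation (quantum mechanics):
i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi
% Navier-Stokes equations (incompressible fluid mechanics):
\rho\left(\frac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\nabla)\vec{v}\right)
  = -\nabla p + \mu \nabla^2 \vec{v} + \vec{f}
% Einstein field equations (general relativity):
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
% Electromagnetic wave propagation (from Maxwell's equations in vacuum):
\nabla^2 \vec{E} - \frac{1}{c^2}\frac{\partial^2 \vec{E}}{\partial t^2} = 0
```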
elementary particle

A particle that is not a compound of other particles. At one time the elementary particles of matter were the atoms of the chemical elements, but the atoms are now known to be compounds of the electron, proton, and neutron. In turn, the proton and neutron, and likewise all the other hadrons (strongly interacting particles), are now known to be compounds of quarks. It is convenient, however, to continue to call hadrons elementary particles to distinguish them from their compounds (atomic nuclei, for instance); this usage is also justified by the fact that quarks are not strictly particles, because, as far as is known, they cannot be isolated. The term fundamental particle can be used to denote particles that are truly fundamental constituents of matter and are not compounds in any sense. See Electron, Hadron, Neutron, Proton, Quarks

The known fundamental particles (see table) fall into two categories: the gauge bosons, comprising the photon, gluon, and weak bosons; and the fermions, comprising the quarks and leptons. The graviton, the quantum of the gravitational field, has been omitted from the table since it plays no role in high-energy particle physics: it is firmly predicted by theory, but the prospect of direct observation is exceedingly remote. Of the gauge bosons, the photon has been known since the beginning of quantum mechanics. The heavy gauge bosons W± and Z⁰ were observed in 1983; their properties had been deduced from the weak interactions, for which they are responsible. The lightest (and stable) lepton, the electron (e), is the first known fundamental particle. The next found was the muon (μ, originally called the mu meson). The fundamental fermions are grouped into three families.

Gluons and quarks are never seen as free particles; this phenomenon is known as confinement. Particles that are composed of quarks and gluons are called hadrons; essentially, mesons are composed of a quark-antiquark pair qq̄, and baryons of three quarks qqq, bound together by the exchange of gluons. See Baryon, Gluons, Graviton, Intermediate vector boson, Lepton, Meson, Photon

[Table of the known fundamental particles: not reproduced in this extract.]

Particles with the properties of the quarks of the quark model (charges ±(2/3)e or ±(1/3)e and masses less than 300 MeV) have never been observed. Direct evidence both for quarks and for their confinement is given by the phenomenon of hadronic jets. For example, in high-energy deep-inelastic electron-proton scattering, in which the electron loses a sizable fraction of its energy, the observed cross section shows that the charge of the proton is carried by pointlike (radius less than 10⁻¹ femtometer) particles of small mass. However, no such particles are seen in the final state of this process, or indeed of any other high-energy collision. What is seen is a narrow shower of hadrons. The interpretation is that the electron scatters off one of the quarks in the proton and gives it a large energy and momentum, the quark responding as though it were a free particle of mass much less than 100 MeV, consistent with the masses of the u and d quarks (see table).
Later, through the production of quark-antiquark pairs, the energy and momentum of the struck quark is divided among a number of hadrons, mostly pions, a process called hadronization or fragmentation of the quark, which is to be distinguished from the decay of a free particle. The resulting shower of hadrons, whose total momentum vector is roughly that of the original quark, is called a hadronic jet (like a jet of water which breaks up into a spray of droplets). Such jets are also seen in other high-energy reactions, such as e⁺e⁻ annihilation into hadrons, and also in pp collisions; they are the closest available phenomenon to the actual observation of a quark as a free particle.

To each kind of particle there corresponds an antiparticle, or conjugate particle, which has the same mass and spin, belongs to the conjugate representation (multiplet) of internal symmetry, and has opposite values of charge, I₃, strangeness, and so forth (quantum numbers which are conserved additively). The product of the space parities of a particle and its antiparticle is +1 if the particle is a boson, −1 if a fermion. For instance, the electron e⁻ and its antiparticle, the positron e⁺, have the same masses and spins, and opposite charges and lepton number, and an S-wave state of e⁻ and e⁺ has parity −1. Particles for which the antiparticle is the same as the particle are called self-conjugate; examples are the photon γ and the neutral pion π⁰. The equality of masses implies the equality of lifetimes of particle and antiparticle. Thus the positron is stable; however, in the presence of ordinary matter it soon annihilates with an electron, and thus is not a component of ordinary matter. See Antimatter, Positron

The interactions of particles are responsible for their scattering and transformations (decays and reactions). Because of interactions, an isolated particle may decay into other particles. Two particles passing near each other may transform, perhaps into the same particles but with changed momenta (elastic scattering) or into other particles (inelastic scattering). The rates or cross sections of these transformations, and so also the interactions responsible for them, fall into three groups: strong (typical decay rates of 10²¹–10²³ s⁻¹), electromagnetic (10¹⁶–10¹⁹ s⁻¹), and weak (<10¹⁵ s⁻¹). Strong interactions occur only between hadrons. Electromagnetic interactions result from the coupling of charge to the electromagnetic field. Weak interactions are usually unobservable in competition with strong or electromagnetic interactions. They are observable only when they do something which those much stronger interactions cannot do (forbidden by the selection rules); for instance, by changing flavors they can make a particle decay which would otherwise be stable, and by making parity-violating transition amplitudes they can produce an otherwise absent asymmetry in the angular distribution of a reaction. See Selection rules (physics)

Most particles are unstable and decay into smaller-mass particles. The only particles which appear to be stable are the massless particles (graviton, photon), the neutrinos (possibly massless), the electron, the proton, and the ground states of stable nuclei, atoms, and molecules. It is speculated that some or all of the neutrinos may be massive and unstable and that the proton (and therefore all nuclei) may be unstable. The present view is that the only massive particles which are strictly stable are the electron and the lightest neutrino(s).
The electron is the lightest charged particle; its decay would have to be into neutral particles and could not conserve charge. Likewise, the lightest neutrino is the lightest fermion; its decay would have to be into bosons and could not conserve angular momentum. See Neutrino

The unstable elementary particles must be studied within a short time of their creation, which occurs in the collision of a fast (high-energy) particle with another particle. Such fast particles exist in nature, namely the cosmic rays, but their flux is small; thus most elementary particle research is based on high-energy particle accelerators. See Nuclear reaction, Particle accelerator, Particle detector

Hadrons can be divided into the quasistable (or hadronically stable) and the unstable. The quasistable hadrons are simply those that are too light to decay into other hadrons by way of the strong interactions, such decays being restricted by the requirement that isobaric spin I and flavors be conserved. The unstable hadrons are also called particle resonances. Their lifetimes, of the order of 10⁻²³ s, are much too short to be observed directly. Instead they appear, through the uncertainty principle, as spreads in the masses of the particles – that is, in their widths – just as in the case of nuclear resonances. See Uncertainty principle

A characteristic of the hadrons is that they are grouped into i-spin multiplets (for example, n, p; π⁻, π⁰, π⁺); the masses of the particles in each multiplet differ by only a few megaelectronvolts (MeV). The i-spin multiplets of hadrons themselves form groups (called supermultiplets) which were recognized in 1961 as multiplets (representations) of the group SU(3) (now referred to as SU(3)_flavor to distinguish this physical symmetry from SU(3)_color). For instance, the lightest mesons (η, K, π) and baryons (Λ, N, Ξ, Σ) are each a set of eight particles having i-spins I = (0, 1/2, 1/2, 1) and hypercharges Y = (0, 1, −1, 0) respectively; this pattern is that of the octet, {8}, representation of the group SU(3). Again, the lowest-mass J^P = 3/2⁺ baryons (Δ, Σ*, Ξ*, Ω), ten particles with I = (3/2, 1, 1/2, 0) and Y = (1, 0, −1, −2), form a decuplet, {10}, representation of SU(3). The spread of the masses in these groups is about a hundred times greater than in the i-spin multiplets, a few hundred MeV compared to a few MeV.

According to the quark model, this SU(3) symmetry and the pattern of charges in the SU(3) multiplets result simply from the existence of a third kind (flavor) of quark, the s (strange) quark, with charge the same as the d quark, namely −(1/3)e, together with the flavor independence of the glue force; that is, all three quarks u, d, and s have the same interaction with the glue field. The resulting flavor SU(3) symmetry is broken by the relatively large mass of the s, approximately 150 MeV. The three quarks make up the fundamental triplet, {3}, representation of SU(3). Hadrons are known which contain yet more massive quarks, the c and the b (see the table). The resulting symmetry is badly broken, and the supermultiplets hardly recognizable.

It appears that the “glue” field which binds quarks together to make hadrons is a Yang-Mills (that is, a non-abelian) gauge field of an SU(3) symmetry group, SU(3)_color. This is an exact symmetry of nature. The quanta of the field are called gluons, and its quantum theory is called quantum chromodynamics (QCD).
The gluon field resembles the electromagnetic field, but has an internal symmetry index (octet index) which runs over eight values; that is, there are really eight fields, corresponding to the eight parameters needed to specify an SU(3) transformation. Just as the electromagnetic field is coupled to (that is, photons are emitted and absorbed by) the density and current of a conserved quantity, charge, the gluon field is coupled to color. The coupling of the gluon to a particle is fixed by the color of the particle (that is, what member of what color multiplet) and just one universal coupling constant g, analogous to the electronic unit of charge e. (The analogy breaks down in quantum theory, as discussed below; the quantity g is no longer constant but it is still universal.) Since the long-range forces observed between hadrons are no different from those between other particles, hadrons must be colorless, that is, color singlet combinations of quarks, their colored constituents. The two simplest combinations of quarks which can be colorless are q̄₁q₂ and q₁q₂q₃; these are found in nature as the basic structure of mesons and baryons, respectively. The exchange of gluons between any of the quarks in these colorless combinations gives rise to an attractive force, which binds them together.

Gluons are not colorless, and therefore they are coupled to themselves. This situation is very different from electromagnetism, where the photon does not carry charge. The consequence of this self-coupling of massless particles is a severe infrared (small momentum transfer or large distance) divergence of perturbation theory. In particular, the interaction between two colored particles through the gluon field, which in lowest order is an inverse-square Coulomb force, proportional to g²/r² (where r is the distance between the particles), becomes stronger than this inverse-square force at larger r. A way of describing this is to say that the coupling constant g is effectively larger at larger r; this defines the so-called running coupling constant g(r). According to the first-order radiative correction, g(r) becomes infinite at a certain distance, the so-called scale parameter r_c.

A specific form for the gluonic force between two colored particles at large r, namely that it falls to a nonzero constant value λ of the order of ħc·r_c⁻² (where ħ is Planck’s constant divided by 2π, and c is the speed of light), is suggested by a model, the superconductor analogy. This force is confining. The conjecture is that the vacuum is like a superconductor with respect to color, with the interchange, however, of electric and magnetic quantities. That is, the vacuum acts like a color magnetic superconductor which confines color flux into bundles which have a diameter of order r_c and an energy per unit length equal to λ, of order ħc·r_c⁻². The color flux bundles run between colored particles; they can also form closed loops. These flux bundles are often idealized as having vanishing diameter and are then called strings. This idealization is obviously good only if the flux bundles are long compared to r_c, and if their local radius of curvature is always much larger than r_c.
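The “running coupling” just described has a simple leading-order form; here is a minimal numerical sketch of the standard one-loop expression (the Λ value and flavour number are illustrative assumptions, not taken from this entry):

```python
import math

def alpha_s(Q, lambda_qcd=0.2, n_f=5):
    """One-loop running QCD coupling alpha_s = g^2 / (4*pi) at momentum scale Q.

    Q and lambda_qcd are in GeV; n_f is the number of active quark flavours.
    Valid only well above lambda_qcd, where the expression diverges.
    """
    return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q**2 / lambda_qcd**2))

# The coupling grows as Q decreases (i.e. as distance r ~ 1/Q increases),
# which is the qualitative statement made in the text:
for Q in (100.0, 10.0, 1.0, 0.5):
    print(f"Q = {Q:6.1f} GeV  ->  alpha_s ~ {alpha_s(Q):.3f}")
```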
The interactions between the quarks are taken qualitatively from QCD, namely a confining central potential and (exactly analogous to electrodynamic interactions) spin-spin (hyperfine) and spin-orbit potentials; quantitatively, these potentials are adjusted to make the energy levels of the model system fit the observed hadron masses. This model should be valid for hadrons composed of heavy quarks but not for hadrons containing light quarks (u, d, s), yet in fact it succeeds in giving a good description of many properties of all hadrons. One reason is that many of these properties follow from so-called angular physics, that is, symmetry-based physical principles that transcend the specific model. A meson is a bound state of a quark and an antiquark, q₁q̄₂. A baryon is a bound state of three quarks, q₁q₂q₃.

The known heavy quarks are the c (charm), b (bottom), and t (top) quarks, whose masses are larger than the natural energy scale of QCD, ≈1 GeV. But because the width of the t is also larger than 1 GeV, the t quark decays before the QCD force acts on it, and thus before any well-defined hadron forms. So in the present context “heavy quarks” means only c and b. A hadron which contains a single heavy quark resembles an atom; the heavy quark sits nearly at rest at the center, and is a static source of the color field, just as the atomic nucleus is a static source of the electric field. Just as an atom is changed very little (except in mass) if its nucleus is replaced by another of the same charge (an isotope), a heavy-quark hadron is changed very little (except in mass) if its heavy quark is replaced by another of the same color. This is called heavy-quark symmetry. So, for example, the D, D*, B, and B* mesons are similar, except in mass. This plays an important role in the quantitative analysis of their weak decays.

If a hadron contains two heavy quarks, then in a not too highly excited state the heavy quarks move slowly compared to the speed of light c, and so the effect of the exchange of gluons between the quarks can be approximated (up to radiative corrections) by a potential energy which depends only on the positions of the quarks (a local static potential); further, the wave function of the system satisfies the ordinary nonrelativistic Schrödinger equation. Consequently, the properties of hadrons composed of heavy quarks are rather easily calculated. Mesons with the composition cc̄ and bb̄ are called charmonium and bottomonium, respectively. These names are based on the model of positronium, e⁺e⁻; the generic name for flavorless mesons, qq̄, is quarkonium. Since both heavy quarkonium and positronium are systems of a fermion bound to its antifermion by a central force, they are qualitatively very similar.
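Since the text notes that such quarkonium properties are “rather easily calculated”, here is a minimal numerical sketch of the kind of calculation meant: bound levels of a heavy quark-antiquark pair in a Cornell-type potential V(r) = −a/r + br, solved by finite differences (the parameter values are illustrative assumptions, not fitted ones):

```python
import numpy as np

hbar_c = 0.1973   # GeV * fm
mu = 0.75         # assumed reduced mass of a c c-bar pair, GeV
a = 0.5           # assumed Coulomb-like strength (dimensionless)
b = 0.9           # assumed string tension, GeV / fm

n, r_max = 1000, 4.0                  # radial grid points and box size (fm)
r = np.linspace(r_max / n, r_max, n)  # start just off r = 0
dr = r[1] - r[0]

# Finite-difference Hamiltonian for the reduced radial wave function u(r):
#   -(hbar^2 / 2 mu) u'' + (-a*hbar_c/r + b*r) u = E u, with u(0) = u(r_max) = 0
kin = hbar_c**2 / (2.0 * mu * dr**2)
V = -a * hbar_c / r + b * r
H = np.diag(2.0 * kin + V) - kin * (np.eye(n, k=1) + np.eye(n, k=-1))

E = np.linalg.eigvalsh(H)[:3]
print("lowest S-wave levels (GeV):", E)  # level spacings of a few hundred MeV
```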
The electroweak theory, starting from the observation that both the electromagnetic and weak interactions result from the exchange of vector (spin-1) bosons, has unified these interactions into a spontaneously broken gauge theory. Similarly, the observation that the strong (hadronic) interactions are also due to the exchange of vector bosons (gluons) suggests that all these vector bosons (the photon, the three weak bosons, and the eight gluons) are quanta of the components of the gauge field of a large symmetry group, SU(5) or larger. Such theories are called grand unification theories (GUTs). The large symmetry group of the grand unification theory must be spontaneously broken, making all the gauge bosons massive except the gluon octet and the photon, leaving SU(3) × U(1) (color × electromagnetism) as the apparent gauge symmetry of the world. See Grand unification theories

In these theories, the leptons and quarks occur together in multiplets of the large symmetry group. These multiplets are called families (or generations). The known fundamental fermions do seem to fall into three families (see table). Each family consists of a weak i-spin doublet of leptons (a neutrino [charge 0] and a charged lepton [charge −e]), and a color triplet of weak i-spin doublets of quarks (up-type [charge +(2/3)e] and down-type [charge −(1/3)e]).

elementary particle [‚el·ə′men·trē ′pärd·i·kəl] (particle physics) A particle which, in the present state of knowledge, cannot be described as compound, and is thus one of the fundamental constituents of all matter. Also known as fundamental particle; particle; subnuclear particle.
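The table that this entry repeatedly cites did not survive extraction; the following is a standard reconstruction of the three families of fundamental fermions from textbook sources (my summary, not the entry's original table):

```
Family   Leptons                Quarks
  1      ν_e (0),  e (−e)       u (+2/3 e),  d (−1/3 e)
  2      ν_μ (0),  μ (−e)       c (+2/3 e),  s (−1/3 e)
  3      ν_τ (0),  τ (−e)       t (+2/3 e),  b (−1/3 e)

Gauge bosons: photon γ, eight gluons g, weak bosons W± and Z⁰
```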
An optically resolvable Schrödinger’s cat from Rydberg dressed cold atom clouds

S. Möbius, M. Genkin, A. Eisfeld, S. Wüster and J. M. Rost
Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Strasse 38, 01187 Dresden, Germany

In Rydberg dressed ultra-cold gases, ground state atoms inherit properties of a weakly admixed Rydberg state, such as sensitivity to long-range interactions. We show that through hyperfine-state dependent interactions, a pair of atom clouds can evolve into a spin and subsequently into a spatial Schrödinger’s cat state: the pair, containing atoms in total, is in a coherent superposition of two configurations, with cloud locations separated by micrometers. The mesoscopic nature of the superposition state can be proven with absorption imaging, while the coherence can be revealed through recombination and interference of the split wave packets.

03.75.Gg, 32.80.Ee, 34.20.Cf, 32.80.Qk

Introduction: When and why mesoscopic objects begin to behave according to our classical intuition, as exemplified by Schrödinger’s famous thought-experiment Schrödinger (1935), remains one of the fundamental questions in physics. Experimental progress to demonstrate quantum coherence in mesoscopic systems is impressive, with the recent creation of superposition states of macroscopic Josephson currents Friedman et al. (2000), ten-qubit photonic cat states Wei-Bo Gao et al. (2010), six-qubit atomic hyperfine cats Leibfried et al. (2005), interference of fullerenes and even large bio-molecules Gerlich et al. (2011), superpositions of photon coherent states Takahashi et al. (2008) and many more Lu et al. (2007); Monroe et al. (1996). In most of these experiments the quantum mechanical superposition does not pertain to an intuitive classical observable taking common-sense values, such as the original “alive” or “dead” of Schrödinger’s cat. Instead, the superposition typically is achieved with intrinsically quantum mechanical degrees of freedom (hyperfine or photon number states). Realizations of position-space superpositions have been limited to small delocalization lengths (several nm for Ref. Gerlich et al. (2011)), the resolution of which requires sophisticated near-field interferometry.

Here, we propose a Schrödinger cat superposition in the relative distance of two ultra-cold atom clouds more than m apart. The relative distances of the two superposed configurations also differ on a micrometer scale, hence the existence of the two possible cloud configurations can be revealed with direct absorption imaging, and the coherence of that superposition can be proven by interference upon recombination, taking Schrödinger’s cat into the optically resolvable micrometer domain. In contrast to prior proposals with ultracold or Bose-Einstein condensed atoms (e.g. Weiss and Castin (2009); Streltsov et al. (2009); Bar-Gill et al. (2011); Cirac et al. (1998); Dalvit et al. (2000); Dunningham et al. (2006); Gordon and Savage (1999); Hallwood et al. (2010); Ng (2008)) we use Rydberg states, taking advantage of their inherently strong long-range interactions and short dynamical time-scales Gallagher (1994); Saffman et al. (2010). The resulting internal forces Wüster et al. (2010); Ates et al. (2008) let the system turn itself from a spin cat state into a spatial cat state. In addition, Rydberg systems typically allow for an accurate control of decoherence mechanisms.
Figure 1: (color online) Schematic of Rydberg dressed atom clouds. (a) All atoms are in either of two hyperfine ground states , . Dressing lasers can couple one atom per cloud to either of the , Rydberg states. The Rydberg states participate in state changing dipole-dipole interactions . (b) Two atom clouds with width , separated by a distance ; indicates the blockade radius. Due to hyperfine-state dependent inter-cloud forces, a suitable initial state evolves dynamically into a non-classical position space superposition state, with the pair of clouds in either the red- or green-shaded configuration.

The scheme (see Fig. 1) is based on a pair of atom clouds, each containing about alkali atoms, which can be in one of two hyperfine levels and of the atomic ground state. To induce long-range interactions between the clouds we weakly dress the states and with Rydberg states and , respectively Santos et al. (2000); Henkel et al. (2010); Wüster et al. (2011a); Maucher et al. (2011). These are chosen such that each cloud is in the full dipole-blockade regime Jaksch et al. (2000), where only a single Rydberg excitation per cloud is possible. However, the inter-cloud distance is so large that excitations in different clouds do not block each other. Such interactions can lead to collective relative motion of the clouds, with a repulsive or attractive character depending on the total hyperfine state.

To realize this scheme, we first identify a suitable effective state space and Hamiltonian for our system. We then show how to create a hyperfine state, formally already a spin cat state Opatrný and Mølmer (2012); Ma et al. (2011), in which the two clouds evolve as a coherent superposition of attractive and repulsive dynamics. After a brief dwell time, single-shot absorption images would at this stage show either the green or the red configuration in Fig. 1. To see the coherent character of this many-body state via interference fringes, recombination of the two configurations is finally possible with the help of an external (double-well) potential, as demonstrated in the last section.

Ultra-cold Rydberg dressing, dipole-dipole interactions and blockade: Consider an assembly of neutral atoms of mass located at positions , restricted to one dimension and confined to a double-well atom trap. Half the atoms are localized in one of the wells, forming cloud , and the rest in the other well, forming cloud . Near the centres of each well at , the potential is approximately harmonic: , and the atoms are initially in the Gaussian trap ground state of width . We consider four essential states in Rb atoms. Two of them are long-lived hyperfine states , namely and ( is the total angular momentum and the associated magnetic quantum number). The other two essential states are Rydberg states, designated by and ( is the principal quantum number and the orbital angular momentum). The Rydberg states are coupled to the ground states with Rabi frequency and detuning , as sketched in Fig. 1. The coupling is off-resonant, hence . As shown in Wüster et al. (2011a) this arrangement gives rise to effective long-range (state changing) dipole-dipole interactions of the form , between dressed ground states , . We have , where the transition dipole parametrizes the strength of the bare dipole-dipole interaction. Hence, we can further reduce the essential electronic state space of a single atom to and , on which we build the many-body basis , where describes the electronic state of atom .
We formulate the many-body Hamiltonian with where , in , while describes the induced transition dipole-dipole interactions. Here the operator acts only on the Hilbert space of atom and as unity otherwise. The vector contains all atom co-ordinates. Note that there are no interactions between two atoms in the same cloud, since the required doubly Rydberg-excited intermediate state is strongly energetically suppressed through the dipole blockade Möbius et al. (2012). The total number of atoms in state is used to classify the electronic states, since is conserved by .

Having set up our effective state space and Hamiltonian, we can construct adiabatic Born-Oppenheimer (BO) potential surfaces defined by Domcke et al. (2004). As discussed in previous work Wüster et al. (2011b); Möbius et al. (2011) the motion of atoms is determined by these BO potentials, as long as non-adiabatic effects are small; see also the supplementary information sup (). We characterize Born-Oppenheimer surfaces in the vicinity of the initial configuration sketched in Fig. 1, with the dichotomic central many-body position around which the positions of the atoms () are randomly distributed with width . We choose m and m. In Fig. 2 (a) we show cuts through BO surfaces for states with (half the atoms in ) as a function of . The insets show coefficients of the two eigenstates with the largest absolute eigenvalues . These states are of particular interest, since on the corresponding BO surfaces the entire clouds attract or repel, as deduced from the gradient of . Consequently, after preparing the twin atom clouds in a hyperfine state , one obtains a spatial superposition state as sketched in Fig. 1 through motional dynamics. If our underlying basis is mapped onto a spin system sup (), this process can be viewed as conversion of a collective spin Schrödinger’s cat state into a spatial one. The states are close to coherent spin states in this picture, as sketched in Fig. 2a. This conversion does not require external fields, but proceeds entirely through internal interactions within the system. Note that the collective cloud motion in a blockade regime crucially relies on the dressed character of the interaction. For bare dipole-dipole interactions only a single atom per cloud would be accelerated Möbius et al. (2012).

Figure 2: (color online) (a) Born-Oppenheimer surfaces as a function of for the case , m, and atomic units. Positions have been chosen around in accordance with a single realization of a Gaussian distribution (width ). The insets show the eigenstates belonging to the two colored energies. Red, top inset: coefficients of the most repulsive state. Blue, bottom inset: , most attractive. A plot of the modulus coincides with , since the states differ only by signs of coefficients (blue dashed, top inset). The transparent spheres visualize the Q-functions of in a pseudo-spin picture, in which the states resemble coherent spin states sup (). (b-d) Creation of for m, using a chirped micro-wave pulse as described in the text. (b) Time dependence of micro-wave Rabi frequency (black) and detuning (red). (c) Resulting energy spectrum of , Eq. (11); the red line is the state to be adiabatically followed. (d) (black) Energy gap between the two highest states of (c), compared with the analytical prediction Eq. (3) (red-dashed).

Having established the fundamental mechanism behind our cat, we will outline how the initial hyperfine state can be prepared, and then proceed to model spatial dynamics and interference.
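The displayed formulas in this section were lost in extraction. As a rough guide only, effective Hamiltonians for Rydberg-dressed flip-flop interactions of this kind are usually written in a form like the following (a generic reconstruction from the standard dressing literature, not this paper's own equation):

```latex
% Kinetic energy plus dressed, state-changing dipole-dipole interactions:
H = \sum_{i} \frac{\vec{p}_i^{\,2}}{2M}
  + \sum_{i \neq j} \tilde{V}(r_{ij})\,
    \hat{\sigma}_{+}^{(i)} \hat{\sigma}_{-}^{(j)},
\qquad
\tilde{V}(r) \sim \left(\frac{\Omega}{2\Delta}\right)^{4} \frac{d^2}{r^3},
```

where the operators σ± flip an atom between the two dressed ground states, Ω and Δ are the dressing Rabi frequency and detuning, and d is the transition dipole; the fourth-power suppression in Ω/(2Δ) is the generic perturbative dressing result.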
Initial state creation: The first stage of assembling , starting from the simple state , is to create . This can be achieved on time scales shorter than that of atomic motion by using a microwave field which couples the two hyperfine ground states so that the atom-field interaction Hamiltonian during initial state creation is foo (): When we analyze the spectrum of Eq. (11) for constant and Rabi frequency , as a function of microwave detuning , we see that the eigenstate at large negative detuning evolves continuously into at . This state is adiabatically followed in Fig. 2 (c), using the chirped microwave pulse shown in Fig. 2 (b). The pulse avoids non-adiabatic transitions since the pulse length is long compared to the inverse energy gap between the two highest energy eigenstates. The latter is well approximated by with an only weakly -dependent factor , as shown in the supplementary material sup (). The result Eq. (3) simplifies the determination of realistic parameter regimes. We have numerically modelled the pulse of Fig. 2 (b) for and found a fidelity when averaging over the atomic position distribution. Our creation scheme for closely follows the method of Pohl et al. (2010). The second stage of initial state creation is to convert into . We find that and are always related as shown in Fig. 2 (a): is obtained from by a phase shift to every coefficient of basis states involving an odd number , of atoms in in cloud . By applying this phase shift conditional on some control atom in a superposition we achieve our goal. This can be realized precisely as in a recent proposal for mesoscopic Rydberg quantum computation gates Müller et al. (2009), see also sup (). When modelling this final step of the initial state creation sequence, we find that fidelity loss is negligible compared to the one incurred in the previous stage of creating . This situation should persist for larger Müller et al. (2009). Spatial cat state creation and interference: To turn the electronic state prepared so far into a spatial cat, we keep the dressed interactions switched on for an acceleration period , after which they are adiabatically switched off to avoid spontaneous decay of Rydberg population. After mechanical evolution in the trap for a time , the clouds reach their maximal displacement, where the macroscopic spatial superposition character of the quantum state can be shown with m resolution atom detection. An absorption image would always show two inert clouds, with probability at either of the two configurations marked and in Fig. 3. If instead the spatial dynamics is allowed to proceed until time where the spatial wave function recombines and all atoms are reunited in the same hyperfine state (e.g. by using a second chirped microwave pulse to transfer all atoms to ; dressed interactions can remain off), we form an interference pattern, demonstrating the coherence of the superposition. We solve the Schrödinger equation as in Wüster et al. (2011b); sup (); xmd () to model the quantum dynamics of acceleration, splitting and recombination for in a plane wave basis and for in a Hermite-Gauss basis. Interference fringes develop in the probability distribution of the relative inter-cloud distance . We extract from the many-body wave function as , where denotes integration over all coordinates orthogonal to . At , we find full-contrast interference fringes in both cases. We thus believe that they persist also for larger atom numbers, as no new physics enters beyond atoms per cloud.
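The expected fringe pattern itself follows from elementary wave mechanics. As a toy illustration (ours, with invented packet width and recombination momentum; the actual many-body calculation is far richer), the relative-coordinate wave function near recombination can be modelled as two overlapping Gaussian branches with opposite momenta:

```python
# Toy model of fringes in the relative inter-cloud coordinate at recombination.
# sigma and k are invented placeholders, in arbitrary units.
import numpy as np

x = np.linspace(-10, 10, 2001)        # relative inter-cloud distance
dx = x[1] - x[0]
sigma, k = 1.0, 3.0

def packet(kk):
    return np.exp(-x ** 2 / (4 * sigma ** 2) + 1j * kk * x)

psi = packet(+k) + packet(-k)         # "attractive" and "repulsive" branches overlap
rho = np.abs(psi) ** 2
rho /= rho.sum() * dx                 # normalize the probability density

print("expected fringe spacing pi/k =", np.pi / k)
peaks = x[1:-1][(rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:])]
print("spacing of detected peaks:", np.diff(peaks)[:3])
```

The density is proportional to cos²(kx) under a Gaussian envelope, so the fringe spacing π/k is set by the relative momentum the two branches acquired during the acceleration phase.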
Atomic densities and interference for are shown in Fig. 3, which additionally includes results obtained with Tully’s quantum-classical algorithm Tully (1990); Wüster et al. (2010); Möbius et al. (2011), with which slightly larger atom numbers can be treated (). We find that nonadiabatic effects during the acceleration phase are negligible, with a population loss of out of the target state for the situation of Fig. 3. For larger the situation improves further. Figure 3: (color online) Evolution of the initial hyperfine state into a spatial superposition of cloud locations for and trap-frequency Hz. We compare the total atomic density from quantum-classical simulations (grey shading, black 3D lines), to full quantum solutions (yellow dashed). The lines overlaid on the grey shading are exemplary repulsive (green) and attractive (red) quantum-classical trajectories. Transparent spheres show the conversion of a spin cat state into a spatial cat state that would underlie this process for ; spin coordinate axes for spheres as in Fig. 2. Nonadiabatic population loss from the cat state during the initial acceleration phase is shown magnified in the left inset (magenta). The back panel shows the interference signal found in the relative distance distribution at (blue); note the different abscissa used. While computational demands limit the simulations shown to we extrapolate that spatial cat states are realistic for up to for the parameters used in this article. Nonadiabatic effects during acceleration and initial state creation are under control for larger . The main limitation comes from the lifetime of the Rydberg states used for the dressing, since just a single decay has the potential to destroy the fragile cat state. However, we can choose parameters for which the probability of even a single decay is small. This is for example achieved for m, m, , assuming Rb atoms with . We use , where a.u. for Rb. The overall lifetime of the system under dressing interactions is , with and Beterov et al. (2009). For our parameters ms, larger than the time required for initial state creation (ms) and acceleration (s). Conclusion: We have proposed a setup in which two cold atom clouds of about atoms each evolve dynamically by internal forces into a spatial Schrödinger’s cat state if exposed to Rydberg dipole-dipole interactions through dressing. The interactions create a state where two entire atomic clouds are simultaneously at two quantum-mechanically superposed locations, which are macroscopically distinguishable. Hence they can be resolved by visible light. The internal forces that induce motion of the atomic clouds are also instrumental in creating the required intermediate hyperfine state . This state may have interesting applications by itself due to its entanglement structure between the two clouds. Finally, the hyperfine state prior to any spatial dynamics realizes a collective spin Schrödinger’s cat state. We gladly acknowledge fruitful discussions with Igor Lesanovsky, Klaus Mølmer, Markus Müller, Pierre Pillet, Thomas Pohl and Shannon Whitlock, and EU financial support received from the Marie Curie Initial Training Network (ITN) "COHERENCE". Supplemental material: This supplemental material provides additional details regarding the definition and use of Born-Oppenheimer surfaces, the spin structure of initial states as well as our proposal for initial state creation.
Motion on Born-Oppenheimer surfaces: We insert the expansion into the time-dependent Schrödinger equation with the Hamiltonian from Eq. (1) of the main article. Upon projection onto the electronic basis state this yields a system of coupled Schrödinger equations for the atomic motion in and electronic dynamics. The form (4) is used in the main article. It is often also instructive to convert to a picture using the Born-Oppenheimer separation. To this end we expand the total wave function as in terms of eigenstates of the electronic Hamiltonian. Deriving the equation of motion from the Hamiltonian using this expansion leads to the Born-Oppenheimer separated version of the Schrödinger equation: where are non-adiabatic coupling terms [30]. As long as these remain small, components of the many-body wave function with different are effectively decoupled. The eigenvalues form separate Born-Oppenheimer potential energy surfaces that are in our context most useful to anticipate the atomic dynamics: If the many-body wave function is localized initially near in a narrow region of the dimensional parameter space, atom will be accelerated along the downhill gradient of the energy surface . Calculating these gradients for the surfaces discussed in the main text, we find approximately , where the lower (upper) sign applies for an atom in cloud (). has reverse signs. These expressions become exact for small and large . The -scaling of reflects the number of interacting atom pairs, and the -scaling of the number of atoms exerting a force on atom . Importantly, the force in those two states will induce motion of the clouds as a whole, since it is almost equally strong for all atoms. Spin analogy: After elimination of the Rydberg states our atoms are described with just two essential states, hence a mapping to coupled spin- particles is possible with , . We can then define collective spin operators for each cloud where the individual spin operators act on atom only, and are Pauli matrices. For equal interactions between all pairs of atoms from different clouds, , which is approximately realized due to , the interaction Hamiltonian takes the form where are collective raising and lowering operators. Restricting ourselves to the same Hilbert space as in the main body of the paper ( atoms in both clouds together, with equal numbers in and ), we see that only collective spin states with total spin in and magnetic quantum numbers have to be considered. In terms of the assignment of states, , where is the number of atoms in state within cloud . The analogy to a spin model allows a spin-sphere representation of hyperfine states in our system, for which we calculate the -function defined by for each surface element of the sphere. These are shown in Figs. 2 and 3. The underlying coherent spin states are defined as is common in coupled spin systems (Ref. [26]). We used , where normalizes the state. Chirped microwave pulse: In the main body of the text we have described how interaction of our system with a time-dependent microwave field, described by the Hamiltonian can be used to adiabatically create the fully repulsive electronic state . As we are interested in fast microwave pulses to minimize the chance for spontaneous decay and to avoid an onset of motion, it is crucial to know the energy gap between the state that is to be adiabatically followed and other states in the spectrum.
The problem separates for the case of no interaction , and each atom can independently be in the two eigenstates of the field Hamiltonian, which in matrix representation reads . We denote the eigenstates by or . These have energies . The gap between the highest energy state (all atoms in ) and the state adjacent in energy (one atom in , rest in ) is thus . For the opposite case without field, we can numerically solve for configurations as in Fig. 1 and find a gap , where the prefactor is only weakly -dependent and approaches for of the order or larger. For small , e.g., . Our analytical expression in Eq. (3) interpolates between these two limiting cases through simple addition, and is found to describe all inspected cases satisfactorily. While the final gap for is thus roughly independent of the number of atoms, the fidelity in numerical simulations nonetheless decreases slightly for larger , due to stronger non-adiabatic couplings. It could probably be increased by pulse-shape optimization. Conditional phase flip: It was described in the main article how the maximally repulsive hyperfine state can be obtained from a state of all atoms in using a chirped microwave field. This section supplies details on the subsequent step, the transfer to the state using the scheme of Ref. [32]. For this we assume that dressed dipole-dipole interactions are adiabatically removed, so that is now given in terms of bare ground states and . Let there be a control atom with two internal states , , embedded in cloud , which is otherwise unaffected by the creation of so that the process described above corresponds to . The control atom could be another species of atom, or one in two hyperfine states different from , . Consider the following protocol:
1. Apply a Rabi pulse on the control atom to obtain .
2. Using the mesoscopic Rydberg quantum gate of Ref. [32], iff the control is in we now transfer all atoms in cloud only from into a third hyperfine state . This creates , where is obtained from by replacing each atom in cloud that was in state by one in state .
3. Apply a Rabi pulse between and some auxiliary level to obtain a phase per atom that was in . This has created the state , as can be seen from Fig. 2 (a).
4. Apply the gate again to return all atoms from to , resulting in , with which we have reached our goal.
• Schrödinger (1935) E. Schrödinger, Naturwissenschaften 23, 807 (1935).
• Friedman et al. (2000) J. R. Friedman, V. Patel, W. Chen, S. K. Tolpygo, and J. E. Lukens, Nature 406, 43 (2000).
• Gao et al. (2010) W.-B. Gao et al., Nature Physics 6, 331 (2010).
• Leibfried et al. (2005) D. Leibfried et al., Nature 438, 639 (2005).
• Gerlich et al. (2011) S. Gerlich et al., Nature Comm. 2, 263 (2011).
• Takahashi et al. (2008) H. Takahashi et al., Phys. Rev. Lett. 101, 233605 (2008).
• Lu et al. (2007) C.-L. Lu et al., Nature Physics 3, 91 (2007).
• Monroe et al. (1996) C. Monroe, D. M. Meekhof, B. E. King, and D. J. Wineland, Science 272, 1131 (1996).
• Weiss and Castin (2009) C. Weiss and Y. Castin, Phys. Rev. Lett. 102, 010403 (2009).
• Streltsov et al. (2009) A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, J. Phys. B: At. Mol. Opt. Phys. 42, 091004 (2009).
• Bar-Gill et al. (2011) N. Bar-Gill, D. D. Bhaktavatsala Rao, and G. Kurizki, Phys. Rev. Lett. 107, 010404 (2011).
• Cirac et al. (1998) J. I. Cirac, M. Lewenstein, K. Mølmer, and P. Zoller, Phys. Rev. A 57, 1208 (1998).
• Dalvit et al. (2000) D. A. R. Dalvit, J. Dziarmaga, and W. H. Zurek, Phys. Rev. A 62, 013607 (2000).
• Dunningham et al. (2006) J. A. Dunningham, K. Burnett, R. Roth, and W. D. Phillips, New J. Phys. 8, 182 (2006).
• Gordon and Savage (1999) D. Gordon and C. M. Savage, Phys. Rev. A 59, 4623 (1999).
• Hallwood et al. (2010) D. W. Hallwood, T. Ernst, and J. Brand, Phys. Rev. A 82, 063623 (2010).
• Ng (2008) H. T. Ng, Phys. Rev. A 77, 033617 (2008).
• Gallagher (1994) T. F. Gallagher, Rydberg Atoms (Cambridge University Press, Cambridge, 1994).
• Saffman et al. (2010) M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010).
• Wüster et al. (2010) S. Wüster, C. Ates, A. Eisfeld, and J. M. Rost, Phys. Rev. Lett. 105, 053004 (2010).
• Ates et al. (2008) C. Ates, A. Eisfeld, and J. M. Rost, New J. Phys. 10, 045030 (2008).
• Santos et al. (2000) L. Santos, G. V. Shlyapnikov, P. Zoller, and M. Lewenstein, Phys. Rev. Lett. 85, 1791 (2000).
• Henkel et al. (2010) N. Henkel, R. Nath, and T. Pohl, Phys. Rev. Lett. 104, 195302 (2010).
• Wüster et al. (2011a) S. Wüster, C. Ates, A. Eisfeld, and J. M. Rost, New J. Phys. 13, 073044 (2011a).
• Maucher et al. (2011) F. Maucher, N. Henkel, M. Saffman, W. Królikowski, S. Skupin, and T. Pohl, Phys. Rev. Lett. 106, 170401 (2011).
• Jaksch et al. (2000) D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Côté, and M. D. Lukin, Phys. Rev. Lett. 85, 2208 (2000).
• Opatrný and Mølmer (2012) T. Opatrný and K. Mølmer, Phys. Rev. A 86, 023845 (2012).
• Ma et al. (2011) J. Ma, X. Wang, C. P. Sun, and F. Nori, Phys. Rep. 509, 89 (2011).
• Möbius et al. (2012) S. Möbius, M. Genkin, S. Wüster, A. Eisfeld, and J.-M. Rost (2012), eprint physics.atom-ph/1212.1267.
• Domcke et al. (2004) W. Domcke, D. R. Yarkony, and H. Köppel, Conical Intersections (World Scientific, 2004).
• Wüster et al. (2011b) S. Wüster, A. Eisfeld, and J. M. Rost, Phys. Rev. Lett. 106, 153002 (2011b).
• Möbius et al. (2011) S. Möbius, S. Wüster, C. Ates, A. Eisfeld, and J. M. Rost, J. Phys. B: At. Mol. Opt. Phys. 44, 184011 (2011).
• (33) See Supplemental Material for the Born-Oppenheimer separation, spin-analogy and initial state creation details.
• (34) The microwave couples to the bare ground states. Due to the separate time-scales of bare dipole interactions MHz, Rydberg dressing MHz, and microwave coupling MHz, we expect that it adiabatically manipulates dressed states and write Eq. (11) directly in terms of dressed ground states , .
• Pohl et al. (2010) T. Pohl, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 104, 043002 (2010).
• Müller et al. (2009) M. Müller, I. Lesanovsky, H. Weimer, H. P. Büchler, and P. Zoller, Phys. Rev. Lett. 102, 170502 (2009).
• (37) The SE and Tully’s algorithm were both implemented in the high-level simulation language XMDS Dennis et al. (2013, 2012).
• Tully (1990) J. C. Tully, J. Chem. Phys. 93, 1061 (1990).
• Beterov et al. (2009) I. I. Beterov, I. I. Ryabtsev, D. B. Tretyakov, and V. M. Entin, Phys. Rev. A 79, 052504 (2009).
• Dennis et al. (2013) G. R. Dennis, J. J. Hope, and M. T. Johnsson, Comput. Phys. Comm. 184, 201 (2013).
• Dennis et al. (2012) G. R. Dennis, J. J. Hope, and M. T. Johnsson (2012).
The Problem of Microscopic Reversibility
Loschmidt's Paradox
In 1874, Josef Loschmidt criticized his younger colleague Ludwig Boltzmann's 1866 attempt to derive from basic classical dynamics the increasing entropy required by the second law of thermodynamics. Increasing entropy is the intimate connection between time and the second law of thermodynamics that Arthur Stanley Eddington later called the Arrow of Time. (The fundamental arrow of time is the expansion of the universe, which makes room for all the other arrows.) Although entropy has never been seen to decrease continuously in an isolated system, attempts to "prove" that it always increases have been failures. As Boltzmann knew, the increase in entropy is only statistical. And as Albert Einstein showed when he rederived the equations of statistical mechanics in his papers of 1902-1904, there are always fluctuations away from equilibrium. So there can be decreases in entropy, but they are (almost) always short-lived. Loschmidt's criticism was based on the simple idea that the laws of classical dynamics are time reversible.
Consequently, if we just turned time around, the time evolution of the system should lead to decreasing entropy. Of course we cannot turn time around, but a classical dynamical system would evolve in reverse if all the particles had their velocities exactly reversed. Apart from the practical impossibility of doing this, Loschmidt had shown that systems could exist for which the entropy should decrease instead of increasing. This is called Loschmidt's "Reversibility Objection" (Umkehreinwand) or "Loschmidt's paradox." We call it the problem of microscopic reversibility. We can visualize the free expansion of a gas that occurs when we rapidly withdraw a piston. Because this is a movie, we can reverse the movie to show what Loschmidt imagined would happen. But Boltzmann thought that even if the particles could all have their velocities reversed, minute errors in the collisions would likely prevent a perfect return to the original state.
[Animations: Forward Time / Time Reversal]
To demonstrate the randomness in each collision, which Boltzmann described as "molecular disorder" (molekular ungeordnet), we need a program that reverses the velocities of the gas particles, and adds randomness into the collisions. (This is a work in progress.) Information physics claims that microscopic reversibility is actually extremely unlikely and that the intrinsic path information in particles needed to reduce entropy is erased by matter-radiation interactions or by internal quantum transitions in the colliding atoms considered as a "quasi"-molecule. Microscopic time reversibility is one of the foundational assumptions of both classical mechanics and quantum mechanics. It is mistakenly thought to be the basis for the "detailed balancing" of chemical reactions in thermodynamic equilibrium. In fact microscopic reversibility is an assumption that is only statistically valid, in the same limits as any "quantum-to-classical transition." This is the limit when the number of particles is large enough that we can average over quantum effects. Quantum events also approach classical behavior in the limit of large quantum numbers, which Niels Bohr called the "correspondence principle." What "detailed balancing" means is that in thermodynamic equilibrium, the number of forward reactions is exactly balanced by the number of reverse reactions. And this is correct. But microscopic reversibility, while still true when considering averages over time, should not be confused with the time reversibility of a specific individual collision between particles. We will examine the collision of two atoms and show that if their velocities are reversed at some time after the collision, it is highly improbable that they will retrace their paths. This does not mean that, given enough particle collisions, there will not be statistically many collisions that are essentially the same as the "reverse collisions" needed for detailed balancing in chemical reactions, for transport processes with the Boltzmann equation, and for the Onsager reciprocal relations in non-equilibrium conditions.
The Origin of Irreversibility
Our careful quantum analysis shows that time reversibility fails even in the most ideal conditions (the simplest case of two particles in collision), provided internal quantum structure or the quantum-mechanical interaction with radiation is taken into account.
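A minimal classical stand-in for the program described above (our own sketch, with arbitrary particle number, interaction and time step) already makes Boltzmann's point: run a small gas of soft disks forward, reverse the velocities with a tiny error, and the system fails to retrace its path back to the initial configuration.

```python
# Loschmidt reversal demo: a 2D gas of soft disks, run forward, then run
# backward after an imperfect velocity reversal. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
N, L, dt, steps, eps = 16, 10.0, 2e-3, 4000, 1e-6

# start on a grid (no overlaps) with random thermal velocities
pos = np.array([(2.0 + 2.0 * i, 2.0 + 2.0 * j)
                for i in range(4) for j in range(4)])
vel = rng.normal(0.0, 1.0, (N, 2))
start = pos.copy()

def forces(p):
    d = p[:, None, :] - p[None, :, :]
    r2 = (d ** 2).sum(-1) + np.eye(N)           # dummy 1 on the diagonal
    mag = np.where(r2 < 1.0, 24 * (2 / r2 ** 7 - 1 / r2 ** 4), 0.0)
    return (mag[..., None] * d).sum(axis=1)      # short-range soft-disk repulsion

def step(p, v):
    v = v + 0.5 * dt * forces(p)                 # velocity-Verlet integrator
    p = p + dt * v
    v = v + 0.5 * dt * forces(p)
    low, high = p < 0.0, p > L
    v[low | high] *= -1.0                        # elastic bounce off the walls
    p = np.where(low, -p, np.where(high, 2 * L - p, p))
    return p, v

for _ in range(steps):
    pos, vel = step(pos, vel)
vel = -vel + eps * rng.normal(size=vel.shape)    # slightly imperfect reversal
for _ in range(steps):
    pos, vel = step(pos, vel)
print("RMS distance from the initial configuration:",
      np.sqrt(((pos - start) ** 2).mean()))
```

With eps set to zero the return is nearly perfect, up to floating-point noise; any imperfection in the reversal is amplified exponentially by the collisions, which is Boltzmann's intuition in numerical form.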
Albert Einstein was the first to see this, first in his 1909 extension of his work on the photoelectric effect, but especially in his 1916-17 work on the emission and absorption of radiation. This was the work in which Einstein showed that quantum theory implies ontological chance, which he famously disliked ("God does not play dice!"). For Einstein, detailed balancing was not the result of microscopic reversibility; it was his starting assumption. Einstein's work is sometimes cited as proof of detailed balancing and microscopic reversibility. (Wikipedia, for example.) In fact, Einstein used Boltzmann's assumption of detailed balancing, along with the "Boltzmann principle" that the probability of states with energy E is reduced by the exponential "Boltzmann factor," f(E) ∝ e^(−E/kT), to derive his transition probabilities for emission and absorption of radiation. Einstein also derived Planck's radiation law and Bohr's second "quantum postulate" E_m − E_n = hν. But Einstein distinctly denied any symmetry in the elementary processes of emission and absorption. As early as 1909, he noted that the elementary process of emission is not "invertible." There are outgoing spherical waves of radiation, but incoming spherical waves are never seen. In a deterministic universe, the path information needed to predict the future motions of all particles would be preserved. If information is a conserved quantity, the future and the past are all contained in the present. The information about future paths is precisely the same information that, if reversed, would predict microscopic reversibility of each and every collision. The introduction of ontological probabilities and statistics would deny such determinism. If the motions of particles have a chance element, such determinism cannot exist. And this is exactly what Einstein did in his papers on the emission and absorption of radiation by matter. He found that quantum theory implies ontological chance. A "weakness in the theory," he called it. What we might call Einstein's "radiation asymmetry" was introduced with the words quoted below. Before Einstein the common view of light was that it is radiated in all directions (isotropically) as waves. After 1916, it was known by some, but very few took it seriously, that light is emitted as what Einstein called "light quanta" (now known as photons) in a single and random direction. This randomness is the basis of all chance in the universe. Einstein discovered it ten years before Werner Heisenberg claimed that his "uncertainty principle" had introduced indeterminism and eliminated causality in physics. The elementary process of the emission and absorption of radiation is asymmetric, because the process is directed, as Einstein had explicitly noted first in 1909, and we think he had seen as early as 1905 in his analysis of the photoelectric effect. The apparent isotropy of the emission of radiation is only what Einstein called "pseudo-isotropy" (Pseudoisotropie), a consequence of time averages over large numbers of events. Einstein often substituted time averages of a single system with "space" averages, or averages over a large "ensemble" of identical systems, as had J. Willard Gibbs in his statistical mechanics. In Einstein's words: "a quantum theory free from contradictions can only be obtained if the emission process, just as absorption, is assumed to be directional. In that case, for each elementary emission process Z_m → Z_n a momentum of magnitude (ε_m − ε_n)/c is transferred to the molecule.
If the latter is isotropic, we shall have to assume that all directions of emission are equally probable. If the molecule is not isotropic, we arrive at the same statement if the orientation changes with time in accordance with the laws of chance. Moreover, such an assumption will also have to be made about the statistical laws for absorption, (B) and (B'). Otherwise the constants B_mn and B_nm would have to depend on the direction, and this can be avoided by making the assumption of isotropy or pseudo-isotropy (using time averages)." Now the principle of microscopic reversibility is a fundamental assumption of statistical mechanics. It underlies the principle of "detailed balancing," which is critical to the understanding of chemical reactions. In thermodynamic equilibrium, the number of forward reactions is exactly balanced by the number of reverse reactions. But microscopic reversibility, while true in the sense of averages over time, should not be confused with the reversibility of individual collisions between molecules. The equations of classical dynamics are reversible in time. And the deterministic Schrödinger equation of motion in quantum mechanics is also time reversible. But the interactions of photons and material particles like electrons and atoms are distinctly not reversible! An explanation of microscopic irreversibility in atomic and molecular collisions would provide the needed justification for Ludwig Boltzmann's assumption of "molecular disorder" and strengthen his H-Theorem. This is what we hope to do. In quantum mechanics, microscopic time reversibility is assumed true by most scientists because the deterministic Schrödinger equation itself is time reversible. But the Schrödinger equation only describes the deterministic time evolution of the probabilities of various quantum events, which are themselves not deterministic and not reversible. When an actual event occurs, the probabilities of multiple possible events collapse to the actual occurrence of one event. In quantum mechanics, this is the irreversible collapse of the wave function that John von Neumann called "Process 1." Treating two atoms as a temporary molecule means we must use molecular, rather than atomic, wave functions. The quantum description of the molecule now transforms the six independent degrees of freedom into three for the molecule's center of mass and three more that describe vibrational and rotational quantum states. The possibility of quantum transitions between closely spaced vibrational and rotational energy levels in the "quasi-molecule" introduces indeterminacy in the future paths of the separate atoms. The classical path information needed to ensure the deterministic dynamical behavior has been partially erased. The memory of the past needed to predict the "determined" future has been lost. Even setting aside the practical impossibility of a perfect classical time reversal, in which we simply turn the two particles around, quantum physics would require two measurements to locate the two particles, followed by two state preparations to send them in the opposite direction. These could only be made within the precision of Heisenberg's uncertainty principle, and so could not perfectly produce microscopic reversibility, which is thus only a classical idealization, like the idea of determinism. Heisenberg indeterminacy puts calculable limits on the accuracy with which perfect reversed paths can be achieved.
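The balance Einstein assumed is easy to verify numerically from the standard textbook relations (a sketch of our own; the frequency and temperature are arbitrary choices): with Boltzmann-distributed level populations and the Planck spectrum, absorption exactly balances spontaneous plus stimulated emission.

```python
# Check of Einstein's detailed balance: N_n B rho = N_m (A + B rho) at any T,
# once A/B = 8 pi h nu^3 / c^3 and the two B coefficients are equal.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
nu, T = 5.0e14, 3000.0                     # an optical transition, arbitrary T
B = 1.0                                     # B_nm = B_mn, arbitrary normalization
A = B * 8 * np.pi * h * nu ** 3 / c ** 3    # Einstein's relation between A and B

rho = (8 * np.pi * h * nu ** 3 / c ** 3) / (np.exp(h * nu / (kB * T)) - 1.0)  # Planck
ratio = np.exp(-h * nu / (kB * T))          # N_m / N_n from the Boltzmann factor

up = B * rho                                # absorption rate per lower-state atom
down = ratio * (A + B * rho)                # spontaneous + stimulated emission
print(f"up = {up:.6e}, down = {down:.6e}, up/down = {up / down:.6f}")
```

The ratio prints as exactly 1 at any temperature: this is the detailed balancing that Einstein took as his starting point, rather than something he derived from microscopic reversibility.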
Let us assume this impossible task can be completed, sending the two particles back along the reversed collision paths. But on the return path, there is only a finite probability that a "sum over histories" calculation will produce the same (or exactly reversed) quantum transitions between vibrational and rotational states that occurred in the first collision. Thus a quantum description of a two-particle collision establishes the microscopic irreversibility that Boltzmann sometimes described as his assumption of "molecular disorder." In his second (1877) derivation of the H-theorem, Boltzmann used a statistical approach and the molecular disorder assumption to get away from the time-reversibility assumptions of classical dynamics. We must develop a deep insight into Einstein's asymmetry between light and matter, one that was appreciated as early as the 1880s by Max Planck's great mentor Gustav Kirchhoff, but was not understood in quantum-mechanical terms until Einstein's insights into nonlocality and the relation between waves and particles in 1909. It is still ignored in quantum statistical mechanics by those who mistakenly think that the time-reversible Schrödinger equation means microscopic interactions are reversible. Maxwell and Boltzmann had shown that collisions between material particles, analyzed statistically, cause the distribution of positions and velocities to approach their equilibrium Maxwell-Boltzmann distribution. A bit later, Kirchhoff and Planck knew that an extreme non-equilibrium distribution of radiation, for example a monochromatic radiation field, would remain out of equilibrium indefinitely. But if that radiation interacts with even the tiniest amount of matter, a speck of carbon black was their example, all the wavelengths of the spectrum - the Kirchhoff law - soon appear. So we can say that the approach to equilibrium of a radiation field has the same origin of irreversibility as that of matter. Radiation without matter cannot equilibrate. Photons do not interact, except at the extremely high energies where they can convert to matter and anti-matter. Our new insight is that matter without radiation also cannot equilibrate in a way that escapes the reversibility and recurrence objections, despite what is taught in every textbook and review article on statistical mechanics to this day. It is thus the irreversible interaction of the two, light and matter, photons and electrons, that lies behind the increase of entropy in the universe. The second law of thermodynamics would not explain the increase of entropy except for the microscopic irreversibility we have described. Microscopic irreversibility not only explains the second law, it validates Boltzmann's brilliant assumption of "molecular disorder" to justify his statistical arguments. Zermelo's paradox was a later criticism of Ludwig Boltzmann's attempt to derive the increasing entropy required by the second law of thermodynamics. It also involves time. Assuming infinite available time, a finite universe with fixed matter, energy, and information will at some point return to any given earlier state. We now know that even a finite part of the universe cannot return to exactly the same state, because the surrounding universe will have aged and be in a different information state. This is the information philosophy solution to the problem of eternal recurrence, as seen by Arthur Stanley Eddington and H. Dieter Zeh.
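Both objections, Loschmidt's reversal and Zermelo's recurrence, can be watched at work in Mark Kac's ring model, a standard pedagogical toy (this is our own implementation; the ring size and marker density are arbitrary). The color imbalance decays as if irreversibly, yet the dynamics is exactly reversible and recurs perfectly after 2N steps.

```python
# Kac ring: N balls on a ring, each +1 (white) or -1 (black); every step each
# ball advances one site and flips color when it crosses a marked link.
import numpy as np

rng = np.random.default_rng(42)
N = 2000
markers = rng.random(N) < 0.1          # mark 10% of the links at random
state = np.ones(N, dtype=int)          # start with every ball white

for t in range(2 * N + 1):
    if t in (0, 10, 50, 200, N, 2 * N):
        print(f"step {t:5d}: color imbalance = {state.mean():+.3f}")
    state = np.roll(state, 1)          # every ball moves one site around the ring
    state[markers] *= -1               # crossing a marked link flips the color
```

The imbalance decays toward zero at first, exactly the H-theorem-like behavior, but returns to its initial value at step 2N: a finite-system recurrence of precisely the kind Zermelo invoked.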
If quantum mechanics is the solution, what should the problem be?
Vasil Penchev (Bulgarian Academy of Sciences), posted on 01.05.2020
The paper addresses the problem that quantum mechanics in fact resolves. Its viewpoint suggests that the crucial link between time and its course is omitted in the usual understanding of the problem. The common interpretation, underlain by the history of quantum mechanics, sees discreteness only on the Planck scale, which is transformed into continuity and even smoothness on the macroscopic scale. That approach is fraught with a series of seeming paradoxes. It suggests that the present mathematical formalism of quantum mechanics is only partly relevant to its problem, which is ostensibly known. The paper accepts just the opposite: the mathematical solution is absolutely relevant and serves as an axiomatic base, from which the real and yet hidden problem is deduced. Wave-particle duality, Hilbert space, both the probabilistic and many-worlds interpretations of quantum mechanics, quantum information, and the Schrödinger equation are included in that base. The Schrödinger equation is understood as a generalization of the law of energy conservation to past, present, and future moments of time. The deduced real problem of quantum mechanics is: "What is the universal law describing the course of time in any physical change, therefore including any mechanical motion?"
Mathematics and Physics • UCAS code GF13 • Option 3 years full time • Year of entry 2021 The course At the end of your first year you will have the option of transferring onto the second year of our four-year MSci course, which is aimed at students who want to pursue mathematics and physics at a high level after graduation, for example in research or in specialist roles in industry. Core Modules Year 1 • In this module you will develop an understanding of the fundamental algebraic structures, including the familiar integers and polynomial rings. You will learn how to apply Euclid's algorithm to find the greatest common divisor of two integers, and use mathematical induction to prove simple results. You will examine the use of arithmetic operations on complex numbers, extract roots of complex numbers, prove De Morgan's laws, and determine whether a given mapping is bijective. Year 2 • In this module you will develop an understanding of the concepts arising when the boundary conditions of a differential equation involve two points. You will look at eigenvalues and eigenfunctions in trigonometric differential equations, and determine the Fourier series for a periodic function. You will learn how to manipulate the Dirac delta-function and apply the Fourier transform. You will also examine how to solve differential equations where the coefficients are variable. Year 3 • Experimental or Theoretical Project • Advanced Skills Optional Modules Year 1 • All modules are core Year 2 • In this module you will develop an understanding of statistical modelling, becoming familiar with the theory and the application of linear models. You will learn how to use the classic simple linear regression model and its generalisations for modelling dependence between variables. You will examine how to apply non-parametric methods, such as the Wilcoxon and Kolmogorov-Smirnov goodness-of-fit tests, and learn to use the R open source software package. • In this module you will develop an understanding of the basic principles of the mathematical theory of probability. You will use the fundamental laws of probability to solve a range of problems, and prove simple theorems involving discrete and continuous random variables. You will learn how to formulate and explain fundamental limit theorems, such as the weak law of large numbers and the central limit theorem. • In this module you will develop an understanding of ring theory and how this area of algebra can be used to address the problem of factorising integers into primes. You will look at how these ideas can be extended to develop notions of 'prime factorisation' for other mathematical objects, such as polynomials. You will investigate the structure of explicit rings and learn how to recognise and construct ring homomorphisms and quotients. You will examine the Gaussian integers as an example of a Euclidean ring, Kronecker's theorem on field extensions, and the Chinese Remainder Theorem. • In this module you will develop an understanding of the algebraic structures known as groups. You will look at how groups represent symmetries in the world around us, examining examples that arise from the theory of matrices and permutations. You will see how groups are ubiquitous and used in many different fields of human study, including mathematics, physics, the study of crystals and atoms, public key cryptography, and music theory.
You will also consider how various counting problems concerning discrete patterns can be solved by means of group actions. • In this module you will develop an understanding of the language and concepts of linear algebra that are used within Mathematics. You will look at topics in linear algebra and the theory of modules, which can be seen as generalisations of vector spaces. You will learn how to use alternative matrix representations, such as the Jordan canonical or the rational canonical form, and see why they are important in mathematics. • In this module you will develop an understanding of the convergence of series. You will look at the Weierstrass definition of a limit and use standard tests to investigate the convergence of commonly occurring series. You will consider the power series of standard functions, and analyse the Intermediate Value and Mean Value Theorems. You will also examine the properties of the Riemann integral. Year 3 • You will carry out a detailed investigation on a topic of your choosing, guided by an academic supervisor. You will prepare a written report around 7,000 words in length, and give a ten-minute presentation outlining your findings. • In this module you will develop an understanding of a range of methods for teaching children up to A-level standard. You will act as a role model for pupils, devising appropriate ways to convey the principles and concepts of mathematics. You will spend one session a week in a local school, taking responsibility for preparing lesson plans, putting together relevant learning aids, and delivering some of the classes. You will work with a specific teacher, who will act as a trainer and mentor, and you will gain valuable transferable skills. • In this module you will develop an understanding of how prime numbers are the building blocks of the integers 0, ±1, ±2, … You will look at how simple equations using integers can be solved, and examine whether a number like 2017 can be written as a sum of two integer squares. You will also see how Number Theory can be used in other areas such as Cryptography, Computer Science and Field Theory. • In this module you will develop an understanding of a range of methods used for testing and proving primality, and for the factorisation of composite integers. You will look at the theory of binary quadratic forms, elliptic curves, and quadratic number fields, considering the principles behind state-of-the-art factorisation methods. You will also look at how to analyse the complexity of fundamental number-theoretic algorithms. • In this module you will develop an understanding of the different classes of computational complexity. You will look at computational hardness, learning how to deduce cryptographic properties of related algorithms and protocols. You will examine the concept of a Turing machine, and consider the millennium problems, including P vs NP, with a $1,000,000 prize on offer from the Clay Mathematics Institute if a correct solution can be found. • In this module you will develop an understanding of efficient algorithm design and its importance for handling large inputs. You will look at how computers have changed the world in the last few decades, and examine the mathematical concepts that have driven these changes. You will consider the theory of algorithm design, including dynamic programming, handling recurrences, worst-case analysis, and basic data structures such as arrays, stacks, balanced search trees, and hashing.
• In this module you will develop an understanding of quantum theory, and the development of the field to explain the behaviour of particles at the atomic level. You will look at the mathematical foundations of the theory, including the Schrödinger equation. You will examine how the theory is applied to one and three dimensional systems, including the hydrogen atom, and see how a probabilistic theory is required to interpret what is measured. • In this module you will develop an understanding of how the Rayleigh-Ritz variational principle and perturbation theory can be used to obtain approximate solutions of the Schrödinger equation. You will look at the mathematical basis of the Periodic Table of Elements, considering spin and the Pauli exclusion principle. You will also examine the quantum theory of the interaction of electromagnetic radiation with matter. • In this module you will develop an understanding of how the theory of ideal fluids can be used to explain everyday phenomena in the world around us, such as how sound travels, how waves travel over the surface of a lake, and why golden syrup (or volcanic lava) flows differently from water. You will look at the essential features of compressible flow and consider basic vector analysis techniques. • In this module, you will develop an understanding of non-linear dynamical systems. You will investigate whether the behaviour of a non-linear system can be predicted from the corresponding linear system, and see how dynamical systems can be used to analyse mechanisms such as the spread of disease, the stability of the universe, and the evolution of economic systems. You will gain an insight into the 'secrets' of the non-linear world and the appearance of chaos, examining the significant developments achieved in this field during the final quarter of the 20th Century. • In this module you will develop an understanding of the main principles and methods of statistics, in particular the theory of parametric estimation and hypothesis testing. You will learn how to formulate statistical problems in mathematical terms, looking at concepts such as Bayes estimators, the Neyman-Pearson framework, likelihood ratio tests, and decision theory. • In this module you will develop an understanding of some of the descriptive methods and theoretical techniques that are used to analyse time series. You will look at the standard theory around several prototype classes of time series models and learn how to apply appropriate methods of time series analysis and forecasting to a given set of data using Minitab, a statistical computing package. You will examine inferential and associated algorithmic aspects of time-series modelling and simulate time series based on several prototype classes. • In this module you will develop an understanding of the probabilistic methods used to model systems with uncertain behaviour. You will look at the structure and concepts of discrete and continuous time Markov chains with countable state space, and consider the methods of conditional expectation. You will learn how to use generating functions, and construct a probability model for a variety of problems. • In this module you will develop an understanding of the mathematics of communication, focusing on digital communication as used across the internet and by mobile telephones. You will look at compression, considering how small a file, such as a photo or video, can be made, and therefore how the use of data can be minimised.
You will examine error correction, seeing how communications may be correctly received even if something goes wrong during the transmission, such as intermittent wifi signal. You will also analyse the noiseless coding theorem, defining and using the concept of channel capacity. • In this module you will develop an understanding of how the behaviour of quantum systems can be harnessed to perform information processing tasks that are otherwise difficult, or impossible, to carry out. You will look at basic phenomena such as quantum entanglement and the no-cloning principle, seeing how these can be used to perform, for example, quantum key distribution. You will also examine a number of basic quantum computing algorithms, observing how they outperform their classical counterparts when run on a quantum computer. • In this module you will develop an understanding of how financial markets operate, with a focus on the ideas of risk and return and how they can be measured. You will look at the random behaviour of the stock market, Markowitz portfolio optimisation theory, the Capital Asset Pricing Model, the Binomial model, and the Black-Scholes formula for the pricing of options. • In this module you will develop an understanding of some of the standard techniques and concepts of combinatorics, including methods of counting, generating functions, probabilistic methods, permutations, and Ramsey theory. You will see how algebra and probability can be used to count abstract mathematical objects, and how to count sets by inclusion and exclusion. You will examine the applications of number theory and consider the use of simple probabilistic tools for solving combinatorial problems. • In this module you will develop an understanding of how error correcting codes are used to store and transmit information in technologies such as DVDs, telecommunication networks and digital television. You will look at the methods of elementary enumeration, linear algebra and finite fields, and consider the main coding theory problem. You will see how error correcting codes can be used to reconstruct the original information even if it has been altered or degraded. • In this module you will develop an understanding of public key cryptography and the mathematical ideas that underpin it, including discrete logarithms, lattices and elliptic curves. You will look at several important public key cryptosystems, including RSA, Rabin, ElGamal encryption and Schnorr signatures (a short illustrative sketch follows this module list). You will consider notions of security and attack models relevant for modern theoretical cryptography, such as indistinguishability and adaptive chosen ciphertext attack. • In this module you will develop an understanding of Field Theory. You will learn how to express equations such as X^2017 = 1 in a formal algebraic setting, how to classify finite fields, and how to determine the number of irreducible polynomials over a finite field. You will also consider some of the applications of fields, including ruler and compass constructions and why it is impossible to generically trisect an angle using them. • Advanced Skills • Atomic Physics • Nonlinear Systems and Chaos The course has a flexible, modular structure and you will take a total of 12 course units at a rate of four, 30-credit modules per year. In addition to our compulsory core modules you will be free to choose between a number of optional courses. Some contribute 15 credits to your overall award while others contribute the full 30.
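As a taste of the public key cryptography module mentioned above, here is a toy RSA key pair built from Euclid's algorithm (an illustrative sketch of ours, not course material, using deliberately tiny and insecure primes):

```python
# Toy RSA: the whole scheme rests on Euclid's algorithm and modular arithmetic.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def egcd(a, b):
    """Extended Euclid: (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q, e = 61, 53, 17                  # deliberately tiny, insecure toy primes
n, phi = p * q, (p - 1) * (q - 1)
assert gcd(e, phi) == 1               # e must be invertible modulo phi
d = egcd(e, phi)[1] % phi             # private exponent: e*d = 1 (mod phi)

message = 42
cipher = pow(message, e, n)           # encrypt with the public key (n, e)
assert pow(cipher, d, n) == message   # decrypt with the private key d
print(f"n = {n}, d = {d}, enc(42) = {cipher}")
```

Real RSA uses primes hundreds of digits long; the point here is only that the scheme is built from the extended Euclidean algorithm taught in the first-year algebra module.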
We use a variety of teaching methods and there is a strong focus on small group teaching in the department. You will attend 12 to 15 hours of formal teaching in a typical week, including lectures, tutorials, problem-solving workshops, laboratory work and practical sessions. You will also be expected to work on worksheets, revision and project work outside of these times. In year 2, teaching will mainly be delivered through lectures and workshops and in year 3, mostly through relatively small group lectures. Our courses are mostly examined by two-hour written examinations at the end of the year but many include a coursework or in-class test element as well. Experimental work is generally assessed by written reports or oral presentation. A minimum of six of the eight course units must be passed each year, with a minimum score of 40%. In year 3 there are optional courses which are examined solely by a project and/or presentation. Outside of class time, you will be expected to work on group projects and independent study, with access to the College’s comprehensive e-learning facility, Moodle. A Levels: AAA-ABB Required subjects: • A-level in Mathematics at grade A • A-Level in Physics at grade A • A pass in the practical element of all Science A-levels taken English language requirements The scores we require • IELTS: 6.5 overall. No subscore lower than 5.5. • Cambridge English: Advanced (CAE) grade C. Country-specific requirements For international students who do not meet the direct entry requirements, we offer an International Foundation Year, run by Study Group at the Royal Holloway International Study Centre. Upon successful completion, you may progress on to selected undergraduate degree programmes at Royal Holloway, University of London. With our internationally recognised Mathematics and Physics degree, you will be in demand for your advanced understanding of the theoretical and practical aspects of both disciplines, as well as for your wide range of transferable skills, such as data handling and analysis, numeracy, logical thinking, technical skills and creative problem-solving abilities. Graduate employment levels for physicists are amongst the highest of any subject. Our Department of Mathematics is also part of the School of Mathematics and Information Security and enjoys particularly strong ties with the information security sector as well as with industry at large. In physics, we benefit from strong collaborative ties with international projects and laboratories such as CERN, the National Physical Laboratory (NPL) and SNOLAB. Recent mathematics and physics graduates have gone on to enjoy successful careers in a wide range of careers, including business management, IT consultancy, computer analysis and programming, accountancy, the civil service, teaching, actuarial science, finance, risk analysis, research and engineering. We have graduates working for organisations such as KPMG, Ernst & Young, the Ministry of Defence, Barclays Bank, Lloyds Banking Group, the Department of Health, Logica, McLaren and TowersWatson, and in research teams tackling problems as diverse as aircraft design, operational research and cryptography.  • According to the Institute of Physics, physics-related industries employ more than 1.79 million people in the UK, and physics graduates typically earn more than those in other disciplines. • Both departments offer competitive work experience schemes, with short-term placements and paid summer internships available during the summer holidays. 
• The University of London Careers Advisory Service also offers tailored sessions on finding relevant summer internships or holiday jobs and securing employment after graduation.
Time-dependent ballistic phenomena of electron injected into half-ellipse confined room
Takuji Koiso, Masakazu Muraguchi, Kyozaburo Takeda, Naoki Watanabe
Research output: Contribution to journal, Article. 10 citations (Scopus).

We theoretically studied the time-developing ballistic phenomena of a single electron confined in a half-ellipse infinite-potential wall by solving the time-dependent Schrödinger equation numerically. We also solved the corresponding Newton equation in order to compare the classical results with the quantum ones, and extracted the quantum features. The ellipse-shaped potential wall completely reflects an electron and yields a focusing ratio of unity in the classical limit. The dispersion of the electron's wave packet, however, weakens this characteristic nature and reduces the focusing ratio below unity. Because the dispersion also blurs the electron's arrival at the collector, we define an effective arrival time by finding inflections in the time-dependent profile of the probability density at the collector. Based on this second-derivative technique, we further determine the quantum arrival time (QAT), at which the intrusion of the wave packet occurs dominantly. Comparing this QAT with the classical arrival time (CAT) determines whether the corresponding ballistic propagation should be discussed on the basis of quantum considerations or the classical prediction. We further studied how a change in the shape of the half-ellipse potential wall affects the ballistic phenomena, through changes in the ellipticity γ, the system size L and the dispersion degree σ of the wave packet. With the ellipse-shaped infinite-potential wall, the application of a magnetic field causes irrational cyclotron motion, assisted by the ellipse potential, in addition to the rational cyclotron motions. The numerical solution of the time-dependent Schrödinger equation reveals a unique cyclotron motion whose peculiarity is caused by the dispersion of the wave packet and is rarely predicted by the classical limit.

Original language: English. Pages: 4252-4268 (17 pages). Issue: 6A. Publication status: Published, 1 June 2005.
Keywords:
• Ballistic phenomena
• Classical arrival time
• Cyclotron motion
• Half-ellipse confined room
• Quantum arrival time
• Time-dependent Schrödinger equation
• Wave packet
ASJC Scopus subject areas:
• Engineering (all)
• Physics and Astronomy (all)
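The calculations summarised above hinge on propagating a dispersing wave packet with the time-dependent Schrödinger equation. As a rough illustration of that kind of computation (not the authors' actual two-dimensional half-ellipse code), the following sketch propagates a free one-dimensional Gaussian packet with the split-step Fourier method; the grid, the units (ħ = m = 1) and the packet parameters σ and k0 are arbitrary assumptions:

```python
import numpy as np

# Illustrative only: free 1D Gaussian wave packet, units with hbar = m = 1.
N, Lbox = 1024, 200.0                      # grid points and box length (assumed)
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=Lbox/N)    # angular wavenumbers of the grid

sigma, k0 = 2.0, 1.5                       # initial width and momentum (assumed)
psi = np.exp(-x**2/(4*sigma**2) + 1j*k0*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*(Lbox/N))   # normalise to unit probability

dt, steps = 0.05, 400
kinetic = np.exp(-1j*(k**2/2)*dt)          # exact free propagator in k-space
for _ in range(steps):
    psi = np.fft.ifft(kinetic*np.fft.fft(psi))

# The packet drifts and, crucially, spreads: its r.m.s. width grows with time.
dx = Lbox/N
prob = np.abs(psi)**2
xc = np.sum(x*prob)*dx
width = np.sqrt(np.sum((x - xc)**2*prob)*dx)
print(f"after t = {dt*steps:.0f}: centre = {xc:.1f}, width = {width:.2f} (initial {sigma})")
```

The printed width grows well beyond the initial σ, which is exactly the dispersion that, in the paper's setting, pushes the focusing ratio below its classical value of unity.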
Complex number

A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram, representing the complex plane. Re is the real axis, Im is the imaginary axis, and i is the square root of −1.

A complex number is a number consisting of a real part and an imaginary part. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the number line for the real part and adding a vertical axis to plot the imaginary part. In this way the complex numbers contain the ordinary real numbers while extending them in order to solve problems that would be impossible with only real numbers. Complex numbers are used in many scientific fields, including engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious", during his attempts to find solutions to cubic equations in the 16th century.[1]

Introduction and definition

Complex numbers have been introduced to allow for solutions of certain equations that have no real solution: the equation x^2 + 1 = 0 has no real solution x, since the square of x is 0 or positive, so x^2 + 1 cannot be zero. Complex numbers are a solution to this problem. The idea is to enhance the real numbers by introducing a non-real number i whose square is −1, so that x = i and x = −i are the two solutions to the preceding equation. A complex number is an expression of the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying i^2 = −1. For example, −3.5 + 2i is a complex number. The real number a of the complex number z = a + bi is called the real part of z and the real number b is the imaginary part.[2] They are denoted Re(z) or ℜ(z) and Im(z) or ℑ(z), respectively. For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Some authors write a + ib instead of a + bi. In some disciplines (in particular, electrical engineering, where i is a symbol for current), in order to avoid notational conflict, the imaginary unit i is instead written as j, so complex numbers are written as a + bj or a + jb. A real number a can usually be regarded as a complex number with an imaginary part of zero, that is to say, a + 0i. However, the sets are defined differently and have slightly different operations defined; for instance, comparison operations are not defined for complex numbers. Complex numbers whose real part is zero, that is to say, those of the form 0 + bi, are called imaginary numbers. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when b is negative, it is common to write a − bi instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i. The set of all complex numbers is denoted by C or ℂ.

The complex plane

Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand diagram; a + bi is the rectangular expression of the point.

A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form.
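For readers who want to experiment with these definitions, Python has complex numbers built in; the imaginary unit is written j, the electrical-engineering convention mentioned above. A minimal check of the real part, the imaginary part and the defining property i^2 = −1, using the example from the text:

```python
z = -3.5 + 2j       # the complex number -3.5 + 2i from the text
print(z.real)       # -3.5   i.e. Re(z)
print(z.imag)       # 2.0    i.e. Im(z)
print((1j)**2)      # (-1+0j): the imaginary unit squares to -1
```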
The defining characteristic of a position vector is that it has magnitude and direction. These are emphasised in a complex number's polar form, and it turns out notably that the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating it counterclockwise through 90° about the origin: (a + bi)i = ai + bi^2 = −b + ai.

History in brief

Main section: History

The solution of a general cubic equation in radicals (without trigonometric functions) may require intermediate calculations containing the square roots of negative numbers, even when the final solutions are real numbers, a situation known as casus irreducibilis. This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545, though his understanding was rudimentary. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[3] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.

Elementary operations

Geometric representation of z and its conjugate \bar{z} in the complex plane.

The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted \bar{z} or z^*. Geometrically, \bar{z} is the "reflection" of z about the real axis. In particular, conjugating twice gives the original complex number: \bar{\bar{z}} = z. The real and imaginary parts of a complex number can be extracted using the conjugate:

\operatorname{Re}(z) = \tfrac{1}{2}(z + \bar{z}), \quad \operatorname{Im}(z) = \tfrac{1}{2i}(z - \bar{z}).

Moreover, a complex number is real if and only if it equals its conjugate. Conjugation distributes over the standard arithmetic operations:

\overline{z + w} = \bar{z} + \bar{w}, \quad \overline{zw} = \bar{z}\,\bar{w}, \quad \overline{z/w} = \bar{z}/\bar{w}.

The reciprocal of a nonzero complex number z = x + yi is given by

\frac{1}{z} = \frac{\bar{z}}{z\bar{z}} = \frac{\bar{z}}{x^2 + y^2}.

This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying more general reflections than ones about a line, can also be expressed in terms of complex numbers.

Addition and subtraction

Complex numbers are added by adding the real and imaginary parts of the summands. That is to say:

(a + bi) + (c + di) = (a + c) + (b + d)i.
Similarly, subtraction is defined by

(a + bi) - (c + di) = (a - c) + (b - d)i.

Using the visualization of complex numbers in the complex plane, addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are 0, A and B. Equivalently, X is the point such that the triangles with vertices 0, A, B, and X, B, A, are congruent.

Multiplication and division

The multiplication of two complex numbers is defined by the following formula:

(a + bi)(c + di) = (ac - bd) + (bc + ad)i.

In particular, the square of the imaginary unit is −1: i^2 = i \cdot i = -1. The preceding definition of multiplication of general complex numbers is the natural way of extending this fundamental property of the imaginary unit. Indeed, treating i as a variable, the formula follows like this:

(a + bi)(c + di) = ac + bci + adi + bidi (distributive law)
= ac + bidi + bci + adi (commutative law of addition: the order of the summands can be changed)
= ac + bdi^2 + (bc + ad)i (commutative law of multiplication: the order of the factors can be changed)
= (ac - bd) + (bc + ad)i (fundamental property of the imaginary unit).

The division of two complex numbers is defined in terms of complex multiplication, described above, and real division:

\frac{a + bi}{c + di} = \left(\frac{ac + bd}{c^2 + d^2}\right) + \left(\frac{bc - ad}{c^2 + d^2}\right)i.

Division can be defined in this way because of the following observation:

\frac{a + bi}{c + di} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \left(\frac{ac + bd}{c^2 + d^2}\right) + \left(\frac{bc - ad}{c^2 + d^2}\right)i.

As shown earlier, c − di is the complex conjugate of the denominator c + di. The real part c and the imaginary part d of the denominator must not both be zero for division to be defined.

Square root

The square roots of a + bi (with b ≠ 0) are ±(γ + δi), where

\gamma = \sqrt{\frac{a + \sqrt{a^2 + b^2}}{2}}, \quad \delta = \operatorname{sgn}(b)\sqrt{\frac{-a + \sqrt{a^2 + b^2}}{2}},

where sgn is the signum function. This can be seen by squaring ±(γ + δi) to obtain a + bi.[4][5] Here \sqrt{a^2 + b^2} is called the modulus of a + bi, and the square root with non-negative real part is called the principal square root.

Polar form

Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(cos φ + i sin φ) or re^{iφ} are polar expressions of the point.

Absolute value and argument

Another way of encoding points in the complex plane other than using the x- and y-coordinates is to use the distance of a point P to O, the point whose coordinates are (0, 0) (the origin), and the angle of the line through P and O. This idea leads to the polar form of complex numbers. The absolute value (or modulus or magnitude) of a complex number z = x + yi is

r = |z| = \sqrt{x^2 + y^2}.

If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The argument or phase of z is the angle of the radius OP with the positive real axis, and is written as arg(z).
As with the modulus, the argument can be found from the rectangular form x + iy:[6]

\varphi = \arg(z) = \begin{cases} \arctan(y/x) & \text{if } x > 0 \\ \arctan(y/x) + \pi & \text{if } x < 0 \text{ and } y \ge 0 \\ \arctan(y/x) - \pi & \text{if } x < 0 \text{ and } y < 0 \\ \pi/2 & \text{if } x = 0 \text{ and } y > 0 \\ -\pi/2 & \text{if } x = 0 \text{ and } y < 0 \\ \text{indeterminate} & \text{if } x = 0 \text{ and } y = 0. \end{cases}

The value of φ must always be expressed in radians. It can change by any multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is undefined, but an arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2: φ = atan2(imaginary, real). Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specifies the position of a point on the plane. Recovering the original rectangular coordinates from the polar form is done by the formula called the trigonometric form

z = r(\cos\varphi + i\sin\varphi).

Using Euler's formula this can be written as

z = r e^{i\varphi}.

Using the cis function, this is sometimes abbreviated to z = r cis φ. In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[7] z = r∠φ.

Multiplication, division and exponentiation in polar form

The relevance of representing complex numbers in polar form stems from the fact that the formulas for multiplication, division and exponentiation are simpler than the ones using Cartesian coordinates. Given two complex numbers z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), the formula for multiplication is

z_1 z_2 = r_1 r_2 (\cos(\varphi_1 + \varphi_2) + i\sin(\varphi_1 + \varphi_2)).

In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-rotation counter-clockwise, which gives back i^2 = −1. As an example, (2 + i)(3 + i) = 5 + 5i. Since the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the arguments of 2 + i and 3 + i, which are arctan(1/2) and arctan(1/3), respectively. Thus, the formula

\frac{\pi}{4} = \arctan\frac{1}{2} + \arctan\frac{1}{3}

holds. As the arctan function can be approximated highly efficiently, formulas like this, known as Machin-like formulas, are used for high-precision approximations of π. Similarly, division is given by

\frac{z_1}{z_2} = \frac{r_1}{r_2}\left(\cos(\varphi_1 - \varphi_2) + i\sin(\varphi_1 - \varphi_2)\right).

This also implies de Moivre's formula for exponentiation of complex numbers with integer exponents:

z^n = r^n(\cos n\varphi + i\sin n\varphi).

The n-th roots of z are given by

\sqrt[n]{z} = \sqrt[n]{r}\left(\cos\left(\frac{\varphi + 2k\pi}{n}\right) + i\sin\left(\frac{\varphi + 2k\pi}{n}\right)\right)

for any integer k satisfying 0 ≤ k ≤ n − 1. Here \sqrt[n]{r} is the usual (positive) n-th root of the positive real number r.
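These polar-form rules are easy to verify numerically. Python's cmath module provides the polar decomposition (its argument is the principal value, computed with atan2 as above); the specific numbers below are just the examples from the text:

```python
import cmath, math

z1, z2 = 2 + 1j, 3 + 1j
r1, phi1 = cmath.polar(z1)     # modulus and principal argument
r2, phi2 = cmath.polar(z2)

# Multiplication: moduli multiply, arguments add.
product = cmath.rect(r1 * r2, phi1 + phi2)
print(product, z1 * z2)        # both (5+5j), with argument pi/4

# The Machin-like identity from the text: pi/4 = arctan(1/2) + arctan(1/3)
print(math.isclose(math.pi/4, math.atan(1/2) + math.atan(1/3)))  # True

# The n-th roots of z: r**(1/n) * exp(i*(phi + 2*pi*k)/n) for k = 0..n-1
def nth_roots(z, n):
    r, phi = cmath.polar(z)
    return [cmath.rect(r**(1/n), (phi + 2*math.pi*k)/n) for k in range(n)]

for w in nth_roots(1j, 3):     # the three cube roots of i
    print(w, w**3)             # each cubes back to (approximately) 1j
```

The cube roots of i computed here reappear in the historical discussion of Tartaglia's cubic formula later in the article.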
While the n-th root of a positive real number r is chosen to be the positive real number c satisfying c^n = r, there is no natural way of distinguishing one particular complex n-th root of a complex number. Therefore, the n-th root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as \sqrt[n]{z^n} = z (which hold for positive real numbers) do not in general hold for complex numbers.

Field structure

The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number a, its negative −a is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the laws of commutativity of addition and multiplication for any two complex numbers z1 and z2:

z_1 + z_2 = z_2 + z_1, \quad z_1 z_2 = z_2 z_1.

These two laws and the other requirements on a field can be proven from the formulas given above, using the fact that the real numbers themselves form a field. Unlike the reals, C is not an ordered field; that is to say, it is not possible to define a relation z1 < z2 that is compatible with the addition and multiplication. In fact, in any ordered field the square of any element is necessarily positive, so i^2 = −1 precludes the existence of an ordering on C. When the underlying field for a mathematical topic or construct is the field of complex numbers, the thing's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.

Solutions of polynomial equations

Given any complex numbers (called coefficients) a0, ..., an, the equation

a_n z^n + \dots + a_1 z + a_0 = 0

has at least one complex solution z, provided that at least one of the higher coefficients, a1, ..., an, is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x^2 − 2 does not have a rational root, since √2 is not a rational number) nor the real numbers R (the polynomial x^2 + a does not have a real solution for a > 0, since the square of x is positive for any real number x). There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one root. Because of this fact, theorems that hold "for any algebraically closed field" apply to C. For example, any complex matrix has at least one (complex) eigenvalue.

Algebraic characterization

The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ... + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of Qp also satisfies these three properties, so these two fields are isomorphic. Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice.
Another consequence of this algebraic characterization is that C contains many proper subfields which are isomorphic to C.

Characterization as a topological field

The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows one to specify notions such as convergence) does take into account the topological properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
• P is closed under addition, multiplication and taking inverses.
• If x and y are distinct elements of P, then either x − y or y − x is in P.
• If S is any nonempty subset of P, then S + P = x + P for some x in C.
Moreover, C has a nontrivial involutive automorphism x ↦ x* (namely complex conjugation), such that x x* is in P for any nonzero x in C. Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology, F is isomorphic as a topological field to C. The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.

Formal construction

Formal development

Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R^2 of ordered pairs (a, b) of real numbers. In this notation, the above formulas for addition and multiplication read

(a, b) + (c, d) = (a + c, b + d),
(a, b) \cdot (c, d) = (ac - bd, bc + ad).

It is then just a matter of notation to express (a, b) as a + ib. Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations which behave as is familiar from, say, the rational numbers. For example, the distributive law

(x + y)z = xz + yz

is required to hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form

a_n X^n + \dots + a_1 X + a_0,

where the a0, ..., an are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring. The quotient ring R[X]/(X^2 + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X^2 + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc.
correspond to the ones yielded by this abstract algebraic approach; the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.

Matrix representation of complex numbers

Complex numbers can also be represented by 2×2 matrices that have the following form:

\begin{pmatrix} a & -b \\ b & a \end{pmatrix}

Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices. The geometric description of the multiplication of complex numbers can also be phrased in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:

|z|^2 = \det\begin{pmatrix} a & -b \\ b & a \end{pmatrix} = a^2 - (-b)(b) = a^2 + b^2.

The conjugate \bar{z} corresponds to the transpose of the matrix. Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.

Complex analysis

Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values.

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

Complex exponential and related functions

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, C, endowed with the metric

\operatorname{d}(z_1, z_2) = |z_1 - z_2|,

is a complete metric space, which notably includes the triangle inequality

|z_1 + z_2| \le |z_1| + |z_2|

for any two complex numbers z1 and z2. Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written e^z, is defined as the infinite series

\exp(z) := 1 + z + \frac{z^2}{2 \cdot 1} + \frac{z^3}{3 \cdot 2 \cdot 1} + \cdots = \sum_{n=0}^{\infty} \frac{z^n}{n!},

and the series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's formula states:

\exp(i\varphi) = \cos(\varphi) + i\sin(\varphi)

for any real number φ; in particular

\exp(i\pi) = -1,

which is known as Euler's identity. Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation

\exp(z) = w

for any complex number w ≠ 0.
It can be shown that any such solution z, called a complex logarithm of w, satisfies

\log(w) = \ln|w| + i\arg(w),

where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π]. Complex exponentiation z^ω is defined as

z^\omega = \exp(\omega \log z).

Consequently, it is in general multi-valued. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of n-th roots mentioned above.

Holomorphic functions

A function f : C → C is called holomorphic if it satisfies the Cauchy-Riemann equations. For example, any R-linear map C → C can be written in the form

f(z) = az + b\bar{z}

with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand b\bar{z} is real-differentiable, but does not satisfy the Cauchy-Riemann equations. Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z_0)^n with a holomorphic function f(z), still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.

Some applications of complex numbers are:

Control theory

In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is especially important whether the poles and zeros are in the left or right half planes, i.e. have real part greater than or less than zero. If a system has poles that are
• in the right half plane, it will be unstable,
• all in the left half plane, it will be stable,
• on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.

Signal analysis

Complex numbers are used in signal analysis and other fields for a convenient description of periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg(z) the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form

f(t) = z e^{i\omega t},

where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above. In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
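As a small worked example of the phasor calculus just described (the component values below are invented for the illustration, not taken from the text), the impedances Z_R = R, Z_L = iωL and Z_C = 1/(iωC) combine by ordinary complex arithmetic:

```python
import cmath, math

# Series RLC circuit driven at frequency f; all component values are
# arbitrary example numbers chosen for the illustration.
R, L, C = 100.0, 0.1, 1e-6        # ohms, henries, farads
f = 500.0                          # hertz
w = 2 * math.pi * f                # angular frequency

Z = R + 1j*w*L + 1/(1j*w*C)        # series impedances simply add
mag, phase = cmath.polar(Z)
print(f"|Z| = {mag:.1f} ohms, phase = {math.degrees(phase):.1f} degrees")
# The modulus gives the ratio of voltage amplitude to current amplitude,
# and the argument gives the phase shift between voltage and current.
```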
This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.

Improper integrals

In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

Quantum mechanics

The complex number field is relevant in the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics, the Schrödinger equation and Heisenberg's matrix mechanics, make use of complex numbers.

Relativity

In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Dynamic equations

In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = r^t.

Fluid dynamics

In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

Fractals

Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.

Algebraic number theory

Construction of a regular polygon using straightedge and compass.

As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers; they are a principal object of study in algebraic number theory. Compared to \overline{Q}, the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge, a purely geometric problem. Another example is Pythagorean triples (a, b, c), that is to say integers satisfying

a^2 + b^2 = c^2

(which implies that the triangle having side lengths a, b, and c is a right triangle). They can be studied by considering Gaussian integers, that is, numbers of the form x + iy, where x and y are integers.

Analytic number theory

Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta-function ζ(s) is related to the distribution of prime numbers.
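Returning for a moment to the fractals mentioned under fluid dynamics above: membership of a point c in the Mandelbrot set is decided by iterating z → z^2 + c and watching whether the orbit stays bounded. A bare-bones sketch (the resolution, the escape radius of 2 and the 50-iteration cap are conventional but arbitrary choices):

```python
# Minimal ASCII rendering of the Mandelbrot set: c is treated as a member
# if the iteration z -> z**2 + c has not escaped |z| > 2 after 50 steps.
for im in range(-12, 13):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20.0, im / 10.0)   # -2 <= Re c <= 1, -1.2 <= Im c <= 1.2
        z = 0j
        for _ in range(50):
            z = z * z + c
            if abs(z) > 2:
                row += " "
                break
        else:
            row += "#"
    print(row)
```

Run from a terminal, this prints a coarse ASCII silhouette of the set; refining the grid and the iteration cap sharpens the familiar cardioid-and-bulbs shape.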
History

The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Heron of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term \sqrt{81 - 144} in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive.[8]

The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolo Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's cubic formula gives the solution to the equation x^3 - x = 0 as

\frac{1}{\sqrt{3}}\left(\sqrt{-1}^{1/3} + \frac{1}{\sqrt{-1}^{1/3}}\right).

[Figure: The three cube roots of −1, two of which are complex.]

At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z^3 = i has solutions −i, \frac{\sqrt{3}}{2} + \frac{1}{2}i and -\frac{\sqrt{3}}{2} + \frac{1}{2}i. Substituting these in turn for \sqrt{-1}^{1/3} in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x^3 − x = 0. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations, and he developed the rules for complex arithmetic in trying to resolve these issues.

The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature:[9]

[...] quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine. ([...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.)

A further source of confusion was that the equation \sqrt{-1}^2 = \sqrt{-1}\sqrt{-1} = -1 seemed to be capriciously inconsistent with the algebraic identity \sqrt{a}\sqrt{b} = \sqrt{ab}, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/\sqrt{a} = \sqrt{1/a}) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of \sqrt{-1} to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra textbook, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.

In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions.
For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:

(\cos\theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta.

In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:

\cos\theta + i\sin\theta = e^{i\theta}

by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.

The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way", although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[10] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = \sqrt{a^2 + b^2} the modulus; Cauchy (1828) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for \sqrt{-1}, introduced the term complex number for a + bi, and called a^2 + b^2 the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.

Generalizations and related notions

The process of extending the field R of reals to C is known as the Cayley-Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O, which (as a real vector space) are of dimension 4 and 8, respectively. However, with increasing dimension the algebraic properties familiar from real and complex numbers vanish: the quaternions are only a skew field, i.e. x·y ≠ y·x for some quaternions x, y, and the multiplication of octonions fails (in addition to not being commutative) to be associative: (x·y)·z ≠ x·(y·z). However, all of these are normed division algebras over R. By Hurwitz's theorem they are the only ones. The next step in the Cayley-Dickson construction, the sedenions, fails to have this structure.
The Cayley-Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis 1, i. This means the following: the R-linear map

\mathbb{C} \rightarrow \mathbb{C}, \quad z \mapsto wz

for some fixed complex number w can be represented by a 2×2 matrix (once a basis has been chosen). With respect to the basis 1, i, this matrix is

\begin{pmatrix} \operatorname{Re}(w) & -\operatorname{Im}(w) \\ \operatorname{Im}(w) & \operatorname{Re}(w) \end{pmatrix},

i.e., the one mentioned in the section on the matrix representation of complex numbers above. While this is a linear representation of C in the 2 × 2 real matrices, it is not the only one. Any matrix

J = \begin{pmatrix} p & q \\ r & -p \end{pmatrix}, \quad p^2 + qr + 1 = 0,

has the property that its square is the negative of the identity matrix: J^2 = −I. Then { aI + bJ : a, b ∈ R } is also isomorphic to the field C, and gives an alternative complex structure on R^2. This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x^2 − 1) (as opposed to R[x]/(x^2 + 1)). In this ring, the equation a^2 = 1 has four solutions.

The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closure \overline{\mathbf{Q}_p} of Qp still carries a norm, but (unlike C) is not complete with respect to it. The completion \mathbf{C}_p of \overline{\mathbf{Q}_p} turns out to be algebraically closed. This field is called the field of p-adic complex numbers by analogy. The fields R and Qp and their finite field extensions, including C, are local fields.

1. Burton (1995, p. 294)
2. Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), College Algebra and Trigonometry (6th ed.), Cengage Learning, Chapter P, p. 66, ISBN 0618825150
3. Katz (2004, §9.1.4)
4. Abramowitz, Milton; Stegun, Irene A. (1964), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Courier Dover Publications, Section 3.7.26, p. 17, ISBN 0-486-61272-4
5. Cooke, Roger (2008), Classical Algebra: Its Nature, Origins, and Uses, John Wiley and Sons, p. 59, ISBN 0-470-25952-3
6. Kasana, H. S. (2005), Complex Variables: Theory and Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5
7. Nilsson, James William; Riedel, Susan A. (2008), Electric Circuits (8th ed.), Prentice Hall, Chapter 9, p. 338, ISBN 0-131-98925-1
8. Nahin, Paul J. (2007), An Imaginary Tale: The Story of √−1, Princeton University Press, ISBN 9780691127989
9. Descartes, René (1954) [1637], La Géométrie | The Geometry of René Descartes with a facsimile of the first edition, Dover Publications, ISBN 0486600688
10. Hardy, G. H.; Wright, E. M. (2000) [1938], An Introduction to the Theory of Numbers (4th ed.), OUP Oxford, p. 189, ISBN 0199219869

Historical references
• Burton, David M.
(1995), The History of Mathematics (3rd ed.), New York: McGraw-Hill, ISBN 978-0-07-009465-9
• Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley, ISBN 978-0-321-16193-2
• Nahin, Paul J. (1998), An Imaginary Tale: The Story of √−1 (hardcover ed.), Princeton University Press, ISBN 0-691-02795-1. A gentle introduction to the history of complex numbers and the beginnings of complex analysis.
• Ebbinghaus, H.-D., et al. (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0. An advanced perspective on the historical development of the concept of number.

Further reading
• The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8. Chapters 4-7 in particular deal extensively (and enthusiastically) with complex numbers.
• Unknown Quantity: A Real and Imaginary History of Algebra, by John Derbyshire; Joseph Henry Press; ISBN 0-309-09657-X (hardcover 2006). A very readable history with emphasis on solving polynomial equations and the structures of modern algebra.
• Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual interpretations.
Editor's note: The following is a text-only version. The periodic table of the elements is one of the most powerful icons in science: a single document that consolidates much of our knowledge of chemistry. A version hangs on the wall of nearly every chemical laboratory and lecture hall in the world. Indeed, nothing quite like it exists in the other disciplines of science. The story of the periodic system for classifying the elements can be traced back over 200 years. Throughout its long history, the periodic table has been disputed, altered and improved as science has progressed and as new elements have been discovered [see "Making New Elements," by Peter Armbruster and Fritz Peter Hessberger]. But despite the dramatic changes that have taken place in science over the past century—namely, the development of the theories of relativity and quantum mechanics—there has been no revolution in the basic nature of the periodic system. In some instances, new findings initially appeared to call into question the theoretical foundations of the periodic table, but each time scientists eventually managed to incorporate the results while preserving the table's fundamental structure. Remarkably, the periodic table is thus notable both for its historical roots and for its modern relevance. The term "periodic" reflects the fact that the elements show patterns in their chemical properties in certain regular intervals. Were it not for the simplification provided by this chart, students of chemistry would need to learn the properties of all 112 known elements. Fortunately, the periodic table allows chemists to function by mastering the properties of a handful of typical elements; all the others fall into so-called groups or families with similar chemical properties. (In the modern periodic table, a group or family corresponds to one vertical column.) The discovery of the periodic system for classifying the elements represents the culmination of a number of scientific developments, rather than a sudden brainstorm on the part of one individual. Yet historians typically consider one event as marking the formal birth of the modern periodic table: on February 17, 1869, a Russian professor of chemistry, Dimitri Ivanovich Mendeleev, completed the first of his numerous periodic charts. It included 63 known elements arranged according to increasing atomic weight; Mendeleev also left spaces for as yet undiscovered elements for which he predicted atomic weights. Prior to Mendeleev's discovery, however, other scientists had been actively developing some kind of organizing system to describe the elements. In 1787, for example, French chemist Antoine Lavoisier, working with Antoine Fourcroy, Louis-Bernard Guyton de Morveau and Claude-Louis Berthollet, devised a list of the 33 elements known at the time. Yet such lists are simply one-dimensional representations. The power of the modern table lies in its two- or even three-dimensional display of all the known elements (and even the ones yet to be discovered) in a logical system of precisely ordered rows and columns. In an early attempt to organize the elements into a meaningful array, German chemist Johann Döbereiner pointed out in 1817 that many of the known elements could be arranged by their similarities into groups of three, which he called triads. Döbereiner singled out triads of the elements lithium, sodium and potassium as well as chlorine, bromine and iodine.
He noticed that if the three members of a triad were ordered according to their atomic weights, the properties of the middle element fell in between those of the first and third elements. For example, lithium, sodium and potassium all react vigorously with water. But lithium, the lightest of the triad, reacts more mildly than the other two, whereas the heaviest of the three, potassium, explodes violently. In addition, Döbereiner showed that the atomic weight of the middle element is close to the average of the weights for the first and third members of the triad. Döbereiner's work encouraged others to search for correlations between the chemical properties of the elements and their atomic weights. One of those who pursued the triad approach further during the 19th century was Peter Kremers of Cologne, who suggested that certain elements could belong to two triads placed perpendicularly. Kremers thus broke new ground by comparing elements in two directions, a feature that later proved to be an essential aspect of Mendeleev's system. In 1857 French chemist Jean-Baptiste-André Dumas turned away from the idea of triads and focused instead on devising a set of mathematical equations that could account for the increase in atomic weight among several groups of chemically similar elements. But as chemists now recognize, any attempt to establish an organizing pattern based on an element's atomic weight will not succeed, because atomic weight is not the fundamental property that characterizes each of the elements. Periodic Properties The crucial characteristic of Mendeleev's system was that it illustrated a periodicity, or repetition, in the properties of the elements at certain regular intervals. This feature had been observed previously in an arrangement of elements by atomic weight devised in 1862 by French geologist Alexandre-Emile Béguyer de Chancourtois. The system relied on a fairly intricate geometric configuration: de Chancourtois positioned the elements according to increasing atomic weight along a spiral inscribed on the surface of a cylinder and inclined at 45 degrees from the base. The first full turn of the spiral coincided with the element oxygen, and the second full turn occurred at sulfur. Elements that lined up vertically on the surface of the cylinder tended to have similar properties, so this arrangement succeeded in capturing some of the patterns that would later become central to Mendeleev's system. Yet for a number of reasons, de Chancourtois's system did not have much effect on scientists of the time: his original article failed to include a diagram of the table, the system was rather complicated, and the chemical similarities among elements were not displayed very convincingly. Several other researchers put forward their own versions of a periodic table during the 1860s. Using newly standardized values for atomic weights, English chemist John Newlands suggested in 1864 that when the elements were arranged in order of atomic weight, any one of the elements showed properties similar to those of the elements eight places ahead and eight places behind in the list—a feature that Newlands called "the law of octaves." In his original table, Newlands left empty spaces for missing elements, but his more publicized version of 1866 did not include these open slots. Other chemists immediately raised objections to the table because it would not be able to accommodate any new elements that might be discovered. In fact, some investigators openly ridiculed Newlands's ideas.
At a meeting of the Chemical Society in London in 1866, George Carey Foster of University College London asked Newlands whether he had considered ordering the elements alphabetically, because any kind of arrangement would present occasional coincidences. As a result of the meeting, the Chemical Society refused to publish Newlands’s paper. Despite its poor reception, however, Newlands’s work does represent the first time anyone used a sequence of ordinal numbers (in this case, one based on the sequence of atomic weights) to organize the elements. In this respect, Newlands anticipated the modern organization of the periodic table, which is based on the sequence of so-called atomic numbers. (The concept of atomic number, which indicates the number of protons present within an atom’s nucleus, was not established until the early 20th century.) The Modern Periodic Table Chemist Julius Lothar Meyer of Breslau University in Germany, while in the process of revising his chemistry textbook in 1868, produced a periodic table that turned out to be remarkably similar to Mendeleev’s famous 1869 version—although Lothar Meyer failed to classify all the elements correctly. But the table did not appear in print until 1870 because of a publisher’s delay—a factor that contributed to an acrimonious dispute for priority that ensued between Lothar Meyer and Mendeleev. Around the same time, Mendeleev assembled his own periodic table while he, too, was writing a textbook of chemistry. Unlike his predecessors, Mendeleev had sufficient confidence in his periodic table to use it to predict several new elements and the properties of their compounds. He also corrected the atomic weights of some already known elements. Interestingly, Mendeleev admitted to having seen certain earlier tables, such as those of Newlands, but claimed to have been unaware of Lothar Meyer’s work when developing his chart. Although the predictive aspect of Mendeleev’s table was a major advance, it seems to have been overemphasized by historians, who have generally suggested that Mendeleev’s table was accepted especially because of this feature. These scholars have failed to notice that the citation from the Royal Society of London that accompanied the Davy Medal (which Mendeleev received in 1882) makes no mention whatsoever of his predictions. Instead Mendeleev’s ability to accommodate the already known elements may have contributed as much to the acceptance of the periodic system as did his striking predictions. Although numerous scientists helped to develop the periodic system, Mendeleev receives most of the credit for discovering chemical periodicity because he elevated the discovery to a law of nature and spent the rest of his life boldly examining its consequences and defending its validity. Defending the periodic table was no simple task—its accuracy was frequently challenged by subsequent discoveries. One notable occasion arose in 1894, when William Ramsay of University College London and Lord Rayleigh (John William Strutt) of the Royal Institution in London discovered the element argon; over the next few years, Ramsay announced the identification of four other elements—helium, neon, krypton and xenon—known as the noble gases. (The last of the known noble gases, radon, was discovered in 1900 by German physicist Friedrich Ernst Dorn.) The name “noble” derives from the fact that all these gases seem to stand apart from the other elements, rarely interacting with them to form compounds. 
As a result, some chemists suggested that the noble gases did not even belong in the periodic table. These elements had not been predicted by Mendeleev or anyone else, and only after six years of intense effort could chemists and physicists successfully incorporate the noble gases into the table. In the new arrangement, an additional column was introduced between the halogens (the gaseous elements fluorine, chlorine, bromine, iodine and astatine) and the alkali metals (lithium, sodium, potassium, rubidium, cesium and francium).

A second point of contention surrounded the precise ordering of the elements. Mendeleev's original table positioned the elements according to atomic weight, but in 1913 Dutch amateur theoretical physicist Anton van den Broek suggested that the ordering principle for the periodic table lay instead in the nuclear charge of each atom. Physicist Henry Moseley, working at the University of Manchester, tested this hypothesis, also in 1913, shortly before his tragic death in World War I. Moseley began by photographing the x-ray spectrum of 12 elements, 10 of which occupied consecutive places in the periodic table. He discovered that the frequencies of features called K-lines in the spectrum of each element were directly proportional to the squares of the integers representing the position of each successive element in the table. As Moseley put it, here was proof that "there is in the atom a fundamental quantity, which increases by regular steps as we pass from one element to the next." This fundamental quantity, first referred to as atomic number in 1920 by Ernest Rutherford, who was then at the University of Cambridge, is now identified as the number of protons in the nucleus.

Moseley's work provided a method that could be used to determine exactly how many empty spaces remained in the periodic table. After this discovery, chemists turned to using atomic number as the fundamental ordering principle for the periodic table, instead of atomic weight. This change resolved many of the lingering problems in the arrangement of the elements. For example, when iodine and tellurium were ordered according to atomic weight (with iodine first), the two elements appeared to be incorrectly positioned in terms of their chemical behavior. When ordered according to atomic number (with tellurium first), however, the two elements were in their correct positions.

Understanding the Atom

The periodic table inspired the work not only of chemists but also of atomic physicists struggling to understand the structure of the atom. In 1904, working at Cambridge, physicist J. J. Thomson (who also discovered the electron) developed a model of the atom, paying close attention to the periodicity of the elements. He proposed that the atoms of a particular element contained a specific number of electrons arranged in concentric rings. Furthermore, according to Thomson, elements with similar configurations of electrons would have similar properties; Thomson's work thus provided the first physical explanation for the periodicity of the elements. Although Thomson imagined the rings of electrons as lying inside the main body of the atom, rather than circulating around the nucleus as is believed today, his model does represent the first time anyone addressed the arrangement of electrons in the atom, a concept that pervades the whole of modern chemistry.
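Moseley's K-line relation described above can be made concrete with a small calculation. The sketch below assumes the standard modern form of Moseley's law for the Kα line, ν = (3/4)·Rc·(Z − 1)², where Rc ≈ 3.29 × 10^15 Hz is the Rydberg frequency; the four consecutive elements are chosen only as examples. The square root of the frequency rises in equal steps, which is exactly the "fundamental quantity increasing by regular steps" that Moseley described.

    import math

    RYDBERG_HZ = 3.29e15  # Rydberg frequency R*c, in hertz

    def k_alpha_frequency(z):
        """Moseley's law for the K-alpha line: nu = (3/4) * (R*c) * (Z - 1)**2."""
        return 0.75 * RYDBERG_HZ * (z - 1) ** 2

    # For consecutive elements, sqrt(nu) should increase in equal steps.
    previous = None
    for z, symbol in [(20, "Ca"), (21, "Sc"), (22, "Ti"), (23, "V")]:
        root = math.sqrt(k_alpha_frequency(z))
        step = "" if previous is None else f"  (step: {root - previous:.3e})"
        print(f"{symbol} (Z={z}): sqrt(nu) = {root:.3e}{step}")
        previous = root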
Danish physicist Niels Bohr, the first to bring quantum theory to bear on the structure of the atom, was also motivated by the arrangement of the elements in the periodic system. In Bohr's model of the atom, developed in 1913, electrons inhabit a series of concentric shells that encircle the nucleus. Bohr reasoned that elements in the same group of the periodic table might have identical configurations of electrons in their outermost shell and that the chemical properties of an element would depend in large part on the arrangement of electrons in the outer shell of its atoms. Bohr's model of the atom also served to explain why the noble gases lack reactivity: noble gases possess full outer shells of electrons, making them unusually stable and unlikely to form compounds. Indeed, most other elements form compounds as a way to obtain full outer electron shells. More recent analysis of how Bohr arrived at these electronic configurations suggests that he functioned more like a chemist than has generally been credited: Bohr did not derive electron configurations from quantum theory but obtained them from the known chemical and spectroscopic properties of the elements.

In 1924 another physicist, Austrian-born Wolfgang Pauli, set out to explain the length of each row, or period, in the table. As a result, he developed the Pauli exclusion principle, which states that no two electrons can exist in exactly the same quantum state, a state being defined by what scientists call quantum numbers. The lengths of the various periods emerge from experimental evidence about the order of electron-shell filling and from the quantum-mechanical restrictions on the four quantum numbers that electrons can adopt.

The modifications to quantum theory made by Werner Heisenberg and Erwin Schrödinger in the mid-1920s yielded quantum mechanics in essentially the form used to this day. But the influence of these changes on the periodic table has been rather minimal. Despite the efforts of many physicists and chemists, quantum mechanics cannot explain the periodic table any further. For example, it cannot explain from first principles the order in which electrons fill the various electron shells. The electronic configurations of atoms, on which our modern understanding of the periodic table is based, cannot be derived using quantum mechanics (this is because the fundamental equation of quantum mechanics, the Schrödinger equation, cannot be solved exactly for atoms other than hydrogen). As a result, quantum mechanics can only reproduce Mendeleev's original discovery by the use of mathematical approximations; it cannot predict the periodic system.

Variations on a Theme

In more recent times, researchers have proposed different approaches for displaying the periodic system. For instance, Fernando Dufour, a retired chemistry professor from Collège Ahuntsic in Montreal, has developed a three-dimensional periodic table, which displays the fundamental symmetry of the periodic law, unlike the two-dimensional form in common use. The same virtue is also seen in a version of the periodic table shaped as a pyramid, a form suggested on many occasions but most recently refined by William B. Jensen of the University of Cincinnati. Another departure has been the invention of periodic systems aimed at summarizing the properties of compounds rather than elements.
In 1980 Ray Hefferlin of Southern Adventist University in Collegedale, Tenn., devised a periodic system for all the conceivable diatomic molecules that could be formed between the first 118 elements (only 112 have been discovered to date). Hefferlin's chart reveals that certain properties of molecules, such as the distance between atoms and the energy required to ionize the molecule, occur in regular patterns. This table has enabled scientists to predict the properties of diatomic molecules successfully.

In a similar effort, Jerry R. Dias of the University of Missouri at Kansas City devised a periodic classification of a type of organic molecule called benzenoid aromatic hydrocarbons. The compound naphthalene (C10H8), found in mothballs, is the simplest example. Dias's classification system is analogous to Döbereiner's triads of elements: any central molecule of a triad has a total number of carbon and hydrogen atoms that is the mean of the flanking entries, both downward and across the table. This scheme has been applied to a systematic study of the properties of benzenoid aromatic hydrocarbons and, with the use of graph theory, has led to predictions of the stability and reactivity of some of these compounds.

Still, it is the periodic table of the elements that has had the widest and most enduring influence. After evolving for over 200 years through the work of many people, the periodic table remains at the heart of the study of chemistry. It ranks as one of the most fruitful ideas in modern science, comparable perhaps to Charles Darwin's theory of evolution. Unlike theories such as Newtonian mechanics, it has not been falsified or revolutionized by modern physics but has adapted and matured while remaining essentially unscathed.
AdS/CFT correspondence

In theoretical physics, the anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side of the correspondence are conformal field theories (CFT), which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles. On the other side are anti-de Sitter spaces (AdS), which are used in theories of quantum gravity, formulated in terms of string theory or M-theory. The correspondence also provides a powerful toolkit for studying strongly coupled quantum field theories.[2] Much of the usefulness of the duality results from the fact that it is a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory.

The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2010, Maldacena's article had over 7000 citations, becoming the most highly cited article in the field of high energy physics.[3]

Quantum gravity and strings

Our current understanding of gravity is based on Albert Einstein's general theory of relativity.[4] Formulated in 1915, general relativity explains gravity in terms of the geometry of space and time, or spacetime. It is formulated in the language of classical physics[5] developed by physicists such as Isaac Newton and James Clerk Maxwell. The nongravitational forces are explained in the framework of quantum mechanics. Developed in the first half of the twentieth century by a number of different physicists, quantum mechanics provides a radically different way of describing physical phenomena based on probability.[6]

Quantum gravity is the branch of physics that seeks to describe gravity using the principles of quantum mechanics. Currently, the most popular approach to quantum gravity is string theory,[7] which models elementary particles not as zero-dimensional points but as one-dimensional objects called strings.
In the AdS/CFT correspondence, one typically considers theories of quantum gravity derived from string theory or its modern extension, M-theory.[8]

In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time. Thus, in the language of modern physics, one says that spacetime is four-dimensional.[9] One peculiar feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency: in string theory spacetime is ten-dimensional, while in M-theory it is eleven-dimensional.[10] The quantum gravity theories appearing in the AdS/CFT correspondence are typically obtained from string and M-theory by a process known as compactification. This produces a theory in which spacetime has effectively a lower number of dimensions and the extra dimensions are "curled up" into circles.[11]

A standard analogy for compactification is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length, but as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling inside it would move in two dimensions.[12]

Quantum field theory

The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory.[13] In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles.[14]

In the AdS/CFT correspondence, one considers, in addition to a theory of quantum gravity, a certain kind of quantum field theory called a conformal field theory. This is a particularly symmetric and mathematically well behaved type of quantum field theory.[15] Such theories are often studied in the context of string theory, where they are associated with the surface swept out by a string propagating through spacetime, and in statistical mechanics, where they model systems at a thermodynamic critical point.[16]

Overview of the correspondence

Figure: A tessellation of the hyperbolic plane by triangles and squares.

The geometry of anti-de Sitter space

In the AdS/CFT correspondence, one considers string theory or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space.[17]

Three-dimensional anti-de Sitter space is like a stack of hyperbolic disks, each one representing the state of the universe at a given time. The resulting spacetime looks like a solid cylinder. This construction describes a hypothetical universe with only two space and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.[18]

The idea of AdS/CFT

An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space).
One property of this boundary is that, locally around any point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics.[21] One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a conformal field theory. The claim is that this conformal field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space, in the sense that there is a "dictionary" for translating calculations in one theory into calculations in the other. Every entity in one theory has a counterpart in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical, so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.[22]

Figure: Two photographs of a single hologram taken from different angles. A hologram is a two-dimensional image which stores information about all three dimensions of the object it represents.

Notice that the boundary of anti-de Sitter space has fewer dimensions than anti-de Sitter space itself. For instance, in the three-dimensional example illustrated above, the boundary is a two-dimensional surface. The AdS/CFT correspondence is often described as a "holographic duality" because this relationship between the two theories is similar to the relationship between a three-dimensional object and its image as a hologram.[23] Although a hologram is two-dimensional, it encodes information about all three dimensions of the object it represents. In the same way, theories which are related by the AdS/CFT correspondence are conjectured to be exactly equivalent, despite living in different numbers of dimensions. The conformal field theory is like a hologram which captures information about the higher-dimensional quantum gravity theory.[19]

Examples of the correspondence

Following Maldacena's insight in 1997, theorists have discovered many different realizations of the AdS/CFT correspondence. These relate various conformal field theories to compactifications of string theory and M-theory in various numbers of dimensions. The theories involved are generally not viable models of the real world, but they have certain features, such as their particle content or high degree of symmetry, which make them useful for solving problems in quantum field theory and quantum gravity.[24]

The most famous example of the AdS/CFT correspondence states that type IIB string theory on the product space AdS_5\times S^5 is equivalent to N = 4 supersymmetric Yang–Mills theory on the four-dimensional boundary.[25] In this example, the spacetime on which the gravitational theory lives is effectively five-dimensional (hence the notation AdS_5), and there are five additional "compact" dimensions (encoded by the S^5 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity. Likewise, the dual theory is not a viable model of any real-world system as it assumes a large amount of supersymmetry.
Nevertheless, as explained below, this boundary theory shares some features in common with quantum chromodynamics, the fundamental theory of the strong force. It describes particles similar to the gluons of quantum chromodynamics together with certain fermions.[7] As a result, it has found applications in nuclear physics, particularly in the study of the quark–gluon plasma.[26]

Another realization of the correspondence states that M-theory on AdS_7\times S^4 is equivalent to the so-called (2,0)-theory in six dimensions.[27] In this example, the spacetime of the gravitational theory is effectively seven-dimensional. The existence of the (2,0)-theory that appears on one side of the duality is predicted by the classification of superconformal field theories. It is still poorly understood because it is a quantum mechanical theory without a classical limit.[28] Despite the inherent difficulty in studying this theory, it is considered to be an interesting object for a variety of reasons, both physical and mathematical.[29]

Yet another realization of the correspondence states that M-theory on AdS_4\times S^7 is equivalent to the ABJM superconformal field theory in three dimensions.[30] Here the gravitational theory has four noncompact dimensions, so this version of the correspondence provides a somewhat more realistic description of gravity.[31]

Applications to quantum gravity

A non-perturbative formulation of string theory

Figure: Interaction in the quantum world: world lines of point-like particles, or a world sheet swept out by closed strings in string theory.

In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory.[32] Although this formalism is extremely useful for making predictions, these predictions are only possible when the strength of the interactions, the coupling constant, is small enough to reliably describe the theory as being close to a theory without interactions.[33]

The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional surface representing the motion of a string. Unlike in quantum field theory, string theory does not yet have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach.[34]

The problem of developing a non-perturbative formulation of string theory was one of the original motivations for studying the AdS/CFT correspondence.[35] As explained above, the correspondence provides several examples of quantum field theories which are equivalent to string theory on anti-de Sitter space. One can alternatively view this correspondence as providing a definition of string theory in the special case where the gravitational field is asymptotically anti-de Sitter (that is, when the gravitational field resembles that of anti-de Sitter space at spatial infinity).
Physically interesting quantities in string theory are defined in terms of quantities in the dual quantum field theory.[19]

Black hole information paradox

In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon.[36] At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.[37]

The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space.[38] These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics.[39] In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.[40]

Applications to quantum field theory

Nuclear physics

One physical system which has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvins, conditions similar to those present at around 10^{-11} seconds after the Big Bang.[41]

The physics of the quark–gluon plasma is governed by quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma.[42] In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory.[26] By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity \eta and the volume density of entropy s, should be approximately equal to a certain universal constant:

\frac{\eta}{s}\approx\frac{\hbar}{4\pi k}

where \hbar denotes the reduced Planck constant and k is Boltzmann's constant.[43] In addition, the authors conjectured that this universal constant provides a lower bound for \eta/s in a large class of systems.
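In SI units the conjectured bound is a definite number, following from nothing more than the two constants just named. A minimal sketch (the constants are the standard CODATA values):

    import math

    HBAR = 1.054571817e-34  # reduced Planck constant, joule-seconds
    K_B = 1.380649e-23      # Boltzmann constant, joules per kelvin

    # The conjectured Kovtun-Son-Starinets lower bound on eta/s.
    bound = HBAR / (4 * math.pi * K_B)
    print(f"eta/s >= hbar / (4 pi k_B) ~= {bound:.2e} K*s")  # about 6.1e-13 K*s

Since shear viscosity carries units of Pa·s and entropy density units of J/(K·m^3), the ratio has units of kelvin-seconds, which is why the bound above comes out in K·s.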
In 2008, the predicted value of this ratio for the quark–gluon plasma was confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.[44]

Another important property of the quark–gluon plasma is that very high energy quarks moving through the plasma are stopped or "quenched" after traveling only a few femtometers. This phenomenon is characterized by a number \widehat{q} called the jet quenching parameter, which relates the energy loss of such a quark to the squared distance traveled through the plasma. Calculations based on the AdS/CFT correspondence have allowed theorists to estimate \widehat{q}, and the results agree roughly with the measured value of this parameter, suggesting that the AdS/CFT correspondence will be useful for developing a deeper understanding of this phenomenon.[45]

Condensed matter physics

Figure: A magnet levitating above a high-temperature superconductor. Today some physicists are working to understand high-temperature superconductivity using the AdS/CFT correspondence.[46]

Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists, including Subir Sachdev, hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.[47]

So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on Planck's constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.[48]

Criticism

With many physicists turning towards string-based methods to attack problems in nuclear and condensed matter physics, some theorists working in these areas have expressed doubts about whether the AdS/CFT correspondence can provide the tools needed to realistically model real-world systems. In a talk at the Quark Matter conference in 2006,[49] Larry McLerran pointed out that the N=4 super Yang–Mills theory that appears in the AdS/CFT correspondence differs significantly from quantum chromodynamics, making it difficult to apply these methods to nuclear physics. According to McLerran,

N=4 supersymmetric Yang–Mills is not QCD ... It has no mass scale and is conformally invariant. It has no confinement and no running coupling constant. It is supersymmetric. It has no chiral symmetry breaking or mass generation. It has six scalar and fermion fields in the adjoint representation ...
It may be possible to correct some or all of the above problems, or, for various physical problems, some of the objections may not be relevant. As yet there is not consensus nor compelling arguments for the conjectured fixes or phenomena which would insure that the N=4 supersymmetric Yang Mills results would reliably reflect QCD.[49]

In a letter to Physics Today, Nobel laureate Philip W. Anderson voiced similar concerns about applications of AdS/CFT to condensed matter physics, stating

As a very general problem with the AdS/CFT approach in condensed-matter theory, we can point to those telltale initials "CFT"—conformal field theory. Condensed-matter problems are, in general, neither relativistic nor conformal. Near a quantum critical point, both time and space may be scaling, but even there we still have a preferred coordinate system and, usually, a lattice. There is some evidence of other linear-T phases to the left of the strange metal about which they are welcome to speculate, but again in this case the condensed-matter problem is overdetermined by experimental facts.[50]

History and development

Figure: Gerard 't Hooft obtained results related to the AdS/CFT correspondence in the 1970s by studying analogies between string theory and nuclear physics.

String theory and nuclear physics

The discovery of the AdS/CFT correspondence in late 1997 was the culmination of a long history of efforts to relate string theory to nuclear physics.[51] In fact, string theory was originally developed during the late 1960s and early 1970s as a theory of hadrons, the subatomic particles like the proton and neutron that are held together by the strong nuclear force. The idea was that each of these particles could be viewed as a different oscillation mode of a string. In the late 1960s, experimentalists had found that hadrons fall into families called Regge trajectories with squared energy proportional to angular momentum, and theorists showed that this relationship emerges naturally from the physics of a rotating relativistic string.[52]

On the other hand, attempts to model hadrons as strings faced serious problems. One problem was that string theory includes a massless spin-2 particle, whereas no such particle appears in the physics of hadrons.[51] Such a particle would mediate a force with the properties of gravity. In 1974, Joel Scherk and John Schwarz suggested that string theory was therefore not a theory of nuclear physics as many theorists had thought but instead a theory of quantum gravity.[53] At the same time, it was realized that hadrons are actually made of quarks, and the string theory approach was abandoned in favor of quantum chromodynamics.[51]

In quantum chromodynamics, quarks have a kind of charge that comes in three varieties called colors. In a paper from 1974, Gerard 't Hooft studied the relationship between string theory and nuclear physics from another point of view by considering theories similar to quantum chromodynamics, where the number of colors is some arbitrary number N, rather than three. In this article, 't Hooft considered a certain limit where N tends to infinity and argued that in this limit certain calculations in quantum field theory resemble calculations in string theory.[54]

Figure: Stephen Hawking predicted in 1975 that black holes emit radiation due to quantum effects.
Black holes and holography

In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon.[36] This work extended previous results of Jacob Bekenstein, who had suggested that black holes have a well defined entropy.[55] At first, Hawking's result appeared to contradict one of the main postulates of quantum mechanics, namely the unitarity of time evolution. Intuitively, the unitarity postulate says that quantum mechanical systems do not destroy information as they evolve from one state to another. For this reason, the apparent contradiction came to be known as the black hole information paradox.[56]

Figure: Leonard Susskind made early contributions to the idea of holography in quantum gravity.

Later, in 1993, Gerard 't Hooft wrote a speculative paper on quantum gravity in which he revisited Hawking's work on black hole thermodynamics, concluding that the total number of degrees of freedom in a region of spacetime surrounding a black hole is proportional to the surface area of the horizon.[57] This idea was promoted by Leonard Susskind and is now known as the holographic principle.[58] The holographic principle and its realization in string theory through the AdS/CFT correspondence have helped elucidate the mysteries of black holes suggested by Hawking's work and are believed to provide a resolution of the black hole information paradox.[39] In 2004, Hawking conceded that black holes do not violate quantum mechanics,[59] and he suggested a concrete mechanism by which they might preserve information.[40]

Maldacena's paper

In late 1997, Juan Maldacena published a landmark paper that initiated the study of AdS/CFT.[27] According to Alexander Markovich Polyakov, "[Maldacena's] work opened the flood gates."[60] The conjecture immediately excited great interest in the string theory community[39] and was considered in articles by Steven Gubser, Igor Klebanov and Polyakov,[61] and by Edward Witten.[62] These papers made Maldacena's conjecture more precise and showed that the conformal field theory appearing in the correspondence lives on the boundary of anti-de Sitter space.[60]

Figure: Juan Maldacena first proposed the AdS/CFT correspondence in late 1997.

One special case of Maldacena's proposal says that N=4 super Yang–Mills theory, a gauge theory similar in some ways to quantum chromodynamics, is equivalent to string theory in five-dimensional anti-de Sitter space.[30] This result helped clarify the earlier work of 't Hooft on the relationship between string theory and quantum chromodynamics, taking string theory back to its roots as a theory of nuclear physics.[52] Maldacena's results also provided a concrete realization of the holographic principle with important implications for quantum gravity and black hole physics.[1] By the year 2010, Maldacena's paper had become the most highly cited paper in high energy physics, with over 7000 citations.[3] Subsequent articles have provided considerable evidence that the correspondence is correct, although so far it has not been rigorously proved.[63]

AdS/CFT finds applications

In 1999, after taking a job at Columbia University, nuclear physicist Đàm Thanh Sơn paid a visit to Andrei Starinets, a friend from Sơn's undergraduate days who happened to be doing a Ph.D.
in string theory at New York University.[64] Although the two men had no intention of collaborating, Sơn soon realized that the AdS/CFT calculations Starinets was doing could shed light on some aspects of the quark–gluon plasma, an exotic state of matter produced when heavy ions are collided at high energies. In collaboration with Starinets and Pavel Kovtun, Sơn was able to use the AdS/CFT correspondence to calculate a key parameter of the plasma.[26] As Sơn later recalled, "We turned the calculation on its head to give us a prediction for the value of the shear viscosity of a plasma ... A friend of mine in nuclear physics joked that ours was the first useful paper to come out of string theory."[47]

Today physicists continue to look for applications of the AdS/CFT correspondence in quantum field theory.[65] In addition to the applications to nuclear physics advocated by Đàm Thanh Sơn and his collaborators, condensed matter physicists such as Subir Sachdev have used string theory methods to understand some aspects of condensed matter physics. A notable result in this direction was the description, via the AdS/CFT correspondence, of the transition of a superfluid to an insulator.[48] Another emerging subject is the fluid/gravity correspondence, which uses the AdS/CFT correspondence to translate problems in fluid dynamics into problems in general relativity.[66]

Generalizations

Three-dimensional gravity

In order to better understand the quantum aspects of gravity in our four-dimensional universe, some physicists have considered a lower-dimensional mathematical model in which spacetime has only two spatial dimensions and one time dimension.[67] In this setting, the mathematics describing the gravitational field simplifies drastically, and one can study quantum gravity using familiar methods from quantum field theory, eliminating the need for string theory or other more radical approaches to quantum gravity in four dimensions.[68]

Beginning with the work of J. D. Brown and Marc Henneaux in 1986,[69] physicists have noticed that quantum gravity in a three-dimensional spacetime is closely related to two-dimensional conformal field theory. In 1995, Henneaux and his coworkers explored this relationship in more detail, suggesting that three-dimensional gravity in anti-de Sitter space is equivalent to the conformal field theory known as Liouville field theory.[70] Another conjecture formulated by Edward Witten states that three-dimensional gravity in anti-de Sitter space is equivalent to a conformal field theory with monster group symmetry.[71] These conjectures provide examples of the AdS/CFT correspondence that do not require the full apparatus of string or M-theory.[72]

dS/CFT correspondence

Unlike our universe, which is now known to be expanding at an accelerating rate, anti-de Sitter space is neither expanding nor contracting. Instead it looks the same at all times.[18] In more technical language, one says that anti-de Sitter space corresponds to a universe with negative cosmological constant, whereas the real universe has a small positive cosmological constant.[73]

Although the properties of gravity at short distances should be somewhat independent of the value of the cosmological constant,[74] it is desirable to have a version of the AdS/CFT correspondence for positive cosmological constant. In 2001, Andrew Strominger introduced a version of the duality called the dS/CFT correspondence.[75] This duality involves a model of spacetime called de Sitter space with a positive cosmological constant.
Such a duality is interesting from the point of view of cosmology, since many cosmologists believe that the very early universe was close to being de Sitter space.[18] Our universe may also resemble de Sitter space in the distant future.[18]

Kerr/CFT correspondence

Although the AdS/CFT correspondence is often useful for studying the properties of black holes,[76] most of the black holes considered in the context of AdS/CFT are physically unrealistic. Indeed, as explained above, most versions of the AdS/CFT correspondence involve higher-dimensional models of spacetime with unphysical supersymmetry.

In 2009, Monica Guica, Thomas Hartman, Wei Song, and Andrew Strominger showed that the ideas of AdS/CFT could nevertheless be used to understand certain astrophysical black holes. More precisely, their results apply to black holes that are approximated by extremal Kerr black holes, which have the largest possible angular momentum compatible with a given mass.[77] They showed that such black holes have an equivalent description in terms of conformal field theory. The Kerr/CFT correspondence was later extended to black holes with lower angular momentum.[78]

Higher spin gauge theories

The AdS/CFT correspondence is closely related to another duality conjectured by Igor Klebanov and Alexander Markovich Polyakov in 2002.[79] This duality states that certain "higher spin gauge theories" on anti-de Sitter space are equivalent to conformal field theories with O(N) symmetry. Here the theory in the bulk is a type of gauge theory describing particles of arbitrarily high spin. It is similar to string theory, where the excited modes of vibrating strings correspond to particles with higher spin, and it may help to better understand the string theoretic versions of AdS/CFT and possibly even prove the correspondence.[80] In 2010, Simone Giombi and Xi Yin obtained further evidence for this duality by computing quantities called three-point functions.[81]

Notes

1. ^ a b de Haro et al. 2013, p. 2
2. ^ Klebanov and Maldacena 2009
3. ^ a b "Top Cited Articles during 2010 in hep-th". Retrieved 25 July 2013.
4. ^ A standard textbook on general relativity is Wald 1984.
5. ^ Maldacena 2005, p. 58
6. ^ Griffiths 2004
7. ^ a b Maldacena 2005, p. 62
8. ^ See the subsection entitled "Examples of the correspondence". For examples which do not involve string theory or M-theory, see the section entitled "Generalizations".
9. ^ Wald 1984, p. 4
10. ^ Zwiebach 2009, p. 8
11. ^ Zwiebach 2009, pp. 7–8
12. ^ This analogy is used for example in Greene 2000, p. 186.
13. ^ A standard text is Peskin and Schroeder 1995.
15. ^ Conformal field theories are characterized by their invariance under conformal transformations.
16. ^ For an introduction to conformal field theory emphasizing its applications to perturbative string theory, see Volume II of Deligne et al. 1999.
17. ^ Klebanov and Maldacena 2009, p. 28
18. ^ a b c d e f Maldacena 2005, p. 60
19. ^ a b c Maldacena 2005, p. 61
20. ^ The mathematical relationship between the interior and boundary of anti-de Sitter space is related to the ambient construction of Charles Fefferman and Robin Graham. For details see Fefferman and Graham 1985, Fefferman and Graham 2011.
21. ^ Zwiebach 2009, p. 552
22. ^ Maldacena 2005, pp. 61–62
23. ^ Maldacena 2005, p. 57
24. ^ The known realizations of AdS/CFT typically involve unphysical numbers of spacetime dimensions and unphysical supersymmetries.
25. ^ This example is the main subject of the three pioneering articles on AdS/CFT: Maldacena 1998; Gubser, Klebanov, and Polyakov 1998; and Witten 1998.
26. ^ a b c Merali 2011, p. 303; Kovtun, Son, and Starinets 2001
27. ^ a b Maldacena 1998
29. ^ See Moore 2012 and Alday, Gaiotto, and Tachikawa 2010.
30. ^ a b Aharony et al. 2008
31. ^ Aharony et al. 2008, sec. 1
32. ^ A standard textbook introducing the formalism of Feynman diagrams is Peskin and Schroeder 1995.
33. ^ Zee 2010, p. 43
34. ^ Zwiebach 2009, p. 12
35. ^ Maldacena 1998, sec. 6
36. ^ a b Hawking 1975
37. ^ For an accessible introduction to the black hole information paradox, and the related scientific dispute between Hawking and Leonard Susskind, see Susskind 2008.
38. ^ Zwiebach 2009, p. 554
39. ^ a b c Maldacena 2005, p. 63
40. ^ a b Hawking 2005
41. ^ Zwiebach 2009, p. 559
42. ^ More precisely, one cannot apply the methods of perturbative quantum field theory.
43. ^ Zwiebach 2009, p. 561; Kovtun, Son, and Starinets 2001
44. ^ Merali 2011, p. 303; Luzum and Romatschke 2008
45. ^ Zwiebach 2009, p. 561
46. ^ Merali 2011
47. ^ a b Merali 2011, p. 303
48. ^ a b Sachdev 2013, p. 51
49. ^ a b McLerran 2007
50. ^ Anderson, Philip. "Strange connections to strange metals". Physics Today. Retrieved 14 August 2013.
51. ^ a b c Zwiebach 2009, p. 525
52. ^ a b Aharony et al. 2008, sec. 1.1
53. ^ Scherk and Schwarz 1974
54. ^ 't Hooft 1974
55. ^ Bekenstein 1973
56. ^ Susskind 2008
57. ^ 't Hooft 1993
58. ^ Susskind 1995
59. ^ Susskind 2008, p. 444
60. ^ a b Polyakov 2008, p. 6
61. ^ Gubser, Klebanov, and Polyakov 1998
62. ^ Witten 1998
63. ^ Maldacena 2005, p. 63; Cowen 2013
64. ^ Merali 2011, pp. 302–303
65. ^ Merali 2011; Sachdev 2013
66. ^ Rangamani 2009
67. ^ For a review, see Carlip 2003.
68. ^ According to the results of Witten 1988, three-dimensional quantum gravity can be understood by relating it to Chern–Simons theory.
69. ^ Brown and Henneaux 1986
70. ^ Coussaert, Henneaux, and van Driel 1995
71. ^ Witten 2007
72. ^ Guica et al. 2009, p. 1
73. ^ Perlmutter 2003
74. ^ Biquard 2005, p. 33
75. ^ Strominger 2001
76. ^ See the subsection entitled "Black hole information paradox".
77. ^ Guica et al. 2009
78. ^ Castro, Maloney, and Strominger 2010
79. ^ Klebanov and Polyakov 2002
80. ^ See the Introduction in Klebanov and Polyakov 2002.
81. ^ Giombi and Yin 2010

References

• Aharony, Ofer; Gubser, Steven; Maldacena, Juan; Ooguri, Hirosi; Oz, Yaron (2000). "Large N Field Theories, String Theory and Gravity". Phys. Rept. 323 (3–4): 183–386.
• Alday, Luis; Gaiotto, Davide; Tachikawa, Yuji (2010). "Liouville correlation functions from four-dimensional gauge theories". Letters in Mathematical Physics 91 (2): 167–197.
• Bekenstein, Jacob (1973). "Black holes and entropy". Physical Review D 7 (8): 2333.
• Biquard, Olivier (2005). AdS/CFT Correspondence: Einstein Metrics and Their Conformal Boundaries. European Mathematical Society.
• Brown, J. David; Henneaux, Marc (1986). "Central charges in the canonical realization of asymptotic symmetries: an example from three dimensional gravity". Communications in Mathematical Physics 104 (2): 207–226.
• Carlip, Steven (2003). Quantum Gravity in 2+1 Dimensions. Cambridge Monographs on Mathematical Physics.
• Castro, Alejandra; Maloney, Alexander; Strominger, Andrew (2010). "Hidden conformal symmetry of the Kerr black hole". Physical Review D 82 (2).
• Coussaert, Oliver; Henneaux, Marc; van Driel, Peter (1995).
"The asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant". Classical and Quantum Gravity 12 (12): 2961.   • Cowen, Ron (2013). "Simulations back up theory that Universe is a hologram". Nature News & Comment.   • de Haro, Sebastian; Dieks, Dennis; 't Hooft, Gerard; Verlinde, Erik (2013). "Forty Years of String Theory Reflecting on the Foundations". Foundations of Physics 43 (1): 1–7.   • Fefferman, Charles; Graham, Robin (1985). "Conformal invariants". Asterisque: 95–116.  • Fefferman, Charles; Graham, Robin (2011). The Ambient Metric. Princeton University Press.   • Giombi, Simone; Yin, Xi (2010). "Higher spin gauge theory and holography: the three-point functions". Journal of High Energy Physics 2010 (9): 1–80.   • Greene, Brian (2000).   • Gubser, Steven; Klebanov, Igor; Polyakov, Alexander (1998). "Gauge theory correlators from non-critical string theory". Physics Letters B 428: 105–114.   • Guica, Monica; Hartman, Thomas; Song, Wei; Strominger, Andrew (2009). "The Kerr/CFT Correspondence". Physical Review D 80 (12).   • Hawking, Stephen (1975). "Particle creation by black holes". Communications in mathematical physics 43 (3): 199–220.   • Hawking, Stephen (2005). "Information loss in black holes". Physical Review D 72 (8).   • Klebanov, Igor; Maldacena, Juan (2009). "Solving Quantum Field Theories via Curved Spacetimes" (PDF).   • Klebanov, Igor; Polyakov, Alexander (2002). "The AdS dual of the critical O(N) vector model". Physics Letters B 550 (3–4): 213–219.   • Luzum, Matthew; Romatschke, Paul (2008). "Conformal relativistic viscous hydrodynamics: Applications to RHIC results at \sqrt{s_{NN}}=200 GeV". Physical Review C 78 (3).   • Maldacena, Juan (1998). "The Large N limit of superconformal field theories and supergravity". Advances in Theoretical and Mathematical Physics 2: 231–252.   • Maldacena, Juan (2005). "The Illusion of Gravity" (PDF). Scientific American 293 (5): 56–63.   • McLerran, Larry (2007). "Theory Summary : Quark Matter 2006". Journal of Physics G: Nuclear and Particle Physics 34 (8): S583.   • Moore, Gregory (2012). "Lecture Notes for Felix Klein Lectures" (PDF). Retrieved 14 August 2013.  • Perlmutter, Saul (2003). "Supernovae, dark energy, and the accelerating universe". Physics Today 56 (4): 53–62.   • Polyakov, Alexander (2008). "From Quarks to Strings".   • Rangamani, Mukund (2009). "Gravity and Hydrodynamics: Lectures on the fluid-gravity correspondence". Classical and quantum gravity 26 (22): 4003.   • Scherk, Joel; Schwarz, John (1974). "Dual models for non-hadrons". Nuclear Physics B 81 (1): 118–144.   • Strominger, Andrew (2001). "The dS/CFT correspondence". Journal of High Energy Physics 2001 (10): 034.   • Susskind, Leonard (2008). The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. Little, Brown and Company.   • 't Hooft, Gerard (1974). "A planar diagram theory for strong interactions". Nuclear Physics B 72 (3): 461–473.   • 't Hooft, Gerard (1993). "Dimensional Reduction in Quantum Gravity".   • Wald, Robert (1984). General Relativity. University of Chicago Press.   • Witten, Edward (1988). "2+1 dimensional gravity as an exactly soluble system". Nuclear Physics B 311 (1): 46–78.   • Witten, Edward (1998). "Anti-de Sitter space and holography". Advances in Theoretical and Mathematical Physics 2: 253–291.   • Witten, Edward (2007). "Three-dimensional gravity revisited".  
Path integral formulation

The path integral formulation of quantum mechanics is a description of quantum theory which generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute a quantum amplitude.

The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion.[1] This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 paper.[2] The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier, in the course of his doctoral thesis work with John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.

This formulation has proven crucial to the subsequent development of theoretical physics, because it is manifestly symmetric between time and space. Unlike previous methods, the path integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system.

Figure: These are just three of the paths that contribute to the quantum amplitude for a particle moving from point A at some time t_0 to point B at some other time t_1.

Quantum action principle

In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time-translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit, −i). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.

But the Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity from the standpoint of special relativity. The Hamiltonian tells you how to march forward in time, but the time is different in different reference frames. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.

The Hamiltonian is a function of the position and momentum at one time, and it tells you the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transform, and the condition that determines the classical equations of motion (the Euler–Lagrange equations) is that the action is a minimum. In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. So what does the Legendre transform mean?
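For reference, the time-translation statement above can be written out explicitly; this is standard quantum mechanics, not anything specific to the path integral:

|\psi(t+\epsilon)\rangle = e^{-i\epsilon H/\hbar}\,|\psi(t)\rangle \approx \left(1 - {i\epsilon \over \hbar} H\right)|\psi(t)\rangle

For an energy eigenstate, H|\psi\rangle = E|\psi\rangle, the state just acquires the phase e^{-iEt/\hbar}, which is the frequency–energy relation mentioned above. The discretized Legendre transform considered next gives one answer to the question.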
In classical mechanics, with discretization in time,

\epsilon H = p(t)\left(q(t+\epsilon) - q(t)\right) - \epsilon L

with

p = {\partial L \over \partial \dot{q}}

where the partial derivative is taken with respect to \dot q, holding q(t) fixed. The inverse Legendre transform is

\epsilon L = \epsilon p \dot{q} - \epsilon H

with

\dot q = {\partial H \over \partial p}

where the partial derivative now is with respect to p at fixed q.

In quantum mechanics, the state is a superposition of different states with different values of q, or different values of p, and the quantities p and q can be interpreted as noncommuting operators. The operator p is only definite on states that are indefinite with respect to q. So consider two states separated in time and act with the operator corresponding to the Lagrangian:

e^{i( p (q(t+\epsilon) - q(t)) - \epsilon H(p,q) )}

If the multiplications implicit in this formula are reinterpreted as matrix multiplications, what does this mean? It can be given a meaning as follows. The first factor is

e^{-ip q(t)}

If this is interpreted as doing a matrix multiplication, the sum over all states integrates over all q(t), and so it takes the Fourier transform in q(t), to change basis to p(t). That is the action on the Hilbert space: change basis to p at time t. Next comes

e^{-i\epsilon H(p,q)}

or evolve an infinitesimal time into the future. Finally, the last factor in this interpretation is

e^{i p q(t+\epsilon)}

which means change basis back to q at a later time. This is not very different from just ordinary time evolution: the H factor contains all the dynamical information, since it pushes the state forward in time. The first part and the last part are just doing Fourier transforms to change to a pure q basis from an intermediate p basis.

Another way of saying this is that since the Hamiltonian is naturally a function of p and q, exponentiating this quantity and changing basis from p to q at each step allows the matrix element of H to be expressed as a simple function along each path. This function is the quantum analog of the classical action. This observation is due to Paul Dirac:

"...we see that the integrand in (11) must be of the form e^{iF/h}, where F is a function of q_T, q_1, q_2, ..., q_m, q_t, which remains finite as h tends to zero. Let us now picture one of the intermediate qs, say q_k, as varying continuously while the other ones are fixed. Owing to the smallness of h, we shall then in general have F/h varying extremely rapidly. This means that e^{iF/h} will vary periodically with a very high frequency about the value zero, as a result of which its integral will be practically zero. The only important part in the domain of integration of q_k is thus that for which a comparatively large variation in q_k produces only a very small variation in F. This part is the neighbourhood of a point for which F is stationary with respect to small variations in q_k. We can apply this argument to each of the variables of integration ... and obtain the result that the only important part in the domain of integration is that for which F is stationary for small variations in all intermediate qs. ... We see that F has for its classical analogue \int_t^T L\,dt, which is just the action function which classical mechanics requires to be stationary for small variations in all the intermediate qs. This shows the way in which equation (11) goes over into classical results when h becomes extremely small." Dirac (1932) op. cit., p. 69
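Dirac's stationary-phase argument is easy to check numerically: for small h, the oscillatory integral of e^{iF(q)/h} is dominated by the neighbourhood of the point where F is stationary. The sketch below uses the illustrative choice F(q) = (q − 1)^2, for which the exact answer is the Fresnel value \sqrt{\pi h}\,e^{i\pi/4}; the quadratic F, the grid, and the integration limits are arbitrary choices for illustration, not anything from Dirac's paper.

    import numpy as np

    def oscillatory_integral(h, q_min=-20.0, q_max=22.0, n=2_000_001):
        """Numerically integrate exp(i F(q) / h) dq for F(q) = (q - 1)**2."""
        q = np.linspace(q_min, q_max, n)
        f = (q - 1.0) ** 2
        return np.trapz(np.exp(1j * f / h), q)

    for h in [1.0, 0.1, 0.01]:
        exact = np.sqrt(np.pi * h) * np.exp(1j * np.pi / 4)  # Fresnel result
        approx = oscillatory_integral(h)
        print(f"h={h}: numeric = {approx:.3f}, stationary phase = {exact:.3f}")

As h decreases, the numerical value approaches the stationary-phase result more and more closely, because the rapidly oscillating contributions away from the stationary point q = 1 cancel, exactly as Dirac argues.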
Dirac further noted that one could square the time-evolution operator in the S representation,

e^{i\epsilon S}

and this gives the time-evolution operator between time t and time t + 2ε. While in the H representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the S representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of q(0) and the later one with a fixed value of q(t). The result is a sum over paths with a phase which is the quantum action. Crucially, Dirac identified in this paper the deep quantum mechanical reason for the principle of least action controlling the classical limit (see the quotation above).

Feynman's interpretation

Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.[4] Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretized. This means that the classical action is the phase acquired by quantum evolution between two fixed endpoints. He proposed to recover all of quantum mechanics from the following postulates:

1. The probability for an event is given by the modulus squared of a complex number called the "probability amplitude".
2. The probability amplitude is given by adding together the contributions of all paths in configuration space.
3. The contribution of a path is proportional to e^{iS/\hbar}, where S is the action given by the time integral of the Lagrangian along the path.

In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of postulate 3 over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the amplitude for a single particle to go from one place to another in a given time, it is correct to include paths in which the particle describes elaborate curlicues, curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase, or argument of the complex number. Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below).

Feynman showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics when the Hamiltonian is quadratic in the momentum. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action.

The path integral formulation of quantum field theory represents the transition amplitude (corresponding to the classical correlation function) as a weighted sum of all possible histories of the system from the initial to the final state. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude.

Concrete formulation

Feynman's postulates can be interpreted as follows:

Time-slicing definition

For a particle in a smooth potential, the path integral is approximated by zig-zag paths, which in one dimension is a product of ordinary integrals.
For the motion of the particle from position xa at time ta to xb at time tb, the time sequence can be divided up into n + 1 little segments t_j - t_{j-1}, where j = 1, ..., n + 1, of fixed duration

\epsilon = \Delta t=\tfrac{t_b-t_a}{n+1}\,.

This process is called time-slicing. An approximation for the path integral can be computed as proportional to

\int\limits_{-\infty}^{+\infty}\,\ldots \int\limits_{-\infty}^{+\infty}\, \ \exp \left(\frac{{\rm i}}{\hbar}\int\limits_{t_a}^{t_b} L(x(t),v(t), t)\,\mathrm{d}t\right)dx_0 \, \ldots \, dx_n

where L(x,v,t) is the Lagrangian of the 1d system with position variable x(t) and velocity v = \dot{x}(t) considered (see below), and dxj corresponds to the position at the jth time step, if the time integral is approximated by a sum of n terms.[note 1] In the limit n → ∞, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes \langle x_a,t_a|x_b, t_b\rangle (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at ta in the initial state xa and at tb in the final state xb. Actually L is the classical Lagrangian of the one-dimensional system considered,

L(x,\dot x , t)=p\cdot \dot x - H(x,p,t)\,,

where H is the Hamiltonian, p=\frac {\partial L}{\partial \dot x}, and the above-mentioned "zigzagging" corresponds to the appearance of the terms

\exp\left (\frac{{\rm i}}{\hbar}\epsilon\, \,\sum_{j=1}^{n+1} L \left (\tilde x_{j},\frac{x_j-x_{j-1}}{\epsilon},j \right )\right )

in the Riemann sum approximating the time integral, which are finally integrated over x1 to xn with the integration measure dx1...dxn; here \tilde x_j is an arbitrary value of the interval corresponding to j, e.g. its center, (x_j + x_{j-1})/2. Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute. The diagram shows the contribution to the path integral of a free particle for a set of paths. Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential e2/r at the origin. Only after replacing the time t by another path-dependent pseudo-time parameter

s=\int \frac{dt}{r(t)}

is the singularity removed; a time-sliced approximation then exists that is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert.[5][6] The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.

Free particle

The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free-particle action (m = 1, ħ = 1)

S= \int {\dot{x}^2\over 2} dt

the integral can be evaluated explicitly. To do this, it is convenient to start without the factor i in the exponential, so that large deviations are suppressed by small numbers, not by cancelling oscillatory contributions.
K(x-y;T) = \int_{x(0)=x}^{x(T)=y} \exp\left\{-\int_0^T {\dot{x}^2\over 2} dt\right\} Dx

Splitting the integral into time slices:

K(x,y;T) = \int_{x(0)=x}^{x(T)=y} \Pi_t \exp\left\{-{1\over 2} \left({x(t+\epsilon) -x(t) \over \epsilon}\right)^2 \epsilon \right\} Dx

where the Dx is interpreted as a finite collection of integrations at each integer multiple of ε. Each factor in the product is a Gaussian as a function of x(t + ε) centered at x(t) with variance ε. The multiple integrals are a repeated convolution of this Gaussian Gε with copies of itself at adjacent times:

K(x-y;T) = G_\epsilon*G_\epsilon ... *G_\epsilon

where the number of convolutions is T/ε. The result is easy to evaluate by taking the Fourier transform of both sides, so that the convolutions become multiplications:

\tilde{K}(p;T) = \tilde{G}_\epsilon(p)^{T/\epsilon}

The Fourier transform of the Gaussian G is another Gaussian of reciprocal variance:

\tilde{G}_\epsilon(p) = e^{-\epsilon {p^2/2} }

and the result is:

\tilde{K}(p;T) = e^{-T {p^2/2}}

The inverse Fourier transform gives K, and it is a Gaussian again with reciprocal variance:

K(x-y;T) \propto e^{ -{(x-y)^2/(2T)}}

The proportionality constant is not really determined by the time-slicing approach; only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process. The result has a probability interpretation. The sum over all paths of the exponential factor can be seen as the sum over each path of the probability of selecting that path. The probability is the product over each segment of the probability of selecting that segment, so that each segment is probabilistically independently chosen. The fact that the answer is a Gaussian spreading linearly in time is the central limit theorem, which can be interpreted as the first historical evaluation of a statistical path integral. The probability interpretation gives a natural normalization choice. The path integral should be defined so that:

\int K(x-y;T) dy = 1

This condition normalizes the Gaussian and produces a kernel which obeys the diffusion equation:

{d\over dt} K(x;T) = {\nabla^2 \over 2} K

For oscillatory path integrals, ones with an i in the numerator, the time-slicing produces convolved Gaussians, just as before. Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment ε. This is closely related to Wick rotation. Then the same convolution argument as before gives the propagation kernel:

K(x-y;T) \propto e^{i(x-y)^2 / (2T)}

which, with the same normalization as before (not the sum-squares normalization – this function has a divergent norm), obeys a free Schrödinger equation:

{d\over dt} K(x;T) = {\rm i} {\nabla^2 \over 2} K

This means that any superposition of K's will also obey the same equation, by linearity.
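The convolution argument above is easy to verify numerically. The sketch below is illustrative code of mine (grid sizes and slice widths are arbitrary assumptions): it builds the Euclidean kernel by convolving the single-slice Gaussian with itself and checks that the variance grows to T = nε, as the central limit theorem predicts.

    import numpy as np

    # Build K by repeated convolution of the single-slice Gaussian G_eps
    # and check that the result is a Gaussian of variance T = n*eps.
    eps, n = 0.05, 40                     # slice width, number of slices
    x = np.linspace(-10.0, 10.0, 2001)    # odd length keeps G exactly centered
    dx = x[1] - x[0]
    G = np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
    K = G.copy()
    for _ in range(n - 1):
        K = np.convolve(K, G, mode="same") * dx   # one more time slice
    m = np.sum(x * K) / np.sum(K)                 # mean (should be ~0)
    var = np.sum((x - m) ** 2 * K) / np.sum(K)    # empirical variance
    print(f"variance = {var:.3f}, expected T = n*eps = {n * eps}")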
Defining

\psi_t(y) = \int \psi_0(x) K(x-y;t) dx = \int \psi_0(x) \int_{x(0)=x}^{x(t)=y} e^{iS} Dx

then ψt obeys the free Schrödinger equation just as K does:

{\rm i}{\partial \over \partial t} \psi_t = - {\nabla^2\over 2} \psi_t

The Schrödinger equation

The same construction works in the presence of a potential. Propagating over a single slice,

\psi(y;t+\epsilon) = \int_{-\infty}^\infty \;\;\psi(x;t)\int_{x(t)=x}^{x(t+\epsilon)=y} e^{{\rm i}\int_t^{t+\epsilon} (\frac{\dot{x}^2}{2} - V(x)) dt} Dx(t) \, dx \qquad (1)

For infinitesimal ε, the path integral over the slice factorizes into a potential factor e^{-{\rm i}\epsilon V(x)} and a free kinetic factor e^{{\rm i}\frac{\dot{x}^2}{2}\epsilon}, so that

\psi(y;t+\epsilon) \approx \int \psi(x;t) e^{-{\rm i}\epsilon V(x)} e^{{\rm i}(x-y)^2 \over 2\epsilon} dx

Expanding both sides to first order in ε recovers the Schrödinger equation:

\frac{\partial\psi}{\partial t} = {\rm i}\cdot \left[ \frac{1}{2}\nabla^2 - V(x)\right]\psi

Equations of motion

Since the states obey the Schrödinger equation, the path integral must reproduce the Heisenberg equations of motion for the averages of the x and \dot{x} variables, but it is instructive to see this directly. The direct approach shows that the expectation values calculated from the path integral reproduce the usual ones of quantum mechanics. Start by considering the path integral with some fixed initial state

\int \psi_0(x) \int_{x(0)=x} e^{{\rm i}S(x,\dot{x})} Dx

Now note that x(t) at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: x(t)=u(t)+\epsilon(t) where ε(t) is a different shift at each time but ε(0) = ε(T) = 0, since the endpoints are not integrated:

\int \psi_0(x) \int_{u(0)=x} e^{{\rm i}S(u+\epsilon,\dot{u}+\dot{\epsilon})} Du

The change in the integral from the shift is, to first infinitesimal order in ε:

\int \psi_0(x) \int_{u(0)=x} \left( \int {\partial S \over \partial u } \epsilon + { \partial S \over \partial \dot{u} } \dot{\epsilon} \, dt \right) e^{iS} Du

which, integrating by parts in t, gives:

\int \psi_0(x) \int_{u(0)=x} -\left( \int \left({d\over dt} {\partial S\over \partial \dot{u}} - {\partial S \over \partial u}\right)\epsilon(t) dt \right) e^{iS} Du

But this was just a shift of integration variables, which doesn't change the value of the integral for any choice of ε(t). The conclusion is that this first-order variation is zero for an arbitrary initial state and at any arbitrary point in time:

\langle \psi_0| {\delta S \over \delta x}(t) |\psi_0 \rangle = 0

These are the Heisenberg equations of motion. If the action contains terms which multiply \dot{x} and x at the same moment in time, the manipulations above are only heuristic, because the multiplication rules for these quantities are just as noncommuting in the path integral as they are in the operator formalism.

Stationary phase approximation

If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive phase interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive phase interference.

Canonical commutation relations

The formulation of the path integral does not make it clear at first sight that the quantities x and p do not commute. In the path integral, these are just integration variables and they have no obvious ordering. Feynman discovered that the non-commutativity is still there.[7] To see this, consider the simplest path integral, the Brownian walk. This is not yet quantum mechanics, so in the path integral the action is not multiplied by i:

S= \int \left( {dx \over dt} \right)^2 dt

The quantity x(t) is fluctuating, and the derivative is defined as the limit of a discrete difference.
{dx \over dt} = {x(t+\epsilon) - x(t) \over \epsilon}

Note that the distance that a random walk moves is proportional to √t, so that:

x(t+\epsilon) - x(t) \approx \sqrt{\epsilon}

This shows that the random walk is not differentiable, since the ratio that defines the derivative diverges with probability one. The quantity x\dot{x} is ambiguous, with two possible meanings:

[1] = x { dx\over dt} = x(t) {(x(t+\epsilon) - x(t)) \over \epsilon } \,

[2] = x {dx \over dt} = x(t+\epsilon) {(x(t+\epsilon) - x(t)) \over \epsilon} \,

In elementary calculus, the two are only different by an amount which goes to zero as ε goes to zero. But in this case, the difference between the two is not zero:

[2] - [1] = {( x(t + \epsilon) - x(t) )^2 \over \epsilon} \approx {\epsilon \over \epsilon}

Give a name to the value of the difference for any one random walk:

{(x(t+\epsilon)- x(t))^2 \over \epsilon} = f(t)

and note that f(t) is a rapidly fluctuating statistical quantity, whose average value is 1, i.e. a normalized "Gaussian process". The fluctuations of such a quantity can be described by a statistical Lagrangian

\mathcal L = (f(t)-1)^2 \,,

and the equations of motion for f derived from extremizing the action S corresponding to \mathcal L just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purposes. Defining the time order to be the operator order:

[x, \dot x] = x {dx\over dt} - {dx \over dt} x = 1

This is called the Itō lemma in stochastic calculus, and the (euclideanized) canonical commutation relations in physics. For a general statistical action, a similar argument shows that

\left[x , {\partial S \over \partial \dot x} \right] = 1

and in quantum mechanics, the extra imaginary unit in the action converts this to the canonical commutation relation,

[x,p ] ={\rm i}

Particle in curved space

For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied; this is a manifestation of the notorious operator-ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation (a nonholonomic mapping).

The path integral and the partition function

The path integral is just the generalization of the integral above to all quantum mechanical problems:

Z = \int e^{{\rm i}\mathcal{S}[x]/\hbar} Dx\,  where  \mathcal{S}[x]=\int_0^T L[x(t)] \mathrm{d}t

is the action of the classical problem in which one investigates the path starting at time t = 0 and ending at time t = T, and Dx denotes integration over all paths. In the classical limit, \mathcal{S}[x] \gg \hbar, the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel.[8] The connection with statistical mechanics follows. Considering only paths which begin and end in the same configuration, perform the Wick rotation t → it, i.e., make time imaginary, and integrate over all possible beginning/ending configurations. The path integral now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature β = T/ħ. Strictly speaking, though, this is the partition function for a statistical field theory.
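This correspondence can be checked numerically in a minimal sketch (illustrative code of mine, with assumed parameters, not part of the original text): slice the Euclidean path integral of a harmonic oscillator into single-step kernels, multiply them as matrices, and compare the trace with the exact partition function Σ_n e^{-(n+1/2)T} = 1/(2 sinh(T/2)), taking ħ = m = ω = 1.

    import numpy as np

    # Time-sliced Euclidean path integral for V(x) = x^2/2.  One slice is
    #   K_eps(x, x') = exp(-(x-x')^2/(2 eps) - eps V((x+x')/2)) / sqrt(2 pi eps)
    # and n slices multiplied as matrices approximate exp(-H T), T = n*eps.
    x = np.linspace(-8.0, 8.0, 400)
    dx = x[1] - x[0]
    eps, n = 0.01, 200                              # total time T = 2
    xi, xj = np.meshgrid(x, x, indexing="ij")
    K = np.exp(-(xi - xj) ** 2 / (2 * eps) - eps * 0.5 * ((xi + xj) / 2) ** 2)
    K *= dx / np.sqrt(2 * np.pi * eps)              # measure + normalization
    Z = np.trace(np.linalg.matrix_power(K, n))      # Tr[exp(-H T)]
    T = n * eps
    print(f"sliced Z = {Z:.4f}, exact 1/(2 sinh(T/2)) = {1/(2*np.sinh(T/2)):.4f}")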
Clearly, such a deep analogy between quantum mechanics and statistical mechanics cannot be dependent on the formulation. In the canonical formulation, one sees that the unitary evolution operator of a state is given by

|\alpha;t\rangle=e^{-{\rm i}Ht / \hbar}|\alpha;0\rangle

where the state α is evolved from time t = 0. If one makes a Wick rotation here, the amplitude to go from any state back to the same state in (imaginary) time iT is given by

Z={\rm Tr} [e^{-HT / \hbar}]

which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation.

Measure-theoretic factors

Sometimes (e.g. a particle moving in curved space) we also have measure-theoretic factors in the functional integral:

\int \mu[x] e^{iS[x]} \mathcal{D}x

This factor is needed to restore unitarity. For instance, if

S=\int \left[ \frac{m}{2}g_{ij}\dot{x}^i\dot{x}^j - V(x) \right] dt,

then it means that each spatial slice is multiplied by the measure √g. This measure can't be expressed as a functional multiplying the \mathcal{D}x measure because they belong to entirely different classes.

Quantum field theory

The path integral formulation was very important for the development of quantum field theory. Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time, and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation

[\phi(x),\partial_t \phi(y) ] = {\rm i} \delta^3(x-y) \,

for x and y two simultaneous spatial positions, and this is not a relativistically invariant concept. The results of a calculation are covariant at the end of the day, but the symmetry is not apparent in intermediate stages. If naive field theory calculations did not produce infinite answers in the continuum limit, this would not have been such a big problem – it would just have been a bad choice of coordinates. But the lack of symmetry means that the infinite quantities must be cut off, and the bad coordinates make it nearly impossible to cut off the theory without spoiling the symmetry. This makes it difficult to extract the physical predictions, which require a careful limiting procedure. The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg-type operator algebra to operator product rules, which are new relations difficult to see in the old formalism. Further, different choices of canonical variables lead to very different-seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete. The price of a path integral representation is that the unitarity of a theory is no longer self-evident, but it can be proven by changing variables to some canonical representation.
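One example of the care required is that the integration variables themselves are non-commuting, as derived in the canonical-commutation section above. That derivation rested on the claim that (x(t+ε) - x(t))^2/ε averages to 1 for a Brownian walk; the sketch below (illustrative code of mine, with assumed parameters) checks this by sampling walk increments.

    import numpy as np

    # Sample Brownian-walk increments and check that the fluctuating
    # quantity f(t) = (x(t+eps) - x(t))^2 / eps averages to 1 -- the
    # Ito term behind the (euclideanized) canonical commutator.
    rng = np.random.default_rng(0)
    eps, nsteps, nwalks = 0.001, 1000, 2000
    dW = rng.normal(0.0, np.sqrt(eps), size=(nwalks, nsteps))  # increments
    f = dW**2 / eps
    print(f"mean of f over all samples: {f.mean():.4f} (expect 1)")
    print(f"time average over one walk: {f[0].mean():.4f} (expect ~1)")

The per-walk time average also hovers near 1, matching the statement that f(t) equals 1 when averaged over any interval.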
The path integral itself also deals with larger mathematical spaces than is usual, which requires more careful mathematics, not all of which has been fully worked out. The path integral historically was not immediately accepted, partly because it took many years to incorporate fermions properly. This required physicists to invent an entirely new mathematical object – the Grassmann variable – which also allowed changes of variables to be done naturally, as well as allowing constrained quantization. The integration variables in the path integral are subtly non-commuting. The value of the product of two field operators at what looks like the same point depends on how the two points are ordered in space and time. This makes some naive identities fail.

The propagator

In relativistic theories, there is both a particle and a field representation for every theory. The field representation is a sum over all field configurations, and the particle representation is a sum over different particle paths. The nonrelativistic formulation is traditionally given in terms of particle paths, not fields. There, the path integral in the usual variables, with fixed boundary conditions, gives the probability amplitude for a particle to go from point x to point y in time T:

K(x,y;T) = \langle y;T|x;0 \rangle = \int_{x(0)=x}^{x(T)=y} e^{i S[x]} Dx

This is called the propagator. Superposing different values of the initial position x with an arbitrary initial state \psi_0(x) constructs the final state:

\psi_T(y) = \int_{x} \psi_0(x) K(x,y;T) dx = \int^{x(T)=y} \psi_0(x(0)) e^{i S[x]} Dx

For a spatially homogeneous system, where K(x, y) is only a function of (x − y), the integral is a convolution; the final state is the initial state convolved with the propagator:

\psi_T = \psi_0 * K(;T)

For a free particle of mass m, the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time and the solution must be a normalized Gaussian:

K(x,y;T) \propto e^{i m(x-y)^2\over 2T}

Taking the Fourier transform in (x − y) produces another Gaussian:

K(p;T) = e^{i T p^2\over 2m}

and in p-space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending K(p; T) to be zero for negative times, gives the Green's function, or the frequency-space propagator:

G_F(p,E) = {-i \over E - {\vec{p}^2\over 2m} + i\epsilon}

which is the reciprocal of the operator that annihilates the wavefunction in the Schrödinger equation; this wouldn't have come out right if the proportionality factor weren't constant in the p-space representation. The infinitesimal term in the denominator is a small positive number which guarantees that the inverse Fourier transform in E will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of E where there is no singularity. This guarantees that K propagates the particle into the future and is the reason for the subscript "F" on G. The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time. It is also possible to reexpress the nonrelativistic time evolution in terms of propagators which go toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian t is replaced by (−t).
In this case, the interpretation is that these are the quantities to convolve the final wavefunction with so as to get the initial wavefunction:

G_B(p,E) = { - i \over - E - {\vec{p}^2\over 2m} + i\epsilon}

The two are nearly identical; the only changes are the signs of E and ε. The parameter E in the Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past. For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths which travel between two points in a fixed proper time \mathcal{T}, as measured along the path. These paths describe the trajectory of a particle in space and in time:

K(x-y,\mathcal{T}) = \int_{x(0)=x}^{x(\mathcal{T})=y} e^{i \int_0^{\mathcal{T}} \left(\sqrt{\dot x^2} - \alpha\right) d\tau}

The integral above is not trivial to interpret, because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arclength of the path of an oscillating quantity, and like the nonrelativistic path integral it should be interpreted as slightly rotated into imaginary time. The function K(x-y,\mathcal{T}) can be evaluated when the sum is over paths in Euclidean space:

K(x-y,\mathcal{T}) = e^{-\alpha \mathcal{T}} \int_{x(0)=x}^{x(\mathcal{T})=y} e^{-L}

where L is the length of the path. This describes a sum over all paths of length \mathcal{T} of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to \mathcal{T}, and each step is less likely the longer it is. By the central limit theorem, the result of many independent steps is a Gaussian of variance proportional to \mathcal{T}:

K(x-y,\mathcal{T}) = e^{-\alpha \mathcal{T}} e^{-(x-y)^2\over \mathcal{T}}

The usual definition of the relativistic propagator only asks for the amplitude to travel from x to y, after summing over all the possible proper times it could take:

K(x-y) = \int_0^{\infty} K(x-y,\mathcal{T}) W(\mathcal{T}) d\mathcal{T}

where W(\mathcal{T}) is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor and can be absorbed into the constant α:

K(x-y) = \int_0^{\infty} e^{-{(x-y)^2\over \mathcal{T}} -\alpha \mathcal{T}} d\mathcal{T}

This is the Schwinger representation. Taking a Fourier transform over the variable (x − y) can be done for each value of \mathcal{T} separately, and because each separate \mathcal{T} contribution is a Gaussian, its Fourier transform is another Gaussian with reciprocal width. So in p-space the propagator can be reexpressed simply:

K(p) = \int_0^{\infty} e^{-\mathcal{T} p^2 - \mathcal{T} \alpha} d\mathcal{T} = {1\over p^2 + \alpha }

which is the Euclidean propagator for a scalar particle. Rotating p0 to be imaginary gives the usual relativistic propagator, up to a factor of (−i) and an ambiguity which will be clarified below:

K(p) = {i\over p_0^2 - \vec{p}^2 - m^2}

This expression can be interpreted in the nonrelativistic limit, where it is convenient to split it by partial fractions:

2 p_0 K(p) = {i \over p_0 - \sqrt{\vec{p}^2 + m^2}} + {i \over p_0 + \sqrt{\vec{p}^2 + m^2}}

For states where one nonrelativistic particle is present, the initial wavefunction has a frequency distribution concentrated near p0 = m.
When convolving with the propagator, which in p space just means multiplying by the propagator, the second term is suppressed and the first term is enhanced. For frequencies near p0 = m, the dominant first term has the form:

2m K_\mathrm{NR}(p) = {i \over (p_0-m) - {\vec{p}^2\over 2m} }

This is the expression for the nonrelativistic Green's function of a free Schrödinger particle. The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies which are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle. The nonrelativistic analysis shows that with this form the antiparticle still has positive energy. The proper way to express this mathematically is that, adding a small suppression factor in proper time, the limit where t → −∞ of the first term must vanish, while the t → +∞ limit of the second term must vanish. In the Fourier transform, this means shifting the pole in p0 slightly, so that the inverse Fourier transform will pick up a small decay factor in one of the time directions:

K(p) = {i \over p_0 - \sqrt{\vec{p}^2 + m^2} + i\epsilon} + {i \over p_0 + \sqrt{\vec{p}^2+m^2} - i\epsilon}

Without these terms, the pole contribution could not be unambiguously evaluated when taking the inverse Fourier transform of p0. The terms can be recombined:

K(p) = { i \over {p^2 - m^2 + i\epsilon}}

which, when factored, produces opposite-sign infinitesimal terms in each factor. This is the mathematically precise form of the relativistic particle propagator, free of any ambiguities. The ε term introduces a small imaginary part to α = m², which in the Minkowski version is a small exponential suppression of long paths. So in the relativistic case, the Feynman path-integral representation of the propagator includes paths which go backwards in time, which describe antiparticles. The paths which contribute to the relativistic propagator go forward and backwards in time, and the interpretation of this is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again. Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses which are nonzero outside the lightcone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function which is only nonzero in the future in a relativistically invariant theory.

Functionals of fields

However, the path integral formulation is also extremely important in direct application to quantum field theory, in which the "paths" or histories being considered are not the motions of a single particle, but the possible time evolutions of a field over all space. The action is referred to technically as a functional of the field: S[ϕ], where the field ϕ(xμ) is itself a function of space and time, and the square brackets are a reminder that the action depends on all the field's values everywhere, not just some particular value. In principle, one integrates Feynman's amplitude over the class of all possible combinations of values that the field could have anywhere in space–time.
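In zero spacetime dimensions this prescription collapses to an ordinary integral over a single variable φ, which makes a convenient sandbox. The sketch below is illustrative code of mine: the quartic coupling λ and the Euclidean weight e^{-S} (chosen for convergence) are my assumptions, not part of the original text. It computes the expectation value ⟨φ²⟩ as a ratio of weighted integrals, mirroring the expectation-value formula that follows.

    import numpy as np

    # A "path integral" in zero dimensions is an ordinary integral.
    # Euclidean action S = phi^2/2 + lam*phi^4; compute <phi^2> as the
    # ratio of two weighted integrals by simple quadrature.
    lam = 0.1
    phi = np.linspace(-10.0, 10.0, 20001)
    w = np.exp(-(phi**2 / 2 + lam * phi**4))    # e^{-S[phi]}
    phi2 = np.sum(phi**2 * w) / np.sum(w)       # grid spacing cancels in the ratio
    print(f"<phi^2> = {phi2:.5f} (1 in the free theory, smaller for lam > 0)")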
Much of the formal study of QFT is devoted to the properties of the resulting functional integral, and much effort (not yet entirely successful) has been made toward making these functional integrals mathematically precise. Such a functional integral is extremely similar to the partition function in statistical mechanics. Indeed, it is sometimes called a partition function, and the two are essentially mathematically identical except for the factor of i in the exponent in Feynman's postulate 3. Analytically continuing the integral to an imaginary time variable (called a Wick rotation) makes the functional integral even more like a statistical partition function, and also tames some of the mathematical difficulties of working with these integrals.

Expectation values

In quantum field theory, if the action is given by the functional \mathcal{S} of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of a polynomially bounded functional F, \left\langle F\right\rangle, is given by

\left\langle F\right\rangle=\frac{\int \mathcal{D}\phi F[\phi]e^{i\mathcal{S}[\phi]}}{\int\mathcal{D}\phi e^{i\mathcal{S}[\phi]}}

The symbol \int \mathcal{D}\phi here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space–time. The unadorned path integral in the denominator normalizes everything properly.

As a probability

Strictly speaking, the only question that can be asked in physics is: "What fraction of states satisfying condition A also satisfy condition B?" The answer to this is a number between 0 and 1, which can be interpreted as a probability, written as P(B|A). In terms of path integration, since

P(B|A) = \frac{P(A \cap B)}{P(A)}

this means:

P(B|A) = \frac{\sum_{F\subset A \cap B}\left| \int \mathcal{D}\phi O_{in}[\phi]e^{i\mathcal{S}[\phi]} F[\phi]\right|^2}{\sum_{F\subset A} \left|\int\mathcal{D}\phi O_{in}[\phi] e^{i\mathcal{S}[\phi]} F[\phi]\right|^2}

where the functional Oin[ϕ] is the superposition of all incoming states that could lead to the states we are interested in. In particular this could be a state corresponding to the state of the Universe just after the big bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalised.

Schwinger–Dyson equations

Since this formulation of quantum mechanics is analogous to classical action principles, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case. In the language of functional analysis, we can write the Euler–Lagrange equations as

\frac{\delta \mathcal{S}[\phi]}{\delta \phi}=0

(the left-hand side is a functional derivative; the equation means that the action is stationary under small changes in the field configuration). The quantum analogues of these equations are called the Schwinger–Dyson equations.
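Before the derivation, it may help to see what the identity looks like in the zero-dimensional Euclidean toy model used above, where integrating by parts in the ordinary integral gives ⟨dF/dφ⟩ = ⟨F dS/dφ⟩ (the -i of the Minkowski version below becomes 1 after Wick rotation). A quadrature check with F = φ, in illustrative code of mine with the same assumed action as before:

    import numpy as np

    # Schwinger-Dyson in zero dimensions: from
    #   0 = Int dphi  d/dphi ( F e^{-S} )
    # one gets <F'> = <F S'>.  Check it for F = phi, S = phi^2/2 + lam*phi^4.
    lam = 0.1
    phi = np.linspace(-10.0, 10.0, 20001)
    w = np.exp(-(phi**2 / 2 + lam * phi**4))
    avg = lambda f: np.sum(f * w) / np.sum(w)
    lhs = avg(np.ones_like(phi))                # <dF/dphi> = <1>
    rhs = avg(phi * (phi + 4 * lam * phi**3))   # <F dS/dphi>
    print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")  # should agree closely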
If the functional measure \mathcal{D}\phi turns out to be translationally invariant (we'll assume this for the rest of this article, although it does not hold for, say, nonlinear sigma models), and if we assume that e^{i\mathcal{S}[\phi]}, which after a Wick rotation becomes e^{-H[\phi]} for some H, goes to zero faster than the reciprocal of any polynomial for large values of φ, then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation:

\left\langle \frac{\delta F[\phi]}{\delta \phi} \right\rangle = -i \left\langle F[\phi]\frac{\delta \mathcal{S}[\phi]}{\delta\phi} \right\rangle

for any polynomially bounded functional F. In the deWitt notation this reads

\left\langle F_{,i} \right\rangle = -i \left\langle F \mathcal{S}_{,i} \right\rangle

These equations are the analog of the on-shell Euler–Lagrange equations. If J (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional Z of the source fields is defined to be:

Z[J]=\int \mathcal{D}\phi e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}.

Note that

\frac{\delta^n Z}{\delta J(x_1) \cdots \delta J(x_n)}[J] = i^n \, Z[J] \, {\left\langle \phi(x_1)\cdots \phi(x_n)\right\rangle}_J

or, in the deWitt notation,

Z^{,i_1\dots i_n}[J]=i^n Z[J] {\left \langle \phi^{i_1}\cdots \phi^{i_n}\right\rangle}_J

where the source-dependent expectation value is

{\left\langle F \right\rangle}_J=\frac{\int \mathcal{D}\phi F[\phi]e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}}{\int\mathcal{D}\phi e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}}.

Basically, if \mathcal{D}\phi e^{i\mathcal{S}[\phi]} is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time ordering complications here!), then \left\langle\phi(x_1)\cdots \phi(x_n)\right\rangle are its moments and Z is its Fourier transform. If F is of the form

F[\phi]=\frac{\partial^{k_1}}{\partial x_1^{k_1}}\phi(x_1)\cdots \frac{\partial^{k_n}}{\partial x_n^{k_n}}\phi(x_n)

and G is a functional of J, then

F\left[-i\frac{\delta}{\delta J}\right] G[J] = (-i)^n \frac{\partial^{k_1}}{\partial x_1^{k_1}}\frac{\delta}{\delta J(x_1)} \cdots \frac{\partial^{k_n}}{\partial x_n^{k_n}}\frac{\delta}{\delta J(x_n)} G[J].

Then, from the properties of the functional integrals

{\left \langle \frac{\delta \mathcal{S}}{\delta \phi(x)}\left[\phi \right]+J(x)\right\rangle}_J=0

we get the "master" Schwinger–Dyson equation:

\mathcal{S}_{,i}\left[-i\frac{\delta}{\delta J}\right]Z+J_i Z=0.

If the functional measure is not translationally invariant, it might be possible to express it as the product M\left[\phi\right]\,\mathcal{D}\phi where M is a functional and \mathcal{D}\phi is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to R^n. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense. In that case, we would have to replace the \mathcal{S} in this equation by another functional

\hat{\mathcal{S}}=\mathcal{S}-i\ln(M)

The path integrals are usually thought of as being the sum of all paths through an infinite space–time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone.
This gives a more mathematically precise and physically rigorous definition of quantum field theory.

Functional identity

If we perform a Wick rotation inside the functional integral, J. Garcia and Gerard 't Hooft showed, using a functional differential equation, that

\int D[x]e^{-\mathcal{S}[x]/\hbar}=-A[x]\sum_{n=0}^{\infty}(\hbar)^{n+1}\delta^{n} e^{-J/\hbar} \text{,}

where S is the Wick-rotated classical action of the particle, J is the classical action with an extra term "x", δ (here) is the functional derivative operator, and

A[x]=\exp\left(\frac{1}{\hbar}\int X(t)\,\mathrm{d}t\right) \text{.}

Ward–Takahashi identities

See the main article Ward–Takahashi identity. Now how about the on-shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat. The functional measure would have to be invariant under the one-parameter group of symmetry transformations as well. Let's just assume for simplicity here that the symmetry in question is local (not local in the sense of a gauge symmetry, but in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question). Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian, and that

Q[\mathcal{L}(x)]=\partial_\mu f^\mu (x)

for some function f, where f only depends locally on φ (and possibly the spacetime position). If we don't assume any special boundary conditions, this would not be a "true" symmetry in the true sense of the term in general unless f=0 or something. Here, Q is a derivation which generates the one-parameter group in question. We could have antiderivations as well, such as BRST and supersymmetry. Let's also assume

\int \mathcal{D}\phi Q[F][\phi]=0

for any polynomially bounded functional F. This property is called the invariance of the measure, and it does not hold in general; see anomaly (physics) for more details. Then

\int \mathcal{D}\phi\, Q\left[F e^{iS}\right][\phi]=0,

which implies

\left\langle Q[F]\right\rangle +i\left\langle F\int_{\partial V} f^\mu ds_\mu\right\rangle=0

where the integral is over the boundary. This is the quantum analog of Noether's theorem. Now, let's assume even further that Q is a local integral

Q=\int d^dx q(x)

q(x)[\phi(y)] = \delta^{(d)}(x-y)Q[\phi(y)] \,

so that

q(x)[S]=\partial_\mu j^\mu (x) \,

with

j^{\mu}(x)=f^\mu(x)-\frac{\partial}{\partial (\partial_\mu \phi)}\mathcal{L}(x) Q[\phi] \,

(this is assuming the Lagrangian only depends on φ and its first partial derivatives! More general Lagrangians would require a modification to this definition!). Note that we're NOT insisting that q(x) is the generator of a symmetry (i.e. we are not insisting upon the gauge principle), but just that Q is. And we also make the even stronger assumption that the functional measure is locally invariant:

\int \mathcal{D}\phi\, q(x)[F][\phi]=0.

Then, we would have

\left\langle q(x)[F] \right\rangle +i\left\langle F q(x)[S]\right\rangle=\left\langle q(x)[F]\right\rangle +i\left\langle F\partial_\mu j^\mu(x)\right\rangle=0.

In terms of the generating functional,

q(x)[S]\left[-i \frac{\delta}{\delta J}\right]Z[J]+J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=\partial_\mu j^\mu(x)\left[-i \frac{\delta}{\delta J}\right]Z[J]+J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=0.

The above two equations are the Ward–Takahashi identities.
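In the simplest situation the boundary term drops out entirely; as discussed next, this is the f = 0 case, where the identity reduces to ⟨Q[F]⟩ = 0. Here is a zero-dimensional sketch (illustrative code of mine; the O(2)-symmetric toy action is my assumption): the action depends only on φ₁² + φ₂², Q generates rotations, and for F = φ₁φ₂ one has Q[F] = φ₂² − φ₁², whose expectation must vanish.

    import numpy as np

    # f = 0 Ward identity, <Q[F]> = 0, in a zero-dimensional O(2) toy model.
    lam = 0.1
    g = np.linspace(-6.0, 6.0, 801)
    p1, p2 = np.meshgrid(g, g, indexing="ij")
    r2 = p1**2 + p2**2
    w = np.exp(-(r2 / 2 + lam * r2**2))   # rotation-invariant weight e^{-S}
    QF = p2**2 - p1**2                    # Q[F] for F = phi1*phi2
    print(f"<Q[F]> = {np.sum(QF * w) / np.sum(w):.2e}  (expect 0)")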
Now for the case where f=0, we can forget about all the boundary conditions and locality assumptions. We'd simply have

\left\langle Q[F]\right\rangle =0.

Or, in terms of the generating functional,

\int d^dx\, J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=0.

The need for regulators and renormalization

Path integrals as they are defined here require the introduction of regulators. Changing the scale of the regulator leads to the renormalization group. In fact, renormalization is the major obstruction to making path integrals well-defined.

The path integral in quantum-mechanical interpretation

In one philosophical interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental and reality is viewed as a single indistinguishable "class" of paths which all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin (see the reference below) claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality. (Note that the Copenhagen/pragmatist interpretation claims there is no paradox, only a sloppy, materialism-motivated question on the part of EPR (Joseph Weinberg, in a lecture). On the other hand, the fact that the EPR thought experiment (and its result) does represent the results of a QM experiment says that (despite the path dependence of parallelness/anti-parallelness in curved space) all contributions of paths close to black holes cancel in the action for an EPR-style experiment here on earth.) Some advocates of interpretations of quantum mechanics emphasizing decoherence have attempted to make more rigorous the notion of extracting a classical-like "coarse-grained" history from the space of all possible histories.

Quantum gravity

Whereas in quantum theory the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert-space model. Feynman had some success in this direction, and his work has been extended by Hawking and others.[9] Approaches that use this method include Causal Dynamical Triangulations, tensor models and spin foams.

References

1. ^ Masud Chaichian, Andrei Pavlovich Demichev (2001). "Introduction". Path Integrals in Physics Volume 1: Stochastic Process & Quantum Mechanics. Taylor & Francis. p. 1 ff. ISBN 0-7503-0801-X.
2. ^ Dirac, Paul A. M. (1933). "The Lagrangian in Quantum Mechanics". Physikalische Zeitschrift der Sowjetunion 3: 64–72. Also see Van Vleck, John H. (1928). "The correspondence principle in the statistical interpretation of quantum mechanics". Proceedings of the National Academy of Sciences of the United States of America 14 (2): 178–188. Bibcode:1928PNAS...14..178V. doi:10.1073/pnas.14.2.178. PMC 1085402. PMID 16577107.
3. ^ Kleinert, H. (1989). "6". Gauge Fields in Condensed Matter. 1: Superflow and Vortex Lines. Singapore: World Scientific. ISBN 9971-5-0210-0.
4. ^ Both noted that in the limit of action that is large compared to the reduced Planck's constant ħ (using natural units, ħ = 1), the path integral is dominated by solutions which are in the neighbourhood of stationary points of the action.
5. ^ Duru, H.; Hagen Kleinert (1979-06-18). "Solution of the path integral for the H-atom". Physics Letters 84B (2): 185. Bibcode:1979PhLB...84..185D.
doi:10.1016/0370-2693(79)90280-6. Retrieved 2007-11-25.
6. ^ For details see Chapter 13 in Kleinert's book cited above.
7. ^ Feynman, R. P. (1948). "Space-Time Approach to Non-Relativistic Quantum Mechanics". Reviews of Modern Physics 20 (2): 367–387. Bibcode:1948RvMP...20..367F. doi:10.1103/RevModPhys.20.367.
8. ^ Feynman, Richard P.; Hibbs, Albert R.; Styer, Daniel F. (2010). Quantum Mechanics and Path Integrals. Mineola, N.Y.: Dover Publications. pp. 29–31. ISBN 0-486-47722-3.
9. ^ "Most of the Good Stuff", Memories of Richard Feynman, edited by Laurie M. Brown and John S. Rigden, American Institute of Physics; the chapter by Murray Gell-Mann.

Note 1. ^ For a simplified, step-by-step derivation of the above relation, see Path Integrals in Quantum Theories: A Pedagogic 1st Step.
Junior Research Group (Nachwuchsgruppe) Dr. Nicole Helbig

Ab initio description of double and charge-transfer excitations: from solvable models to complex systems

In double excitations not only one but two electrons are excited. While this concept is strictly defined only for non-interacting electrons, one can identify excitations with a large double-excitation character in interacting systems as well. In a charge-transfer excitation the excited electron is localized in a different part of the system than before the excitation, i.e. the charge is transferred from one part of the system to another. In the endeavor for an ab initio understanding of the electronic structure of complex physical systems, double and charge-transfer excitations are both receiving increasing attention due to their possible technological relevance. The former are involved in many ultra-fast processes which are now experimentally accessible, while the latter are believed to be essential in explaining complex processes involved in photosynthesis. The challenges of describing double and charge-transfer excitations within a density-functional framework are related, since both require a functional which is non-local in space and time. Especially the non-locality in time, i.e. a frequency dependence, is missing from currently available functionals. Within this project we will develop a frequency-dependent density functional which will enable us to describe both double and charge-transfer excitations. Moreover, as an alternative approach we will employ reduced density-matrix functional theory, which has proven capable of solving many long-standing problems in density-functional theory. The properties of all functionals will be derived from exact calculations for one- and two-dimensional model systems where the interacting Schrödinger equation can be solved without approximations for a small number of particles.
Monthly Archives: September 2012

quickly skimming people's worries and joys on facebook this morning, i read on a friend's wall: "Dia mundial contra el cancer de mama" (world breast cancer day; written without the accent, "mama" also reads as "mamá", mom). and i think, "damn, what an important mom she must have, one with her own world day; i'll think of her too, i hope she recovers". since reading the day's walls is something one does with only relative interest and vague attention, it takes me about three seconds to realize what i've just done. spanish speakers, accent your words!

retards in the train

in the 3rd position, shame award: people who bring their house keys hanging from their neck in a red collar

in the 2nd position, idiocy award: people who talk to their dog (with the same voice tone you use with babies) and actually believe to be having a conversation with them

in the 1st position, disrespect award: people on the platform who get into the train car before letting the packed crowd inside get off

every once in a while, like today in the train, i temporarily lose faith in human intelligence

we are amazing

we are amazing. look, i just asked google to find information on "peeing in fresh fallen snow". i know, bear with me. i was speaking about that a couple of posts ago, so that's why i came up with such a sentence to look for. thing is, i hoped it would be a rare enough concept as to give google search a hard time giving me back any sensible information. i have no clue about the internal mechanics of the search engine, but i was naively expecting that, this being an infrequent query, the answer would not be cached anywhere and that a long search process would be run, possibly giving me some random links not related to the semantics of my query at all but to pages speaking of pee alone, or snow. so yeah, i just asked google to find information on "peeing in fresh fallen snow"; and in a blink, the search engine gave me a link to a page at the urban dictionary website which has the following definition: Urinart: Drawing a picture in freshly-fallen snow using urine and this, my friends, blows my mind in so, so many ways. first of all, the fact that the word urinarting has already been created is pretty awesome. secondly, that somebody invested the time to put this information online is also pretty amazing. third, that google was able to handle my weird query by crossing information with all sorts of unstructured sources of information out there and that it found this definition is seriously astonishing. fourth, that it did it in no more than 0.26 seconds is ridiculously impressive. fifth, that humans have reached this state of mastery in information manipulation and management, that we do have tools to store, classify and index information in such a cheap manner that not even the most daring science fiction author would possibly have dreamed of just 20 years ago, this is freaking mind blowing. i don't know. when i was a kid, before internet became popular at around '95, i would often have to cycle to the public library to physically scan shelves in order to search for an outdated version of the information i was looking for. my great-grandmother, who was born in a tiny village in the mountains around the time the light bulb was created, knew nothing about the world but what a guy in a black dress would tell her every Sunday morning in form of canticles and rituals. so look at it with a bit of perspective. we are a ridiculously plastic species.
high technology, low creativity

you know those crazy high tech cameras able to record thousands of frames per second, that cost $250,000? i wonder if they are of any use beyond recording random objects being blown up in slow motion like, say, water balloons in people's faces. seriously. it got so boooooooring

lucky me

being a man has some advantages, and some disadvantages. among the former, there is that of, when in the mountains during winter, being able to write your name by peeing in a bunch of new fallen snow (americans do have it easier since they have it really short. their name, i mean – just one syllable most of the times). the joy of this realization is immense

make up in the train

in every early morning bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of sleep back home. they change the eyeliner for a mascara applier and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network on their smart phones really matter to them. like the work colleagues they are about to meet in the office or the new clients they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

in every late night bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of rest back home. they change the eyeliner for a mascara applier and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network on their smart phones really matter to them. like the best friends they are about to meet in the pub or the new strangers they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

wrong again – still thinking the universe turns around us

today i saw this image below in a blog dedicated to science, and i got immediately sad, cause it reminds me that even people doing science themselves don't always really get it – they seem to not fully understand what science is about. the statement above is basically saying that a perfect world doesn't have discontinuities – that things change slowly without abrupt alterations, that things that are a lot don't become a little suddenly without ramping down gradually, that if things are here now and they will be there later it's only because they are going to be "in between" before. basically the image is claiming that in a perfect world things are not broken but smooth. that's not true though: reality, the world, the things around us – everything is mostly broken and discontinuous. whoever wrote that blog post above noted it by implying that this world is indeed not perfect or ideal. see, this is my problem – there's nothing wrong with the world. the world is ideal; it's not the imperfect thing Plato thought it was (with terrible consequences for western culture as we know it). the world is doing just fine, believe me. let me repeat it: the world is doing just fine. humans aren't. indeed, it is our mathematics that is not ideal. or at least, it is not up to the task of describing efficiently everything around us, discontinuities included.
but surely enough, the universe is full of discontinuities at all scales (it can really get pretty fractal-like sometimes); it's not made of boring spheres and planes as Galileo wrongly claimed, nor is it made out of derivatives, ordinary differential equations and other human abstractions. an ideal world does not follow lim {x->c} f(x) = f(c). and this is not a problem. it's a gift. on the contrary, in an ideal world humans enjoy less primitive mathematics than ours, some mathematics that allows us to describe and model and manipulate discontinuities and all the other beautiful features of the things we see around us. basically, we humans have a problem; the universe doesn't. thinking that an ideal world is one where the universe follows our thinking process (and not the other way around) is simply too much of a human egocentric position. which, ironically, the scientific community has always proudly claimed to refrain from. thing is that science too fails to do so sometimes, for humans have this tendency of making the universe orbit around them. even some scientists. still today. i know. sigh.

words with one syllable…

…there are quite a lot. and in spite of the fact that they are short, you can still say quite a lot with them. but since it can still get quite hard to say any long phrase too, and just for the sake of fun, i thought we might play this game where we only talk with them. what do you think, shall we give it a try? well, read this text back – it's your turn now!

signature to a surreal painting

it's a pretty regular fall day, not cold, not warm, a bit cloudy, but not overcast. just a pretty regular fall day, and just that. while in the last few meters of pedaling till my home i think i should probably go grocery shopping before the stores close. so i climb the stairs, leave the iñicleta (my bicycle's name), and head downstairs again. as i open the door to leave the building i notice something weird. i see some orange colors everywhere, like if there was some nearby building on fire or something. alarmed, i look around and i notice that it's not any building nor car, but the sky, which is orange and purple, tinting everything in deep saturated orange. it's pretty gorgeous in fact. amazingly beautiful. extraordinary, such vivid colors, it's completely surreal, i've certainly never seen anything like this in my life. i see lots of people looking up at the sky too. there's a rainbow. no, two rainbows! but i don't mind; at this moment it's not the sky colors nor the double rainbows, but the fact that the streets are full of people looking at the sky. people have left the shops, restaurants and cars and stopped whatever they were doing in order to look up at the sky. it's an amazing phenomenon. not only the sky, the rainbows and the crazy colors of the city in orange and purple fire, but also seeing how everybody is amazed at the spectacle and we are all looking up at the sky. to this fantastic surreal painting that we are part of, the double rainbow is nothing but the perfect signature.

closing circles, coincidences and flashbacks

i love it when random facts/events connect together. the connection often happens in the form of a flashback. event #1: i just woke up in a pretty fancy hotel in downtown LA. the first thing to do in this sunny morning is to perform some exploration and try to identify a place for breakfast.
so i start walking, and pass by a huge library that has this huge metallic plate with some equations on physics (or, for that matter, on that gray area where physics meets chemistry). of course i pause my walk and have a closer look at it. i cannot tell exactly what they are; i only recognize what looks to me like Heisenberg's uncertainty principle (but i'm a bit unsure, as this is not an area of science where i am exactly comfortable). but it is clear to me that this is about quantum physics, that's all i can tell. intuitively E seems to be some sort of force or potential to me, given how it gets subtracted from itself in the last equation and how it acts as a driving/forced excitation in the third. but who knows. yet, i cannot stop looking at the third equation – it really catches my attention, as its shape feels sort of familiar. i look at it more closely, and i realize it's indeed a Helmholtz equation plus an external force, an equation that in isolation expresses the change of the change of something as being proportional to the thing itself (yes, two changes, this is, the laplacian). these sorts of equations/behaviors are common in electrical engineering, and result in all sorts of wave equations. but of course i don't recognize the quantities in this particular wave equation at all, so i have no idea what the subject of the equation is. only that it must be describing something in quantum physics and that, since after taking changes (derivatives) of it twice it still remains proportional to itself, it must be some sort of harmonic function, something that oscillates. indeed harmonic functions (which are eigenfunctions of the laplacian) result in stuff that oscillates like a pendulum, or like a wave (therefore the name of these equations). oscillation means cosine functions (in 1D), complex exponentials (in 2D) or spherical harmonics (in radial 3D). so whatever this equation is describing, it is something that undulates like a wave. of course at this point i cannot go further, and since i'm still hungry and the reason for this walk was to fulfill my stomach's needs, i take a picture of the equations, which is my very first picture in LA, and i continue walking. i'll probably never see these equations again in my life.

picture taken at the entrance to a library in LA

event #2: i'm chatting with my friend, to whom i hadn't talked in the last few weeks. today she has been preparing some notes for a course for undergraduate students of chemistry, and she expresses her concern about how to best introduce Schrödinger's equation as an introduction, without alienating them with an abstract understanding of what it means. of course, i have no idea myself what the heck she's talking about, but science lover as i am, my first reaction is of course to go to Wikipedia and look for "Schrödinger equation". as soon as i start reading i realize how rotten my memories in physics are. i soon lose any hope of understanding anything in this article, unless i were to spend a couple of days diving into the subject, which i of course have no time to do. but at least i now know what she's talking about. sort of. very superficially. i'm about to close the page, but i poke one more page-down in the article, and there suddenly i see something that produces an instantaneous flashback. there is an equation there that i have seen before. not that i've been trained in equation matching and detection or anything, but this one equation, yes, i have seen it before.
i quickly go to my phone, and search for the picture i took in LA a few weeks before. and…. match!!! yay, that Helmholtz equation i saw in LA was this famous Schrödinger's equation thingy, and from the little bit i understand of this article it seems it has something to do with physics/chemistry and the study of the atom. so that's what that thing in LA was, cool! of course at this point i cannot go further, and since we are talking about other topics already anyway, i close the Wikipedia. i'll probably never see these equations again in my life. event #3: weeks later my friend asks me for advice/help in realtime visualization of atomic structures, because she believes that may probably help her fellow students understand what's going on in three dimensional space. i receive the notes she is preparing for the students so i can see the context in which the visualization is needed. i'm reading the notes during my morning commute on the b.a.r.t., and my eyes bump into one of the diagrams she had. "eh, wait a minute!". i have seen these diagrams before when working with the essentials of lighting in computer graphics. or are they just some similar diagrams? they look exactly the same to me, hm. i read the preceding paragraphs, and i see two indices called m and l related to these diagrams, m running from -l to l. pretty much like indices to Legendre polynomials. ok, this cannot be an accident, these are spherical harmonics. like in computer graphics. like in electrical engineering. i get an instantaneous flashback again. Legendre, Harmonics, Helmholtz, Schrödinger!! electromagnetic wave propagation, visibility encoding for computer graphics, atoms!! i read the full notes, and indeed, it feels like a present given to me after all these years since i last studied the s, p, d and f atom orbitals at school. now, 17 years later, i finally learn what they actually are, or more correctly, why they are the way they are! where they come from, how to solve them, how to describe them! how exciting! but of course at this point i cannot go further, and since i'm heading to work and finally made it to my station, and i'm running late, i stop reading the notes here. but this time i won't say that i'll probably never see these equations again in my life. i love the tickles it produced in my spirit to close this circle today. relating things i know today to things i learnt no less than 17 years ago, as if they had been waiting for the connection to happen. learning is fascinating. and when it happens this way, even more. and all thanks to that metallic panel at the doors to that library in Los Angeles that one morning. poor hurricane reporter there aren't many things more humiliating than being the hurricane reporter. your dignity gets miserably ruined forever, in front of the whole world, while you wear that ridiculous slicker and wellingtons, you fight the wind while trying to speak into the mic and your face gets slapped over and over again by your hoodie. i mean, was it really necessary to send anybody there to report the news? i can imagine the conversation that same morning in the office: – hey, have you met the new guy yet? – the intern? – yep, Mr Look At Me I'm A Professional Journalist. i think we should teach him how things really are over here. – you know what, they told me there's a hurricane coming tonight in Texas… looking back looking at the contacts in my phone seems like looking back into the past.
it brings old memories of good times through names that i had almost forgotten, names that, like a thread i can pull from, allow me to recover amazingly vivid moments, situations, experiences, places, people, moods, expectations, smells, adventures, ideas, interests, sounds and songs that would otherwise have sunk and gotten lost forever in an ocean of past times. a few of these names belong to people i met 15 years ago and that i'm still in touch with, and many other names belong to people i only met for 15 minutes. sometimes even less. but regardless of that, as i scroll the contact list i take a moment to think about how i met every single one of these people, in which context. and regardless of that too, sometimes it all comes automatically in a fraction of a second, sharp and vivid, while other times i have to make an effort, as if for some reason the memory had decided to slip away, perhaps with the complicity of the person the memory is about or with my own. but in the end all memories come back, one by one; and as i scroll this list down, for every one of these names, i recover a bit of that self i once was. looking at this contact list in the phone really seems like looking back into the past.
Some Definitions Both physics and philosophy are jargon-ridden. So let's first define some key concepts. Both "consciousness" and "physical" are contested terms. Accurately if inelegantly, consciousness may be described following Nagel ("What is it like to be a bat?") as the subjective what-it's-like-ness of experience. Academic philosophers term such self-intimating "raw feels" "qualia" – whether macro-qualia or micro-qualia. The minimum unit of consciousness (or "psychon", so to speak) has been variously claimed to be the entire universe, a person, a sub-personal neural network, an individual neuron, or the most basic entities recognised by quantum physics. In The Principles of Psychology (1890), American philosopher and psychologist William James christened these phenomenal simples "primordial mind-dust". This paper conjectures that (1) our minds consist of ultra-rapidly decohering neuronal superpositions in strict accordance with unmodified quantum physics without the mythical "collapse of the wavefunction"; (2) natural selection has harnessed the properties of these neuronal superpositions so our minds run phenomenally-bound world-simulations; and (3) that, with enough ingenuity, the non-classical interference signature of these conscious neuronal superpositions will be independently experimentally detectable (see 6 below) to the satisfaction of the most incredulous critic. The "physical" may be contrasted with the supernatural or the abstract and – by dualists and epiphenomenalists – with the mental. The current absence of any satisfactory "positive" definition of the physical leads many philosophers of science to adopt instead the "via negativa". Thus some materialists have sought stipulatively to define the physical in terms of an absence of phenomenal experience. Such a priori definitions of the nature of the physical are question-begging. "Physicalism" is sometimes treated as the formalistic claim that the natural world is exhaustively described by the equations of physics and their solutions. Beyond these structural-relational properties of matter and energy, the term "physicalism" is also often used to make an ontological claim about the intrinsic character of whatever the equations describe. This intrinsic character, or metaphysical essence, is typically assumed to be non-phenomenal. "Strawsonian physicalists" (cf. "Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?") dispute any such assumption. Traditional reductive physicalism proposes that the properties of larger entities are determined by the properties of their physical parts. If the wavefunction monism of post-Everett quantum mechanics assumed here is true, then the world does not contain discrete physical parts as understood by classical physics. "Materialism" is the metaphysical doctrine that the world is made of intrinsically non-phenomenal "stuff". Materialism and physicalism are often treated as cousins and sometimes as mere stylistic variants – with "physicalism" used as a nod to how bosonic fields, for example, are not matter. "Physicalistic materialism" is the claim that physical reality is fundamentally non-experiential and that the natural world is exhaustively described by the equations of physics and their solutions. "Panpsychism" is the doctrine that the world's fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn't claim that the world's fundamental physical stuff is experiential.
"Epiphenomenalism" in philosophy of mind is the view that experience is caused by material states or events in the brain but does not itself cause anything; the causal efficacy of mental agency is an illusion. For our purposes, "idealism" is the ontological claim that reality is fundamentally experiential. This use of the term should be distinguished from Berkeleyan idealism, and more generally, from subjective idealism, i.e. the doctrine that only mental contents exist: reality is mind-dependent. One potential source of confusion of contemporary scientific idealism with traditional philosophical idealism is the use by inferential realists in the theory of perception of the term "world-simulation". The mind-dependence of one's phenomenal world-simulation, i.e. the quasi-classical world of one's everyday experience, does not entail the idealist claim that the mind-independent physical world is intrinsically experiential in nature – a far bolder conjecture that we nonetheless tentatively defend here. "Physicalistic idealism" is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions: more specifically, by the continuous, linear, unitary evolution of the universal wavefunction of post-Everett quantum mechanics. The "decoherence program" in contemporary theoretical physics aims to show in a rigorously quantitative manner how quasi-classicality emerges from the unitary dynamics. "Monism" is the conjecture that reality consists of a single kind of "stuff" – be it material, experiential, spiritual, or whatever. Wavefunction monism is the view that the universal wavefunction mathematically represents, exhaustively, all there is in the world. Strictly speaking, wavefunction monism shouldn't be construed as the claim that reality literally consists of a certain function, i.e. a mapping from some mind-wrenchingly immense configuration space to the complex numbers, but rather as the claim that every mathematical property of the wavefunction except the overall phase corresponds to some property of the physical world. "Dualism", the conjecture that reality consists of two kinds of "stuff", comes in many flavours: naturalistic and theological; interactionist and non-interactionist; property and ontological. In the modern era, most scientifically literate monists have been materialists. But to describe oneself as both a physicalist and a monistic idealist is not the schizophrenic word-salad it sounds at first blush. "Functionalism" in philosophy of mind is the theory that mental states are constituted solely by their functional role, i.e. by their causal relations to other mental states, perceptual inputs, and behavioural outputs. Functionalism is often associated with the idea of "substrate-neutrality", sometimes misnamed "substrate-independence", i.e. minds can be realised in multiple substrates and at multiple levels of abstraction. However, micro-functionalists may dispute substrate-neutrality on the grounds that one or more properties of mind, for example phenomenal binding, functionally implicate the world's quantum-mechanical bedrock from which the quasi-classical worlds of Everett's multiverse emerge. Thus this paper will argue that only successive quantum-coherent neuronal superpositions at naively preposterously short time-scales can explain phenomenal binding. Without phenomenal binding, no functionally adaptive classical world-simulations could exist in the first instance.
The "binding problem"(10), also called the "combination problem", refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors (etc) can be "bound" into unitary experiential objects ("local" binding) apprehended by a unitary experiential self ("global" binding). Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound. "Mereology" is the theory of the relations of part to whole, and of part to part within a whole. Scientifically literate humans find it natural and convenient to think of particles, macromolecules or neurons as having their own individual wavefunctions by which they can be formally represented. However, the manifest non-classicality of phenomenal binding means that in some contexts we must consider describing the entire mind-brain via a single wavefunction. Organic minds are not simply the "mereological sum" of discrete classical parts. Organic brains are not simply the "mereological sum" of discrete classical neurons. "Quantum field theory" is the formal, mathematico-physical description of the natural world. The world is made up of the states of quantum fields, conventionally non-experiential in character, that take on discrete values. Physicists use mathematical entities known as "wavefunctions" to represent quantum states. Wavefunctions may be conceived as representing all the possible configurations of a superposed quantum system. Wavefunction(al)s are complex-valued functionals on the space of field configurations. Wavefunctions in quantum mechanics are sinusoidal functions with an amplitude (a "measure") and also a phase. The Schrödinger equation, iħ ∂ψ/∂t = Ĥψ, describes the time-evolution of a wavefunction. "Coherence" means that the phases of the wavefunction are kept constant between the coherent particles, macromolecules or (hypothetically) neurons, while "decoherence" is the effective loss of ordering of the phase angles between the components of a system in a quantum superposition. Such thermally-induced "dephasing" rapidly leads to the emergence – on a perceptual naive realist story – of classical, i.e. probabilistically additive, behaviour in the central nervous system ("CNS"), and also the illusory appearance of separate, non-interfering organic macromolecules. Hence the discrete, decohered classical neurons of laboratory microscopy and biology textbooks. Unlike classical physics, quantum mechanics deals with superpositions of probability amplitudes rather than of probabilities; hence the interference terms in the probability distribution. Decoherence should be distinguished from dissipation, i.e. the loss of energy from a system – a much slower, classical effect. Phase coherence is a quantum phenomenon with no classical analogue. If quantum theory is universally true, then any physical system such as a molecule, neuron, neuronal network or an entire mind-brain exists partly in all its theoretically allowed states, or configurations of its physical properties, simultaneously in a "quantum superposition"; informally, a "Schrödinger's cat state". Each state is formally represented by a complex vector in Hilbert space.
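As a toy numerical illustration of the coherence/decoherence distinction just defined (a minimal sketch of pure dephasing in a two-state system; the dephasing time T2 is an arbitrary illustrative constant, not a measured neuronal value):

```python
import numpy as np

T2 = 1.0  # illustrative dephasing time (arbitrary units)
rho0 = 0.5 * np.array([[1, 1],
                       [1, 1]], dtype=complex)  # (|0>+|1>)/sqrt(2) as a density matrix

def dephase(rho, t, T2=T2):
    """Pure dephasing: damp the off-diagonal (interference) terms by exp(-t/T2)."""
    out = rho.copy()
    out[0, 1] *= np.exp(-t / T2)
    out[1, 0] *= np.exp(-t / T2)
    return out

for t in (0.0, 1.0, 5.0):
    print(t, np.round(dephase(rho0, t), 4))
```

As t grows past T2, the off-diagonal interference terms vanish and the density matrix tends to diag(0.5, 0.5): probabilities become additive and the behaviour looks quasi-classical, in the sense described above.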
Whatever overall state the nervous system is in can be represented as being a superposition of varying amounts of these particular states ("eigenstates"), where the amount that each eigenstate contributes to the overall sum is termed a component. The "Schrödinger equation" is a partial differential equation that describes how the state of a physical system changes with time. The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. The absolute value of the probability amplitude encodes information about probability densities, so to speak, whereas its phase encodes information about the interference between quantum states. On measurement by an experimenter, the value of the physical quantity in a quantum superposition will naively seem to "collapse" in an irreducibly stochastic manner, with a probability equal to the square of the coefficient of the superposition in the linear combination. If the superposition principle really breaks down in the mind-brain, as traditional Copenhagen positivists still believe, then the central conjecture of this paper is false. "Mereological nihilism", also known as "compositional nihilism", is the philosophical position that objects with proper parts do not exist, whether extended in space or in time. Only basic building blocks (particles, fields, superstrings, branes, information, micro-experiences, quantum superpositions, entangled states, or whatever) without parts exist. Such ontological reductionism is untenable if the mind-brain supports macroscopic quantum coherence in the guise of bound phenomenal states, because coherent neuronal superpositions describe individual physical states. Coherent superpositions of neuronal feature-detectors cannot be interpreted as classical ensembles of states. Radical ontological reductionism is even more problematic if post-Everett(11) quantum mechanics is correct: reality is exhaustively described by the time-evolution of one gigantic universal wavefunction. If such "wavefunction monism" is true, then talk of how neuronal superpositions are rapidly "destroyed" is just a linguistic convenience, because a looser, heavily-disguised coherence persists within a higher-level Schrödinger equation (or its relativistic generalisation) that subsumes the previously tighter entanglement within a hierarchy of wavefunctions, all ultimately subsumed within the universal wavefunction. "Direct realism", also known as "naive realism", about perception is the pre-scientific view that the mind-brain is directly acquainted with the external world. In contrast, the "world-simulation model"(12) assumed here treats the mind-brain as running a data-driven simulation of gross fitness-relevant patterns in the mind-independent environment. As an inferential realist, the world-simulationist is not committed per se to any kind of idealist ontology, physicalistic or otherwise. However, s/he will understand phenomenal consciousness as broader in scope compared to the traditional perceptual direct realist. The world-simulationist will also be less confident than the direct realist that we have any kind of pre-theoretic conceptual handle on the nature of the "physical" beyond the formalism of theoretical physics – and our own phenomenally-bound physical consciousness. "Classical worlds" are what perceptual direct realists call the world. Quantum theory suggests that the multiverse exists in an inconceivably vast cosmological superposition.
Yet within our individual perceptual world-simulations, familiar macroscopic objects 1) occupy definite positions (the "preferred basis" problem); 2) don't readily display quantum interference effects; and 3) yield well-defined outcomes when experimentally probed. Cats are either dead or alive, not dead-and-alive. Or as one scientific populariser puts it, "Where Does All the Weirdness Go?" This paper argues that the answer lies under our virtual noses – though independent physical proof will depend on next-generation matter-wave interferometry. Phenomenally-bound classical world-simulations are the mind-dependent signature of the quantum "weirdness". Without the superposition principle, no phenomenally-bound classical world-simulations could exist – and no minds. In short, we shouldn't imagine superpositions of live-and-dead cats, but instead think of superpositions of colour-, shape-, edge- and motion-processing neurons. Thanks to natural selection, the content of our waking world-simulations typically appears classical; but the vehicle of the simulation that our minds run is inescapably quantum. If the world were classical it wouldn't look like anything to anyone. A "zombie", sometimes called a "philosophical zombie" or "p-zombie" to avoid confusion with its lumbering Hollywood cousins, is a hypothetical organism that is materially and behaviourally identical to humans and other organic sentients but which isn't conscious. Philosophers explore the epistemological question of how each of us can know that s/he isn't surrounded by p-zombies. Yet we face a mystery deeper than the ancient sceptical Problem of Other Minds. If our ordinary understanding of the fundamental nature of matter and energy as described by physics is correct, and if our neurons are effectively decohered classical objects as suggested by standard neuroscience, then we all ought to be zombies. Following David Chalmers, this is called the Hard Problem of consciousness. Non-Materialist Physicalism: An Experimentally Testable Conjecture, by David Pearce
23: Electron Spin, Indistinguishability and Slater Determinants Recap of Lecture 22 Lecture 22 reviewed the basic steps of perturbation theory, including how it applies to the energy and wavefunctions. An example of this theory applied to a perturbation of a harmonic oscillator was given. A reminder of the orbital approximation was discussed (where an N-electron wavefunction can be described as N 1-electron orbitals that resemble the hydrogen atom wavefunctions). A consequence of the orbital approximation is the ability to construct electron configurations that are filled by the aufbau principle. However, the aufbau principle is only a guideline and not a hard-and-fast rule. Now Back to Electrons (The Orbital Approximation again) The generic multi-electron atom includes a nuclear-attraction term for each electron, with a general nuclear charge \(Z\); e.g. \[V(r_1) = -\dfrac {Ze^2}{4 \pi \epsilon _0 r_1}\] So the Hamiltonian for the multi-electron system becomes \[\hat {H} = -\dfrac {\hbar ^2}{2m_e} \sum _i^N \nabla ^2_i + \sum _i^N V (r_i) + \sum _{i > j}^N V (r_{ij}) \] We predict that exact solutions to the multi-electron Schrödinger equation would consist of a family of multi-electron wavefunctions, each with an associated energy eigenvalue, i.e., \[|\psi_a (r_1, r_2, \cdots r_i) \rangle\] with energy \(E_a\) and \[|\psi_b (r_1, r_2, \cdots r_i) \rangle\] with energy \(E_b\) and \[|\psi_c (r_1, r_2, \cdots r_i) \rangle\] with energy \(E_c\) etc. These wavefunctions and energies would describe the ground and excited states of the multi-electron atom, just as the hydrogen wavefunctions and their associated energies describe the ground and excited states of the hydrogen atom. We would predict quantum numbers to be involved, as well. For hydrogen, the wavefunctions are just the orbitals. For multi-electron atoms, the corresponding wavefunctions are combinations of orbitals that look and feel like hydrogenic orbitals, but are not. The fact that electrons interact through their Coulomb repulsion means that an exact wavefunction for a multi-electron system would be a single function that depends simultaneously upon the coordinates of all the electrons, i.e., a multi-electron wavefunction. Unfortunately, the Coulomb repulsion terms make it impossible to find an exact solution to the Schrödinger equation for many-electron atoms. The most basic approximations to the exact solutions involve writing a multi-electron wavefunction as a simple product of single-electron orbitals, and obtaining the energy of the atom in the state described by that wavefunction as the sum of the energies of the one-electron components. By writing the multi-electron wavefunction as a product of single-electron functions, we conceptually transform a multi-electron atom into a collection of individual electrons located in individual functions whose spatial characteristics and energies can be separately identified. For atoms, these single-electron functions are called atomic orbitals. \[|\psi (r_1, r_2, \cdots , r_i) \rangle \approx |\phi _1 (r_1) \rangle| \phi _2 (r_2)\rangle \cdots | \phi _i(r_i)\rangle\] Knowing the orbitals of a particular species provides information about the sizes, shapes, directions, symmetries, and energies of those regions of space that are available to the electrons (i.e., the complete set of orbitals that are available). This knowledge does not determine into which orbitals the electrons are placed.
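As a minimal numerical sketch of the product approximation just described (assuming unscreened hydrogenic 1s orbitals in atomic units, so this deliberately ignores the very electron-electron repulsion under discussion):

```python
import numpy as np

Z = 2.0  # nuclear charge for a helium-like atom (no screening correction here)

def phi_1s(r, Z=Z):
    """Hydrogenic 1s orbital in atomic units: sqrt(Z^3/pi) * exp(-Z*r)."""
    return np.sqrt(Z**3 / np.pi) * np.exp(-Z * r)

def psi_product(r1, r2):
    """Orbital-approximation wavefunction: psi(r1, r2) = phi_1s(r1) * phi_1s(r2)."""
    return phi_1s(r1) * phi_1s(r2)

print(psi_product(0.5, 1.0))  # amplitude with electron 1 at r = 0.5 a0 and electron 2 at r = 1.0 a0
```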
It is by describing the electronic configurations (i.e., orbital occupancies such as \(1s^22s^22p^2\) or \(1s^22s^22p^13s^1\)) appropriate to the energy range under study that one focuses on how the electrons occupy the orbitals (scaled by the effective nuclear charge). The approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes orbitals outside the range of the diagram, starting with 8s.) Image used with permission (CC BY-SA; Sharayanan) Using the information from the figure above, it is possible to arrange the orbitals in an order that approximately reflects increasing energy, which is useful for figuring out the orbital occupancies using a few simple rules. The Aufbau Principle is not entirely correct - it is only a guideline! So unfortunately, it seems that even though most of the effects which combine to produce the observed electronic configurations are known, there is no qualitative way to predict where the configurations are going to deviate from the aufbau principle or the energy levels of the orbitals. The aufbau principle is most decidedly wrong for practically every atom with respect to the placement of the orbital energy levels, but incredibly it happens to predict the configuration of the valence shell for most atoms. Electronic Configurations Specification of a particular occupancy of the set of orbitals available to the system gives an electronic configuration. For example: • \(1s^22s^22p^2\) is an electronic configuration for the carbon atom (and the \(N^{+1}\) and the \(O^{+2}\) ions) This configuration represents situations in which the electrons occupy low-energy orbitals of the system and, as such, are likely to contribute strongly to the true ground and low-lying excited states and to the low-energy states of molecules formed from these atoms or ions. Specification of an electronic configuration does not, however, specify a particular electronic state of the system (i.e., exactly how the orbitals are occupied). For the electronic configuration of carbon, \(1s^22s^22p^2\), there are many ways in which the 2p orbitals can be occupied by the two electrons. As a result, there are a total of 15 "microstates" which cluster into energetically distinct levels, lying within this single configuration. To address these levels, we need to discuss electron spin and the fourth quantum number! Electron Spin In the early 1920s, Otto Stern and Walther Gerlach designed an experiment which unintentionally led to the discovery that electrons possess their own intrinsic, quantized angular momentum - spin - in addition to the orbital angular momentum of their motion within the atom. In the experiment, a beam of silver atoms passed through an inhomogeneous magnetic field and split in two. Stern-Gerlach experiment: silver atoms travel through an inhomogeneous magnetic field and are deflected up or down depending on their spin. We need a charged particle with angular momentum to produce a magnetic moment, just like that produced by the orbital motion of the electron. We can postulate that our observation results from a property of the electron that was not considered in the last section - electron spin. Electron Spin. In a magnetic field, an electron has two possible orientations with different energies, one with spin up, aligned with the magnetic field, and one with spin down, aligned against it. All other orientations are forbidden.
(CC-BY-NC-SA; Anonymous by request) \(S\) is Analogous to \(L\) The important feature of the spinning electron is the spin angular momentum vector, which we label \(S\) by analogy with the orbital angular momentum \(L\). Remember: orbital angular momentum states for an electron in the \(\ell=2\) state. There is the \(2\ell+1\)-fold degeneracy associated with the projection of \(\vec{L}\) onto the z-axis. The cones are a result of the Heisenberg uncertainty associated with angular momentum. We found for orbital angular momentum \[ \hat {L}^2 | Y^{m_l} _l \rangle= l(l + 1) \hbar^2 |Y^{m_l}_l\rangle \] so by analogy for the spin states, we must have spin angular momentum \[ \hat {S}^2| \sigma ^{m_s} _s \rangle= s( s + 1) \hbar ^2 | \sigma ^{m_s}_s \rangle\] where \(\sigma\) is a spin wavefunction with quantum numbers \(s\) and \(m_s\) that obey the same rules as the quantum numbers \(l\) and \(m_l\) associated with the spherical harmonic wavefunction \(Y^{m_l} _l\). We also found a quantization of the projection of \(\vec{L}\) on the z-axis \[ \hat {L}_z | Y^{m_l}_l \rangle= m_l \hbar | Y^{m_l}_l \rangle\] so by analogy, we must have a similar quantization of \(\vec{S}\) \[ \hat {S}_z | \sigma ^{m_s}_s \rangle = m_s \hbar |\sigma ^{m_s}_s \rangle\] Since \(m_l\) ranges in integer steps from \(-l\) to \(+l\), also by analogy \(m_s\) ranges in integer steps from \(-s\) to \(+s\). Heuristic depiction of spin angular momentum cones for a spin-1/2 particle. Image used with permission (Public Domain; Maschen). Consequently, the two values of \(m_s\) must be \(+s\) and \(-s\), and the difference in \(m_s\) for the two states, labeled \(f\) and \(i\) below, must be the smallest integer step, i.e. 1. The result of this logic is that \[\begin{align*} m_{s,f} - m_{s,i} &= 1 \\[4pt] (+s) - (-s) &= 1 \\[4pt] 2s &= 1 \\[4pt] \color{red} s &= \dfrac {1}{2} \end{align*}\] so the magnitude of the spin quantum number is 1/2 and the values for \(m_s\) are +1/2 and -1/2. Hence, the magnitude of the z-component of spin angular momentum, \(S_z\), is given by \[S_z = m_s \hbar \label {8.4.6}\] so the value of \(S_z\) is +ħ/2 for spin state \(\alpha\) and -ħ/2 for spin state \(\beta\). Electrons are not really Spinning An electron's hypothetical surface would have to be moving faster than the speed of light for it to rotate quickly enough to produce the observed angular momentum. Hence, an electron is not simply a spinning ball or ring, and electron spin appears to be an intrinsic angular momentum of the particle rather than a consequence of its rotation. Even though we do not know their functional forms, the spin wavefunctions are taken to be normalized and orthogonal to each other. \[ \langle \alpha | \alpha \rangle = \langle \beta | \beta \rangle =1 \] \[ \langle \alpha | \beta \rangle = \langle \beta | \alpha \rangle = 0 \] where the integral is over the spin variable \(\tau _s\). Complete the Quantum Number Tetrad: spin-orbitals Hydrogenic spin-orbitals used as components of multi-electron systems are identified in the same way as they are for the hydrogen atom. Each spin-orbital consists of a spatial wavefunction, specified by the quantum numbers (\(n\), \(\ell\), and \(m_\ell\)) and denoted 1s, 2s, 2p, 3s, 3p, 3d, etc, multiplied by a spin function, specified by the quantum number \(m_s\) and denoted \(\alpha\) or \(\beta\). The subscript on the argument of the spatial function reveals which electron is being described (\(r_1\) is a vector that refers to the coordinates of electron 1, for example.)
No argument is given for the spin function. An example of a spin-orbital for electron 2 in a \(3p_z\) orbital: \[ | \phi _{3p_z} \alpha (r_2) \rangle = \phi _{3,1,0}(r_2) \alpha \] The basic mathematical functions, and thus the general shapes and angular momenta, for hydrogenic orbitals are the same as those for hydrogen orbitals. Ordering of energy levels for \(\ce{Ar}\). Energy level differences are not to scale. The energy of each electron now depends not only on its principal quantum number, \(n\), but also on its angular momentum quantum number, \(\ell\). Pauli Exclusion Principle Three rules apply for predicting the ground state configuration of an atom: 1. The Pauli Exclusion Principle. There is one more quantum number, called the spin quantum number \(m_s\), that can take the values +½ or -½ (for electrons). No two electrons can have the same four quantum numbers, so an orbital can "hold" only two electrons (i.e. only two electrons can be described by the same spatial wavefunction). This rule cannot be broken. 2. The Aufbau principle. Electron configurations are built up by filling the lowest energy orbitals first (provided the energy differences are significant). Remember this rule gives only the ground state. Other excited configurations that do not violate the Pauli principle are possible. 3. Hund's (first) rule. Where orbitals have the same energy (are degenerate) or nearly so, they will be filled one electron in each, with parallel spins, before pairing begins. Other configurations are excited states, i.e., not forbidden. The mathematical analogue of these steps is the construction of the approximate multi-electron wavefunction as a product of the single-electron "atomic" spin-orbitals. For example, the configuration of the boron atom, shown schematically in the energy level diagram, is written in shorthand form as \(1s^22s^22p^1\). The degeneracy of the 2s and 2p orbitals is broken by the electron-electron interactions in multi-electron systems. Orbital energy level diagram that represents the electron configuration of the boron atom. Orbital energy differences are approximately scaled. Rather than showing the individual spin-orbitals in the diagram or in the shorthand notation, we commonly say that up to two electrons can be described by each spatial orbital, one with spin function \(\alpha\) (electron denoted by an arrow pointing up) and the other with spin function \(\beta\) (arrow pointing down). This restriction is a manifestation of the Pauli Exclusion Principle mentioned above. We will use the following statement as a guide to keep our explorations focused on the development of a clear picture of the multi-electron atom: "When a multi-electron wavefunction is built as a product of single-electron wavefunctions, the corresponding concept is that exactly one electron's worth of charge density is described by each atomic spin-orbital." A subtle but important part of the conceptual picture is that the electrons in a multi-electron system are not distinguishable from one another by any experimental means. Since the electrons are indistinguishable, the probability density we calculate by squaring the modulus of our multi-electron wavefunction also cannot change when the electrons are interchanged (permuted) between different orbitals. In general, if we interchange two identical particles, the world does not change.
As we will see below, this requirement leads to the idea that the world can be divided into two types of particles based on their behavior with respect to permutation or interchange. We could symbolically write an approximate two-particle wavefunction as \(|\psi (r_1, r_2) \rangle\). This could be, for example, a two-electron wavefunction for helium. To exchange the two particles, we simply substitute the coordinates of particle 1 (\(r_1\)) for the coordinates of particle 2 (\(r_2\)) and vice versa, to get the new wavefunction \(|\psi (r_2, r_1) \rangle\). This new wavefunction must have the property that \[|\psi (r_2, r_1)|^2 = \psi (r_2, r_1)^*\psi (r_2, r_1) = \psi (r_1, r_2)^* \psi (r_1, r_2)\] This will be true only if the wavefunctions before and after permutation are related by a factor of \(e^{i\phi}\), \[\psi (r_2, r_1) = e^{i\phi} \psi (r_1, r_2) \] so that when squared \[ \left ( e^{-i\phi} \psi (r_1, r_2)^* \right ) \left ( e^{i\phi} \psi (r_1, r_2) \right ) = \psi (r_1 , r_2 )^* \psi (r_1 , r_2) \label {9-40}\] If we exchange or permute two identical particles twice, we are (by definition) back to the original situation. If each permutation changes the wavefunction by \(e^{i \phi}\), the double permutation must change the wavefunction by \(e^{i\phi} e^{i\phi}\). Since we then are back to the original state, the effect of the double permutation must equal 1; i.e., \[e^{i\phi} e^{i\phi} = e^{i 2\phi} = 1 \] which is true only if \(\phi = 0\) or an integer multiple of \(\pi\). The requirement that a double permutation reproduce the original situation limits the acceptable values for \(e^{i\phi}\) to either +1 (when \(\phi = 0\)) or -1 (when \(\phi = \pi\)). Both possibilities are found in nature. Bosons (symmetric) The behavior of some particles requires that the wavefunction be symmetric with respect to permutation \((e^{i\phi} =+1)\). A wavefunction that is symmetric with respect to electron interchange is one whose output does NOT change sign when the electron coordinates are interchanged, as shown below. \[ \psi (r_2 , r_1) = e^{i\phi} \psi (r_1, r_2) = + \psi (r_1, r_2) \] These particles are called bosons and have integer spin; examples include deuterium nuclei, photons, and gluons. Fermions (antisymmetric) The behavior of other particles requires that the wavefunction be antisymmetric with respect to permutation \((e^{i\phi} = -1)\). A wavefunction that is antisymmetric with respect to electron interchange is one whose output changes sign when the electron coordinates are interchanged, as shown below. \[ \psi (r_2 , r_1) = e^{i\phi} \psi (r_1, r_2) = - \psi (r_1, r_2) \] These particles, called fermions, have half-integer spin; examples include electrons, protons, and neutrinos. Example \(\PageIndex{1}\): Helium Blindly following the first statement of the Pauli Exclusion Principle, that each electron in a multi-electron atom must be described by a different spin-orbital, we try constructing a simple product wavefunction for helium using two different spin-orbitals. Both have the 1s spatial component, but one has spin function \(\alpha\) and the other has spin function \(\beta\), so the product wavefunction matches the form of the ground state electron configuration for He, \(1s^2\).
\[ | \psi (\mathbf{r}_1, \mathbf{r}_2 ) \rangle = |\phi _{1s\alpha} (\mathbf{r}_1) \phi _{1s\beta} ( \mathbf{r}_2) \rangle\label{8.6.1}\] After permutation of the electrons, this becomes \[| \psi ( \mathbf{r}_2,\mathbf{r}_1 ) \rangle =| \phi _{1s\alpha} ( \mathbf{r}_2) \phi _{1s\beta} (\mathbf{r}_1) \rangle \label{8.6.2}\] which is different from the starting function, since \(\phi _{1s\alpha}\) and \(\phi _{1s\beta}\) are different spin-orbital functions. However, an antisymmetric function must produce the same function multiplied by (-1) after permutation, and that is not the case here. We must try something else. To avoid getting a totally different function when we permute the electrons, we can make a linear combination of functions. A very simple way of taking a linear combination involves making a new function by simply adding or subtracting functions. The function that is created by subtracting the right-hand side of Equation \(\ref{8.6.2}\) from the right-hand side of Equation \(\ref{8.6.1}\) has the desired antisymmetric behavior. The constant on the right-hand side accounts for the fact that the total wavefunction must be normalized. \[| \psi (\mathbf{r}_1, \mathbf{r}_2) \rangle = \dfrac {1}{\sqrt {2}} [ \phi _{1s\alpha}(\mathbf{r}_1) \phi _{1s\beta}( \mathbf{r}_2) - \phi _{1s\alpha}( \mathbf{r}_2) \phi _{1s\beta}(\mathbf{r}_1)]\] Does this satisfy the antisymmetric requirement for electron exchange: \[| \psi (\mathbf{r}_1, \mathbf{r}_2) \rangle \overset{?}{=} - | \psi (\mathbf{r}_2, \mathbf{r}_1) \rangle \] \[ \dfrac {1}{\sqrt {2}} [ \phi _{1s\alpha}(\mathbf{r}_1) \phi _{1s\beta}( \mathbf{r}_2) - \phi _{1s\alpha}( \mathbf{r}_2) \phi _{1s\beta}(\mathbf{r}_1)] \overset{?}{=} - \dfrac {1}{\sqrt {2}} [ \phi _{1s\alpha}(\mathbf{r}_2) \phi _{1s\beta}( \mathbf{r}_1) - \phi _{1s\alpha}( \mathbf{r}_1) \phi _{1s\beta}(\mathbf{r}_2)] \] Distributing the minus sign on the right-hand side shows that the two sides are identical, so the combination is indeed antisymmetric with respect to electron exchange. Slater Determinants as a way to "Hardwire" Indistinguishability into the Wavefunction A linear combination that describes an appropriately antisymmetrized multi-electron wavefunction for any desired orbital configuration is easy to construct for a two-electron system. However, interesting chemical systems usually contain more than two electrons. For these multi-electron systems, a relatively simple scheme for constructing an antisymmetric wavefunction from a product of one-electron functions is to write the wavefunction in the form of a determinant. John Slater introduced this idea, so the determinant is called a Slater determinant. The Slater determinant for the two-electron wavefunction of helium is \[ | \psi (\mathbf{r}_1, \mathbf{r}_2) \rangle = \dfrac {1}{\sqrt {2}} \begin {vmatrix} \phi _{1s} (1) \alpha (1) & \phi _{1s} (1) \beta (1) \\ \phi _{1s} (2) \alpha (2) & \phi _{1s} (2) \beta (2) \end {vmatrix} \label{slater} \] We can introduce a shorthand notation for the arbitrary spin-orbital \[ \phi_{i\alpha}(\mathbf{r}) = \phi_i \alpha\] \[ \phi_{i\beta}(\mathbf{r}) = \phi_i \beta\] as determined by the \(m_s\) quantum number. A shorthand notation for the determinant in Equation \ref{slater} is then \[ | \psi (\mathbf{r}_1 , \mathbf{r}_2) \rangle = 2^{-\frac {1}{2}} Det | \phi_{1s\alpha} (\mathbf{r}_1) \phi_{1s\beta} ( \mathbf{r}_2) | \] The determinant is written so the electron coordinate changes in going from one row to the next, and the spin-orbital changes in going from one column to the next. The advantage of having this recipe is clear if you try to construct an antisymmetric wavefunction that describes the orbital configuration for uranium!
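The antisymmetry of the two-electron combination above can also be checked mechanically; here is a small sympy sketch, treating the spin-orbitals as abstract functions of the electron labels:

```python
import sympy as sp

r1, r2 = sp.symbols('r1 r2')
a = sp.Function('phi_1s_alpha')  # stands for the spin-orbital phi_1s * alpha
b = sp.Function('phi_1s_beta')   # stands for the spin-orbital phi_1s * beta

psi = (a(r1) * b(r2) - a(r2) * b(r1)) / sp.sqrt(2)
swapped = psi.subs({r1: r2, r2: r1}, simultaneous=True)  # exchange the electrons

print(sp.simplify(swapped + psi))  # prints 0, i.e. psi(r2, r1) = -psi(r1, r2)
```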
Note that the normalization constant is \((N!)^{-\frac {1}{2}}\) for N electrons. The generalized Slater determinant for a multi-electron atom with N electrons is then \[ | \psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N) \rangle =\dfrac{1}{\sqrt{N!}} \left| \begin{matrix} \phi_1(\mathbf{r}_1) & \phi_2(\mathbf{r}_1) & \cdots & \phi_N(\mathbf{r}_1) \\ \phi_1(\mathbf{r}_2) & \phi_2(\mathbf{r}_2) & \cdots & \phi_N(\mathbf{r}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(\mathbf{r}_N) & \phi_2(\mathbf{r}_N) & \cdots & \phi_N(\mathbf{r}_N) \end{matrix} \right| \]
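A numerical sketch of this general recipe (with toy one-dimensional functions standing in for real spin-orbitals, purely to exhibit the sign structure):

```python
import numpy as np
from math import factorial

orbitals = [np.sin, np.cos, np.tanh]  # toy stand-ins for three spin-orbitals

def slater(coords, orbitals=orbitals):
    """(N!)^(-1/2) times the determinant of the matrix with entries phi_j(r_i)."""
    N = len(coords)
    M = np.array([[phi(r) for phi in orbitals] for r in coords])
    return np.linalg.det(M) / np.sqrt(factorial(N))

r = [0.1, 0.7, 1.3]
print(slater(r))                   # psi(r1, r2, r3)
print(slater([r[1], r[0], r[2]]))  # same magnitude, opposite sign
```

Swapping any two electron coordinates swaps two rows of the matrix, which flips the sign of the determinant, so the antisymmetry is built in by construction.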
Welcome to LIME! Package: https://github.com/binggu56/lime LIME is a python package created to provide researchers with advanced computational tools that I have implemented over the years. The primary focus is on light-matter interaction, including computational methods in quantum dynamics, open quantum systems, periodically driven quantum systems, non-adiabatic dynamics, and trajectory-based approximate methods. 1. download the package 2. enter the main directory and install the package with pip install . Current modules 1. Quantum dynamics a. Adiabatic Nuclear Quantum Dynamics Exact methods: • Split-operator method • Discrete variable representation Semiclassical methods: • Quantum trajectory method b. Non-adiabatic molecular dynamics Exact nonadiabatic quantum dynamics with multiple electronic surfaces. • Split-operator method in the diabatic representation Mixed quantum-classical methods • Surface-hopping method 2. Quantum chemistry Quantum chemistry solves for the electronic structure at a given nuclear geometry. This has been an active field of research for many decades, and sophisticated programs like Gaussian, Qchem, Psi4, Pyscf, Molpro have been widely used in the scientific community. We will take advantage of these remarkable developments. On one hand, we will apply existing methods to interesting molecules and materials. On the other hand, we will develop new techniques based on these programs, since many functions and modules, such as Coulomb integrals, are the same irrespective of what your method is. Currently, we primarily use Pyscf and Molpro for quantum chemistry computations. 3. Polaritonic dynamics Quantum dynamics of molecules coupled to the electromagnetic photon modes confined inside an optical cavity. 4. Stochastic Schrödinger equation Generate white and colored noise to simulate stochastic dynamics, e.g., stochastic Schrödinger equations. 5. Band structure of solids • Compute band structure from tight-binding Hamiltonians. 6. Open quantum systems Quantum systems are rarely isolated from their surrounding environment. For an isolated quantum system, dynamics can be described by the time-dependent Schrödinger equation (TDSE). One straightforward approach to simulating open-quantum-system dynamics is to include the environment degrees of freedom directly in the TDSE. While conceptually simple, this is not always the optimal choice when the environment is complex. Alternatively, we can solve a quantum master equation describing the equation of motion for the reduced density matrix. The following methods are currently implemented: • Redfield equations • Lindblad quantum master equations • Hierarchical equations of motion • second-order time-convolutionless equation 7. Periodically driven quantum systems • Quasienergy levels of a periodically driven quantum system using the Floquet theorem 8. Nonlinear molecular spectroscopy Time-independent approach to optical signals via sum-over-states expressions: • Absorption • Transient absorption • Photon echo Time-dependent approach to coherent signals via explicitly solving the dynamics of matter interacting with the laser pulses employed in the spectroscopic experiment, for example, pump and probe pulses in pump-probe experiments. • Transient absorption 9. Quantum transport This module computes the current through nano-structures under a bias. 1. Landauer transport
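After installing, a minimal smoke test (the only assumption is that the package is importable under the top-level name lime, matching the repository above; no internal module names are assumed):

```python
# run after `pip install .` in the repository root
import lime
print(lime.__file__)  # shows where the installed package lives
```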
Experimental tests of general relativity starting from the Schrödinger equation: the Pound-Rebka experiment Jayson Vavrek Doctoral Candidate Laboratory for Nuclear Security and Policy The Pound-Rebka experiment (1959) was one of the first precision tests of Einstein's general relativity. General relativity predicts that a photon's energy will be red- or blue-shifted by a gravitational field, but the magnitude of the effect is exceedingly small over typical laboratory distances and under terrestrial gravity. However, these small energy shifts can be detected in high-precision spectroscopy experiments using the Mössbauer effect, as was done by Pound and Rebka at Harvard, providing a link between physics on cosmological and quantum scales. Starting from the Schrödinger equation, I will show how the finite potential well gives rise to the Breit-Wigner resonance cross section under a first-order Taylor expansion. I will then discuss the Breit-Wigner behaviour of nuclear resonance fluorescence and its special case, the Mössbauer effect. Next, I will show how the Mössbauer effect can be used for high-precision (ΔE/E ~ 10⁻¹⁵) measurements of photon energies. Finally, I will derive the competing photon redshift predictions from both special and general relativity and cover how the observed net redshift was confirmed by Pound and Rebka to closely match the theoretical result.
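(For scale, a back-of-the-envelope version of the effect discussed above: to first order, the fractional energy shift for a photon falling a height h in a uniform field g is gh/c². Assuming the commonly quoted tower height of about 22.5 m:

```python
g = 9.81      # m/s^2, local gravitational acceleration
h = 22.5      # m, approximate height of the Jefferson laboratory tower
c = 2.998e8   # m/s, speed of light
print(f"Delta E / E ~ {g * h / c**2:.2e}")  # ~ 2.5e-15
```

which is why Mössbauer precision at the ΔE/E ~ 10⁻¹⁵ level is just sufficient to resolve the shift.)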
Many Worlds Interpretation 1. Aug 11, 2006 #1 I recently learned about the Many Worlds Interpretation of quantum mechanics from another post on this forum. Unfortunately, the post became more of an argument about whether some experiment had or hadn't proven this interpretation to be true, and there wasn't a whole lot of information on what MWI was. Does anyone have any suggestions on where to find more information? From what I have read on it, I like the idea of a wavefunction which evolves solely according to Schroedinger's equation and does not collapse during observations. However, I was under the impression that the Copenhagen interpretation's collapse of the wavefunction was a necessary outcome of experiments. If an electron is observed to be at a particular location, then one femtosecond later, if observed again, it will not have strayed far. Measurements repeated in very quick succession return nearly the same value for the electron's location. Is that true? If so, then I don't see how the MWI can work. Let's say one observation of the electron finds it in an unlikely location. If the wavefunction did not collapse and still obeys Schroedinger's equation, then a second observation is likely to find it in an entirely different spot. If I'm wrong and the quickly taken electron measurements do not need to be consistent, then why was the wavefunction collapse added to the Copenhagen interpretation? 3. Aug 11, 2006 #2 Staff Emeritus Science Advisor Gold Member In MWI, measurements don't happen in the absolute sense; instead a measurement is simply entangling a "measuring device" with the system being "measured". So, in your thought experiment, you began with two independent systems: (1) Your electron, which is in a superposition of being in a "likely spot" and an "unlikely spot" (2) Your measuring device. Then after the experiment, the systems have been entangled, and are in a superposition of the two states: (A) Your electron is in a likely spot and your measuring device says the electron was in a likely spot. (B) Your electron is in an unlikely spot and your measuring device says the electron was in an unlikely spot. If you had another measuring device and repeated the measurement, then you'd get the superposition of (A) Your electron is in a likely spot and both measuring devices say the electron was in a likely spot. (B) Your electron is in an unlikely spot and both measuring devices say the electron was in an unlikely spot. and so forth. 4. Aug 11, 2006 #3 I like it too. Especially as it is a direct consequence of the interaction (entanglement) between a microscopic object and a macroscopic object. No need for more! No need for science fiction! (See Landau & Lifchits, Quantum Mechanics, Chap 1, §7) Last edited: Aug 12, 2006 5. Aug 11, 2006 #4 I'm of the same mindset as Riposte on MWI v Copenhagen. I can accept a non-deterministic future, but I have difficulty accepting a non-deterministic past. But that's just me. Does anyone know of mathematical explorations of MWI? My understanding is that at the mathematical level, interpretations aren't relevant, so maybe that's a dumb question. What I am looking for is work showing how QM reduces to CM as particle number (or mass, or something else) becomes large. Mainly particle number, as an entanglement of a large number of particles would be a description of macroscopic reality. Shining a light on a baseball to see where it is is a measurement. 6.
Aug 12, 2006 #5 The transition from QM to CM is explained in many books. I can mention "Quantum by E. Elbaz, Springer". It starts with an overview of classical mechanics: Lagrange, Hamilton, least action, Hamilton-Jacobi. If I remember well, "Messiah" is also a good reference on that. You could also read about optics: the transition from physical optics to geometrical optics and the Maupertuis principle. The transition from QM to CM is not really related to the number of particles. The transition occurs when the limit [itex]\hbar \rightarrow 0[/itex] becomes a good approximation for the system considered. For example, this is the case for the electron in a hydrogen atom for high 'n' states, called the Rydberg states. A classical picture of the motion emerges then. You can learn the essentials by yourself. Refresh on the least action principle in CM. Then work out this "exercise": start from the Schrödinger equation [tex]\newcommand{\pd}[3]{ \frac{ \partial^{#3}{#1} }{ \partial {#2}^{#3} } }i \hbar \pd{\Psi}{t}{} =- \frac{\hbar^2}{2 m} \ \pd{\Psi}{x}{2} + V \Psi[/tex] assume [tex]\Psi = e^{\frac{i}{\hbar} S}[/tex], where S is the "action" make this substitution and get a partial differential equation for S observe what happens if [itex]\hbar \rightarrow 0[/itex] in this limit, the equation you get is the Hamilton-Jacobi equation from CM the physical interpretation is very interesting: it explains -in a sense- the origin of the least action principle in CM the term neglected in the classical limit is called the "quantum potential" (see http://en.wikipedia.org/wiki/Bohm_interpretation); it sustains the "random" of QM: it couples the probability to the motion!! this is the starting point for the Bohm point of view on QM I think with all these keywords above, you will easily find anything you need from the web. Also, read Feynman once more! Note also another book: https://www.amazon.com/gp/product/0...f=sr_1_1/103-3869637-5019838?ie=UTF8&s=books" that makes the reverse journey. This book goes from particular to more general, like history. Last edited by a moderator: May 2, 2017 7. Aug 13, 2006 #6 Staff Emeritus Gold Member Dearly Missed Last edited by a moderator: Apr 22, 2017 8. Aug 13, 2006 #7 Thanks for that I checked it out on Amazon, apparently it's good as a reference but terrible as a first source for the material :frown: And I was going to buy that too? Any thoughts? 9. Aug 16, 2006 #8 Science Advisor Homework Helper Gold Member So decoherence is totally irrelevant in MWI? I thought that one might have to consider instead a tensor product [tex]( |el-A> + |el-B>) \otimes (|md-A> + |md-B>) = |el-A> |md-A> +|el-A> |md-B>+|el-B> |md-A>+|el-B> |md-B> [/tex] where |el-A> = |electron in spot A>, |md-A> = |measuring device indicating electron in spot A>, etc. And I thought that decoherence was explaining the extremely rapid damping out of the "cross terms" |el-A> |md-B> and |el-B> |md-A>. I am probably mixing apples and oranges. Sorry. So my questions are: am I understanding decoherence right? and does decoherence play any role in MWI? 10. Aug 16, 2006 #9 My feeling is that in the MWI society, to be polite, you should not pronounce the word "decoherence". But you may use its (de facto) synonym: entanglement. I concede that MWI really is a comfortable luxury theory. Life indeed becomes very simple: you can calculate and pretend not to "shut up"! You have as many universes as you may want, to talk about and explain anything. But you don't predict anything more than simple "algebraic" QM.
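[Editor's aside: the substitution exercise in post #5 can be done mechanically with sympy. This is only a sketch with arbitrary symbol names; the ħ→0 limit printed at the end is the Hamilton-Jacobi equation, and the dropped imaginary term is the one tied to the quantum potential.]
[code]
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
S = sp.Function('S')(x, t)   # the "action"
V = sp.Function('V')(x)

psi = sp.exp(sp.I * S / hbar)
# residual of the Schrodinger equation: i*hbar*psi_t + hbar**2/(2m)*psi_xx - V*psi
res = sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2) - V * psi
eq = sp.powsimp(sp.expand(res * sp.exp(-sp.I * S / hbar)))  # divide out the exponential

print(sp.expand(eq))                # -S_t - S_x**2/(2m) - V + i*hbar*S_xx/(2m)
print(sp.expand(eq).subs(hbar, 0))  # Hamilton-Jacobi: -S_t - S_x**2/(2m) - V = 0
[/code]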
Is this wealth of explanation affordable for common sense? MWI is indeed a "simple" theory. Maybe it is also a counter-example to the principle of simplicity. It is simple to imagine a wealth of platonic universes, increasing beyond limits the number of degrees of freedom in the explanations without bringing any new result. It would be better to reduce the degrees of freedom! Simplicity doesn't necessarily make sense! I would like to know why the world behaves as if MWI was true! Last edited: Aug 17, 2006 11. Aug 16, 2006 #10 Staff Emeritus Science Advisor Gold Member I thought it was still important? Anyways, my impression is that it works like this. First, consider an isolated system. Say, a qubit whose basis states are |0> and |1>. You have an interaction T that performs the following transformation on your qubit (ignoring constant factors): T|0> = (|0> + |1>) T|1> = (|0> - |1>) Then, if you prepared the superposition |0> + |1>, then applied your interaction, you get: T(|0> + |1>) = |0> the amplitudes for |1> have interfered and cancelled out. Life is great! But suppose the system is not isolated. When we prepare our superposition, we really have something like: (|0> + |1>)|e> where |e> denotes some environment state. Then, before we're ready, the environment interacts with our qubit (decoherence), giving us: |0>|e'> + |1>|e''> Then, when we try applying T, we get: |0>(|e'> + |e''>) + |1>(|e'> - |e''>) Lo and behold, decoherence has spoiled our superposition, and the amplitudes for |1> don't cancel out! :frown: If we leave the environment out of the analysis, and just focus on our qubit, then it can be described as follows: we started with our qubit in a superposition |0> + |1>, and then decoherence spoiled it, turning it into the statistical mixture 50% chance of being in |0>, 50% chance of being in |1>. I think that's a good description of what happens to the density matrices -- but if you're looking at the state vectors that's not right. 12. Aug 17, 2006 #11 Note that pure states can also be described by a density matrix. The discussion -apparently- is "what to do when projecting a density matrix onto a lower-dimensional space". This starts with the density matrix of a pure-state but entangled system. We want to "view" it as a density matrix of the initial (usually small) system. The end result, everybody will agree, is not a pure state or 'pure' density matrix. And the rule comes from the measurement postulate. (Would another rule even be conceivable?) 13. Aug 17, 2006 #12 This is not true. See this for example. This is not true either. :rolleyes: See this. Because quantum theory is, as shown by Deutsch, a local theory when viewed from the Heisenberg picture, and it merely appears to be non-local if viewed from the Schrödinger picture. By the way, for all of you who have questions concerning the MWI, please see The Everett FAQ by Michael Clive Price. PS. Here is the link to the Wikipedia article about the subject. Last edited: Aug 17, 2006 14. Aug 17, 2006 #13 After carefully reading your references as well as watching the Deutsch videos, I am sorry to say that there is not 1 bit of prediction that MWI can add to known experimental facts. (but for this I would have to buy such a huge amount of bits for the additional MWI reality!) I only realised that the MWI is a very convenient way to brush off any discussion, which is similar to the "shut up and calculate".
Aug 17, 2006 #11

Note that pure states can also be described by a density matrix. The discussion, apparently, is "what to do when projecting a density matrix on a lower-dimensional space". This starts with the density matrix of a pure-state but entangled system. We want to "view" it as a density matrix of the initial (usually small) system. The end result, everybody will agree, is not a pure state or 'pure' density matrix. And the rule comes from the measurement postulate. (Would another rule even be conceivable?)

Aug 17, 2006 #12

This is not true. See this for example. This is not true either. See this.

Because quantum theory is, as shown by Deutsch, a local theory when viewed from the Heisenberg picture, and it merely appears to be non-local if viewed from the Schrödinger picture.

By the way, for all of you who have questions concerning the MWI, please see The Everett FAQ by Michael Clive Price.

PS. Here is the link to the Wikipedia article about the subject.

Aug 17, 2006 #13

After reading carefully your references, as well as watching the Deutsch videos, I am sorry to say that there is not one bit of prediction that MWI can add to the known experimental facts. (But for this I would have to buy such a huge amount of bits for the additional MWI reality!) I only realised that the MWI is a very convenient way to brush off any discussion, which is similar to the "shut up and calculate". This is so because it is a way to formulate the essential difficulty of QM that is quite easy to visualize, like a science fiction cartoon. This essential difficulty can be traced back to the "quantum potential" in the Bohm viewpoint.

Now, to balance my opinion, I must say that I would not be reluctant at all to speak the MWI language. I do really think it is convenient. But I cannot see any new physics or any new progress in it. Is that not good news? Finally it leaves the challenge totally open for those who like taking it. Can we seriously think that linearity is exact (http://www.hedweb.com/manworld.htm#exact)? Even the proponents don't believe it anymore. Guess how many papers have been published on the NLSE, most without reference to the MWI.

Aug 17, 2006 #14

Apparently you didn't read this. It is also a really convenient way to explain quantum physics as a local theory, which it actually is, as shown by Deutsch (and Tipler; see the end of the post).

Yes, and this is mostly due to physicists afraid of other physicists marking them as "crackpots". The poll in 1988 indicates that 58% of the 72 leading physicists at that time believed in the many worlds interpretation. With the rise of string theory nowadays, I don't believe that the percentage has decreased.

You are, of course, free to believe in any interpretation; it is a matter of taste. But then again, the interpretation should be local. Here is a paper by Frank Tipler who has come to the same conclusion as Deutsch, that quantum physics must be local: "Thus, experiments confirming 'nonlocality' are actually confirming the MWI."

Aug 17, 2006 #15

For the last decade David Deutsch's amazing work has firmly established the MWI as the only tenable interpretation of QM. The incredible success of his work and the work of those in quantum computer science has now made the MWI nearly universally accepted by the professional physics community: http://www.edge.org/3rd_culture/prize05/prize05_index.html

However, this is a public physics forum, and given the socio-dynamics of forums it is inevitable that naysayers will still dispute the evidence as well as the consensus from the experts who actually understand the field and perform/publish experiments, and of course those of us who agree with their conclusions.

~ David Deutsch
"The MWI is trivially true!" - Steven Hawking

Aug 17, 2006 #16

First off, it makes a silly mistake. It is impossible to compare the results of (1) and (3). Secondly, ignoring that silly mistake, this experiment cannot distinguish between MWI and Copenhagen. If the result of the experiment ostensibly agrees with MWI, the Copenhagenist just says "There's proof that your 'machine intelligence' cannot collapse a wavefunction". If the result of the experiment ostensibly agrees with Copenhagen, then the MWIist just says "Decoherence happened, which spoiled the interference."

Stop making such ridiculous statements. Your reference doesn't even come close to anything resembling support for your claim.

Stop making such ridiculous statements. And, of course, it is inevitable that there will be closed-minded people who cannot see any of the problems with their beliefs and arguments.

Interesting: the only reference on Google of SH saying exactly that is in a post by you in another thread on PF.

Both of you have such extremely biased positions -- I actually like the MW interpretation, but your posts are just so ridiculous that I have to reject them.
Aug 18, 2006 #17

"MWI and calculate!" is the latest school in QM interpretation. Fortunately, QM teaching all around the world does focus on what matters the most: the known physics, the maths behind it, and their subtle and beautiful fit. Interpretations, like dreams, can contribute only as far as they turn out as new operational theories. There is still plenty of room for fruitful interpretations. I dream of three ways to investigate:

- the nature of space-time (again),
- the nature of vacuum fluctuations,
- Information Theory.

I am very interested to know about other tracks to go forward.

Aug 20, 2006 #18

I thought we'd knocked all this stuff on the head back in another thread.

Aug 20, 2006 #19

The reasons for the MWI's popularity among quantum information theorists and philosophers of physics are a few. First, the Many Worlds interpretation is exactly that: an interpretation. There is no significant modification of the mathematical formalism of standard QM, such as Schrödinger's equation. The state vector is interpreted not as a probabilistic superposition of states with a probabilistic collapse of the wave packet. Rather, all the possible consistent states of the measured system and the measuring apparatus are present in a real physical quantum superposition. This superposition of consistent state combinations of different systems is called an entangled state. With this interpretation, MWI actually removes the probabilistic projection postulate for the state vector and thus is distinctly simpler than Copenhagen. All this arises out of a single change in the interpretation of quantum superposition, and without any nonlinear mathematical modifications as in other formulations.

Secondly, measurement processes in MWI incorporate Zurek's quantum decoherence theory, which is also widely considered among physicists to be the resolution of the measurement problem.

Third, MWI is the only alternative to the Copenhagen formulation that is completely consistent with relativistic quantum field theory. Bohmian mechanics, stochastic mechanics, GRW, etc. are all still nonrelativistic in their formulation.

And of course, for physicists like Deutsch, the potential for quantum computation faster than any classical computer.

Aug 21, 2006 #20

Don't forget the relational QM interpretation!
Subquantum Kinetics: Autogenesis in 2 Dimensions

This is a 2-dimensional simulation of the process known as autogenesis in the framework of subquantum kinetics (SQK). The initial conditions at t=0 are two particles situated next to each other. The reaction-diffusion system known as Model G, under specific system parameters, gives rise to natural particle formation in the vicinity of existing matter. In this case the system parameters create a super-critical environment in which new particles are created with ease, such as in certain life stages of a neutron star (the particles being actual neutrons) or in the center of a galaxy, in which the rapid matter formation can give rise to a galactic superwave.

More specifically, what is shown are 3 etheric substances:

• Y or Y-ons (tall yellow peaks and surrounding green rings)
• X or X-ons (purple secondary rings, and inverted peaks below the plane seen near the end of the simulation)
• G or G-ons (blue; they can only occasionally be seen between the X and Y, and along the edge, especially near the end of the simulation)

According to Model G, these are involved in the 5 reactions:

\begin{gather*}A \overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}} G\qquad G \overset{k_2}{\underset{k_{-2}}{\rightleftharpoons}} X\\ B + X \overset{k_3}{\underset{k_{-3}}{\rightleftharpoons}} Y + Z\qquad 2X + Y \overset{k_4}{\underset{k_{-4}}{\rightleftharpoons}} 3X\qquad X \overset{k_5}{\underset{k_{-5}}{\rightleftharpoons}} \Omega\end{gather*}

along with the other etherons A, B, Z, \Omega, which are held constant. Incorporating the process of diffusion (using Fick's second law) yields the reaction-diffusion differential equations for G, X, and Y:

\begin{align*}\frac{\partial G}{\partial t} =& {\cal D}_G\nabla^2 G - (k_{-1} + k_2)G + k_{-2}X + k_1 A\\ \frac{\partial X}{\partial t} =& {\cal D}_X\nabla^2 X + k_2 G - (k_{-2} + k_3 B + k_5)X + \\ &k_{-3}ZY - k_{-4}X^3 + k_4 X^2 Y + k_{-5} \Omega\\\frac{\partial Y}{\partial t} =& {\cal D}_Y\nabla^2 Y + k_3 BX - k_{-3}ZY + k_{-4}X^3 - k_4 X^2Y\end{align*}

These equations determine how the system evolves in time, from the initial two particles at the beginning of the simulation. For more information, see Stationary Dissipative Solitons of Model G.

It is interesting to note the similarity of these equations with the Schrödinger equation of quantum mechanics, i\hbar\,\partial\Psi/\partial t = -(\hbar^2/2m)\nabla^2\Psi + V\Psi, which likewise pairs a first-order time derivative with a spatial Laplacian. One present goal of current research is to find a correspondence principle between subquantum kinetics and established quantum mechanics.
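To make the equations concrete, here is a deliberately naive 1-D sketch of stepping them forward in time (plain explicit Euler on a periodic grid; all parameter values below are illustrative placeholders, not the published Model G set that actually supports dissipative solitons, and the real simulations use a far better integrator):

```python
import numpy as np

# Naive 1-D explicit-Euler sketch of the Model G equations above.
n, length = 400, 100.0
dx, dt = length / n, 0.005
Dg, Dx, Dy = 1.0, 1.0, 12.0
k1, km1, k2, km2 = 1.0, 0.1, 1.0, 0.1
k3, km3, k4, km4, k5, km5 = 1.0, 0.1, 1.0, 0.1, 1.0, 0.1
A, B, Z, Om = 1.0, 1.0, 1.0, 1.0      # etherons held constant (Om = Omega)

G, X, Y = np.zeros(n), np.zeros(n), np.zeros(n)

def lap(u):
    # periodic second difference: Fick's second law on a 1-D grid
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(20000):
    dG = Dg * lap(G) - (km1 + k2) * G + km2 * X + k1 * A
    dX = (Dx * lap(X) + k2 * G - (km2 + k3 * B + k5) * X
          + km3 * Z * Y - km4 * X**3 + k4 * X**2 * Y + km5 * Om)
    dY = Dy * lap(Y) + k3 * B * X - km3 * Z * Y + km4 * X**3 - k4 * X**2 * Y
    G, X, Y = G + dt * dG, X + dt * dX, Y + dt * dY
```

Each loop iteration is one Euler step of the three coupled equations; seeding X and Y with a localized bump instead of zeros is what lets localized structures form.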
What is Matter?

One deep philosophical problem that SQK elegantly addresses is the question: what is a fundamental particle? SQK approaches this question from a systems perspective. That is, it looks at families of etherons and their interactions with each other. Like biological systems, the "fundamental" particles are not really fundamental at all, any more than a biological organism is. They both consume, transform, and release substance from and to their environment. It is this dynamic interaction that maintains their forms. This "systems thinking" has permeated virtually all scientific disciplines, from geology to psychology, save one: high energy particle physics. It is still trapped in the paradigm of early 20th century thought. (And given the orientation of this web site, I might add that this is very convenient to those who wish to suppress certain technologies that would result from the suppressed physics.)

If SQK explains matter in terms of the ether, isn't that just pushing the question of what is fundamental forward? Yes. However, much is explained now in terms of far less. The ether is a very simple substance, with virtually no structure at all. It only does 2 things: it reacts (as in the 5 kinetic equations above), and it diffuses. Diffusion is also just a direct consequence of random movement; it would be more unusual if it didn't diffuse. But that is all the ether does. It has no mass, no spin, no charge, no magnetic field, and no known structure. But the structures which naturally form from the ethers can have these properties. This is the payoff. It is the goal of physics, and of science in general, to explain the complex in terms of the simple, preferably with beauty and elegance, and to make predictions about the world. Subquantum kinetics does this.

Update 2014 July 18: The C++ code for these simulations is open source.

5 thoughts on "Subquantum Kinetics: Autogenesis in 2 Dimensions"

1. Wow - well done, Matt. (Paranoid Firefox browser users such as myself, who often disable Javascript, will want to enable it for this site, or else Matt's key five reaction-diffusion differential equations for Model G will show up in what I guess is their raw TeX typesetting form, not as mathematical equations.)

• Thanks! This uses a numerical algorithm called exponential time differencing with 4th-order Runge-Kutta, or ETDRK4b as it's called. This should readily generalize to 3 spatial dimensions. I may be hitting you up to help optimize the C++ for multi-core machines :)

2. Matt wrote: >> I may be hitting you up to help optimize the C++ for multi-core machines :)

Well ... as we saw a couple of years ago, there was a basic gap in my understanding of how to convert these equations into low-level code. If my memory serves me correctly, I didn't know how to adjust the code to handle varying the resolution of the time increment (delta-t) to obtain sufficient accuracy. I was basically stuck at a single granularity of clock rate, resulting in unstable results.

• That's all taken care of :) The 3D sims can take days. Given that this is running on Linux, I have no doubt you'll be able to parallelize this better than I could.

3. I have enjoyed making connections between SQK and Bohm-DeBroglie QM / Implicate Order. I think that if Bohm is correct, and a fundamental "motion" or "process" underwrites the "explicate" universe, then a reaction-diffusion type system would make sense. They come in both organic and inorganic varieties, so the vacuum doesn't necessarily have to be "alive", though it might have some type of "cosmic consciousness" or "proto-intelligence" associated with it (as is postulated by Bohm). SQK also fits well with Bohm's model of the electron, which Bohm suggested might be similar to a complex quantum radio/transducer of sorts, whereas orthodox physics simply views the electron, proton, etc. as highly simplistic and closed off from their environment.
Saturday, January 21, 2012

Some parallels between classical and quantum mechanics

This isn't really a blog post. More of something I wanted to interject in a discussion on Google plus but wouldn't fit in the text box.

I've always had trouble with the way the Legendre transform is introduced in classical mechanics. I know I'm not the only one. Many mathematicians and physicists have recognised that it seems to be plucked out of a hat like a rabbit and have even written papers to address this issue. But however much an author attempts to make it seem natural, it still looks like a rabbit to me.

So I have to ask myself, what would make me feel comfortable with the Legendre transform? The Legendre transform is an analogue of the Fourier transform that uses a different semiring to the usual. I wrote briefly about this many years ago. So if we could write classical mechanics in a form that is analogous to another problem where I'd use a Fourier transform, I'd be happier. This is my attempt to do that.

When I wrote about Fourier transforms a little while back the intention was to immediately follow it with an analogous article about Legendre transforms. Unfortunately that's been postponed so I'm going to just assume you know that Legendre transforms can be used to compute inf-convolutions. I'll state clearly what that means below, but I won't show any detail on the analogy with Fourier transforms.

Free classical particles

Let's work in one dimension with a particle of mass $m$ whose position at time $t$ is $x(t)$. The kinetic energy of this particle is given by $\frac{1}{2}m\dot{x}^2$. Its Lagrangian is therefore $L = \frac{1}{2}m\dot{x}^2 - V(x)$. The action of our particle for the time from $t_0$ to $t_1$ is therefore

$S = \int_{t_0}^{t_1} L\,dt.$

The particle motion is that which minimises the action. Suppose the position of the particle at time $t_0$ is $x_0$ and the position at time $t_1$ is $x_1$. Then write $S_{01}(x_0, x_1)$ for the action of the action-minimising path from $x_0$ to $x_1$. So

$S_{01}(x_0, x_1) = \min_{x(\cdot)} \int_{t_0}^{t_1} L\,dt$

where we're minimising over all paths $x(t)$ such that $x(t_0) = x_0$ and $x(t_1) = x_1$.

Now suppose our system evolves from time $t_0$ to $t_2$. We can consider this to be two stages, one from $t_0$ to $t_1$ followed by one from $t_1$ to $t_2$. Let $S_{12}$ be the minimised action analogous to $S_{01}$ for the period $t_1$ to $t_2$. The action from $t_0$ to $t_2$ is the sum of the actions for the two subperiods. So the minimum total action for the period $t_0$ to $t_2$ is given by

$S_{02}(x_0, x_2) = \min_{x_1} \big( S_{01}(x_0, x_1) + S_{12}(x_1, x_2) \big).$

Let me simplify that a little. I'll use $x$ where I previously used $x_2$, and $y$ for $x_1$. So that last equation becomes:

$S_{02}(x_0, x) = \min_y \big( S_{01}(x_0, y) + S_{12}(y, x) \big).$

Now suppose $S$ is translation-independent in the sense that $S(x_0 + a, x_1 + a) = S(x_0, x_1)$. So we can write $S_{01}(x_0, x_1) = f_{01}(x_1 - x_0)$. Then the minimum total action is given by

$f_{02}(x - x_0) = \min_y \big( f_{01}(y - x_0) + f_{12}(x - y) \big).$

Infimal convolution is defined by

$(f \mathbin{\square} g)(x) = \inf_y \big( f(y) + g(x - y) \big)$

so the minimum we seek is $f_{02} = f_{01} \mathbin{\square} f_{12}$.

So now it's natural to use the Legendre transform. We have the inf-convolution theorem:

$(f \mathbin{\square} g)^* = f^* + g^*$

where $f^*$ is the Legendre transform of $f$ given by $f^*(p) = \sup_x \big(px - f(x)\big)$, and so $f_{02}^* = f_{01}^* + f_{12}^*$ (where we use $*$ to represent the Legendre transform with respect to the spatial variable).

Let's consider the case where from $t_0$ onwards the particle motion is free, so $V = 0$. In this case we clearly have translation-invariance and so the time evolution is given by repeated inf-convolution with the one-step action, and in the "Legendre domain" this is nothing other than repeated addition of its Legendre transform.

Let's take a look at $f_{01}$. We know that if a particle travels freely from $x_0$ to $x_1$ over the period from $t_0$ to $t_1$ then it must have followed the minimum action path and we know, from basic mechanics, this is the path with constant velocity. So $\dot{x} = (x_1 - x_0)/(t_1 - t_0)$ and hence the action is given by

$f_{01}(x_1 - x_0) = \frac{m (x_1 - x_0)^2}{2 (t_1 - t_0)}.$

So the time evolution of the action function is given by repeated inf-convolution with a quadratic function. The time evolution of its Legendre transform is therefore given by repeated addition of the Legendre transform of a quadratic function. It's not hard to prove that the Legendre transform of a quadratic function is also quadratic. In fact, the Legendre transform of $x \mapsto \frac{a x^2}{2}$ (for $a > 0$) is $p \mapsto \frac{p^2}{2a}$.
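Here is a quick numerical check of the composition rule (a sketch; the mass, durations and grid are arbitrary choices, and the comparison is restricted away from the grid edges, where the discrete inf-convolution is unreliable):

```python
import numpy as np

# Check: inf-convolving two free-particle one-hop actions reproduces
# the single action for a hop of the combined duration.
m, t1, t2 = 1.0, 0.5, 0.7
xs = np.linspace(-10, 10, 2001)

f1 = m * xs**2 / (2 * t1)          # action for a hop of duration t1
f2 = m * xs**2 / (2 * t2)          # action for a hop of duration t2

def infconv(f, g, xs):
    # (f [] g)(x) = min_y ( f(y) + g(x - y) ), brute force on the grid
    out = np.empty_like(xs)
    for i, x in enumerate(xs):
        out[i] = np.min(f + np.interp(x - xs, xs, g))
    return out

lhs = infconv(f1, f2, xs)
rhs = m * xs**2 / (2 * (t1 + t2))  # predicted: one hop of duration t1 + t2
print(np.max(np.abs(lhs - rhs)[np.abs(xs) < 5]))   # small
```

The discrepancy printed is tiny, as adding the two quadratic Legendre transforms predicts.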
Addition is easier to work with than inf-convolution, so if we wish to understand the time evolution of the action function it's natural to work with this Legendre transformed function.

So that's it for classical mechanics in this post. I've tried to look at the evolution of a classical system in a way that makes the Legendre transform natural.

Free quantum particles

Now I want to take a look at the evolution of a free quantum particle to show how similar it is to what I wrote above. In this case we have the Schrödinger equation

$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V\psi.$

Let's suppose that from time $t_0$ onwards the particle is free, so $V = 0$. Then we have

$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}.$

Now let's take the Fourier transform in the spatial variable. We get:

$i\hbar \frac{\partial \hat\psi}{\partial t} = \frac{\hbar^2 k^2}{2m} \hat\psi.$

We can write this as

$\hat\psi(k, t) = e^{-\frac{i\hbar k^2 (t - t_0)}{2m}}\, \hat\psi(k, t_0).$

So the time evolution of the free quantum particle is given by repeated convolution with a Gaussian function, which in the Fourier domain is repeated multiplication by a Gaussian.

The classical section above is nothing but a tropical version of this section. I doubt I've said anything original here. Classical mechanics is well known to be the limit of quantum mechanics as $\hbar \rightarrow 0$, and it's well known that in this limit we find that occurrences of the semiring $(\mathbb{R}, +, \times)$ are replaced by the semiring $(\mathbb{R} \cup \{\infty\}, \min, +)$. But I've never seen an article that attempts to describe classical mechanics in terms of repeated inf-convolution even though this is close to Hamilton's formulation, and I've never seen an article that shows the parallel with the Schrödinger equation in this way. I'm hoping someone will now be able to say to me "I've seen that before" and post a relevant link below.

I'm not sure how the above applies for a non-trivial potential $V$. I wrote this little Schrödinger equation solver a while back. As might be expected, it's inconvenient to use the Fourier domain to deal with the part of the evolution due to $V$. In order to simulate a time step the code evolves in the Fourier domain assuming the particle is free and then solves for the $V$-dependent part in the spatial domain. So even in the presence of non-trivial $V$ it can still be useful to work with a Fourier transform. Almost the same iteration could be used to numerically compute the action for the classical case.
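For what it's worth, that iteration fits in a few lines (a sketch of the split-step idea, not the actual solver mentioned above; the grid sizes, $\hbar = m = 1$ and the harmonic potential are illustrative choices):

```python
import numpy as np

# Split-step evolution: free part handled in the Fourier domain,
# the V-dependent phase applied in the spatial domain.
n, L, dt, hbar, m = 512, 40.0, 0.005, 1.0, 1.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
V = 0.5 * x**2                                # an illustrative potential

psi = np.exp(-(x - 3.0)**2)                   # an initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/n))

kinetic = np.exp(-1j * hbar * k**2 * dt / (2 * m))   # free step, Fourier side
potential = np.exp(-1j * V * dt / hbar)              # V step, spatial side

for _ in range(2000):
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = potential * psi
```

Replacing the complex exponentials by the quadratic one-step action and the FFT pair by Legendre transforms (with min in place of sum) gives the classical analogue of the same loop.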
Saturday, July 16, 2011

Monstrosity of Quantum Mechanics 7: Basic Postulates

In what sense are the basic postulates of quantum mechanics not Harry Potter fantasy?

Lubos Motl makes in The Unbreakable Postulates of Quantum Mechanics a heroic effort to justify quantum mechanics almost 100 years after its formulation, starting with: The mission is to convince skeptics about the truths of the following basic postulates:

1. The set of possibilities in which a physical system may be found is described by a linear Hilbert space (more precisely by the rays in this space) equipped with an inner product.
2. Complex (nonzero) linear combinations of allowed states are allowed states, too.
3. A physical system composed out of N separated (or fully independent) subsystems has a Hilbert space equal to the tensor product of the Hilbert spaces describing the individual subsystems.
4. Physical quantities, also referred to as "observables" in the fancy quantum mechanical context, are encoded in Hermitean (linear) operators acting on the Hilbert space.
5. In particular, the evolution in time is generated by the operator known as the Hamiltonian.
6. The exponentials of its imaginary multiples are the operators that evolve the system over a finite interval, and these operators are unitary; similarly, other symmetry transformations are given by other unitary (or anti-unitary, if time reversal is included) operators.
7. The expectation values of the quantity "A" are given by the inner product $\langle \psi | A | \psi \rangle$; if "A" is replaced by the projection operator "P", this expectation value expresses the probability that the condition connected with "P" will be satisfied once the system is measured.

The motivations for 1 - 7 presented by Lubos tell us something essential about the solidity of quantum mechanics. Let's see how Lubos motivates 1 - 3:

1. Why do we know that there is a Hilbert space? If a physical theory has content, it must be able to manipulate information. We insert some information that we know, and it spits out another piece of information that we didn't know but that is predicted, or postdicted, by the theory. So there must exist some states; which state was realized in Nature, is realized in Nature, or will be realized in Nature, is the way to phrase all the information we have or we want to have about the world or its pieces. That was true even in classical physics: different states of a physical system were given by points in the phase space (spanned by the positions and their canonical momenta).

2. The new thing about quantum mechanics is that the complex linear superpositions of two allowed states are also allowed states. How do we know that? Well, we may actually design procedures that create such combined states in practice.

3. Now, there are other postulates and universal rules of quantum mechanics. For example, the composite systems are described by tensor products of Hilbert spaces. It's not hard to see why: if the dimensions of Hilbert spaces H1, H2 are equal to d1, d2, there are clearly d1 basis vectors of H1 and d2 basis vectors of H2. These basis vectors parameterize some linearly independent (i.e. fully mutually exclusive) possibilities. The set of linearly independent possibilities for the composite system obviously has to be the Cartesian product of the two sets for the separate subsystems. And the "linear envelope" of this Cartesian product - the new basis - is the tensor product of the original spaces. Its dimension - its number of basis vectors - is equal to d1·d2, as expected. This conclusion is pretty much inevitable, by basic logic.
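For the record, here is what the dimension-counting in point 3 amounts to in practice (a minimal numerical sketch; the two states are arbitrary illustrative vectors):

```python
import numpy as np

# Postulate 3 in miniature: the composite space is the tensor product,
# so dimensions multiply (d1 * d2), they do not add.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)        # a state in H1, d1 = 2
phi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)    # a state in H2, d2 = 3
composite = np.kron(psi, phi)                   # a product state in H1 (x) H2
print(composite.shape)                          # (6,) = 2 * 3
# A generic vector in the 6-dimensional space is entangled: it cannot be
# written as np.kron(a, b) for any single pair of states a and b.
```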
When you read this as a mathematician, you understand that the motivation is weak and formal, and touches triviality elevated to deep insight into the true inner mechanisms of the microscopic world. The Hilbert space assumption essentially reflects that the Schrödinger equation is linear. But why physics on atomic scales should be linear, allowing superposition, is not motivated. This appears as an ad hoc assumption which could be made by one who has recently fallen in love with linear Hilbert space theory and has been so overwhelmed by emotion that rational thinking has disappeared.

The argument that "we may actually design procedures that create such combined states (superposed) in practice" sounds hollow, knowing that this principle of quantum computing has proved very difficult to demonstrate.

Atomic physics concerns the interaction of elementary particles by certain forces and thus can be thought of as N-body problems. But an N-body problem is not linear, and so it requires a lot of fantasy to believe that the N-body problem of quantum mechanics through some miracle decides to show up as linear. I have searched for a motivation without being able to find any reasonable one.

2 comments:

1. Seen this?

2. Yes. My position is that it is impossible to say if doubled CO2 will have a warming or cooling effect, but one can give several arguments indicating that the effect will be small, e.g. smaller than plus/minus 0.3 C. In this sense I am a lukewarmist; I do not say, like some Slayers do, that the atmosphere has a cooling effect. I say that it has a warming-cooling effect and that CO2 climate sensitivity is most likely so small that it can be forgotten.
Many introductory quantum mechanics textbooks include simple exercises on computing the de Broglie wavelength of macroscopic objects, often contrasting the results with that of a proton, etc. For instance, this example, taken from a textbook:

Calculate the de Broglie wavelength for (a) a proton of kinetic energy 70 MeV (b) a 100 g bullet moving at 900 m/s

The pedagogical motivation behind these questions is obvious: when the de Broglie wavelength is small compared to the size of an object, the wave behaviour of the object is undetectable. But what is the validity of the actual approach of applying the formula naively? Of course, a 100 g bullet is not a fundamental object of 100 g but a lattice of some $10^{23}$ atoms bound by the electromagnetic force. But is the naive answer close to the actual one (i.e. within an order of magnitude or two)? How does one even calculate the de Broglie wavelength for a many-body system accurately?

4 Answers

Accepted answer: The de Broglie wavelength formula is valid for a non-fundamental (many-body) object. The reason is that for a translation-invariant system of interacting particles, the center of mass dynamics can be separated from the internal dynamics. Consequently, the solution of the Schrödinger equation can be obtained by separation of variables, and the center of mass component of the wave function just satisfies a free Schrödinger equation (with the total mass parameter). Here are some details:

Consider a many-body nonrelativistic system whose dynamics is governed by the Hamiltonian:

$\hat{H} = \hat{K} + \hat{V} = \sum_i \frac{\hat{p}_i^2}{2m_i} + V(x_i)$

($K$ is the kinetic and $V$ the potential energy respectively). In the translation-invariant case, the potential $V$ is a function of the relative displacements of the individual bodies and not of their absolute values. In this case, the center of mass dynamics can be separated from the relative motion, since the kinetic term can be written as:

$\hat{K} = \frac{\hat{P}^2}{2M} + \hat{K'}$

where $P$ is the total momentum and $M$ is the total mass. $K'$ is the reduced kinetic term. In the case of a two-body problem, for example the hydrogen atom, $K'$ has a nice formula in terms of the reduced mass; for a larger number of particles, the $K'$ formula is less nice, but the essential point is that it depends on the relative momenta only.

For this type of Hamiltonian (with no external forces), the Schrödinger equation can be solved by separation of variables:

$\psi(x_i) = \Psi(X) \psi'(\rho_i)$

where $X$ is the center of mass coordinate, and $\rho_i$ is a collection of the relative coordinates. After the separation of variables, the center of mass wave function satisfies the free Schrödinger equation:

$-\frac{\hbar^2}{2M}\nabla_X^2\Psi(X) = E \Psi(X)$

whose solution (corresponding to the energy $E = \frac{p^2}{2M}$) has the form:

$\Psi(X) \sim \exp(i \frac{p X}{\hbar})$

from which the de Broglie wavelength can be read off: $\lambda = \frac{2 \pi \hbar}{p}$

This is practically contradictory to John's answer, except if one interprets it as "a wavelength" for an extended body. Won't the wavelengths of the constituents extend spatially by far over this wavelength? Where does decoherence come in? – anna v Mar 20 '13 at 11:41
@Anna What I tried to emphasize is that the de Broglie wavelength is a property of a free degree of freedom. The internal state of the composite system is not free and will not be characterized by its constituents' de Broglie wavelengths but rather by its bound state energies, for example its vibrational modes. – David Bar Moshe Mar 20 '13 at 13:51

@Anna (cont.) When conditions are provided such that the system stays in a single (ground state) internal state, it is possible to approximate it by its center of mass quantum dynamics, governed by its de Broglie wavelength, for example in the case of the buckyball and other large molecule interferometry. – David Bar Moshe Mar 20 '13 at 13:52

@Anna (cont.) It is true that in the composite case the decoherence conditions are harder to achieve, because not only can this system lose coherence by being "kicked" by interacting particles but also it can lose coherence by random excitations of its internal degrees of freedom, for example when the thermal excitations exceed its vibrational energies. – David Bar Moshe Mar 20 '13 at 13:52

So would I be correct to say that decoherence ensures we cannot observe any behaviour related to the de Broglie wavelength (for a 100 g bullet)? – John Rennie Mar 20 '13 at 14:51

Answer: If you've read about optical diffraction experiments like Young's slits, you may have noticed they all refer to coherent light. This is the requirement that all the light in the experiment is in phase. If you aren't using coherent light you won't observe any diffraction because different bits of the light will diffract differently and the diffraction pattern is washed out.

Exactly the same applies to observing the wavelike behaviour of quantum objects. If you're diffracting electrons this isn't a problem, but if you're trying to diffract a bullet you require all parts of the bullet to be coherent. In principle you could prepare a bullet in a coherent state, but even if you could manage this the bullet would immediately decohere due to interactions with its environment. This process is known as quantum decoherence. I've linked a Wikipedia article, but be warned that the article isn't well written for non-nerds. If you want to know more you'd be better off Googling for popular science articles on decoherence.

Anyhow, as you obviously suspected from the way you've worded your question, because of decoherence it doesn't make sense to talk about a single de Broglie wavelength for macroscopic objects like bullets. As far as I know, the largest object ever to show quantum behaviour is an oscillator built by Andrew Cleland's group at Santa Barbara. This was around 50 - 100 microns in size, which is actually pretty big. However this is something of a special case and took enormous care to build. A more realistic upper limit is a buckyball, which is around a nanometre in size.

Answer: The so-called de Broglie wave has no meaning, and neither does its wavelength. The wave function in the Schrödinger equation likewise does not represent any wave but is a mathematical edifice to de-localise a micro-particle in space. It is unfortunate that the language of science, instead of being realistic, has remained traditional like literature. There is absolutely no reason why textbooks should contain such silly questions as calculating the wavelength of de Broglie waves of a 100 g bullet.

The FAQ's proscription on pushing personal theories applies to answers as well as to questions. Please take a few minutes to read the other answers to this question.
That is the standard of learned discourse we are looking for here. – dmckee Mar 20 '13 at 19:12

Answer: I just spent a few hours researching this question, and it seems to me that:

It's important to understand what is meant by a particle having a wavelength. This great link has more information (http://electron6.phys.utk.edu/phys250/modules/module%202/matter_waves.htm). It emphatically does not mean that if we were to somehow isolate the particle and set it moving with some velocity v, it would move sinusoidally in space as a function of time.

The key idea is that although de Broglie's relation applies both to photons and other particles, it means something different in the two cases! For the photon, it makes pretty intuitive sense: it refers to the EM wave that is the photon. What about other particles? The wavelength refers to the particle's wavefunction, which, under the statistical interpretation, tells you the probability that you would find the particle at a certain position x (http://hyperphysics.phy-astr.gsu.edu/hbase/uncer.html#c5). The previous link also makes sense of what it means to talk about the wavelength of a particle (if the wavefunction isn't sinusoidal, basically, you take some sort of average).

Does a macroscopic object, like the aforementioned bullet, have a wavelength? I believe so, since we can take all the individual particles and consider the interactions between all the particles in the bullet (giving us entanglements and whatnot). We can imagine the bullet having some complicated wavefunction that describes the probability of it being found somewhere. This is the point on which I'm unsure, but the bigger point is that it doesn't really matter, since the object does not live in isolation from the environment. The idea is that it decoheres by interaction with the environment (a great description here: www.ipod.org.uk/reality/reality_decoherence.asp), which basically means that the environment acts differently on each part of the bullet, so even if the bullet might have originally acted like a quantum system, it no longer does after a measurement (i.e. when we see it). And so it doesn't make sense now to talk about the wavelength of the bullet, since we really need to consider it as part of a bigger system, the bullet plus the environment.
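For scale, here is a quick numerical pass at the textbook exercise quoted in the question (a sketch; the constants are rounded, and the proton is treated relativistically):

```python
import math

# de Broglie wavelengths for the two parts of the textbook exercise.
h = 6.626e-34          # Planck constant, J s
c = 2.998e8            # speed of light, m/s

# (a) proton with kinetic energy T = 70 MeV; relativistically,
#     pc = sqrt(T^2 + 2 T mc^2), and lambda = h c / (p c)
T = 70e6 * 1.602e-19               # J
mc2 = 938.272e6 * 1.602e-19        # proton rest energy, J
pc = math.sqrt(T**2 + 2 * T * mc2)
print(h * c / pc)                  # ~3.4e-15 m, comparable to a nuclear radius

# (b) 100 g bullet at 900 m/s; non-relativistic p = m v
print(h / (0.100 * 900))           # ~7.4e-36 m, absurdly small
```

The bullet's nominal wavelength comes out more than twenty orders of magnitude below any physical length scale of the bullet itself, which is the point of the exercise.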
(Short intro translated from Dutch; the post itself is in English.)

The discussion that has erupted over the EO's cutting of David Attenborough (my position: it is, also on the part of the BBC, which apparently gave permission, not very decent towards Attenborough, and the EO should at the very least have told its audience clearly what it had done: "parts of this documentary have been modified or removed by us"), that discussion reminded me of two old posts of mine about Intelligent Design and quantum mechanics. Because things are not complicated enough yet. :-)

(First posted on Friday 27 January 2006 – 11:09:36 AM)

Tangled Bank

Last week I was talking about Intelligent Design (with, by the way, the founding father of the now sadly closed down UVvN) when an analogy with quantum mechanics occurred to me. Or maybe even two analogies. Today, I bring you the first one.

Both theories have a problem with evolution. Of course, it's not the same evolution. In the case of ID, it's the famous Darwinian evolution of random mutations and natural selection, that is, the mechanism for change in large groups of organisms over (usually) large stretches of time. For QM, the evolution at stake is the way the Schrödinger equation tells the wavefunction of a system, say: the physical state it is in, to change over time. The wavefunction is said to evolve according to the Schrödinger equation.

So how is there a problem with evolution? Well, it's not the whole story. In QM (at least in the orthodox interpretation) there are exceptions to the rule that the wavefunction simply evolves. These exceptions are called measurements. In the usual course of nature, the wavefunction obeys the Schrödinger equation, but when a measurement is done, it is said to collapse instantly into something else. (It does not matter so much now into what exactly.)

In ID, on the other hand, there are exceptions to the usual course of Darwinian evolution. These exceptions happen when the change is too great, when something truly new and different emerges. In that case, there is said to be design.

So in both cases there are two competing mechanisms to describe change. Either it is evolution, or it is a "jump" (either collapse or design). Now, having two mechanisms wouldn't be too bad, if not too pretty either, if you knew exactly when to apply which. This is where the real problems are: when does a "jump" occur?

For ID, this amounts to asking: what steps are too great? This question is answered with things like (I'm not sure if these are all the cases): there is a change from one species to another, and/or something with irreducible complexity is formed. But: what is irreducibly complex? When are two individuals members of different species? Is this always well-defined?

Likewise, in QM, the difficult question is: what is a measurement? And how is it different from any other physical interaction? There have been many attempts to dodge this question, for instance by saying that collapse occurs when a system becomes macroscopic. But that hardly helps: where, exactly, is the line between microscopic and macroscopic?

And when we think we have answered these questions, what if we find, say, animals with intermediate stages of features formerly thought irreducibly complex? Or when we see quantum behaviour of buckyballs or still larger molecules? Do we move the line and try not to think about it?

It seems to boil down to this: is there a qualitative difference between situations in which 'normal' evolution is at work and situations with jumps?
And in both cases, it proves very hard to find such a difference.

I don't mean to argue that ID and QM are completely similar. Just to be clear, I'll add an important difference. QM is a widely accepted, very thoroughly tested and hugely successful theory, and although there is a thing called the 'measurement problem', QM is very clear on its predictions (and very right as well). ID, on the other hand, is at best an alternative to the otherwise widely accepted biological evolution theory, and does not (or even cannot) provide predictions that distinguish it from 'normal' evolution theory. Moreover, while many physicists do not worry much about these things (as QM works so well), some of them and lots of philosophers of physics do, but it seems that the only people worrying about the evolution-versus-jumps problem in ID are its opponents. Related to this is the final point that while it is very hard, or maybe impossible, to get rid of the jumps in QM (there are attempts; Bohm is probably the best known example), in ID the jumps are rather an add-on to the evolution, something deliberately put into the theory, next to evolution.

More in Part 2, to appear soon.

One response: Pingback: ID vs. QM (2): Take a chance « Qulog 2.0
Resonant States in Negative Ions
by Brandefelt, Nicklas

Abstract (Summary)

Resonant states are multiply excited states in atoms and ions that have enough energy to decay by emitting an electron. The ability to emit an electron and the strong electron correlation (which is extra strong in negative ions) make these states both interesting and challenging from a theoretical point of view. The main contribution in this thesis is a method, which combines the use of B splines and complex rotation, to solve the three-electron Schrödinger equation treating all three electrons equally. It is used to calculate doubly excited and triply excited states of ⁴S symmetry with even parity in He⁻. For the doubly excited states there are experimental and theoretical data to compare with. For the triply excited states there is only theoretical data available, and only for one of the resonances. The agreement is in general good. For the triply excited state there is a significant and interesting difference in the width between our calculation and another method. A cause for this deviation is suggested. The method is also used to find a resonant state of ⁴S symmetry with odd parity in H²⁻. This state, in this extremely negative system, has been predicted by two earlier calculations but is highly controversial.

Several other studies presented here focus on two-electron systems. In one, the effect of the splitting of the degenerate H(n=2) thresholds in H⁻ on the resonant states converging to this threshold is studied. If a completely degenerate threshold is assumed, an infinite series of states is expected to converge to the threshold. Here states of ¹P symmetry and odd parity are examined, and it is found that the relativistic and radiative splitting of the threshold causes the series to end after only three resonant states. Since the independent particle model completely fails for doubly excited states, several schemes of alternative quantum numbers have been suggested. We investigate the so-called DESB (Doubly Excited Symmetry Basis) quantum numbers in several calculations. For the doubly excited states of He⁻ mentioned above we investigate one resonance and find that it cannot be assigned DESB quantum numbers unambiguously. We also investigate these quantum numbers for states of ¹S symmetry and even parity in He. We find two types of mixing of DESB states in the doubly excited states calculated. We also show that the amount of mixing of DESB quantum numbers can be inferred from the value of the cosine of the inter-electronic angle. In a study on Li⁻ the calculated cosine values are used to identify doubly excited states measured in a photodetachment experiment. In particular, a resonant state that violates a propensity rule is found.
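(For orientation, a sketch of what the method computes: under the complex rotation $r \rightarrow r\,e^{i\theta}$ the Hamiltonian becomes non-Hermitian, and a resonance appears as an isolated complex eigenvalue $E = E_r - i\Gamma/2$, where $E_r$ is the resonance position and $\Gamma$ the width referred to above; the B-spline basis reduces this to a finite, computable matrix eigenvalue problem.)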
Bibliographical Information:
School: Stockholms universitet
School Location: Sweden
Source Type: Doctoral Dissertation
Keywords: natural sciences; physics; atomic and molecular physics; molecular physics; atomic physics; nuclear physics; particle physics
Date of Publication: 01/01/2001
I'm an undergraduate mathematics student trying to understand some quantum mechanics, but I'm having a hard time understanding what the status of the Schrödinger equation is. In some places I've read that it's just a postulate. At least, that's how I interpret e.g. a quote from the Wikipedia entry on the Schrödinger equation. However, some places seem to derive the Schrödinger equation: just search for "derivation of Schrödinger equation" in Google.

This motivates the question in the title: Is the Schrödinger equation derived or postulated? If it is derived, then just how is it derived, and from what principles? If it is postulated, then it surely came out of somewhere. Something like "in these special cases it can be derived, and then we postulate it works in general". Or maybe not?

Thanks in advance, and please bear with my physical ignorance.

2 Answers

Answer: The issue is that the assumptions are fluid, so there aren't axioms that are agreed upon. Of course Schrödinger didn't just wake up with the Schrödinger equation in his head; he had a reasoning, but the assumptions in that reasoning were the old quantum theory and the de Broglie relation, along with the Hamiltonian idea that mechanics is the limit of wave motion.

These ideas are now best thought of as derived from postulating quantum mechanics underneath, and taking the classical limit with leading semi-classical corrections. So while it is historically correct that the semi-classical knowledge essentially uniquely determined the Schrödinger equation, it is not strictly logically correct, since the thing that is derived is more fundamental than the things used to derive it. This is a common thing in physics: you use approximate laws to arrive at new laws that are more fundamental. It is also the reason that one must have a sketch of the historical development in mind to arrive at the most fundamental theory; otherwise you will have no clue how the fundamental theory was arrived at or why it is true.

Answer: The Schrödinger equation is postulated. Any source that claims to "derive" it is actually motivating it. The best discussion of this that I'm aware of is in Shankar, Chapter 4 ("The Postulates -- a General Discussion"). Shankar presents a table of four postulates of Quantum Mechanics, each given as a parallel to classical postulates from Hamiltonian dynamics.

Postulate II says that the dynamical variables x and p of Hamiltonian dynamics are replaced by Hermitian operators $\hat X$ and $\hat P$. In the X-basis, these have the action $\hat X\psi(x) = x\psi(x)$ and $\hat P\psi(x) = -i\hbar\frac{d\psi}{dx}$. Any composite variable in Hamiltonian dynamics can be built out of x and p as $\omega(x,p)$. This is replaced by a Hermitian operator $\hat \Omega(\hat X,\hat P)$ with the exact same functional form.

Postulate IV says that Hamilton's equations are replaced by the Schrödinger equation. The classical Hamiltonian retains its functional form, with x replaced by $\hat X$ and p replaced by $\hat P$.

NB: Shankar doesn't discuss this, but Dirac does. The particular form of $\hat X$ and $\hat P$ can be derived from their commutation relation. In classical dynamics, x and p have the Poisson bracket {x, p} = 1. In Quantum Mechanics, you can replace this with the commutation relation $[\hat X, \hat P] = i\hbar$. What Shankar calls Postulate II can be derived from this. So you could use that as your fundamental postulate if you prefer.
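As a quick symbolic check of that commutator (a sketch using sympy, with hbar kept as a symbol and psi an arbitrary test function):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')

# X acts as multiplication by x; P acts as -i*hbar*d/dx.
X = lambda f: x * f
P = lambda f: -sp.I * hbar * sp.diff(f, x)

f = psi(x)
commutator = X(P(f)) - P(X(f))
print(sp.simplify(commutator))   # I*hbar*psi(x), i.e. [X, P] = i*hbar
```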
Summary: the Schrödinger equation didn't just come from nowhere historically. It's a relatively obvious thing to try. Mathematically, there isn't anything more fundamental in the theory that you could use to derive it.
From Wikipedia, the free encyclopedia

For other uses, see Atom (disambiguation).

[Infobox figure: an illustration of the helium atom in its ground state, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10⁻¹⁰ m or 100 pm).]

Smallest recognized division of a chemical element
Mass range: 1.67×10⁻²⁷ to 4.52×10⁻²⁵ kg
Electric charge: zero (neutral), or ion charge
Diameter range: 62 pm (He) to 520 pm (Cs) (data page)
Components: electrons and a compact nucleus of protons and neutrons

An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element.[1] Every solid, liquid, gas, and plasma is made up of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 pm (a ten-billionth of a meter, in the short scale).[2] However, atoms do not have well-defined boundaries, and there are different ways to define their size which give different but close values.

Atoms are small enough that classical physics gives noticeably incorrect results. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict their behavior.

Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons (none in hydrogen-1). Protons and neutrons are called nucleons. Over 99.94% of the atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the number of protons and electrons are equal, the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion.

The electrons of an atom are attracted to the protons in an atomic nucleus by this electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by a different force, the nuclear force, which is usually stronger than the electromagnetic force repelling the positively charged protons from one another. Under certain circumstances the repelling electromagnetic force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus, leaving behind a different element: nuclear decay resulting in nuclear transmutation.

The number of protons in the nucleus defines to what chemical element the atom belongs: for example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the element.[3] The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature, and is the subject of the discipline of chemistry. Not all the matter of the universe is composed of atoms. Dark matter comprises more of the Universe than matter, and is composed not of atoms, but of particles of a currently unknown type.
History of atomic theory

Main article: Atomic theory

Atoms in philosophy

Main article: Atomism

The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece and India. The word "atom", in fact, was coined by ancient Greek philosophers. However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect. They also couldn't convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It wasn't until the 19th century that the idea was embraced and refined by scientists, when the blossoming science of chemistry produced discoveries that only the concept of atoms could explain.

First evidence-based theory

In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions). For instance, there are two types of tin oxide: one is 88.1% tin and 11.9% oxygen and the other is 78.7% tin and 21.3% oxygen (tin(II) oxide and tin dioxide respectively). This means that 100 g of tin will combine either with 13.5 g or 27 g of oxygen. 13.5 and 27 form a ratio of 1:2, a ratio of small whole numbers. This common pattern in chemistry suggested to Dalton that elements react in whole-number multiples of discrete units; in other words, atoms. In the case of tin oxides, one tin atom will combine with either one or two oxygen atoms.[4]

Dalton also believed atomic theory could explain why water absorbs different gases in different proportions. For example, he found that water absorbs carbon dioxide far better than it absorbs nitrogen.[5] Dalton hypothesized this was due to the differences in mass and complexity of the gases' respective particles. Indeed, carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2).

Brownian motion

In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905 Albert Einstein produced the first mathematical analysis of the motion.[6][7][8] French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory.[9]

Discovery of the electron

[Figure: the Geiger–Marsden experiment. Top: expected results, alpha particles passing through the plum pudding model of the atom with negligible deflection. Bottom: observed results, a small portion of the particles deflected by the concentrated positive charge of the nucleus.]

The physicist J. J. Thomson measured the mass of cathode rays, showing they were made of particles, but were around 1800 times lighter than the lightest atom, hydrogen. Therefore they were not atoms, but a new particle, the first subatomic particle to be discovered, which he originally called "corpuscle" but which was later named electron, after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical to particles given off by photoelectric and radioactive materials.[10] It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge within atoms. Thomson was given the 1906 Nobel Prize for Physics for this work.
Thus he overturned the belief that atoms are the indivisible, ultimate particles of matter.[11] Thomson also incorrectly postulated that the low-mass, negatively charged electrons were distributed throughout the atom in a uniform sea of positive charge. This became known as the plum pudding model.

Discovery of the nucleus

In 1909, Hans Geiger and Ernest Marsden, under the direction of Ernest Rutherford, bombarded a metal foil with alpha particles to observe how they scattered. They expected all the alpha particles to pass straight through with little deflection, because Thomson's model said that the charges in the atom are so diffuse that their electric fields could not affect the alpha particles much. However, Geiger and Marsden spotted alpha particles being deflected by angles greater than 90°, which was supposed to be impossible according to Thomson's model. To explain this, Rutherford proposed that the positive charge of the atom is concentrated in a tiny nucleus at the center of the atom.[12]

Discovery of isotopes

While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table.[13] The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for separating atom types through his work on ionized gases, which subsequently led to the discovery of stable isotopes.[14]

Bohr model

Main article: Bohr model

[Figure: the Bohr model of the atom, with an electron making instantaneous "quantum leaps" from one orbit to another. This model is obsolete.]

In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon.[15] This quantization was used to explain why the electron orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation; see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra.[16]

Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.[17]

Chemical bonding explained

Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons.[18] As the chemical properties of the elements were known to largely repeat themselves according to the periodic law,[19] in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.[20]

Further developments in quantum physics

The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of the atom.
When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split based on the direction of an atom's angular momentum, or spin. As this direction is random, the beam could be expected to spread into a line. Instead, the beam was split into two parts, depending on whether the atomic spin was oriented up or down.[21]

In 1924, Louis de Broglie proposed that all particles behave to an extent like waves. In 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is physically impossible to obtain precise values for both the position and momentum of a particle at the same time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.[22][23]

Discovery of the neutron

The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule.[24] The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.[25]

Fission, high-energy physics and condensed matter

In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product.[26] A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's results were the first experimental nuclear fission.[27][28] In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.[29]

In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.[30] Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions.[31]

Subatomic particles

Main article: Subatomic particle

Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron; all three are fermions.
Discovery of the neutron

The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule.[24] The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.[25]

Fission, high-energy physics and condensed matter

In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to obtain transuranium elements. Instead, his chemical experiments showed barium as a product.[26] A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's results were the first experimental evidence of nuclear fission.[27][28] In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.[29]

In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.[30] Neutrons and protons were found to be hadrons, composites of smaller particles called quarks. The Standard Model of particle physics was developed, and so far it has successfully explained the properties of the nucleus in terms of these subatomic particles and the forces that govern their interactions.[31]

Subatomic particles

Main article: Subatomic particle

Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron; all three are fermions. However, the hydrogen-1 atom has no neutrons and the hydron ion has no electrons.

The electron is by far the least massive of these particles at 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques.[32] It is the lightest particle with a positive rest mass yet measured. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details.

Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10⁻²⁷ kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom, and named it the proton.

Neutrons have no electrical charge and have a free mass 1,839 times the mass of the electron,[33] or 1.6929×10⁻²⁷ kg, the heaviest of the three constituent particles, although this mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined.[34] The neutron was discovered in 1932 by the English physicist James Chadwick.

In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2⁄3) and one down quark (with a charge of −1⁄3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.[35][36]

The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.[35][36]

Nucleus

Main article: Atomic nucleus

The binding energy needed for a nucleon to escape the nucleus, for various isotopes

All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A fm, where A is the total number of nucleons.[37] This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.[38]
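The radius formula quoted above is simple enough to check numerically. A minimal sketch; the isotopes chosen are arbitrary examples:

```python
# Empirical nuclear radius: r ≈ 1.07 * A^(1/3) fm, with A the number of nucleons.
def nuclear_radius_fm(a: int) -> float:
    """Approximate nuclear radius in femtometres for mass number A."""
    return 1.07 * a ** (1 / 3)

for name, a in [("hydrogen-1", 1), ("carbon-12", 12), ("lead-208", 208)]:
    print(f"{name:10s}  A = {a:3d}  r ≈ {nuclear_radius_fm(a):4.2f} fm")

# Even lead-208 comes out near 6.3 fm, tens of thousands of times smaller
# than the ~1e5 fm radius of the atom as a whole.
```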
Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.[39]

The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle, which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. However, a proton and a neutron are allowed to occupy the same quantum state.[40]

For atoms with low atomic numbers, a nucleus that has more neutrons than protons tends to drop to a lower energy state through radioactive decay so that the neutron–proton ratio is closer to one. However, as the atomic number increases, a higher proportion of neutrons is required to offset the mutual repulsion of the protons. Thus, there are no stable nuclei with equal proton and neutron numbers above atomic number Z = 20 (calcium), and as Z increases, the neutron–proton ratio of stable isotopes increases.[40] The stable isotope with the highest neutron–proton ratio is lead-208 (about 1.5).

Illustration of a nuclear fusion process that forms a deuterium nucleus, consisting of a proton and a neutron, from two protons. A positron (e+)—an antimatter electron—is emitted along with an electron neutrino.

The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus.[41] Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high-energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.[42][43]

If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is this non-recoverable loss of energy that causes the fused particles to remain together in a state that requires this energy to separate.[44]

The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together.[45] It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction.
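The mass–energy bookkeeping described above can be illustrated with the deuteron. A minimal sketch; the particle masses are standard tabulated values rather than figures from the text:

```python
# Binding energy from the mass defect, E = m * c^2.
# Masses in unified atomic mass units (u); 1 u is equivalent to ~931.494 MeV.
M_PROTON = 1.007276    # u
M_NEUTRON = 1.008665   # u
M_DEUTERON = 2.013553  # u
U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV

mass_defect = M_PROTON + M_NEUTRON - M_DEUTERON  # mass lost on binding, u
binding_energy = mass_defect * U_TO_MEV          # MeV

print(f"mass defect    = {mass_defect:.6f} u")
print(f"binding energy = {binding_energy:.2f} MeV")  # about 2.22 MeV
```

The result, about 2.2 million eV, is the same deuterium figure quoted below in the comparison with electron binding energies.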
For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means that fusion processes producing nuclei with atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.[40]

Electron cloud

A potential well, showing, according to classical mechanics, the minimum energy V(x) needed to reach each position x. Classically, a particle with energy E is constrained to a range of positions between x1 and x2.

The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations.

Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.[46] Only a discrete (or quantized) set of these orbitals exists around the nucleus, as other possible wave patterns rapidly decay into a more stable form.[47] Orbitals can have one or more ring or node structures, and they differ from each other in size, shape and orientation.[48]

Wave functions of the first five atomic orbitals. The three 2p orbitals each display a single angular node that has an orientation and a minimum at the center.

How atoms are constructed from electron orbitals and link to the periodic table

Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.[47]

The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom,[49] compared to 2.23 million eV for splitting a deuterium nucleus.[50] Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.[51]

Nuclear properties

By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element.
For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form,[52] also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element ununoctium (now named oganesson).[53] All known isotopes of elements with atomic numbers greater than 82 are radioactive.[54][55]

About 339 nuclides occur naturally on Earth,[56] of which 254 (about 75%) have not been observed to decay and are referred to as "stable isotopes". Of these 254, only 90 are stable against all forms of decay even in theory; the other 164 have not been observed to decay even though such decay is energetically possible in theory, and they too are formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 288 nuclides is known as the primordial nuclides. Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).[57][note 1]

For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.[58][page needed]

Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 254 known stable nuclides, only four have both an odd number of protons and an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.[58][page needed]

Mass

Main articles: Atomic mass and mass number

The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having the dimension of mass), because it expresses a count. An example of the use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons).

The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), also called the dalton (Da).
This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg.[59] Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 u;[60] this value is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example, the mass of a nitrogen-14 atom is roughly 14 u). However, this number will not be exactly an integer except in the case of carbon-12 (see below).[61] The heaviest stable atom is lead-208,[54] with a mass of 207.9766521 u.[62]

As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 u, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.[59]

Shape and size

Main article: Atomic radius

Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus.[2] However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the periodic table, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin.[63] On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right).[64] Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.[65]

When subjected to external forces, like electric fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of the outer-shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electric fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have recently been shown to occur for sulfur ions[66] and chalcogen ions[67] in pyrite-type compounds.

Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm), so they cannot be viewed using an optical microscope. However, individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width.[68] A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice that number of hydrogen atoms.[69] A single one-carat diamond, with a mass of 2×10⁻⁴ kg, contains about 10 sextillion (10²²) atoms of carbon.[note 2] If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.[70]
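Counts like the diamond figure above follow from simple mole arithmetic, N = (m/M)·N_A. A minimal sketch, using the carat and molar-mass values quoted in the text and its notes:

```python
# Counting atoms with moles: N = (m / M) * N_A.
N_AVOGADRO = 6.022e23  # atoms per mole
m_diamond = 2e-4       # kg; one carat is 200 mg
M_carbon = 0.012       # kg per mole of carbon-12

atoms = (m_diamond / M_carbon) * N_AVOGADRO
print(f"atoms in a one-carat diamond ≈ {atoms:.2e}")  # about 1e22
```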
Radioactive decay

Main article: Radioactive decay

This diagram shows the half-life (T½) of various isotopes with Z protons and N neutrons.

Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm.[71]

The most common forms of radioactive decay are:[72][73]

• Alpha decay: this process occurs when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.

• Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron-to-proton transition is accompanied by the emission of an electron and an antineutrino, while the proton-to-neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission because it requires less energy. In this type of decay, an electron is absorbed by the nucleus rather than a positron being emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron.

• Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited nuclear state that results in gamma emission usually occurs following the emission of an alpha or a beta particle; thus, gamma decay usually follows alpha or beta decay.

Other, rarer types of radioactive decay include the ejection of neutrons or protons or clusters of nucleons from a nucleus, or the emission of more than one beta particle. An analog of gamma emission, which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by the production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission.

Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.[71]
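The half-life rule just described amounts to a one-line formula, N(t) = N₀·(1/2)^(t/T½). A minimal sketch; carbon-14 and its roughly 5,730-year half-life are a standard illustrative example, not figures from the text:

```python
# Exponential radioactive decay: the remaining fraction halves every half-life.
def remaining_fraction(t: float, half_life: float) -> float:
    """Fraction of a radioactive sample remaining after time t."""
    return 0.5 ** (t / half_life)

HALF_LIFE_C14 = 5730.0  # years
for t in (0, 5730, 11460, 22920):
    frac = remaining_fraction(t, HALF_LIFE_C14)
    print(f"after {t:5d} years: {frac:.4f} of the carbon-14 remains")
# prints 1.0000, 0.5000, 0.2500, 0.0625: only 25% remains after two
# half-lives, as stated above.
```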
Magnetic moment

Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.[74]

The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field. However, the most dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin-up state and the other in the opposite, spin-down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons.[75]

In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap, and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.[75][76]

The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium. However, for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.[77][78]

Energy levels

These electron energy levels (not to scale) are sufficient for the ground states of atoms up to cadmium (5s² 4d¹⁰) inclusive. Note that even the top of the diagram is lower in energy than an unbound electron state.

The potential energy of an electron in an atom is negative; its dependence on position reaches its minimum (greatest absolute value) inside the nucleus, and vanishes as the distance from the nucleus goes to infinity, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see the time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, a stationary state, while an electron transition to a higher level results in an excited state.[79] The electron's energy rises when n increases, because the (average) distance to the nucleus increases. The dependence of the energy on ℓ is caused not by the electrostatic potential of the nucleus, but by the interaction between electrons.

For an electron to transition between two different states, e.g. from the ground state to the first excited state, it must absorb or emit a photon at an energy matching the difference in energy of those levels, in accordance with the Bohr model; these energies can be precisely calculated using the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the atom, only a single electron changes states in response to the photon; see electron properties.
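For hydrogen these levels take a particularly simple form, E_n = −13.6 eV / n², where 13.6 eV is the ionization energy quoted earlier. A minimal sketch; the n = 3 → 2 transition chosen here (the red Balmer line) is just an example:

```python
# Hydrogen energy levels E_n = -13.6 eV / n^2, and the photon from a 3 -> 2 jump.
E1 = -13.6    # eV, hydrogen ground-state energy
HC = 1239.84  # eV*nm, photon energy-to-wavelength conversion factor

def level(n: int) -> float:
    """Energy of hydrogen level n, in eV."""
    return E1 / n ** 2

for n in (1, 2, 3):
    print(f"E_{n} = {level(n):7.3f} eV")

photon_ev = level(3) - level(2)  # energy carried off by the emitted photon
print(f"3 -> 2 photon: {photon_ev:.3f} eV, "
      f"wavelength ≈ {HC / photon_ev:.0f} nm")  # about 656 nm: red light
```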
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum.[80] Each element has a characteristic spectrum that can depend on the nuclear charge, the subshells filled by electrons, the electromagnetic interactions between the electrons, and other factors.[81]

An example of absorption lines in a spectrum

When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.[82]

Close examination of the spectral lines reveals that some display a fine-structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and the motion of the outermost electron.[83] When an atom is in an external magnetic field, spectral lines become split into three or more components, a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines.[84] The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.[85]

If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.[86]

Valence and bonding behavior

Valency is the combining power of an element; it is equal to the number of hydrogen atoms that an atom can combine with or displace in forming compounds.[87] The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms.
Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells.[88] For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.[89]

The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases.[90][91]

States

Main articles: State of matter and Phase (matter)

Snapshots illustrating the formation of a Bose–Einstein condensate

Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas.[92] Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond.[93] Gaseous allotropes exist as well, such as dioxygen and ozone.

At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale.[94][95] This super-cooled collection of atoms then behaves as a single super-atom, which may allow fundamental checks of quantum mechanical behavior.[96]

Identification

Scanning tunneling microscope image showing the individual atoms making up this gold (100) surface. The surface atoms deviate from the bulk crystal structure and arrange in columns several atoms wide with pits between them (see surface reconstruction).

The scanning tunneling microscope is a device for viewing surfaces at the atomic level. It uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would normally be insurmountable. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current. The calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible. It confirms that for low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels—the Fermi-level local density of states.[97][98]

An atom can be ionized by removing one of its electrons. The electric charge causes the trajectory of an atom to bend when it passes through a magnetic field. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions.
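The deflection principle is captured by the standard formula r = mv/(qB) for a charged particle in a uniform magnetic field. A minimal sketch; the field strength, ion speed, and isotope pair are illustrative assumptions, not values from the text:

```python
# Ion turning radius in a uniform magnetic field: r = m * v / (q * B).
E_CHARGE = 1.602e-19  # C, charge of a singly ionized atom
U_TO_KG = 1.6605e-27  # kg per unified atomic mass unit

def turning_radius(mass_u: float, v: float = 1.0e5, b: float = 0.5) -> float:
    """Radius (m) of a singly charged ion of mass mass_u (u) at speed v (m/s) in field b (T)."""
    return mass_u * U_TO_KG * v / (E_CHARGE * b)

for name, mass in (("carbon-12", 12.0), ("carbon-13", 13.0034)):
    print(f"{name}: r = {turning_radius(mass):.4f} m")
# The heavier isotope turns on a wider arc, so the two beams land at
# different positions and their relative intensities can be measured.
```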
Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.[99] A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.[100]

Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element.[101] Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.[102]

Origin and current state

Atoms form about 4% of the total energy density of the observable Universe, with an average density of about 0.25 atoms/m³.[103] Within a galaxy such as the Milky Way, atoms have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³.[104] The Sun is believed to be inside the Local Bubble, a region of highly ionized gas, so the density in the solar neighborhood is only about 10³ atoms/m³.[105] Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's atoms are concentrated inside stars, and the total mass of atoms forms about 10% of the mass of the galaxy.[106] (The remainder of the mass is unknown dark matter.)[107]

Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions: in about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.[108][109][110]

The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable.
Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.[111]

Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron;[112] see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation.[113] This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.[114] Elements such as lead formed largely through the radioactive decay of heavier elements.[115]

Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating.[116][117] Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.[118]

There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they the results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere.[119] Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions.[120][121] Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth.[122][123] Transuranic elements have radioactive lifetimes shorter than the current age of the Earth,[124] and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust.[125] Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.[126]

The Earth contains approximately 1.33×10⁵⁰ atoms.[127] Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals.[128][129] This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.[130]

Rare and theoretical forms

Superheavy elements

Main article: Transuranium element

While elements with atomic numbers higher than lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103.
These superheavy elements may have a nucleus that is relatively stable against radioactive decay.[131] The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons.[132]

Exotic matter

Main article: Exotic matter

Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter particle and its corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature.[133][134] However, in 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.[135][136]

Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics.[137][138][139]

Notes

1. ^ For more recent updates see Interactive Chart of Nuclides (Brookhaven National Laboratory).
2. ^ A carat is 200 milligrams. By definition, carbon-12 has 0.012 kg per mole. The Avogadro constant defines 6×10²³ atoms per mole.

References

1. ^ "Atom". Compendium of Chemical Terminology (IUPAC Gold Book) (2nd ed.). IUPAC. Retrieved 2015-04-25.  2. ^ a b Ghosh, D. C.; Biswas, R. (2002). "Theoretical calculation of Absolute Radii of Atoms and Ions. Part 1. The Atomic Radii". Int. J. Mol. Sci. 3: 87–113. doi:10.3390/i3020087.  3. ^ Leigh, G. J., ed. (1990). International Union of Pure and Applied Chemistry, Commission on the Nomenclature of Inorganic Chemistry, Nomenclature of Organic Chemistry – Recommendations 1990. Oxford: Blackwell Scientific Publications. p. 35. ISBN 0-08-022369-9. An atom is the smallest unit quantity of an element that is capable of existence whether alone or in chemical combination with other atoms of the same or other elements.  4. ^ Andrew G. van Melsen (1952). From Atomos to Atom. Mineola, N.Y.: Dover Publications. ISBN 0-486-49584-1.  5. ^ Dalton, John. "On the Absorption of Gases by Water and Other Liquids", in Memoirs of the Literary and Philosophical Society of Manchester. 1803. Retrieved on August 29, 2007. 6. ^ Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German) 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Retrieved 4 February 2007.  7. ^ Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. pp. 1–7. ISBN 0-19-851567-7. OCLC 48753074.  8. ^ Lee, Y.K.; Hoon, K. (1995). "Brownian Motion". Imperial College. Archived from the original on 18 December 2007. Retrieved 18 December 2007.  9. ^ Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746.  10. ^ Thomson, J. J. (August 1901). "On bodies smaller than atoms". The Popular Science Monthly (Bonnier Corp.): 323–335. Retrieved 2009-06-21.  11. ^ "J.J. Thomson".
Nobel Foundation. 1906. Retrieved 20 December 2007.  12. ^ Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom" (PDF). Philosophical Magazine 21 (125): 669–88. doi:10.1080/14786440508637080.  13. ^ "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Retrieved 18 January 2008.  14. ^ Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society. A 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057.  15. ^ Stern, David P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Retrieved 20 December 2007.  16. ^ Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. Retrieved 16 February 2008.  17. ^ Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. pp. 228–230. ISBN 0-19-851971-0.  18. ^ Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society 38 (4): 762–786. doi:10.1021/ja02261a002.  19. ^ Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press US. pp. 205–226. ISBN 0-19-530573-6.  20. ^ Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002.  21. ^ Scully, Marlan O.; Lamb, Willis E.; Barut, Asim (1987). "On the theory of the Stern-Gerlach apparatus". Foundations of Physics 17 (6): 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788.  22. ^ Brown, Kevin (2007). "The Hydrogen Atom". MathPages. Retrieved 21 December 2007.  23. ^ Harrison, David M. (2000). "The Development of Quantum Mechanics". University of Toronto. Archived from the original on 25 December 2007. Retrieved 21 December 2007.  24. ^ Aston, Francis W. (1920). "The constitution of atmospheric neon". Philosophical Magazine 39 (6): 449–55. doi:10.1080/14786440408636058.  25. ^ Chadwick, James (12 December 1935). "Nobel Lecture: The Neutron and Its Properties". Nobel Foundation. Retrieved 21 December 2007.  26. ^ "Otto Hahn, Lise Meitner and Fritz Strassmann". Chemical Achievers: The Human Face of the Chemical Sciences. Chemical Heritage Foundation. Archived from the original on 24 October 2009. Retrieved 15 September 2009.  27. ^ Meitner, Lise; Frisch, Otto Robert (1939). "Disintegration of uranium by neutrons: a new type of nuclear reaction". Nature 143 (3615): 239–240. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.  28. ^ Schroeder, M. "Lise Meitner – Zur 125. Wiederkehr Ihres Geburtstages" (in German). Retrieved 4 June 2009.  29. ^ Crawford, E.; Sime, Ruth Lewin; Walker, Mark (1997). "A Nobel tale of postwar injustice". Physics Today 50 (9): 26–32. Bibcode:1997PhT....50i..26C. doi:10.1063/1.881933.  30. ^ Kullander, Sven (28 August 2001). "Accelerators and Nobel Laureates". Nobel Foundation. Retrieved 31 January 2008.  31. ^ "The Nobel Prize in Physics 1990". Nobel Foundation. 17 October 1990. Retrieved 31 January 2008.  32. ^ Demtröder, Wolfgang (2002). Atoms, Molecules and Photons: An Introduction to Atomic- Molecular- and Quantum Physics (1st ed.). Springer. pp. 39–42. ISBN 3-540-20631-0. OCLC 181435713.  33. ^ Woan, Graham (2000). The Cambridge Handbook of Physics. Cambridge University Press. p. 8. ISBN 0-521-57507-9. OCLC 224032426.  34. ^ MacGregor, Malcolm H. (1992). The Enigmatic Electron. Oxford University Press. pp. 33–37. 
ISBN 0-19-521833-7. OCLC 223372888.  35. ^ a b Particle Data Group (2002). "The Particle Adventure". Lawrence Berkeley Laboratory. Archived from the original on 4 January 2007. Retrieved 3 January 2007.  36. ^ a b Schombert, James (18 April 2006). "Elementary Particles". University of Oregon. Retrieved 3 January 2007.  37. ^ Jevremovic, Tatjana (2005). Nuclear Principles in Engineering. Springer. p. 63. ISBN 0-387-23284-2. OCLC 228384008.  38. ^ Pfeffer, Jeremy I.; Nir, Shlomo (2000). Modern Physics: An Introductory Text. Imperial College Press. pp. 330–336. ISBN 1-86094-250-4. OCLC 45900880.  39. ^ Wenner, Jennifer M. (10 October 2007). "How Does Radioactive Decay Work?". Carleton College. Retrieved 9 January 2008.  40. ^ a b c Raymond, David (7 April 2006). "Nuclear Binding Energies". New Mexico Tech. Archived from the original on 11 December 2006. Retrieved 3 January 2007.  41. ^ Mihos, Chris (23 July 2002). "Overcoming the Coulomb Barrier". Case Western Reserve University. Retrieved 13 February 2008.  42. ^ Staff (30 March 2007). "ABC's of Nuclear Science". Lawrence Berkeley National Laboratory. Archived from the original on 5 December 2006. Retrieved 3 January 2007.  43. ^ Makhijani, Arjun; Saleska, Scott (2 March 2001). "Basics of Nuclear Physics and Fission". Institute for Energy and Environmental Research. Archived from the original on 16 January 2007. Retrieved 3 January 2007.  44. ^ Shultis, J. Kenneth; Faw, Richard E. (2002). Fundamentals of Nuclear Science and Engineering. CRC Press. pp. 10–17. ISBN 0-8247-0834-2. OCLC 123346507.  45. ^ Fewell, M. P. (1995). "The atomic nuclide with the highest mean binding energy". American Journal of Physics 63 (7): 653–658. Bibcode:1995AmJPh..63..653F. doi:10.1119/1.17828.  46. ^ Mulliken, Robert S. (1967). "Spectroscopy, Molecular Orbitals, and Chemical Bonding". Science 157 (3784): 13–24. Bibcode:1967Sci...157...13M. doi:10.1126/science.157.3784.13. PMID 5338306.  47. ^ a b Brucat, Philip J. (2008). "The Quantum Atom". University of Florida. Archived from the original on 7 December 2006. Retrieved 4 January 2007.  48. ^ Manthey, David (2001). "Atomic Orbitals". Orbital Central. Archived from the original on 10 January 2008. Retrieved 21 January 2008.  49. ^ Herter, Terry (2006). "Lecture 8: The Hydrogen Atom". Cornell University. Retrieved 14 February 2008.  50. ^ Bell, R. E.; Elliott, L. G. (1950). "Gamma-Rays from the Reaction H1(n,γ)D2 and the Binding Energy of the Deuteron". Physical Review 79 (2): 282–285. Bibcode:1950PhRv...79..282B. doi:10.1103/PhysRev.79.282.  51. ^ Smirnov, Boris M. (2003). Physics of Atoms and Ions. Springer. pp. 249–272. ISBN 0-387-95550-X.  52. ^ Matis, Howard S. (9 August 2000). "The Isotopes of Hydrogen". Guide to the Nuclear Wall Chart. Lawrence Berkeley National Lab. Archived from the original on 18 December 2007. Retrieved 21 December 2007.  53. ^ Weiss, Rick (17 October 2006). "Scientists Announce Creation of Atomic Element, the Heaviest Yet". Washington Post. Retrieved 21 December 2007.  54. ^ a b Sills, Alan D. (2003). Earth Science the Easy Way. Barron's Educational Series. pp. 131–134. ISBN 0-7641-2146-4. OCLC 51543743.  55. ^ Dumé, Belle (23 April 2003). "Bismuth breaks half-life record for alpha decay". Physics World. Archived from the original on 14 December 2007. Retrieved 21 December 2007.  56. ^ Lindsay, Don (30 July 2000). "Radioactives Missing From The Earth". Don Lindsay Archive. Archived from the original on 28 April 2007. Retrieved 23 May 2007.  57. ^ Tuli, Jagdish K. (April 2005). 
"Nuclear Wallet Cards". National Nuclear Data Center, Brookhaven National Laboratory. Retrieved 16 April 2011.  58. ^ a b CRC Handbook (2002). 59. ^ a b Mills, Ian; Cvitaš, Tomislav; Homann, Klaus; Kallay, Nikola; Kuchitsu, Kozo (1993). Quantities, Units and Symbols in Physical Chemistry (PDF) (2nd ed.). Oxford: International Union of Pure and Applied Chemistry, Commission on Physiochemical Symbols Terminology and Units, Blackwell Scientific Publications. p. 70. ISBN 0-632-03583-8. OCLC 27011505.  60. ^ Chieh, Chung (22 January 2001). "Nuclide Stability". University of Waterloo. Retrieved 4 January 2007.  61. ^ "Atomic Weights and Isotopic Compositions for All Elements". National Institute of Standards and Technology. Archived from the original on 31 December 2006. Retrieved 4 January 2007.  62. ^ Audi, G.; Wapstra, A.H.; Thibault, C. (2003). "The Ame2003 atomic mass evaluation (II)" (PDF). Nuclear Physics A 729 (1): 337–676. Bibcode:2003NuPhA.729..337A. doi:10.1016/j.nuclphysa.2003.11.003.  63. ^ Shannon, R. D. (1976). "Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides". Acta Crystallographica A 32 (5): 751–767. Bibcode:1976AcCrA..32..751S. doi:10.1107/S0567739476001551.  64. ^ Dong, Judy (1998). "Diameter of an Atom". The Physics Factbook. Archived from the original on 4 November 2007. Retrieved 19 November 2007.  65. ^ Zumdahl, Steven S. (2002). Introductory Chemistry: A Foundation (5th ed.). Houghton Mifflin. ISBN 0-618-34342-3. OCLC 173081482. Archived from the original on 4 March 2008. Retrieved 5 February 2008.  66. ^ Birkholz, M.; Rudert, R. (2008). "Interatomic distances in pyrite-structure disulfides – a case for ellipsoidal modeling of sulfur ions]" (PDF). phys. stat. sol. b 245: 1858–1864. Bibcode:2008PSSBR.245.1858B. doi:10.1002/pssb.200879532.  67. ^ Birkholz, M. (2014). "Modeling the Shape of Ions in Pyrite-Type Crystals". Crystals 4: 390–403. doi:10.3390/cryst4030390.  68. ^ Staff (2007). "Small Miracles: Harnessing nanotechnology". Oregon State University. Retrieved 7 January 2007. —describes the width of a human hair as 105 nm and 10 carbon atoms as spanning 1 nm. 69. ^ Padilla, Michael J.; Miaoulis, Ioannis; Cyr, Martha (2002). Prentice Hall Science Explorer: Chemical Building Blocks. Upper Saddle River, New Jersey USA: Prentice-Hall, Inc. p. 32. ISBN 0-13-054091-9. OCLC 47925884. There are 2,000,000,000,000,000,000,000 (that's 2 sextillion) atoms of oxygen in one drop of water—and twice as many atoms of hydrogen.  70. ^ Feynman, Richard (1995). Six Easy Pieces. The Penguin Group. p. 5. ISBN 978-0-14-027666-4. OCLC 40499574.  71. ^ a b "Radioactivity". Archived from the original on 4 December 2007. Retrieved 19 December 2007.  72. ^ L'Annunziata, Michael F. (2003). Handbook of Radioactivity Analysis. Academic Press. pp. 3–56. ISBN 0-12-436603-1. OCLC 16212955.  73. ^ Firestone, Richard B. (22 May 2000). "Radioactive Decay Modes". Berkeley Laboratory. Retrieved 7 January 2007.  74. ^ Hornak, J. P. (2006). "Chapter 3: Spin Physics". The Basics of NMR. Rochester Institute of Technology. Archived from the original on 3 February 2007. Retrieved 7 January 2007.  75. ^ a b Schroeder, Paul A. (25 February 2000). "Magnetic Properties". University of Georgia. Archived from the original on 29 April 2007. Retrieved 7 January 2007.  76. ^ Goebel, Greg (1 September 2007). "[4.3] Magnetic Properties of the Atom". Elementary Quantum Physics. In The Public Domain website. Retrieved 7 January 2007.  77. ^ Yarris, Lynn (Spring 1997). 
"Talking Pictures". Berkeley Lab Research Review. Archived from the original on 13 January 2008. Retrieved 9 January 2008.  78. ^ Liang, Z.-P.; Haacke, E. M. (1999). Webster, J. G., ed. Encyclopedia of Electrical and Electronics Engineering: Magnetic Resonance Imaging. vol. 2. John Wiley & Sons. pp. 412–426. ISBN 0-471-13946-7.  79. ^ Zeghbroeck, Bart J. Van (1998). "Energy levels". Shippensburg University. Archived from the original on 15 January 2005. Retrieved 23 December 2007.  80. ^ Fowles, Grant R. (1989). Introduction to Modern Optics. Courier Dover Publications. pp. 227–233. ISBN 0-486-65957-7. OCLC 18834711.  81. ^ Martin, W. C.; Wiese, W. L. (May 2007). "Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas". National Institute of Standards and Technology. Archived from the original on 8 February 2007. Retrieved 8 January 2007.  82. ^ "Atomic Emission Spectra — Origin of Spectral Lines". Avogadro Web Site. Retrieved 10 August 2006.  83. ^ Fitzpatrick, Richard (16 February 2007). "Fine structure". University of Texas at Austin. Retrieved 14 February 2008.  84. ^ Weiss, Michael (2001). "The Zeeman Effect". University of California-Riverside. Archived from the original on 2 February 2008. Retrieved 6 February 2008.  85. ^ Beyer, H. F.; Shevelko, V. P. (2003). Introduction to the Physics of Highly Charged Ions. CRC Press. pp. 232–236. ISBN 0-7503-0481-2. OCLC 47150433.  86. ^ Watkins, Thayer. "Coherence in Stimulated Emission". San José State University. Archived from the original on 12 January 2008. Retrieved 23 December 2007.  87. ^ oxford dictionary – valency 88. ^ Reusch, William (16 July 2007). "Virtual Textbook of Organic Chemistry". Michigan State University. Retrieved 11 January 2008.  89. ^ "Covalent bonding – Single bonds". chemguide. 2000.  90. ^ Husted, Robert et al. (11 December 2003). "Periodic Table of the Elements". Los Alamos National Laboratory. Archived from the original on 10 January 2008. Retrieved 11 January 2008.  91. ^ Baum, Rudy (2003). "It's Elemental: The Periodic Table". Chemical & Engineering News. Retrieved 11 January 2008.  92. ^ Goodstein, David L. (2002). States of Matter. Courier Dover Publications. pp. 436–438. ISBN 0-13-843557-X.  93. ^ Brazhkin, Vadim V. (2006). "Metastable phases, phase transformations, and phase diagrams in physics and chemistry". Physics-Uspekhi 49 (7): 719–24. Bibcode:2006PhyU...49..719B. doi:10.1070/PU2006v049n07ABEH006013.  94. ^ Myers, Richard (2003). The Basics of Chemistry. Greenwood Press. p. 85. ISBN 0-313-31664-3. OCLC 50164580.  95. ^ Staff (9 October 2001). "Bose-Einstein Condensate: A New Form of Matter". National Institute of Standards and Technology. Archived from the original on 3 January 2008. Retrieved 16 January 2008.  96. ^ Colton, Imogen; Fyffe, Jeanette (3 February 1999). "Super Atoms from Bose-Einstein Condensation". The University of Melbourne. Archived from the original on 29 August 2007. Retrieved 6 February 2008.  97. ^ Jacox, Marilyn; Gadzuk, J. William (November 1997). "Scanning Tunneling Microscope". National Institute of Standards and Technology. Archived from the original on 7 January 2008. Retrieved 11 January 2008.  98. ^ "The Nobel Prize in Physics 1986". The Nobel Foundation. Retrieved 11 January 2008. —in particular, see the Nobel lecture by G. Binnig and H. Rohrer. 99. ^ Jakubowski, N.; Moens, Luc; Vanhaecke, Frank (1998). "Sector field mass spectrometers in ICP-MS". Spectrochimica Acta Part B: Atomic Spectroscopy 53 (13): 1739–63. Bibcode:1998AcSpe..53.1739J. 
doi:10.1016/S0584-8547(98)00222-5.  100. ^ Müller, Erwin W.; Panitz, John A.; McLane, S. Brooks (1968). "The Atom-Probe Field Ion Microscope". Review of Scientific Instruments 39 (1): 83–86. Bibcode:1968RScI...39...83M. doi:10.1063/1.1683116.  101. ^ Lochner, Jim; Gibb, Meredith; Newman, Phil (30 April 2007). "What Do Spectra Tell Us?". NASA/Goddard Space Flight Center. Archived from the original on 16 January 2008. Retrieved 3 January 2008.  102. ^ Winter, Mark (2007). "Helium". WebElements. Archived from the original on 30 December 2007. Retrieved 3 January 2008.  103. ^ Hinshaw, Gary (10 February 2006). "What is the Universe Made Of?". NASA/WMAP. Archived from the original on 31 December 2007. Retrieved 7 January 2008.  104. ^ Choppin, Gregory R.; Liljenzin, Jan-Olov; Rydberg, Jan (2001). Radiochemistry and Nuclear Chemistry. Elsevier. p. 441. ISBN 0-7506-7463-6. OCLC 162592180.  105. ^ Davidsen, Arthur F. (1993). "Far-Ultraviolet Astronomy on the Astro-1 Space Shuttle Mission". Science 259 (5093): 327–34. Bibcode:1993Sci...259..327D. doi:10.1126/science.259.5093.327. PMID 17832344.  106. ^ Lequeux, James (2005). The Interstellar Medium. Springer. p. 4. ISBN 3-540-21326-0. OCLC 133157789.  107. ^ Smith, Nigel (6 January 2000). "The search for dark matter". Physics World. Archived from the original on 16 February 2008. Retrieved 14 February 2008.  108. ^ Croswell, Ken (1991). "Boron, bumps and the Big Bang: Was matter spread evenly when the Universe began? Perhaps not; the clues lie in the creation of the lighter elements such as boron and beryllium". New Scientist (1794): 42. Archived from the original on 7 February 2008. Retrieved 14 January 2008.  109. ^ Copi, Craig J.; Schramm, DN; Turner, MS (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science 267 (5195): 192–99. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624.  110. ^ Hinshaw, Gary (15 December 2005). "Tests of the Big Bang: The Light Elements". NASA/WMAP. Archived from the original on 17 January 2008. Retrieved 13 January 2008.  111. ^ Abbott, Brian (30 May 2007). "Microwave (WMAP) All-Sky Survey". Hayden Planetarium. Retrieved 13 January 2008.  112. ^ Hoyle, F. (1946). "The synthesis of the elements from hydrogen". Monthly Notices of the Royal Astronomical Society 106: 343–83. Bibcode:1946MNRAS.106..343H. doi:10.1093/mnras/106.5.343.  113. ^ Knauth, D. C.; Knauth, D. C.; Lambert, David L.; Crane, P. (2000). "Newly synthesized lithium in the interstellar medium". Nature 405 (6787): 656–58. doi:10.1038/35015028. PMID 10864316.  114. ^ Mashnik, Stepan G. (2000). "On Solar System and Cosmic Rays Nucleosynthesis and Spallation Processes". arXiv:astro-ph/0008382 [astro-ph].  115. ^ Kansas Geological Survey (4 May 2005). "Age of the Earth". University of Kansas. Retrieved 14 January 2008.  116. ^ Manuel 2001, pp. 407–430, 511–519. 117. ^ Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–21. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Retrieved 14 January 2008.  118. ^ Anderson, Don L.; Foulger, G. R.; Meibom, Anders (2 September 2006). "Helium: Fundamental models". Archived from the original on 8 February 2007. Retrieved 14 January 2007.  119. ^ Pennicott, Katie (10 May 2001). "Carbon clock could show the wrong time". PhysicsWeb. Archived from the original on 15 December 2007. Retrieved 14 January 2008.  120. 
^ Yarris, Lynn (27 July 2001). "New Superheavy Elements 118 and 116 Discovered at Berkeley Lab". Berkeley Lab. Archived from the original on 9 January 2008. Retrieved 14 January 2008.  121. ^ Diamond, H et al. (1960). "Heavy Isotope Abundances in Mike Thermonuclear Device". Physical Review 119 (6): 2000–04. Bibcode:1960PhRv..119.2000D. doi:10.1103/PhysRev.119.2000.  122. ^ Poston Sr., John W. (23 March 1998). "Do transuranic elements such as plutonium ever occur naturally?". Scientific American.  123. ^ Keller, C. (1973). "Natural occurrence of lanthanides, actinides, and superheavy elements". Chemiker Zeitung 97 (10): 522–30. OSTI 4353086.  124. ^ Zaider, Marco; Rossi, Harald H. (2001). Radiation Science for Physicians and Public Health Workers. Springer. p. 17. ISBN 0-306-46403-9. OCLC 44110319.  125. ^ Manuel 2001, pp. 407–430,511–519. 126. ^ "Oklo Fossil Reactors". Curtin University of Technology. Archived from the original on 18 December 2007. Retrieved 15 January 2008.  127. ^ Weisenberger, Drew. "How many atoms are there in the world?". Jefferson Lab. Retrieved 16 January 2008.  128. ^ Pidwirny, Michael. "Fundamentals of Physical Geography". University of British Columbia Okanagan. Archived from the original on 21 January 2008. Retrieved 16 January 2008.  129. ^ Anderson, Don L. (2002). "The inner inner core of Earth". Proceedings of the National Academy of Sciences 99 (22): 13966–68. Bibcode:2002PNAS...9913966A. doi:10.1073/pnas.232565899. PMC 137819. PMID 12391308.  130. ^ Pauling, Linus (1960). The Nature of the Chemical Bond. Cornell University Press. pp. 5–10. ISBN 0-8014-0333-2. OCLC 17518275.  131. ^ Anonymous (2 October 2001). "Second postcard from the island of stability". CERN Courier. Archived from the original on 3 February 2008. Retrieved 14 January 2008.  132. ^ Jacoby, Mitch (2006). "As-yet-unsynthesized superheavy atom should form a stable diatomic molecule with fluorine". Chemical & Engineering News 84 (10): 19. doi:10.1021/cen-v084n010.p019a.  133. ^ Koppes, Steve (1 March 1999). "Fermilab Physicists Find New Matter-Antimatter Asymmetry". University of Chicago. Retrieved 14 January 2008.  134. ^ Cromie, William J. (16 August 2001). "A lifetime of trillionths of a second: Scientists explore antimatter". Harvard University Gazette. Retrieved 14 January 2008.  135. ^ Hijmans, Tom W. (2002). "Particle physics: Cold antihydrogen". Nature 419 (6906): 439–40. Bibcode:2002Natur.419..439H. doi:10.1038/419439a. PMID 12368837.  136. ^ Staff (30 October 2002). "Researchers 'look inside' antimatter". BBC News. Retrieved 14 January 2008.  137. ^ Barrett, Roger (1990). "The Strange World of the Exotic Atom". New Scientist (1728): 77–115. Archived from the original on 21 December 2007. Retrieved 4 January 2008.  138. ^ Indelicato, Paul (2004). "Exotic Atoms". Physica Scripta T112 (1): 20–26. arXiv:physics/0409058. Bibcode:2004PhST..112...20I. doi:10.1238/Physica.Topical.112a00020.  139. ^ Ripin, Barrett H. (July 1998). "Recent Experiments on Exotic Atoms". American Physical Society. Retrieved 15 February 2008.  • Manuel, Oliver (2001). Origin of Elements in the Solar System: Implications of Post-1957 Observations. Springer. ISBN 0-306-46562-0. OCLC 228374906.  Further reading External links
Biological Mathematics

Principia BioMathematica

The notion that DNA and proteins form some sort of computational network dates back at least to the 1970s. In 1982, Richard Feynman referred to the idea of a "quantum computer", a computer that uses the effects of quantum mechanics. In 1994, Leonard Adleman demonstrated computation in a test tube, based on DNA splicing mechanisms. Currently, many uses of the term "biological mathematics" refer to Adleman's splicing techniques. More recently, "intramolecular computation" has been added to this list, i.e. computation occurring within single molecules, e.g. the Amino Acid Code and the Histone Code, as well as any other techniques that biological systems may use that qualify as mathematical in nature.

Computation implies some form of mathematics. Adleman's experiment solved an instance of the directed Hamiltonian path problem, a close relative of the traveling salesman (shortest path) problem, at least for a limited set of data; a brute-force version of it on a conventional computer is sketched at the end of this section. DNA and protein networks respond to complex logical environments, making decisions based on the absence or presence of different conditions, molecules, or organisms in the cellular environment. At the level of the brain, extremely complex mathematical processing must be occurring. Artificial intelligence techniques give many potential models: decision theory, statistical pattern recognition, and image processing techniques, to name just a few.

The tetrahedral geometry of the carbon atom is at the center, or kernel, of biological mathematics ("smart molecules"). Carbon atoms readily form chains amongst themselves (as well as with other classes of atoms) by the sharing of electrons. The interactions of these neighboring covalent bonds result in switching elements similar in potential function to their man-made digital counterparts. The result is that as few as two or three atoms acting in concert have a wealth of mathematical and logical processing capabilities. (Computational Structures in Non-Coding DNA and the Histone Code)
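Adleman's test-tube computation is easy to mimic, far less efficiently, on an ordinary computer. The sketch below brute-forces a directed Hamiltonian path on a seven-node graph; the edge set here is illustrative, not Adleman's actual instance.

```python
from itertools import permutations

# Brute-force search for a directed Hamiltonian path: a route that visits
# every node exactly once while following directed edges.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 6),
         (3, 4), (3, 5), (4, 5), (5, 6)}

def hamiltonian_path(n, start, end):
    for perm in permutations(range(n)):
        if perm[0] == start and perm[-1] == end and \
           all(pair in edges for pair in zip(perm, perm[1:])):
            return perm
    return None

print(hamiltonian_path(7, 0, 6))  # e.g. (0, 1, 2, 3, 4, 5, 6)
```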
The Theory of Predication in Aristotle's Categories

"There is a theory called the theory of categories which in a more or less developed form, with minor or major modifications, made its appearance first in a large number of Aristotelian writings and then, under the influence of these writings, came to be a standard part of traditional logic, a place it maintained with more or less success into the early part of this century, when it met the same fate as certain other parts of traditional logic. There are lots of questions one may ask about this theory. Presumably not the most interesting question, but certainly one for which one would want to have an answer if one took an interest in the theory at all, is the following: What are categories? It turns out that this is a rather large and difficult question. And hence I want to restrict myself to the narrower and more modest question, What are categories in Aristotle?, hoping that a clarification of this question ultimately will help to clarify the more general questions. But even this narrower question turns out to be so complicated and controversial that I will be content if I can shed some light on the simple questions: What does the word "category" mean in Aristotle? What does Aristotle have in mind when he talks of "categories"?

Presumably it is generally agreed that Aristotle's doctrine of categories involves the assumption that there is some scheme of classification such that all there is, all entities, can be divided into a limited number of ultimate classes. But there is no agreement as to the basis and nature of this classification, nor is there an agreement as to how the categories themselves are related to these classes of entities. There is a general tendency among commentators to talk as if the categories just were these classes, but there is also the view that, though for each category there is a corresponding ultimate class of entities, the categories themselves are not to be identified with these classes. And there are various ways in which it could be true that the categories only correspond to, but are not identical with, these classes of entities. It might, e.g., be the case that the categories are not classes of entities but rather classes of expressions of a certain kind, expressions which we—following tradition—may call "categorematic." On this interpretation these categorematic expressions signify the various entities we classify under such headings as "substance," "quality," or "quantity." And in this case we have to ask whether the entities are classified according to a classification of the categorematic expressions by which they are signified, or whether, the other way round, the expressions are classified according to the classification of the entities they signify. Or it might be thought that the categories are classes of only some categorematic expressions, namely, those which can occur as predicate-expressions. Or it might be the case that the categories themselves are not classes at all, neither of entities nor of expressions, but rather headings or labels or predicates which collect, or apply to, either entities or expressions, i.e., the category itself, strictly speaking would be a term like "substance" or "substance word." Or it might be the case that categories are neither classes nor terms but concepts. All these views have had their ardent supporters." (pp. 1-2)

From: Michael Frede, "Categories in Aristotle," in Studies in Aristotle, edited by Dominic O'Meara (Washington: Catholic University Press, 1981), pp. 1-25. Reprinted in M. Frede, Essays in Ancient Philosophy (Minneapolis: University of Minnesota Press), pp. 29-48.

In mathematics, a set can be thought of as any collection of distinct objects considered as a whole. Although this appears to be a simple idea, sets are one of the most fundamental concepts in modern mathematics. The study of the structure of possible sets, set theory, is rich and ongoing. Having only been invented at the end of the 19th century, set theory is now a ubiquitous part of mathematics education, being introduced from primary school in many countries. Set theory can be viewed as the foundation upon which nearly all of mathematics can be derived.

Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties; for example, it may be associative, commutative, anticommutative, idempotent, and so on. The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs. An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result, but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function +: S×S → S.
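To make the definitions concrete, here is a small illustrative sketch (the set and operation are my own choices): addition modulo 4 as a binary operation on S = {0, 1, 2, 3}, with closure, commutativity, and associativity checked by brute force.

```python
from itertools import product

S = {0, 1, 2, 3}
op = lambda a, b: (a + b) % 4   # addition modulo 4: a binary operation *: S x S -> S

closed = all(op(a, b) in S for a, b in product(S, repeat=2))
commutative = all(op(a, b) == op(b, a) for a, b in product(S, repeat=2))
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(S, repeat=3))
print(closed, commutative, associative)   # True True True
```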
In mathematics, the concept of a relation or relationship is a generalization of 2-place relations, such as the relation of equality, denoted by the sign "=" in a statement like "5 + 7 = 12," or the relation of order, denoted by the sign "<" in a statement like "5 < 12". Relations that involve two places or roles are called binary relations by some and dyadic relations by others, the latter being historically prior but also useful when necessary to avoid confusion with binary (base 2) numerals. The next step up is to consider relations that can involve more than two places or roles, but still a finite number of them. These are called finite place or finitary relations. A finitary relation that involves k places is variously called a k-ary, a k-adic, or a k-dimensional relation. The number k is then called the arity, the adicity, or the dimension of the relation, respectively.

Numerical Analysis

One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square.[1] Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction.[2] In a square wall section that is two meters by two meters, a diagonal beam has to be √8 ≈ 2.83 meters long.[3]

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation to √2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical linear algebra is essential to quantitative psychology; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Monte Carlo method

The Monte Carlo method can be illustrated as a game of battleship. First a player makes some random shots. Next the player applies algorithms (e.g. a battleship is four dots in the vertical or horizontal direction). Finally, based on the outcome of the random sampling and the algorithms, the player can determine the likely locations of the other player's ships. Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business.
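A minimal self-contained example of the idea, random sampling plus a simple counting rule, is the classic estimate of π from points thrown into a unit square (the function name is illustrative):

```python
import random

def monte_carlo_pi(n=1_000_000):
    """Estimate pi: the fraction of random points in the unit square that
    fall inside the quarter circle of radius 1 approaches pi/4."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

print(monte_carlo_pi())  # ~3.14; the error shrinks like 1/sqrt(n)
```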
These methods are also widely used in mathematics: a classic use is the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions.

Orthonormal Basis

A subset {v_1, ..., v_k} of a vector space V, with the inner product ⟨·,·⟩, is called orthonormal if ⟨v_i, v_j⟩ = 0 when i ≠ j. That is, the vectors are mutually perpendicular. Moreover, they are all required to have length one: ⟨v_i, v_i⟩ = 1. An orthonormal set must be linearly independent, and so it is a vector space basis for the space it spans. Such a basis is called an orthonormal basis.

The simplest example of an orthonormal basis is the standard basis e_i for Euclidean space R^n. The vector e_i is the vector with all 0s except for a 1 in the ith coordinate. For example, e_1 = (1, 0, ..., 0). A rotation (or flip) through the origin will send an orthonormal set to another orthonormal set. In fact, given any orthonormal basis, there is a rotation, or rotation combined with a flip, which will send the orthonormal basis to the standard basis. These are precisely the transformations which preserve the inner product, and are called orthogonal transformations.

Usually when one needs a basis to do calculations, it is convenient to use an orthonormal basis. For example, the formula for a vector space projection is much simpler with an orthonormal basis. The savings in effort make it worthwhile to find an orthonormal basis before doing such a calculation. Gram-Schmidt orthonormalization is a popular way to find an orthonormal basis, as in the sketch below. Another instance when orthonormal bases arise is as a set of eigenvectors for a symmetric matrix. For a general matrix, the set of eigenvectors may not be orthonormal, or even be a basis.
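Here is a minimal Gram-Schmidt sketch (the input vectors are arbitrary illustrative choices): each vector has its projections onto the previously built basis subtracted, and is then normalized.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical
    Gram-Schmidt): subtract projections onto the basis built so far,
    then normalize to unit length."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

B = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
print(np.round(B @ B.T, 10))  # identity matrix: <b_i, b_j> = delta_ij
```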
A wave function or wavefunction is a mathematical tool used in quantum mechanics to describe any physical system. It is a function from a space that maps the possible states of the system into the complex numbers. The laws of quantum mechanics (i.e. the Schrödinger equation) describe how the wave function evolves over time. The values of the wave function are probability amplitudes — complex numbers — the squares of the absolute values of which give the probability distribution that the system will be in any of the possible states. The first few hydrogen atom electron orbitals, usually pictured through their electron probability densities, form an orthonormal basis for the wave function of the electron.

Unsupervised Learning

Unsupervised learning is a method of machine learning where a model is fit to observations. It is distinguished from supervised learning by the fact that there is no a priori output. In unsupervised learning, a data set of input objects is gathered. Unsupervised learning then typically treats input objects as a set of random variables. A joint density model is then built for the data set. Unsupervised learning can be used in conjunction with Bayesian inference to produce conditional probabilities (i.e. supervised learning) for any of the random variables given the others.

A holy grail of unsupervised learning is the creation of a factorial code of the data, i.e., a code with statistically independent components. Later supervised learning usually works much better when the raw input data is first translated into a factorial code. Unsupervised learning is also useful for data compression: fundamentally, all data compression algorithms either explicitly or implicitly rely on a probability distribution over a set of inputs.

Another form of unsupervised learning is clustering, which is sometimes not probabilistic. Also see formal concept analysis.

Distance measure

An important step in any clustering is to select a distance measure, which will determine how the similarity of two elements is calculated. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and further away according to another. For example, in a 2-dimensional space, the distance between the point (x=1, y=0) and the origin (x=0, y=0) is always 1 according to the usual norms, but the distance between the point (x=1, y=1) and the origin can be 2, √2, or 1, depending on whether you take the 1-norm, 2-norm, or infinity-norm distance (verified in the sketch after the list below).

Common distance functions:
• The Euclidean distance (also called distance as the crow flies or 2-norm distance). A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.
• The maximum norm.
• The Mahalanobis distance, which corrects data for different scales and correlations in the variables.
• The angle between two vectors, which can be used as a distance measure when clustering high-dimensional data. See Inner product space.
• The Hamming distance (sometimes edit distance), which measures the minimum number of substitutions required to change one member into another.
• Some notions of semantic relatedness are distance functions. These include distances based on databases such as WordNet and search engines, and distances learned from machine-learned semantic analysis of a corpus.
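The numbers quoted above for the point (x=1, y=1) can be checked directly:

```python
import numpy as np

p = np.array([1.0, 1.0])   # the point (x=1, y=1) from the example above
for name, o in [("1-norm", 1), ("2-norm", 2), ("infinity-norm", np.inf)]:
    # norm of p itself = distance from p to the origin
    print(name, np.linalg.norm(p, ord=o))
# 1-norm: 2.0, 2-norm: 1.4142..., infinity-norm: 1.0
```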
Cardinal and Ordinal numbers

One view is that the core of mathematics is based upon two simple questions arising from practical needs:
• How many?
• How much?
This is the cardinal number viewpoint. Another view is that mathematics may have an even earlier basis, built on ordinals used to establish pecking orders and rank. Such basic questions are:
• Who eats first, second, etc.?
• What comes first, etc.?
This is the ordinal number viewpoint.

One to One Correspondence

The notion of one-to-one correspondence is fundamental to counting. When we count out a set of cards, we say, 1, 2, 3, ..., 52, and as we say each number we lay down a card. Each number corresponds to a card. Technically, we can say that we have put the cards in the deck and the numbers from 1 to 52 in a one-to-one correspondence with each other.

In abstract algebra, a homomorphism is a structure-preserving map between two algebraic structures (such as groups, rings, or vector spaces). The word homomorphism comes from the Greek language: homos meaning "same" and morphe meaning "shape". Note the similar root word "homoios," meaning "similar," which is found in another mathematical concept, namely homeomorphisms. In abstract algebra, an isomorphism (Greek: ison "equal", and morphe "shape") is a bijective (one-to-one and onto) map f such that both f and its inverse f⁻¹ are homomorphisms, i.e., structure-preserving mappings.

Algebra is a branch of mathematics concerning the study of structure, relation and quantity. The name is derived from the treatise written by the Persian[1] mathematician, astronomer, astrologer and geographer Muhammad bin Mūsā al-Khwārizmī, titled (in Arabic: الكتاب الجبر والمقابلة) Al-Kitab al-Jabr wa-l-Muqabala (meaning "The Compendious Book on Calculation by Completion and Balancing"), which provided symbolic operations for the systematic solution of linear and quadratic equations.

Binary operations: The notion of addition (+) is abstracted to give a binary operation, * say. The notion of binary operation is meaningless without the set on which the operation is defined. For two elements a and b in a set S, a*b gives another element in the set (this condition is called closure). Addition (+), subtraction (−), multiplication (×), and division (÷) can be binary operations when defined on different sets, as is addition and multiplication of matrices, vectors, and polynomials.

Molecular Algebra

The sidechain dihedral angles of proteins are denoted χ1 through χ5, depending on the distance up the sidechain. The χ1 dihedral angle is defined by atoms N-Cα-Cβ-Cγ, the χ2 dihedral angle is defined by atoms Cα-Cβ-Cγ-Cδ, and so on. The sidechain dihedral angles tend to cluster near 180°, 60°, and -60°, which are called the trans, gauche+, and gauche- conformations. The choice of sidechain dihedral angles is affected by the neighbouring backbone and sidechain dihedrals; for example, the gauche+ conformation is rarely followed by the gauche+ conformation (and vice versa) because of the increased likelihood of atomic collisions.

Twenty Elementary Algebras

A Moncznik (Perry Moncznik) multiplication table (in the abstract algebra sense) is one in which the elements being "multiplied" are the different possible conformational states of a single amino acid. The twenty amino acids can each be represented by such a table and thus form twenty elemental algebras which, it could be argued, may in some sense form the basis from which a large portion of mathematics arises.

Variables, Expressions, and Equations

A variable is a symbol that represents a number. Usually we use letters such as n, t, or x for variables. For example, we might say that s stands for the side-length of a square. We now treat s as if it were a number we could use. The perimeter of the square is given by 4 × s. The area of the square is given by s × s. When working with variables, it can be helpful to use a letter that will remind you of what the variable stands for: let n be the number of people in a movie theater; let t be the time it takes to travel somewhere; let d be the distance from my house to the park.

The following are examples of expressions:
3 + 7
2 × y + 5
2 + 6 × (4 - 2)
z + 3 × (8 - z)

An equation is a statement that two numbers or expressions are equal. Equations are useful for relating variables and numbers. Many word problems can easily be written down as equations with a little practice. Many simple rules exist for simplifying equations.

In mathematics, a bijection, or a bijective function, is a function f from a set X to a set Y with the property that, for every y in Y, there is exactly one x in X such that f(x) = y. Alternatively, f is bijective if it is a one-to-one correspondence between those sets; i.e., both one-to-one (injective) and onto (surjective).[1] (See also Bijection, injection and surjection.) For example, consider the function succ, defined from the set of integers to the set of integers, that to each integer x associates the integer succ(x) = x + 1.
For another example, consider the function sumdif that to each pair (x, y) of real numbers associates the pair sumdif(x, y) = (x + y, x − y). A bijective function is also called a permutation. This is more commonly used when X = Y. It should be noted that "one-to-one function" means one-to-one correspondence (i.e., bijection) to some authors, but injection to others. The set of all bijections from X to Y is denoted X ↔ Y. Bijective functions play a fundamental role in many areas of mathematics, for instance in the definition of isomorphism (and related concepts such as homeomorphism and diffeomorphism), permutation group, projective map, and many others.

Composition and Inverses

A function f is bijective if and only if its inverse relation f⁻¹ is a function. In that case, f⁻¹ is also a bijection. The composition g ∘ f of two bijections f: X → Y and g: Y → Z is a bijection. The inverse of g ∘ f is (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. On the other hand, if the composition g ∘ f of two functions is bijective, we can only say that f is injective and g is surjective. A relation f from X to Y is a bijective function if and only if there exists another relation g from Y to X such that g ∘ f is the identity function on X, and f ∘ g is the identity function on Y. Consequently, the sets have the same cardinality.
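These identities are easy to check for the sumdif example above (a small illustrative sketch):

```python
def sumdif(x, y):
    """f(x, y) = (x + y, x - y): a bijection from R^2 to R^2."""
    return (x + y, x - y)

def sumdif_inv(u, v):
    """The inverse relation, itself a function: ((u + v)/2, (u - v)/2)."""
    return ((u + v) / 2, (u - v) / 2)

pt = (3.0, 5.0)
assert sumdif_inv(*sumdif(*pt)) == pt   # f^-1 o f is the identity on X
assert sumdif(*sumdif_inv(*pt)) == pt   # f o f^-1 is the identity on Y
print("both compositions act as the identity")
```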
Mathematical structure

In mathematics, a structure on a set, or more generally a type, consists of additional mathematical objects that in some manner attach to the set, making it easier to visualize or work with, or endowing the collection with meaning or significance. A partial list of possible structures: measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, equivalence relations, and differential structures. Sometimes, a set is endowed with more than one structure simultaneously; this enables mathematicians to study it more richly. For example, an order induces a topology. As another example, if a set both has a topology and is a group, and the two structures are related in a certain way, the set becomes a topological group. Mappings between sets which preserve structures (so that structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.

Discrete Mathematics

For contrast, see continuum, topology, and mathematical analysis.

Molecular Geometry

The term geometric primitive in computer graphics and CAD systems is used in various senses, with the common meaning of atomic geometric objects the system can handle (draw, store). Sometimes the subroutines that draw the corresponding objects are called "geometric primitives" as well. The most "primitive" primitives are the point and the straight line segment, which were all that early vector graphics systems had. In constructive solid geometry, primitives are simple geometric shapes such as a cube, cylinder, sphere, cone, pyramid, or torus. Modern 2D computer graphics systems may operate with primitives which are lines (segments of straight lines, circles and more complicated curves), as well as shapes (boxes, arbitrary polygons, circles). A common set of two-dimensional primitives includes lines, points, and polygons, although some people prefer to consider triangles primitives, because every polygon can be constructed from triangles. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves (such as Bézier curves, circles, etc.) may be considered primitives; in other cases, curves are complex forms created from many straight, primitive shapes. Commonly used geometric primitives thus include points, line segments, curves, and polygons. Note that in 3D applications, basic geometric shapes and forms (cubes, cylinders, spheres, cones, pyramids, tori) are considered to be primitives rather than the above list. These are considered to be primitives in 3D modelling because they are the building blocks for many other shapes and forms. A 3D package may also include a list of extended primitives, which are more complex shapes that come with the package. For example, a teapot is listed as a primitive in 3D Studio Max.

The specific three-dimensional arrangement of atoms in molecules is referred to as molecular geometry. Molecular geometry is associated with the specific orientation of atoms as a result of bonding and non-bonding electrons about the central atom. A careful analysis of electron pairs will usually result in correct molecular geometry determinations. In addition, the simple writing of Lewis diagrams, which show the electron arrangements, can also provide important clues for the determination of molecular geometry.

Molecules with no lone electron pairs: Molecular geometry has its basis in the electron pair geometry of a molecule. If the molecule has all electron pairs bonded to atoms, then the molecular geometry is identical with the electron pair geometry. This is a common occurrence.

An example of trigonal bipyramid molecular geometry that results from five-electron-pair geometry is PCl5. The phosphorus has 5 valence electrons and thus needs 3 more electrons to complete its octet. However, this is an example where five chlorine atoms are present and the octet is expanded. The electron count for the Lewis diagram is as follows:
Cl: 7 e⁻ × 5 = 35 e⁻
P: 5 e⁻
Total = 40 e⁻
The chlorine atoms are as far apart as possible, at roughly 90° and 120° bond angles. This is trigonal bipyramid geometry. Trigonal bipyramid geometry is characterized by 5 electron pairs.

An example of octahedral molecular geometry that results from six-electron-pair geometry is SF6. The sulfur atom has 6 valence electrons. However, this is an example where six fluorine atoms are present and the octet is expanded. The electron count for the Lewis diagram is as follows:
F: 7 e⁻ × 6 = 42 e⁻
S: 6 e⁻
Total = 48 e⁻
The fluorine atoms are as far apart as possible, at roughly 90° bond angles in all directions. This is octahedral geometry. Octahedral geometry is characterized by 6 electron pairs.

(Figures: trigonal planar and trigonal pyramid geometries.)

In chemistry, a trigonal bipyramid formation is a molecular geometry with one atom at the center and 5 more atoms at the corners of a triangular dipyramid. This is one of the few cases where the bond angles surrounding an atom are not identical (see also pentagonal dipyramid), which is simply because there is no geometrical arrangement which can result in five equally sized bond angles in three dimensions. Isomers with a trigonal bipyramidal geometry are able to interconvert through a process known as Berry pseudorotation.
Pseudorotation is similar in concept to the movement of a conformational diastereomer, though no full revolutions are completed. In the process of pseudorotation, two equatorial ligands (both of which have a shorter bond length than the third) "shift" toward the molecule's axis, while the axial ligands simultaneously "shift" toward the equator, creating a constant cyclical movement. Pseudorotation is particularly notable in simple molecules such as PF5.

Constructive solid geometry (CSG) is a technique used in solid modeling. CSG is often, but not always, a procedural modeling technique used in 3D computer graphics and CAD. Constructive solid geometry allows a modeler to create a complex surface or object by using Boolean operators to combine objects. Often CSG presents a model or surface that appears visually complex, but is actually little more than cleverly combined or decombined objects. (In some cases, constructive solid geometry is performed on polygonal meshes, and may or may not be procedural and/or parametric.)

The simplest solid objects used for the representation are called primitives. Typically they are objects of simple shape: cuboids, cylinders, prisms, pyramids, spheres, cones. The set of allowable primitives is limited by each software package. Some software packages allow CSG on curved objects while other packages do not. It is said that an object is constructed from primitives by means of allowable operations, which are typically Boolean operations on sets: union, intersection and difference. A primitive can typically be described by a procedure which accepts some number of parameters; for example, a sphere may be described by the coordinates of its center point, along with a radius value. These primitives can be combined into compound objects using operations like these:

Operations in constructive solid geometry:
• Boolean union: the merger of two objects into one.
• Boolean difference: the subtraction of one object from another.
• Boolean intersection: the portion common to both objects.

Combining these elementary operations, it is possible to build up objects with high complexity starting from simple ones.
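Because CSG operations really are Boolean operations on point sets, a toy version can be written directly with Python sets (the shapes and grid here are illustrative primitives, not a real modeling kernel):

```python
# CSG as Boolean operations on point sets: 2D "solids" as sets of grid points.
disk = {(x, y) for x in range(-10, 11) for y in range(-10, 11)
        if x * x + y * y <= 49}                             # disk of radius 7
box = {(x, y) for x in range(0, 11) for y in range(-3, 4)}  # rectangle

union = disk | box          # Boolean union: merger of the two objects
difference = disk - box     # Boolean difference: disk minus box
intersection = disk & box   # Boolean intersection: portion common to both
print(len(union), len(difference), len(intersection))
```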
The Centriole

The mother centriole plays an instructive role in defining cell geometry. "The centriole is unique among cellular structures in its complexity, chirality, stability, and templated replication, and these features make it an ideal hub around which to organize and propagate particular aspects of cellular geometry."

The Telomere Counting Mechanism

The search for the molecular counting mechanism ended when Calvin Harley and Carol Greider discovered that the telomeres of cultured normal human fibroblasts become shorter each time the cells divide. When telomeres reach a specific short length, they signal the cell to stop dividing. Therefore, cellular aging, as marked by telomere shortening, is not based on the passage of time. Instead, telomere loss measures rounds of DNA replication. For this reason, Hayflick has coined the term "replicometer" for this mechanism.

The Telomere Code

Experimental design and data analysis

In the design of experiments and data analysis, control variables are those variables that are not changed throughout the trials in an experiment, because the experimenter is not interested in the effect of that variable being changed for that particular experiment. In other words, control variables are extraneous factors, possibly affecting the experiment, that are kept constant so as to minimize their effects on the outcome. An example of a control variable in an experiment might be keeping the pressure constant in an experiment designed to test the effects of temperature on bacterial growth.

Control theory

In control theory, control variables are variables that are input to the control system. In a chemical kinetics experiment, for example, reaction rate is the dependent variable, and everything else that can change the reaction rate must be controlled (kept constant) so that you only measure the effects of concentration. Variables that need to be controlled in this case include temperature, catalyst, surface area of solids, and pressures of gases. If not controlled, they complicate the experiment and hence the result.

In programming, a control variable is a program variable that is used to regulate the flow of control of the program. For example, a loop control variable is used to regulate the number of times the body of a program loop is executed; it is incremented (or decremented when counting down) each time the loop body is executed.

The Microtubule Code

(Image: anti-tubulin and DAPI staining.) In interphase, the DNA is neatly contained within the nucleus and is not condensed into chromosomes. Microtubules are radially arrayed from the center of the cell, and if this cell were not fixed and dead, the microtubules would be highly dynamic, shrinking and growing from their ends. When a cell is ready to divide, it will replicate both its DNA and cellular contents and then split into two in the process known as mitosis. The images here come from Xenopus XL-177 cells. Tubulin is shown in green, DNA in blue.

Dynein and kinesin motor proteins transport cellular cargoes toward opposite ends of microtubule tracks. In neurons, microtubules are abundantly decorated with microtubule-associated proteins (MAPs) such as tau. Motor proteins thus encounter MAPs frequently along their path. Dynein tends to reverse direction, whereas kinesin tends to detach, at patches of bound tau. The differential modulation of dynein and kinesin motility suggests that MAPs can spatially regulate the balance of microtubule-dependent axonal transport. Does a "microtubule code" regulate the activity of MAPs?

The microtubule lattice features a series of helical winding patterns which repeat on longitudinal protofilaments at 3, 5, 8, 13, 21 and higher numbers of subunit dimers (tubulins). These particular winding patterns (whose repeat intervals match the Fibonacci series) define attachment sites of the microtubule-associated proteins (MAPs), and are found in simulations of self-localized phonon excitations in microtubules (Samsonovich, 1992). These suggest topological global states in microtubules which may be resistant to local decoherence. Penrose has suggested the Fibonacci patterns on microtubules may be optimal for error correction.

Cylindrical cellular automata

Source: Comm. Math. Phys. Volume 118, Number 4 (1988), 569-590. This paper is concerned with the analysis of one-dimensional cellular automata with periodic boundary conditions. Such an automaton may be viewed as a lattice of sites on a cylinder of specified size n, evolving according to a local interaction rule.

Hexagonal Numbers

A hexagonal number is a polygonal number (the 6-polygonal number) of the form n(2n−1). The first few are 1, 6, 15, 28, 45, ... (Sloane's A000384).
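These are simple to generate and test (a minimal sketch; the helper names are illustrative):

```python
def hexagonal(n):
    """The n-th hexagonal number, n(2n - 1)."""
    return n * (2 * n - 1)

print([hexagonal(n) for n in range(1, 7)])   # [1, 6, 15, 28, 45, 66]

# Every hexagonal number is triangular: n(2n - 1) = (2n - 1)(2n)/2 = T_(2n-1).
triangular = lambda k: k * (k + 1) // 2
assert all(hexagonal(n) == triangular(2 * n - 1) for n in range(1, 1000))
```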
The generating function for the hexagonal numbers is given by x(3x + 1)/(1 − x)^3 = x + 6x^2 + 15x^3 + 28x^4 + ⋯. Every hexagonal number is a triangular number, since n(2n − 1) = (2n − 1)(2n)/2 = T_(2n−1).

In 1830, Legendre (1979) proved that every number larger than 1791 is a sum of four hexagonal numbers, and Duke and Schulze-Pillot (1990) improved this to three hexagonal numbers for every sufficiently large integer. There are exactly 13 positive integers that cannot be represented using four hexagonal numbers, namely 5, 10, 11, 20, 25, 26, 38, 39, 54, 65, 70, 114, and 130 (Sloane's A007527; Guy 1994a). Similarly, there are only two positive integers that cannot be represented using five hexagonal numbers, namely:
11 = 1 + 1 + 1 + 1 + 1 + 6
26 = 1 + 1 + 6 + 6 + 6 + 6.
Every positive integer can be represented using six hexagonal numbers.

SEE ALSO: Figurate Number, Hex Number, Heptagonal Hexagonal Number, Hexagonal Pentagonal Number, Octagonal Hexagonal Number, Triangular Number

Duke, W. and Schulze-Pillot, R. "Representations of Integers by Positive Ternary Quadratic Forms and Equidistribution of Lattice Points on Ellipsoids." Invent. Math. 99, 49-57, 1990.
Guy, R. K. "Every Number Is Expressible as the Sum of How Many Polygonal Numbers?" Amer. Math. Monthly 101, 169-172, 1994a.
Guy, R. K. "Sums of Squares." §C20 in Unsolved Problems in Number Theory, 2nd ed. New York: Springer-Verlag, pp. 136-138, 1994b.
Legendre, A.-M. Théorie des nombres, 4th ed., 2 vols. Paris: A. Blanchard, 1979.
Sloane, N. J. A. Sequences A000384/M4108 and A007527/M3739 in "The On-Line Encyclopedia of Integer Sequences."
Weisstein, Eric W. "Hexagonal Number." From MathWorld--A Wolfram Web Resource.

Cellular Automata and the Game of Life in the Hexagonal Grid

Only one true hexagonal Game of Life has been found. The rule 3/2 supports a glider and also stabilizes. (The rules 3,5/2 and 3,5,6/2 also behave similarly.) The rule (3/2,4,5) barely doesn't qualify, as random patterns never stabilize.

Polyglutamylation and polyglycylation are two posttranslational polymodifications that were initially discovered on tubulin.

Highly desirable features in a network of parallel machines:
• minimal communication cost
• efficient routing
• capability of embedding topological data structures such as:
  • ring
  • linear array
  • tree
  • mesh

Stoichiometry (sometimes called reaction stoichiometry, to distinguish it from composition stoichiometry) is the calculation of quantitative (measurable) relationships of the reactants and products in chemical reactions.

The Krebs Cycle

The Krebs cycle converts pyruvate to CO2, reducing equivalents (NADH and FADH2), and phosphorylated energy (GTP):
2 pyruvate + 2 GDP + 2 H3PO4 + 4 H2O + 2 FAD + 8 NAD+ → 6 CO2 + 2 GTP + 2 FADH2 + 8 NADH

Stoichiometry Matrix, Cellular Automata, Transition Algebras, and the Genetic State Vector

The cellular machinery can be seen as the interaction of these three distinct mathematical objects acting in a loop. The concept of "hypercellular automata", or multilayered cellular automata, has recently been proposed by [Bandini, 1995] and [Bandini et al., 1996], as a particular case of a multilayered automata network. A hierarchical structure is defined through a hypergraph, i.e. a graph composed of vertices and arcs, where each vertex is in turn a hypergraph. The multilayered automata network is directly obtained from this structure by introducing status attributes and transition functions.
Two-level multilayered cellular automata have been developed and employed to model biological systems: the first level constitutes a two-dimensional cellular space (the diffusion space), while at the second level a totally connected graph corresponds to each cell (first-level vertex), generating an intrinsically parallel and local reaction space.

Figure 1. The hypercellular automaton.
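A minimal working example of the kind of automaton quoted above, a one-dimensional cellular automaton on a ring (periodic boundary conditions), here evolving under Wolfram's rule 30 (the rule number and lattice size are illustrative choices):

```python
import numpy as np

def step(cells, rule=30):
    """One update of an elementary cellular automaton on a ring (periodic
    boundary conditions): each cell reads (left, self, right) and looks up
    the corresponding bit of the rule number."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return (rule >> (4 * left + 2 * cells + right)) & 1

cells = np.zeros(31, dtype=int)
cells[15] = 1                       # a single seed cell on the cylinder
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```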
This is not a homework question, just a question I have developed to get a better conceptual understanding of the results of the Schrödinger equation. If I had a 3D spherical container of radius R, containing 2 particles of opposite charge, say a proton and an electron, what does the solution to the resulting Schrödinger equation look like? How does the solution compare to the solution of the Schrödinger equation for a simple hydrogen atom? What happens as R approaches infinity?

1 Answer

The Hamiltonian of this system is simply the sum of the hydrogen-atom Hamiltonian and wall potentials for the two particles: $$ H = \frac{1}{2 m_1} p_1^2 + \frac{1}{2 m_2} p_2^2 - \frac{e^2}{|\mathbf{r}_1-\mathbf{r}_2|} + V^\text{box}_1(r_1)+V^\text{box}_2(r_2), $$ where $V^\text{box}$ are the confining box potentials. For an impenetrable box we can set $V^\text{box}_1(r)=V^\text{box}_2(r)=\infty \cdot \theta(r - R)$, with $\theta$ the Heaviside function. Additionally, if there is a considerable difference between the masses $m_1$ and $m_2$ (as for the masses of the proton and electron), the problem can be essentially reduced to the motion of a single electron while the proton sits at the center of the cavity, giving (after separating the angular variables) the following one-dimensional Schrödinger equation: \begin{equation} \left[ -\frac{d^{2}}{dr^{2}}+\frac{l(l+1)}{r^{2}}-\frac{A}{r}\right] \psi (r)=E\psi (r),~\psi (0)=\psi (R)=0. \end{equation} This problem can be analyzed easily using a variety of methods. The additional degeneracy of the hydrogen atom associated with the conserved Lenz vector usually disappears in this problem; however, for some specific values of $R$ this degeneracy reappears.

There is quite a lot of literature on this problem. The first results go back to 1937: Michels, A., J. De Boer, and A. Bijl. "Remarks concerning molecular interaction and their influence on the polarisability." Physica 4.10 (1937): 981-994.

For an overview of results, let us look at one of the recent papers: Ciftci, H., Hall, R. L., & Saad, N. "Study of a confined hydrogen-like atom by the asymptotic iteration method." International Journal of Quantum Chemistry 109.5 (2009): 931-937. arXiv:0807.4135. From it we learn:

"The concept of a confined quantum system goes back to the early work of Michels et al [1] who studied the properties of an atomic system under very high pressures. They suggested to replace the interaction of the atoms with surrounding atoms by a uniform pressure on a sphere within which the atom is considered to be enclosed. This led them to consider the problem of hydrogen with modified external boundary conditions [2]. Since then, the confined hydrogen atom attracted widespread attention [2]-[33]. Many researchers have carried out accurate calculations of eigenvalues of the confined hydrogen atom using various techniques. Some of these are variational methods [18]-[27], finite element methods [28], and algebraic methods [29]."

The authors then present an analysis of the problem, including exact solutions for some specific values of $R$.

Another approach (originally by Wigner) to the problem of the confined hydrogen atom is to start with free particle(s) in a box and use the Coulomb potential as a perturbation, obtaining an expansion in powers of $e^2$. This method is explained in: Aguilera-Navarro, V. C., W. M. Kloet, and A. H. Zimerman. "Application of the Rayleigh-Schrödinger perturbation theory to the hydrogen atom."
Instituto de Fisica Teorica, Sao Paulo, Brazil, 1971.

This method is useful for small values of $R$; however, the limit $R \to \infty$ presents problems:

"By numerical computation we found that in the perturbation series for the energy the sign of each term (with the exception of the unperturbed energy) is always negative, making it in our opinion improbable that the series is convergent for $R \to \infty$ (in Wigner's paper [1] the possibility is discussed that, although each term in the perturbation series from the third order on in $e^2$ is more and more divergent for $R\to \infty$, the whole series could converge to the actual value as $R\to \infty$)."
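To see the $R$-dependence concretely, here is a minimal finite-difference sketch (my own illustration, not taken from the papers above), in scaled units where $A = 2$ and the unconfined hydrogen ground state sits at $E = -1$:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def confined_hydrogen(R, l=0, A=2.0, N=2000):
    """Lowest eigenvalues of -u'' + [l(l+1)/r^2 - A/r] u = E u on (0, R)
    with u(0) = u(R) = 0, via central finite differences on a uniform grid.
    With A = 2 (scaled Rydberg units) the unconfined ground state is E = -1."""
    h = R / (N + 1)
    r = h * np.arange(1, N + 1)                      # interior grid points
    diag = 2.0 / h**2 + l * (l + 1) / r**2 - A / r
    off = -np.ones(N - 1) / h**2
    E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
    return E

for R in (1.0, 2.0, 5.0, 20.0):
    print(R, confined_hydrogen(R))
```

For small $R$ the ground-state energy is large and positive (box-dominated); as $R$ grows it decreases and approaches the hydrogenic value $-1$.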
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 532610, 6 pages

Research Article

Stable One-Dimensional Periodic Wave in Kerr-Type and Quadratic Nonlinear Media

Department of Constructive and Technological Engineering—Lasers and Fibre Optic Communications, National Institute of R&D for Optoelectronics INOE 2000, 409 Atomistilor Street, P.O. Box MG-5, 077125 Magurele, Ilfov, Romania

Received 6 December 2011; Revised 9 February 2012; Accepted 13 February 2012

Academic Editor: Cristian Toma

We present the propagation of optical beams and the properties of one-dimensional (1D) spatial solitons ("bright" and "dark") in saturated Kerr-type and quadratic nonlinear media. Special attention is paid to recent advances in the theory of soliton stability. We show that the stabilization of bright periodic waves occurs above a certain threshold power level and that dark periodic waves can be destabilized by the saturation of the nonlinear response, while dark quadratic waves turn out to be metastable in a broad range of material parameters. The propagation of a (1+1)-dimensional optical field in saturated Kerr media is described using nonlinear Schrödinger equations. A model for the envelope one-dimensional evolution equation is built up using the Laplace transform.

1. Introduction

Discrete spatial optical solitons have been introduced and studied theoretically as spatially localized modes of periodic optical structures [1]. A standard theoretical approach in the study of discrete spatial optical solitons is based on the derivation of an effective discrete nonlinear Schrödinger equation and the analysis of its stationary localized solutions, the discrete localized modes [1, 2]. Spatial solitons may exist in a broad range of nonlinear materials, such as cubic Kerr, saturable, thermal, reorientational, photorefractive, and quadratic media, as well as in periodic systems. Furthermore, solitons exist in a variety of topologies and dimensions [3]. The theory of spatial optical solitons has been based on the nonlinear Schrödinger (NLS) equation with a cubic nonlinearity, which is exactly integrable by means of the inverse scattering transform (IST) technique. From the physical point of view, the integrable NLS equation describes (1+1)-dimensional beams in a Kerr (cubic) nonlinear medium in the framework of the so-called paraxial approximation [4]. Bright solitons are formed when diffraction or dispersion is compensated by self-focusing nonlinearity, and appear as an intensity hump on a zero background. Solitons which appear as intensity dips on a CW background are called dark solitons [3]. Kerr solitons rely primarily on a physical effect which produces an intensity-dependent change in refractive index [3]. Periodic wave structures play an important role in the nonlinear wave domain: they are at the core of modulation instability development and optical chaos in continuous nonlinear media, and they arise as modes of quasidiscrete and discrete systems in the mechanical and electrical domains. Periodic wave structures are, however, often unstable in the propagation process. For example, photorefractive crystals exhibit relatively high nonlinearity of saturable character at moderate intensities, such as that of a continuous-wave He-Ne laser.
2. Methodology

The propagation of optical radiation in (1+1) dimensions in a saturable Kerr-type medium is described by the nonlinear Schrödinger equation (2.1) for the varying field amplitude [5]. The transverse and longitudinal coordinates are scaled in terms of the characteristic pulse (beam) width and the dispersion (diffraction) length, respectively; the equation contains a saturation parameter and a sign parameter that stands for focusing (defocusing) media [5].

The simplest periodic stationary solutions of (2.1) have a stationary form characterized by the propagation constant. By substituting the field in this form into (2.1), one obtains the stationary equation (2.4). To perform the linear stability analysis of periodic waves in the saturable medium, we use the mathematical formalism initially developed for periodic waves in cubic nonlinear media [5]. We consider an analytic model which uses the Laplace transform of (2.4), equation (2.5), together with the corresponding boundary conditions. From (2.5) we obtain the Laplace transform of the field, (i) in direct form and (ii) in inverse transformation form, where the inversion involves a finite number of poles. Integrating over the real and imaginary poles, we calculate the complex amplitude of the nonlinear equation. For the harmonic case, the integration form of the complex amplitude follows; carrying out the integration, we obtain the total phase of the optical field envelope. We define a frequency as the rate of variation of the total phase, and arrive at the complex amplitude of the envelope field. The hyperbolic secant appears in this expression, resulting in a conservative effect. The longitudinal component follows in the same way. Some numerical simulations of the complex amplitude of the nonlinear equation and of the total phase of the optical field, depending on the propagation constant and an integer number, are illustrated in Figure 1.

Figure 1: Numerical simulations of complex amplitude and phase.

Figure 1 represents the modeled amplitude and phase as functions of the complex total number, illustrating the theoretical model presented. Thanks to the complex model, the initial solution includes the hyperbolic secant and the complex conjugate part.
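The displayed equations did not survive in this copy of the article, so as a purely illustrative stand-in, here is a split-step Fourier sketch of the saturable NLS model used in [5], assuming the standard form i ∂q/∂ξ = −(1/2) ∂²q/∂η² + σ q|q|²/(1 + S|q|²); the grid, step sizes, and parameter values are arbitrary choices, not taken from the article.

```python
import numpy as np

# Split-step Fourier propagation of the (assumed) saturable NLS:
# i q_xi = -(1/2) q_eta_eta + sigma q |q|^2 / (1 + S |q|^2).
N, L = 1024, 40.0
eta = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # transverse spatial frequencies
sigma, S = -1.0, 0.0                            # focusing sign; S > 0 adds saturation
q = 1.0 / np.cosh(eta)                          # sech input beam
dxi = 0.001
for _ in range(5000):                           # propagate to xi = 5
    q = np.fft.ifft(np.exp(-0.5j * k**2 * dxi) * np.fft.fft(q))   # linear (diffraction) step
    q *= np.exp(-1j * sigma * np.abs(q)**2 / (1 + S * np.abs(q)**2) * dxi)  # nonlinear step
print(np.abs(q).max())  # for S = 0 the sech beam is the exact bright soliton: peak stays ~1
```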
3. Conclusions

We have described the propagation of periodic waves in saturated Kerr-type and quadratic nonlinear media. An analytic solution for one-dimensional bright and dark spatial solitons was found. To describe spatial optical solitons in saturated Kerr-type and quadratic nonlinear media, we propose an analytical model based on the Laplace transform. The theoretical model consists in solving the Schrödinger equation for the photonic network analytically using the Laplace transform. The propagation properties were found by using different forms of saturable nonlinearity. An exact analytic solution of the propagation problem presented herein creates possibilities for further theoretical investigation. As a result, it is a useful structure, which yields one-dimensional "bright" and "dark" solitons with transverse structure, as well as transverse one-dimensional periodic waves.

References

1. B. J. Eggleton, C. M. de Sterke, and R. E. Slusher, "Nonlinear pulse propagation in Bragg gratings," Journal of the Optical Society of America B, vol. 14, no. 11, pp. 2980–2993, 1997.
2. F. Lederer, S. Darmanyan, and A. Kobyakov, Spatial Solitons, Springer, Berlin, Germany, 2001.
3. Z. Xu, All-Optical Soliton Control in Photonic Lattices, Master's thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2007.
4. Y. S. Kivshar, "Bright and dark spatial solitons in non-Kerr media," Optical and Quantum Electronics, vol. 30, no. 7–10, pp. 571–614, 1998.
5. Y. V. Kartashov, A. A. Egorov, V. A. Vysloukh, and L. Torner, "Stable one-dimensional periodic waves in Kerr-type saturable and quadratic nonlinear media," Journal of Optics B, vol. 6, no. 5, pp. S279–S287, 2004.
Decoherence and the Quantum to Classical Transition; or Why We Don't See Cats that are Both Dead and Alive

Conflating Science with Pseudoscience

The spreading of misinformation and misconceptions about the quantum world can be lumped into two different categories. The first category consists of people who mean well, who want to advance science and scientific understanding. Maybe they write a book, give public lectures, or create news articles about recent events in quantum science, for example. However, they use misleading analogies, miss essential features, fail to properly address alternatives to a failing orthodoxy, or mischaracterize apparently paradoxical phenomena. As a result, they end up misleading or confusing the general public or their students. Another failure mode within this category is the use of excessive hype. Due to their own passions or the desire to spread the excitement of physics, they mislead about the implications of quantum physics in general. They over-promise when describing the latest incremental step in theoretical or experimental physics; or they mislead about the nature of reality.

The second category is just plain fraudulent: people who deliberately make things up to deceive others for profit. Prominent examples of this include books and talks like the ones by Deepak Chopra, and movies like What the Bleep Do We Know!? Rest assured, there is no such thing as quantum healing. You cannot change your quantum state through your thoughts. Real harm is done by these quacks when, for example, someone forgoes proven medical treatments for pseudoscience.

My contention is that because we do not do enough to mitigate the negative impact of the first category, the fraudulent category is able to spread easily and quickly amidst fertile grounds. The public is susceptible to charlatans peddling pseudoscience and quackery by throwing in sciency-sounding phrases and references to quantum physics that no one (including themselves) understands. Moreover, their claims have no relationship to reality.

There will always be a certain number of people eager to believe whatever pseudoscience or pseudo-religion these hucksters want to sell. But if we want to influence the fraction of the public that is interested in separating fact from fantasy, we need to be clearer and more precise in our own presentations of physics. Moreover, if we want to retain our credibility with the general public as we seek to dispel the drivel these hucksters distribute, we need to make sure we are precise about what QM is and what it is not, what we understand about it and what we do not.

Misconceptions about the Quantum to Classical Transition

Experimental setup for the Schrödinger's cat thought experiment. Image from Wikipedia.

One example that contributes to the confusion is the parable of Schrödinger's cat. A cat, a flask of poison, and a radioactive source are placed in a sealed box (this is a hypothetical thought experiment, of course – no cats were harmed…). If an internal monitor detects a single atom decaying, the flask is shattered, releasing the poison that kills the cat. Naïve application of the Copenhagen interpretation of quantum mechanics leads to the conclusion that the cat is simultaneously dead and alive. Up until it is measured by a conscious observer, the atom is in a superposition of having decayed and not decayed.
And this superposition allegedly extends to the radiation detector, the vial of poison, the hammer to break the vial, the cat, the box, and to you as you wait to open the box.

People trot out Schrödinger's cat whenever they want to tout how strange QM is. "See how weird and paradoxical QM is, how bizarre and unintuitive its predictions, how strange the universe is? Anything is possible with quantum mechanics, even if you don't understand it or I can't explain it." No, quantum mechanics is not an "anything goes" theory. A cat cannot be simultaneously dead and alive, regardless of whether or not we observe it.

References to the role of the observer or of consciousness in determining outcomes contribute to this mess. Even in interpretations of QM that refer to a special role for an observer or a consciousness (interpretations that I believe miss the target of reality), the observer cannot control or manipulate outcomes by choice or thought. He/she is merely triggering an outcome to become reality; the particular outcome that nature chooses is still random. You cannot decide to pick out a different wave function for yourself. Additionally, interpretations of QM that do not have any need for a special role for a conscious observer (and are thus, in my opinion, better approximations of reality) are readily available. See, for example, the Transactional Interpretation.

Isolating the Environment in Classical Physics

In "Decoherence, einselection, and the quantum origins of the classical," Wojciech Zurek had this to say: "The idea that the 'openness' of quantum systems might have anything to do with the transition from quantum to classical was ignored for a very long time, probably because in classical physics problems of fundamental importance were always settled in isolated systems."

For centuries, progress in our understanding of how the world works has been made by isolating the system under study from its environment. In many experiments, the environment is a disturbance that perturbs the system under investigation and contaminates the results of the experiment. The environment can cause unwanted vibrations, friction, heating, cooling, electrical transients, false detections, etc. An isolated system is an idealization where other sources of disturbance have been eliminated as much as possible in order to discover the true underlying nature of the system or physical properties under investigation.

Galileo Galilei is considered by many to be the founding father of the scientific method. By isolating, reducing, or accounting for the secondary effects of the environment (in actual experiments and in thought experiments) he discovered several principles of motion and matter. These principles, such as the fact that material objects fall at the same rate regardless of mass and what they are made of, had been missed or misunderstood by Galileo's predecessors. A famous example is the experiment where Galileo dropped two metal balls of different size, and hence different mass, from the top of a building (supposedly the Leaning Tower of Pisa). Luckily, the effects of air resistance were negligible for both balls, and they hit the ground at roughly the same time. He would not have been able to do the experiment with a feather and a steel ball, for example, because air resistance has a much more dramatic effect on the light feather than on the steel ball.
There is an interesting bit of physics behind why that is the case, but I’ll avoid the temptation to take that detour for now.

During an Apollo 15 moon walk, Commander David Scott performed Galileo’s famous experiment in a live demonstration for the television cameras (see the embedded video below). He used a hammer (1.32 kg) and a feather (0.03 kg; appropriately an eagle feather). He held both out in front of himself and dropped them at the same time. Since there is no atmosphere on the moon (effectively, a vacuum) there was no air resistance, and both objects fell at the same rate. Both objects were observed to undergo the same acceleration and strike the lunar surface simultaneously.

Superposition and Interference: the Nature of Quantum Physics

The situation is quite different in quantum mechanics. First of all, the correlations between two systems can be of fundamental importance and can lead to properties and behaviors that are not present in classical systems. The distinctly non-classical phenomena of superposition, interference, and quantum entanglement are just such features. Additionally, it is impossible to completely isolate a quantum system from its environment.

According to quantum mechanics, any linear combination of possible states also corresponds to a possible state. This is known as the superposition principle. Probability distributions are not the sum of the squared magnitudes of the individual wave function amplitudes. Rather, they are the squared magnitude of the sum of the amplitudes. What this means is that there is interference between possible outcomes. There is a possibility for outcome A and B, in addition to A or B, even though our preconceived notions, based on our classical experiences of everyday life, tell us that A and B should be mutually exclusive outcomes. Superposition and the interference between possible states lead to observable consequences, such as in the double-slit experiment, K-mesons, neutrino oscillations, quantum computers, and SQUIDs.

We do not see superpositions of macroscopic, everyday objects or events. We do not see dead and alive cats. Sometimes, our common sense intuitions can mislead us. But this is not one of those times. The quantum world is more fundamental than the classical world. The classical world emerges from the quantum world. So what happens that makes these quantum behaviors disappear? Why does the world appear classical to us, in spite of its underlying quantum nature?

Coherence, and Then Naturally, Decoherence

Two waves are said to be coherent if they have a constant relative phase. This leads to a stable pattern of interference between the waves. The interference can be constructive (the waves build upon each other producing a wave with a greater amplitude) or destructive (the waves subtract from each other producing a wave with a smaller amplitude, or even vanishing amplitude). Whether the interference is constructive or destructive depends on the relative phase of the two waves. One of the game-changing realizations during the early days of quantum mechanics is that a single particle can interfere with itself. Interference with another particle leads to entanglement, and the fun and fascinating excitement of non-locality.

Decoherence is the Key to the Classical World

The key to a quantum to classical transition is decoherence.
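Here is a minimal numerical sketch of those two ideas (my own illustration, with arbitrary units, not tied to any particular experiment): two coherent paths produce interference fringes, and averaging over random environmental phase kicks, a crude stand-in for decoherence, washes them out.

```python
import numpy as np

phi = np.linspace(0, 4 * np.pi, 200)   # relative phase between the two paths

# Coherent case: amplitudes add, THEN we square: |psi_A + psi_B|^2
coherent = np.abs((1 + np.exp(1j * phi)) / np.sqrt(2)) ** 2

# "Decohered" case: average the intensity over random phase kicks
# imparted by an environment (crude model of dephasing).
rng = np.random.default_rng(0)
kicks = rng.uniform(0, 2 * np.pi, size=(5000, 1))
decohered = np.mean(
    np.abs((1 + np.exp(1j * (phi + kicks))) / np.sqrt(2)) ** 2, axis=0
)

print(coherent.max(), coherent.min())    # ~2 and ~0: strong fringes
print(decohered.max(), decohered.min())  # both ~1: fringes washed out
```

The cross term, proportional to cos(phase difference), is exactly what decoherence destroys; what survives is the classical sum of the two probabilities.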
Maximilian Schlosshauer, in “Decoherence, the measurement problem, and the interpretations of quantum mechanics,” states that “Proponents of decoherence called it an “historical accident” that the implications for quantum mechanics and for the associated foundational problems were overlooked for so long.”

Decoherence provides a dynamical explanation for this transition without an ad hoc addition to the mathematics or processes of quantum mechanics. It is an inevitable consequence of the immersion of a quantum system in its environment. Coherence, or the ordering of the phase angles between particles or systems in a quantum superposition, is disrupted by the environment. Different wave functions in the quantum superposition can no longer interfere with each other. Superposition and entanglement do not disappear, however. They essentially leak into the environment and become impossible to detect.

I typically love the many educational and entertaining short videos by Minute Physics. However, the video below about Schrödinger’s cat is misleading. Well before the cat could enter into a superposition, coherence in the chain of events leading up to its death (or not) has been lost to the environment. The existence of a multiverse is not a logical consequence of the Schrödinger’s cat experiment. Perhaps the muddled correspondence principle of the Copenhagen Interpretation could have been avoided, as well as myths and misconceptions about the role of consciousness and observers, if decoherence had been accounted for from the beginning.

The Measurement Problem

Decoherence occurs because the large number of particles in a macroscopic system are interacting with a large number of microscopic systems (collisions with air molecules, photons from the CMB, a light source, or thermal photons, etc.). Even a small coupling to the environment is sufficient to cause extremely rapid decoherence. Only quantum states that are robust in spite of decoherence have predictable consequences. These are the classical outcomes. The environment, in effect, measures the state of the object and destroys quantum coherence.

So does decoherence solve the measurement problem? Not really, at least not completely. It can tell us why some things appear classical when observed. But it does not explain what exactly a measurement is and how quantum probabilities are chosen. Decoherence by itself cannot be used to derive the Born rule. Additionally, it does not explain the uniqueness of the result of a given measurement. Decoherence never selects a unique outcome.

The Universe and You

The International Space Station (ISS). Image from Wikipedia.

With care, mechanical, acoustic, and even electromagnetic isolation is possible. But isolating a system gravitationally, i.e. from gravitons, is another challenge. In orbit around the Earth, like the Space Shuttle or the International Space Station, you are still in a gravitational field with a flux of gravitons that is not much different from that here on the surface of the Earth. The apparent weightlessness is due to being in a continuous state of free fall (an example of microgravity). Various theories have been developed that use the pervasiveness of gravitons to explain certain aspects of our quantum universe.

So, yes, the atoms and subatomic particles in your body are entangled with the universe. That does not mean that you can do anything about it, or use it to your advantage in any way.
There is no superposition, no coherent relationship between you (1) as a millionaire dating a supermodel and (2) not a millionaire and not dating a supermodel. Sorry about that.

Closing Loopholes in Quantum Mechanics

Violations of Bell’s Inequalities and Loopholes in Quantum Mechanics

Recall that, in 1935, Einstein, Podolsky, and Rosen wrote their famous paper that became known as the EPR paradox. In it, they pointed out the bizarre consequences of the mathematics of quantum mechanics. If two particles were in an entangled state, then measurement on one of the particles would immediately affect the results of a measurement on the other particle, even if the two particles were arbitrarily far apart at the time of the measurements. This non-locality was later called “spooky action at a distance” by Einstein.

In the 1960s, John Bell came up with a set of inequalities that quantified the disagreement between the predictions of quantum mechanics and those of a purely local theory (i.e. one that assumed the distant measurement could not affect the local measurement). Since then, violations of these inequalities have been experimentally verified on numerous occasions. Thus, the inescapable conclusion is that nature does make use of non-locality, somehow. However, this conclusion is based on the assumption that nothing else unusual or unexpected is happening during the experiment.

Scrutinizing Loopholes in Observed Violations of Bell’s Inequalities

Many different variations of the experiments have been done. See, for example, my discussion at Quantum Weirdness: The unbridled ability of quantum physics to shock us. Many other types of experiments have also been done. In some of these experiments, the violation is more dramatic – not just a matter of the frequency of apparently correlated outcomes. These experiments are go or no-go; they are designed to look for an event that would not happen under a purely local theory. See Do We Really Understand Quantum Mechanics? or Do we really understand quantum mechanics? Strange correlations, paradoxes, and theorems for more in-depth discussions.

Given that the implication of these experiments is so profound, scientists have gone to great lengths to ensure that there is not some more benign, classical, local, or deterministic explanation that has been missed. One possibility is that, since we do not detect every photon due to limitations in detector efficiency, we are detecting a special subset of events. Another possibility is that the detector settings are not actually independent or random. Typically, detector settings are chosen randomly; for example, by a quantum random number generator. But if there were even some slight correlations between the choice of detector settings and some sort of local hidden variables in the system being tested, then the observed violations of Bell’s inequality could be explained without resorting to non-locality.

Closing the Settings-Independence Loophole

Physicists at the Kavli Institute for Cosmological Physics in Chicago, and at MIT, have come up with a brilliant (and FUN) way to avoid the settings-independence loophole and also potentially further quantify non-locality. See their paper Testing Bell’s Inequality with Cosmic Photons: Closing the Settings-Independence Loophole.
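To see concretely what a Bell-inequality violation looks like numerically, here is a minimal sketch (my own illustration using the standard CHSH setup, not code from the paper):

```python
import numpy as np

def E(a, b):
    # Quantum-mechanical correlation for spin measurements along
    # directions a and b on a pair of spin-1/2 particles in the
    # singlet state: E(a, b) = -cos(a - b).
    return -np.cos(a - b)

# Angle choices that maximize the CHSH combination (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2) > 2, the local-hidden-variable bound
```

Any local hidden variables theory of the kind Bell considered must keep |S| at or below 2; quantum mechanics predicts, and experiments confirm, values up to 2√2.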
Fig. 1 from their paper: Schematic of the proposed “Cosmic Bell” experiment. Cosmic sources are used to determine detector settings in a Bell-type experiment.

Their idea is to use distant quasars or the Cosmic Microwave Background (CMB) to determine detector settings. They would choose two distant quasars in opposite regions of the sky, or two separate patches of the CMB with sufficient angular separation. Photons from these sources would be coming from events whose past light cones do not overlap. These photons would then be used to determine the detector settings.

This experiment will close the settings-independence loophole (assuming the results remain consistent with QM and non-locality!). If something unexpected is seen, it will enable mapping non-local correlations as a function of the overlap between the light cones of the two independent photon sources.

Of course, the experiment will not be without some challenges. The authors refer to a potential “noise loophole”. They have to ensure that the cosmic photon detectors are not triggered by more local sources of photons, such as light pollution, scattered starlight, zodiacal light, etc. They also need to account for the impact of the intergalactic medium and Earth’s atmosphere on the cosmic photons. It will be interesting to see where this leads in the coming years!

Fun with Quantum Computing at University of Bristol

Physics is Fun, at University of Bristol

Run your own quantum computing experiments

Have fun with quantum physics and quantum computing! Gain practical experience using the resources offered by the University of Bristol: Qcloud. Test out your quantum experiments in their online quantum processor simulator, which includes reference material and a user’s guide. Then, you can (starting 20 September) register and run your experiment in their lab: “create and manipulate your own qubits and measure the quantum phenomena of superposition and entanglement.”

Quantum Weirdness: The unbridled ability of quantum physics to shock us

Double slit experiments and the root of quantum weirdness

You are probably familiar with the legendary double slit experiment. It is a simple, straightforward experiment that introduces the wave-particle duality and the weirdness that is at the foundation of quantum mechanics. Many people read or hear about the generic double slit experiment and assume that it is the end of the story. Worse, many intrinsically curious people erroneously assume that this phenomenon is understood. Or, they believe that they have a benign explanation for a particular experimental result. Hence, they don’t dig deeper into the challenge and excitement that can be found in the quantum world. Digging through the layers of experimental results helps ensure we are questioning the implications of our hypotheses, and testing whether they remain consistent and valid.

In the typical double slit experiment, a beam of light is directed at a pair of slits in a wall. The resultant interference pattern is then observed on a screen or some other sort of detector. You can see a five-minute introduction to the double slit experiment by Dr. Quantum: “Double Slit Experiment”. Photons are typically used, although according to de Broglie, and as confirmed by Davisson and Germer, electrons or any other particle will work. Using photons greatly simplifies the technical difficulties and cost of the experiment.
The experimental apparatus consists of a coherent light source, a wall with two narrow slits (the width and separation of the slits are comparable to the wavelength of the light), and a detection screen behind the wall. A narrow beam of light strikes the pair of slits. The pattern on the detection screen is an interference pattern. So what, light is a wave. Or, at least it acts that way sometimes. We can easily calculate the details of the interference pattern using the dimensions of the experimental setup and the frequency of the light. The two different paths (source – slit A – screen, or source – slit B – screen) have different path lengths. Hence, the two interfering waves have different phases and can add constructively or destructively.

Doubling the dosage of quantum weirdness

The rub comes in when you dial down the intensity of the beam of light so that only one photon at a time is passing through the slits. One photon can’t pass through both slits, right? It can’t interfere with itself, right? Well, the interference pattern is still there. Quantum mechanics tells us that the photon is in a superposition of states. See Feynman’s QED: The Strange Theory of Light and Matter (Princeton Science Library) or Cox and Forshaw’s The Quantum Universe: (And Why Anything That Can Happen, Does) for excellent conceptual descriptions of how to visualize what the particle is up to.

I can contemplate how a single “particle” can act like a wave sometimes, a particle other times, or some combination thereof. I can do that without feeling like I am losing my grip on reality. But here is where things start to get really weird. If you modify the above experiment so that you can tell which slit the photon traveled through, the interference pattern goes away. It is as if each photon realizes it is being watched and stops its shenanigans. But how do they “know” they are being watched? How do they know that it is time for “wave function collapse”? In wave function collapse, the photon discards the superposition of states and selects a specific value or path (a specific eigenstate).

The quantum measurement problem

This is the crux of the quantum measurement problem: Why are there two different processes describing the evolution of a particle’s wave function? These two processes are (1) the continuous evolution described by Schrödinger’s equation, and (2) the spontaneous collapse into a specific eigenstate. What triggers the collapse? Why can’t we observe a superposition or the collapse process? How can a wave function that is spread out over arbitrary distances collapse seemingly instantaneously? Some scientists have argued that wave function collapse is triggered by an interaction between the observer and the photon, or between the measurement apparatus and the photon. For example, the momentum that is imparted to the photon by the act of measurement may break the superposition. The problem with this argument, however, is that clever experimentalists have already devised and carried out experiments using interaction-free measurements.

Now for some truly bizarre quantum weirdness

If you have followed the discussion so far, it is still not safe to unfasten your seatbelt and start walking about the cabin. This is the point where I start to feel my grip on reality slipping away.
Particles are doing something we completely do not understand, in apparent response to some trigger we completely do not understand. Consider delayed choice experiments and quantum eraser experiments. In delayed choice experiments, the decision to determine which path the photon used is made well after the photon has passed the slits*. Yet, the results seem to indicate that the photon has retroactively collapsed its wave function and chosen a single path. If you mark through which slit each photon went, the interference pattern is destroyed. This happens even if you mark the path without disturbing the photon’s movement. In quantum eraser experiments, photons are tagged based on which path they took. By itself, that tagging destroys the interference pattern. However, when this path information is discarded (erased), the interference pattern is restored.

This raises the following questions: What happens to the discarded part of the wave function when wave function collapse occurs? Some people argue that the wave function is not real, it just encodes our knowledge of the situation. But when the tagging is erased, how does the quantum system “know” what superposition to re-enter?

“Delayed choice quantum eraser” experiments combine all of the absurdities of the above experiments. This experiment is arranged to identify which one of the paths the photon uses. And, this information can be erased after the fact. See the adjacent figure from Wikipedia. Photons are emitted one at a time from the lower-left and then subjected to a 50% beam splitter. After the beam splitter (green block), photons travel along two possible paths, the red or blue lines. Reminds me of a Tokyo subway map. In the top diagram, the trajectory of each photon is known. If a photon emerges at the top of the apparatus, it appears to have come by the blue path. If it emerges at the side of the apparatus, it appears to have come by the red path. As shown in the bottom diagram, a second beam splitter is introduced at the top right. This can direct either beam towards either path, erasing the which-path information. So, the decision whether or not to remove the path information is made after the fact. If you remove the path information, the interference pattern is restored. The photon appears to recover its superposition properties.

Mind-blowing quantum paradoxes

Indeed, quantum physics is rich with paradoxes and non-intuitive behaviors. While contemplating a certain experiment, it is important to ensure you have a complete picture of what the theorists and experimentalists are trying to tell us. By merely considering one particular experiment, it is possible to convince yourself that you understand what is happening. But other experiments may contradict or invalidate your conceptual line of reasoning. It seems to me that there is some deep, underlying concept or unifying principle that we are missing. Some key piece of the puzzle that will show us that superposition and entanglement are fundamental, and apparently not constrained by space and time. There must be some (comprehensible) reason for why matter behaves like this.

* Many experiments use alternative path-separation devices, such as mirrors or beam splitters. Additionally, different techniques have been used to “tag” specific paths or to detect the photons. The math and the concepts are the same in these various setups.
Different arrangements help to clarify the results and resolve concerns over subtle technical issues.

“The supreme task of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them.” – Albert Einstein

Contrary to Popular Belief, Einstein Was Not Mistaken About Quantum Mechanics

Misconceptions and assumptions concerning quantum mechanics

I get somewhat frustrated every time I read another blog post, book review, or journal article that claims Einstein was wrong about quantum mechanics (QM). It must make for good headlines and is almost cliché. First, these articles often give the misleading impression that Einstein was the only physicist who had concerns with quantum mechanics during its development and exposition. That simply is not true. Many physicists (Schrödinger, de Broglie, Podolsky, Rosen, and several other major figures) had concerns. Additionally, the relatively small fraction of physicists that are active today in the foundations and interpretations of quantum mechanics continue to debate the meaning, the implications, and the completeness of the theory with great vigor. There is not yet a general consensus among experts as to the answers to some of the most fundamental questions about the implications of quantum theory in its present form.

For decades, there has been a common misconception among many physicists that the conceptual problems with QM were already resolved or that any remaining questions were purely philosophical. Contributing to this state of affairs, many textbooks focused solely on the computational aspects. If interpretations or foundations were discussed at all, the focal point was on the Copenhagen interpretation. There was little or no discussion of other viable formulations, and the solutions to conceptual problems that these formulations offered. The prevailing interpretation of QM does not give a clear answer to the question “what, if anything, is objective reality?” Some alternatives, such as de Broglie-Bohm mechanics, do. According to de Broglie-Bohm mechanics, particles are objective point-like objects with deterministic trajectories. These trajectories are guided by wave functions, which also objectively exist.

Alternatives to conventional quantum mechanics

I am not at all claiming that de Broglie-Bohm mechanics in its current form is the final word. And I am not claiming that we need to immediately replace our existing paradigm with it, without further consideration or modification. However, de Broglie-Bohm mechanics has not been properly vetted by generations of physicists. I think failure to fully consider and evaluate such approaches may be blinding us to the way ahead. The prevailing, fractured conceptual understanding of QM may be holding us back from making the next theoretical and technical leap in our quest to understand the universe.

The venerable John S. Bell had this to say about de Broglie’s pilot wave theory and Bohmian mechanics (see Speakable and Unspeakable in Quantum Mechanics): “In 1952 I saw the impossible done. It was in papers by David Bohm. Bohm showed explicitly how parameters could indeed be introduced, into nonrelativistic wave mechanics, with the help of which the indeterministic description could be transformed into a deterministic one.
More importantly, in my opinion, the subjectivity of the orthodox version, the necessary reference to the “observer,” could be eliminated. … But why then had Born not told me of this “pilot wave”? If only to point out what was wrong with it? … Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show us that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?”

EPR and quantum entanglement

The famous “EPR paper”, so named due to its authorship: A. Einstein, B. Podolsky, and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”, laid out some of Einstein’s main concerns. These included the lack of an objective physical reality, in which deterministic properties of observables exist regardless of measurement, and nonlocality, in which a measurement process carried out on one of a pair of entangled particles can seemingly affect the other particle’s properties, instantaneously and without regard to distance. Einstein continued to voice his objection to this fundamental property of quantum mechanics: “because it cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance.” (Max Born, ed., The Born-Einstein Letters: Friendship, Politics and Physics in Uncertain Times (Macmillan, 1971), p. 178).

After Einstein’s death, the phenomenal John Bell figured out how to quantify the “spooky” part of the intrinsically probabilistic behavior of a pair of entangled particles. See his papers in Speakable and Unspeakable in Quantum Mechanics. Years later, experimentalists such as Freedman, Clauser, and Aspect confirmed that Nature really does make use of this spooky action at a distance, or nonlocality. But to what end?

Although nonlocality has subsequently been confirmed experimentally, it is ludicrous to criticize Einstein for his concerns about a theory that included it. It would be a sad day for science if such a huge paradigm shift swept over the community without raising a few hairs. Additionally, physicists still do not understand how the nonlocality is achieved, nor its implications.

The quantum measurement problem

A related issue is wave function collapse and the “measurement” problem. The measurement problem manifests itself in the fact that there are two rules for how a quantum state evolves in time. The Schrödinger equation tells us how the wave function (or more generally, the state vector) evolves in time when a quantum system is not being “observed” or “measured”. With the Schrödinger equation, you can calculate the probabilities for possible outcomes of different measurements, and how those probabilities change over time. This evolution of the state vector while no one is looking is continuous. However, instantaneous collapse of the state vector into a particular eigenstate occurs upon measurement. Why the discontinuity in the descriptions of the two processes? What constitutes a measurement? What are the dynamics of wave function collapse? Does this mean that wave functions (or state vectors) are approximations to some more complete description of quantum systems?

The collapse postulate is ad hoc, based on the fact that we never observe superpositions of quantum states.
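Schematically (my own compact notation, added for illustration), the two rules are:

$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \qquad \text{(continuous, unitary evolution between measurements)}$$

$$|\psi\rangle \;\to\; \frac{\hat{P}_k\,|\psi\rangle}{\sqrt{p_k}}, \qquad p_k = \langle\psi|\hat{P}_k|\psi\rangle \qquad \text{(abrupt, non-unitary collapse upon measuring outcome } k\text{)}$$

where $\hat{P}_k$ projects onto the states consistent with outcome $k$, and $p_k$ is the Born-rule probability.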
The core of the measurement problem is the inability of QM to explain the abrupt transition from linear evolution of the wave function to non-unitary wave function collapse. Steven Weinberg summarizes it thusly: “during measurement the state vector of the microscopic system collapses in a probabilistic way to one of a number of classical states, in a way that is unexplained and cannot be described by the time-dependent Schrödinger equation.”

So, objective reality is not understood, nonlocality is not understood, wave function collapse is not understood. We could go on. My impression, based on trends in the literature, is that more and more of the community of physicists is recognizing the holes that remain in our conceptual understanding of the quantum world. As more and more theoretical and experimental physicists struggle with these issues, perhaps we will get closer to a breakthrough.

Additional material

Here is a YouTube video with a quick introduction to entanglement: Quantum Entanglement – The Weirdness Of Quantum Mechanics. And a ScienceDaily article on quantum entanglement, including links to additional background information on quantum mechanics.

Comrade on the quest

To my delight, just as I finished writing and editing this post, I found the following article on the electronic preprint archive, submitted today by Pablo Echenique-Robba, who apparently shares many of my views on the current state of QM:

Title: Shut up and let me think! Or why you should work on the foundations of quantum mechanics as much as you please.

Abstract: If you have a restless intellect, it is very likely that you have played at some point with the idea of investigating the meaning and conceptual foundations of quantum mechanics. It is also probable (albeit not certain) that your intentions have been stopped on their tracks by an encounter with some version of the “Shut up and calculate!” command. You may have heard that everything is already understood. That understanding is not your job. Or, if it is, it is either impossible or very difficult. Maybe somebody explained to you that physics is concerned with “hows” and not with “whys”; that whys are the business of “philosophy” — you know, that dirty word. That what you call “understanding” is just being Newtonian; which of course you cannot ask quantum mechanics to be. Perhaps they also complemented these useful advices with some norms: The important thing a theory must do is predict; a theory must only talk about measurable quantities. It may also be the case that you almost asked “OK, and why is that?”, but you finally bit your tongue. If you persisted in your intentions and the debate got a little heated up, it is even possible that it was suggested that you suffered of some type of moral or epistemic weakness that tend to disappear as you grow up. Maybe you received some job advice such as “Don’t work in that if you ever want to own a house”. I have certainly met all these objections in my short career, and I think that they are all just wrong. In this somewhat personal document, I try to defend that making sense of quantum mechanics is an exciting, challenging, important and open scientific endeavor. I do this by compulsively quoting Feynman (and others), and I provide some arguments that you might want to use the next time you confront the mentioned “opinions”.
By analogy with the anti-rationalistic Copenhagen command, all the arguments are subsumed in a standard answer to it: “Shut up and let me think!”

The Folly of Physics: Interpretations of Quantum Physics, Part 1

The issue

With this post, I begin to lay out some concerns that I have with descriptions and interpretations of quantum physics. We still do not have a conceptual understanding of what the heck is going on in quantum mechanical processes. Albert Einstein took issue with several aspects of quantum theory: the inherent randomness, the nonlocality, and the lack of realism, for example. We may need to accept these aspects of nature, but is it asking too much to be able to understand how/why/what the universe is doing in these situations?

Quantum physics is a fickle mistress

Quantum Mechanics (QM) is perhaps one of the most successful hypotheses in the history of physics. That is, if you evaluate success based on agreement with experiment and the ability to make predictions that are later confirmed by experiment. And, quite frankly, that is (quite appropriately) how science judges hypotheses and theories. Thousands of experiments have been performed, verifying the accuracy and relevance of QM. These experiments include emission or absorption spectra predictions and measurements, magnetic moment predictions and measurements, and multiple variations of the double slit experiment, to name a few. Physicists, chemists, and engineers have subjected matter to all kinds of bizarre tests that have validated the theory’s unintuitive predictions. Additionally, QM is not just a theoretical curiosity. The range of technologies based on it is staggering. Without QM, we would not have PCs, iPads, smartphones, laptops, modern TVs, modern medical imaging equipment, the microchips that control everything from our cars to our refrigerators, and so on.

Yet, after all this, we still do not understand HOW quantum physics works. Even though the theory is a century old, we are far from a proper conceptual understanding of what it actually means and HOW the universe pulls off this behavior. How does a particle manage to take every possible path? How does a wave function seem to collapse, essentially instantaneously, across arbitrary distances? How do entangled particles influence each other, seemingly without regard to time and space? Why do certain quantities have to be quantized, rather than continuous? These difficulties are related to the fact that a complex-valued state vector is used to describe a physical system. So another way to ask these questions is, “why are complex quantities and a state vector required to quantitatively describe behavior at the quantum level?”

Why is it so hard to visualize quantum physics?

We can visualize general relativity (GR). It is understood as the interplay between matter and spacetime. Apart from some warping and dilation, GR makes intuitive sense. There is a speed limit and strict enforcement of locality. With some mathematics, we can readily convince ourselves that causality is safe. Electromagnetic and nuclear interactions are described mathematically and conceptually as due to the exchange of particles called bosons. These particles (photons for electromagnetism, gluons for the strong nuclear force, W and Z bosons for the weak nuclear force) account for the transfer of momentum and energy between fermions (i.e. quarks, electrons, protons, etc.).
They also account for the transfer of conserved quantum numbers. In none of these fundamental forces do we have “spooky action at a distance”, or nonlocality. We do have virtual particles, which is another story and takes some time to get used to. But at least, even then, we have a picture in our heads of what is going on.

In quantum physics, the state vector is not a physical description of the system. And the evolution of the state vector seems to occur in two distinct phases. First, a continuous evolution of the state vector occurs as the system evolves in time and space (described, for example, by the Schrödinger equation). Then, there is an abrupt and discontinuous collapse of the state vector into a particular eigenstate, the dynamics of which are not understood. A common misconception is that this state vector collapse is caused by the interaction with the measurement device. But clever, interaction-free measurement processes have been devised. The collapse of the wave function has been verified in situations where interactions play no role in the measurement. At least no known interactions.

Why should we care about conceptualizing quantum physics?

Given that QM works so well, and (so far) no experiments have contradicted it, why should we care how it is interpreted? QM does not provide a physical description of a process. The old adage is to just “shut up and calculate” (David Mermin). However, this lack of a conceptual understanding may be what is holding physicists back from uniting the two pillars of modern physics, QM and GR. It may be the key to understanding many of the most fundamental and provocative questions physicists are struggling with:

• How do we unite the two theoretical paradigms of modern physics, QM and GR?
• What happened at (and before) the origin of the universe?
• What will be the ultimate fate of the universe?
• What is driving the apparent acceleration of the universe’s expansion (i.e. what is dark energy)?
• What happens inside a black hole?
• What is time?

It may also be necessary for the next great leap in technology, such as quantum computing. Besides all that, I just want to know. Is that asking too much?

Competing interpretations of quantum physics

At least to some extent, I think QM has been a victim of its own success. Since it has worked so well, there is little motivation to fix it. Additionally, it is very difficult to distinguish between the predictions of some of these different interpretations. So it will be difficult to experimentally validate the correct interpretation, at least for some time.

Many different interpretations have been offered up over the years. I will dig into some of these in later posts. To name a few (Frank Laloë, “Do We Really Understand Quantum Mechanics?”):

• Statistical interpretation
• Relational interpretation
• Logical, or algebraic approach
• Veiled reality
• Additional (hidden) variables
• Modified Schrödinger dynamics
• Transactional interpretation
• History interpretation
• Everett interpretation
• Modal interpretation

One of my favorites is the de Broglie-Bohm pilot wave interpretation. In this model, a particle’s motion is determined by a wave. Hence, you can reproduce both particle and wave-like behaviors and the predictions of generic QM. However, there are some issues with de Broglie-Bohm theory. These include things like relativistic invariance and the dynamics for how the particle and wave influence each other. “We’ll talk about that later”.
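To make “motion determined by a wave” concrete, here is the standard textbook statement of the model (added by me for illustration): writing the wave function in polar form $\psi = R\,e^{iS/\hbar}$, each particle follows the deterministic trajectory

$$\frac{d\mathbf{Q}}{dt} = \frac{\nabla S(\mathbf{Q},t)}{m} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{\mathbf{x}=\mathbf{Q}(t)},$$

while $\psi$ itself evolves under the ordinary Schrödinger equation; the wave literally pilots the particle.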
Unfortunately, alternative explanations have not been given full and proper consideration over the past 86 years (since the 1927 Solvay Conference and the birth of the Copenhagen interpretation). Some alternatives have been appropriately disproven. However, others have just been overridden or ignored. A common theme is that someone publishes a paper “showing” that some interpretation is not workable. Later, someone else shows how that paper was in error. People remember the first paper and continue to assume that a certain idea is untenable. Various interpretations are confused with each other, or assumptions are confused with conclusions. Scientists are erroneously led to believe that a particular approach is not valid.

These myths are perpetuated in textbooks and lectures. One example is de Broglie’s hidden variables theory, which was relegated to the scrap heap after the 1927 Solvay Conference. It was resurrected after David Bohm developed his theory in the 1950s, and the similarities between it and de Broglie’s earlier work were noticed. Another example: experiments confirming the violation of Bell’s inequality, and hence confirming the concerns of the famous EPR paper (Einstein, Podolsky, Rosen), are often cited as confirming that hidden variables theories are unworkable. They actually show that hidden variables theories cannot sidestep the apparent nonlocality, not that they are altogether unviable.

De Broglie-Bohm mechanics at work?

Take a look at this amazing video from the Science Channel’s Through the Wormhole, on wave/particle dynamics with silicone oil droplets. It shows how the results of the double slit experiment can be reproduced by a silicone droplet (the “particle”) riding on an actual, physical “wave”. There are a lot of details that go into this experiment, including how the apparatus works and how it is filmed. So it is definitely not a proof of de Broglie-Bohm mechanics. However, it is intriguing, and offers an irresistible visualization that begs further investigation.

I think a significant factor in our failure to develop a consistent and deep conceptual understanding of quantum physics is rooted in the dogmatic presentation of the prevailing interpretation. For decades, up-and-coming physicists have been indoctrinated in the “Copenhagen interpretation”. Presented with the implicit assumption that interpretation questions are settled, many students don’t dig deeper. The development of a proper, conceptual understanding has been further hobbled by misconceptions that are perpetuated through textbooks and instructors. Students either assume the issue is resolved and look elsewhere for research opportunities, or they are discouraged by their advisors and forced to conform to the availability of funding and job opportunities.

I recall being confused and frustrated, as an undergrad physics student, when the explanations in the textbook or provided by the professor just did not make sense and did not seem to be consistent with what the mathematics implied. For example, the meaning and implication of the uncertainty principle are often explained as being due to the unavoidable transfer of momentum to the observed particle during a measurement. However, experiments have been done that show this is not the case. Moreover, it is an intrinsic property of the mathematics, in which the position-space and momentum-space wave functions are Fourier transforms of each other, like time and frequency in everyday applications of Fourier theory to acoustic or electromagnetic signals.
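That last point is easy to check numerically. Here is a short sketch (my own illustration, in arbitrary units): make a Gaussian wave packet narrower in position, and its Fourier transform, the momentum-space amplitude, necessarily gets wider, with the product of spreads pinned at the Gaussian minimum of 1/2.

```python
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

for sigma in (0.5, 1.0, 2.0):
    psi = np.exp(-x**2 / (4 * sigma**2))         # Gaussian packet, position spread sigma
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

    phi = np.fft.fftshift(np.fft.fft(psi)) * dx  # momentum-space (k-space) amplitude
    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    prob_k = np.abs(phi)**2
    prob_k /= np.sum(prob_k) * (k[1] - k[0])

    dx_rms = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # rms spread in x
    dk_rms = np.sqrt(np.sum(k**2 * prob_k) * (k[1] - k[0]))  # rms spread in k
    print(f"sigma={sigma}: dx*dk = {dx_rms * dk_rms:.3f}")  # ~0.5 every time
```

No measurement disturbance appears anywhere in this calculation; the trade-off is built into the Fourier relationship itself.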
In the weeks and months ahead, I will expand on the specific points brought up in this article. It is entirely possible that Nature really is unknowable. The Universe probably does not feel compelled to satisfy my desire for a visual, comprehensible model, unless it already intended to do that anyway.

Welcome to “The Fun is Real!” (Fun with Physics, that is)

Welcome to “The Fun Is Real”, a new blog that will explore wonders and mysteries of physics. In particular, I am interested in the questions that are not yet understood. These questions may be due to new experimental evidence that outpaces the theorists, like dark matter, dark energy, neutrino anomalies, etc. Or they may be areas where the theory works, but we don’t have a conceptual understanding of how/why the universe does what it does. One example of this is quantum physics and quantum non-locality.

The predictions of quantum mechanics have been confirmed, time and time again, by experimentalists, with greater precision than any other theory in the history of physics. In the history of science, for that matter! Additionally, the engineering breakthroughs that have created our information society, and the current trajectory of our technology, are dependent upon quantum mechanics. Yet, we do not understand how the universe pulls off some of the tricks inherent in quantum physics. We don’t understand why certain things are quantized. And entangled particles seem to be able to affect each other over arbitrary distances, without regard to time. I will expand more on these issues in future blogs. I also invite your inputs and ideas on the discussions.

In addition to quantum non-locality, examples of other areas that you will see discussed here in the coming months include:

(1) Given that a charged particle undergoing acceleration gives off electromagnetic radiation (i.e. emits photons), and a gravitational field is equivalent to acceleration, why don’t charged particles emit photons simply due to being in a gravitational field? Or do they?
(2) Would time exist if there were no matter?
(3) Why does the universe insist upon the use of “imaginary”, or complex, numbers to communicate its behavior?
(4) Where does inertia come from, and why does gravitational mass appear to be the same as inertial mass?

I don’t accept anthropic explanations. That is, I don’t accept as adequate an argument that states “we would not be here if it were not so”. That does not contribute to our understanding of the how/why of the universe. I also don’t accept “the theory has to be that way to be consistent with the evidence”. I want to understand. I want to know why. I want to know how. I want to know how a particle can impact measurements done on its entangled partner, in apparent violation of locality and the speed of light; not just how to do the calculations.

This is a new website. I am trying to make it interesting and accessible. Let me know if you see problems or if you have ideas to make it better. Remember, the physics may be theoretical, but “The Fun Is Real”.
Quantum cascade laser

Quantum cascade lasers (QCLs) are semiconductor lasers that emit in the mid- to far-infrared portion of the electromagnetic spectrum and were first demonstrated by Jerome Faist, Federico Capasso, Deborah Sivco, Carlo Sirtori, Albert Hutchinson, and Alfred Cho at Bell Laboratories in 1994.[1]

Unlike typical interband semiconductor lasers that emit electromagnetic radiation through the recombination of electron–hole pairs across the material band gap, QCLs are unipolar, and laser emission is achieved through the use of intersubband transitions in a repeated stack of semiconductor multiple quantum well heterostructures, an idea first proposed in the paper "Possibility of amplification of electromagnetic waves in a semiconductor with a superlattice" by R.F. Kazarinov and R.A. Suris in 1971.[2]

Intersubband vs. interband transitions

Interband transitions in conventional semiconductor lasers emit a single photon.

Within a bulk semiconductor crystal, electrons may occupy states in one of two continuous energy bands: the valence band, which is heavily populated with low energy electrons, and the conduction band, which is sparsely populated with high energy electrons. The two energy bands are separated by an energy band gap in which there are no permitted states available for electrons to occupy. Conventional semiconductor laser diodes generate light by a single photon being emitted when a high energy electron in the conduction band recombines with a hole in the valence band. The energy of the photon, and hence the emission wavelength of laser diodes, is therefore determined by the band gap of the material system used.

A QCL, however, does not use bulk semiconductor materials in its optically active region. Instead it consists of a periodic series of thin layers of varying material composition forming a superlattice. The superlattice introduces a varying electric potential across the length of the device, meaning that there is a varying probability of electrons occupying different positions over the length of the device. This is referred to as one-dimensional multiple quantum well confinement and leads to the splitting of the band of permitted energies into a number of discrete electronic subbands. By suitable design of the layer thicknesses it is possible to engineer a population inversion between two subbands in the system, which is required in order to achieve laser emission. Because the position of the energy levels in the system is primarily determined by the layer thicknesses and not the material, it is possible to tune the emission wavelength of QCLs over a wide range in the same material system.

In quantum cascade structures, electrons undergo intersubband transitions and photons are emitted. The electrons tunnel to the next period of the structure and the process repeats.

Additionally, in semiconductor laser diodes, electrons and holes are annihilated after recombining across the band gap and can play no further part in photon generation. However, in a unipolar QCL, once an electron has undergone an intersubband transition and emitted a photon in one period of the superlattice, it can tunnel into the next period of the structure where another photon can be emitted.
This process of a single electron causing the emission of multiple photons as it traverses through the QCL structure gives rise to the name cascade and makes a quantum efficiency of greater than unity possible, which leads to higher output powers than semiconductor laser diodes.

Operating principles

Rate equations

Subband populations are determined by the intersubband scattering rates and the injection/extraction current.

QCLs are typically based upon a three-level system. Assuming the formation of the wavefunctions is a fast process compared to the scattering between states, the time independent solutions to the Schrödinger equation may be applied and the system can be modelled using rate equations. Each subband contains a number of electrons $n_i$ (where $i$ is the subband index) which scatter between levels with a lifetime $\tau_{if}$ (reciprocal of the average intersubband scattering rate $W_{if}$), where $i$ and $f$ are the initial and final subband indices. Assuming that no other subbands are populated, the rate equations for the three-level lasers are given by:

$$\frac{dn_3}{dt} = I_{\rm in} + \frac{n_2}{\tau_{23}} - \frac{n_3}{\tau_{32}} - \frac{n_3}{\tau_{31}}$$

$$\frac{dn_2}{dt} = \frac{n_3}{\tau_{32}} + \frac{n_1}{\tau_{12}} - \frac{n_2}{\tau_{23}} - \frac{n_2}{\tau_{21}}$$

$$\frac{dn_1}{dt} = \frac{n_3}{\tau_{31}} + \frac{n_2}{\tau_{21}} - \frac{n_1}{\tau_{12}} - I_{\rm out}$$

In the steady state, the time derivatives are equal to zero and $I_{\rm in} = I_{\rm out} = I$. The general rate equation for electrons in subband $i$ of an $N$-level system is therefore:

$$\frac{dn_i}{dt} = \sum_{j \neq i} \frac{n_j}{\tau_{ji}} - n_i \sum_{j \neq i} \frac{1}{\tau_{ij}} + I_{\rm in}\,\delta_{iN} - I_{\rm out}\,\delta_{i1}$$

Under the assumption that absorption processes can be ignored (i.e. $n_1/\tau_{12} = n_2/\tau_{23} = 0$, valid at low temperatures), the middle rate equation gives

$$\frac{n_3}{\tau_{32}} = \frac{n_2}{\tau_{21}}$$

Therefore, if $\tau_{32} > \tau_{21}$ (i.e. $W_{32} < W_{21}$) then $n_3 > n_2$ and a population inversion will exist. The population ratio is defined as

$$\frac{n_3}{n_2} = \frac{\tau_{32}}{\tau_{21}} = \frac{W_{21}}{W_{32}}$$

If all $N$ steady-state rate equations are summed, the right hand side becomes zero, meaning that the system is underdetermined, and it is possible only to find the relative population of each subband. If the total sheet density of carriers in the system $N_{\rm 2D}$ is also known, then the absolute population of carriers in each subband may be determined from the relative populations $n_i^{\rm rel}$ using:

$$n_i = \frac{n_i^{\rm rel}}{\sum_{j=1}^{N} n_j^{\rm rel}}\, N_{\rm 2D}$$

As an approximation, it can be assumed that all the carriers in the system are supplied by doping. If the dopant species has a negligible ionisation energy then $N_{\rm 2D}$ is approximately equal to the doping density.

Electron wave functions are repeated in each period of a three quantum well QCL active region. The upper laser level is shown in bold.

Active region designs

The scattering rates are tailored by suitable design of the layer thicknesses in the superlattice which determine the electron wave functions of the subbands. The scattering rate between two subbands is heavily dependent upon the overlap of the wave functions and the energy spacing between the subbands. The figure shows the wave functions in a three quantum well (3QW) QCL active region and injector.

In order to decrease $W_{32}$, the overlap of the upper and lower laser levels is reduced. This is often achieved through designing the layer thicknesses such that the upper laser level is mostly localised in the left-hand well of the 3QW active region, while the lower laser level wave function is made to mostly reside in the central and right-hand wells. This is known as a diagonal transition. A vertical transition is one in which the upper laser level is localised in mainly the central and right-hand wells. This increases the overlap, and hence $W_{32}$, which reduces the population inversion, but it increases the strength of the radiative transition and therefore the gain.
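Before moving on to how the lower laser level is emptied, here is a tiny numerical check of the steady-state algebra above (a sketch with made-up lifetimes, not measured device values):

```python
import numpy as np

# Made-up lifetimes (picoseconds) for a three-level QCL sketch:
# slow upper->lower relaxation, fast lower-level depopulation.
tau_32, tau_31, tau_21 = 2.0, 5.0, 0.2
I = 1.0  # injection = extraction current (arbitrary units)

# Steady state with absorption neglected (low-temperature limit):
# dn3/dt = 0  =>  n3 = I / (1/tau_32 + 1/tau_31)
# dn2/dt = 0  =>  n2 = (n3 / tau_32) * tau_21
n3 = I / (1 / tau_32 + 1 / tau_31)
n2 = (n3 / tau_32) * tau_21

print(n3 / n2)          # 10.0: population inversion between levels 3 and 2
print(tau_32 / tau_21)  # same ratio, as the algebra above predicts
```

With a slow upper-level relaxation and a fast lower-level depopulation, the inversion $n_3/n_2 = \tau_{32}/\tau_{21}$ follows immediately.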
In order to increase $W_{21}$, the lower laser level and the ground level wave functions are designed such that they have a good overlap, and to increase $W_{21}$ further, the energy spacing between the subbands is designed such that it is equal to the longitudinal optical (LO) phonon energy (~36 meV in GaAs) so that resonant LO phonon–electron scattering can quickly depopulate the lower laser level.

Material systems

The first QCL was fabricated in the InGaAs/InAlAs material system lattice-matched to an InP substrate.[1] This particular material system has a conduction band offset (quantum well depth) of 520 meV.[citation needed] These InP-based devices have reached very high levels of performance across the mid-infrared spectral range, achieving high power, above-room-temperature, continuous wave emission.[3]

In 1998, GaAs/AlGaAs QCLs were demonstrated by Sirtori et al., proving that the QC concept is not restricted to one material system.[citation needed] This material system has a varying quantum well depth depending on the aluminium fraction in the barriers.[citation needed] Although GaAs-based QCLs have not matched the performance levels of InP-based QCLs in the mid-infrared, they have proven to be very successful in the terahertz region of the spectrum.[citation needed]

The short wavelength limit of QCLs is determined by the depth of the quantum well, and recently QCLs have been developed in material systems with very deep quantum wells in order to achieve short wavelength emission. The InGaAs/AlAsSb material system has quantum wells 1.6 eV deep and has been used to fabricate QCLs emitting at 3 μm.[citation needed] InAs/AlSb QCLs have quantum wells 2.1 eV deep, and electroluminescence at wavelengths as short as 2.5 μm has been observed.[citation needed]

QCLs may also allow laser operation in materials traditionally considered to have poor optical properties. Indirect bandgap materials such as silicon have minimum electron and hole energies at different momentum values. For interband optical transitions, carriers change momentum through a slow, intermediate scattering process, dramatically reducing the optical emission intensity. Intersubband optical transitions, however, are independent of the relative momentum of the conduction band and valence band minima, and theoretical proposals for Si/SiGe quantum cascade emitters have been made.[4]

Emission wavelengths

QCLs currently cover the wavelength range from 2.63 μm[5] to 250 μm,[6] and this extends to 355 μm with the application of a magnetic field.[citation needed]

Optical waveguides

End view of QC facet with ridge waveguide. Darker gray: InP, lighter gray: QC layers, black: dielectric, gold: Au coating. Ridge ~10 μm wide.

End view of QC facet with buried heterostructure waveguide. Darker gray: InP, lighter gray: QC layers, black: dielectric. Heterostructure ~10 μm wide.

The first step in processing quantum cascade gain material to make a useful light-emitting device is to confine the gain medium in an optical waveguide. This makes it possible to direct the emitted light into a collimated beam, and allows a laser resonator to be built such that light can be coupled back into the gain medium.

Two types of optical waveguides are in common use. A ridge waveguide is created by etching parallel trenches in the quantum cascade gain material to create an isolated stripe of QC material, typically ~10 μm wide and several mm long.
A dielectric material is typically deposited in the trenches to guide injected current into the ridge, then the entire ridge is typically coated with gold to provide electrical contact and to help remove heat from the ridge when it is producing light. Light is emitted from the cleaved ends of the waveguide, with an active area that is typically only a few micrometers in dimension.

The second waveguide type is a buried heterostructure. Here, the QC material is also etched to produce an isolated ridge. Now, however, new semiconductor material is grown over the ridge. The change in index of refraction between the QC material and the overgrown material is sufficient to create a waveguide. Dielectric material is also deposited on the overgrown material around the QC ridge to guide the injected current into the QC gain medium. Buried heterostructure waveguides are efficient at removing heat from the QC active area when light is being produced.

Laser types

Although the quantum cascade gain medium can be used to produce incoherent light in a superluminescent configuration,[7] it is most commonly used in combination with an optical cavity to form a laser.

Fabry–Pérot lasers

This is the simplest of the quantum cascade lasers. An optical waveguide is first fabricated out of the quantum cascade material to form the gain medium. The ends of the crystalline semiconductor device are then cleaved to form two parallel mirrors on either end of the waveguide, thus forming a Fabry–Pérot resonator. The residual reflectivity on the cleaved facets from the semiconductor-to-air interface is sufficient to create a resonator. Fabry–Pérot quantum cascade lasers are capable of producing high powers,[8] but are typically multi-mode at higher operating currents. The wavelength can be changed chiefly by changing the temperature of the QC device.

Distributed feedback lasers

A distributed feedback (DFB) quantum cascade laser[9] is similar to a Fabry–Pérot laser, except for a distributed Bragg reflector (DBR) built on top of the waveguide to prevent it from emitting at other than the desired wavelength. This forces single mode operation of the laser, even at higher operating currents. DFB lasers can be tuned chiefly by changing the temperature, although an interesting variant on tuning can be obtained by pulsing a DFB laser. In this mode, the wavelength of the laser is rapidly “chirped” during the course of the pulse, allowing rapid scanning of a spectral region.[10]

External cavity lasers

Schematic of QC device in external cavity with frequency-selective optical feedback provided by a diffraction grating in Littrow configuration.

In an external cavity (EC) quantum cascade laser, the quantum cascade device serves as the laser gain medium. One, or both, of the waveguide facets has an anti-reflection coating that defeats the optical cavity action of the cleaved facets. Mirrors are then arranged in a configuration external to the QC device to create the optical cavity. If a frequency-selective element is included in the external cavity, it is possible to reduce the laser emission to a single wavelength, and even tune the radiation. For example, diffraction gratings have been used to create[11] a tunable laser that can tune over 15% of its center wavelength.

Extended tuning devices

There exist several methods to extend the tuning range of quantum cascade lasers using only monolithically integrated elements.
Distributed feedback lasers

A distributed feedback (DFB) quantum cascade laser[9] is similar to a Fabry–Pérot laser, except for a distributed Bragg reflector (DBR) built on top of the waveguide to prevent it from emitting at other than the desired wavelength. This forces single-mode operation of the laser, even at higher operating currents. DFB lasers can be tuned chiefly by changing the temperature, although an interesting variant on tuning can be obtained by pulsing a DFB laser. In this mode, the wavelength of the laser is rapidly "chirped" during the course of the pulse, allowing rapid scanning of a spectral region.[10]

External cavity lasers

Figure: Schematic of a QC device in an external cavity, with frequency-selective optical feedback provided by a diffraction grating in Littrow configuration.

In an external cavity (EC) quantum cascade laser, the quantum cascade device serves as the laser gain medium. One, or both, of the waveguide facets has an anti-reflection coating that defeats the optical cavity action of the cleaved facets. Mirrors are then arranged in a configuration external to the QC device to create the optical cavity. If a frequency-selective element is included in the external cavity, it is possible to reduce the laser emission to a single wavelength, and even to tune the radiation. For example, diffraction gratings have been used to create[11] a tunable laser that can tune over 15% of its center wavelength.

Extended tuning devices

There exist several methods to extend the tuning range of quantum cascade lasers using only monolithically integrated elements. Integrated heaters can extend the tuning range at fixed operation temperature to 0.7% of the central wavelength,[12] and superstructure gratings operating through the Vernier effect can extend it to 4% of the central wavelength,[13] compared to <0.1% for a standard DFB device.
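To put the fractional tuning figures above on a common footing, here is a small illustrative calculation; the 4.6 μm centre wavelength is an assumed example (a common mid-IR QCL wavelength), not a number from the article:

```python
# Absolute tuning spans implied by the fractional figures, at an assumed centre wavelength.
center_nm = 4600.0  # 4.6 um, illustrative only

for label, frac in [("standard DFB (<0.1%)", 0.001),
                    ("integrated heater (0.7%)", 0.007),
                    ("Vernier superstructure (4%)", 0.04),
                    ("external cavity (15%)", 0.15)]:
    print(f"{label}: ~{frac * center_nm:.0f} nm")
# -> ~5 nm, ~32 nm, ~184 nm, ~690 nm
```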
"Quantum cascade lasers operating from 1.2 to 1.6 THz". Applied Physics Letters. 91 (13): 131122. Bibcode:2007ApPhL..91m1122W. doi:10.1063/1.2793177.  7. ^ Zibik, E. A.; W. H. Ng; D. G. Revin; L. R. Wilson; J. W. Cockburn; K. M. Groom; M. Hopkinson (March 2006). "Broadband 6 µm < λ < 8 µm superluminescent quantum cascade light-emitting diodes". Appl. Phys. Lett. 88 (12): 121109. Bibcode:2006ApPhL..88l1109Z. doi:10.1063/1.2188371.  8. ^ Slivken, S.; A. Evans; J. David; M. Razeghi (December 2002). "High-average-power, high-duty-cycle (λ ~ 6 µm) quantum cascade lasers". Applied Physics Letters. 81 (23): 4321–4323. Bibcode:2002ApPhL..81.4321S. doi:10.1063/1.1526462.  9. ^ Faist, Jérome; Claire Gmachl; Frederico Capasso; Carlo Sirtori; Deborah L. Silvco; James N. Baillargeon; Alfred Y. Cho (May 1997). "Distributed feedback quantum cascade lasers". Applied Physics Letters. 70 (20): 2670. Bibcode:1997ApPhL..70.2670F. doi:10.1063/1.119208.  10. ^ "Quantum-cascade lasers smell success". Laser Focus World. PennWell Publications. 2005-03-01. Retrieved 2008-03-26.  11. ^ Maulini, Richard; Mattias Beck; Jérome Faist; Emilio Gini (March 2004). "Broadband tuning of external cavity bound-to-continuum quantum-cascade lasers". Applied Physics Letters. 84 (10): 1659. Bibcode:2004ApPhL..84.1659M. doi:10.1063/1.1667609.  12. ^ Bismuto, Alfredo; Bidaux, Yves; Tardy, Camille; Terazzi, Romain; Gresch, Tobias; Wolf, Johanna; Blaser, Stéphane; Muller, Antoine; Faist, Jerome (2015). "Extended tuning of mid-ir quantum cascade lasers using integrated resistive heaters". Optics Express. 23 (23): 29715–29722. Bibcode:2015OExpr..2329715B. doi:10.1364/OE.23.029715. Retrieved May 2016.  Check date values in: |access-date= (help) 13. ^ Bidaux, Yves; Bismuto, Alfredo; Tardy, Camille; Terazzi, Romain; Gresch, Tobias; Blaser, Stéphane; Muller, Antoine; Faist, Jerome (4 November 2015). "Extended and quasi-continuous tuning of quantum cascade lasers using superstructure gratings and integrated heaters". Applied Physics Letters. 107 (22): 221108. Bibcode:2015ApPhL.107v1108B. doi:10.1063/1.4936931. Retrieved May 2016.  Check date values in: |access-date= (help) 14. ^ "Extrait du registre du commerce". Registre du commerce. Retrieved 2016-04-28.  15. ^ "Alpes offers CW and pulsed quantum cascade lasers". Laser Focus World. PennWell Publications. 2004-04-19. Retrieved 2007-12-01.  16. ^ "Tunable QC laser opens up mid-IR sensing applications". Laser Focus World. PennWell Publications. 2006-07-01. Retrieved 2008-03-26.  17. ^ Normand, Erwan; Howieson, Iain; McCulloch, Michael T. (April 2007). "Quantum-cascade lasers enable gas-sensing technology". Laser Focus World. 43 (4): 90–92. ISSN 1043-8092. Retrieved 2008-01-25.  18. ^ Hannemann, M.; Antufjew, A.; Borgmann, K.; Hempel, F.; Ittermann, T.; Welzel, S.; Weltmann, K.D.; Völzke, H.; Röpcke, J. (2011-04-01). "Influence of age and sex in exhaled breath samples investigated by means of infrared laser absorption spectroscopy". Journal of Breath Research. 5 (027101): 9. Bibcode:2011JBR.....5b7101H. doi:10.1088/1752-7155/5/2/027101.  19. ^ Lang, N.; Röpcke, J.; Wege, S.; Steinach, A. (2009-12-11). "In situ diagnostic of etch plasmas for process control using quantum cascade laser absorption spectroscopy". Eur. Phys. J. Appl. Phys. 49 (13110): 3. Bibcode:2010EPJAP..49a3110L. doi:10.1051/epjap/2009198.  20. ^ Howieson, Iain; Normand, Erwan; McCulloch, Michael T. (2005-03-01). "Quantum-cascade lasers smell success". Laser Focus World. 41 (3): S3–+. ISSN 0740-2511. Retrieved 2008-01-25.  21. 
21. "Galactic Guide: Hurston Dynamics". Roberts Space Industries. https://robertsspaceindustries.com/comm-link/transmission/13152-Galactic-Guide-Hurston-Dynamics
After reading about the hydrogen atom and understanding how Schrödinger's equation explains most of the atomic spectrum of the hydrogen atom, and having learned that it also explains most chemical reactions and is a huge tool in chemistry, I am now almost convinced that it is wise to accept the Schrödinger equation as a law that governs the motion of subatomic particles like electrons at quantum scales.

Now I am a little curious about one problem: how does an electron (a distribution of charge) move under the influence of its own electrostatic Coulomb field? I am interested in this mainly in a strictly theoretical sense, but would also like to know if there is any practical importance to it.

I'd like to consider this problem first in a 1-D setup, purely due to my lack of acquaintance with partial differential equations. So let's consider a 1-D electron: a linear charge distribution of constant density $\rho$, distributed over a length $2r_e$. Now I would like to set up the Schrödinger equation for it in the case where there is no external field. I'd appreciate some help/comments on setting it up, its solution, and the analysis/interpretation of the resulting wave function: what it actually means at different energy levels (very high, very low, etc.).

Comment to the question (v1): Note that the standard 1D/2D/3D quantum mechanical treatment of the hydrogen atom considers the electron in the proton's electrostatic field rather than the electron's own electrostatic field. – Qmechanic, Jul 11 '13 at 13:28

Electrons are considered point particles (though this statement is sidenoted by this answer; does anyone care to expand on this?), so the radius $r_e$ would be zero. But that's actually just a pedantic point in this context. The electric field around an electron (on its own in a vacuum) is spherically symmetric (as far as we can tell), so this would not influence the electron's movement. – Wouter, Jul 11 '13 at 13:30

@Qmechanic: I know this is a different problem from the hydrogen atom. I am just curious about this special problem. – Rajesh D, Jul 11 '13 at 13:33

@RajeshD: Yes, and so the only term in the Schrödinger equation for a free electron would be the kinetic energy one, since it is not influenced by its own (spherically symmetric) electric field. – Wouter, Jul 11 '13 at 13:44

Well, there is such a thing as self-interaction in QFT, but not (as far as I know) in the context of regular QM, which is what I think you're currently looking into? And I'm not even sure this self-interaction is possible for a free electron. Could be wrong though. – Wouter, Jul 11 '13 at 14:00
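One concrete way to write down what the question asks for, sketched under the assumption that a Hartree-type mean-field term is the intended notion of self-interaction (the comments above dispute whether such a term belongs in ordinary one-electron quantum mechanics at all): take the charge density to be $e\lvert\psi\rvert^2$ and add the potential it generates back into the equation,

$$ i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} + e^{2}\left[\int K(x-x')\,\lvert\psi(x',t)\rvert^{2}\,dx'\right]\psi(x,t), $$

where $K$ is whatever one-dimensional electrostatic kernel one adopts: strictly 1-D electrostatics gives a potential linear in $|x-x'|$, while a softened 3-D kernel $K(u)=1/\bigl(4\pi\varepsilon_0\sqrt{u^{2}+a^{2}}\bigr)$ is a common regularization. Note that the equation becomes nonlinear in $\psi$, which is one reason self-interaction sits uneasily inside standard quantum mechanics.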
Schrödinger equation

noun, Physics. the wave equation of nonrelativistic quantum mechanics. Also called Schrödinger wave equation. Compare wave equation (def 2).

Origin of Schrödinger equation: 1950–55; after E. Schrödinger.

British Dictionary definition: an equation used in wave mechanics to describe a physical system. For a particle of mass m and potential energy V it is written (ih/2π)·(∂ψ/∂t) = (–h²/8π²m)∇²ψ + Vψ, where i = √–1, h is the Planck constant, t the time, ∇² the Laplace operator, and ψ the wave function. (Collins English Dictionary – Complete & Unabridged, 2012 Digital Edition)
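In the more common notation using the reduced Planck constant $\hbar = h/2\pi$ (so that $\hbar^{2}/2m = h^{2}/8\pi^{2}m$), the Collins formula is the same equation usually written as

$$ i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi. $$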
Meaning of Vedanta

"The word Vedanta teaches that the believer's goal is to transcend the limitations of self-identity." This sentence in the introduction seems out of place and it doesn't make any sense. The word Vedanta itself does not 'teach' anything. The meaning stated here is very incorrect. Ved is to reveal the truth (using mimansha, i.e., ved kriya). Anta means end. So Ved-anta literally means the result (conclusion) of ved. If ved kriya was not flawed, then the resultant conclusion must be the truth. - Divya Indu Chakraborty Gautama -- (talk) 23:09, 30 September 2010 (UTC)

The page says, "Advaita (ad- not, dwaita- two; meaning non-duality)". Is it not supposed to read "Advaita (a- not, dwaita- two; meaning non-duality)"?

Correct. Changed. Chancemill 12:38, Apr 12, 2004 (UTC)

References to other religions shouldn't be brought up in this article; people are putting POV into these articles. Leave other religions out of it. This is about Vedanta.

Schrodinger, etc.

That stuff about Vedanta and modern science is a bunch of orientalist hooey--it is irrelevant, and should be removed from the article completely.

Response to "Schrodinger, etc."

No it's not. A simple Google search turns up several links to confirm the fact that Schrödinger was a Vedantist. Here's one of them: "Erwin Rudolf Josef Alexander Schrödinger (August 12, 1887 – January 4, 1961) was an Austrian physicist famous for his contributions to quantum mechanics, especially the Schrödinger equation, for which he won the Nobel Prize in 1933. He proposed the Schrödinger's cat thought experiment, and he had a life-long interest in Vedanta."

Another point. Confronted with the complexities of quantum physics ("How can something be a wave and a particle at the same time? How can it be in two places at the same time?"), students of physics must grapple with issues of reality, perception and illusion. To my mind, Vedanta is the most suitable philosophical platform that addresses this type of enquiry, doubtless prompting the interest of eminent physicists in the subject. (Canossa2006, 08:40, 18 August 2006 (UTC))

If scientists today were more conscious of the difference between appearance and reality, they might not have so many paradoxes. According to Schopenhauer, we can only know appearances, except in one case, when we know ourselves as blind urge, impulse, will. Schrödinger was aware of the difference between appearance and reality as a result of his studies of philosophy and Vedanta.
He knew that the paradoxes of quantum mechanics result from an ignorance of the fact that science can only describe and predict appearances and never truly know any underlying, non-appearing reality. Lestrade (talk) 13:20, 1 October 2008 (UTC)

Re-Response to Schrodinger, etc.

Just because Schrödinger believed he was a follower of Vedanta is not an indication that he was a Vedantin. Many other western thinkers (e.g. Schopenhauer, Deussen) have thought of themselves as Vedantins, but their teachings are far from those of pre-modern Vedantins. The members of the Japanese cult Aum Shinrikyo believed that they were following the Buddha's teachings. They were the ones who were responsible for a nerve gas attack in a Tokyo subway station. Should we mention them in the entry on Buddhism?

Re: Undoubtedly? Genghis Khan himself was a Buddhist, of all people. Plain declaration is of course not enough, but that is not the only source confirming his saying. Claims of famous people are included in MANY articles of Wikipedia. Thus, we can let them stay. There were many people who came close to Vedantic thinking, mind you, so your claim is not perfectly right.

Like the new format

I like the new subsections that have been created in the article. The earlier version wasn't organized very well. It's good to see how well the page has evolved since I turned the "Cleanup" flag on a few weeks ago.

Clean up

I did a little cleaning up of this article today; let me know if you think anything should be added/taken away. --Gozar 9 July 2005 06:09 (UTC)

Re: cleanup

Looks good. Made some additional minor reformats/corrections. Now that this page is relatively stable, anybody doing major work (especially removal of content) should probably call for a vote here before proceeding with the changes.

There is another article on "Upanishads". From what I understand, Vedanta and the Upanishads are one and the same thing (at least that is what the two articles convey). So why do we have two different articles on the same subject? Why can't we merge the two, so that a search on both "Upanishad" and "Vedanta" is redirected to the same article? Any thoughts? ...Syiem

Hi, I'm not sure if you need adminship for this or not, but why don't we try proposing a merger with Upanisads and see what the other users think? Then again, the Upanisads are distinctly a collection of writings while Vedanta is a school of thought based on those writings. I am busy at the moment and can't look into it extensively, so if you could check that out it would be helpful. --Gozar 14:46, 4 August 2005 (UTC)

No, the two articles should not be merged; the topics are separate. Vedanta is the systematization of the teachings of the Upanishads hundreds of years after the Upanishads were written. They represent two different schools from two very different periods and cultures. Just as the Upanishads are a commentary on the Vedas, Shankara's thought — which is known as Vedanta — is a commentary on the Upanishads. --goethean 15:30, 4 August 2005 (UTC)

Thank you very much for the clarification, Goethean. Though I believe that this distinction between the two terms should be clearly brought out in the articles, especially when the two terms are used almost interchangeably. I will try to do some research on this and try to fill in the gaps. Thanks a lot!
Syiem 04:29, 5 August 2005 (UTC)

Although in its earliest usage "Vedanta" simply meant Upanisad, in later usage it also came to mean the school of thought based primarily on the exegesis of the Upanisads (the Brahma Mimamsa, or Uttara Mimamsa school). So it would be very misleading to collapse the "Upanisad" and "Vedanta" threads. There are many influences from Samkhya and Yoga in the Upanisads that are systematically ignored by Samkhya.

Someone has confused the thinkers "Madhva" and "Madhava." The former is the founder of the Dvaita Vedanta school. The latter is a 14th c. Advaita Vedantin. I will fix this.

It appears to me that "Vedanta," which means "at the conclusion of the Vedas," is not the same as "Upanishad," which means "sitting at the feet (of the teacher)." The Upanishads are only one part of Vedanta. Other writings, besides the Upanishads, that are included in Vedanta (after the final parts of the Vedas) are Gitas, Sutras, commentaries, and poems. Lestrade (talk) 13:55, 2 October 2008 (UTC)

Vedanta and Yoga

Is there any source for the following statement: "As per some, it is a form of Jnana Yoga (one of the four basic yoga practices in Hinduism; the others are: Raja Yoga, Bhakti Yoga, Karma Yoga), a form of yoga which involves an individual seeking 'the path of intellectual analysis or the discrimination of truth and reality.' As per others, Vedanta encompasses all the four yogas."

I object to this statement. If there are no citations for the aforementioned, I would rather remove it. Manas 09:06, 6 June 2006 (UTC)

Please state your objection(s). --Vivek 18:20, 6 June 2006 (UTC)

If I understand correctly what I have read, Vedanta encompasses all the four yogas. Now, there might have been some people who felt it was a form of Jnana Yoga only. In that case, references are required. Manas 06:12, 7 June 2006 (UTC)

Then the correct thing to do would be to add a {{Fact}} tag for now, instead of removing it. --Vivek 19:57, 8 June 2006 (UTC)

Jnana Yoga is the path of knowledge and discrimination. Many Jnanis end up with a Vedantic philosophy, but becoming a Vedantist just because one is a Jnani is not a foregone conclusion.

Acharya and Commentaries

Is there any source for the following: "Also of note, historically, in order for a guru to be considered an acharya or great teacher of a philosophical school of Vedanta, he was required to write commentaries on three important texts in Vedanta: the Upanishads, Bhagavad Gita, and the Brahma Sutras."

There were so many people who were called Acharyas who didn't write commentaries on these books. Manas 09:32, 6 June 2006 (UTC)

Other Vedantic Schools

"The three schools they conceived are the most prevalent; however, proponents of other Vedantic schools continue to write and develop their ideas as well, although their works are not widely known outside of India." What are the other Vedantic schools that are widely known in India but not outside? It would be better to provide more information here. Manas 09:32, 6 June 2006 (UTC)

Transition from Vedic to Vedantic Religion

In my opinion, the information provided here is either incorrect or irrelevant to the section. This section needs a rewrite by someone with an understanding of the history of that time. Manas 09:32, 6 June 2006 (UTC)

List of schools of thought

Shouldn't non-Vaishnav schools of thought be listed too? Such as Tantra, Shakti Vishishtadvaita, Siva Advaita, and others as such?
Modern times section

First, I must say that this section is not only pretty dubious, but very one-sided. The (incompletely cited) quotation from a biographer of Schrödinger needs to be investigated disinterestedly and dispassionately; I'll make a start on that. Schrödinger was in any case just one worker in the field, whereas the article makes it sound as though quantum theory was his creation. Capra's status is grossly overstated; he was a populariser whose book was riddled with philosophical and scientific errors and oversimplifications (there's a substantial literature from the early 1980s debunking him and others like him); he was also a Buddhist rather than a Vedantist... That Schrödinger and Capra are the only scientists mentioned is significant, of course. Thirdly, why is "in modern times" interpreted to mean "in Western science and literature"? Fourthly, the section (like much of the rest of the article) also needs copy-editing and wikifying. I'll also make a start on that. --Mel Etitis (Μελ Ετητης) 23:17, 18 December 2006 (UTC)

I agree that some of these claims are (a) overrated, (b) instrumentalized, and (c) unclear as to whether they apply to Vedanta, Advaita Vedanta in particular, or Indian thought in general (incl. Buddhism). However, it is very interesting to mention such influences and clarify their importance collectively in this or related Wikipedia articles. Therefore I have added more, namely Hesse and Hegel as examples from Germany. Gschadow 21:45, 1 January 2007 (UTC)

The list of people who were influenced by or who commented on Vedanta is open-ended and fairly pointless. Many people who write generally on myth and religion, such as Joseph Campbell, will write on Vedanta — should they all be listed here, thousands of them? The list, if necessary at all (and text is better than a bare list), should be restricted to those who were actually influenced by Vedanta. --Mel Etitis (Μελ Ετητης) 10:14, 22 January 2007 (UTC)

In response to Mel Etitis: how does it matter if Schrödinger was one of them or was all ten of them (people related to quantum mechanics)? Why are you trying to make too many half-baked arguments at once? Irrespective of who these people are (and you can correct that by removing the "who" part and instead providing wiki-links to their pages), they were probably much more influenced by Vedanta than is stated here. I added Schopenhauer to the list; let me know if you have any problems and I will reply duly. People like Schopenhauer, Tesla, Schrödinger, or Mark Twain said much better things about Vedanta than are stated here. People can try to find on their own who they are; I am more interested in putting in some of the stuff about what they said about Vedanta and the Upanishads.

Dear Gschadow, read my above comments for Mel Etitis, and as I asked him, "remove the over-rating of individuals, and simply add the link to their respective pages in Wikipedia"; but the influence on such people is always under-rated and not over-rated. Let me know if you want to suggest that the influence itself is over-rated in these few lines in this article. I will back them up with valid references. The idea of attributing everything to Buddhism, which borrowed all philosophy and meditation etc. from Hinduism, is a British legacy. (You will find it hard to see anything attributed to Hinduism in an encyclopedia; even if somebody wrote extensively about Hinduism, you won't find the term Hindu or even its distant neighbour mentioned in the short biography.)
The most important and main Upanishads, which form the basis of Vedanta philosophy, predate Buddhism by a few hundred years. And even after Buddhism, the whole Vedanta philosophy was part of the Hindu religion and was kept alive by the custodians of the Hindu or Vedic religion. Except for Hinduism, I have not seen any religion or sect being hated more for its good things than for its bad ones. For Hindus the religion is Sanatan Dharma or the Vedic religion; outsiders first use "Hindu" for that religion and then ask what Hindu means. It is like a two-way escape from accepting the religious beliefs of such a large section, or an attempt to create confusion as if the term is hard to define. We don't need to define Hinduism; this I stated elsewhere on Wikipedia. We just need to understand that the Vedas, Upanishads, Vedanta, Yoga (meditation): all these things belong to the same belief system, and that is called Sanatan Dharma or Vedic Dharma. Arabs and then the British started calling that religion the Hindu religion. It is very difficult to have two people agreeing on all aspects of religious beliefs, so when we define Christianity, we don't say that different forms of Christianity are different religions or that Christianity is not a religion. Similarly, it is idiotic to say that Hinduism is not a religion (it is just an attempt to cheat people, and in that case nothing is a religion). —The preceding unsigned comment was added by Skant (talk, contribs) 03:17, 22 March 2007 (UTC).

Also, now I would like to be more offending: it is stupid to suggest that a list of people who were influenced by Vedanta is pointless or meaningless. Here is the reason: Vedanta is philosophy, and if you read about different philosophies or philosophers, you will find both who influenced that philosopher or philosophy and who was influenced by it. Vedanta may be some very trivial and crap philosophy for you, but you should check the credentials of the people being listed and what they said about it, to confirm your opinion. We anyway have enough reasons to list people who were influenced by this philosophy.

Having added to this list myself, I agree with Mel Etitis that it is problematic. I think only those people should be included who (a) are eminent scholars in their own right (not just recognized for their work on eastern mysticism), and (b) have published, or for whom specific original communications (letters, diary entries, etc.) exist, to substantiate the influence. Hegel, Hesse, and Schopenhauer would qualify because they meet both (a) and (b). I doubt that Capra would qualify; he doesn't meet criterion (a). I doubt that Einstein and Schrödinger qualify because criterion (b) is not met. Should we edit based on these criteria? Gschadow 15:38, 8 July 2007 (UTC)

O.K. I am beginning to edit this. Initially I will add fact-tags to raise flags for the issues. Secondly, I am deleting some unrelated talk. For example, the work of Vivekananda and Yogananda does not establish that Tesla was actually influenced. Also, I am deleting the passage of Walter Moore that only says that Schrödinger's "new view [is] consistent with the Vedantic concept [...]" but does not provide evidence that Schrödinger actually wrote about being influenced or inspired by Vedanta philosophy. I am also removing the statement that western reception "often run[s] the risk of oversimplifying and ignoring important differences [...]"; even though I agree with it, it is WP:NOR.
I think if the Schrödinger and Tesla issues are not sourced by reliable sources, that whole paragraph will have to go because of WP:NOR. For a moment I had it down like this: "The western reception of eastern philosophy in the context of physics is exemplified by Walter Moore (biography of Erwin Schrödinger) and Fritjof Capra (The Tao of Physics). These authors claim that Advaita Vedanta has influenced eminent modern physicists in the conception of their respective theories. For instance, Nikola Tesla is said to have been influenced by the teachings of Swami Vivekananda. Erwin Schrödinger is claimed to have been inspired by Vedanta in his discovery of quantum theory.[citation needed]" But since Wikipedia articles should not report hearsay, I ended up removing the whole paragraph, because even if Capra and Moore claim that this influence exists, they are not a sufficient warrant for repeating it on Wikipedia. Instead, original neutral references should be found (e.g., whatever sources Moore or Capra might cite could work). Gschadow 20:55, 9 July 2007 (UTC)

I am not sure what Wikipedia's rules are; they may be anything, and Wikipedia's ultimate fate will depend upon how it makes information better and not just "well proved". If we go to the extreme of using Wikipedia rules to force bias instead of using them to improve information, it won't remain very informative at all. Only science and mathematics could be recorded in Wikipedia in that case. How can you talk about history without giving claims and counter-claims? If you end up removing paragraphs and ideas each time you have problems in ascertaining what is truth and what is not (instead of giving both the well-known claims and counter-claims), you will end up removing most of the historical articles (which one of them doesn't contain something that is not certain?). Now to this Erwin Schrödinger related case specifically: I am copying from Wikiquote. Tell me, what is the minimum status of a person who qualifies as a good enough reference? Also, tell me what degree and type of influence you will count. How do you measure that Erwin Schrödinger was influenced by something? Does it need to be some printed article, or a statement in his own writings, or something on stamped paper? skant ( 22:34, 15 September 2007 (UTC)

Copied from Wikiquote: "In itself, the insight is not new. The earliest records, to my knowledge, date back some 2500 years or more... the recognition ATMAN = BRAHMAN (the personal self equals the omnipresent, all-comprehending eternal self) was in Indian thought considered, far from being blasphemous, to represent the quintessence of deepest insight into the happenings of the world. The striving of all the scholars of Vedanta was after having learnt to pronounce with their lips, really assimilate in their minds this grandest of all thoughts." 22:38, 15 September 2007 (UTC) skant

Dualism vs. Non-dualism

I've just begun studying Vedanta, and it seems like non-dualism is mentioned a lot, but I didn't see it mentioned here. As someone looking for information on Vedanta, I'm suggesting that someone familiar with Vedanta add something about "non-dualism". 18:22, 13 November 2007 (UTC)

It means that everything is basically one and the same. You, your dog, an ant, lightning, water, a tiger, etc., are all essentially, fundamentally the same, discounting apparent differences.
Compare this with the philosophy of Schopenhauer, who taught that everything in the world is basically the same as what we call Will (blind urge or impulse). Lestrade (talk) 13:09, 1 October 2008 (UTC)

Vedanta does not mean "the end of all knowledge." It means "that which comes at the end of the Vedic hymns." In other words, "end" here does not mean "purpose" or "goal." "End" means "conclusion." Lestrade (talk) 02:45, 1 October 2008 (UTC)

Where to put it best? Some idea? "A Vedantist's View of Mary" by Swami Yogeshananda. Austerlitz -- (talk) 19:04, 23 December 2008 (UTC)

Why is there no text describing the etymological root of the Sanskrit word Vedanta?? (talk) 15:19, 15 September 2009 (UTC) Doug

Vedanta = end (conclusion) of the Vedic hymns. Lestrade (talk) 15:01, 11 April 2010 (UTC)

The beginning of the article is erroneous. It is claimed that "Vedanta is based on two simple propositions: Human nature is divine…." This is untrue. Vedanta simply asserts that the individual is at one with Brahman, not any divinity or God. Everything depends on the meaning of Brahman, which is the overall whole or totality of experienced things, not a god or God. Lestrade (talk) 15:01, 11 April 2010 (UTC)

As usual, Vedanta is interpreted through the Hebrew concept of theism. This error occurs because of the inability of Westerners to understand Brahman as anything other than their God. Unfortunately, Vedanta then appears as merely another monotheistic religion like Judaism and its two branches, Christianity and Islam. In reality, however, it is very different, and its doctrine does not include the concept of God. Lestrade (talk) 18:12, 27 April 2010 (UTC)

Of course it does. See Ishvara. The relationship between Ishvara, Atman, and Brahman is disputed among the schools in Vedanta. Mitsube (talk) 18:32, 28 April 2010 (UTC)

Those who have been imbued with the Hebrew concept of an anthropomorphic God have understood Ishvara as being similar to their God. Many religions have imagined their gods as being analogous to very powerful humans. Vedanta's Brahman may be unlike this concept, but it cannot be understood as such by those who can only think in terms of their own concept of an omnipotent father. Spinoza was similarly misunderstood by those who associate the word "God" with a super-humanoid individual. Vedanta is mostly concerned with the equivalence of the particular self and the general world. This is Atman and Brahman. No God is needed. This equivalence is incomprehensible to people who were taught the Hebrew-Christian-Islamic doctrines from childhood. They can only think of Vedanta as a kind of monotheism with Ishvara as the exotic God. Lestrade (talk) 19:16, 28 April 2010 (UTC)

That is what most Hindus believe. And even Advaita Vedanta incorporates this belief, though the impersonal essence-of-the-cosmos idea is more important there. Did you even look at the Ishvara article? The word means "Lord". Mitsube (talk) 19:34, 28 April 2010 (UTC)

I am tempted to believe you when you say that most Hindus believe in an anthropomorphic God. It seems that most humans need to believe in such an entity. For example, Japanese Buddhists have their religion's essential Four Noble Truths, but most prefer to turn their attention to the humanoid Shinto gods. Persons, not abstract concepts, appeal to people. Shankara said that "Brahman is the universe and all things that exist within it."
Like Spinoza and his absolutely infinite Being, Shankara's words do not resonate with the general audience, who want their Brahman to be a divine Ishvara Lord and a good father. The list of various schools of Vedanta in the article reflects the fact that Vedanta can have very different meanings for different people. Vedanta's Brahman can be the universal one and the all, or it can be a God who is similar to the Hebrew-Christian-Islamic father deity. Like other religions, the Vedantic schools are so different from each other that they are almost separate religions and share a very tenuous common thread that runs through their individual fabrics. Due to its confusing and misleading ambiguity, it might be better to place the word "God" within quotation marks wherever it appears in the article. Lestrade (talk) 22:50, 30 April 2010 (UTC)

Comparison to Western Philosophies

I agree with others who have expressed concern with the purpose of such a section. The underlying motivation seems to be either (a) a desire to show that Western philosophy achieved similar heights, or (b) to show that the systems of thought are parallel, or (c) to add to the perceived value of the ideas by showing that prominent Westerners thought highly of these ideas. Or perhaps all three. However, in my view none of these motivations justifies the inclusion of the section. If there were a consistent stream of comparison points between Vedanta and Western thought, perhaps there might be some value in pointing to an article that developed that. But to identify only one individual (Spinoza) who developed a comparable set of ideas is not convincing. Western academics are forever doing this, apparently missing the obvious challenge: that it is highly suspect to apply Western formulations of ideas--based as they are on a very particular approach to knowledge--to a sphere of knowledge that is demonstrably based on wholly different assumptions. Two people may, due to circumstances, cross the same bridge at the same time, but if they are headed in opposite directions it is hardly appropriate to compare their motives and thought patterns. There has to be a far more substantial demonstration of the consistency of ideas. As for needing prominent Western thinkers (a scientist?) to add a stamp of approval... for heaven's sake! And Schopenhauer (and Emerson etc.) were perhaps influenced by these ideas, but that's worth a sentence, not a whole section. -- (talk) 21:26, 23 October 2013 (UTC)
Class #9: Quantum Mechanics and Quantum Computing

Does quantum mechanics need an "interpretation"? If so, why? Exactly what questions does an interpretation need to answer? Do you find any of the currently-extant interpretations satisfactory? Which ones, and why?

More specifically: is the Bayesian/neo-Copenhagen account really an "interpretation" at all, or is it more like a principled refusal even to discuss the measurement problem? What is the measurement problem?

If one accepts the Many-Worlds account, what meaning should one attach to probabilistic claims? (Does "this detector has a 30% chance of registering a photon" mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?)
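For reference in the discussion below, the Born rule assigns an outcome with amplitude $\alpha_i$ the probability

$$ \Pr[i] = |\alpha_i|^2, \qquad \sum_i |\alpha_i|^2 = 1, $$

so a detector with a 30% chance of firing corresponds to an amplitude of modulus $\sqrt{0.3} \approx 0.55$.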
This entry was posted in Uncategorized.

52 Responses to Class #9: Quantum Mechanics and Quantum Computing

1. bobthebayesian says:
If we accept Occam's razor, there may be several compelling features of Many-Worlds that make it preferable. It is a theory that is just as consistent with our observations (prediction-wise) as any of the others, and yet it gets to postulate only one rule (the Born probabilities) without any need to stipulate a second measurement rule. At first, this may not seem too controversial (and indeed it may not actually be), but the measurement rule is troubling in that it is the only thing about quantum mechanics that is not time-symmetric, unitary, or linear (there are probably a host of other "elegant" properties that it "ruins," but I am forgetting them). I think it is ironic that many people positively regard Einstein's devotion to mathematical elegance as a tool for detecting correct theories, but find use of that idea in the case of Many-Worlds unsettling. Of course, this is not by any means a conclusive reason to believe in Many-Worlds, but I do think it shifts *extra* burden onto the other interpretations to justify why we specifically *require* an ontologically basic concept of "measurement" to obtain a correct theory. Since the QBayesian approach won't really even discuss this, I do not consider it a real interpretation but just a fancy extension of the "shut-up-and-calculate" idea.

I am also interested in what Many-Worlds says about personal identity and consciousness. I have a small "theory" that I like to think about which I call the "Alfred Hitchcock theory of consciousness." For those who have not seen the old TV horror show called 'Alfred Hitchcock Presents', check out this brief YouTube clip of the show's opening title screen: ( ). Also, I fully agree that quantum effects are not needed for the biological act of consciousness to emerge. That's not what I am talking about here.

Specifically, I think about Many-Worlds telling us that something like that Hitchcock silhouette is going on. As blobs of quantum amplitude (that significantly affect you) evolve, the particles in your brain are also evolving according to the Born probabilities. In this sense, at every "time instant" the thing that is "you" is just walking into a new silhouette, and there are infinitely many other versions of "you" that happened to step into all the different possible other silhouettes. I'm not saying this is the grand truth of the universe or anything — only that if we do accept Many-Worlds, it starts to play trippy side-effects on what we think of as "me" or consciousness.

The fact that I have certain memories is partly because "I" happen to reside on the Everett branch in which the necessary particles in my brain to "have those memories" all went where they were supposed to. That is, not enough decoherence happened to affect enough different particles to actually cause me to walk into a silhouette in which I would have different memories. But, in principle, if I am riding the crest of a quantum wave that is splitting all the time, there is no reason to think that at a given time instant I wasn't in, say, a vegetative state for the last 26 years and then at the very last "instant" I happened to find myself in exactly the right splitting to impart memories into my brain structure as if I had been vivid, walking around, being a college student, etc., etc. If what I just wrote seems completely ineffectual, then it's probably because I explained it poorly, because it really starts to make you feel trippy about your own consciousness and self if you think about it according to what I actually mean.

Of course, the reason why none of this affects us on a day-to-day basis is that there are so many particles in our brains that the decoherence we'd need to see is infinitesimally tinier than the probability I would be killed by a simultaneous lightning strike and asteroid collision. So out of all of the Everett branches I could split into (all the silhouettes I could walk into), the proportion that leads to "weird" observations is surely less than 1/Graham's number or something like that.

What this means to me is that I should be very careful to remove all anthropocentric perceptions before asking questions about quantum interpretations. So to the posed question, "Does 'this detector has a 30% chance of registering a photon' mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?", I would say we're already on thin ice, because really, the probabilistic part of this issue is just "which Everett branch will 'I' happen to find myself in at the time I look at the detector?" In 30% of the Everett branches, "I" will see such a registering. In some Everett branches, I will accidentally go blind at that very instant and see nothing. In others, the machine will magically break due to a cosmic ray and fail to register when really it should, etc., etc. Probably most of the amplitude is concentrated into two events and all other branches are less than noise. But even so, it's wrong to impart probabilities onto the objects themselves, because asking what "I" will witness is just as much a part of the experiment as anything else.

Lastly, I wanted to mention a comment that was in WPSCACC. Scott summarizes a particular counter to Many-Worlds by saying, "In other words, to whatever extent a collection of universes is useful for quantum computation, to that extent it is arguable whether we ought to call them 'parallel universes' at all (as opposed to parts of one exponentially-large, self-interfering, quantum-mechanical blob)." I am a little confused by this, because my understanding of Many-Worlds would say that the "parallel universes" *are* just one exponentially-large, self-interacting blob. It just happens to be the case that a lot of subspaces within that blob nicely factorize to make them more or less multiplicatively independent, except for these small "self-interfering" pieces.
And the parts that more or less neatly factorize would (as a logical deduction from toy examples like the double-slit experiment) seem to have basic, gross structure that we would immediately describe as a slightly altered copy of our own universe. It's all just one amplitude blob, but it happens to factorize into subspaces that we would anthropocentrically immediately see as "universes." I guess I am confused because I do not see how this is an objection to the physical realism of Many-Worlds… I think that Deutsch would completely agree with the "one-blob-many-worlds" idea, but I am probably missing something.

Of course, none of the above is meant to be decisive about Many-Worlds. Just a lot of interesting things to think about, and some real issues that do need to be addressed by any other interpretation. Also, and this in no way decides anything, I think it's good to have some perspective. At the current time, a slight majority of physicists actually believe MW, including Hawking and Deutsch. Feynman also believed it, and Weinberg is often described as believing it with a few reservations that the others named didn't share. Again, that doesn't count as evidence for anything, but hopefully it shows you don't have to be crazy to believe it (haha, *if* you think Deutsch, Hawking, Feynman, and Weinberg aren't/weren't crazy!)

• Scott says:
Thanks for the interesting reaction, bobthebayesian (who's apparently NOT bobtheQbayesian 🙂 )! A few responses, in no particular order:

(1) As long as we're making "arguments from authority" :-), the views of Hawking, Feynman, and Weinberg on QM are all somewhat complicated, so it would be best to rely on direct quotes from them if possible. I once read Weinberg saying that MWI is "like democracy, terrible except for the alternatives." (For whatever it's worth, I completely share that sentiment about MWI.) For his part, Feynman clearly had sympathy for MWI, but he also famously said, "I think it's safe to say that nobody understands QM." That's very unlike the attitude of most modern MWI proponents, including Deutsch! The latter generally believe that they understand QM perfectly (or as well as they understand, say, Copernican astronomy), and that, just like in the Copernican case, it's only parochial, anti-multiverse prejudice that prevents others from understanding it too.

(2) Regarding WPSCACC: I completely agree that the "blob" aspect of quantum computing is perfectly understandable within MWI, and for exactly the reason you say. But I was addressing a different question: whether QC demonstrates MWI, i.e. whether it should persuade any reasonable person to think in MWI terms even if the person wasn't previously doing so. And I was pointing out one almost-immediate difficulty there: that, to whatever extent we know that a quantum computation required an exponentially-large interference pattern, to that extent we also know that the "branches" of the computation never succeeded in establishing independent identities as "worlds." (For a more careful development of this response, see this recent paper by Michael Cuffaro.)

(3) Yes, I completely agree that MWI "really starts to make you feel trippy about your own consciousness and self" when you think about it carefully, and that that's precisely the aspect of it that many people find troubling.
(But then many people go further, and throw really bad anti-MWI arguments into the mix, for example that it's "weird"—what important scientific discovery isn't?—or that it "violates Occam's Razor," when a strong case can be made for exactly the opposite.) As I see it, the question then is whether we should be satisfied with MWI's clear advantages in simplicity and elegance, or whether we should continue to search for a less "trippy" explanation. (After all, there are many simple, elegant theories whose "only" flaw is their failure to account for various aspects of our experience!)

(4) Just a quibble, but 1/(Graham's number) is overreaching! 🙂 Assuming Bousso's cosmological entropy bound, the probability of matter rearranging itself in a given observable way will never be smaller than ~1/exp(10^122), provided it's nonzero.

• bobthebayesian says:
Thanks! I see better what you're saying in item (2) now, and I will read the linked paper asap. The first thought that comes to my mind is this: what are we to believe about a quantum computation in the unlikely case that it gives us the wrong answer? If we expect it to factor our numbers and it doesn't do it correctly, the MW interpretation suggests we should view this as though we happen to find ourselves in one of the unlikely Everett branches that leads to one of the low-amplitude outcomes (assuming we designed our algorithm to successfully shift most amplitude onto the correct outcomes). What would the other interpretations have us believe about it, and would they be reasonable alternatives to this MW idea? Clearly we can just repeat the computation until we have 1−ε certainty that we've seen the right answer. But the fact that the algorithm can ever possibly realize an incorrect output at all seems to be the key to Deutsch's question "where was the integer factored?" (All of this is totally aside from noise in the device, cosmic rays, etc., which could all affect a classical computer too.)

I agree that if we were so good at quantum software development that we could arrange an algorithm to always cancel all amplitude on incorrect outputs, then running it would not help us believe we'd walked into a specific Everett branch. We could just as easily believe we'd engineered amplitude to knock out other amplitude as we move along in a single universe. But if we believe that different outcomes of the computation are possible before we execute the program, then doesn't this mean we believe there are different realities we can find ourselves in depending on the Born probabilities? I'm very grateful to have the chance to discuss this, because this is perhaps the main thing I feel most confused about regarding quantum computing.
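To make the "repeat until 1−ε certainty" step concrete: for a verifiable problem like factoring, a bounded-error device is amplified simply by rerunning it until a candidate answer checks out. A minimal sketch, with an arbitrary illustrative failure probability (not a number from this thread):

```python
import random

EPS = 0.3  # illustrative per-run failure probability

def run_bounded_error_algorithm() -> bool:
    """Stand-in for one run of e.g. Shor's algorithm: the answer is wrong with
    probability EPS, but (for factoring) any candidate answer can be checked classically."""
    return random.random() >= EPS

def runs_until_verified() -> int:
    runs = 1
    while not run_bounded_error_algorithm():
        runs += 1
    return runs  # k consecutive failures occur with probability EPS**k

samples = [runs_until_verified() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~1/(1-EPS) ~= 1.43 expected runs
```

So k consecutive wrong answers occur with probability ε^k, which is where the 1−ε certainty after repetition comes from; the interpretive question is what the unobserved runs' amplitude *was*, not whether amplification works.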
• bobthebayesian says:
(The main thing that confuses me as it relates to this debate, that is… there's certainly a ton of confusing stuff in general.)

• Scott says:
Bob: Interesting, I'd never thought about the philosophical relevance of exact vs. bounded-error quantum algorithms! I guess the first, "technical" thing I can say is that there are quantum algorithms that plausibly achieve superpolynomial speedups and that succeed with probability 1, though of course they assume physically-preposterous error-free quantum gates. See for example this paper by Mosca and Zalka, which gives a zero-error version of Shor's discrete log algorithm. Even the original Bernstein-Vazirani paper gave a black-box problem (Recursive Fourier Sampling) that's solvable with n quantum queries exactly, but requires ~n^(log n) classical queries even with bounded probability of error.

Now, on to your main question: suppose you were a neo-Copenhagenist or a QBayesian. Then why couldn't you say the following (I'm just thinking out loud here): "I measured the QC's state, and I observed a wrong answer. Observing the right answer, I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn't happen. It's not something that 'really happened in a 1-ε fraction of universes,' whatever the hell that means. The situation is not much different from that of a classical coin that had a 1/2 probability of landing heads, but actually landed tails. If we don't insist on parallel-universes language in the case of the classical coin, then why should we insist in the case of the quantum algorithm? Of course, one important difference is that, in the quantum case, I have to calculate the probability of a right answer by applying the Born rule to one self-interfering, quantum-mechanical blob, and the probability of a wrong answer by applying the Born rule to a different self-interfering blob. But then there's a measurement—a primitive operation, according to my conception of science—and only one blob gets picked. From that point forward, the 'other' blob will only ever feature in my explanations of the universe around me as giving the probability for an unrealized hypothetical, so I'm perfectly justified in regarding it as such, just as I would in the classical case."

• bobthebayesian says:
I have been thinking about this quite a bit. I am wondering if the problem doesn't lie in this portion; you say: "Observing the right answer, I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn't happen." This makes an ontological claim about the amplitude that was assigned to all of the possibilities that I did not see. With the classical coin example, we don't have that problem, because the fundamental reasons why the coin's outcome is uncertain are in principle understandable. That is, we know specific things that, if measured with enough precision, would change the odds of the coin flip. We could build a little device with a camera and train classifiers to recognize Heads from Tails with >50% accuracy, assuming our camera could compute certain already-understood physical quantities (like the force of the flip, air resistance, etc.). In fact, human beings can actually train themselves to flip a fair coin with targeted success rates. As Persi Diaconis said, "If you hit a coin with the same force in the same place, it always does the same thing." (See also). Professional magicians can train themselves to flip a coin with >80% bias.
The difference is that the outcome of a quantum computation is not like a coin flip in the sense that all we know are amplitudes. If we believe the amplitude is real, then some account must be given for where all the amplitude went for the events we don't observe. It's no ontological problem to say that all the probability "disappears" from other hypothetical events when we observe the outcome of a coin flip, because no one is claiming that that probability is ontologically basic. Unlike a coin flip, for a quantum outcome you cannot chalk it up to just not knowing this-or-that quantity with enough precision. As I understand it, this is precisely why there is a measurement problem in the first place. If we could just say, "oh, well, outcome X was just one hypothetical outcome like Heads or Tails on a coin," then we would be able to say, "Ah, here is quantity Z that, if we could measure it with much more fidelity, would tell us whether our quantum computation was more likely to come out as X or Y."

The problem I see with the traditional approaches is that they say things very much like your quote above: "I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn't happen." This seems like perfectly innocuous language, but the "but didn't happen" part is innocently concealing the whole problem. How do you *know* it didn't happen? If it didn't happen, then what happened to the amplitude? And why should we believe you? Isn't it strictly simpler to just say that it *did* happen? This is very different than mere probabilities assigned to outcomes of a macroscopic process. Many Worlds allows this simplification, but it too suffers a problem here. I think Robin Hanson puts it well when he says:

"The big problem with the many worlds view is that no one has really shown how the usual linear rule in disguise can reproduce Born probability rule evolution. Many worlders who try to derive the Born rule from symmetry assumptions often forget that there is no room for "choosing" a probability rule to go with the many worlds view; if all evolution is the usual linear deterministic rule in disguise, then aside from unknown initial or boundary conditions, all experimentally verifiable probabilities must be calculable from within the theory. So what do theory calculations say? After a world splits a finite number of times into a large but finite number of branch worlds, the vast majority of those worlds will not have seen frequencies of outcomes near that given by the Born rule, but will instead have seen frequencies near an equal probability rule. If the probability of an outcome is the fraction of worlds that see an outcome, then the many worlds view seems to predict equal probabilities, not Born probabilities. … We have done enough tests by now that if the many worlds view were right, the worlds where the tests were passed would constitute an infinitesimally tiny fraction of the set of all those worlds where the test was tried. So the key question is: how is it that we happen to be in one of those very rare worlds? Any classical statistical significance test would strongly reject the hypothesis that we are in a typical world."

I think for the traditional interpretations to describe bounded-error QC, they have to explain "where did the extra amplitude go." Or, why should we believe, in the face of experiment, that quantum amplitude is not "really there" and is instead just a parochial calculational tool? Why not the strictly simpler approach that quantum amplitude is really there, and then grapple with why we see the Born probabilities when a typical world would not? To me, the latter is an improvement over the former, though not without its problems. And this really is why Many Worlds is "terrible, except for the alternatives."
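Hanson's branch-counting point can be made concrete with a toy enumeration; the probability, repetition count, and tolerance below are arbitrary illustrative choices. Counted equally, only about 2% of the branches see frequencies near the Born value, yet those same branches carry about 89% of the Born weight:

```python
from itertools import product

p = 0.9    # Born probability of outcome 0 (|amplitude|^2), illustrative
n = 12     # repetitions, kept tiny so all 2**n branches can be enumerated
tol = 0.1  # how close a branch's observed frequency must be to p to count as "Born-like"

branches = near = 0
weight_near = 0.0
for outcomes in product((0, 1), repeat=n):
    k = outcomes.count(0)        # times this branch saw outcome 0
    w = p**k * (1 - p)**(n - k)  # Born weight of this branch
    branches += 1
    if abs(k / n - p) <= tol:
        near += 1
        weight_near += w

print(f"branches (counted equally) that look Born-like: {near / branches:.3f}")  # ~0.019
print(f"Born weight carried by those same branches:     {weight_near:.3f}")      # ~0.889
```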
Imagine that the laws of physics were completely classical, but they involved probability at a fundamental level. E.g., there were certain particles that under certain circumstances decayed with 50% probability, with no “hidden degrees of freedom” anywhere in the universe that determined whether the decay would happen (unlike in the case of the coin flip), but also no superposition or interference (unlike in the quantum case). In such a case, I would essentially shrug my shoulders, and say the universe was probabilistic. I wouldn’t be the slightest bit tempted to describe what was going on using Many-Worlds language (as I am the slightest bit tempted in the quantum case). Nor would I stay up at night wondering “what happened to the probability mass I didn’t observe.” What about you?

• bobthebayesian says: This is a big difference. I’m not sure it is a valid thought experiment to envision a world where causeless probability happens at a fundamental level. By “causeless” I mean “we can tell a priori that there are no hidden degrees of freedom governing it.” This is different from the EPR results, because it is in fact amplitude (the physical thing) that gives rise to quantum probabilities (though we don’t know how and certainly could be wrong about this). The hard part to come to grips with is that it’s not properties of the local environment of particles, but rather what that local environment is made out of (i.e. amplitude) that gives rise to the probabilities. The Born rule is puzzling, but I don’t think a scientist could ever possibly be satisfied by shrugging shoulders and saying, “oh well, that just means amplitudes give rise to probabilities and the universe is inherently random,” or something along those lines.

The other thing is that we can observe real, non-trivial objects in superposition, like buckyballs. I don’t understand how we can see a mixture of the different possible outcomes and believe anything other than that the amplitude for different outcomes is physically real. I’m not saying that I’m right, only that, truly, I do not understand how one could interpret that kind of observation in any other way. I would be happy to read more serious accounts of how the other interpretations reconcile that.

I’m not convinced that there could be any sort of observation that leads me to believe the universe is inherently random; that is, that there is no physical quantity that, when measured or computed, gives rise to the observed probabilities. For me, this more or less removes your suggested world from consideration. If I could prove there was not a hidden-variable explanation for some uncertainty, and also that there was not some pool of outcomes that could be superposed to yield uncertainty, then I would merely be asking, “well, then what does explain this uncertainty?” To me, that’s the job of a scientist. Maybe we have to go back a little into the philosophy of statistics to decide if such a “causeless probability” is really compatible with a scientific worldview. Again, I want to be clear that these are things that I fail to understand, which is not at all to claim that I am correct about them.

• I am not a physicist, but it seems to me that we generally take things like electric charge to be fundamental, because we haven’t found any mechanism that somehow grants it to particles. We can try unifying electromagnetism with the other forces, but there’s still this quantity attached to each particle that we just have to measure. So why couldn’t the analogous situation hold for probabilities?
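As a numerical aside before the next reply: the branch-counting worry in Hanson’s quote above is easy to make concrete. Here is a minimal sketch of a deliberately naive toy model (one “world” per length-n outcome string, all worlds counted equally, which is exactly the counting Hanson is questioning); the parameters n and p are arbitrary illustrative choices:

```python
# Toy branch-counting model: one "world" per length-n outcome string, all
# counted equally, versus the same strings weighted by the Born rule.
from math import comb

n, p = 100, 0.9           # trials per observer; Born probability of outcome 1
branch_frac = born_mass = 0.0
for k in range(n + 1):    # k = number of trials that came out 1
    if abs(k / n - p) <= 0.05:                          # "Born-like" statistics
        branch_frac += comb(n, k) / 2**n                # equal world-counting
        born_mass += comb(n, k) * p**k * (1 - p)**(n - k)  # Born weighting

print(f"equally-counted branches with Born-like stats: {branch_frac:.1e}")
print(f"Born weight carried by those same branches:    {born_mass:.2f}")
```

Counted equally, only about one branch in 10^13 sees statistics near the Born value, yet weighted by squared amplitude those same branches carry over 90% of the measure. Whether to count worlds or to weight them is exactly what is at stake in Hanson’s objection.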
• bobthebayesian says: I think in terms of applying physics to solve problems or applying probability to solve problems, your point of view is definitely correct. But in terms of fundamental explanations and theoretical physics, I’m not so sure. For example, why don’t we just believe that objects have intrinsic masses? Why are we looking for the Higgs boson? Once we discover it (or change our models when we don’t) little will change at the big-picture level on which most mass-related properties of physics rest. But we still seem to care a great deal about giving a real account of where mass comes from, rather than being content to believe that it’s “just intrinsic.”

• bobthebayesian: There seems to be a bait-and-switch going on here. Yes, whenever a more fundamental explanation is available—whether it’s Newtonian mechanics, the Higgs boson, whatever—of course we should go for it! But any theory in any area of science will have to stop somewhere, and posit some objects as primitive. So I think the real question is, what types of things are we at least willing to accept at a primitive level? Personally, I’m at least as willing to accept probabilities as I am to accept space, time, energy, etc. — but maybe that’s a function of dealing all the time with probabilistic algorithms (for which it’s completely irrelevant where the probabilities come from), and maybe you take a different view.

• bobthebayesian says: I agree that this is the case in any specific theory. But how does one improve upon a given theory without challenging its assumptions, some of which are about which objects are actually primitive? I’m just saying that I don’t see any reason to be conclusively satisfied with a given set of primitives, whether they are probabilities or particles, or whatever. What would it mean to have a theory in which you literally knew (as much as it is physically possible for a human mind to know something) that you could not improve your explanation ever by entertaining previously un-entertained ideas about what the primitives should be? I agree this can happen in mathematics, where we work forward from definitions. I’m not convinced it can in physics, where we’re trying to solve the inverse problems that tell us what the definitions were in the first place.

• I’ll put it this way: there are things that it would be nice to explain better if possible, and then there are things that keep you awake at night. For me, classical probabilities (supposing they arose in fundamental physics in some non-quantum way) are in the former category, while amplitudes are in the latter. (Well, at least they kept me awake at night as a grad student. These days I sleep pretty well, whether justifiably or not. 🙂 )

• sbanerjee says: As BobTheBayesian points out and as Scott has also mentioned in this thread, MWI has some “trippy” implications for our perception of personal identity and consciousness. MWI’s trippiness may or may not be implied depending on what philosophy defines your perception of personal identity and consciousness. If the Buddhist perception of personal identity is taken into account, then the issues of quantum measurement and possible many worlds have no effect on one’s personal identity. The Buddhist perception is that there is no personal identity; we are supposedly just combinations of different things working in harmony – like a chariot is but a combination of wheels, a seat, etc. What there is in the Buddhist view of things is action and reaction.
With the MWI, BobTheBayesian points out that ‘at every “time instant” the thing that is “you” is just walking into a new silhouette, and there are infinitely many other versions of “you” that happened to step into all the different possible other silhouettes.’ In the Buddhist perception, this would be fine as long as all actions in all of the silhouettes carry through in an ‘expected’ manner. I don’t mean to argue with religious philosophy, I just want to point out that issues of personal identity/consciousness might not be something that stops the acceptance of the MWI, because some perspectives towards personal identity/consciousness work great with MWI.

2. I’m really helpless with the technical details of QM, but for me the big question is: can “occurs in 30% of universes-weighted-by-mod-squared-amplitudes” be interpreted as anything like a genuine frequency? I mean, there are two intuitive ways to understand what probabilities *are*: degrees of credence, and frequencies. Credence is off the table in this context, so can we interpret the mod-squared-amplitude as frequency in fully the same sense that “6 out of 10 marriages end in divorce” is a frequency?

• bobthebayesian says: I don’t understand why credence is off the table here.

• What is there to be wrong about? If you know that when you make a measurement x will occur in 30% of universes-weighted-by-mod-squared-amplitudes and z will occur in 60% of universes-weighted-by-mod-squared-amplitudes, you know everything there is to know about the measurement. Then again I might be very confused on some basics. But Huw Price argues for this general point forcefully in a paper.

• bobthebayesian says: The way I see it is that you know a probability distribution, which is just quantified uncertainty. If we knew *why* the Born probabilities are the way they are, or why they are a function of mod-squared amplitude and not something different, then we might be able to do physical calculations and determine *exactly* under what physical conditions you will emerge in the 30% x scenario and under what other circumstances you will emerge in the 60% z scenario, etc. Indeed, knowing what physically induces the Born probabilities is a tremendous open problem, but not one that is ruled out from being understood in terms of physical explanation rather than pure observational frequency.

It’s similar to a coin flip. Naively, there is a 50/50 chance of either outcome. But in physical truth, if we could model the strength of the thumb that flips the coin, the air resistance, etc., precisely enough, then perhaps we could predict the outcome much better than with 50/50 odds, but still somewhat less than perfectly. This would be an explanation of the result, rather than purely a frequency. We’re just much further from understanding how we could do this for the Born probabilities (i.e. what QM concepts are similar to “strength of the thumb that flips the coin” or “air resistance” when it comes to evolving states).

• bobthebayesian says: Also, Robin Hanson offers an interesting speculative idea that avoids some of the decision theory problems that Price focuses on in the paper you linked. I think this approach is interesting, but really speculative. But the main thing is that there are a lot of ways to meet this problem, most of which avoid all of the decision theory problems that Deutsch’s preferred approach suffers from.

• But what is it that you are uncertain *about*? It’s not like you don’t know if A will happen or B will happen — you know that both will happen.
To quote Lev Vaidman: “There is a serious difficulty with the concept of probability in the context of the MWI. In a deterministic theory, such as the MWI, the only possible meaning for probability is an ignorance probability, but there is no relevant information that an observer who is going to perform a quantum experiment is ignorant about. The quantum state of the Universe at one time specifies the quantum state at all times. If I am going to perform a quantum experiment with two possible outcomes such that standard quantum mechanics predicts probability 1/3 for outcome A and 2/3 for outcome B, then, according to the MWI, both the world with outcome A and the world with outcome B will exist. It is senseless to ask: “What is the probability that I will get A instead of B?” because I will correspond to both “Lev”s: the one who observes A and the other one who observes B.”

• bobthebayesian says: But I think the matter of ignorance is exactly which “Lev” you will correspond to. I don’t understand why he says, “It is senseless to ask: “What is the probability that I will get A instead of B?” because I will correspond to both “Lev”s: the one who observes A and the other one who observes B.” You precisely *won’t* correspond to both Levs. If you knew why the Born rule was correct, then it would remove your ignorance about how you will evolve, and you would not see A with probability 1/3 or B with probability 2/3 … you would see A with probability 1 or 0, and you would know which without making the measurement. Then you cease to be ignorant about the outcome, and probability would not apply.

In this case, if you knew such physics, then you could define an objective version of yourself by tracing out all the Everett branches you would take ad infinitum. There would be objectively different versions of yourself, and rather than seeing Peli_{t} as one individual that splits into a bunch of others, you would see, at “one instant in time,” a bunch of different Pelis that all just happen to overlap with each other prior to a particular choice of t.

I agree that in MW, only ignorance probability makes sense. But then again, as a Bayesian, I already think that about all probabilities. Another very important / trippy part of all this is when you start to think of timeless physics. That might actually be the best way to address this probability issue, but I do not understand it very well at all yet.

Essentially (and again, this is just what I think is the right projection of the MW interpretation here; I’m not expert enough to assert that I’m correct about it), the thing you are ignorant about is which version of yourself you happen to already be (specifically *not* which version you “will become,” but just which, of the infinitely many overlapping versions up to this point, you happen to already be). Currently all you can do is specify an answer with some probability. Just like you can specify some probability that a coin will land on one side or the other. If you knew more about the mechanics of the coin, you would remove ignorance and your probability estimate would shift closer to being a point mass. Probabilities are not about things that happen in the world. They are only about your state of knowledge.

Somehow I still feel like I am missing something, because it seems like we’re just coming at this from two different angles. I don’t see anything unusual about assigning probabilities to Everett branches.
I don’t know the physics that causes the branching nor the physics that determines to which branch the experience of being “me” will go. Therefore, I assign probabilities to these two outcomes based on the data I have observed, i.e., that the Born rule works, inductively. I don’t understand how Vaidman claims there is nothing for the observer to be ignorant about.

Think of the double slit experiment. When no detector is placed by a slit, I observe an interference pattern. When a detector is there, I see only two stripes. By physically placing the detector there, I so arrange matter that I can only possibly evolve into certain Everett branches, where the photon is either in state 1 or state 0 definitely (went through slit 1 or went through slit 0). I remove uncertainty. If there is no detector, I allow myself to evolve into Everett branches where the photon is in a linear superposition of states, so there are more branches possible. The one that “I” will evolve into is determined by the amplitudes of the different paths the photon can travel, and I physically have different credence about different paths according to Born’s rule. There surely is something to be ignorant of there, namely which photon-branch the molecules in my brain will be entangled into. In principle, there could be some physics that lets me calculate this explicitly with perfect certainty. I don’t know that physics, so I have to assign credence based on observation.

• Right, I think we’re getting somewhere here: the key question is whether there are facts about persons above and beyond facts about person-slices. Why do you think there are distinct overlapping persons, rather than just splitting chains of person-slices?

3. I find it curious that when discussing “interpretations” of physical theories our brains seem to be particularly bothered by the presence of uncertainty in the laws of physics, and we keep trying to find a “physical” meaning for it. For example, in the case of “classical” probabilities, consider a fair coin that, when tossed, lands heads “with probability half.” We seem to be most comforted when we realize that we do not really have to worry about what probability *is*, because all that’s happening is that we happen to lack enough information to trace the trajectory of the coin and exactly predict its outcome. If we did have enough such information, then we would not have to worry about what probability *is* and could predict the outcome perfectly. (And then the apparent probability would simply be the ignorance of information crucial for prediction, as already discussed in the comments above.)

In the case of quantum-mechanical probabilities, however, we seem to be “stuck” with the weigh-by-mod-squared-amplitudes rule, and have no better explanation for it other than MWI. And the probabilities just won’t go away. And thus we struggle to interpret what the uncertainty in these quantum-mechanical outcomes might mean. But now suppose that we did find an explanation that removes uncertainty from the theory, and let’s say that the resulting theory is even elegant. Why would we be happy with a theory with no uncertainty? Won’t we have to also explain what the laws themselves *are*? (Even if now we don’t have to worry about uncertainty.) Why do we struggle so much to give a “physical” meaning to uncertainty/probability, while we might be happy to leave, say, a deterministic theory “in the abstract”? E.g., what *is* a unitary transformation? Why not worry about that too?
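A brief numerical aside on the double-slit point made a few comments up: the entire difference between “detector by the slit” and “no detector” is whether you add amplitudes and then square, or square and then add. A minimal sketch with made-up path phases and an arbitrary screen grid (toy numbers, not a model of a real apparatus):

```python
# Two-path interference with and without which-path information
# (toy phases and an arbitrary screen grid, not a real apparatus).
import numpy as np

x = np.linspace(-5, 5, 11)               # screen positions, arbitrary units
a1 = np.exp(1j * 2.0 * x) / np.sqrt(2)   # amplitude for the path via slit 1
a2 = np.exp(-1j * 2.0 * x) / np.sqrt(2)  # amplitude for the path via slit 2

no_detector   = np.abs(a1 + a2) ** 2               # add amplitudes, then square
with_detector = np.abs(a1) ** 2 + np.abs(a2) ** 2  # square, then add

print(no_detector.round(3))    # oscillates between 0 and 2: fringes
print(with_detector.round(3))  # flat at 1: fringes gone
```

The first array oscillates between 0 and 2 (the fringes); the second is flat at 1 (two incoherent lumps, no fringes).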
• bobthebayesian says: I think these are really good questions. One short answer that comes to my mind is that a lot more is riding on whatever quantum amplitude “is” than whatever a unitary transformation “is.” But even at that, I think we do tend to impose physical interpretations on things from functional analysis, including unitary operators. This was a big motivation for the development of Lie theory and its extensions. Another point of view is that unitary transformations “are” whatever we define them to be, mathematically, whereas probabilities of quantum outcomes are thrust upon us whether we like it or not. I personally feel more motivated to uncover the “why” behind something if it seems to be imposed by nature rather than postulated as part of mathematics, although not everyone shares this view and not everyone needs to.

But it definitely is a problem for Many Worlds that all of our interpretations don’t lead to experimentally distinguishable consequences. If you can explain every outcome equally well, then you have zero knowledge… and I agree with Scott that many of the extremely evangelical Many-Worlds advocates seem to believe, in some limited sense, that this theory “explains it all.” I see Many Worlds as a good bad theory. You need bad theories to make progress, and it’s the best one we have. Once we find ways to test certain parts of it, it will probably be at least partly wrong. But that’s a good thing. I wish there were more good bad theories, in a lot of domains.

• I think this puts its finger on a very important point, which is that we have a very deterministic conception of how the universe works. One of the expectations before quantum mechanics came along was that any imprecision on the part of our theories was simply because we didn’t know enough about our starting states. Now, on principle, there is no way we can know about them. I think this bothers a lot of people.

4. The crux of the issue of “interpreting quantum mechanics” is that, sometimes, successful scientific theories imply very strange things. And when faced with this sort of behavior, we have to decide how we’re going to treat these claims. For example, we could say that quantum mechanics is simply a convenient instrument for making certain types of physical predictions, and not perceive it to be making ontological claims (such as the existence of the wavefunction). Certainly it has not been the case that we’ve been able to reduce other scientific explanations to it: quantum mechanics is woefully inadequate for making predictions even on the level of chemistry, including empirical facts such as the aufbau (n+l) principle, the Hund principle and the Pauli principle. Physicists have strenuously attempted to derive these conclusions from the postulates of quantum mechanics, and have failed. We might take this to be an indication that we shouldn’t worry too much about the philosophy of quantum mechanics, that it is all very tentative and something better will come along soon. Of course, the problem of reduction is nothing new, even for Newtonian mechanics (which becomes intractable as early as the three-body case). So another approach we may take is to try to use our existing intuitions to make sense of science, to “tame” the unbounded imagination of quantum mechanics in some sense. We interpret the theory in a way that makes sense to us, and if it all seems too strange, we reject it (like Einstein did).
I think the many-worlds interpretation is an outgrowth of this perspective: human beings possess a very strong capacity for counterfactual reasoning, and so imagining the “splitting” of timelines is a very natural intuition. But, as discussed in class, it often leads to preposterous claims about what physics should do, without any basis in “the mathematics.” The alternative is to claim that we should retrain our philosophical intuitions with the science. Arguably, this is what every theoretical physicist in training spends a large portion of their time doing: getting up close and personal with the equations and developing a private understanding of how they ought to work. But perhaps this is asking too much of someone who was not brought up on quantum mechanics, of someone who did not live in a world of quantum mechanical effects.

• Scott says: Hi Edward, I confess I don’t know what you mean by the above. Quantum mechanics is like an operating system for physics: if you want to make actual predictions, you need to install “application software” on top of it, like nuclear physics or quantum electrodynamics. But all of those things are built on QM and none of them contradict it. The Pauli exclusion principle, in particular, has a very simple and beautiful explanation in terms of interference of amplitudes (I can explain it if you’re interested).

Incidentally, though, I do like your presentation of MWI as an attempt to make quantum mechanics seem less strange! That’s the exact opposite of how most MWI proponents and critics alike view MWI: the proponents revel in the supposed “strangeness,” while the critics object to it. But to me, MWI has always felt more like an attempt to fit familiar sci-fi imagery onto a theory whose actual mathematics is stranger than any fiction.

I think the remark I made was intended to consider the question, “Has chemistry been reduced to quantum mechanics?” I’m not familiar enough with QED to really know how the “operating system” metaphor applies and how it doesn’t apply. The remark about Pauli exclusion comes from a 1995 paper by Scerri. If in fact this situation has changed recently, I would love to know, and I know a philosopher (or two) who would be interested in this information. I think we are very much in agreement about the status of MWI imagery. 🙂

• Scott says: From reading the first page of that paper by Scerri, it seems to be chock-full of the exact sort of confusion that I was trying to dispel with my operating-system metaphor. Let me try again: saying you failed to reduce chemistry to QM is exactly like saying you failed to reduce astronomy to Newtonian mechanics, because you didn’t manage to derive the masses of the planets from F=ma or Gmm/r². In both cases, we’re talking about questions that the fundamental theory simply wasn’t designed to answer, and that no serious person ever claimed it could answer.

In the case at hand, though, the Pauli exclusion principle actually DOES follow straightforwardly from quantum mechanics, together with one additional fact: the behavior of fermions. Recall the principle says that two identical fermions can never be in the same place at the same time. This can be understood as follows: by definition, the quantum states of identical fermions are antisymmetric, in the sense that if you perform a physical operation that swaps two identical fermions, you get the same quantum state, except that the amplitude gets multiplied by -1.
But if you have two identical fermions in the same place at the same time, then even the identity transformation swaps them! That means that the amplitude for such a configuration must equal minus itself, or in other words must be zero.

• D.R.C. says: This reminds me a lot of this xkcd comic. If we really need to make predictions at a certain level, we use the appropriate level of abstraction to do it most efficiently, so we don’t wind up with extremely difficult problems, since we may not have complete knowledge of the transitions between levels. In theory, everything that can be explained by concept A should also be explainable by any more abstract concept (assuming a 1-D scale of abstraction, which may or may not be true; otherwise, by one of its “ancestors”), since concept A is just a special case of that. Of course, pure mathematics is not going to be very efficient at solving anything related to neuroscience, and while biology might be more efficient than mathematics, it will probably not be as efficient as just using neuroscience to begin with. People tend to develop theories around certain ideas before others and center learning around that. These are not always found in the “correct” order of abstraction. For instance, knowledge of biology has been around for a very long time (even if it was just at the level of “if I stab someone in the neck, they die really quickly”), but so has economics (“I have something that someone else wants; how can I get the best thing for myself?”).

5. Katrina LaCurts says: Here is the specific problem I have with measurement in quantum mechanics (I’m going to avoid how I feel about the various interpretations, because I still haven’t decided). My initial understanding of the measurement problem was this: “Quantum mechanics has different rules for when you look and when you don’t, and that’s totally weird.” The major problem I had was how to define what it meant to look, i.e., to take a measurement.

After reading Penrose, I had thought that in order to measure, we had to “blow things up” to a classical state, and that that’s how we defined a measurement. So to me, this just seemed like an engineering problem: our measurement devices are “too big” to measure a quantum system without significantly interfering with the system, so certainly things change when we take a measurement (because we’ve interfered). Work should be done to figure out how to measure things with “smaller” devices. But, as I understand it now (after reading some more), this is not true; even these “smaller” measurements will cause problems. This left me again with the same problem: what is a measurement?

So my next decision was that a measurement consisted of a photon bouncing off of something. This is about as small a measurement as I can imagine, but at a quantum level, I still see the photon as interfering with the system. So to some extent, nothing seemed weird to me; again, we interfered with the system, and things changed. But then, I thought, photons bounce off of things all of the time. So maybe the previous definition is incorrect, and we should define measurement abstractly, as getting information out of the system. Then, is the wavefunction “collapse” something artificial, akin to the probabilities of a coin flip “collapsing” into 0 or 1 once we observe the coin flip?
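A quick aside before Scott’s reply to Katrina below: the fermion-swap argument he gave just above is easy to check numerically. A minimal sketch, where the single-particle “modes” are hypothetical plane-wave stand-ins chosen purely for illustration:

```python
# Antisymmetrized two-fermion amplitude; the single-particle "modes" are
# hypothetical plane-wave stand-ins chosen purely for illustration.
import numpy as np

def phi(n, x):
    return np.exp(1j * n * x)  # toy single-particle mode n at position x

def psi_fermions(na, nb, x1, x2):
    # Swapping the two particles flips the sign of this amplitude.
    return (phi(na, x1) * phi(nb, x2) - phi(nb, x1) * phi(na, x2)) / np.sqrt(2)

print(psi_fermions(1, 2, 0.3, 1.7))  # distinct modes, distinct places: nonzero
print(psi_fermions(1, 2, 0.5, 0.5))  # same place: the swap is the identity -> 0
print(psi_fermions(2, 2, 0.3, 1.7))  # same mode: antisymmetry forces 0
```

The antisymmetrized amplitude vanishes identically whenever the two positions coincide or the two modes are the same, which is the exclusion principle in miniature.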
• Scott says: Hi Katrina, In one sense, you’re absolutely right: a measurement of a photon (call it A) by a human being, or a large measuring device, can be modeled in exactly the same way as a physical interaction between photon A and a second photon B. In both cases, quantum mechanics tells us that the combined system evolves from unentangled to entangled, as the second system (the human, the measuring device, or photon B) gains information about the original state of photon A. And in both cases, someone looking at photon A only will just see a photon whose “wavefunction has collapsed” — there’s no way, by looking at photon A only, to tell whether photon A was measured by a macroscopic object, or whether it was simply entangled with a second photon B. (You can only detect that two systems are entangled by measuring both of them.)

The issue is this: It’s easy to understand what it means for photon A to become entangled with a second photon B, and in fact one can perform experiments that directly verify that behavior. But what does it mean for photon A to be entangled with a macroscopic measuring device, or a human brain? Well, it would mean that the measuring device or the human brain would have to enter a superposition of states, corresponding to the different possible outcomes of measuring photon A. That leads directly to the Many-Worlds Interpretation, with all of what bobthebayesian called its “trippy” consequences for personal identity. The obvious alternative would be to hold that, somewhere between the level of photons and human brains, “the buck stops,” and an actual measurement takes place with a single classical outcome. But of course, if you believe that, you then face the problem of explaining where exactly the buck stops, and why.

Now, in principle, one could test the claim of the Many-Worlds Interpretation that macroscopic measuring devices and human brains evolve into superpositions of states, by performing experiments on the measuring devices or human brains that looked for quantum interference effects. In reality, though, we’re far, far away from being able to do such experiments, and might remain so for thousands of years or even forever. (Recall from class that, so far, the largest objects for which the double-slit experiment has been performed are various types of buckyballs! Though subtler quantum interference effects have also been seen in superconducting currents consisting of billions of electrons.) Anyway, hope that clarified things a bit.

6. D says: I think that it’s part of human nature to try to understand “what’s going on” at a physical level, and thus we constantly search for an “interpretation” that makes sense–even if it’s nothing more than an analogy to something we’re familiar with. The problem is that for quantum mechanics we don’t even seem to be sure what the right analogy is; any that have been proposed seem to have various failings. In 8.01 I can talk about particles being ideal unitary objects that interact in specified ways (even though I know they’re composed of subatomic particles). In 8.02 I can talk about electricity as if it were a continuous fluid (even the term “current” brings to mind a liquid), even if I know it’s the motion of electrons; I can envision magnetic or electric fields as invisible forces because I’m familiar with another invisible force (gravity). Electromagnetism operates by a different set of mathematical rules, but there’s a clear analogy to something in the ordinary experience of everyone on the planet.
Moving up the course 8 curriculum to 8.033, we reach relativity. Here things certainly start to get weird, and what is talked about is outside the realm of one’s typical experience, but the statements are simple: “Time isn’t constant for all observers–as you move faster, your clock slows down relative to others.” “Energy either is mass or has mass.” “If you’re moving, you’ll observe distances to be skewed.” Strange and unintuitive, but at least there’s a clear central thesis (there is no “privileged” reference frame) and the implications that follow can be described in terms people understand (I might think it’s weird that a clock slows down, but I know what it means).

Once we get to 8.04 and 8.05, though, we don’t necessarily even have good analogies for what’s happening in the physical world. We can describe it mathematically, sure, but what’s “really happening” that our equations are describing? Should I think of these electrons as particles or waves? Many people could handle either one, but “both, as it seems convenient” is asking a lot. What causes a “measurement” (or “collapse”) of a quantum state, anyway? What makes entanglement with a particle that’s part of my quantum computer not a “measurement,” but entanglement with an identical particle that flew in from outside a “measurement”? I can understand a particle being in one location or another (or one energy state or another), and I can do the math that tells me what superposition the particle is in, but what–physically–does that correspond to?

Ale and Edward, above, suggest that nondeterminism is a part of why many find quantum theory difficult to grasp. I think that’s a part of it, but it’s not just that there’s probability or nondeterminism involved–it’s that we don’t even really know what the particles we’re talking about are, even by analogy. The nondeterminism is just one obvious way in which they don’t behave like many things we’re more familiar with.

• wjarjoui says: D, I really like the point you make, and how you lay out how intuition about the physical world decreases as we get closer and closer to QM. I think answering your question “what–physically–does that correspond to?” is hard because as humans we are not used to reasoning about the world through QM. I believe we are classical-level entities, and hence over the course of history we developed notions and theories about other entities that we interact with at the classical level. Only when our technology was advanced enough to allow us to interact with and observe objects at the quantum level were we able to develop QM. I believe our understanding of QM will go as far as we are able to experience our world at the QM level. Hence the more we are able to observe interactions in the universe at the QM level, the more able we will be to answer your question. This is my take on it, and I could be wrong, but it certainly makes sense in a lot of ways: people can usually understand better what they have to deal with constantly.

This might be a stretch, but I think a parallel with Searle’s Chinese Room can be drawn from what you said as well: we have a seemingly working model of QM, but what will it take for us to understand it? Perhaps the same as it would take the worker in Searle’s room, who has a model of Chinese, to understand the Chinese language?

7. kasittig says: I believe that the most difficult and confusing part of quantum mechanics is that it feels entirely unintuitive, which is not something that I have learned to expect from my physical systems.
Consider classical mechanics – even without any idea about what is going on behind the scenes, babies are still able to pick up on basic concepts like gravity and force. Describing the physical world mathematically feels almost like overkill, as I can just go out and experiment to figure out what’s going on. Classical mechanics is all around me all of the time, and so I have an excellent grasp on what is reasonable and what is not reasonable in this regard.

I’m not sure whether it’s the nondeterminism, as Ale and Edward suggest, or whether it’s the fact that I don’t know what the particles in quantum mechanics are, as D states, but naively (and perhaps this is silly), it just seems strange that something so fundamental could require so much math before I’m able to understand it. And it seems even stranger that these mysterious particles that I don’t really have any intuition about are actually governing the behaviors of all of the everyday objects that I feel comfortable interacting with in predictable and consistent ways. I’m not sure if an analogy would be helpful in my understanding, as I feel as though most humans develop an intuition about the physical world through repeated experimentation, and it appears that quantum particles are simply too small for us to carry out any sort of meaningful, simple experiments.

8. Cuellar says: Maybe someone can help me with a question: In the Many-Worlds, is consciousness necessarily in a single world? When we do a measurement, we become entangled with the particle. Does that mean that our consciousness also splits into two? I’m having trouble with understanding how there is continuity and identity in the mind. It seems that there is a multiplicity of ‘me’ in the different worlds, but at the same time, our consciousness has an identity. How do we solve this duality?

• Mike says: I’m no expert, but my understanding is that (i) a measurement does not require the presence of a conscious observer, only of irreversible processes, (ii) branches of the universal wavefunction differentiate when different components of a quantum superposition “decohere” from each other, and (iii) when a conscious observer makes a measurement of, say, a particle, the observer and particle become entangled, and the wave function of the particle and the observer decohere and differentiate in respect of possible outcomes. The lack of effect of one branch on another implies that no observer will ever be aware of any “splitting” process.

You wonder how to reconcile this with your subjective feeling that you have only a single identity. As Michael Clive Price has explained, “[a]rguments that the world picture presented by this theory is contradicted by experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is.
(In the Copernican case the addition of Newtonian physics was required to be able to show that the earth’s inhabitants would be unaware of any motion of the earth.)”

• bobthebayesian says: To extend Mike’s very good reply, the Many-Worlds view would also say that the experience of consciousness is more like “finding out which world-branch ‘you’ happen to be in” than it is like “you actually split.” That’s why I think the splitting language is a bit misleading. What subset of physical reality would you draw a boundary around and declare to be ‘you’? Many-Worlds suggests that the subset you have to draw a boundary around is not only comprised of spatial and temporal dimensions, but also Everett branches, and that your “cohesive experience of consciousness” is ultimately an artifact of all the particles “going where they were supposed to go” (i.e. being entangled such that their outcomes at each time instant are correlated in a way that reproduces thoughts, memories, etc. that “make you you”). You can replace ‘particles’ in the last sentence with any sort of quantum entity, but it usually helps to pick one basic thing and think about things in terms of that. In short, consciousness means finding that your factorizable sub-chunk of the universal wave-function just so happens to update into an Everett branch in which all of the physical ingredients needed to produce the physical behavior of consciousness are in the right place to do so.

Suppose there is a system of N particles in your brain that specifically corresponds to your ability to perceive pain. Further suppose that this functionality is independent of all the rest of the things your brain needs to do, other than that it sends and receives signals from other places. That is, if some of those N particles were perturbed significantly, your brain would still work the same way, just be absent correct pain signals. Clearly, N is very large. Also, evolution employs redundancy, so the pain processing is robust to the correlated failure of up to M < N of the particles, where by correlated failure I mean the particles deviate significantly from what macroscopic physics would predict. Thus, in order to “split” into an Everett branch where your own perception of pain is altered (affecting continuity of consciousness), you’d need correlated splitting of at least M of the particles in question. When M is really large, this becomes outrageously unlikely.

Another way to think of it is like this: why can’t we repeat Young’s Double Slit experiment, but instead of shooting electrons or photons at the two slits, we shoot ink pens? Why don’t we see an interference pattern of ink pens? The reason is that to see a macroscopically distinct interference pattern, we need some HUGE number of particles in the ink pen to all simultaneously go down the same unlikely branch. Since the squared amplitude for this event is low for a single particle, the joint squared amplitude for it happening simultaneously for many particles at the same time is vanishingly smaller. But in principle, ink pens do create an interference pattern. The same reasoning, scaled up even more to human minds, suggests that if we shoot human brains through the two slits, then even they have an interference pattern, and hence “which slit the conscious mind went through” is not explicitly defined. It could be a superposition.
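The exponential suppression in that ink-pen argument is easy to put numbers on. A toy sketch, where p_single is an assumed squared amplitude for one particle taking the macroscopic detour (the value is made up purely for illustration):

```python
# Joint weight for M particles to take the same unlikely branch together,
# with an assumed single-particle squared amplitude p_single (made up).
import math

p_single = 0.1
for M in (1, 10, 100, 6 * 10**23):   # the last is an ink pen's worth of atoms
    # p_single ** M underflows fast, so report the base-10 exponent instead.
    print(f"M = {M:.1e}: joint weight ~ 10^({M * math.log10(p_single):.0f})")
```

The fringes never vanish in principle; their weight just drops like 10^(-M), which is why pens, brains, and measuring devices look classical.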
I wouldn’t advise shooting human brains through slits… that’s partly why Deutsch tries so hard to conjure a way to do exactly the double slit experiment, but where you “shoot conscious entities through the slit” instead of shooting particles, and why he has tried very hard to strongarm quantum computing into giving such a test.

• Cuellar says: This is a very good reply. I guess I was wondering how there can be a ‘me’ in a world, while every particle is in every world. But now I see how the Many-Worlds explains this.

9. nemion says: I am not very technically savvy when it comes to quantum mechanics. Nevertheless, I am somewhat interested in the relationship between philosophy of mind and the topics of this class. In particular I find interesting the first paragraph of what kasittig is saying. How is it that quantum mechanics requires such complicated machinery to be understood, and how does it fit into the questions and framework we were talking about in induction, learnability and Occam’s Razor? In what sense does this follow or not follow the principles of Occam’s Razor? How can we contextualize this information within our experience if it is so counterintuitive that we need to learn a whole mathematical framework to be able to understand it? Does this refute Occam’s Razor or not (and if not, why not)?

And in particular I find appealing the following discussion: in what sense do we already know what we are aiming to describe using quantum mechanics? How much access do we have through our rational thinking to the facts of the world? Related to the question of logical omniscience: if quantum mechanics (and any physical law) is a fact of the world (in what sense can we claim that physical laws are facts of the world?), then could we access it through thought and no experimentation? Would the existence of randomness and indeterminability in quantum mechanics be a refutation of logical omniscience? What philosophical implications does the quantum-mechanical fact that measurements of reality affect it have? Do measurements pose a limit to what our rules of inference can access? Do they pose a limit to the possibility of omniscience in some way?

• bobthebayesian says: I once heard it put like this: If you did not have any concept for describing what a differential equation was, Newtonian mechanics would look pretty indecipherable. Just by computing rigorous tables, you could easily come up with calculational rules that gave you great predictive power, and this would be a very successful theory. If differential equations really had not even occurred to anyone, or functioned like “grue” in your mathematical language, it would be pretty hard to generalize from your excellent empirically derived calculational rules and see that they were in fact differential equations. I think most physicists would agree that the empirical calculational tool offers you a lot less reach than knowing the differential-equation interpretation, and so knowing how to mathematically represent the physics is very valuable — much more valuable than just knowing the empirical calculational tool itself.

This sort of thing might be why we tend to try to see structure in things. If it really is the case that there’s nothing more than an empirical calculational tool, that’s kind of like the worst-case scenario, and unless we can prove that that’s the case, we only stand to gain by trying to figure out what mathematics succeeds in properly describing why our calculational tool works.
This is why I think it is not overkill to seek better mathematical explanations for the physical world.

• Mike says: I suspect we have gone, and can go, a good deal further than knowing some empirical calculational tool. In fact, a new paper addresses an aspect of this question. Perhaps it should have been called “Vindication of Quantum Physicality”.

• bobthebayesian says: Thank you for the link. This is a fascinating paper. If I am understanding it, it is saying that if we engineer our measurements correctly, we could arrange for non-zero statistical probability to be assigned to outcomes for which there is zero amplitude; hence we cannot view the physical description of quantum states as purely statistical constructs (all of this is under 3 mild assumptions). In particular, the 4th page lays out some consequences of their result: “If the quantum state is a physical property of a system (as it must be if one accepts the assumptions above) then the quantum collapse must correspond to a real physical process. This is especially mysterious when two entangled systems are at separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other.”

This would seem to suggest that if these assumptions hold, interpretations that characterize measurement as a separate rule would be forced to give some physical account of what it is and how it can physically act instantaneously over long distances without violating known physical limits. On the other hand, it seems to suggest that Many-Worlds-like interpretations would be in some sense more justified in asserting the physical realism of different outcomes. My prior belief is that this paper is not groundbreaking. But if not, then it must be for good reasons that dispute the 3 assumptions they make. Can anyone describe counterarguments to the assumptions of this paper?

The paper concludes with an interesting remark on the memory overhead for physical systems: “it will need an amount of memory which is exponential in the number of quantum systems. For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped.” While I don’t think anyone from our class will see the statement on exponential memory to be controversial, it is controversial as to whether nature itself has to keep track of all of that (as opposed to amplitudes just being a nice calculational tool that works out to describe what nature will do, but which does not actually correspond to the way in which nature achieves it).

Might this not give some more credence to Deutsch’s point of view of “where was the number factored”? If we believe the quantum state really exists, then either there is an exponential number of “worlds” (neatly factorizing blobs of amplitude) that each contain a classical amount of storage, or else there is one world that somehow contains an exponential amount of storage (the quantum state) and can’t be doing it with hidden variables…

10. Mike says: I don’t think the referenced paper is “groundbreaking”, but I do think that it is considered by some to be significant.
On Google+, Matthew Leifer, a respected researcher in theoretical physics currently at University College London, and one who has not been unsympathetic to a more epistemic interpretation of QM, replied as follows when I asked what his conclusions were regarding the paper:

“Well, I knew this paper was coming, so it is not a surprise. Basically, it means that if you believe that quantum states are epistemic then you have two options left: 1. neo-Copenhagenism: Claim that a deeper realist model was never needed to support an epistemic interpretation of the quantum state. The probabilities are just about measurement results, period. 2. The ontological states have to be more bizarre than imagined in current approaches. For example, you could have retrocausality or “relational” degrees of freedom (whatever that means). Note that one could also evade the theorem of this paper by claiming that quantum i.i.d. product states do not correspond to i.i.d. probability distributions in the ontological model. However, doing this does not evade a related theorem by Alberto Montina, which is based on a single system. If neither of those options is to your taste, then you might as well become an Everettian or a Bohmian, since you are stuck with the state vector in your ontology in any case.

Overall, I would say that this result is not too surprising. I think that most people in the “psi-epistemic” camp already had the intuition that a psi-epistemic ontological model formulated in the usual way would not be possible. That is why most of us were already promoting other possibilities, e.g. Fuchs is in the neo-Copenhagen camp and Spekkens often mumbles things about relationalism. Personally, I am quite interested in the idea of retrocausal psi-epistemic hidden variable theories. It is at least a fairly clearly formulated problem to try and come up with one, whereas relationalism seems vague to me, at least as it is applied to quantum theory. If that doesn’t work out then I would probably end up being an Everettian. Despite the attraction of the Fuchsian program, realism has to win out in the end for me.”

11. Miguel says: Regardless of the validity of one interpretation of QM or another, the speedup achieved by quantum computation proves that the self-interfering, non-local exponentiality of physical reality is not a mere artifact of ‘measurement’, but rather an empirical fact. How one comes to terms with the ‘trippy’ consequences of this fact is a problem of a different sort: why do we find the idea of ourselves being in an entangled state trippy in the first place? It seems to me that the ‘trippiness’ of an objectively quantum universe results from us not taking the consequences far enough: whenever one asks about ‘where was the number factored’ in Shor’s algorithm (or ‘in which of the branches do I perceive a given outcome’), one is sneaking in a decidedly classical view of the locality and existence of the steps of a computation (or of consciousness). Upon decoherence, it does not make sense to talk about a given entity ‘being in’ any particular branch; instead, entities ‘exist’ in a self-interfering way. If one is willing to shed away intuitions of locality and existence (!), then the language of parallel universes, doppelgaengers and measuring rules becomes unnecessary.
Shedding such intuitions, however, feels very unnatural: since at our mesoscopic scale quantum effects do not predominate, we regularly perceive a classical universe – and thus our nervous systems developed through evolution an intuition of what is ‘trippy’ and what is not that is clearly classical, and which regards statements about locality and existence as a priori ‘natural’. But I don’t see this as a fundamental reason to privilege our classical intuitions as intrinsically more ‘objective’ than a quantum view of reality. Consider for instance a wacky form of consciousness interacting in an environment where quantum effects predominate (say, some conscious vortex of plasma in a nebula): to such a consciousness, perhaps entangled states would be ‘natural’ while classical probabilities would seem very unnatural instead! Indeed there are concept classes that are quantum-learnable but not classically learnable (cf. Servedio 2001), so maybe the type of language and logic employed by a consciousness evolving in such an environment would be different from that of ours.

• bobthebayesian says: I think you get right to the issue, and this is also what Peli said in his comment above: “Why do you think there are distinct overlapping persons, rather than just splitting chains of person-slices?” The question I have is this: suppose you want to draw a boundary around some part of physical reality such that everything inside the boundary “is Miguel” and everything outside is “not Miguel.” Then, if Many-Worlds is true, do you need to draw that boundary only in “classical dimensions” but along one single Everett branch, or do you need to enlarge the boundary to include the factorized blobs of amplitude that “correspond to Miguel” along many different Everett branches?

I think in the latter case, it is much easier to reconcile identity by believing the orthodox views about QM (although, really, Many-Worlds is by now sort of an orthodox view too, but ‘orthodox’ here means neo-Copenhagen/QBayesian). But you are exactly right that this is the question. If there are overlapping quantum almost-twins, then personal identity suddenly becomes a more difficult thing. If there’s just one you, and amplitudes just describe calculations that help predict what that one version will experience, then things seem relatively more cut and dried (though still by no means trivially easy or anything).

Interesting! If you’re willing to say that the Miguel before making a quantum measurement is “the same person” as the Miguel after making the measurement, then by transitivity, it seems to me you also need to say that the Miguel after making the measurement and observing outcome 0 is “the same person” as the Miguel after making the measurement and observing outcome 1. In other words, I claim as a (trivial) theorem that trans-temporal identity implies trans-Everett-branch identity as well. 🙂 If you agree, then the answer to your question is the latter: we need to enlarge the boundary to include multiple factorized blobs of amplitudes.

• bobthebayesian says: I agree, but I don’t see any reason to give a privileged direction to time. Looking backward in time, the usual Many-Worlds identity boundary drawing would suggest there are a myriad of physically different Miguels that are like the confluence of different rivers. I don’t think time can help us resolve this.
It still comes down to whether you think the amplitude that corresponds to “Miguel after making the measurement and observing outcome 0” is real and different from the amplitude that corresponds to “Miguel after making the measurement and observing outcome 1.”

• Miguel says: I think that time necessarily has to be part of such a boundary, insofar as the reality we perceive is classical and irreversible, which prompts our consciousness to construct an intertemporal notion of self. But this notion of identity is definitely very trippy and unintuitive. I just noticed, though, another context in which the notion of a causally-leaky boundary appears frequently and is not seen as nearly as strange: Markov random fields. Whenever the joint distribution of such processes can be factorized along the cliques of its graph, it makes sense to talk about the cliques as distinct entities in a causal and ‘stochastic’ sense, even though the individual nodes/random variables themselves might be dependent on other nodes outside of the clique. Hard to say, though, whether this notion adds to or subtracts from the strangeness of this whole thing.

12. Hagi says: Most people agree that QM is weird, and in turn our universe seems to be weird. However, this should not be enough to assume it needs work, or an “interpretation”. I seem to disagree with a couple of posts on this thread, but I do not believe that classical physics is that intuitive to us naturally. The fact that it took until the 16th-17th century for humans to realize it is a good indication. I remember a study from a psychology class I took, which involved asking subjects about the trajectory of a package dropped from a plane. I cannot remember the study exactly, nor could I find it online; but the important result, as I remember it, is that the majority of the subjects were not aware of Newton’s first law, or could not apply it.

However, in addition to humans not being very natural at formal thinking, QM further complicates things by not obeying the very basic laws of classical logic. There are different ways of seeing Bell’s inequality and its experimental realizations (my favorite is Itamar Pitowsky’s geometric framework), and the conclusion is that regardless of QM’s validity, classical logic is not correct, at least at some length scales. At this point it is natural to think that QM needs an interpretation. Should we see the violation of Bell’s inequality as a deterministic but non-local phenomenon, or as non-deterministic but local in the special relativistic sense and non-local in the “there is some spooky action at a distance” sense?

However, I wonder if the discussions on the interpretations of QM are trying to solve more than just the questions about QM. For example: the question of whether the wavefunction is just a statistical estimate, our belief based on the information we have, or the physical property of the thing itself. A related example is the question of how we should interpret probabilities. What kind and level of locality should be imposed? Although I do believe these questions should be asked, I do not think they are fundamentally just about QM. I can imagine asking these questions even if we lived in a universe that does satisfy Bell’s inequality and/or is deterministic. QM is like the near-death experience that brings all the existentialist questions out of us, although we should have had them all along.

Thanks—that’s my favorite nugget so far from this thread!
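Before the next comment, on simulating Many Worlds classically, here is a concrete footnote to the exponential-storage point from the PBR discussion above: a brute-force state-vector simulation of n qubits stores 2^n complex amplitudes. A minimal sketch (16 bytes per complex double is a common layout; the qubit counts are arbitrary examples):

```python
# Brute-force state-vector simulation of n qubits stores 2**n amplitudes;
# at 16 bytes per complex double, the memory bill grows like this:
for n_qubits in (10, 30, 50, 300):
    n_bytes = 16 * 2 ** n_qubits
    print(f"{n_qubits:3d} qubits -> {n_bytes:.3e} bytes")
```

By 300 qubits the byte count dwarfs the number of atoms in the observable universe, which is the intuition behind Deutsch’s “where was the number factored” question.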
13. amosw says: I have spent some time thinking about how one might go about writing a program for a classical Turing machine (or in C) that simulates a Many Worlds Universe. The more I think about it, the more I realize that either I have no idea how to do it, or that I _do_ know how to do it but just can’t believe it. To be clear: I have some idea about how to write a quantum simulator, via numerical integration of the Schrödinger equation. Is it really the case that “that’s it”? All I have to do is numerically integrate the Schrödinger equation for the _entire system_ that I am simulating (say a particle in a harmonic well) and I see before my eyes, when I dump the registers, Many Worlds? Lastly, it’s worth noting that in 1959 Everett visited Bohr in Copenhagen to try to explain that it is possible to have a single wave function describing the whole Universe with no need for collapse. Evidently it didn’t go very well, as one of Bohr’s students at the meeting described Everett as being “undescribably stupid and could not understand the simplest things in quantum mechanics”. I like to think that Everett, who basically drank and smoked himself to an early grave, is presently having the last laugh out there somewhere.
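(For concreteness, here is a minimal sketch of the kind of simulator amosw describes — a split-step integrator for a single particle in a 1-D harmonic well, with ħ = m = 1. The grid sizes and parameters are arbitrary illustrative choices, not taken from any particular package.)

```python
import numpy as np

# Split-step Fourier integration of the 1-D time-dependent Schrodinger
# equation for a particle in a harmonic well (hbar = m = omega = 1).
N, L, dt = 1024, 40.0, 0.005
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
V = 0.5 * x**2                                  # harmonic potential

# Initial state: a Gaussian displaced from the center of the well.
psi = np.exp(-(x - 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

half_v = np.exp(-0.5j * V * dt)                 # half-step potential factor
kin = np.exp(-0.5j * k**2 * dt)                 # full-step kinetic factor

for _ in range(2000):                           # pure unitary evolution,
    psi = half_v * psi                          # no collapse anywhere
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_v * psi

print(np.abs(psi[::64]) ** 2)                   # "dump the registers"
```

In this toy example there is only one particle and no measurement, so there is nothing to branch; to see anything Many-Worlds-like one would have to include a model “apparatus” in the state and watch the joint wavefunction decohere into non-interacting blobs. But the point stands: the program only ever integrates the Schrödinger equation for the whole system.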
Experimental Investigations of Quantum Mechanics

Observing the Heisenberg Uncertainty Principle in the Laboratory

The Cindy Regal group is so skillful at using laser light to track the position of a tiny drum that researchers have been able to observe a limit imposed by the Heisenberg Uncertainty Principle. In one experiment, the researchers measured the motion of the drum by sending light back and forth through it many times. During this measurement, however, 100 million photons from the laser beam struck the drum at random, making it vibrate. The extra vibration obscured the motion of the drum at exactly the level of precision predicted by the uncertainty principle. Two aspects of the experiment made it possible to observe very small vibrations due to quantum mechanical effects. First, the experiment was done at the very low temperature of 5 K (−268 °C). The temperature was sufficiently low to reduce the amount of vibration caused by heating of the experiment by the surrounding environment. Second, the researchers used special drums that lose vibrational energy very slowly. Thus, when they measured vibrations during the experiment, they were able to determine that quantum fluctuations of light were causing about half of them. The detection of the extra vibration indicated that the experimenters had reached a limit on successive measurements imposed by the uncertainty principle. This principle dictates that the closer someone comes to measuring the exact position of an object, the less that can be known about how fast the object is moving. In other words, it is not possible to precisely measure both the position of an object and its speed at the same instant. Of course, how fast something is moving has a whole lot to do with its exact position in the future. This trade-off results in a conundrum for an experimental physicist like Regal: do we make the best position measurement now, or obscure the motion later? Regal says the easiest way to get the best precision is to give up precise knowledge of the initial position to balance the combined uncertainty in position and velocity. However, there may exist ways to work around quantum limits. The challenge of trying to do so has a particular fascination for Regal.

Using measurements to solve the Schrödinger equation

The Steve Cundiff group has developed a new technique to measure key parameters needed to solve the Schrödinger equation, which describes the time-dependent evolution of quantum states in a physical system. For the equation to work, it is necessary to figure out a key part of it known as the Hamiltonian, which describes the total energy of the system. However, this is not an easy task for theorists, since Hamiltonians for real-life systems must characterize a multitude of quantum states and quantum pathways that inevitably exist inside a roiling quantum world. For experiments involving many atoms or other particles that interact with each other and the environment, the only hope of determining the correct Hamiltonian may be to do it experimentally, as has been demonstrated by the group. The group used a technique known as optical three-dimensional (3D) Fourier-transform spectroscopy to acquire detailed spectra of hot (180 ºC) potassium (K) atoms. The spectra provided a window into the quantum world of the atoms in the experiment.
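(For reference, the equation in question — in its standard textbook form, not anything specific to the Cundiff experiment — is

$$ i\hbar \frac{\partial}{\partial t}\lvert\psi(t)\rangle = \hat{H}\lvert\psi(t)\rangle, $$

where the Hamiltonian Ĥ collects the kinetic and interaction energies of the system. Measuring the spectra amounts to pinning down the unknown pieces of Ĥ experimentally.)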
Using the spectra, the researchers were able to disentangle all possible pathways between specific initial conditions of the K atoms (typically ground states) and final conditions (including excited and superposition states). Once they had identified all possible pathways, they were able to make the measurements necessary for describing the pathways. This information allowed them to figure out some pieces of the Hamiltonian they needed. This work is a big step towards being able to experimentally determine a Hamiltonian for an even more complex system. It could even lead one day to the ability to coherently control chemical reactions.

Reducing quantum noise in precision measurement

The James Thompson group has come up with a creative way to measure the spins in a collection of a million atoms: premeasure the quantum noise in the experiment, and then subtract it out at the end of the precision measurement. The secret is to avoid doing anything that detects and measures the spins of individual atoms in the ensemble. If the states of individual atoms are measured, then those atoms stop being part of the collective superposition of all the atoms, and any subsequent precision measurement will be ruined. So regardless of what measurement technique is used, it must not alter the quantum state of specific atoms. Details on this clever technique can be found in “Working around the Quantum Limits to Precision Measurement”.
Viewpoint: Searching high and low for bottomonium

• Stephen Godfrey, Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Canada K1S 5B6

Physics 1, 11

The BABAR collaboration at SLAC has observed the radiative decay of an excited state of bottomonium (the bound state of a bottom quark and its antiparticle) to its ground state ηb. Observing this long-sought ground state should enable better tests of quantum chromodynamic calculations of quark interactions and the computational approach called lattice quantum chromodynamics.

Illustration: Alan Stonebraker; bottom panel courtesy of P. Grenier and the BABAR collaboration

Figure 1: (Top) The bb¯ spectrum showing electromagnetic transitions between levels. The states that have been observed are labeled with masses taken from the Particle Data Book [16], and unobserved states are shown unlabelled with masses given by Godfrey and Isgur [9]. The unlabeled arrows show electric dipole transitions, and the labeled transition (green arrow) is the M1 transition observed by the BABAR collaboration in their discovery of the ηb. The dashed line indicates the threshold above which bb¯ states have a large decay rate to BB¯ final states. (Bottom) The inclusive photon spectrum observed by BABAR, including the background components (green, blue) that must be subtracted to obtain the ηb signal (red) [3].

Just over thirty years ago, a new generation of quarks was discovered when Fermilab announced they had found the bottom quark [1], adding to the known up, down, strange, and charm quarks. The discovery was indirect—the actual detection involved finding bottom-antibottom quark pairs (bb¯) that form bound states via strong interactions and have a rich spectroscopy analogous to that of the hydrogen atom [2]. These composite particles are called bottomonium, in analogy to the well-known electron-positron bound states called positronium. The first two bb¯ states that were discovered are named upsilon particles (Υ and Υ′) and were found in 1977 during experiments with collisions of 400-GeV protons on nuclear targets at Fermilab [1]. Subsequently, a variety of other excited states (all spin triplets) have been observed. However, no spin-singlet state had been seen until the observation of the ground state called ηb, now reported in Physical Review Letters [3] by the BABAR collaboration. The difference in mass between the Υ and the ηb is important in understanding quark-antiquark states (generally called quarkonia) by testing existing models, the applicability of perturbative quantum chromodynamics to the bb¯ system, and the results of the numerical approach, known as lattice quantum chromodynamics (lattice QCD), used to calculate hadron properties [4]. More importantly, having a measured value will challenge theorists to perform more precise calculations that can be compared to experiment. Heavy quarkonia, which are bound states of a heavy quark and antiquark, are well described by nonrelativistic potential models originally derived to describe charm-anticharm (cc¯) states [5, 6]. The potentials incorporate general features of quantum chromodynamics (QCD)—the theory of quarks and gluons describing the strong interactions.
At short distances, these QCD-motivated potentials take the form of a one-gluon exchange potential, analogous to the photon exchange that is responsible for the Coulomb interaction in quantum electrodynamics (QED). Added to this are relativistic corrections, such as spin-spin and spin-orbit terms, all with “color” factors reflecting the more complicated group structure of QCD compared to QED. The spin-spin term, for example, is analogous to the hyperfine interaction that gives rise to the 21-cm line in hydrogen. At large separation the potential is described by a linearly rising interaction that confines the quarks. The QCD-motivated phenomenological potential is in good agreement with results obtained using numerical lattice-QCD methods [4]. Lattice QCD is a nonperturbative approach that deals with the nonlinear nature of the strong interaction by dividing space and time into discrete grid points and then integrating over quark and gluon configurations. In these potential models, quarkonium energy levels are found by solving a nonrelativistic Schrödinger equation, although more sophisticated calculations take into account relativistic corrections [7]. The calculations yield energy levels that are characterized by the radial quantum number n, which is equal to one plus the number of nodes of the radial wave function, and L, the relative orbital angular momentum between the quark and antiquark. In fact, much of the nomenclature is familiar from atomic physics. The orbital levels are labeled by S, P, D (corresponding to L = 0, 1, 2). The spins of the quark and antiquark couple to give total spin S = 0 (spin-singlet) or S = 1 (spin-triplet) states. S and L couple to give the total angular momentum of the state J, which can take on values J = L − 1, L, or L + 1. Thus the L = 0 states are ¹S₀ and ³S₁; the L = 1 states are ¹P₁ and ³P₀, ³P₁, ³P₂; etc. In addition to the spin-independent potential, there are spin-dependent interactions that give rise to splittings within multiplets [7]. With these, we can predict Υ–ηb splittings in bottomonium and similar splittings in charmonium that are analogous to the hyperfine splittings in hydrogen. Splittings within P-wave and higher-L multiplets are due to spin-orbit and spin-spin interactions arising from one-gluon exchange and a relativistic spin-orbit precession term. The contact spin-spin splitting between the singlet and triplet P-wave states is predicted to be small due to its short range and because the wavefunction at the origin for P-wave states is zero. The observations of the ηb by BABAR [3] and the charmonium state hc by CLEO [8] are important validations of this picture. In the experiment reported by Aubert et al., electrons and positrons from the PEP-II storage ring at SLAC collide with a center-of-mass energy of 10.355 GeV. This energy is selected so that the collisions create Υ(3S) particles, some of which then decay radiatively to the ηb(1S) state. Figure 1 shows the bb¯ spectrum of observed states along with predictions for missing states [9] by Isgur and myself. The commonly used names of observed levels are shown. Note that bb¯ states with mass greater than two times the mass of a B meson (the ground state of a meson made up of a bottom quark and a light up or down quark) will have a large decay rate into BB¯ pairs, so the branching ratio for radiative decays will be small. Electromagnetic transitions between the levels can be calculated in the quark model and provide an important tool in understanding the quarkonium internal structure [10].
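(The short- and long-distance behavior just described is often summarized by a Cornell-type potential — a standard form in the potential-model literature, not the specific parameterization of Ref. [9]:

$$ V(r) = -\frac{4}{3}\frac{\alpha_s}{r} + b\,r, $$

where the Coulomb-like first term comes from one-gluon exchange, the factor 4/3 being a QCD color factor, and the linear term b r provides confinement at large separation.)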
The theory and terminology of electromagnetic transitions between quarkonium states closely follows the treatment given for transitions in the hydrogen atom in undergraduate quantum mechanics textbooks, with the electron charge replaced by the quark charge; one must also include both the quark and antiquark transition amplitudes. The leading-order transition amplitudes are due to electric dipole (E1) transitions between states with the same total spin, and magnetic dipole (M1) transitions, which flip the quark or antiquark spin and are inversely proportional to the constituent quark mass. The predictions for E1 transitions, ³P_J → ³S₁, in the bottomonium system are in good agreement with experimental data [10]. Recently, the CLEO experiment observed a 1D bottomonium state in a cascade of E1 transitions with a mass of 10161.1 ± 0.6 ± 1.6 MeV/c² [11], which is in good agreement with theoretical predictions [7]. In the nonrelativistic limit the spatial overlap integrals for M1 transitions equal one between S-wave states within the same multiplet (that is, they are favored transitions) and zero for transitions between states with different radial quantum numbers (that is, these transitions are hindered). Relativistic corrections lead to small overlaps for these hindered transitions, which can be compensated by large phase-space factors [12]. Until the observation of the hindered Υ(3S) → ηb transition, no M1 transitions had been observed in the bottomonium system. Until now, all of the observed states in the bottomonium system were spin-triplet states, but quark models predict the existence of spin-singlet partners, including the ground state. As mentioned above, while the decay amplitudes for hindered transitions are much smaller than those for favored transitions, this can be compensated by the larger available phase space in transitions such as Υ(3S) → ηb(1S). BABAR collected a large data set by tuning the e+e− energy to the mass of the Υ(3S) and observed a signal in the photon energy spectrum at Eγ = 921.2 +2.1/−2.8 (stat) ± 2.4 (syst) MeV, where the first error is statistical and the second systematic, which they interpreted as an M1 transition to the ηb(1S) [3]. This corresponds to an ηb(1S) mass of 9388.9 +3.1/−2.3 (stat) ± 2.7 (syst) MeV/c², with a corresponding Υ(1S)–ηb hyperfine mass splitting of 71.4 +2.3/−3.1 (stat) ± 2.7 (syst) MeV/c². The measured Υ(1S)–ηb splitting is consistent with potential model predictions, although a significant subset of predictions lie outside the experimental one-sigma error bounds [12]. A recent lattice-QCD calculation predicts a value of 61 ± 14 MeV/c², which is consistent within the large errors [4]. Two recent calculations using a perturbative-QCD approach predict splittings of 39 ± 14 MeV/c² [13] and 44 ± 11 MeV/c² [14], both being over two standard deviations away from the BABAR measurement. One can see that the recent BABAR result poses a serious challenge to theorists, which should spur renewed effort to improve calculations. More precise measurement of the ηb mass would allow precision tests of lattice-QCD and perturbative-QCD calculations of the Υ–ηb splitting. The large amount of data that BABAR has accumulated on the Υ(3S) state should allow searches for other missing bb¯ states. In particular, it may be possible to observe the ηb(2S) state via M1 transitions. Many models predict the branching ratio to the ηb(2S) to be only a factor of 2 or 3 smaller than that to the ηb(1S), and therefore possibly observable.
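(As a quick consistency check — my arithmetic, not a calculation from the paper — for a Υ(3S) at rest decaying to γ + ηb, two-body kinematics give

$$ E_\gamma = \frac{M_{\Upsilon(3S)}^2 - M_{\eta_b}^2}{2M_{\Upsilon(3S)}} \quad\Longrightarrow\quad M_{\eta_b} = \sqrt{M_{\Upsilon(3S)}^2 - 2M_{\Upsilon(3S)}E_\gamma}. $$

Taking M(Υ(3S)) ≈ 10355.2 MeV/c² and Eγ = 921.2 MeV yields M(ηb) ≈ 9389 MeV/c², in agreement with the quoted mass.)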
Other interesting possibilities consist of searching for the hb(1¹P₁) in the process Υ(3S) → π⁰hb(1¹P₁) → π⁰γηb and the sequential process Υ(3S) → π⁺π⁻hb(1¹P₁) → π⁺π⁻γηb [15]. The discovery of these states would represent an important step in completing the bottomonium spectrum and provide an important test of QCD-based models and calculations. Measurement of the hyperfine mass splittings between the triplet and singlet quarkonium states is crucial to understanding the role of spin-spin interactions in quarkonium models and in testing QCD calculations [7]. The author gratefully acknowledges Nathan Isgur and Jon Rosner for teaching him much of what he knows about this subject. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.

1. S. W. Herb et al., Phys. Rev. Lett. 39, 252 (1977)
2. W. Kwong, J. L. Rosner, and C. Quigg, Annu. Rev. Nucl. Part. Sci. 37, 325 (1987)
3. B. Aubert et al. (BABAR Collaboration), Phys. Rev. Lett. 101, 071801 (2008)
4. A. Gray, I. Allison, C. T. H. Davies, E. Gulez, G. P. Lepage, J. Shigemitsu, and M. Wingate (HPQCD and UKQCD Collaborations), Phys. Rev. D 72, 094507 (2005)
5. E. Eichten and K. Gottfried, Phys. Lett. B 66, 286 (1977)
6. W. Celmaster, H. Georgi, and M. Machacek, Phys. Rev. D 17, 886 (1978)
7. N. Brambilla et al. (Quarkonium Working Group), arXiv:hep-ph/0412158
8. J. L. Rosner et al. (CLEO Collaboration), Phys. Rev. Lett. 95, 102003 (2005)
9. S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985)
10. E. Eichten, S. Godfrey, H. Mahlke, and J. L. Rosner, arXiv:hep-ph/0701208
11. G. Bonvicini et al. (CLEO Collaboration), Phys. Rev. D 70, 032001 (2004)
12. S. Godfrey and J. L. Rosner, Phys. Rev. D 64, 074011 (2001); 65, 039901(E) (2002)
13. B. A. Kniehl, A. A. Penin, A. Pineda, V. A. Smirnov, and M. Steinhauser, Phys. Rev. Lett. 92, 242001 (2004)
14. S. Recksiegel and Y. Sumino, Phys. Lett. B 578, 369 (2004)
15. S. Godfrey and J. L. Rosner, Phys. Rev. D 66, 014012 (2002)
16. W.-M. Yao et al., J. Phys. G: Nucl. Part. Phys. 33, 1 (2006)

About the Author

Stephen Godfrey received his B.A.Sc. from the University of Toronto in 1976, his M.Sc. from the Weizmann Institute in 1978, and his Ph.D. from the University of Toronto in 1983. He was a postdoctoral fellow at TRIUMF in Vancouver and at Brookhaven National Laboratory before becoming an Assistant Professor at the University of Guelph in 1987. He moved to Carleton University in Ottawa, Canada, in 1990, where he is currently a Professor of Physics. His research is in particle physics phenomenology, ranging from hadron spectroscopy to physics beyond the standard model at the LHC.
Open Access

Data model, dictionaries, and desiderata for biomolecular simulation data indexing and sharing

Julien C. Thibault¹, Daniel R. Roe², Julio C. Facelli¹ and Thomas E. Cheatham III²

Journal of Cheminformatics 2014, 6:4
DOI: 10.1186/1758-2946-6-4
Received: 30 September 2013 | Accepted: 15 January 2014 | Published: 30 January 2014

Few environments have been developed or deployed to widely share biomolecular simulation data or to enable collaborative networks to facilitate data exploration and reuse. As the amount and complexity of data generated by these simulations is dramatically increasing and the methods are being more widely applied, the need for new tools to manage and share this data has become obvious. In this paper we present the results of a process aimed at assessing the needs of the community for data representation standards to guide the implementation of future repositories for biomolecular simulations. We introduce a list of common data elements, inspired by previous work, and updated according to feedback from the community collected through a survey and personal interviews. These data elements integrate the concepts for multiple types of computational methods, including quantum chemistry and molecular dynamics. The identified core data elements were organized into a logical model to guide the design of new databases and application programming interfaces. Finally, a set of dictionaries was implemented to be used via SQL queries or locally via a Java API built upon the Apache Lucene text-search engine. The model and its associated dictionaries provide a simple yet rich representation of the concepts related to biomolecular simulations, which should guide future developments of repositories and more complex terminologies and ontologies. The model remains extensible through the decomposition of virtual experiments into tasks and parameter sets, and via the use of extended attributes. The benefits of a common logical model for biomolecular simulations are illustrated through various use cases, including data storage, indexing, and presentation. All the models and dictionaries introduced in this paper are available for download online.

Keywords: Biomolecular simulations; Molecular dynamics; Computational chemistry; Data model; Repository; XML; UML

Background

Thanks to a dramatic increase in computational power, the field of biomolecular simulation has been able to generate more and more data. While the use of quantum mechanics (QM) is still limited to the modelling of small biomolecules [1] composed of less than a few hundred atoms, atomistic or coarser-grain molecular representations have allowed researchers to simulate large biomolecular systems (i.e. with hundreds of thousands of atoms) on time scales that are biologically significant (e.g. milliseconds for protein folding) [2]. Classical molecular dynamics (MD) and hybrid approaches such as quantum mechanics/molecular mechanics (QM/MM) are some of the most popular methods used to simulate biomolecular systems. With the explosion of data created by these simulations — generating terabytes of atomistic trajectories — it is increasingly difficult for researchers to manage their data. Moreover, results of these simulations are now becoming of interest to bench scientists to aid in the interpretation of increasingly complex experiments, and to other simulators for assessing force fields and developing coarse-grain models.
Opening these large data sources to the community, or at least within collaborative networks, will facilitate the comparison of results to detect and correct issues with the methods, identify biologically relevant patterns or anomalies, and provide insight for new experiments. While the Protein Data Bank [3] is very useful as a central repository for structural data, the number of repositories for biomolecular simulations is still very limited. To the best of our knowledge the only databases that currently provide access to MD data for the community are Dynameomics [4, 5] and MoDEL (Molecular Dynamics Extended Library) [6]. Dynameomics and MoDEL were populated with about 11,000 and 17,000 MD trajectories of proteins, respectively. One of the problems with such repositories is that the published data was generated in a specialized environment to study a given biological process (e.g. protein folding), resulting in fairly homogeneous data compared to the range of methods and software available to the community. These repositories are somewhat tied to these environments, and it is uncertain how one would publish data generated outside them or how external systems would index or interface with these repositories. As more repositories are created, the need for a common representation of the data becomes crucial to achieve semantic interoperability and enable the development of federated querying tools and scientific gateways. Note that other efforts to build repositories and scientific gateways, such as the BioSimGrid project [7] and work by Terstyanszky et al. [8], have been undertaken, but so far none has been widely adopted outside its original deploying institution or organization. In the computational quantum chemistry community, more progress has been achieved towards the development of repositories using standard data representations to enable collaborative networks. One of the main on-going efforts is led by the Quixote project [9], which aims to create a federated infrastructure for quantum chemistry calculations where data is represented with CML CompChem (Chemical Markup Language – Computational chemistry [10]) and integrated into the semantic web through RDF (Resource Description Framework). The Chemical Markup Language [11] (CML) and its computational component CML-CompChem aim to provide a standard representation of computational chemistry data. While the core CML XML schema specifies the requirements to represent molecular system topologies and properties, CML-CompChem supplements CML to allow the representation of computational chemistry data, including input parameters and output data (calculations). So far these extensions have mainly focused on representing quantum computational chemistry experiments as XML files. These files can be created by converting input and/or output files generated by a particular software package through file parsers such as the ones supported by the Blue Obelisk group [12] (e.g. Chemistry Development Kit, Open Babel). While CML-CompChem has great potential for QM calculations [13], its usefulness for MD and biomolecular simulations in general might be limited. For example, trajectories of atomic positions typically need to be compressed or binary-encoded for data movement, storage, and/or accuracy. Embedding this information into a verbose XML file such as CML will not be the optimal solution, at least not for the description and formatting of the raw output.
Another obstacle to the conversion of MD experiments to a single-file representation is the common definition of many separate input files (e.g. system topology, method parameters, force field) necessary to prepare an MD simulation and define the different iteration cycles (e.g. minimization, equilibration, production MD). In quantum chemistry, the targeted molecules and calculation parameters are typically defined in a single input file (e.g. a “.com” file for Gaussian [14] or a “.nw” file for NWChem [15]), which makes this conversion much simpler. The output files generated by quantum chemistry software packages usually already contain the final results the user is interested in, while in MD the raw output – i.e. multiple files containing the trajectories of atomic positions, energies, and other output information – has to be further processed through various analysis tasks to create meaningful information. These post-processing steps involve the creation of new input and output files, making the conversion of an experiment to a single XML file even more difficult. Perhaps one of the main barriers to building repositories for biomolecular simulations is the lack of standard models to represent these simulations. To the authors’ knowledge no published study has assessed the needs of the community regarding biomolecular simulation repository data models. Therefore it is unclear which pieces of information are considered essential by researchers and how they should be organized in a computable manner, so that users can:

• Index their data and build structured queries to find simulations or calculations of interest, not only via the annotations, but also with access to the raw data (files).
• Summarize, present, and visualize simulation data either through a web portal or more static documents (e.g. PDF document, XML file).

These models should be designed to include not only the description of the various independent computational tasks performed but also a high-level description of the overall simulated experiment. Each experiment can be related to multiple concepts that help in understanding what was simulated, how, and in which context. These concepts can be grouped into the following categories:

• Authorship: information about the author, grants, and publications related to the experiment
• Methods: computational method description (e.g. model building, equilibration procedure, production runs, enhanced sampling methodology) and associated inputs/parameters
• Molecular system: description of the simulated molecules from a structural, chemical, and biological point of view
• Computational platform: description of the software used to run the computational tasks, the host machine (computational environment), and execution configuration
• Analysis: derived data that can be used for quality assessment of the simulations
• Files: information about the raw simulation input and output files, such as format, size, location, and hosting file system

In this study we describe our efforts to formalize the needs of the community regarding the elements necessary to index simulation data. This work was initiated in part to support the iBIOMES (Integrated BIOMolEcular Simulations) project [16], an effort to create a searchable repository for biomolecular simulations, where the raw data (input and output files) is made available so that researchers can rerun the simulations or calculations, or reuse the output to perform their own analysis.
In the initial prototype a set of software-specific file parsers was developed to automatically extract common data elements (metadata) and publish the raw data (i.e. the input and output files) to a distributed file system using iRODS (integrated Rule-Oriented Data System [17]). The published files and collections of files (experiments) are indexed based on the extracted data elements, which are stored as attribute-value-unit triplets in a relational database. In this paper we introduce a list of common data elements and a data model that will help iBIOMES and future biomolecular simulation data repository developments move towards semantic interoperability.

Motivation for a common data representation: examples

The development of a common framework for data representation provides users with a large amount of flexibility to develop new tools for managing the data while maintaining interoperability with external resources. In this section we present three different examples that demonstrate the need for a standard representation of biomolecular simulation data, whether it is for indexing or presentation to the user. All three examples have been implemented to some extent in prototype form. The first example is based on our experience with iBIOMES [16], where simulation-specific metadata is associated at the file or directory level, through a specialized file system (iRODS [17]). The second example shows how one would use a model-based approach to build a repository where simulation parameters and provenance metadata are stored in a relational database. Finally, the last example illustrates how a model-based API (Application Programming Interface) can be used to automatically generate XML and HTML summaries for the simulations being published.

Example 1: building a repository based on file annotations

One of the simplest ways to index simulations is to tag the associated files and directories with user annotations summarizing their content. These tags can simply be stored in a database or indexed via dedicated systems such as Hadoop [18, 19] or Apache Lucene [20]. This approach is well suited for fast searches based on keywords or attribute-value pairs. In the iBIOMES system [16] these tags are managed by the iRODS framework [17], which enables the assignment of attribute-value-unit triplets to each file and directory in a distributed file system. This approach is very flexible since it allows the use of tags that represent common concepts, such as computational methods and biological features, as well as user- or lab-specific attributes. In iBIOMES, a catalogue of common attributes was defined for users to annotate their data. The definition of such attributes is important as they can be tied to actionable processes, such as analyses, visualizations, and ultimately more complex workflows. It is then possible to build a user interface that presents the data and performs certain actions based on the existence of certain attributes or their associated values. For example, if the format of a file is PDB (File format = “PDB”), then the user interface could enable 3D rendering of the associated molecules through Jmol [21]. A data dictionary that offers possible values for a particular attribute is important as well. Each term should be well defined to leave no ambiguity to the user. A dictionary of force fields, for example, could list all the common force fields with a textual description, a type (e.g. classical, polarizable, coarse-grained), and the associated citations for each entry.
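To make the idea concrete, here is a minimal sketch of attribute-value-unit tagging and lookup. The function and tag names are invented for illustration; a real deployment would go through the iRODS metadata catalogue rather than an in-memory index.

```python
from collections import defaultdict

# Attribute-value-unit (AVU) triplets attached to files, in the spirit of
# the iRODS metadata catalogue. All names below are illustrative only.
index = defaultdict(set)      # (attribute, value) -> set of file paths
avus = {}                     # path -> list of (attribute, value, unit)

def tag(path, attribute, value, unit=None):
    avus.setdefault(path, []).append((attribute, value, unit))
    index[(attribute, value)].add(path)

tag("/proj/rna_md/min1.pdb", "File format", "PDB")
tag("/proj/rna_md/md1.out", "Temperature", "300", "K")
tag("/proj/rna_md/md1.out", "Force field", "AMBER FF99SB")

# Query: everything tagged as PDB. A user interface could react to this
# attribute, e.g. by enabling 3D rendering through Jmol.
print(index[("File format", "PDB")])
```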
A catalogue of common data elements, associated with a data dictionary, is also useful for users to pick from to facilitate annotations and build queries. The catalogue used in iBIOMES was defined internally by our lab and probably is not yet sufficiently exhaustive for the community at large. However, creating a catalogue of common data elements (CDE) supported by the community is a first step towards the standardization of biomolecular simulation data description. Defining a subset as recommended (i.e. the core data elements) would go a step further and set a criterion to assess the quality of the data publication process. Finally, linking these CDEs to existing terminologies or ontologies would bring semantic meaning to the annotations, enabling data discovery and query via external systems.

Example 2: building a repository based on a relational database

While a CDE catalogue is important, it lacks the representation of relationships between elements unless it is linked to a well-structured taxonomy. For example, if a user is interested in simulations of nucleic acids, a hierarchical representation of biomolecules could be used to infer that the user is actually looking for any simulation of DNA or RNA. The aim of a logical data model is to give a representation of the domain that captures the business needs and constraints while being independent from any implementation concern [22]. Such a model can provide the foundations for the design of a database and can be used to automatically generate API skeletons using modern modelling tools (e.g. Enterprise Architect, ArgoUML, Visual Paradigm). Since it is a domain-specific representation of the data, it can also serve as a starting point to develop a terminology or ontology specific to this domain. In this second example we demonstrate how a data model could be used to prototype a repository for biomolecular simulations where simulation parameters and provenance metadata are organized and stored in a relational database. We created a UML (Unified Modeling Language) model including logical and physical entities to build a relational database that could eventually be wrapped as a Grid service. The Grid [23] represents a great infrastructure for collaboration because of the underlying authentication scheme and data discovery services available, but also because of the semantic and syntactic integration. For this example we decided to mock up a data grid service using the caGrid [24] framework. caGrid was supported by the National Cancer Institute (NCI) and aimed to create a collaborative network for researchers to share cancer data, including experimental and computational data. The caCORE (cancer Common Ontologic Representation Environment) tools that were developed in this context facilitate the creation of the grid interfaces by automatically generating the necessary Java code from a UML model. These tools are now maintained by the National Cancer Informatics Program (NCIP) and are available online. For this example we mapped the logical model to a data model using the caAdapter graphical tool. The final UML model and database creation scripts for MySQL are available for download. More details about the UML model are provided in the section introducing the logical data model. The caCORE SDK (Software Development Kit) was then used to generate the Hibernate interfaces to the database, along with a web interface that can be used to create simple queries or browse the published data.
A screenshot of the generated interface is given in Figure 1 (a listing of various published computational tasks). To actually build and deploy the data service onto a Grid, one would have to use the Introduce module. Semantic integration is also possible via the Semantic Integration Workbench (SIW), which enables tagging of the domain model with concepts from standard terminologies (e.g. ChEBI, Gene Ontology).

Figure 1 Screenshot of the web interface generated via the caGrid tools. The screenshot presents a listing of the computational tasks that were published into the caGrid test system. The user request was automatically translated into a SQL query via Hibernate to return the rows from the tables mapping to the class ExperimentTask and its child classes MinimizationTask (minimizations), MDTask (MD runs), and QMTask (QM calculations). For each row, a set of get methods (e.g. getSoftware) links to the associated objects for more details (e.g. software name and version).

Example 3: representing experiments using XML

While a database provides a single endpoint to query data, other types of data descriptors become necessary when moving data between file systems, or simply to provide a light-weight description of the data. XML has been widely adopted by the scientific community to represent structured data because of its flexibility and support by web technologies. In the field of computational chemistry, CML-CompChem [10] aims to provide a detailed representation of computations but currently lacks support in the molecular dynamics community. BioSimML (Biomolecular Simulation Markup Language [25]) was developed specifically for biomolecular modelling and supports QM/MM simulation representations, but its current status is uncertain. The Unified Molecular Modeling (UMM) XML schema [26] is currently being developed by ScalaLife (Scalable Software for Life Sciences) and will attempt to provide a detailed description of MD runs, so that these files can be used as a standard input to run within various MD engines. So far these XML-based formats have focused on giving a low-level representation of the simulation runs so that data can be converted between legacy formats. In this example we generate an XML-based representation of the experiment as a whole (multiple tasks), with a limited granularity for the description of each task. For this purpose we developed a Java API based on our logical model to generate XML representations of experiments (Figure 2). Format-specific file parsers developed for the iBIOMES project [16] read in input and output files associated with an experiment to create an internal representation of the experiment and its associated computational tasks. In the Java code, classes are annotated with Java Architecture for XML Binding (JAXB) annotations to map the logical model to an XML schema. The JAXB API can then be used to automatically output XML documents based on the internal Java representation of the experiment, or to read in an XML file to build the Java objects. The same process could be implemented in various languages, using CodeSynthesis XSD in C++ or PyXB in Python for example.

Figure 2 Generating an XML representation of experiments using a Java API. The Java API is used to parse the input files and create an internal representation of the virtual experiment as a set of computational tasks. JAXB is then used to generate an XML representation of this internal model, while XSLT is used to perform a final transformation into a user-friendly HTML page.
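As a rough illustration of the same idea using Python's standard library (the element and attribute names are a made-up stand-in, not the actual iBIOMES or CML-CompChem schema):

```python
import xml.etree.ElementTree as ET

# Build an in-memory representation of a virtual experiment as a set of
# tasks, then serialize it to XML. Element names here are illustrative.
experiment = ET.Element("experiment", name="RNA MD study")

task = ET.SubElement(experiment, "task", type="MD")
ET.SubElement(task, "software", name="AMBER", version="12")
params = ET.SubElement(task, "parameterSet")
ET.SubElement(params, "parameter", name="nsteps", value="500000")
ET.SubElement(params, "parameter", name="timestep", value="2", unit="fs")

print(ET.tostring(experiment, encoding="unicode"))
```

The resulting document could then be passed through an XSLT stylesheet to produce an HTML view, as described for Figure 2.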
The XML output does not aim to be sufficient to recreate input or output files in legacy formats, but it will provide enough information for users to rapidly understand the computational methods and structures represented by the associated raw data. This type of XML document can be used as a way to give a detailed summary of experiments when exchanging data, compressed with the raw data for example. These documents can be transformed through XSLT (eXtensible Stylesheet Language Transformations) to be rendered as HTML pages and to build repository web interfaces. A sample XML output along with an HTML-based tree view generated through XSLT is presented in Figure 3. For this example a set of AMBER-specific [27] file parsers was used to parse a directory containing all the input and output files associated with an MD study of RNA. Common data elements related to the molecular system topology were extracted from the AMBER parameter/topology file, while task (minimization and MD runs), parameter set (e.g. implicit solvent, number of iterations), and computational platform information were extracted from the AMBER MD output files.

Figure 3 XML and HTML-based representations of an experiment. Auto-generated XML sample (left) and corresponding HTML tree view (right) representing the different tasks run for an MD study of RNA using the AMBER software package.

These three prototypes serve as examples demonstrating the need for a catalogue of CDEs and the representation of relationships between concepts through a data model. The catalogue of CDEs, associated with a data dictionary, provides the basis for a controlled vocabulary that can be used to annotate experiment data (e.g. files and directories) and build queries. The data model provides extra information as it links concepts together and allows more complex and structured queries, through a relational database for example. The second example showed how modern software engineering tools can use data models to generate database schemas and APIs for repository developments. Finally, the last example showed that XML representations can be easily generated if the API follows a model-based approach. In this paper we introduce a list of CDEs built upon community feedback, and a logical model that ties dictionaries and common data elements together. Common data elements for simulation data indexing and presentation were identified through a survey, while recommendations are made for trajectory and analysis data description. The common data elements were organized through a logical data model, which was refined to include dictionaries and minimize data redundancy. Finally, the design and implementation of a subset of these dictionaries is introduced.

Identification of core data elements

A survey was distributed to the community to assess the list of data elements that was defined in iBIOMES [16]. This initial list of common data elements was based on the BioSimGrid [7] data model and supplemented with new elements to reflect the needs of our lab and various collaborators at the University of Utah, and to add descriptions of quantum chemistry calculations. The main goal of the survey was to identify which elements were missing and which ones were not so important according to the community. A list of 47 data elements describing simulation runs and the associated files was presented to experts.
These data elements were grouped into 7 categories for organizational purposes: authorship (user information and referenced citations related to a particular run), platform (hardware/software), molecular system (molecules being studied, independently from the model chosen), molecules (information about the molecules composing the system), methods (applying to any method, including QM and MD), molecular dynamics, and quantum mechanics. The experts were asked to score the data elements based on how important they are to them to describe their own data and/or to index community data and build search queries. Scoring was based on a Likert scale (1 = “Not important at all”, 2 = “Not very important”, 3 = “Not sure”, 4 = “Important”, 5 = “Very important”, and “N/A” for non-applicable). In each group, the experts were also allowed to propose missing data elements and/or comment on the listed elements. The survey was made available online (see extract in Additional file 1) in March 2012 for about a month and promoted through the Computational Chemistry List (CCL) and the AMBER developers’ mailing list. The CCL list is a fairly well known group for general discussions related to computational chemistry, perhaps with an emphasis on QM-related methods. The AMBER developers group represents a variety of theoretical disciplines (MD, QM, QM/MM), with developments targeting various types of systems (e.g. proteins, nucleic acids, lipids, carbohydrates, small compounds) and discussions on how to best use the software, methods, and force fields. Individual emails were also sent to different research groups at the University of Utah that are specialized in computational chemistry.

Trajectory and analysis data

The survey did not include any analysis- or file-related data elements. The Dublin Core metadata can be used as a good reference to describe files at a high level (e.g. author, format). Analysis data on the other hand is very complex to describe because of its direct relation to the raw data it derives from (e.g. use of multiple input files representing experimental and computed data) and the existence of numerous analysis methods that can be problem-specific (e.g. protein vs. RNA, QM vs. MD). In most cases it will not make sense to use analysis data to index an experiment either. For example, looking for MD trajectories with a particular RMSD (root mean square deviation) value would be irrelevant without providing more context about the system and the method used to calculate the value. Although analysis data is a key factor to assess the quality of a simulation, its use for data indexing and retrieval is not trivial and therefore was not included in the survey. A generic framework for the description of trajectory and derived data is nevertheless provided in the Results section.

Logical model

The logical model presented here was derived from a conceptual model that organized all the identified common data elements into a defined domain. The conceptual model was reduced into a logical model with the assumption that the raw input and output files are made available (in a repository similar to iBIOMES or MoDEL) and that the model would be used to index the data rather than providing a complete view of the results (e.g. calculation output, structures defined in each MD trajectory frame). Although analysis data and quality criteria are crucial to provide an objective perspective on experiment results, no associated concept was included in the current model.
The granularity of the model was limited to a level of detail sufficient to make it computable. For example, the description of the theory behind modelling methods is not part of the model. The end goal being to share the results of the simulations or calculations with the community, we limited our model to include only popular methods that are used for the study of biomolecules or smaller ligands.

Use of dictionaries

One of the main features of this logical model is the integration of dictionaries to avoid data redundancy. For example, a dictionary containing definitions of force fields (e.g. name, type, citations) can be referenced by molecular dynamics tasks, instead of creating individual force field definition entries every time the force field is used. The integration of dictionaries into the model should not enforce mappings to standard definitions, but rather enable links between specific values and standard definitions only if they exist. If no mapping exists, the user should still be able to publish the data. This is achieved through the storage of “specific names” outside the dictionaries with an optional reference to the term definition, where the standard version of the name (not necessarily different) is defined. For example, if the basis set “LANL2DZ” is used in a QM calculation but no corresponding entry exists in the basis set dictionary, the name of the basis set will still be stored in the database when publishing the data, to allow queries on the calculation. Certain attributes need to be associated with a unit to be understood by a human or a computer. Different software packages might use different units to represent the same attribute. For example, distances in AMBER [27] are measured in Ångströms while GROMACS [28] uses nanometres. When publishing data to a repository one should either convert the values using units previously agreed upon or make sure that the units are published along with the values. In both cases, mechanisms should be in place to provide a description of the units when pulling data from the repository. For the description of this model we assume that the units are already set in the repository, therefore they are not included in the description of the model. While most of the data described in a logical model for biomolecular simulations can be directly parsed from the input and output files, dictionaries containing standard definitions and values for certain data elements need to be prepopulated. In this paper we present the design and implementation of several dictionaries that can be used to facilitate data publication and queries. For example, if a user is interested in QM calculations based on Configuration Interaction (CI) theory, a dictionary of all CI methods will be needed to return all the calculations of interest (e.g. CISD, CISD(T)). Another interesting use of these dictionaries is within the code of the file parsers. Instead of defining standard values within the code, one can use these dictionaries to look up information on the fly, and possibly use it to publish the data into the target repository. An initial set of dictionaries was populated using the BioSimGrid [7] database dictionaries (source code available online). They were then refined internally and supplemented with new dictionaries, especially to include QM-related definitions (e.g. basis sets, QM methods).
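A small sketch of what such a dictionary-backed design might look like, using SQLite for portability. The schema, rows, and citations are illustrative only, not the published dictionaries:

```python
import sqlite3

# A minimal force-field dictionary: standard definitions referenced by
# published tasks instead of being duplicated per task.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE force_field_dict (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE,   -- standard name, e.g. 'AMBER FF99SB'
    type TEXT,          -- classical / polarizable / coarse-grained
    citation TEXT)""")
db.executemany(
    "INSERT INTO force_field_dict (name, type, citation) VALUES (?, ?, ?)",
    [("AMBER FF99SB", "classical", "Hornak et al. 2006"),
     ("CHARMM27", "classical", "MacKerell et al. 2000")])

# A task table stores the user-supplied ("specific") name plus an
# *optional* dictionary reference, so unmapped names can still be published.
db.execute("""CREATE TABLE md_task (
    id INTEGER PRIMARY KEY,
    ff_specific_name TEXT,
    ff_dict_id INTEGER REFERENCES force_field_dict(id))""")
db.execute("INSERT INTO md_task (ff_specific_name, ff_dict_id) "
           "VALUES ('ff99SB', 1)")

# Query all tasks that used a classical force field, via the dictionary.
for row in db.execute("""SELECT t.id, d.name FROM md_task t
                         JOIN force_field_dict d ON t.ff_dict_id = d.id
                         WHERE d.type = 'classical'"""):
    print(row)
```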
Identification of core data elements

At the closing of the survey we were able to collect 39 responses (20 through CCL, 10 through the AMBER developers list, and 9 through emails). The results of the survey are presented in Additional file 2. The respondents listed a few data elements they felt were missing from the proposed list or that needed to be refined (see comments in Additional file 3). For instance, in the authorship category, a data element representing research grants was missing. For the representation of the molecular system, data elements representing important functional groups of the solute molecules should be added, along with, optionally, the apparent pH of the solvent. Adjustments should also be made to distinguish the different species in the system, and flag them as part of the solvent or the solute. For the computing environment information, a respondent showed interest in knowing whether the software package is compiled in single, double, or mixed precision, what the memory requirements are for a run, and even what parallelization scheme is used. All these elements are very technical and might interest only a very limited number of users, even in the developers’ community. The notion of hardware architecture was not clearly defined in the survey, since it should have already included the use of GPUs (see comment in Additional file 3). A better representation of the hardware architecture can be achieved through three different data elements: the CPU architecture (e.g. x86, PowerPC), the GPU or accelerator architecture (e.g. Nvidia GeForce GTX 780, AMD Radeon HD 7970, Intel PHI), and possibly a machine or supercomputer architecture identification (e.g. Cray XK7, IBM Blue Gene/Q, commodity Infiniband cluster) and name. For the computational methods, data elements were missing for the representation of both MD- and QM-specific parameters. In QM, the following elements were missing: exchange-correlation functionals (for DFT), pseudopotentials and plane-wave cut-offs, and whether frozen-core calculations are performed or not. Some comments pointed out that the notion of convergence can be very subjective, especially when dealing with MD trajectories where multiple minima (conformations) can be found over time (see comments in Additional file 3). The convergence flag and criteria were assigned as QM-specific data elements to reflect this. For MD, the context of the run (i.e. whether it is a minimization, an equilibration, or a production run) was missing. Representations of restraints and advanced sampling methods (e.g. replica-exchange, umbrella sampling) were also missing. More detailed properties were listed by the respondents. These included the order of expansion for LINCS-based constraints and the order of interpolation for Particle-Mesh Ewald. At this point it is not clear whether such parameters need to be tracked, since users would hardly use them to create queries and we assume that they can be directly read from the raw input files if necessary. Based on the results of the survey and the various comments from the community, we propose a set of common data elements for biomolecular simulation data indexing, listed in Additional file 4. The table reorganizes the granularity of the identified elements by making a distinction between data elements (concepts) and attributes (properties). For example, the barostat data element has at least one property: an implementation name (e.g. “Andersen”, “Berendsen”).
Depending on the type of barostat, other properties could include a time constant and a chain length (e.g. Nosé-Hoover barostat). We also included “derived” properties that could be inferred from other properties if the right terminology or dictionary is available. For example, the name of a QM method (e.g. MP2, B3LYP) should be enough to infer the level of theory (e.g. Møller-Plesset, DFT), and the name of the force field (e.g. AMBER FF99SB) should be sufficient to infer its type (e.g. classical). This distinction is important as it can help developers choose which properties should actually be stored (e.g. in a database or an XML file) and which ones could be inferred. The set also contains recommended and optional data elements/attributes. An attribute is marked as recommended if its average score (i.e. the sum of Likert-scale scores divided by the number of responses for that element) is greater than 4.0 (“Important”); otherwise it is marked as optional. Attributes proposed by the respondents were categorized through an internal review performed by our lab, composed of researchers running molecular dynamics simulations and quantum chemistry calculations on a daily basis. A data element is considered recommended if it has at least one recommended attribute. The current list contains 32 data elements and 72 attributes (including 30 recommended attributes). We recognize that the process by which the data elements were defined and characterized is not perfect. Although the number of respondents was fair (between 37 and 39 depending on the data element), certain data elements had to be added or redefined based on an internal review by some of our lab members, which might have created some bias towards the needs of our lab rather than a general consensus in the community. Despite these limitations, the list of data elements proposed here may be considered the first attempt to summarize the needs of the computational chemistry community to enable biomolecular simulation data indexing and queries. This list should be a good starting point to create a list of standard metadata to tag files using simple attribute-value pairs or attribute-value-unit triplets, as is the case for iBIOMES via the iRODS metadata catalogue [17]. Although this list is fairly exhaustive, it is not complete, and we hope that by publishing it the community will be able to provide more feedback and build on it, the intent being for this data model to be extensible. The list is available on the iBIOMES Wiki; field experts who want to contribute to the list can request an account on the wiki.

Trajectory files

In most MD software packages the computed trajectories of atomic coordinates are stored in large files (~MB-TB), each containing one or multiple time frames (e.g. PDB, AMBER NetCDF, DCD). This is the raw data that repositories would actually store and index for retrieval. Until now we have been focusing on the description of the computational tasks that were used to generate this data, i.e. the provenance metadata. This metadata can be used to find a given experiment and all associated trajectory files. On the other hand, new attributes need to be assigned at the trajectory file level to describe their content and ultimately enable automatic data extraction and processing by external tools (e.g. VMD [29], CPPTRAJ [30], MDAnalysis [31]).
Trajectory files
In most MD software packages the computed trajectories of atomic coordinates are stored in large files (~MB-TB), each containing one or multiple time frames (e.g. PDB, AMBER NetCDF, DCD). This is the raw data that repositories would actually store and index for retrieval. Until now we have been focusing on the description of the computational tasks that were used to generate this data, i.e. the provenance metadata. This metadata can be used to find a given experiment and all associated trajectory files. New attributes, on the other hand, need to be assigned at the trajectory file level to describe the file contents and ultimately enable automatic data extraction and processing by external tools (e.g. VMD [29], CPPTRAJ [30], MDAnalysis [31]). Such attributes include the number of time frames, the time between frames, the number of atoms in the system and/or a reference to the associated topology file, the presence or absence of box coordinates, velocity information, and so on. It is important to note that the use of self-descriptive formats such as NetCDF would allow trajectory files to carry not only the description of the dataset but also the provenance metadata, for example using the CDEs previously defined. Perhaps one of the most important attributes for giving context within a full experiment is the index of a trajectory file within the set of all trajectory files representing a given task or series of tasks. Although self-descriptive formats could easily keep track of this information, it is non-trivial to generate such an index, since tasks can be run independently, outside of a managed workflow such as MDWeb [32], which would be able to assign these indexes at file creation time. The order of trajectory files is therefore commonly inferred from their names (e.g. "1.traj, 2.traj, 3.traj"). This approach usually works well, although some errors might occur when trying to automate the ordering process: for example, "10.traj" would be ranked before "2.traj" (vs. "02.traj") if a straight string comparison is performed. Strict naming conventions for trajectory data (raw, averaged, and filtered on space or time) should help circumvent these problems.
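A natural-sort key illustrates the ordering pitfall just described; this is a generic sketch, not part of the proposed model.

import re

def natural_key(name: str):
    # Split the file name into text and integer chunks so that numeric
    # parts compare as numbers rather than as strings.
    return [int(chunk) if chunk.isdigit() else chunk
            for chunk in re.split(r"(\d+)", name)]

files = ["10.traj", "2.traj", "1.traj"]
print(sorted(files))                   # ['1.traj', '10.traj', '2.traj']
print(sorted(files, key=natural_key))  # ['1.traj', '2.traj', '10.traj']

Zero-padded names ("02.traj") or a strict naming convention make even the plain string sort safe, which is the simpler fix recommended above.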
Analysis data
Although some analysis tasks are common to most biomolecular systems for a particular method (e.g. RMSD calculations of each frame in the trajectory against a reference structure), the number of analysis calculations one can perform is virtually infinite. There is currently no standard to describe the output of an analysis. Some formats might enable the description of the values (e.g. a simple CSV or tab-delimited file with labelled columns and/or rows), but more structured files are required to describe the actual analysis process that generated the set of values contained in the file. Formats such as NetCDF are adapted to storing this kind of description but are not commonly used to store biomolecular simulation analysis data. Instead, comma- or tab-delimited file formats are usually preferred for their simplicity, readability, and support by popular plotting tools (e.g. MS Excel, OpenOffice, XmGrace). Assuming that the dataset is physically stored in such a file or in a relational database, a minimal set of attributes should be defined to facilitate reproduction of the analysis, as well as to enable reading and loading into visualization tools with minimal user input. We believe that the strategy used in the NetCDF framework to break data down into variables with associated dimensions is a simple and logical one, and so we follow a similar strategy here.
– Data dimensions: define dimension sizes for the defined data sets (i.e. variables). Any number of dimensions (including zero if the data is scalar) can be defined.
– Data variables: the actual data. Each variable reports the type (e.g. integer, float), labels, and units for all the values contained in a given set. One or more dimensions can be associated with a given variable based on its overall dimensionality: zero dimensions correspond to a single value (e.g. an average RMSD value), one dimension to an array (e.g. an RMSD time series), two dimensions to a matrix (e.g. a coordinate covariance matrix), etc.
Another set of attributes needs to be defined to represent the provenance metadata, i.e. how the analysis data was derived from the raw trajectories. Although different analysis tasks will require different input data types and parameters, a list of common attributes can be defined to provide a high-level description of the analysis task:
• Name (e.g. "RMSD") and description (e.g. "Root mean square deviation calculation") of the analysis method (see entries defined in our MD analysis method dictionary)
• Path to the input file describing the task (if applicable)
• Name and version of the program used, along with the actual command executed
• Execution timestamp
• Reference system, if any (self, experimental, or other simulated structure)
While these attributes might not be sufficient to automatically replicate the results, they should provide enough information for users other than the publisher to understand how the analysis data was generated and how the analysis task can be replicated. A further set of attributes can be defined to provide additional details on the scope of the analysis and describe in detail the data from which the current data was derived:
• File dependencies
• Filter on time
• Filter on space (e.g. heavy atoms only, a specific residue)
These would facilitate maximum reproducibility as well as enable detailed searches on very specific types of analysis. The 'File dependencies' attribute may include information such as the trajectory used in a given calculation, which could also be used to check whether the current analysis is up to date (e.g. if the trajectory file is newer than the analysis data, the analysis can be flagged as needing an update). The 'Filter on time' attribute might describe a specific time window or subset of frames used in the analysis. Since these attributes are perhaps not as straightforward for analysis programs to report as the others, they could be considered optional and/or set by the user after the data is published. The 'Filter on space' attribute could be particularly useful, since it would allow one, for example, to search for all analyses of a particular system done using only protein backbone atoms or only heavy atoms. However, this would require translating each individual analysis program's atom selection syntax into some common representation, which is no small task and would increase the size of the metadata dramatically for certain atom selections. In many cases the atoms used in the analysis could likely be inferred from the command used, so this attribute could also be considered optional. Two examples of how these attributes might be applied to common analysis data are given in Additional file 5.
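To make this concrete, the provenance attributes for a single RMSD analysis could be serialized as simple key-value pairs. The sketch below uses hypothetical values and Python syntax purely for illustration; it is not an output format prescribed by the model.

# High-level provenance description of one analysis task.
# All values are hypothetical; the keys follow the attribute lists above.
analysis_provenance = {
    "method_name": "RMSD",
    "method_description": "Root mean square deviation calculation",
    "input_file": "rmsd.in",
    "program_name": "CPPTRAJ",
    "command": "cpptraj -i rmsd.in",
    "execution_timestamp": "2014-01-15T10:32:00",
    "reference_system": "self",
    # Optional scope attributes:
    "file_dependencies": ["prod01.nc", "prod02.nc"],
    "filter_on_time": "frames 1-5000",
    "filter_on_space": "backbone atoms only",
}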
Logical model
In this model the central concept is the virtual experiment: a set of dependent computational tasks represented by several input and output files. The goal of this model is to help create a common description of these virtual experiments (stored in a database or a distributed file system, for example) for indexing and retrieval. The overall organization of virtual experiments is illustrated in Figure 4. For the rest of this paper, virtual experiments will simply be denoted as experiments. The organization of an experiment as a list of processes and tasks was inspired by the CML-CompChem [10] schema. In CML-CompChem the job concept represents a computer simulation task, and jobs can be grouped into a series of consecutive sub-tasks designated as a job list. The concepts of experiment, process group, process, and task are introduced here to handle the representation of tasks that might be run in parallel or sequentially, and that might target the same or different systems. An experiment process group is defined as a set of computational processes targeting the same molecular system, where a process is defined as a set of similar tasks (e.g. minimization tasks, MD tasks, QM tasks). In MD, the minimization-heating-production steps can be considered a single process group with 3 different process instances. If multiple copies of the system are simulated, each copy will be considered a separate process group. In QM, a process would represent a set of sequential calculations on a compound. If various parts of the overall system are studied separately (e.g. ligand vs. receptor), each subsystem should be assigned to a different process group.
Figure 4 Illustration of the data model used to represent virtual experiments. Each experiment is a set of tasks, grouped into processes (e.g. minimization, equilibration, production MD) and process groups applied to the same molecular system (e.g. B-DNA oligomer).
Within the scope of an experiment, multiple tasks and groups of tasks will be created sequentially or in parallel, based on intermediate results. To keep track of this workflow, dependence relationships (dependencies) can be created between tasks, between processes, and between process groups. In the following sections we present the overall organization of the model through an object-oriented approach where the concepts (e.g. experiments, tasks, parameter sets, and molecular systems) are represented by classes with attributes. The description is supported by several class diagrams using UML notation; for example, inheritance is represented by a solid arrow with an unfilled head going from the child to the parent class. Along with standard UML notations, we defined the following colour scheme to guide the reader:
• Blue: classes giving a high-level description of the experiments and tasks
• Yellow/orange: method/parameter description
• Green: classes describing the molecular system independently from the computational methods
• Pink: classes related to authorship and publication (e.g. citations, grants)
• Grey: description of the hardware or software used to run the tasks
Finally, classes representing candidates for dictionary entries are marked with wider borders.
Experiments, processes, and tasks
Figure 5 presents the concepts that can be used to describe the context of an experiment. Each experiment can be given a role, i.e. the general rationale behind the experiment. Examples of experiment roles include simulation (dynamics), geometry optimization, and docking. These roles should not be associated with any computational method in particular. Each experiment can be linked to a particular author (including institution and contact information) to allow collaborations between researchers with common interests. Publications related to a particular experiment (citations), or publications that use the results of the experiment, can be referenced. Grant information is important as well, since it allows researchers to keep track of what their funding actually supports.
Figure 5 Concepts used to describe the context of the experiments.
Experiment sets (Figure 2) are collections of independent experiments that are logically associated, either because of a similar context (e.g. study of the same system using different methods) or simply for presentation purposes or to ease retrieval by users (e.g. all the experiments created by a certain working group). An experiment can be assigned to multiple experiment sets. An experiment task corresponds to a unique computational task defined in an input file. Figure 6 presents the main concepts associated with experiment tasks. These include the definition of the actual calculation (e.g. frequency calculation and/or geometry optimization in QM, or whether the dynamics of the system are simulated), the description of the simulated conditions (reference pressure and temperature), and the definition of the method (e.g. QM, MD, minimization) and input parameters (e.g. basis set, force field). More details about the different types of tasks and simulation parameters are given in the computational methods section. Each task is executed within a computing environment, i.e. the set of hardware and software components used to run the simulation software package. These components include the operating system, the processor architecture, and the machine/domain name. Information about the task execution within the computing environment, including execution time, start and end timestamps, and termination status, can be tracked as well. The software information includes the name (e.g. "AMBER") and version ("12"). In certain cases a more specific name for the executable is available; this can provide extra information about the compilation step and/or the features available. In Gaussian [14], for example, this information can be found in the output files: "Gaussian 09" gives the generic version of the software package, while "EM64L-G09RevC.01" gives the actual revision number ("C.01") and the target architecture of the executable (e.g. Intel EM64). For AMBER, the executable name would be either "SANDER" (Simulated Annealing with NMR-Derived Energy Restraints) or "PMEMD" (Particle-Mesh Ewald Molecular Dynamics), two alternative programs for running MD tasks within the software package.
Figure 6 Description of experiments, processes, and tasks.
Computational methods
The most common methods for biomolecules include QM, MD, and hybrid QM/MM. In this model we focus on these methods, but we allow the addition of other methods by associating each task with one or multiple parameter sets that can be combined to create new hybrid approaches. This decomposition was applied to MD, minimization (e.g. steepest descent, conjugate gradient), QM, and QM/MM methods, as illustrated in Figure 7.
Figure 7 Organization of computational methods into tasks and parameter sets.
Method-specific tasks and parameter sets
Common attributes of any computational method are represented at the ExperimentTask level. These include the name (e.g. "Molecular dynamics"), a description (e.g. "new unknown method"), the type of boundary conditions (periodic or not), and the type of solvent (in vacuo, implicit, or explicit). Method-specific tasks (MinimizationTask, MDTask, QMTask, QMMMTask) are created to capture the parameters that would not be shared between all methods. Simulation parameters include any parameter related to the method or task that would be set before a simulation is run. These parameters are aggregated into sets that can be reused between methods. For example, the MD-specific task (MDTask) references MDParameterSet, which includes the definitions of the barostat, thermostat, and force fields.
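A minimal object-oriented sketch of this decomposition, using Python dataclasses purely for illustration (class and attribute names mirror the figures, but this is not a reference implementation):

from dataclasses import dataclass, field
from typing import List

@dataclass
class MDParameterSet:
    # Reusable set of MD parameters, shared by MD and QM/MM tasks.
    barostat: str = "Berendsen"
    thermostat: str = "Langevin"
    force_fields: List[str] = field(default_factory=list)

@dataclass
class ExperimentTask:
    # Attributes common to all computational methods.
    name: str
    boundary_conditions: str  # e.g. "periodic"
    solvent_type: str         # "in vacuo", "implicit", or "explicit"

@dataclass
class MDTask(ExperimentTask):
    md_parameters: MDParameterSet = field(default_factory=MDParameterSet)

task = MDTask(name="Molecular dynamics",
              boundary_conditions="periodic",
              solvent_type="explicit",
              md_parameters=MDParameterSet(force_fields=["AMBER FF99SB", "TIP3P"]))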
The QM/MM-specific task (QMMMTask) references the same parameter set, since these definitions are necessary to describe the computational method used to treat the MM region. It also references a QM-specific parameter set to describe the QM method, and a QM/MM-specific parameter set to describe the treatment of the QM/MM boundary. A new task type could be created for multi-level quantum calculations; in this case the task would reference multiple QM parameter sets and a new type of parameter set that would define at least the algorithm or implementation used to integrate the different levels (e.g. ONIOM [33]). In molecular dynamics, the behaviour of the simulated system is governed by a force field: a parameterized mathematical function describing the potential energy of the system, together with the parameters of that function, with the dynamics propagated using Newton's equations of motion and the atomic forces determined from the first derivatives of the potential energy function. Different parameters are used for different types of atoms (or groups of atoms in the case of coarse-grain dynamics). A given force field parameter set is usually adapted to particular types of residues in molecules (e.g. nucleobases in nucleic acids vs. amino acids in proteins). Within a single molecular dynamics task, multiple force fields and parameter sets can be used simultaneously. When simulating an explicit water-based solvent, for example, the specific force field parameter set used to represent the water molecules (e.g. TIP3P, TIP4P, SPC/E [34]) will typically differ from the set used to parameterize the atoms of the solute or the ions. The ForceField class presented in Figure 8 represents instances of force fields referenced by a particular run, while ForceFieldDefinition represents an entry from the dictionary listing known force fields. Force field types include classical, polarizable, and reactive force fields.
Figure 8 Description of MD tasks and parameter sets.
Molecular dynamics methods can be classified into more specific classes of methods. For example, in stochastic dynamics (Brownian or Langevin dynamics), extra parameters can be added to represent friction and noise [35]. In coarse-grain dynamics the force field is applied to groups of atoms rather than individual atoms; the differentiation between atomistic and coarse-grain dynamics is then achieved solely through the type of force field used. In this model, Langevin dynamics and coarse-grain dynamics are not represented by different types of tasks, as they share the same parameter set as classic molecular dynamics. The collision frequency attribute used specifically by stochastic dynamics was added to the MD parameter set, while a flag specifying whether the force field is atomistic or coarse-grain is set in the force field dictionary. Each parameter set can be associated with a barostat and a thermostat to define how pressure and temperature are constrained in the simulated system (Figure 8). The ensemble type (microcanonical, canonical, isothermal–isobaric, or generalized) can be defined directly in the parameter set. The model also includes the concepts of constraints and restraints. Both have a target (i.e. the list of atoms they apply to), which can be described by an atom mask or a textual description (e.g. ':WAT', 'water'). The type of constraint is defined by the algorithm used (e.g. SHAKE, LINCS), while the type of restraint is characterized by the property being restrained (e.g. bond, angle).
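Continuing the illustrative sketch, constraints and restraints could be represented as small classes sharing a target; the names (and the force-constant attribute) are again hypothetical, not the reference implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Constraint:
    algorithm: str   # e.g. "SHAKE", "LINCS"
    target: str      # atom mask or textual description, e.g. ':WAT', 'water'

@dataclass
class Restraint:
    restrained_property: str  # e.g. "bond", "angle"
    target: str
    force_constant: Optional[float] = None  # assumed attribute, for illustration

shake = Constraint(algorithm="SHAKE", target="bonds involving hydrogen")
angle = Restraint(restrained_property="angle", target=":1-10", force_constant=50.0)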
Enhanced sampling methods are gaining interest in the MD community, as larger systems and longer time scales can be sampled faster than with classic approaches [36]. These methods usually involve the creation of multiple ensembles or replicas that can be run in parallel (e.g. temperature replica-exchange, umbrella sampling). A dictionary was created to list popular enhanced sampling methods. At their core, runs based on these methods can still be represented as multiple molecular dynamics tasks. Depending on the method, the implementation, and the definition of the input files, the set of MD tasks corresponding to a given enhanced sampling run can be grouped into processes, where each process represents either a separate ensemble/replica or a group of tasks run in parallel. For a replica-exchange MD (REMD) run using 4 replicas, one could either group the 4 MD tasks into a single process representing the whole REMD run, or into 4 separate processes with a single task each. In quantum chemistry, the two main elements that define the theory and approximations made for a particular run are the level of theory (or QM method) and the basis set (Figure 9). Basis sets provide sets of wave functions used to create molecular orbitals and can be categorized into plane wave basis sets or atomic basis sets. They are defined in a dictionary (BasisSetDefinition). Different levels of theory are available to approximate the solution within the selected basis set and find a discrete set of solutions to the Schrödinger equation. Popular methods include Hartree-Fock and post-Hartree-Fock methods (e.g. Configuration Interaction, Møller-Plesset, Coupled-Cluster), multi-reference methods, Density Functional Theory (DFT), and Quantum Monte Carlo [37]. The classification of QM methods is not trivial because of the range of features dependent on the level of theory. For example, DFT method names typically correspond to the name of the exchange-correlation functional, while semi-empirical method names refer to the empirical approximations of the method. For this model we defined the concepts of QM method, class, and family. At the highest level, the family defines the method as "ab initio", "semi-empirical", or "empirical". The class defines the level of theory for ab initio methods (e.g. Hartree-Fock, Møller-Plesset, Configuration Interaction, DFT, multi-reference) or the type of semi-empirical method (pi-electron restricted or all-valence-electron restricted). Note that one method can be part of multiple classes (e.g. multi-reference configuration interaction, hybrid methods). At the lowest level, the method name (e.g. MP2, B3LYP, AM1) corresponds to a specific method as it would be called by a particular software package. Approximations of pure ab initio quantum methods can be used to reduce the computational cost of the simulations. Typical approximations include the use of frozen cores, to exclude inner shells from the correlation calculations, and pseudo-potentials (effective core potentials), to remove the need for basis functions describing the core electrons. The use of such approximations is noted at the QM parameter set level.
Figure 9 Description of QM tasks and parameters.
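The method/class/family hierarchy can be made concrete with a small lookup table; the entries below are a hypothetical excerpt, not the full dictionary.

# Hypothetical excerpt of the QM method dictionary: each method name
# maps to its class(es) (level of theory) and family.
QM_METHODS = {
    "HF":    {"classes": ["Hartree-Fock"],         "family": "ab initio"},
    "MP2":   {"classes": ["Moller-Plesset"],       "family": "ab initio"},
    "B3LYP": {"classes": ["DFT"],                  "family": "ab initio"},
    "AM1":   {"classes": ["all-valence-electron"], "family": "semi-empirical"},
}

def family_of(method_name: str) -> str:
    """Derived property: infer the family from the method name alone."""
    return QM_METHODS[method_name]["family"]

print(family_of("B3LYP"))  # ab initio

This is exactly the kind of "derived" property discussed earlier: given a dictionary, only the method name needs to be stored.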
Molecular dynamics methods can be "improved" by injecting quantum characteristics into the models (semi-classical methods). In ab initio molecular dynamics, the forces on the system are calculated using full electronic structure calculations, avoiding the need to develop parameters a priori. In hybrid QM/MM, the simulation domain is divided into an MM space, where the MD force field applies, and a QM space, where molecular orbitals are described. Different methods exist to treat the boundaries between the two spaces. The decomposition of runs into tasks and parameter sets makes the integration of such methods possible and fairly straightforward. For example, one could create a new type of task for ab initio molecular dynamics that would have at least two parameter sets: the QM parameter set defined earlier and a new parameter set specific to ab initio molecular dynamics that would define the time steps (number, length) and the type of method (e.g. Car-Parrinello MD, Born-Oppenheimer MD).
Molecular system
In this model a distinction is made between biomolecules (e.g. RNA, protein) and "small molecules" (Figure 10). Here we define a small molecule as a chemical or small organic compound that could potentially be used as a ligand. Small molecules are defined at the level of a single molecule, while biomolecules are described by chains of residues. Typically, QM calculations will target small molecules, while MD simulations will target larger biomolecules and ligand-receptor complexes. Properties such as molecular weight and formula are worth tracking for small compounds, but their importance is less obvious when dealing with larger molecules.
Figure 10 Decomposition of the molecular system into molecules with structural and biological features.
Three dictionaries are necessary to provide definitions for standard residues, atomic elements (as defined in the periodic table), and element families (e.g. "Alkaline", "Metals"). Note that here we minimize the amount of structural data by keeping track of occurrences of residues (ResidueOccurrence) and atom types (AtomOccurrence) in a particular molecule, rather than storing individual instances. For example, in the case of water, there will be a single entry for the hydrogen atom with a count of 2, and another entry for the oxygen atom with a count of 1. The same approach is used to keep track of the various molecules in the system: explicit water solvent, for example, would be represented by the definition of the water molecule and the count of these molecules in the system. To enable searches for specific ligands, a simple text representation of the compound is necessary. Molecule identifiers such as SMILES (Simplified Molecular-Input Line-Entry System [38]) or InChI (International Chemical Identifier [39]) strings can be associated with small molecules to enable direct molecule matching as well as similarity and substructure searches. The residue sequence is also available to search biomolecules based on an ordered list of residues. The residue sequence can be represented by two different strings: the original, or specific, chain as referenced in the input file defining the molecular topology, and a normalized chain. The specific chain can potentially give more information about the individual residues within the context of the software that was used, and can reference non-standard residues defined by the user. The normalized chain, on the other hand, uses a normalized nomenclature for the residues: one-letter codes representing either amino acids or nucleobases. The normalized chain can be used to query the related molecule without prior knowledge of the software used, and enables advanced matching queries (e.g. BLAST [40]).
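For instance, atom occurrences and a normalized residue chain could be derived as follows; this is a generic sketch, independent of any specific topology format, and the residue mapping is a tiny illustrative subset rather than the full dictionary.

from collections import Counter

# Atom occurrences for a water molecule: store counts, not instances.
water_atoms = Counter({"H": 2, "O": 1})
print(water_atoms)  # Counter({'H': 2, 'O': 1})

# Normalizing a software-specific residue chain to one-letter codes.
# HID/HIE are AMBER-specific histidine protonation variants.
NORMALIZED = {"ALA": "A", "GLY": "G", "HID": "H", "HIE": "H"}

specific_chain = ["ALA", "GLY", "HIE", "ALA"]
normalized_chain = "".join(NORMALIZED[r] for r in specific_chain)
print(normalized_chain)  # AGHA

The specific chain preserves the software's own residue names; the normalized chain is what a BLAST-style query would operate on.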
Both residue and atom occurrences can be given a specific symbol, which represents a software-specific name, usually referencing a computational model for the entity. In MD the specific symbol would be the force field atom type, while in QM it would be used to specify which basis set should be applied. The description of the biomolecules should include at least a generic type, such as DNA, RNA, or protein, to classify the simulated molecules at a high level. Other biological information, such as species (e.g. Mus musculus, Homo sapiens) and molecule role, can be added as well. As defined by the Chemical Entities of Biological Interest (ChEBI [41]), each molecule can have one or multiple roles (application, chemical role, and/or biological role). This data element is very important, as it would allow researchers to query molecules based on their function rather than their structure. On the other hand, this type of information is not included in the raw simulation files, which means that it would have to be entered manually by the owner of the data. To avoid this, one can imagine populating this information automatically by referencing external databanks that already store these attributes (e.g. the Protein Data Bank [3]). This is reflected in the model by the reference structure concept, which keeps track of the database and the structure entry ID. If the topology of a simulated system is actually derived from a reference structure, an extra field can be used to describe the protocol used to prepare the reference structure as an input to the simulations. Possible steps include: the choice of a specific model number if several are available in a single PDB entry (or of a specific PDB entry if multiple entries are possible), the addition of missing residues from disordered regions, or the specification of homology or other putative models.
Files and file system
So far the description of the model has focused on the data elements related to the experiment itself, to explain why the different tasks were run and what they represent. Another important aspect of this model is the inclusion of references to the files (input and output) that contain the actual data being described. This is illustrated in Figure 11. Each experiment can be associated with one or several file collections stored on local or remote file systems (e.g. NFS, Amazon S3, iRODS server). For each of these collections no assumption should be made about the location or the implementation of the file system; it is therefore necessary to keep track of the type of file server and host information to find a route to the host and access the files using the right protocol and/or API. The individual files should be associated with the tasks they represent, and a distinction should be made between input (parameters and methods) and output (e.g. logs, trajectories) files. The topology files should be associated with the molecular system instead. Note that in certain cases, especially for QM calculations, the topology and input parameters might be contained in the same file. Each file reference should at least contain a unique identifier (UID) within its host file system and a format specification.
Figure 11 References to the file system and hosted files containing the raw data.
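A file collection and its references might then be sketched as follows; the attribute names are hypothetical, chosen only to mirror the description above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class FileReference:
    uid: str          # unique identifier within the host file system
    file_format: str  # e.g. "AMBER NetCDF"
    role: str         # "input", "output", or "topology"

@dataclass
class FileCollection:
    server_type: str  # e.g. "iRODS", "NFS", "Amazon S3"
    host: str         # needed to route requests with the right protocol/API
    files: List[FileReference] = field(default_factory=list)

collection = FileCollection(server_type="iRODS", host="data.example.org")
collection.files.append(FileReference(uid="/zone/home/user/prod01.nc",
                                      file_format="AMBER NetCDF",
                                      role="output"))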
Extended attributes
It is obvious that no single data model will be able to capture the needs of every lab running biomolecular simulations. The intent of this logical model is to provide a simple yet fairly exhaustive description of the concepts involved. To allow the addition of new properties, to provide more details about the experiment, or to keep track of user- or lab-defined attributes, the notion of extended attribute can be introduced into the model. Each extended attribute would be an attribute-value-unit triplet referenced by a given class to extend its own attributes as defined in the logical model. For example, one user might want to keep track of the order of interpolation and the direct space tolerance for PME-based simulations. These parameters are currently not represented in the model, which only keeps track of the name of the electrostatics model ("PME"). To add these two parameters, one could add two extended attributes to the MD parameter set class (Figure 8) called "PME interpolation order" and "PME tolerance". From an object-oriented perspective, all the classes introduced in the logical model could inherit from a single superclass referencing extended attributes, where each extended attribute would be an attribute-value-unit triplet with a possible link to a concept identifier defining the attribute in an existing terminology. From a database perspective, an extra table would be needed to store all the extended attributes. Such a table would need the columns necessary to represent the attribute-value-unit triplet, a possible concept identifier, and the name of the table each attribute extends. Although this is an easy way to gather all the extended attributes in a single table, the approach is not rigorous from a relational standpoint: to allow SQL queries that do not involve injection of table names, each table would have to be associated with an extra table storing its extended attributes.
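In code, the PME example could look like this; the sketch illustrates the superclass approach under the assumptions just stated and is not the reference implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtendedAttribute:
    name: str
    value: str
    unit: Optional[str] = None
    concept_id: Optional[str] = None  # link to an existing terminology

@dataclass
class Extensible:
    # Superclass from which all model classes could inherit.
    extended_attributes: List[ExtendedAttribute] = field(default_factory=list)

@dataclass
class MDParameterSet(Extensible):
    electrostatics_model: str = "PME"

params = MDParameterSet()
params.extended_attributes.append(ExtendedAttribute("PME interpolation order", "4"))
params.extended_attributes.append(ExtendedAttribute("PME tolerance", "1e-5"))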
The logical model presented here defines a domain that should be sufficient to index biomolecular simulation data at the experiment level. In total, over 60 classes were defined to represent the common data elements identified through the survey, along with new elements and dictionaries that should avoid data redundancy and facilitate queries using standard values. From a developer's perspective, this model provides guidelines for the creation of a physical data model that would be more dependent on a particular technology, whether for the implementation of a database or an API. At a more abstract level, the concepts introduced in this logical model provide a good starting point for the creation of a new terminology or ontology specific to biomolecular simulations. The current list of dictionaries includes: force field parameter set names and types (e.g. classical, polarizable), enhanced sampling methods, MD analysis functions, barostats, thermostats, ensemble types, constraint algorithms, electrostatics models, basis sets and their types, calculation types (e.g. optimization, frequency, NMR), residues, atomic elements (periodic table) and their families, functional groups, software packages, and chemical file formats. The list also includes a dictionary of computational methods (e.g. Langevin dynamics, MP2, B3LYP) with their class (e.g. MD, Perturbation Theory, DFT) and family (e.g. ab initio, semi-empirical, empirical). All these dictionaries are available online for browsing and lookups. Examples of dictionary entries are also provided in Additional file 6 (force fields) and Additional file 7 (computational methods). All our dictionaries follow the same implementation method. The raw data is defined in CSV files and can be loaded into a database for remote queries and/or indexed using Apache Lucene [20] for local access via Java APIs (Figure 12). Apache Lucene is a text search engine library written in Java that uses high-performance indexing to enable exact and partial string matching. Each CSV file contains the list of entries for a given dictionary, with at least three columns representing the identifiers, the terms (e.g. "QM/MM"), and the term descriptions (e.g. "Hybrid computational method mixing quantum chemistry and molecular mechanics"). More columns can be defined depending on the type of dictionary, either to represent extra attributes or to link to other dictionaries (foreign keys). For example, the CSV file listing the QM method classes has an extra column with the IDs of the associated QM method families. A set of SQL scripts was written to automatically create the database schema necessary to store the dictionaries and to load the CSV data into the tables. These scripts become very useful if one wants to integrate the dictionaries into a repository. Another script was written to automatically build the Lucene indexes: the script calls a Java API which parses the CSV files and uses the Lucene API to build the indexes. These indexes can then be used locally by external codes via the Lucene API, avoiding the need for static definitions of the dictionaries within the code or the creation of dependencies on remote resources such as a database. They should also help future development of chemical file parsers and text processing tools for chemical information extraction from the literature (i.e. natural language processing). The Lucene-based dictionaries can be queried directly through a simple command-line interface; Additional file 8 demonstrates how one would look up a term using this program. This design is fairly simple and enables updates of the dictionary entries directly through the CSV files. One limitation is the lack of synonyms for the defined terms. To create richer lists it will be necessary to add an extra CSV file for each dictionary containing the list of all synonyms and the IDs of the associated terms. Successful implementations of terminologies in other domains, such as the UMLS (Unified Medical Language System [42]), should be used to guide the organization of the raw data and to facilitate the integration of existing terminologies representing particular aspects of biomolecular simulations (e.g. chemical data, biomolecules, citations).
Figure 12 Building process for the dictionaries. Each dictionary can be either indexed via Apache Lucene for use via a Java API or loaded into a database to enable remote SQL queries.
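A hypothetical excerpt of such a CSV file, here for the QM method class dictionary with a foreign-key column pointing to the family dictionary (the rows are illustrative, not copied from the published files):

id,term,description,family_id
1,Hartree-Fock,Self-consistent field method based on a single Slater determinant,1
2,Moller-Plesset,Perturbation-theory correction to the Hartree-Fock solution,1
3,DFT,Density Functional Theory,1
4,All valence electron restricted,Semi-empirical methods treating all valence electrons,2

Here family 1 would correspond to "ab initio" and family 2 to "semi-empirical" in the family dictionary.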
Maintenance and community support
Until this point the development of the dictionaries has been restricted to an internal effort by our lab. To support the work of the community at large, these dictionaries have to be extended and adjusted based on user feedback. For this purpose the dictionaries are now available on our project Wiki, which enables discussions and edits by identified users. This will serve as a single endpoint to draft new versions of the dictionaries. The source code for the dictionaries, including the CSV files, SQL scripts, and Java API, is available from GitHub. Updates to the CSV files hosted there should occur according to the status of the dictionaries in the Wiki. With time we might find that a dedicated database with a custom user interface becomes necessary for a defined group of editors to update existing terms, add new entries, add new dictionaries, and keep track of changes (logs). In any case, the number of editors should be limited to a small group of experts, actively participating and working together [43, 44]. In this paper we introduced a set of common data elements and a logical data model for biomolecular simulations. The model was built upon community needs, identified through a survey and refined internally. The elements described by the model cover the concepts of authorship, molecular system, computational method, and platform. Although the model presented here might not be complete, it integrates the methods that are the most significant for simulations of biomolecular systems: molecular dynamics, quantum chemistry, and QM/MM. We introduced a new representation of the method landscape through method-specific parameter sets, which should allow the integration of more computational methods in the future. The addition of extended attributes to the model should enable customization by labs to fit their specific needs or to represent properties that are currently not described by the model. The use cases presented here showed how the model can be used in real applications, to partially automate the creation of database schemas and to generate XML descriptions. Multiple dictionaries, populated through reviews of online resources and the literature, were implemented to supplement the model and provide developers with new tools to facilitate text extraction from chemical files and the population of repositories. Although the current version of the dictionaries is fairly exhaustive, they will become a powerful tool only if they are updated by the community. A missing piece in this model is a catalogue of available force field parameter sets and atom types that could be used to generate force field description files and serve as an input for popular MD software packages. The EMSL Basis Set Exchange [45] already offers something similar for basis sets, and provides a SOAP-based web service to access the data computationally. While it is important to allow the whole community to provide input on the CDEs and dictionaries, eventually a consensus needs to be reached by a group of experts representing the main stakeholders: simulation engine developers, data repository architects, and users. The creation of a consortium including users, developers, and informaticians from the QM and MD communities could help formalize this process if such an entity leads:
• Active polling, for example via annual surveys assessing the need for changes or additions in the CDEs, dictionaries, or the data model. Information about the respondents, such as software usage, preferred computational methods (e.g. all-atom or coarse-grain MD, DFT), and target systems (e.g. chemical compounds, biomolecules), will provide more detail for the development of more adequate recommendations for specialized communities.
• Monitoring of community discussions, which might take place on a dedicated online forum or a wiki such as the one introduced here
• Recurring creation and distribution of releases of the CDEs, dictionaries, and data model. The CDEs in particular should include at least 2 levels of importance (recommended or optional) to provide some criteria about the completeness of the data descriptors.
A third level characterizing certain CDEs as mandatory might provide a standard for developers and data publishers populating repositories. Our current focus is on indexing data at the experiment level so that the associated collection of input and output files can be retrieved. While the CDEs can be used to tag individual files, it is not yet clear how much metadata is necessary to enable automatic data extraction (e.g. extracting properties for a single frame from a time series) and processing, or whether such metadata can be extracted directly from the files without user input. The popularization of self-descriptive formats (e.g. NetCDF, CML) to store calculation results or MD trajectories would certainly help. The ongoing work within the ScalaLife programme should help the community move in this direction, while the data model presented here will provide a good framework to organize, describe, and index computational experiments comprising multiple tasks. By publishing this model and the list of CDEs we hope to encourage the development of new repositories for biomolecular simulations, whether they are part of an integrated computational environment (e.g. MDWeb) or not (e.g. iBIOMES). Both approaches should be addressed. On one hand, computational environments can easily keep track of the tasks performed during an experiment, since the input parameters and topologies are directly specified within the environment. On the other hand, we still need to think about the developer community that works on new simulation engines, new force fields, and new computational methods. These developers will still need to customize their simulation runs within more flexible environments where they can manually edit input files or compile new codes, and use local or allocated high-performance computing resources. Independent data repositories, where data can be deposited through a publication process, are probably more viable for meeting these requirements. Finally, it is not clear who will be given access to these large computational environments, or who will have the computational, storage, and human resources to deploy, sustain, and make such complex systems available to the community. The goal of the proposed data model is to lay the foundations for a standard to represent biomolecular simulations, from the experiment level down to the task level. For this purpose we wanted to integrate MD, QM, and QM/MM methods, all of which play a particular role in the field. Although classical MD is arguably the most popular approach for biomolecular simulations, we believe that QM/MM approaches and ab initio MD, for example, will gain more and more interest as computational power increases, and they should not be left out of a future standard. On the other hand, we recognize that our model might not be as granular as others. The UMM XML [26] schema, for example, will be one of the first attempts to describe MD simulation input with enough granularity that software-specific input files can be generated without information loss. Such an effort is highly valuable for the MD community, and our data model will certainly evolve to integrate such models. Our short-term goal is to engage current repository and data model developers, such as the ScalaLife and Mosaic groups for MD and the Blue Obelisk group for QM and cheminformatics, so that we can learn more about each other's experience and try to align our efforts towards an integrated data model that would fit the needs of the whole biomolecular simulation community.
The framework presented here introduces a data model and a list of dictionaries built upon community feedback and selected experts' experience. The list of core data elements, the models, and the dictionaries are available on our wiki. As more implementation efforts are undertaken, the community will be able to assess the present data model more accurately and provide valuable feedback to make it evolve and, eventually, support collaborative research. The list of desiderata for data model developments, for both conceptual and physical representations, should provide some guidance for the long task ahead. This paper used semi-structured survey methods to establish the community's needs and preferences regarding biomolecular simulation data indexing and presentation. The common data elements were identified using an approach similar to [46], while the data model was built using standard modelling techniques to derive logical and physical models. Interested readers can find details of these techniques in [22].
Abbreviations: MD: Molecular dynamics; MM: Molecular mechanics; QM: Quantum mechanics; CDE: Common data elements; PME: Particle-Mesh Ewald.
Computational support was provided by the Center for High Performance Computing (CHPC) at the University of Utah, the Blue Waters sustained-petascale computing project (NSF OCI 07-25070 and PRAC OCI-1036208), and the NSF Extreme Science and Engineering Discovery Environment (XSEDE, OCI-1053575), allocation MCA01S027P. Research funding came from NSF CHE-1266307 (TEC3). Thanks to the CHPC staff for hardware and software support that allowed the implementation of the prototypes.
Authors' Affiliations
Department of Biomedical Informatics, University of Utah
Department of Medicinal Chemistry, University of Utah
References
1. Šponer J, Šponer JE, Mládek A, Banáš P, Jurečka P, Otyepka M: How to understand quantum chemical computations on DNA and RNA systems? A practical guide for non-specialists. Methods. 2013, 64 (1): 3-11. 10.1016/j.ymeth.2013.05.025.
2. Dror RO, Dirks RM, Grossman JP, Xu H, Shaw DE: Biomolecular simulation: a computational microscope for molecular biology. Annu Rev Biophys. 2012, 41: 429-452. 10.1146/annurev-biophys-042910-155245.
3. Bernstein FC, Koetzle TF, Williams GJB, Meyer EF, Brice MD, Rodgers JR, Kennard O, Shimanouchi T, Tasumi M: The protein data bank. Eur J Biochem. 2008, 80 (2): 319-324.
4. Simms AM, Toofanny RD, Kehl C, Benson NC, Daggett V: Dynameomics: design of a computational lab workflow and scientific data repository for protein simulations. Protein Eng Des Sel. 2008, 21 (6): 369-377. 10.1093/protein/gzn012.
5. Toofanny RD, Simms AM, Beck DA, Daggett V: Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection. BMC Bioinformatics. 2011, 12: 334. 10.1186/1471-2105-12-334.
6. Meyer T, D'Abramo M, Hospital A, Rueda M, Ferrer-Costa C, Perez A, Carrillo O, Camps J, Fenollosa C, Repchevsky D, et al: MoDEL (molecular dynamics extended library): a database of atomistic molecular dynamics trajectories. Structure. 2010, 18 (11): 1399-1409. 10.1016/j.str.2010.07.013.
7. Ng MH, Johnston S, Wu B, Murdock SE, Tai K, Fangohr H, Cox SJ, Essex JW, Sansom MSP, Jeffreys P: BioSimGrid: grid-enabled biomolecular simulation data storage and analysis. Future Gen Comput Syst. 2006, 22 (6): 657-664.
10.1016/j.future.2005.10.005.
8. Terstyanszky G, Kiss T, Kukla T, Lichtenberger Z, Winter S, Greenwell P, McEldowney S, Heindl H: Application repository and science gateway for running molecular docking and dynamics simulations. Stud Health Technol Inform. 2012, 175: 152-161.
9. Adams S, de Castro P, Echenique P, Estrada J, Hanwell MD, Murray-Rust P, Sherwood P, Thomas J, Townsend J: The quixote project: collaborative and open quantum chemistry data management in the internet age. J Cheminform. 2011, 3: 38. 10.1186/1758-2946-3-38.
10. Phadungsukanan W, Kraft M, Townsend JA, Murray-Rust P: The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem. J Cheminform. 2012, 4 (1): 15. 10.1186/1758-2946-4-15.
11. Murray-Rust P, Rzepa HS: Chemical markup, XML, and the World Wide Web. 4. CML schema. J Chem Inf Comput Sci. 2003, 43 (3): 757-772. 10.1021/ci0256541.
12. Guha R, Howard MT, Hutchison GR, Murray-Rust P, Rzepa H, Steinbeck C, Wegner J, Willighagen EL: The Blue Obelisk - interoperability in chemical informatics. J Chem Inf Comput Sci. 2006, 46 (3): 991-998. 10.1021/ci050400b.
13. de Jong WA, Walker AM, Hanwell MD: From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language. J Cheminform. 2013, 5 (1): 25. 10.1186/1758-2946-5-25.
14. Frisch MJ, Trucks GW, Schlegel HB, Scuseria GE, Robb MA, Cheeseman JR, Scalmani G, Barone V, Mennucci B, Petersson GA, et al: Gaussian 09, Revision C.01. 2009, Wallingford, CT: Gaussian, Inc.
15. Valiev M, Bylaska EJ, Govind N, Kowalski K, Straatsma TP, Van Dam HJJ, Wang D, Nieplocha J, Apra E, Windus TL: NWChem: a comprehensive and scalable open-source solution for large scale molecular simulations. Comput Phys Commun. 2010, 181 (9): 1477-1489. 10.1016/j.cpc.2010.04.018.
16. Thibault JC, Facelli JC, Cheatham TE: iBIOMES: managing and sharing biomolecular simulation data in a distributed environment. J Chem Inf Model. 2013, 53 (3): 726-736. 10.1021/ci300524j.
17. Rajasekar A, Moore R, Hou CY, Lee CA, Marciano R, de Torcy A, Wan M, Schroeder W, Chen SY, Gilbert L: iRODS Primer: integrated rule-oriented data system. Synth Lect Inform Concepts Retrieval Serv. 2010, 2 (1): 1-143.
18. Abouzied A, Bajda-Pawlikowski K, Huang J, Abadi DJ, Silberschatz A: HadoopDB in action: building real world applications. Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data. 2010, Indianapolis, IN, USA: ACM, 1111-1114.
19. Thusoo A, Sarma JS, Jain N, Shao Z, Chakka P, Zhang N, Antony S, Liu H, Murthy R: Hive - a petabyte scale data warehouse using Hadoop. Data Engineering (ICDE), 2010 IEEE 26th International Conference on. 2010, Long Beach, CA, USA: IEEE, 996-1005.
20. Apache Lucene. Accessed January 2014.
21. Herráez A: Biomolecules in the computer: Jmol to the rescue. Biochem Mol Biol Educ. 2006, 34 (4): 255-261. 10.1002/bmb.2006.494034042644.
22. Tillmann G: A Practical Guide to Logical Data Modeling. 1993, New York: McGraw-Hill.
23. Foster I, Kesselman C: The Grid 2: Blueprint for a New Computing Infrastructure. 2003, San Francisco, CA: Morgan Kaufmann, 2.
24.
Saltz J, Oster S, Hastings S, Langella S, Kurc T, Sanchez W, Kher M, Manisundaram A, Shanbhag K, Covitz P: caGrid: design and implementation of the core architecture of the cancer biomedical informatics grid. Bioinformatics. 2006, 22 (15): 1910-1916. 10.1093/bioinformatics/btl272.
25. Sun Y, McKeever S: Converting biomolecular modelling data based on an XML representation. J Integr Bioinform. 2008, 5 (2). doi:10.2390/biecoll-jib-2008-95.
26. Goni R, Apostolov R, Lundborg M, Bernau C, Jamitzky F, Laure E, Lindhal E, Andrio P, Becerra Y, Orozco M, et al: ScalaLife white paper: standards for data handling. ScalaLife, Scalable Software Services for Life Science. 2013. Accessed January 2014.
27. Case DA, Cheatham TE, Darden T, Gohlke H, Luo R, Merz KM, Onufriev A, Simmerling C, Wang B, Woods RJ: The Amber biomolecular simulation programs. J Comput Chem. 2005, 26 (16): 1668-1688. 10.1002/jcc.20290.
28. Hess B, Kutzner C, van der Spoel D, Lindahl E: GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. J Chem Theory Comput. 2008, 4 (3): 435-447. 10.1021/ct700301q.
30. Roe DR, Cheatham TE: PTRAJ and CPPTRAJ: software for processing and analysis of molecular dynamics trajectory data. J Chem Theory Comput. 2013, 9 (7): 3084-3095. 10.1021/ct400341p.
31. Michaud-Agrawal N, Denning EJ, Woolf TB, Beckstein O: MDAnalysis: a toolkit for the analysis of molecular dynamics simulations. J Comput Chem. 2011, 32 (10): 2319-2327. 10.1002/jcc.21787.
32. Hospital A, Andrio P, Fenollosa C, Cicin-Sain D, Orozco M, Lluis Gelpi J: MDWeb and MDMoby: an integrated Web-based platform for molecular dynamics simulations. Bioinformatics. 2012, 28 (9): 1278-1279. 10.1093/bioinformatics/bts139.
33. Svensson M, Humbel S, Froese RD, Matsubara T, Sieber S, Morokuma K: ONIOM: a multilayered integrated MO + MM method for geometry optimizations and single point energy predictions. A test for Diels-Alder reactions and Pt(P(t-Bu)3)2 + H2 oxidative addition. J Phys Chem. 1996, 100 (50): 19357-19363. 10.1021/jp962071j.
34. Jorgensen WL, Tirado-Rives J: Potential energy functions for atomic-level simulations of water and organic and biomolecular systems. Proc Natl Acad Sci USA. 2005, 102 (19): 6665-6670. 10.1073/pnas.0408037102.
35. Nadler W, Brunger AT, Schulten K, Karplus M: Molecular and stochastic dynamics of proteins. Proc Natl Acad Sci USA. 1987, 84 (22): 7933-7937. 10.1073/pnas.84.22.7933.
36. Schlick T: Molecular dynamics-based approaches for enhanced sampling of long-time, large-scale conformational changes in biomolecules. F1000 Biol Rep. 2009, 1: 51.
37. Cramer CJ: Essentials of Computational Chemistry: Theories and Models. 2004, Chichester, West Sussex, England; Hoboken, NJ: Wiley, 2.
39. McNaught A: The IUPAC International Chemical Identifier: InChI - a new standard for molecular informatics. Chem Int. 2006, 28 (6): 12-14.
42. Bodenreider O: The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004, 32 (Database issue): D267.
43. Hardiker N, Kim TY, Bartz CC, Coenen A, Jansen K: Collaborative development and maintenance of health terminologies. AMIA Annu Symp Proc 2013.
2013, Washington DC: American Medical Informatics Association, 572-577.
44. Noy NF, Tudorache T: Collaborative ontology development on the (semantic) web. AAAI Spring Symposium: Symbiotic Relationships between Semantic Web and Knowledge Engineering. 2008, Stanford University, CA: AAAI Press, 63-68.
45. Schuchardt KL, Didier BT, Elsethagen T, Sun L, Gurumoorthi V, Chase J, Li J, Windus TL: Basis set exchange: a community database for computational sciences. J Chem Inf Model. 2007, 47 (3): 1045-1052. 10.1021/ci600510j.
46. Kawamoto K, Del Fiol G, Strasberg HR, Hulse N, Curtis C, Cimino JJ, Rocha BH, Maviglia S, Fry E, Scherpbier HJ, et al: Multi-national, multi-institutional analysis of clinical decision support data needs to inform development of the HL7 virtual medical record standard. AMIA Annu Symp Proc 2010. 2010, Washington DC: American Medical Informatics Association, 377-381.
© Thibault et al.; licensee Chemistry Central Ltd. 2014
The nine billion names of God
English: A GIF animation summarizing quantum mechanics: the Schrödinger equation, the "particle in a box" potential, the uncertainty principle and the double slit experiment. (Photo credit: Wikipedia)
If you are an easily offended religious fundamentalist you should probably stop reading this now.
"The nine billion names of God" is a famous science fiction short story by Arthur C. Clarke. In essence the plot is that some researchers complete a piece of work and suddenly notice that the world is being switched off. A piece of whimsy, obviously. But what if it were something that could really happen (I am now risking a listing under "questions to which the answer is no" by John Rentoul)? If your scientific experiment reached a conclusion, would you just let it run on, or switch it off (or maybe wait till your paper was accepted and then switch it off!)?
The issue here is the question of whether or not the universe, as we see it, is in fact all just a gigantic computer simulation. As I have written before, if we accept that computing power will continue to grow without limit, we are almost bound to accept that it is much more likely we are inside a simulated universe than a real one. Of course, if the universe were confirmed as a simulation it would make no physical difference to us (though I suspect the psychological blow to humanity would be profound), so long as nobody turned the simulation off.
Testing whether the universe is simulated requires finding a fundamental minimal size beyond which we cannot further explore the universe: this is because computing a simulation relies on the fundamentally digital nature of a computer – you cannot get below one bit, however you have scaled the bits. Now, chance, God, the simulators (take your pick) have made this quite difficult via the Heisenberg Uncertainty Principle:
\sigma_x \sigma_p \geq \frac{\hbar}{2}
where \sigma_x is the uncertainty in a particle's position, \sigma_p the uncertainty in its momentum, and \hbar a very small number – 1.055 x 10^{-34} Joule seconds. In most situations the very smallness of \hbar means the uncertainty principle is of no concern, but once we start to reduce \sigma_x (ie look at extremely small parts of space) then \sigma_p starts to soar, and the amount of energy needed to conduct experiments also flies through the roof. But nature also gives us extreme energies for free in the form of cosmic rays, and these could hold the clue as to whether the universe is grainy (hence a simulation) or smooth (at least at currently detectable sizes).
Footnote: the fundamental weakness in the argument seems to me to be the fact that computing is increasingly showing that an unlimited increase in computing power is unlikely. But if you want to know more about this I really do recommend Brian Greene's The Hidden Reality.
3 thoughts on "The nine billion names of God"
1. I'd like to point out a couple of implicit assumptions I think you are making. One is that the hypothetical universe simulator runs on a digital computer (thus the "can't go below one bit" argument). Before digital computers, there were analog machines, and I'm not sure there is an analogous lower bound for an analog machine. The other assumption is that a simulator capable of something as complex as a simulated universe would employ a technological paradigm we poor (simulated) mortals could recognize, let alone understand.
2. Well, you are right, though I did think about them.
On the digital point, I think the foundation of this is the acceleration of digital computing power – we cannot make the same claim for analogue computing (though I am dubious about it for digital too). On the technological paradigm question, yes, of course that is true. I suppose I should have stated it as: if this fundamental boundary of investigation were found, then it would show we lived in a world simulated on a highly advanced form of our existing technology.
Comments are closed.
Modelling Photon Phase

Modelling a photon presents a number of challenges. In quantum physics, photons fly through the air like waves. They diffract around corners like waves. Photons interact with each other like waves. Sometimes they reinforce each other, sometimes they interfere and seem to disappear. The classical explanation uses Maxwell's equations to describe the wave nature of light. Light waves have perpendicular magnetic and electric fields that vary in a periodic manner over time. Since the modes of an electromagnetic field have the same classical equations as a simple harmonic oscillator, a photon can be, and is, modeled as a simple harmonic oscillator, where the energy of the photon is proportional to the frequency of the photon. This is commonly known as the first quantization.

The Photon as a Harmonic Oscillator

A simple harmonic oscillator is anything with a restoring potential. The simplest examples are things like a spring, or a string with tension, or a wave. In 1926, the Austrian physicist Erwin Schrödinger described mathematically how a harmonic oscillator stores energy and how to calculate how much energy it stores. The Schrödinger equation describes the system's "wave function" or "state", a condition that changes over time on a periodic basis.

Click to view single photons

For the photon, the value of this restoring potential is known as the Planck constant (h). Planck's constant relates the amount of energy stored in a photon to its wavelength (λ). Put another way, Planck's constant tells you the amount of time it takes the photon to undergo one cycle of whatever it is doing, given that the photon has a specific amount of energy. The equation for energy, E = hc/λ, tells us that a photon with low energy will take much longer to complete one cycle of the wave than a photon with high energy (a short worked example appears at the end of this post).

We start the model of photon phase with a simple harmonic oscillator. Consider a photon model as an expanding and contracting bubble flying through the air at the speed of light. The energy stored in the harmonic oscillator is inversely proportional to the period of the oscillation. Click to play the photon oscillator game. Watch higher energy photons expand and contract quickly; low energy photons take much longer to go through a single cycle. The color of a photon depends on its energy, and over time different color photons will get out of phase with each other. Modelling the photon as a simple harmonic oscillator is what Planck had in mind when he proposed the quantization of the energy in oscillators.

The Photon as a Moving Harmonic Oscillator

Click to view trapped photons

The second quantization deals with multiple copies (i.e. integer numbers of excitations) of these harmonic oscillators. Each oscillator stores a specific amount of energy, and if you split the oscillator you end up with two oscillators (photons), each of which has 1/2 the energy of the original oscillator (photon). The next step is to put these photons in motion and watch them interact with each other in a small cavity over a period of time. Modeling the photons as oscillating between a dark state and a light state makes it easy for the animator to illustrate wave reinforcement and wave interference. Click to bounce photons around in a photon cavity. By making the photon somewhat see-through, the "white" phase tends to reinforce other photons in the "white" phase, making two "white" photons look even "whiter".
"Dark" photons tend to interfere with "white" photons, and the pair simply disappear into the background gray color. "Dark" photons will also reinforce other "dark" photons, making for very "dark" spots against the background gray color. The animation starts with a single 630 nanometer photon (1.97 eV) bouncing back and forth in a reflective cavity four wavelengths wide (2,520 nanometers). Releasing additional photons at a rate of 2.1 femtoseconds per photon produces some interesting interference, with one photon per wavelength. Releasing additional photons at the rate of 0.1 femtoseconds per photon allows the modelling of short bursts of coherent photons from a laser (50 photons over a period of 4.5 femtoseconds) and the amazing patterns that are produced. To produce coherent photons, we simply assume they are created at the same place (through stimulated emission) and fly off at the speed of light. Producing photons from a more random location with a random phase angle produces an incoherent beam of photons that look very different.

Photon Interference – The Michelson Interferometer

Click to view photon interference

The Michelson interferometer is used to measure tiny differences in length along two different light paths. A laser light source is split into two arms with a beamsplitter. Each arm is reflected back toward the beamsplitter, which then combines their amplitudes. The resulting interference pattern that is not directed back toward the source is directed to some type of photoelectric detector or camera. If there is a slight angle between the two returning beams, then an imaging detector will record a sinusoidal fringe pattern. If there is perfect spatial alignment between the returning beams, then there will not be any such pattern but rather a constant intensity over the beam, dependent on the difference of the two path lengths.

The model starts with splitting the beam with a beamsplitter. Click to watch photons flow through a Michelson interferometer. Note the beamsplitter causes the reflected beam to have its phase shifted by 180° while the transmitted beam is not phase shifted. When the beams are recombined, they appear to disappear as the two beams are interfering with each other. Adding a compensating piece of glass shifts the one beam by another 180°, resulting in the two beams now reinforcing each other. Finally, the length of one of the arms is increased by first 1/2 wavelength, then a full wavelength, resulting in first interference, then reinforcement of the wave (see the interference sketch at the end of this post).

The Heisenberg Uncertainty Principle

Heisenberg, like Schrödinger and Planck, viewed the photon in terms of a harmonic oscillator. He addressed the issue of attempting to measure the exact position of a photon at the same time you try to measure the exact energy of a photon. Heisenberg found that it cannot be done. Imagine photons in terms of expanding and contracting bubbles flying through the air. When tiny, you know exactly where the photon is. When it is big, you can't really say where it is, because it appears to be over a wide area. The Planck constant (h) relates the cycle time of this bubble (and by extension, how big it gets) to the energy it contains, binding together size and location.

Uncertainty principle: ΔxΔp ≥ h/4π

In this context it makes perfect sense that you cannot know both the position (x) and momentum (p), which is energy in the case of a photon, at the same time.
It's either big, affecting a wide area, or it's tiny and only affecting one small area; it cannot be both at the same time.

The goal of modelling the photon is to provide mental images of what a photon is all about. With this image, the photon becomes a friendly little guy. When combined with a bunch of other photons, an amazing array of properties emerges. To view this mathematically, consider Landau & Lifshitz Vol. II. On page 108, the wave equation section talks about electromagnetic waves "in which the field depends only on one coordinate, say x (and on the time). Such waves are said to be plane". Electromagnetic waves are ever changing "plane waves moving in the positive direction along the X axis". In Volume IV, page 5, Landau & Lifshitz continue, talking about Quantization of the Free Electromagnetic Field, and on page 11, introducing photons:

These formulae enable us to introduce the concept of radiation quanta or photons, which is fundamental throughout quantum electrodynamics. We may regard the free electromagnetic field as an ensemble of particles each with energy ω (= ħω) and momentum k (= nħω/c). The relationship between the photon energy and momentum is as it should be in relativistic mechanics for particles having zero rest-mass and moving with the velocity of light. … The polarization of the photon is analogous to the spin of other particles; …. It is easily seen that the whole of the mathematical formalism developed in §2 is fully in accordance with the representation of the electromagnetic field as an ensemble of photons; it is just the second quantization formalism, applied to the system of photons. …

A photon is a plane wave travelling through space at the speed of light.
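Here is a short worked example of the post's own numbers, a sketch assuming only the standard constants, with the 630 nm wavelength taken from the cavity animation above.

```python
# Energy and cycle time of a photon from its wavelength,
# using E = h*c/lambda and T = lambda/c (time for one full cycle).
h = 6.626e-34   # Planck constant, Joule seconds
c = 3.0e8       # speed of light, metres per second
eV = 1.602e-19  # Joules per electron-volt

wavelength = 630e-9           # the post's 630 nm red photon
energy = h * c / wavelength   # Joules
period = wavelength / c       # seconds per cycle

print(f"energy = {energy / eV:.2f} eV")    # ~1.97 eV, matching the post
print(f"period = {period * 1e15:.2f} fs")  # ~2.10 fs, the release rate above
```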
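And a minimal sketch of the Michelson model, assuming (as the animation appears to) that each returning beam is a simple cosine wave, that the beamsplitter and the compensating glass each add a 180° shift, and that the extra arm length is counted once rather than doubled for the round trip as it would be in a real instrument.

```python
import numpy as np

# Two-beam interference at the detector of the post's Michelson model.
wavelength = 630e-9

def detected_intensity(extra_length):
    t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one oscillation cycle
    # 180 degrees from the beamsplitter + 180 degrees from the compensator,
    # plus the phase picked up along the lengthened arm.
    phase = np.pi + np.pi + 2 * np.pi * extra_length / wavelength
    beam_a = np.cos(2 * np.pi * t)           # beam from the unchanged arm
    beam_b = np.cos(2 * np.pi * t + phase)   # beam from the adjusted arm
    return np.mean((beam_a + beam_b) ** 2)   # time-averaged intensity

for d in [0.0, wavelength / 2, wavelength]:
    print(f"arm lengthened by {d / wavelength:.1f} wavelengths -> "
          f"relative intensity {detected_intensity(d):.2f}")
# 0.0 -> 2.00 (reinforcement), 0.5 -> 0.00 (interference), 1.0 -> 2.00
```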
While investigating the EPR Paradox, it seems like only two options are given, when there could be a third that is not mentioned: Heisenberg's Uncertainty Principle being given up. The setup is this (in the Wikipedia article): given two entangled particles, separated by a large distance, if one is measured then some additional information is known about the other; the example is that Alice measures the z-axis spin and Bob measures the x-axis spin. But to preserve the uncertainty principle, it's thought that either information is transmitted instantaneously (faster than light, violating the special theory of relativity) or information is pre-determined in hidden variables, which looks to be not the case.

What I'm wondering is why the HUP is not questioned? Why don't we investigate whether a situation like this does indeed violate it, instead of no mention of its possibility? Has the HUP been verified experimentally to the point where it is foolish to question it (like gravity, perhaps)?

It seems that all the answers are not addressing my question, but addressing waveforms/commutation relations/Fourier transforms. I am not arguing against commutation relations or Fourier transforms. Is not QM the theory that particles can be represented as these Fourier transforms/commutation relations? What I'm asking is this: is it conceivable that QM is wrong about this in certain instances, for example a zero state energy, or at absolute zero, or in some area of the universe or under certain conditions we haven't explored? As in: Is the claim then that if the momentum and position of a particle were ever to be known somehow under any circumstance, Quantum Mechanics would have to be completely tossed out? Or could we say QM doesn't represent particles at {absolute zero or some other bizarre condition} the same way we say Newtonian Physics is pretty close but doesn't represent objects moving at a decent fraction of the speed of light?

EPR Paradox: "It considered two entangled particles, referred to as A and B, and pointed out that measuring a quantity of a particle A will cause the conjugated quantity of particle B to become undetermined, even if there was no contact, no classical disturbance." "According to EPR there were two possible explanations. Either there was some interaction between the particles, even though they were separated, or the information about the outcome of all possible measurements was already present in both particles." These are from the Wikipedia article on the EPR Paradox.

This seems to me to be a false dichotomy; the third option being: we could measure the momentum of one entangled particle and the position of the other simultaneously, and just know both momentum and position and beat the HUP. However, this is just 'not an option,' apparently. I'm not disputing that two quantities that are Fourier transforms of each other are non-commuting / cannot both be known simultaneously, as a mathematical construct. Nor am I arguing that the HUP is indeed false. I'm looking for justification not just that subatomic particles can be modeled as waveforms under certain conditions (Earth-like ones, notably), but that a waveform is the only thing that can possibly represent them, and any other representation is wrong. You can verify the positive all day long; that still doesn't disprove the negative. It is POSSIBLE that waveforms do not correctly model particles in all cases at all times.
This wouldn't automatically mean all of QM is false, either – just that QM isn't the best model under certain conditions. Why is this not discussed?

I +1d to get rid of the downvote you had. It's the last line that did it for me. – Olly Price Aug 13 '12 at 22:37
Anyone who is downvoting care to elaborate on where my question is unclear, unuseful or shows no effort? I'd be glad to improve it if I can. – Ehryk Aug 13 '12 at 23:40
Try Bohmian mechanics. – MBN Sep 6 '12 at 11:02
@Ehryk: Not my downvote, but this question is a waste of time. You misunderstood what EPR is all about. The EPR effects have nothing to do with HUP, and you can show that they are inconsistent with local variables determining experimental outcomes without doing quantum mechanics, just from the experimental outcomes themselves. This means the weirdness is not due to the formalism, but really there in nature. – Ron Maimon Sep 11 '12 at 6:15
So in a universe without the commutation relation/HUP, where the commutation relation was sometimes zero / position and momentum could both be known, where's the paradox with EPR? You could just determine the values of both entangled particles, no paradox necessary. – Ehryk Sep 11 '12 at 7:24

12 Answers

Accepted answer:

In precise terms, the Heisenberg uncertainty relation states that the product of the expected uncertainties in position and in momentum of the same object is bounded away from zero. Your entanglement example at the end of your edit does not fit this, as you measure only once, hence have no means to evaluate expectations. You may claim to know something but you have no way to check it. In other entanglement experiments, you can compare statistics on both sides, and see that they conform to the predictions of QM. In your case, there is nothing to compare, so the alleged knowledge is void.

The reason why the Heisenberg uncertainty relation is undoubted is that it is a simple algebraic consequence of the formalism of quantum mechanics and the fundamental relation $[x,p]=i\hbar$ that stood at the beginning of an immensely successful development. Its invalidity would therefore imply the invalidity of most of current physics.

Bell inequalities are also a simple algebraic consequence of the formalism of quantum mechanics, but already in a more complex set-up. They were tested experimentally mainly because they shed light on the problem of hidden variables, not because they are believed to be violated. The Heisenberg uncertainty relation is mainly checked for consistency using Gedanken experiments, which show that it is very difficult to come up with a feasible way of defeating it. In the past, there have been numerous Gedanken experiments along various lines, including intuitive and less intuitive settings, and none could even come close to establishing a potential violation of the HUP.

Edit: One reaches experimental limitations long before the HUP requires it. Nobody has found a Gedankenexperiment for how to defeat the HUP, even in principle. We don't know of any mechanism to stop an electron, thereby bringing it to rest. It is not enough to pretend such a mechanism exists; one must show a way to achieve it in principle. For example, electron traps only confine an electron to a small region a few atoms wide, where it will roam with a large and unpredictable momentum, due to the confinement. Thus until QM is proven false, the HUP is considered true.
Any invalidation of the foundations of QM (and this includes the HUP) would shake the world of physicists, and nobody expects it to happen.

Why wouldn't it just invalidate it under certain conditions? For example: by some means, we completely arrest an electron. Position = center of device, momentum = 0. Both known simultaneously. Couldn't we just say QM is 'not a valid model for arrested particles but works for moving ones' without invalidating most of current physics? – Ehryk Sep 7 '12 at 21:43
Any invalidation of the foundations of QM would shake the world of physicists. - But the center of a device is usually poorly definable, and an electron cannot be arrested completely, neither in position nor in momentum. One reaches experimental limitations long before the HUP requires it. - In the past, there have been numerous Gedanken experiments along similar and many other lines, and none could even come close to establishing a violation of the HUP. – Arnold Neumaier Sep 9 '12 at 13:12
Until QM is proven false, the HUP is true. – Arnold Neumaier Sep 10 '12 at 11:53
@Ehryk: here's why you're seeming nonsensical to everyone here: at small length scales, an electron looks very, very much like a wave. You get interference patterns and everything. Now, you want to 'stop' it. Well, a "slower" electron has a longer wavelength than a "faster" one, but this longer wavelength is going to spread it out farther. By the time you get to your limit of a 'stopped' electron, the electron will be spread out over all of space. – Jerry Schirmer Sep 14 '12 at 23:13
@Ehryk: sure. Under circumstances not observed, maybe anything could happen. There's just no reason to believe that will be the case. The default assumption should be that things we haven't observed will act like things we have observed. – Jerry Schirmer Oct 12 '12 at 19:42

In quantum mechanics, two observables that cannot be simultaneously determined are said to be non-commuting. This means that if you write down the commutation relation for them, it turns out to be non-zero. A commutation relation for any two operators $A$ and $B$ is just the following $$[A, B] = AB - BA$$ If they commute, it's equal to zero. For position and momentum, it is easy to calculate the commutation relation for the position and momentum operators. It turns out to be $$[\hat x ,\hat p] = \hat x \hat p - \hat p \hat x = i \hbar$$ As mentioned, it will always be some non-zero number for non-commuting observables. (A small numerical check of this commutator appears at the bottom of this page.)

So, what does that mean physically? It means that no state can exist that has both a perfectly defined momentum and a perfectly defined position (since $|\psi \rangle$ would have to be a simultaneous eigenstate of momentum and of position, which would force the commutator to vanish, and we see that it doesn't). So, if the uncertainty principle were false, so would be the commutation relations, and therefore the rest of quantum mechanics. Considering the mountains of evidence for quantum mechanics, this isn't a possibility.

I think I should clarify the difference between the HUP and the classical observer effect. In classical physics, you also can't determine the position and momentum of a particle. Firstly, knowing the position to perfect accuracy would require you to use light of infinite frequency (I said wavelength in my comment, that's a mistake), which is impossible. See Heisenberg's microscope. Also, determining the position of a particle to better accuracy requires you to use higher frequencies, which means higher energy photons.
These will disturb the velocity of the particle. So, knowing the position better means knowing the momentum less. The uncertainty principle is different from this. Not only does it say you can't determine both, but that the particle literally doesn't have a well-defined momentum to be measured if you know the position to high accuracy. This is a part of the more general fact in quantum mechanics that it is meaningless to speak of the physical properties of a particle before you take measurements on them.

So, the EPR paradox is as follows: if the particles don't have well-defined properties (such as spin in the case of EPR), then observing them will 'collapse' the wavefunction to a more precise value. Since the two particles are entangled, this would seem to transfer information FTL, violating special relativity. However, it certainly doesn't. Even if you now know the state of the other particle, you need to use slower-than-light transfer of information to do anything with it. Also, Bell's theorem, and Aspect's tests based on it, show that quantum mechanics is correct, not local realism.

So how do we know that all particles have a non-commuting relationship, always and forever, under all conditions, even the ones we aren't able to measure or with technology or knowledge we don't yet possess? – Ehryk Sep 6 '12 at 10:47
What if you define position and momentum as the two real numbers that you measure at time t from experiment? (That's what most people consider "position" and "momentum" to be anyway.) What is this "new" definition of position and momentum? – Nick Sep 8 '12 at 23:11
Let me add this: I've taken QM and done those calculations for the commutation plenty of times to figure out what sets of compatible observables there are. But I could give someone a random formula for some random integral and divide by 6.3 and say "look, this always comes out to a real value -- thus position and momentum can't be simultaneously well-defined!" and that makes no sense whatsoever. Yeah, I know the whole spiel about eigenvalues and eigenstates and identical preparations of quantum systems, but what kind of physical experiment demonstrates this limit? – Nick Sep 8 '12 at 23:15
Noncommutativity of operators nicely explains emission spectra, which I believe were the subject of Heisenberg's (?) initial ponderings. There's a nice bit of this history explained at page 40 of this book by Alain Connes (there is probably a more focused reference for this history, but I don't know of one) – Ryan Thorngren Sep 9 '12 at 0:51

Heisenberg's relation is not tied to quantum mechanics. It is a relation between the width of a function and the width of its Fourier transform (the numerical sketch at the bottom of this page illustrates exactly this). The only way to get rid of it is to say that x and p are not a Fourier-transform pair: i.e. to get rid of QM.

So if by any means at all (entanglement, future machines, or divine powers) one could measure both position and momentum simultaneously, then all of quantum mechanics is false? There could be no QM in a universe in which this is possible? – Ehryk Aug 14 '12 at 9:35
You necessarily need to change the relationship between position and momentum. It is mathematically impossible if they just form a Fourier-transform pair. But considering the huge amount of data validating QM, one can try to extend QM by adding a small term in the pair or by using a fractional commutator (with a fractional derivative), for instance.
– Shaktyai Aug 14 '12 at 9:43
How about saying x and p are a pair of Fourier transforms USUALLY, but not in certain circumstances such as {inside a black hole, at absolute zero, under certain entanglement experiments, in a zero rest energy universe, etc.} How do we know that because QM is right USUALLY, or from what we can observe, that it is right ALWAYS and FOREVER? – Ehryk Sep 6 '12 at 10:43
That is to say: QM as we know it is not valid in these cases. There are no possible objections to such a statement, but for it to get accepted by the physicists, you need to prove that you can explain things in a simpler way and that you can predict something measurable. – Shaktyai Sep 6 '12 at 10:46
Because there is no proof whatsoever that QM fails. The day it fails we shall reconsider the question. However, there are many theorists working on alternative theories, so you have your chances. – Shaktyai Sep 6 '12 at 15:09

The wave formulation has the uncertainty relation in its seed. Let me be precise about what is meant by the wave formulation: the amplitude over space points will give information about localization in space, while the amplitude over momenta will give information about localization in momentum space. But for a function, the amplitude over momenta is nothing else but the Fourier transform of the space amplitude. The following is just a mathematical fact, not up to physical discussion: the standard deviation, or spread, of the space amplitude, multiplied by the spread of the momenta amplitude (given by the Fourier transform of the former), is bounded from below. So it should be pretty clear that, as long as we stick to a wave formulation for matter fields, we are bound mathematically by the uncertainty relation. No way around that.

Why do we stick to a wave formulation? Because it works pretty nicely. The only way someone is going to seriously doubt that it is the right description is to either: 1) find an alternate description that at least explains everything that the wave formulation describes, and hopefully some extra phenomena not predicted by the wave formulation alone, or 2) find an inconsistency in the wave formulation. In fact, if someone ever managed to measure both momentum and position for some electron below the $\hbar/2$ bound, it would definitely be an inconsistency in the wave formulation. It would mean we would have to tweak the de Broglie ansatz or something equally fundamental about it. Needless to say, nothing like that has happened.

It's a mathematical fact IF the particle can indeed be wholly represented by that specific function, right? So in the entanglement experiment, perhaps that function does not represent the state of TWO entangled particles? Maybe we have entanglement wrong, or maybe that function does not represent particles in certain conditions? Why are these possibilities not even discussed? – Ehryk Sep 6 '12 at 18:05
@Ehryk, because scientists, as all humans, tend to do the least amount of effort that will get the job done; it really does not make economical sense to do otherwise. As I said, there would be something to discuss if something in the experiment did not turn out as expected, but it does. If you want to make it your life's mission to prove the wave representation false, then you need to build an experiment that will either confirm it or disprove it. Then people will likely start seriously discussing other possibilities.
– lurscher Sep 6 '12 at 18:13
We can't prove Zeus doesn't exist, yet we don't accept his existence because of this. An idea shouldn't have to be 'debunked' to have a healthy amount of doubt in it, yet the wave formulation representing all particles, everywhere, at all times and locations seems to be presented 'beyond doubt' - so why is it stated with such certainty about unknowability, and when challenged, the opposition gives in without so much as a mention? – Ehryk Sep 6 '12 at 18:26
(I'm not trying to prove it wrong, or stating that it is; I'm asking if it can be false and if so, why it's not treated as such) – Ehryk Sep 6 '12 at 18:28
@Ehryk, suppose someone starts asking why physicists assume that we only have one time dimension, and why we don't try to debunk that. We would reply the same thing; we have no reason to devote resources to debunk something that seems to fit so nicely with existing phenomena, so the ball is in the court of the person who insists that, say, two-dimensional time makes a great deal of sense for X or Y experiment. Then, if the experiment sounds like something that has not been tested, and is under budget to implement, maybe some experimentalists will try to do it. That is how science works. – lurscher Sep 6 '12 at 18:30

If we want the position and the momentum to be well-defined at each moment of time, the particle has to be classical. We inherited these notions from classical mechanics, where they apply successfully. They also apply at the macroscopic level. So, it is a natural question to ask if we can keep their good behavior in QM. Frankly, there is nothing to stop us from doing this. We can conceive a world in which the particles are pointlike all the time and move along definite trajectories, and this will "beat HUP". This was the first solution to be looked for. Einstein and de Broglie tried it, and not only them. Even Bohr, in his model, envisioned electrons as moving along definite trajectories in the atom (before QM). David Bohm was able to develop a model which has this property at a sub-quantum level, and in the meantime behaves like QM at the quantum level. The price to be paid is to allow interactions which "beat the speed of light", and to adjust the model whenever something incompatible with QM is found. IMHO, this process of adjustments still continues today, and it looks very much like adding epicycles. But I don't want to be unfair to Bohm and others: it is possible to emulate QM like this, and if we learn QM facts which contradict it, it will always be possible to find such a model which behaves like QM, but also has a subquantum level which consists of classical-like point particles with definite positions and momenta.

At this time, these examples prove that what you want is possible. One may argue that they are unaesthetic, because they are indeed more complicated than QM. But this doesn't mean that they are not true. Also, at this time they don't offer anything testable which QM can't offer. So, while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity. Or, if they don't violate special relativity, they contradict what QM predicts and what we observed in entanglement experiments like that of Alain Aspect. If EPR presents us with two alternatives, (1) spooky action at a distance, (2) QM is incomplete, and you propose a third, (3) HUP is false, let's not forget that Aspect's experiment and many others confirmed alternative (1).
Now, it would be much better for such models if they would stop adjusting themselves to mimic QM and instead predict something new, like a violation of HUP. That would really be something. In conclusion, yes, you are right, and in principle it is possible to beat HUP. The reason why most physicists don't care too much about this is that the known ways to beat HUP are ugly, have hidden elements, and violate other principles. But others consider them beautiful and useful, and if you are interested, start with Bohm's theory and the more recent developments of it.

Synopsis: The Certainty of Uncertainty
Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements (arXiv link)

This was rather helpful, so I appreciate it. I'm still just having difficulty wrestling with unknowability in relation to this; for example, if we ever found a way to arrest a particle completely, we'd know its position and momentum (0) both at the same time, and while it violated HUP, it could just be said 'this particle cannot be represented by a wavefunction.' The reach of the HUP seems to include this though, with no provisions, and just be accepted so OBVIOUSLY you can't stop a particle. Would we just say the particle is classical in that instance? – Ehryk Sep 6 '12 at 18:20
@Cristi I see (and generally have no objections to) your argument, but that conclusion seems misleading. Yes, it's possible to beat HUP (by discarding quantum mechanics) in the same sort of sense that it's possible to create a macroscopic stable wormhole: not strictly ruled out, but there is no evidence to support it. So I think it's misleading to be saying that this is possible. – David Z Sep 6 '12 at 18:45
@David Zaslavsky: Thanks. To make my conclusion clear, and less misleading, I wrote the first, rather lengthy, paragraph. This contains for instance the statement "while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity." Anyway, I considered it would be more misleading to claim that one knows HUP can't be violated no matter what. – Cristi Stoica Sep 6 '12 at 19:31
@Ehryk: "What happened to particle-wave duality?". Particles are represented as wavefunctions. They are defined on the full space, but may have small support (bump functions). In the limit, when concentrated at a point, the bump becomes Dirac's $\delta$ function. Then it has definite position $x$, but indefinite wave vector, so it spreads immediately (this corresponds to HUP). Its "dual" is a pure wave (with definite wave vector $k_x$, hence momentum $p_x$). The "particle-wave" duality refers to these two extreme cases. But most of the time the "wavicles" are somewhere between these two extremes. – Cristi Stoica Sep 6 '12 at 20:41
@Ehryk: "How does a wave have mass?" They have momentum and energy: multiply the wave 4-vector by $\hbar$ and obtain the 4-momentum, so yes, they have mass. Interesting thing: the rest mass $m_0$ is the same, even though in general the wave 4-vector is undetermined. By "undetermined" you can understand that the wavefunction is a superposition of a large (usually infinite) number of pure wavefunctions. Pure wavefunctions have definite wave vector (hence momentum), but totally undetermined position. – Cristi Stoica Sep 6 '12 at 20:50

You are asking if a more complete theory might show that HUP is wrong and that position and momentum do exist simultaneously.
But a more complete theory has to explain all the observations that QM already explains, and those observations already show that position and momentum cannot have definite values simultaneously. This is known because when particles such as photons, electrons, or even molecules are sent through a pair of slits one at a time, an interference pattern appears on the detector plate showing that the probability of the measured location and time follows a specific mathematical relationship. The fact that certain regions have zero probability shows that before measurement, the particles exist in a superposition of possible states, such that the wave function for those states can cancel out with other states, resulting in areas of low probability of observation. The relationships observed through increasingly complex experiments rule out possibilities other than what is described by QM.

The only way that QM could be superseded by a new theory is for new observations to be made that violate QM, but the new theory would still have to result in the same predictions as QM in the circumstances where QM has already been tested. Since the HUP results directly from QM, the HUP would also follow from a new theory, with the only possible exception in super-high-energy conditions, such as when a single particle is nearly a black hole. Basically you have to get used to the idea that particles are really quantized fluctuations in a field and that the field exists in a superposition of states. Any better theory will simply provide additional details about why the field behaves in that way.

"Accept it as true until it's debunked" is not scientific. "When a particle can be perfectly represented by a waveform and ONLY a waveform, then it cannot have definite momentum and position" is acceptable. Asserting the "When" is "Always and Forever" is not. – Ehryk Sep 15 '12 at 0:05

If it can help:

Open timelike curves violate Heisenberg's uncertainty principle: "...and show that the Heisenberg uncertainty principle between canonical variables, such as position and momentum, can be violated in the presence of interaction-free CTCs...."

Foundations of Physics, March 2012, Volume 42, Issue 3, pp 341-361: "...considering that a D-CTC-assisted quantum computer can violate both the uncertainty principle..."

Phys. Rev. Lett. 102(21):210402, May 2009, arXiv:0811.1209: "...how a party with access to CTCs, or a 'CTC-assisted' party, can perfectly distinguish among a set of non-orthogonal quantum states...."

Phys. Rev. A 82, 062330 (2010), arXiv:1003.1987v2: "...and can be interacted with in the way described by this simple model, our results confirm those of Brun et al that non-orthogonal states can be discriminated... ...Our work supports the conclusions of Brun et al that an observer can use interactions with a CTC to allow them to discriminate unknown, non-orthogonal quantum states – in contradiction of the uncertainty principle..."

The only way to make Heisenberg's principle irrelevant is to measure the speed and the position (to make it simple) of a fundamental particle. In other words, you would have to observe a particle without having it collide with a photon, react to a magnetic force, or otherwise interact with it. There might be another way, which would be to find a very general law (but not a statistical one) which describes the characteristics (spin, speed, position etc.) of an elemental particle in an absolute way....
I think that's just the observer effect, described in another answer, and I can beat that by hypothesizing a future race that has developed a gravitational particle-position-and-momentum sensor machine, which does not use photons or interact with the particle in any way that would change the position or momentum (a read-only sensor). Even in this case, the HUP says they CANNOT be known simultaneously. – Ehryk Sep 6 '12 at 10:56
I want to know what evidence there is to support this, even in the case of such a hypothetical machine. – Ehryk Sep 6 '12 at 10:57
In this case you interact using the gravitational interaction, so that's almost the same. – Yves Sep 6 '12 at 11:21
Not really. Bombarding it with photons are distinct events; surrounding it by a machine that is sensitive to the gravitation inside of it would only exert the same gravity that any other matter around it would, and if done as stated in my hypothetical, would not alter the position or momentum in any way once the particle has settled inside the machine. – Ehryk Sep 6 '12 at 11:24
Very interesting, and it would be possible if such a machine existed (my first point). But how would you measure something other than a change in the surrounding gravitational field (which would imply an interaction with the particle), and how would you measure a spin? It sounds like your method is equivalent to trying to measure an absolute quantity of energy, or to "forcing" the position or momentum of your particle, a case which doesn't fall under Heisenberg's principle. This reasoning might end up as an Ouroboros.. – Yves Sep 6 '12 at 11:33

"Heisenberg uncertainty principle" is a school term that is used in popular literature. It simply does not matter. What matters is the wavefunction and the Schroedinger equation. The EPR paradox experiment never used any explicit "uncertainty principle" in the proof.

As @MarkM pointed out above, what I meant but wasn't able to espouse was a 'non-commutation' property (a term I've not heard of in this context), or the claim that the exact position and momentum of a particle cannot be known simultaneously. I thought this was semantically equivalent to the Heisenberg Uncertainty Principle, which I guess it is not. – Ehryk Aug 13 '12 at 23:30
Also, from Wikipedia: "The uncertainty principle is a fundamental concept in quantum physics." (from the disambiguation page, main article here: ). Could you explain or give sources for it 'not matter'ing? Further, the wiki article on the EPR Paradox explicitly uses the Heisenberg Uncertainty Principle - I'm not claiming WP is any authority, but it would be the source of my confusion. – Ehryk Aug 13 '12 at 23:38
@Anixx This isn't true. Firstly, Heisenberg's matrix mechanics is an equally valid formulation of QM as wave mechanics; see Zettili page 3. Second, the uncertainty principle is a part of wave mechanics. As you say, you can easily derive it from the Schrodinger equation. I find it odd that you say that this somehow makes the uncertainty principle irrelevant. You can't simultaneously know position and momentum to perfect accuracy, since localizing the position of the particle involves adding plane waves, which then makes the momentum uncertain. – Mark M Aug 13 '12 at 23:58
@Anixx If you claim that you may derive the HUP from the Schrödinger equation, you should show it. I actually think it is not possible, but I'm curious.
One usually derives the HUP from the commutation relations and later shows it is preserved by the unitary evolution. The Schrödinger equation tells us how the states evolve in time, while the HUP must be verified even in the initial state, so I'm very skeptical about your derivation. In any case, the HUP is at least as fundamental as the Schrödinger equation, and it is a term very often used in technical papers and seminars. – drake Aug 14 '12 at 0:28
@drake You can't derive it from the SE, but from the wave mechanics formulation (which is what I guess Anixx means). See the 'Proof of Kennard Inequality using Wave Mechanics' sub-section here:… However, I agree with you that the HUP is fundamental (see my above post). – Mark M Aug 14 '12 at 1:32

Without gravity: The uncertainty principle is not really a principle, because it is a derivable statement; it is not postulated. It is derivable and proven mathematically. Once you prove something you cannot unprove it. That means it cannot turn out to be false. For experimental verifications, see for example this article by Zeilinger et al and the references inside. Zeilinger is a world expert on quantum phenomena, and it is expected that he will get the Nobel prize in the future.

With gravity (and that matters only at extremely high energy, as high as the Planck scale): Intuitively you can use the uncertainty principle to give an estimate of the energy needed to resolve a tiny region of space. For a sufficiently small region in space you will create a black hole. So there is a limit on the spatial resolution one can achieve, because of gravity. If you try to use higher energy you will create a bigger black hole. The bottom line is, the uncertainty principle does not make sense in this case, because space loses its meaning and cannot be defined operationally.

Things can be unproven if one of the axioms or postulates they are based on is proven false. HUP may be true if <x, y and z> are true, but it certainly is based on foundations (waveforms representing matter, for one) that are not infallible. – Ehryk Aug 14 '12 at 11:37
@Ehryk You cannot unprove something by changing the postulates, because then you are talking about a totally different problem. You can only compare two situations given the same postulates/axioms. The axioms are true and not false in the sense that the coherent structure coming out of those postulates leads to predictions that are consistent with experimental observations. The world is quantum mechanical. – Revo Aug 14 '12 at 16:03
You cannot unprove it as a model of how things could work, no, but you could show that it is just not the most accurate model of the world we live in - just like we can theorize about hyperbolic geometry as a model, though it's unlikely to be the model of reality. Is it the case that you could not have a variant of something like QM that produces similar results while in some instances allowing precise position and momentum values, in the same way Newton's laws were 'good enough' for the values we had measured at non-relativistic speeds up until that point? – Ehryk Aug 15 '12 at 1:43
@Ehryk No. You could not have had something similar to Newtonian mechanics that underlies Quantum Mechanics. What you are thinking of has been thought of a long time ago; it is known as hidden variables theories. It has been proven experimentally that something like Newtonian mechanics or any deterministic theory cannot be the basis of Quantum Mechanics.
Maybe you should also keep in mind the following main point: QM is more general than CM, hence it is more fundamental. Since QM is more general than CM, one should understand how CM emerges from QM, not the other way around. – Revo Aug 15 '12 at 1:50
@Ehryk One should understand CM in terms of QM, not QM in terms of CM. – Revo Aug 15 '12 at 1:52

The way I see it, the HUP cannot be disproven "at absolute zero", because absolute zero cannot be physically reached, er... due to the HUP... is circular reasoning good enough? Let's try something else. Maybe try to imagine what would happen if the HUP were violated. For one, I guess the proton-electron charges would cause one or two electrons to fall down into the nucleus, as the HUP normally prevents that (if the electron fell down onto the nucleus we'd know its position with great precision, requiring it to have an indeterminate but large momentum, so it kind of settles for orbiting around the nucleus). If you know more about the stuff than I do, try to imagine what else would happen, and how likely that effect is. For example, if an HUP violation would imply a violation of the 2nd law of thermodynamics, this would render an HUP violation pretty unlikely. That much from a layman.

But then why can't we just say 'HUP is only for particles not at absolute zero'? It seems like violating it is 'not an option,' even as above - so an electron falls into the nucleus. It has a measurable position and momentum. Why does HUP have to hold so strongly that we instead are comfortable with 'that particle must always have energy'? – Ehryk Sep 6 '12 at 18:31
The way I see it, "absolute zero" is a purely theoretical concept. Look up the Bose-Einstein condensate, get a feeling for what happens at extremely low temperatures, and then try to project that further to zero. Doesn't click. So saying "HUP is only for particles at absolute zero" is like saying "HUP is for all particles", for absolute zero can't be reached. – pafau k. Sep 6 '12 at 18:54
Do you have evidence or citations that nothing can be at absolute zero? Or are you just asserting it? Note that saying 'we can't get to absolute zero' is different than 'no particle anywhere, at any time, can be at absolute zero.' – Ehryk Sep 6 '12 at 19:10
Let me quote the beginning of the Wikipedia entry on absolute zero :) "Absolute zero is the theoretical temperature at which entropy reaches its minimum value"; note the word theoretical. Heat always flows from hot to cold, so the simple explanation is: you'd have to have something below absolute zero to cool something else to absolute zero (this would violate the laws of thermodynamics). – pafau k. Sep 6 '12 at 19:59
Transfer heat from hot to hotter? Decrease the volume of the container. Cool matter? Increase the volume of the container. In both cases, heat is not 'transferred', but temperature (average kinetic energy) has been changed without the interaction of other matter, either hotter or colder. – Ehryk Sep 10 '12 at 11:18

The Heisenberg uncertainty principle forms one of the most important pillars of physics. It can't be proven wrong, because too many experimentally determined phenomena are a result of the uncertainty principle. However, something may be discovered in the future that makes a modification to the uncertainty principle - in a similar way that Newton's laws were modified by Einstein's special relativity. Saying that the uncertainty principle is wrong is like saying that Newton's law is wrong. In reply to the comments, I'm not saying that it can be falsified.
It can't. In a classical sense, it will always be correct, in a similar way that Newton's law will always be correct. However, it can be modified. Until the day that all the open questions in physics have been resolved, how can you claim that the uncertainty principle can't be modified further? Do we know everything about extra dimensions? Do we know everything about string theory and physics at the Planck scale? By the way, it has already been modified. Please check this link. The uncertainty principle will always be correct. However, it can be, and has been, modified. In its current formalism and interpretation, it could represent a special case of a larger underlying theory. The claim that the current formalism and limitations of the uncertainty principle are absolute and can never be modified under any circumstance in the universe is a claim that does not obey the uncertainty principle itself.

The uncertainty principle is a lot closer to an uncertainty law than your answer lets on. It's not really about measurement so much as it's about a Fourier transform. – Brandon Enright Jan 26 '14 at 23:47
The Heisenberg Uncertainty Principle is an unfalsifiable claim? All of (good) science is falsifiable. See the first paragraph: – Ehryk Jan 28 '14 at 6:12
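Two small numerical sketches referenced in the answers above; both are editorial illustrations, not part of any answer, and the grid sizes, the units (ħ = 1) and the Gaussian test state are arbitrary choices. The first discretizes x on a grid, builds the position and momentum operators as matrices, and checks that $[\hat x, \hat p]\psi \approx i\hbar\psi$ away from the grid edges.

```python
import numpy as np

# Check the canonical commutator [X, P] psi ~ i*hbar*psi on a grid,
# with P built from a central finite difference (hbar = 1).
hbar = 1.0
n, dx = 400, 0.05
x = (np.arange(n) - n // 2) * dx

X = np.diag(x)  # position operator
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
P = -1j * hbar * D  # momentum operator

psi = np.exp(-x**2)                    # smooth Gaussian test state
commutator_psi = (X @ P - P @ X) @ psi

mask = np.abs(x) < 2.0                 # stay where psi is well-resolved
deviation = np.max(np.abs(commutator_psi[mask] / psi[mask] - 1j * hbar))
print(f"max deviation from i*hbar: {deviation:.4f}")  # ~0.02, shrinks with dx
```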
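The second illustrates the Fourier-transform point made in Shaktyai's and lurscher's answers: squeeze a Gaussian wave packet in position and its momentum-space width grows, with the product pinned near ħ/2.

```python
import numpy as np

# sigma_x * sigma_p for Gaussian wave packets of different widths;
# sigma_p is computed from the FFT of the packet (hbar = 1).
hbar = 1.0
n, L = 4096, 200.0
x = (np.arange(n) - n / 2) * (L / n)
p = 2 * np.pi * np.fft.fftfreq(n, d=L / n) * hbar  # momentum grid

def spread(values, weights):
    w = weights / weights.sum()
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean) ** 2 * w))

for width in [0.5, 1.0, 4.0]:
    psi = np.exp(-x**2 / (4 * width**2))  # Gaussian with sigma_x = width
    product = spread(x, np.abs(psi)**2) * spread(p, np.abs(np.fft.fft(psi))**2)
    print(f"sigma_x = {width}:  sigma_x * sigma_p = {product:.3f}")  # ~0.500
```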
Sunday, July 6, 2014

Why Jesus died many times for our sins

St. Augustine was sure that Jesus died just once for our sins. However, Jesus died not only in our particular universe but also in many other parallel universes that are as real as ours. Let's explore the chain of reasoning behind this claim.

One assumption is that whether a particular parallel universe exists falls within the field of astrophysics, not theology nor logic. Astrophysics' well-accepted Big Bang theory with eternal inflation implies a multiverse containing an unlimited number of parallel universes obeying the same scientific laws as in our particular universe. These other universes (which the physicist Max Tegmark calls Type 1 universes) are distant parts of physical reality. They are not abstract objects. Some contain flesh and blood beings. Parallel universes are not parallel to anything. They are very similar to what David Lewis called possible worlds, but they aren't the same, because his possible worlds must be spatiotemporally disconnected from each other. I cannot state specific criteria for transuniverse identity, but we do need the assumption that, in a universe, personal identity (whatever it is) supervenes on the physical realm. That is, a person can't change without something physical changing. It is also reasonable to require that in any parallel universe in which Jesus exists he has Mary and Joseph as parents.

The claim that Jesus in our universe is identical to Jesus in another universe does conflict with the intuitively plausible metaphysical principle that a physical object is not wholly in two places at once. This principle is useful to accept in our ordinary experience, but it is not accepted in contemporary physics. The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once. This is why physicists prefer to say the nucleus of a hydrogen atom is surrounded by an electron cloud rather than by an electron. In the double-slit interference experiment, a single particle goes through two slits at the same time. So, the metaphysical principle should not be used a priori to refute our claim about the transuniverse identity of Jesus.

Our universe is the product of our Big Bang that occurred 13.8 billion years ago. It is approximately that part of physical reality we can observe, which is an expanding sphere with the Earth at the center, having a radius of 13.8 billion light years. Our universe once was a tiny bit of explosively inflating material. The energy causing the inflation was transformed into a dense gas of expanding hot radiation. This expansion has never stopped. But with expansion came cooling, and this allowed individual material particles to condense from the cooling radiation and eventually to clump into atoms and stars and then into Jesus.

The other Type 1 parallel universes have their own Big Bangs, but they are currently not observable from Earth. However, they are expanding and might eventually penetrate each other. But they might not. It all depends on whether inflation of dark energy is creating intervening space among the universes faster than the universes can expand toward each other. Scientists don't have a clear understanding of which is the case.

Why trust the Big Bang theory with eternal inflation? Is it even scientific, or is it mere metaphysical speculation? The crude answer is that the theory has no better competitors, and it has been indirectly tested successfully.
Its testable implications are, for example, that the results of measuring cosmic microwave-background radiation reaching Earth should have certain specific quantitative features. These features have been discovered, some only in the last five years. The theory also implies a multiverse of parallel universes having our known laws of science but perhaps different histories. If we accept a theory for its testable implications, then it would be a philosophical mistake not to accept its other implications.

One other important assumption being made is that the cosmic microwave-background experiments have not detected any overall curvature in our universe because our universe is in fact not curved. Our universe being curved but finite is also consistent with all our observations. Similarly, if you are standing on a very large globe, it can look flat to you. If our 3-D universe is finite but curved like the surface of a 4-D hypersphere, then space would be extremely large with a very small curvature, but there would be only a finite number of parallel universes, and the argument about Jesus would break down. The most common assumption now among astrophysicists is that our universe is in fact infinite, the multiverse is infinite, and matter is approximately uniformly distributed throughout the multiverse.

As Max Tegmark has pointed out, twenty years ago there were many astrophysicists opposed to parallel universes. They would say, "The idea is ridiculous, and I hate it." Now, there are few opponents of parallel universes, and they say, "I hate it."

Having established that there are infinitely many parallel universes with the same laws but perhaps different histories, let's return to the issue of whether Jesus died in more than one of them. One implication of the Big Bang theory with eternal inflation is that some universes are exact duplicates of each other. Here is why. If you shuffle a deck of playing cards enough times, then eventually you will have duplicate orderings (a small simulation of this appears after the comments below). The duplicate orderings are the same, not just "David Lewis counterparts." Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates.

One controversial assumption used here is the holographic principle: even if spacetime were continuous, it is effectively discrete or pixelated at the Planck level. This means that it can make no effective difference to anything if an object is at position x meters as opposed to position x + 10^{-35} meters. This completes the analysis of the chain of reasoning for why Jesus died more than once for our sins. Have you noticed any weak links?

Brad Dowden
Department of Philosophy
Sacramento State

1. Brad, this is very interesting and fun, thanks. I could talk to you about this all day, but I'll confine my question to the idea that a particle cannot be in two places at once. First, I wonder what it means to say that "The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once." If someone asks, "Is particle P wholly at location L?", the answer "To some extent" seems to mean the same thing as "No" (on the assumption that some extent is less than wholly). My understanding of the Schrödinger equation is that it tells us the probability that a particle is at any particular location, with the meaning of that statement varying depending on the resolution of the measurement problem you favor.
Your remark seems to me to be most consistent with the Many Worlds Interpretation, where the probabilities represented in the Schrödinger equation may reflect the distinct worlds that actually exist. However, you do not make any reference to the Many Worlds Interpretation, and as far as I know, physicists are currently not at all sure what the relation is between the Multiverse and Many Worlds (though Sean Carroll thinks they could be the same). The other question I have is whether we really need to go this route at all. Kant, I believe, would have said that the idea that an object can't be wholly in two places at once is an a priori intuition, and therefore necessarily the case. But he said the same thing about 3-D spacetime, and he was just wrong. So why not simply respond that this intuition can be denied without contradiction, and since the denial fits a very good physical theory, we should deny it? Specifically, why not just say that a particle can be wholly present in only one position in a single universe, but it can be wholly present in multiple positions in multiple universes?

1. For some reason the link to Sean Carroll's piece didn't post.

2. Kant's intuition can be denied without contradiction, and the denial fits well with current physical theory. But I wouldn't want to draw your conclusion that "why not just say that a particle can be wholly present in only one position in a single universe." What fits best with physical theory is that a particle can be in multiple locations in a single universe and also in multiple locations across universes. I mentioned quantum mechanics in my blog post just to make the point that all experts agree that a particle can be in two locations at once in our own universe, despite the violation of common sense. You are right that the Schrödinger equation "tells us the probability that a particle is at any particular location," but don't assume from this that there is a definite location it has. Schrödinger abandoned the idea that a particle has a definite location in our universe. Niels Bohr's Copenhagen Interpretation says a particle is at a definite location "to some extent," meaning the particle is not wholly at any place *when unobserved*. The italicized words are what is special about a Copenhagen Interpretation. The Schrödinger equation tells us that particles are here and there at once; a particle is always in a "superposition" of here and there when it is not being observed. Bohr said, "No reality without observation!" I am not that much of an idealist and happen to believe the Copenhagen Interpretation is incorrect, because I believe the wave function never collapses. This is the position of Hugh Everett. I was not promoting Everett's Many Worlds Interpretation of quantum mechanics in my blog post, but it is a reasonable position, though still controversial. In the multiverse theory that I was promoting, the many parallel universes are far away; in the Many Worlds theory the universes are disconnected from our space and are neither near nor far. I think if Einstein were alive today he'd reject the Copenhagen Interpretation and go with the Many Worlds Interpretation.

3. Brad, thanks for that reply. There is a difference, don't you think, between the idea that a particle is not wholly at any place at one time and the idea that a particle is wholly at many places at once? The first formulation seems right to me; the second I have trouble grasping.
I like the Everett interpretation, too, especially because of the way it seems to take the mystery out of EPR, but I actually do not quite understand exactly how it interprets the Schrödinger equation. On the basis of what I know, it seems to me that it is a kind of hidden variable theory in which what we don't know is not the actual, definite location of the particle in a particular universe, but whether we occupy a universe in which the particle is definitely in this position or definitely in that one. In other words, on the Everett interpretation there is no collapse of the wave function, so what the wave function tells us is the probability that we are in a certain kind of universe, but this seems compatible with the view that particles have definite locations in every universe. Is this wrong-headed, and can you shed any more light on this? (I know it is not central to your post.)

4. Randy, in quantum mechanics, when talking about point particles such as electrons in a single universe, I believe it's not helpful to emphasize a difference between not being wholly at any place at one time and being wholly at many places at once. That's because a particle has no definite location (when it is not measured, according to the Copenhagen Interpretation). Now I used this example, with its implicit endorsement of the Copenhagen Interpretation, in order to suggest that physicists have for a long time been willing to say a particle can be in two places at once within an atom. However, I don't endorse the Copenhagen Interpretation myself and prefer the Everett Interpretation, which describes the world as you say in your comments. The Everett many-worlds interpretation is compatible with the view that particles have definite locations in every universe, as you say. And, as you say, this removes the mystery from the EPR paradox (although it adds mystery in another way, by introducing the unintuitive concept of parallel universes). That is the very reason why I commented that Einstein would approve of this interpretation over the Copenhagen interpretation if he were alive today. However, the Everett interpretation is also compatible with the claim that a particle can have two locations, that is, that the particle can be wholly present in two places, namely by having locations in two universes. In the last ten years, the Copenhagen Interpretation has fallen out of favor.

2. Thanks, Brad, for the thoughtful and fun post. I always learn something from you when you talk physics. I am not competent to comment on the physics, so I will accept whatever you say on that, for the sake of argument. But I think the physics is a smokescreen. I avoid any conceptual or theological issues surrounding the nature of Jesus, the death of any god, child-sacrifice, etc. These too are not relevant, just as whether death is a singular or permanent event for Jesus or anybody is also tangential. I see two problems. (1) I think there is an implication failure, perhaps due to ambiguity. That the same person with counterparts in many (or even all) other worlds dies could be true, and yet it could be false that each one (or any one) dies many times. Possible-worlds or parallel-worlds talk doesn't even support the notion that, in any world where Jesus exists, Jesus even dies once in all of them. (2) Some kind of quantifier-shift error threatens. The general point you raise, that some guy died many times, is possible, sort of like it is possible that the universe underwent an eternal series of crunches and expansions.
However, this fails to support any particular Jesus-story or Big-Bang story. It does not entail that anybody actually died many times in any given world. So "(there exists a) Jesus (who) died more than once for our sins" could be false even if "all Jesuses die in any world where any Jesus exists". Again, the latter claim is false, because there is a possible world or parallel universe where he does not die, or some worlds where his counterpart does not. So I suspect that your presumption is false, but again I don't get the physics: "Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates." OK, I am not sure what this proves, but isn't it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates? To presume this, I think, is to presume that whatever is possible is inevitable. It was dubious for Nietzsche to assume this in his doctrine of Eternal Recurrence, and it is also dubious to presume it here. In short, Jesus died (or really did not) in this world, once, and that's it. If you want to run worlds in parallel or in series, the problem of ambiguity remains. I do not presume that names (proper or otherwise) are rigid designators, so this could be part of my problem with understanding your argument.

1. Scott, you've made some very interesting comments. I agree with some and disagree with some others. You said, "isn't it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates?" I believe the answer is "no." You might shuffle a deck of cards an infinite number of times and still never get it back to its original order. But if you shuffle it a very large, finite number of times, you are absolutely sure of getting two orderings that are the same: by the pigeonhole principle, once the number of shuffles exceeds the finite number of possible orderings (52! for a standard deck), some ordering must repeat. You don't even need to shuffle it an infinite number of times. You are right that it's mathematically possible there won't be any duplicates of our universe even if an actually infinite number of parallel universes are generated. However, if you start shuffling with the deck in a certain order, then as you shuffle more and more, the probability of getting it back to the original order gets higher and higher and approaches one in the limit of an infinite number of shuffles. This is an implication of a theorem in probability theory, assuming random shuffles. I'd bet my life that you'll get the deck back to its original order eventually. So, to speak epistemologically, I'd say I know you'll produce the duplicate, as the little simulation below illustrates. Ditto for there being multiple Jesuses in the multiverse, assuming random generation of parallel universes. The assumption I left out in my original posting was that the generation of parallel universes is random. In most parallel universes there won't be a Jesus, nor even Homo sapiens. I hope you agree with me that solid historical evidence establishes that there was a Jesus in our universe about two millennia ago. In my blog I chose to talk about Jesus merely as an attention-getter. I could have made the same points talking about Abraham Lincoln.
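Here is a tiny simulation sketch of that shuffling argument (my own illustration, not part of the probability theorem itself; it uses a five-card deck so that a duplicate ordering appears quickly, whereas a 52-card deck has 52!, about 8 × 10^67, possible orderings and the wait would be astronomical):

```python
import random

# Shuffle a tiny 5-card "deck" until some ordering repeats.
# 5 cards -> only 5! = 120 possible orderings, so by the pigeonhole
# principle a duplicate is guaranteed within 121 shuffles, and by the
# birthday effect one typically appears after roughly sqrt(120) of them.
deck = list(range(5))
seen = set()
shuffles = 0
while True:
    random.shuffle(deck)
    shuffles += 1
    ordering = tuple(deck)
    if ordering in seen:   # an exact duplicate, not a mere "counterpart"
        break
    seen.add(ordering)

print(f"first duplicate ordering appeared after {shuffles} shuffles")
```

The same counting logic, scaled up from 120 orderings to the finite number of possible quantum states of a universe-sized region, is what drives the duplication claim.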
3. Randy, here's another thought about your recommendation that we say a particle is wholly present at one place in one universe. In the two-slit diffraction experiment, if you fire particles very slowly at a target, then we can show that the particle goes through both slits and interferes with "itself." You wouldn't want to say the particle wasn't "wholly present" when it went through the left slit. It's not as if the particle's left half went through the left slit, right? But at the same time the particle was also going through the right slit, even though there was exactly one particle fired at the slits. So, the particle was wholly present in two places at once.

4. Brad, do we want to use the term 'particle' here? I thought the point is that when, say, an electron interferes with itself, it is because it is not behaving as a particle at all, but as a wave. If it were behaving as a particle, then, by definition, it would have to pass through one slit or the other, right? Is there a difference between how Bohr and Everett account for this experiment? As I understand it, Bohr says that the act of measurement collapses the wave function and forces the entity to behave as a particle. But on the Many Worlds interpretation there is no collapse of the wave function, so why the difference between when it is measured and when it is not? Why doesn't the entity continue to behave as a wave?

5. Randy, you are raising deep issues about the philosophy of quantum mechanics. Yes, it is better not to use the word "particle" when we are talking about wave interference. However, even if we were to stick to the classical Copenhagen Interpretation, its principle of complementarity allows the two-slit experiment to be considered either as a wave experiment or as a particle experiment, but not both at the same time. So, let's use the word "particle." When we consider it as a particle experiment, the particle must go through both slits simultaneously, yet hit the screen behind it at ONE place. It is wholly in two places at once as it goes through the screen. Richard Feynman said this is the essence of quantum weirdness. You asked, "on the Many Worlds interpretation there is no collapse of the wave function, so why the difference when it is measured and when it is not?" One answer is that measurement produces quantum decoherence. You then asked, "Why doesn't the entity continue to behave as a wave?" The answer is that it does. My blog post is basically reporting on the views of Max Tegmark from his 2014 book Our Mathematical Universe. He is a leading proponent of the quantum decoherence idea. According to Tegmark, the reason why there is such a big difference between measured and unmeasured particles is quantum decoherence, or mixing with lots of other particles, not the intrusion of consciousness, as the Copenhagen people believe. According to Niels Bohr's Copenhagen Interpretation of quantum mechanics, particles behave strangely only when they are unmeasured by a conscious being. In Schrödinger's thought experiment, in which there's a 50-50 chance of the release of cyanide gas within ten minutes of the room being sealed, the Copenhagen people say Schrödinger's cat is both alive and dead in the room because it is now ten minutes later and no conscious being has yet looked into the room and become aware of the situation. But then someone looks in and the cat is, say, still alive; this looking collapses the wave function, and that is why we never observe macroscopic objects in quantum superposition. Observing always causes collapse. Wrong, says Tegmark. Consciousness is not what collapses the wave function. It never collapses, and consciousness is unimportant.
Instead, what is important about measurement removing the weirdness is that the object measured gets entangled with the many objects in the measuring tools. Seeing requires bouncing photons off the measured object, which destroys quantum superposition. This destruction via getting entangled with many other objects is what Tegmark calls quantum decoherence. According to Tegmark, the reason why we never see macroscopic objects such as cows and people in two places at once is not because they are macroscopic, and not because they are observed, as the Copenhagen Interpretation hypothesizes, but rather because it is too hard to isolate them from other particles and prevent decoherence. I do not understand this process very well, but the point seems to be that it is an object's interaction with other particles that destroys its quantum superposition via the process of quantum decoherence. But Tegmark's interpretation of quantum mechanics is not yet standard, so in my original post I did not mention it.

6. Brad, thanks, that's very helpful. Charles Seife's book Decoding the Universe has an approving chapter on Tegmark's decoherence explanation. For him, the key conceptual move is to think of information as something that exists in the world, and measurement as the transfer of information from one place to another. Once you do that, there is no problem thinking of nature itself as constantly making measurements. Do you recommend Tegmark's book?

7. Randy, yes, Tegmark's book is very clear and interesting. I'll have to take a look at Seife's book some day.

8. Brad: (This is Cliff, now in the La-La land of retirement.) Interesting as the multiple-universe theory is, isn't it the case that there is no way to empirically confirm it? Can we make the jump from "quantum theory is well-confirmed for our universe, and quantum theory implies there are multiple universes" to "therefore there must be multiple universes"? To me that seems to stretch the idea of empirical confirmation too far.

9. Cliff, I worry about all those questions, too, and am not yet convinced of the claim that Jesus died many times for our sins. You have a fine pragmatic attitude toward the issue. Since we can't make predictions about other universes, that is something to worry about. If there were no way to empirically confirm the claim that there exist alternative universes, then I wouldn't believe it either, but there are ways to empirically confirm it, indirectly, because it provides good explanations of observations even if it can't provide predictions that can be tested. You would say this indirect confirmation is too indirect and that it stretches the idea of empirical confirmation too far. Proponents of alternative universes say it is time for science to change and accept this stretching. Here are some relevant quotations from Leonard Susskind in his 2006 book The Cosmic Landscape. "On the theoretical side, an outgrowth of inflationary theory called Eternal Inflation is demanding that the world be a megaverse, full of pocket universes that have bubbled up out of inflating space, like bubbles in an uncorked bottle of champagne." (p. 21) He calls alternative universes "pocket universes," and he calls the Level I Multiverse the "megaverse." "There is very little doubt that we are embedded in a vastly bigger megaverse." (21-2) "But certainly the critics are correct that in practice, for the foreseeable future, we are stuck in our own pocket with no possibility of directly observing other ones.
Like quark theory, the confirmation will not be direct and will rely on a great deal of theory." (196) "As for rigid philosophical rules, it would be the height of stupidity to dismiss a possibility just because it breaks some philosopher's dictum about falsifiability. … Just as generals are always fighting the last war, philosophers are always parsing the last scientific revolution." (196)
As we can see in the picture on this website, it's strange that the bound-state wavefunction always reaches its largest peak near the boundary of its classically forbidden region (not inside the region). Is it true that this phenomenon holds for all bound-state wavefunctions? I think that the reflected wave may interfere with the original one, thus creating the peak near the forbidden region, but I can't explain why it is the largest peak, or why there is no peak inside the classically forbidden region. Thanks for your attention.

Answer: Yes, the wavefunction will peak near the boundary of the forbidden region, and this effect will increase at higher energy levels. In the limit of very high energy levels the quantum harmonic oscillator must reproduce the classical result, and a classical harmonic oscillator is more likely to be found near the endpoints of its motion, since that is where it is moving much more slowly than in the center, where it has maximum velocity and maximum kinetic energy.

A quote from Wikipedia: "Note that the ground state probability density is concentrated at the origin. This means the particle spends most of its time at the bottom of the potential well, as we would expect for a state with little energy. As the energy increases, the probability density becomes concentrated at the classical 'turning points', where the state's energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends most of its time (and is therefore most likely to be found) at the turning points, where it is the slowest. The correspondence principle is thus satisfied."

The Wikipedia article also has animated images showing the wavefunctions of energy eigenstates, as well as animations of states that are not eigenstates, which begin to approximate the classical behavior of moving back and forth between the turning points. Caption: Some trajectories of a harmonic oscillator according to Newton's laws of classical mechanics (A-B), and according to the Schrödinger equation of quantum mechanics (C-H). In (A-B), the particle (represented as a ball attached to a spring) oscillates back and forth. In (C-H), some solutions to the Schrödinger equation are shown, where the horizontal axis is position, and the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. (C,D,E,F), but not (G,H), are energy eigenstates. (H) is a coherent state, a quantum state which approximates the classical trajectory.
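To check this numerically, here is a minimal sketch (my own, not from the answer above; natural units m = ω = ħ = 1 are assumed) comparing the quantum probability density of the n = 20 eigenstate with the classical position density for the same energy:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def qho_density(n, x):
    """|psi_n(x)|^2 for the harmonic oscillator in units m = omega = hbar = 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    Hn = hermval(x, coeffs)  # physicists' Hermite polynomial H_n(x)
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    psi = norm * np.exp(-x**2 / 2.0) * Hn
    return psi**2

n = 20
x = np.linspace(-8.0, 8.0, 4001)
quantum = qho_density(n, x)

# Classical density for the same energy E_n = n + 1/2: the oscillator has
# amplitude A = sqrt(2 E_n) and spends the most time near the turning
# points +/- A, where it moves slowest.
A = math.sqrt(2.0 * (n + 0.5))
inside = np.abs(x) < A
classical = np.zeros_like(x)
classical[inside] = 1.0 / (np.pi * np.sqrt(A**2 - x[inside]**2))

print("classical turning point at x =", round(A, 3))
print("largest quantum peak at x =", round(abs(x[np.argmax(quantum)]), 3))
```

The largest peak of |ψ_n(x)|² sits just inside the classical turning point x = A, mirroring the 1/√(A² − x²) growth of the classical density toward the endpoints.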
Modulational Instability

Acronym: MI

Definition: a nonlinear optical effect which amplifies modulations of optical power

German: Modulationsinstabilität

Categories: nonlinear optics, physical foundations

URL: https://www.rp-photonics.com/modulational_instability.html

Modulational instabilities can result from different kinds of nonlinearities. In the context of optics, and in particular in nonlinear fiber optics, they are usually caused by the Kerr nonlinearity of an optical fiber in conjunction with anomalous chromatic dispersion. Essentially, they imply the amplification of sidebands in the optical spectrum, and they lead to growing oscillations of the optical power.

Mathematical Description

In the simplest case, one considers one-dimensional propagation of light, for example in a single-mode fiber. The light at a certain longitudinal position z and time t can then simply be described with a complex amplitude A(t, z), whose modulus squared is the optical power. If the only relevant physical effects are the Kerr nonlinearity, quantified with a nonlinear coefficient γ (with units of rad/(W m)), and a frequency-independent group velocity dispersion β₂ (i.e., no higher-order dispersion, no propagation losses, etc.), the propagation can be described with the nonlinear Schrödinger equation:

$$\frac{\partial A}{\partial z} = -i\,\frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2} + i\,\gamma\,|A|^2 A$$

Based on that equation, it can be shown relatively easily that a small sinusoidal amplitude modulation with frequency ω_m added to a constant amplitude (with power P = |A|²) can be amplified if the following condition is fulfilled:

$$\beta_2\,\omega_\mathrm{m}^2 \left( \beta_2\,\omega_\mathrm{m}^2 + 4\,\gamma P \right) < 0$$

That is obviously possible if β₂ < 0 (anomalous dispersion) and the optical power is high enough. The gain coefficient of the modulation can then be calculated as

$$g = \sqrt{ -\beta_2\,\gamma P\,\omega_\mathrm{m}^2 - \left( \frac{\beta_2\,\omega_\mathrm{m}^2}{2} \right)^2 }$$

where the first term under the square root is positive because β₂ < 0. Note that the gain is zero if the above-mentioned condition is not fulfilled; there is then a purely oscillatory behavior. The resulting nonlinear gain can be interpreted as resulting from parametric amplification by phase-matched four-wave mixing.

Example Case

As an example, we consider a situation where an optical signal with a weak, very high frequency modulation (4 THz) is injected into a passive single-mode fiber with anomalous dispersion. The average power is assumed to be quite high (3.5 kW) only during a short time interval (e.g., 20 ps) of an ultrashort pulse; problems with stimulated Brillouin scattering are avoided in that way. The optical spectrum of the original pulse (Figure 1) shows two weak side lobes around the central spectral component, and there is a noise background resulting from quantum fluctuations.

Figure 1: Optical spectrum of the input light.

The following diagram, showing the situation after 0.2 m of fiber, exhibits increased side lobes, associated with a significantly amplified power oscillation (not shown). Also, the quantum noise background is amplified within a couple of terahertz around the central wavelength.

Figure 2: Optical spectrum after 0.2 m of fiber.

After 0.4 m of fiber, the side lobes have become stronger still, and the noise amplification has also become more pronounced.

Figure 3: Optical spectrum after 0.4 m of fiber.

Figure 4: Optical spectrum after 0.6 m of fiber.
The simulation has been done with the software RP Fiber Power.

Figure 5: Gain of the modulational instability for different optical power levels.

Figure 5 shows the gain of the modulational instability for three different levels of the optical power. One sees that increasing power not only raises the magnitude but also extends the frequency range of the nonlinear gain. The following diagram shows the resulting amplified modulation in the time domain. It is much stronger than the input modulation, it is no longer sinusoidal, and it exhibits some influences of the random quantum noise.

Figure 6: Modulation in the time domain after 0.6 m of fiber, corresponding to the last shown spectrum.

Variations of Modulational Instability

More complicated modulational instability phenomena can occur under various circumstances. For example, modulational instability can be observed in birefringent fibers even if the chromatic dispersion is in the normal dispersion regime; this is called vector modulational instability. Also, the effect may involve additional fiber modes if the fiber does not have a single-mode design, and it may occur in various forms via cross-phase modulation.

Modulational instability also often plays a role in the context of supercontinuum generation, where, however, the complexity of the interacting effects is substantially larger; modulational instability is then just one of many effects. For example, it can be involved in the formation of Raman solitons.

Problems related to modulational instability are observed in some telecom systems. Even in the normal dispersion regime, such problems can occur in systems where the signal power is periodically raised with fiber amplifiers; that periodic variation can lead to a kind of quasi-phase matching.
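For illustration, the gain formula above is easy to evaluate directly. The following is a minimal sketch (not part of the original article; the fiber parameters below are assumed values chosen only for demonstration):

```python
import numpy as np

def mi_gain(omega_m, gamma, beta2, P):
    """Modulational-instability gain coefficient from the formula above.

    omega_m: angular modulation frequency (rad/s)
    gamma:   nonlinear coefficient (rad/(W m))
    beta2:   group velocity dispersion (s^2/m); negative for anomalous dispersion
    P:       optical power (W)
    Returns 0 where the instability condition is not fulfilled.
    """
    arg = -beta2 * gamma * P * omega_m**2 - (beta2 * omega_m**2 / 2.0)**2
    return np.sqrt(np.maximum(arg, 0.0))

# Assumed example parameters (for illustration only):
gamma = 0.01                       # rad/(W m)
beta2 = -20e-27                    # s^2/m, i.e. -20 ps^2/km (anomalous)
f = np.linspace(0.0, 12e12, 2000)  # modulation frequencies up to 12 THz
omega = 2 * np.pi * f

for P in (1e3, 2e3, 3.5e3):        # three power levels (W), as in Figure 5
    g = mi_gain(omega, gamma, beta2, P)
    print(f"P = {P/1e3:.1f} kW: peak gain {g.max():.1f} /m "
          f"at {f[np.argmax(g)]/1e12:.1f} THz")
```

With these numbers, the peak gain equals γP and occurs at ω_m = √(2γP/|β₂|), so raising the power both raises the peak and pushes it to higher frequencies, consistent with Figure 5.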
See also: nonlinearities, Kerr effect, supercontinuum generation, and other articles in the categories nonlinear optics and physical foundations.
By Dr. Chris Mansell

Shown below are summaries of a few interesting research papers related to quantum computing that have been published over the past month.

Title: Model-free readout-error mitigation for quantum expectation values
Organization: IBM
The paper proposes an efficient way to mitigate the readout errors that occur when the final measurements of a quantum computation are Pauli measurements. Simulations of the proposed mitigation strategy showed that more accurate expectation values could be obtained even when the readout noise was correlated. You can view the paper published on arXiv here.

Title: Quantum-accelerated multilevel Monte Carlo methods for stochastic differential equations in mathematical finance
Organizations: Phasecraft Ltd and others
The paper presents quantum-accelerated multilevel Monte Carlo methods for stochastic processes and applies them to several financial models. Theoretical analysis shows a quadratic speed-up in the precision of the computed expectation values. This is important for stochastic simulations where creating samples is costly. You can view the paper published on arXiv here.

Title: Quantum Phases of Matter on a 256-Atom Programmable Quantum Simulator
Organizations: QuEra Computing Inc. and others
Researchers have created a programmable quantum device with 256 cold-atom qubits in a two-dimensional array. They expect that the size, fidelity and degree of programmability of this state-of-the-art system could be increased considerably with technical improvements. In particular, the addition of rapidly switchable local control beams would enable universal quantum computation to be performed. You can view the paper published on arXiv here. Related work by the same group and a different one can be found at and at

Title: Information-theoretic bounds on quantum advantage in machine learning
Organization: AWS
The authors of this paper consider the query complexity of classical algorithms trained on data from quantum measurements. They rigorously show that for the task of making predictions with a desired average accuracy, this query complexity is comparable to that of optimal quantum machine learning models. However, for accurate predictions on every input, fully quantum machine learning can have an exponential advantage. The results bring the promise of both near-term and long-term quantum computing into sharper focus. You can view the paper published on arXiv here.

Title: How to Reduce the Bit-width of an Ising Model by Adding Auxiliary Spins
Organization: Waseda University
Quantum annealers, such as those made by D-Wave, have the potential to provide a quantum speed-up for optimization problems. However, problems of interest may require arbitrarily precise couplings and magnetic fields, while the available hardware can only achieve limited precision. Researchers at Waseda University have proposed a way to reduce the bit-width of any inputs so that they can be encoded onto a device, allowing the problem instance to be solved. You can view the paper here.

Title: A fault-tolerant continuous-variable measurement-based quantum computation architecture
Organization: AWS
The authors propose a simple yet complete architecture for fast, scalable, fault-tolerant quantum computing. They adapt an experimental set-up that has recently created some of the largest entangled states demonstrated to date – time-domain multiplexed photonic cluster states.
They efficiently combine this with Gottesman-Kitaev-Preskill (GKP) encoded qubits and then test their proposal by simulating it in the presence of noise. You can view the paper published on arXiv here.

Title: Pricing Financial Derivatives with Exponential Quantum Speedup
Organizations: Quantum Mads, Santander, IQM
The Black-Scholes model is a partial differential equation describing the price of a financial option over time. The researchers map it to the Schrödinger equation and embed it in an enlarged Hilbert space. Their approach achieves an exponential speed-up over classical methods. Whether it is robust to the errors that occur on NISQ devices is left for future work. You can view the paper published on arXiv here.

Title: Enhancing Combinatorial Optimization with Quantum Generative Models
Organization: Zapata Computing
Tensor-network circuits are subsets of quantum circuits that may be especially well suited to machine learning tasks. The paper explores their capability to (a) solve combinatorial optimization problems and (b) learn from other solvers, generalising and improving their attempted solutions. Promising results were achieved for financial asset allocation. You can view the paper published on arXiv here.

Title: Training variational quantum algorithms is NP-hard — even for logarithmically many qubits and free fermionic systems
Organization: Heinrich Heine University
Variational quantum algorithms (VQAs) involve a classical computer training the parameters of a quantum circuit. The goal of the classical algorithm is to find the global minimum of the training landscape that it encounters. However, the authors of this paper show that in some important settings there are many local minima and that they are significantly worse than the global one. This casts further doubt on the feasibility of training VQAs. You can view the paper published on arXiv here.

Title: Enhancing Generative Models via Quantum Correlations
Organizations: QuEra Computing Inc. and others
This work provides new insights into the design of practical quantum machine learning (ML) algorithms: start from a successful classical ML model, such as a Bayesian network or hidden Markov model, and minimally extend it by allowing local quantum measurements in bases other than the computational basis. The researchers find that these "basis-enhanced" algorithms have a higher expressive power due to the presence of quantum correlations. You can view the paper published on arXiv here.

January 25, 2021